Group by with overlapping rows in PySpark SQL


The following table was created using Parquet/PySpark, and the objective is to aggregate value over two overlapping ranges of count.
+-----+-----+
|count|value|
+-----+-----+
|  1.1|    1|
|  1.2|    2|
|  4.1|    3|
|  5.5|    4|
|  5.6|    5|
|  5.7|    6|
+-----+-----+
Below is the code that creates the table above and reads it back as a PySpark DataFrame:

import pandas as pd
import pyarrow.parquet as pq
import pyarrow as pa
from pyspark import SparkContext, SQLContext


# create Parquet DataFrame
pdf = pd.DataFrame({
    'count': [1.1, 1.2, 4.1, 5.5, 5.6, 5.7],
    'value': [1, 2, 3, 4, 5, 6]})
table = pa.Table.from_pandas(pdf)
pq.write_to_dataset(table, r'c:/data/data.parquet')

# read Parquet DataFrame and create view
sc = SparkContext()
sql = SQLContext(sc)
df = sql.read.parquet(r'c:/data/data.parquet')
df.createTempView('data')
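
As a quick sanity check (not part of the original snippet), the registered view can be queried right away and should print the six rows shown above:

sql.sql('SELECT * FROM data ORDER BY count').show()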
This can be done with two separate queries, one per count range. The first:

q1 = sql.sql("""
    SELECT AVG(value) AS va
    FROM data
    WHERE count > 1
    AND count < 5
    """)
+---+
| va|
+---+
|2.0|
+---+
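
For completeness, the second range would be covered by an analogous query (a sketch; only the first query appears above). It should return 4.5, matching the desired result for id 2:

q2 = sql.sql("""
    SELECT AVG(value) AS va
    FROM data
    WHERE count > 2
    AND count < 6
    """)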
Attempting to do this in a single query, however, produces

+---+---+
| va| id|
+---+---+
|2.0|  1|
|5.0|  2|
+---+---+
To be clear, the desired result is more like:

+---+---+
| va| id|
+---+---+
|2.0|  1|
|4.5|  2|
+---+---+

The simplest method is probably UNION ALL:

SELECT 1, AVG(value) AS va
FROM data
WHERE count > 1 AND count < 5
UNION ALL
SELECT 2, AVG(value) AS va
FROM data
WHERE count > 2 AND count < 6;

You can also phrase this as a join against a derived table of lo/hi ranges:

select r.id, avg(d.value)
from data d join
     (select 1 as lo, 5 as hi, 1 as id union all
      select 2 as lo, 6 as hi, 2 as id 
     ) r
     on d.count > r.lo and d.count < r.hi
group by r.id;
  
+---+---+
| va| id|
+---+---+
|2.0|  1|
|4.5|  2|
+---+---+
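
For reference, the same range-join idea can be expressed with the DataFrame API (a sketch assuming a SparkSession entry point, which the original snippet does not use):

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet(r'c:/data/data.parquet')

# one row per (lo, hi, id) window; the ranges are allowed to overlap
ranges = spark.createDataFrame([(1, 5, 1), (2, 6, 2)], ['lo', 'hi', 'id'])

# the inner join assigns each data row to every range it falls into,
# so rows in the overlap contribute to both groups
result = (df
    .join(ranges, (df['count'] > ranges['lo']) & (df['count'] < ranges['hi']))
    .groupBy('id')
    .agg(F.avg('value').alias('va')))
result.show()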