
Apache Spark: How to sum every N rows over a window in PySpark?

Tags: apache-spark, pyspark, apache-spark-sql, window-functions

I have tried different window functions for this exercise, but without success. Can anyone think of a different approach? Consider adding an index column or row_number.

+----+-----+----+----+----------+-----+-----+---------------+---------------+---------------+
|year|month|week|item|department|state| sale|sum(sales)_2wks|sum(sales)_4wks|sum(sales)_6wks|
+----+-----+----+----+----------+-----+-----+---------------+---------------+---------------+
|2020|    1|   1|   1|         1|   TX| $100|           $250|           $680|          $1380|
|2020|    1|   2|   1|         1|   TX| $150|           $250|           $680|          $1380|
|2020|    1|   3|   1|         1|   TX| $200|           $430|           $680|          $1380|
|2020|    1|   4|   1|         1|   TX| $230|           $430|           $680|          $1380|
|2020|    1|   5|   1|         1|   TX| $400|           $700|          $1050|          $1380|
|2020|    1|   6|   1|         1|   TX| $300|           $700|          $1050|          $1380|
|2020|    1|   7|   1|         1|   TX| $250|           $350|          $1050|          $1200|
|2020|    1|   8|   1|         1|   TX| $100|           $350|          $1050|          $1200|
|2020|    1|   9|   1|         1|   TX| $200|           $400|           $850|          $1200|
|2020|    1|  10|   1|         1|   TX| $200|           $400|           $850|          $1200|
|2020|    1|  11|   1|         1|   TX| $300|           $450|           $850|          $1200|
|2020|    1|  11|   1|         1|   TX| $150|           $450|           $850|          $1200|
+----+-----+----+----+----------+-----+-----+---------------+---------------+---------------+
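
For reference, a minimal, assumed setup that builds this sample data as a DataFrame (the variable name `df` and the column names are chosen to match the first answer below, whose output has one row per week, 1 through 12):

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# One row per week, matching the expected-output table in the answer below.
weekly_sales = [100, 150, 200, 230, 400, 300, 250, 100, 200, 200, 300, 150]
df = spark.createDataFrame(
    [(2020, 1, week, 1, 1, 'TX', sale) for week, sale in enumerate(weekly_sales, start=1)],
    ['year', 'month', 'week', 'item', 'department', 'state', 'sales']
)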
You can assign a row number, integer-divide it by 2/4/6 so that every N consecutive rows share the same value, and use the result as an extra partitioning column for the window sum:

from pyspark.sql import functions as F, Window

result = df.withColumn(
    # 0-based row number per item/department/state, in chronological order
    'rn',
    F.row_number().over(Window.partitionBy('item', 'department', 'state').orderBy('year', 'month', 'week')) - 1
).withColumn(
    # integer division maps every 2 consecutive rows to the same partition
    'sum_2wks',
    F.sum('sales').over(Window.partitionBy('item', 'department', 'state', (F.col('rn') / 2).cast('int')))
).withColumn(
    'sum_4wks',
    F.sum('sales').over(Window.partitionBy('item', 'department', 'state', (F.col('rn') / 4).cast('int')))
).withColumn(
    'sum_6wks',
    F.sum('sales').over(Window.partitionBy('item', 'department', 'state', (F.col('rn') / 6).cast('int')))
)

result.show()
+----+-----+----+----+----------+-----+-----+---+--------+--------+--------+
|year|month|week|item|department|state|sales| rn|sum_2wks|sum_4wks|sum_6wks|
+----+-----+----+----+----------+-----+-----+---+--------+--------+--------+
|2020|    1|   1|   1|         1|   TX|  100|  0|     250|     680|    1380|
|2020|    1|   2|   1|         1|   TX|  150|  1|     250|     680|    1380|
|2020|    1|   3|   1|         1|   TX|  200|  2|     430|     680|    1380|
|2020|    1|   4|   1|         1|   TX|  230|  3|     430|     680|    1380|
|2020|    1|   5|   1|         1|   TX|  400|  4|     700|    1050|    1380|
|2020|    1|   6|   1|         1|   TX|  300|  5|     700|    1050|    1380|
|2020|    1|   7|   1|         1|   TX|  250|  6|     350|    1050|    1200|
|2020|    1|   8|   1|         1|   TX|  100|  7|     350|    1050|    1200|
|2020|    1|   9|   1|         1|   TX|  200|  8|     400|     850|    1200|
|2020|    1|  10|   1|         1|   TX|  200|  9|     400|     850|    1200|
|2020|    1|  11|   1|         1|   TX|  300| 10|     450|     850|    1200|
|2020|    1|  12|   1|         1|   TX|  150| 11|     450|     850|    1200|
+----+-----+----+----+----------+-----+-----+---+--------+--------+--------+
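
To see why this groups every N rows, note that integer division maps each run of N consecutive row numbers to one bucket id. A quick plain-Python check of the arithmetic (an illustration, not part of the original answer):

# 0-based row numbers 0..5 bucketed with N = 2:
# rows 0-1, 2-3 and 4-5 each share a bucket id.
print([rn // 2 for rn in range(6)])   # [0, 0, 1, 1, 2, 2]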

The solution above works, except that when there are multiple rows for the same week, row_number() gives a misleading grouping: all rows of the same week should end up with the same bucket value (row_no / 2), yet row_number() assigns each of them a distinct value. For this reason, prefer dense_rank() over row_number() and rank().

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
import spark.implicits._  // assumes an existing SparkSession named `spark`

// Sample data: note the deliberately duplicated weeks (two rows for week 1
// and two for week 11) to show why dense_rank() is needed.
val sales_data = Seq(
  (2020, 1, 1, "1", "1", "TX", 100),
  (2020, 1, 1, "1", "1", "TX", 150),
  (2020, 1, 2, "1", "1", "TX", 150),
  (2020, 1, 3, "1", "1", "TX", 200),
  (2020, 1, 4, "1", "1", "TX", 230),
  (2020, 1, 5, "1", "1", "TX", 400),
  (2020, 1, 6, "1", "1", "TX", 300),
  (2020, 1, 7, "1", "1", "TX", 250),
  (2020, 1, 8, "1", "1", "TX", 100),
  (2020, 1, 9, "1", "1", "TX", 200),
  (2020, 1, 10, "1", "1", "TX", 200),
  (2020, 1, 11, "1", "1", "TX", 300),
  (2020, 1, 11, "1", "1", "TX", 150)
)

// Calculate sales sums over 2-, 3-, 4-, and 6-week buckets.
val sales_df = sales_data.toDF("year", "month", "week", "item", "dept", "state", "sale")
// sales_df.show

sales_df
  // dense_rank() gives rows of the same week the same index
  .withColumn("row_no", dense_rank().over(Window.partitionBy("item", "state", "dept").orderBy("year", "month", "week")) - 1)
  .withColumn("sum(sales)_2wks", sum($"sale").over(Window.partitionBy($"item", $"state", $"dept", ($"row_no" / 2).cast("int"))))
  .withColumn("sum(sales)_3wks", sum($"sale").over(Window.partitionBy($"item", $"state", $"dept", ($"row_no" / 3).cast("int"))))
  .withColumn("sum(sales)_4wks", sum($"sale").over(Window.partitionBy($"item", $"state", $"dept", ($"row_no" / 4).cast("int"))))
  .withColumn("sum(sales)_6wks", sum($"sale").over(Window.partitionBy($"item", $"state", $"dept", ($"row_no" / 6).cast("int"))))
  .show()
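
Since the question asks for PySpark, here is a sketch of the same dense_rank() variant in Python, reusing the `df` and column names from the first answer (an adaptation, not the original poster's code):

from pyspark.sql import functions as F, Window

w = Window.partitionBy('item', 'department', 'state').orderBy('year', 'month', 'week')

result = df.withColumn(
    # dense_rank() assigns duplicate weeks the same index,
    # so rows of the same week fall into the same N-week bucket
    'rn', F.dense_rank().over(w) - 1
).withColumn(
    'sum_2wks',
    F.sum('sales').over(Window.partitionBy('item', 'department', 'state', (F.col('rn') / 2).cast('int')))
)

The same (F.col('rn') / N).cast('int') pattern extends to the 3-, 4-, and 6-week sums.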