What is the best way in Apache Spark to get each group as a new DataFrame and pass it to another function in a loop?

I am using spark-sql-2.4.1v and I am trying to find the quantiles, i.e. percentile 0, percentile 25, and so on, for each of the given columns of my data.

My data:

+----+---------+-------------+----------+-----------+--------+
|  id|     date|      revenue|con_dist_1| con_dist_2| state  |
+----+---------+-------------+----------+-----------+--------+
|  10|1/15/2018|  0.010680705|         6|0.019875458|   TX   |
|  10|1/15/2018|  0.006628853|         4|0.816039063|   AZ   |
|  10|1/15/2018|   0.01378215|         4|0.082049528|   TX   |
|  10|1/15/2018|  0.010680705|         6|0.019875458|   TX   |
|  10|1/15/2018|  0.006628853|         4|0.816039063|   AZ   |
|  10|1/15/2018|   0.01378215|         4|0.082049528|   CA   |
|  10|1/15/2018|  0.010680705|         6|0.019875458|   CA   |
|  10|1/15/2018|  0.006628853|         4|0.816039063|   CA   |
+----+---------+-------------+----------+-----------+--------+
The states and columns I need to calculate for are given as:

val states = Seq("CA","AZ");
val cols = Seq("con_dist_1" ,"con_dist_2")
For each given state, I need to fetch the data from the source table and calculate percentiles only for the given columns.

I tried the following:

for (state <- states) {
     for (c <- cols) {
        // percentile calculation for column c within this state
     }
}
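For reference, a minimal sketch of what that loop body could look like, using DataFrame.stat.approxQuantile against the question's df; the quantile list and the 0.0001 relative-error value are placeholders, and note that this launches one Spark job per (state, column) pair:

import org.apache.spark.sql.functions.col

val quantiles = Array(0.0, 0.25, 0.5, 0.75)

for (state <- states) {
  // restrict the source data to one state at a time
  val stateDf = df.filter(col("state") === state)
  for (c <- cols) {
    // approxQuantile returns one value per requested quantile
    val qs = stateDf.stat.approxQuantile(c, quantiles, 0.0001)
    println(s"$state / $c -> ${qs.mkString(", ")}")
  }
}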

UPDATE

I found that there is a percentile_approx function from the Hive context, so you do not need to use the stat functions.

import org.apache.spark.sql.functions._
import spark.implicits._   // assumes a SparkSession named `spark` is in scope, as in spark-shell

val states = Seq("CA", "AZ")
val cols = Seq("con_dist_1", "con_dist_2")

// build one percentile_approx aggregation expression per column
val l = cols.map(c => expr(s"percentile_approx($c, Array(0.25, 0.5, 0.75)) as ${c}_quantiles"))

// keep only the requested states and aggregate per state in a single pass
val df2 = df.filter($"state".isin(states: _*)).groupBy("state").agg(l.head, l.tail: _*)

// split each quantile array into one column per quantile for readability
df2.select(col("state") +: cols.flatMap(c => (1 until 4).map(i => col(c + "_quantiles")(i - 1).alias(c + "_quantile_" + i))): _*).show(false)
Here I tried to automate the approach for the given states and cols. The result is:

+-----+---------------------+---------------------+---------------------+---------------------+---------------------+---------------------+
|state|con_dist_1_quantile_1|con_dist_1_quantile_2|con_dist_1_quantile_3|con_dist_2_quantile_1|con_dist_2_quantile_2|con_dist_2_quantile_3|
+-----+---------------------+---------------------+---------------------+---------------------+---------------------+---------------------+
|AZ   |4                    |4                    |4                    |0.816039063          |0.816039063          |0.816039063          |
|CA   |4                    |4                    |6                    |0.019875458          |0.082049528          |0.816039063          |
+-----+---------------------+---------------------+---------------------+---------------------+---------------------+---------------------+
Note that the result is a bit different from the expected one, because I used the states you gave, states = Seq("CA", "AZ").


ORIGINAL

Use a Window partitioned by state and calculate the percent_rank of each column:

import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions.percent_rank

// one window per column, partitioned by state and ordered by that column
val w1 = Window.partitionBy("state").orderBy("con_dist_1")
val w2 = Window.partitionBy("state").orderBy("con_dist_2")

df.withColumn("p1", percent_rank().over(w1))
  .withColumn("p2", percent_rank().over(w2))
  .show(false)
You could filter the dataframe first to only the specific states. Either way, the result is:

+---+---------+-----------+----------+-----------+-----+---+---+
|id |date     |revenue    |con_dist_1|con_dist_2 |state|p1 |p2 |
+---+---------+-----------+----------+-----------+-----+---+---+
|10 |1/15/2018|0.006628853|4         |0.816039063|AZ   |0.0|0.0|
|10 |1/15/2018|0.006628853|4         |0.816039063|AZ   |0.0|0.0|
|10 |1/15/2018|0.010680705|6         |0.019875458|CA   |1.0|0.0|
|10 |1/15/2018|0.01378215 |4         |0.082049528|CA   |0.0|0.5|
|10 |1/15/2018|0.006628853|4         |0.816039063|CA   |0.0|1.0|
|10 |1/15/2018|0.010680705|6         |0.019875458|TX   |0.5|0.0|
|10 |1/15/2018|0.010680705|6         |0.019875458|TX   |0.5|0.0|
|10 |1/15/2018|0.01378215 |4         |0.082049528|TX   |0.0|1.0|
+---+---------+-----------+----------+-----------+-----+---+---+
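If you only care about the given states, a minimal sketch of that pre-filtering could look like the following, reusing the w1/w2 windows above (the name filtered is just illustrative):

import org.apache.spark.sql.functions.{col, percent_rank}

// keep only the requested states before applying the windows
val filtered = df.filter(col("state").isin(states: _*))

filtered
  .withColumn("p1", percent_rank().over(w1))
  .withColumn("p2", percent_rank().over(w2))
  .show(false)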

You may need to do something similar to the code below:

import org.apache.spark.sql.functions.{col, collect_list}

df.groupBy(col("state"))
    // collect each column's values per state into an array
    .agg(collect_list(col("con_dist_1")).as("col1_quant"), collect_list(col("con_dist_2")).as("col2_quant"))
    // pull the individual array elements out into their own columns
    .withColumn("col1_quant1", col("col1_quant")(0))
    .withColumn("col1_quant2", col("col1_quant")(1))
    .withColumn("col2_quant1", col("col2_quant")(0))
    .withColumn("col2_quant2", col("col2_quant")(1))
    .show

Output:
+-----+----------+--------------------+-----------+-----------+-----------+-----------+
|state|col1_quant|          col2_quant|col1_quant1|col1_quant2|col2_quant1|col2_quant2|
+-----+----------+--------------------+-----------+-----------+-----------+-----------+
|   AZ|    [4, 4]|[0.816039063, 0.8...|          4|          4|0.816039063|0.816039063|
|   CA|    [4, 6]|[0.082049528, 0.0...|          4|          6|0.082049528|0.019875458|
|   TX| [6, 4, 6]|[0.019875458, 0.0...|          6|          4|0.019875458|0.082049528|
+-----+----------+--------------------+-----------+-----------+-----------+-----------+
Depending on the number of records per state, the last set of withColumn calls may need to go inside a loop, as in the sketch below.
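A minimal sketch of that loop, assuming an upper bound on how many records any single state can have (maxPerState, collected, and exploded are placeholder names, not from the answer):

import org.apache.spark.sql.functions.{col, collect_list}

// placeholder: the largest number of records any state can have
val maxPerState = 3

val collected = df.groupBy(col("state"))
  .agg(collect_list(col("con_dist_1")).as("col1_quant"),
       collect_list(col("con_dist_2")).as("col2_quant"))

// fold over the indices, adding one column per array position;
// positions missing for a given state come back as null
val exploded = (0 until maxPerState).foldLeft(collected) { (acc, i) =>
  acc.withColumn(s"col1_quant${i + 1}", col("col1_quant")(i))
     .withColumn(s"col2_quant${i + 1}", col("col2_quant")(i))
}

exploded.show(false)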


Hope this helps.

I have to head home now; I'll take another look when I get there. :)