
Scala: How do I convert row information into columns?


I have the following DataFrame in Spark 2.2 with Scala 2.11.8:

+--------+---------+-------+-------+----+-------+
|event_id|person_id|channel|  group|num1|   num2|
+--------+---------+-------+-------+----+-------+
|     560|     9410|    web|     G1|   0|      5|
|     290|     1430|    web|     G1|   0|      3|
|     470|     1370|    web|     G2|   0|     18|
|     290|     1430|    web|     G2|   0|      5|
|     290|     1430|    mob|     G2|   1|      2|
+--------+---------+-------+-------+----+-------+
Here is the code that creates this DataFrame (note that this snippet is PySpark syntax, despite the Scala tag):

df = sqlCtx.createDataFrame(
    [(560,9410,"web","G1",0,5), 
     (290,1430,"web","G1",0,3), 
     (470,1370,"web","G2",0,18), 
     (290,1430,"web","G2",0,5), 
     (290,1430,"mob","G2",1,2)],
    ["event_id","person_id","channel","group","num1","num2"]
)
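
Since the question is tagged Scala, here is a minimal Scala sketch that builds the same DataFrame. It assumes a SparkSession named spark (provided automatically in spark-shell; the appName and local master below are arbitrary choices for a standalone run):

import org.apache.spark.sql.SparkSession

// Assumed setup; in spark-shell, `spark` already exists and this can be skipped.
val spark = SparkSession.builder()
  .appName("pivot-example")
  .master("local[*]")
  .getOrCreate()
import spark.implicits._  // enables Seq(...).toDF(...)

val df = Seq(
  (560, 9410, "web", "G1", 0, 5),
  (290, 1430, "web", "G1", 0, 3),
  (470, 1370, "web", "G2", 0, 18),
  (290, 1430, "web", "G2", 0, 5),
  (290, 1430, "mob", "G2", 1, 2)
).toDF("event_id", "person_id", "channel", "group", "num1", "num2")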
The group column can only take two values: G1 and G2. I need to turn these values of the group column into new columns, like this:

+--------+---------+-------+--------+-------+--------+-------+
|event_id|person_id|channel| num1_G1|num2_G1| num1_G2|num2_G2|
+--------+---------+-------+--------+-------+--------+-------+
|     560|     9410|    web|       0|      5|       0|      0|
|     290|     1430|    web|       0|      3|       0|      0|
|     470|     1370|    web|       0|      0|       0|     18|
|     290|     1430|    web|       0|      0|       0|      5|
|     290|     1430|    mob|       0|      0|       1|      2|
+--------+---------+-------+--------+-------+--------+-------+
How can I do this?

AFAIK (at least I could not find a way to do a PIVOT without aggregating), we have to use an aggregate function when pivoting in Spark.

Scala version:

scala> import org.apache.spark.sql.functions.max

scala> df.groupBy("event_id","person_id","channel")
         .pivot("group")
         .agg(max("num1") as "num1", max("num2") as "num2")
         .na.fill(0)
         .show
+--------+---------+-------+-------+-------+-------+-------+
|event_id|person_id|channel|G1_num1|G1_num2|G2_num1|G2_num2|
+--------+---------+-------+-------+-------+-------+-------+
|     560|     9410|    web|      0|      5|      0|      0|
|     290|     1430|    web|      0|      3|      0|      5|
|     470|     1370|    web|      0|      0|      0|     18|
|     290|     1430|    mob|      0|      0|      1|      2|
+--------+---------+-------+-------+-------+-------+-------+
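Since group is known to take only the values G1 and G2, you can also pass the pivot values explicitly. This saves Spark an extra pass over the data to discover the distinct values and fixes the output column order. A sketch under that assumption:

scala> df.groupBy("event_id","person_id","channel")
         .pivot("group", Seq("G1", "G2"))  // explicit pivot values: skips the distinct-values job
         .agg(max("num1") as "num1", max("num2") as "num2")
         .na.fill(0)
         .show

Note that with multiple aliased aggregations, Spark names the pivoted columns as pivotValue_aggAlias (G1_num1, G1_num2, ...); if you need the num1_G1 layout from the question, rename them afterwards with withColumnRenamed.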