Recursively generating a hierarchical dataset with Spark Scala functional programming

Tags: scala, apache-spark, functional-programming

I am somewhat new to functional programming. How can I generate the following data sequence?

Here is the input dataset, with the following columns:

Input

ID       PARENT_ID     AMT      NAME
 1       none          1000     A
 2       1            -5000     B
 3       2            -2000     C
 5       3             7000     D
 6       4            -7000     E
 4       none          7000     F
Output

ID       PARENT_ID     AMT       AMT_1     AMT_2     AMT_3   NAME_1  ...
 1       none          1000      none      none      none    none
 2       1            -5000      1000      none      none    A
 3       2            -2000     -5000      1000      none    B
 4       none          7000      none      none      none    none
 5       3             7000     -2000     -5000      1000    C
 6       4            -7000      7000      none      none    D


Here is one way of performing the recursive join until a specific nesting level is reached:

import org.apache.spark.sql.functions._
import spark.implicits._  // needed for toDF on a local Seq

val df = Seq(
  (Some(1), None, Some(1000), Some("A")),
  (Some(2), Some(1), Some(-5000), Some("B")),
  (Some(3), Some(2), Some(-2000), Some("C")),
  (Some(4), None, Some(7000), Some("D")),
  (Some(5), Some(3), Some(7000), Some("E")),
  (Some(6), Some(4), Some(-7000), Some("F"))
).toDF("id", "parent_id", "amt", "name")

val nestedLevel = 3

// Repeatedly left-join the dataset to itself: alias d0 is the base row,
// and each d<i> is the parent of d<i-1>.
(1 to nestedLevel).foldLeft( df.as("d0") ){ (accDF, i) =>
    val j = i - 1
    accDF.join(df.as(s"d$i"), col(s"d$j.parent_id") === col(s"d$i.id"), "left_outer")
  }.
  select(
    // keep the base columns, then pull amt/name from each ancestor level
    col("d0.id") :: col("d0.parent_id") ::
    col("d0.amt").as("amt") :: col("d0.name").as("name") :: (
      (1 to nestedLevel).toList.map(i => col(s"d$i.amt").as(s"amt_$i")) :::
      (1 to nestedLevel).toList.map(i => col(s"d$i.name").as(s"name_$i"))
    ): _*
  ).
  show
// +---+---------+-----+----+-----+-----+-----+------+------+------+
// | id|parent_id|  amt|name|amt_1|amt_2|amt_3|name_1|name_2|name_3|
// +---+---------+-----+----+-----+-----+-----+------+------+------+
// |  1|     null| 1000|   A| null| null| null|  null|  null|  null|
// |  2|        1|-5000|   B| 1000| null| null|     A|  null|  null|
// |  3|        2|-2000|   C|-5000| 1000| null|     B|     A|  null|
// |  4|     null| 7000|   D| null| null| null|  null|  null|  null|
// |  5|        3| 7000|   E|-2000|-5000| 1000|     C|     B|     A|
// |  6|        4|-7000|   F| 7000| null| null|     D|  null|  null|
// +---+---------+-----+----+-----+-----+-----+------+------+------+
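The snippet above hard-codes nestedLevel = 3, so a deeper hierarchy would silently drop ancestors beyond that level. As a minimal sketch of a data-driven alternative, the helper below keeps joining until a newly added level matches no parent row at all; joinToRoot is a hypothetical name introduced here, not part of the original answer, and Dataset.isEmpty assumes Spark 2.4+.

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions._

// Hypothetical helper: join each row to its ancestors until no row in the
// newest level has a matching parent, then project amt_i / name_i columns
// for however many levels were actually found.
def joinToRoot(df: DataFrame): DataFrame = {
  @annotation.tailrec
  def loop(acc: DataFrame, level: Int): (DataFrame, Int) = {
    val joined = acc.join(
      df.as(s"d$level"),
      col(s"d${level - 1}.parent_id") === col(s"d$level.id"),
      "left_outer")
    // stop once this level contributes no parent rows at all
    if (joined.filter(col(s"d$level.id").isNotNull).isEmpty) (acc, level - 1)
    else loop(joined, level + 1)
  }
  val (joined, depth) = loop(df.as("d0"), 1)
  joined.select(
    col("d0.id") :: col("d0.parent_id") ::
    col("d0.amt").as("amt") :: col("d0.name").as("name") :: (
      (1 to depth).toList.map(i => col(s"d$i.amt").as(s"amt_$i")) :::
      (1 to depth).toList.map(i => col(s"d$i.name").as(s"name_$i"))
    ): _*
  )
}

joinToRoot(df).show

Note that each isEmpty check triggers a Spark job, so caching df before calling the helper would avoid recomputing the source on every iteration.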

@Sampat Kumar, please see the updated answer for your expanded requirement.