
Scala Spark: join DataFrames in a loop

Tags: scala, apache-spark, apache-spark-sql, inner-join

I am trying to join DataFrames in a dynamic loop. I am using a properties file to get the column details to be used in the final DataFrame.

Properties file -

a01=status:single,perm_id:multi
a02=status:single,actv_id:multi
a03=status:single,perm_id:multi,actv_id:multi
............................
............................
For each row in the properties file, I need to create a DataFrame and save it to a file. The properties file is loaded using PropertiesReader. If the mode is single, then I only need to take the column value from the table. But if it is multi, then I need to get the list of values.

val propertyColumn = properties.get("a01") //a01 value we are getting as an argument. This might be a01,a02 or a0n
val columns = propertyColumn.toString.split(",").map(_.toString)
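To make the single/multi handling concrete, each property value can be parsed into (column, mode) pairs. A small sketch only; the variable name columnModes is illustrative and not from the original post:

// "status:single,perm_id:multi" -> Seq(("status", "single"), ("perm_id", "multi"))
val columnModes: Seq[(String, String)] =
  propertyColumn.toString.split(",").toSeq.map { spec =>
    val parts = spec.split(":")
    (parts(0), parts(1))
  }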
Action details table (act_det) -

+---+------+------+-------+-------+-------+
|id |act_id|status|perm_id|actv_id|debt_id|
+---+------+------+-------+-------+-------+
|1  |1     |4     |1      |10     |1      |
|2  |1     |4     |2      |20     |2      |
|3  |1     |4     |3      |30     |1      |
|4  |2     |4     |5      |10     |3      |
|5  |2     |4     |6      |20     |1      |
|6  |2     |4     |7      |30     |1      |
|7  |3     |4     |1      |10     |3      |
|8  |3     |4     |5      |20     |1      |
|9  |3     |4     |2      |30     |3      |
+---+------+------+-------+-------+-------+
Main DataFrame -

val data = sqlContext.sql("select * from act_det")

I want the following output -

For a01 -

+------+------+---------+
|act_id|status|perm_id  |
+------+------+---------+
|1     |4     |[1,2,3]  |
|2     |4     |[5,6,7]  |
|3     |4     |[1,5,2]  |
+------+------+---------+
For a02 -

+------+------+----------+
|act_id|status|actv_id   |
+------+------+----------+
|1     |4     |[10,20,30]|
|2     |4     |[10,20,30]|
|3     |4     |[10,20,30]|
+------+------+----------+
For a03 -

+------+------+---------+----------+
|act_id|status|perm_id  |actv_id   |
+------+------+---------+----------+
|1     |4     |[1,2,3]  |[10,20,30]|
|2     |4     |[5,6,7]  |[10,20,30]|
|3     |4     |[1,5,2]  |[10,20,30]|
+------+------+---------+----------+
But the DataFrame creation process should be dynamic.

I have tried the code below, but I am unable to implement the join logic for the DataFrames in the loop.

val finalDF:DataFrame = ??? //empty dataframe
    for {
        column <- columns
    } yield {
        val eachColumn = column.toString.split(":").map(_.toString)
        val columnName = eachColumn(0)
        val mode = eachColumn(1)
        if(mode.equalsIgnoreCase("single")) {
            data.select($"act_id", $"status").distinct
            //I want to join finalDF with data.select($"act_id", $"status").distinct
        } else if(mode.equalsIgnoreCase("multi")) {
            data.groupBy($"act_id").agg(collect_list($"perm_id").as("perm_id"))
            //I want to join finalDF with data.groupBy($"act_id").agg(collect_list($"perm_id").as("perm_id"))
        }
    }
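For reference, one way the intended loop-and-join could be completed is a foldLeft over the parsed column specs, joining each intermediate DataFrame on act_id. This is only a sketch under the assumptions above (data and columns as defined earlier, Spark implicits in scope); it is not from the original post, and the answer that follows avoids the joins entirely by aggregating in a single pass:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.{col, collect_list}

// Start from the distinct keys and join one derived DataFrame per column spec.
val base: DataFrame = data.select($"act_id").distinct

val finalDF: DataFrame = columns.foldLeft(base) { (acc, column) =>
  val parts = column.split(":")
  val (columnName, mode) = (parts(0), parts(1))
  val piece =
    if (mode.equalsIgnoreCase("single"))
      data.select($"act_id", col(columnName)).distinct
    else
      data.groupBy($"act_id").agg(collect_list(col(columnName)).as(columnName))
  acc.join(piece, Seq("act_id"), "inner")
}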
Check the code below.

scala> df.show(false)
+---+------+------+-------+-------+-------+
|id |act_id|status|perm_id|actv_id|debt_id|
+---+------+------+-------+-------+-------+
|1  |1     |4     |1      |10     |1      |
|2  |1     |4     |2      |20     |2      |
|3  |1     |4     |3      |30     |1      |
|4  |2     |4     |5      |10     |3      |
|5  |2     |4     |6      |20     |1      |
|6  |2     |4     |7      |30     |1      |
|7  |3     |4     |1      |10     |3      |
|8  |3     |4     |5      |20     |1      |
|9  |3     |4     |2      |30     |3      |
+---+------+------+-------+-------+-------+
Define primary_key

scala> val primary_key = Seq("act_id").map(col(_))
primary_key: Seq[org.apache.spark.sql.Column] = List(act_id)
Configs

scala> configs.foreach(println)
/*
(a01,status:single,perm_id:multi)
(a02,status:single,actv_id:multi)
(a03,status:single,perm_id:multi,actv_id:multi)
*/
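
The answer does not show how configs is built; presumably it holds the same key/value pairs as the properties file, e.g. Map("a01" -> "status:single,perm_id:multi", ...). A minimal sketch, with the file path only as an example:

import java.io.FileInputStream
import java.util.Properties
import scala.collection.JavaConverters._

// Load the properties file and turn it into a Scala Map of key -> column spec.
val props = new Properties()
val in = new FileInputStream("/path/to/columns.properties") // hypothetical path
try props.load(in) finally in.close()

val configs: Map[String, String] =
  props.stringPropertyNames.asScala.map(k => k -> props.getProperty(k)).toMap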

Construct the aggregation expressions.

scala> 
val columns = configs
                .map(c => {
                    c._2
                    .split(",")
                    .map(c => {
                            val cc = c.split(":"); 
                            if(cc.tail.contains("single")) 
                                first(col(cc.head)).as(cc.head) 
                            else 
                                collect_list(col(cc.head)).as(cc.head)
                        }
                    )
                })

/*
columns: scala.collection.immutable.Iterable[Array[org.apache.spark.sql.Column]] = List(
    Array(first(status, false) AS `status`, collect_list(perm_id) AS `perm_id`), 
    Array(first(status, false) AS `status`, collect_list(actv_id) AS `actv_id`), 
    Array(first(status, false) AS `status`, collect_list(perm_id) AS `perm_id`, collect_list(actv_id) AS `actv_id`)
)
*/

Final result

scala> columns.map(c => df.groupBy(primary_key:_*).agg(c.head,c.tail:_*)).map(_.show(false))
+------+------+---------+
|act_id|status|perm_id  |
+------+------+---------+
|3     |4     |[1, 5, 2]|
|1     |4     |[1, 2, 3]|
|2     |4     |[5, 6, 7]|
+------+------+---------+

+------+------+------------+
|act_id|status|actv_id     |
+------+------+------------+
|3     |4     |[10, 20, 30]|
|1     |4     |[10, 20, 30]|
|2     |4     |[10, 20, 30]|
+------+------+------------+

+------+------+---------+------------+
|act_id|status|perm_id  |actv_id     |
+------+------+---------+------------+
|3     |4     |[1, 5, 2]|[10, 20, 30]|
|1     |4     |[1, 2, 3]|[10, 20, 30]|
|2     |4     |[5, 6, 7]|[10, 20, 30]|
+------+------+---------+------------+
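
The question also asks to save each DataFrame to a file. One way, sketched here with a hypothetical output path and format, is to pair each result with its property key and write it out (this relies on configs and columns iterating in the same order, which holds because columns was derived from configs):

configs.keys.zip(
  columns.map(c => df.groupBy(primary_key: _*).agg(c.head, c.tail: _*))
).foreach { case (key, result) =>
  // Write one output per property key, e.g. /tmp/output/a01, /tmp/output/a02, ...
  result.write.mode("overwrite").parquet(s"/tmp/output/$key") // example path and format
}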

Just create an intermediate table of act_id and perm_id and join it with the DataFrame in the else-if branch.
Can you add the initial DataFrame from which you want to compute the result?
Added the main table records.
Avijit, I have added a solution below; please check it once.
Thanks @Srinivas. I will implement the solution you shared and let you know.