Defining an ArrayType for a CSV sample in Spark


I need to define a test sample for Spark, using ArrayType, to read this data. Here is what the data schema looks like:

 |-- data: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- id: integer (nullable = true)
 |    |    |-- stat: float (nullable = true)
 |-- naming: string (nullable = true)
My current definition of the data field yields null for every row, so how should I structure this data in the CSV file?

Here is what my CSV file currently looks like:

"data1_id","data1_stat","data2_id","data2_stat","data3_id","data3_stat","naming"
"1","0.76","2","0.55","3","0.16","Default1"
"1","0.2","2","0.41","3","0.89","Default2"
"1","0.96","2","0.12","3","0.4","Default3"
"1","0.28","2","0.15","3","0.31","Default4"
"1","0.84","2","0.41","3","0.15","Default5"
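To make the target shape concrete, the regrouping being asked for can be sketched without Spark, in plain Scala. The `Map` literal below mirrors the first data row of the CSV; `RowRegroup` and the tuple shape are illustrative, not Spark API:

```scala
// Spark-free sketch: regroup one flat CSV row into the nested
// [(id, stat), ...] shape the question's schema describes.
object RowRegroup {
  // First data row of the CSV above, as column-name -> value.
  val row: Map[String, String] = Map(
    "data1_id" -> "1", "data1_stat" -> "0.76",
    "data2_id" -> "2", "data2_stat" -> "0.55",
    "data3_id" -> "3", "data3_stat" -> "0.16",
    "naming"   -> "Default1")

  // Collect one (id, stat) pair per "dataN" prefix, in prefix order.
  def data(row: Map[String, String]): Seq[(Int, Float)] =
    row.keys.toSeq
       .filter(_.startsWith("data"))
       .map(_.split("_")(0))
       .distinct.sorted
       .map(p => (row(p + "_id").toInt, row(p + "_stat").toFloat))

  def main(args: Array[String]): Unit =
    println(data(row))  // prints List((1,0.76), (2,0.55), (3,0.16))
}
```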
When I call show on the input DataFrame, I get the following:

+-------+-----------+
|data   |naming     |                                                                                  
+-------+-----------+
|null   |Default1   |
|null   |Default2   |
|null   |Default3   |
|null   |Default4   |
|null   |Default5   |
+-------+-----------+
Expected result:

+----------------------------+-----------+
|data                        |naming     |                                                                                  
+----------------------------+-----------+
|[[1,0.76],[2,0.55],[3,0.16]]|Default1   |
|[[1,0.2],[2,0.41],[3,0.89]] |Default2   |
|[[1,0.96],[2,0.12],[3,0.4]] |Default3   |
|[[1,0.28],[2,0.15],[3,0.31]]|Default4   |
|[[1,0.84],[2,0.41],[3,0.15]]|Default5   |
+----------------------------+-----------+

You have to transform the data by constructing an expression of the form
array(struct())

Extract the columns required for the array:

scala> import org.apache.spark.sql.functions.{array, col, struct}
scala> val arrayColumns = df
            .columns
            .filter(_.contains("data"))
            .map(_.split("_")(0))
            .distinct
            .map(c => struct(col(s"${c}_id").as("id"), col(s"${c}_stat").as("stat")))

scala> val colExpr = array(arrayColumns:_*).as("data")
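The column-grouping part of arrayColumns is plain Scala collection code, so it can be checked without a SparkSession. A minimal sketch over the header from the question (`ColumnGrouping` and `dataPrefixes` are illustrative names, not Spark API):

```scala
// Spark-free check of the prefix-extraction step used for arrayColumns:
// keep the "dataN_*" columns and reduce each to its distinct "dataN" prefix.
object ColumnGrouping {
  // The flat CSV header from the question.
  val columns = Seq("data1_id", "data1_stat", "data2_id", "data2_stat",
                    "data3_id", "data3_stat", "naming")

  def dataPrefixes(cols: Seq[String]): Seq[String] =
    cols.filter(_.contains("data")).map(_.split("_")(0)).distinct

  def main(args: Array[String]): Unit =
    println(dataPrefixes(columns))  // prints List(data1, data2, data3)
}
```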
Apply the colExpr expression to the DataFrame:

scala> val finalDf = df.select(colExpr,$"naming")
Check the schema:

scala> finalDf.printSchema
root
 |-- data: array (nullable = false)
 |    |-- element: struct (containsNull = false)
 |    |    |-- id: string (nullable = true)
 |    |    |-- stat: string (nullable = true)
 |-- naming: string (nullable = true)
Result. Note that id and stat come out as strings here, since the CSV was read without a schema; add casts inside the struct, e.g. col(s"${c}_id").cast("int") and col(s"${c}_stat").cast("float"), if you need the integer/float types from the target schema.

scala> finalDf.show(false)
+------------------------------+--------+
|data                          |naming  |
+------------------------------+--------+
|[[1,0.76], [2,0.55], [3,0.16]]|Default1|
|[[1,0.2], [2,0.41], [3,0.89]] |Default2|
|[[1,0.96], [2,0.12], [3,0.4]] |Default3|
|[[1,0.28], [2,0.15], [3,0.31]]|Default4|
|[[1,0.84], [2,0.41], [3,0.15]]|Default5|
+------------------------------+--------+
