
Converting a column of array of arrays to an array type in Scala/Spark


I have a DataFrame that looks like this:

+---+-----+--------------------------------------------------------------------------------------------------+------+
|uid|label|features                                                                                          |weight|
+---+-----+--------------------------------------------------------------------------------------------------+------+
|1  |1.0  |[WrappedArray([animal_indexed,2.0,animal_indexed]), WrappedArray([talk_indexed,3.0,talk_indexed])]|1     |
|2  |0.0  |[WrappedArray([animal_indexed,1.0,animal_indexed]), WrappedArray([talk_indexed,2.0,talk_indexed])]|1     |
|3  |1.0  |[WrappedArray([animal_indexed,0.0,animal_indexed]), WrappedArray([talk_indexed,1.0,talk_indexed])]|1     |
|4  |2.0  |[WrappedArray([animal_indexed,0.0,animal_indexed]), WrappedArray([talk_indexed,0.0,talk_indexed])]|1     |
+---+-----+--------------------------------------------------------------------------------------------------+------+
The schema is:

root
 |-- uid: integer (nullable = false)
 |-- label: double (nullable = false)
 |-- features: array (nullable = false)
 |    |-- element: array (containsNull = true)
 |    |    |-- element: struct (containsNull = true)
 |    |    |    |-- name: string (nullable = true)
 |    |    |    |-- value: double (nullable = false)
 |    |    |    |-- term: string (nullable = true)
 |-- weight: integer (nullable = false)
But I want to convert the features column from Array[Array] to Array, i.e. flatten the nested arrays within the same column, to get a schema like:

root
 |-- uid: integer (nullable = false)
 |-- label: double (nullable = false)
 |-- features: array (nullable = false)
 |    |-- element: struct (containsNull = true)
 |    |    |-- name: string (nullable = true)
 |    |    |-- value: double (nullable = false)
 |    |    |-- term: string (nullable = true)
 |-- weight: integer (nullable = false)

Thanks in advance.
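For reproducibility, a DataFrame with the same nested schema can be rebuilt from case classes. This is a sketch, not part of the original question; the names `Feature` and `Record` are assumptions:

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical case classes mirroring the schema above.
case class Feature(name: String, value: Double, term: String)
case class Record(uid: Int, label: Double, features: Seq[Seq[Feature]], weight: Int)

object BuildSample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("sample").getOrCreate()
    import spark.implicits._

    val rows = Seq(
      Record(1, 1.0,
        Seq(Seq(Feature("animal_indexed", 2.0, "animal_indexed")),
            Seq(Feature("talk_indexed", 3.0, "talk_indexed"))),
        1)
    )

    // printSchema() reproduces the nested array<array<struct>> layout shown above.
    spark.createDataset(rows).printSchema()
    spark.stop()
  }
}
```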

You should read the data as a Dataset with a schema:

case class Something(name: String, value: Double, term: String)
case class MyClass(uid: Int, label: Double, features: Seq[Seq[Something]], weight: Int)
Then use a UDF like this:

val flatUDF = udf((list: Seq[Seq[Something]]) => list.flatten)

val flattedDF = myDataFrame.withColumn("flatten", flatUDF($"features"))
Example of reading the Dataset:

import spark.implicits._  // needed for the .as[MyClass] encoder
val myDataFrame = spark.read.json(path).as[MyClass]

Hope this helps.
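As a side note beyond the answer above: Spark 2.4+ ships a built-in `flatten` function in `org.apache.spark.sql.functions` that collapses one level of `array<array<T>>` to `array<T>` without a UDF. A minimal sketch, assuming a DataFrame with a nested-array column named `features`:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.flatten

object FlattenSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("flatten-sketch").getOrCreate()
    import spark.implicits._

    // Hypothetical sample with the same Array[Array[...]] shape as `features`.
    val df = Seq(
      (1, Seq(Seq("animal_indexed"), Seq("talk_indexed")))
    ).toDF("uid", "features")

    // flatten (Spark >= 2.4): array<array<string>> -> array<string>.
    val flat = df.withColumn("features", flatten($"features"))
    flat.printSchema()
    spark.stop()
  }
}
```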

Comments:

Try using the explode function.

I don't want to get multiple rows out of a single row. I want something like a flatten operation on the array that removes the extra nesting in the DataFrame, i.e. goes from Array[Array[...]] to Array[...].

Can you try the flatten function on the array? All Scala collections have this function.

@PJFanning Could you give an example that takes a DataFrame column and converts it from Array[Array] to an Array type?
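The collection-level behavior the last comments refer to can be sketched in plain Scala, independent of Spark (the case class name `Something` follows the answer above):

```scala
// Plain Scala collections: flatten removes exactly one level of nesting.
case class Something(name: String, value: Double, term: String)

object CollectionFlatten {
  def main(args: Array[String]): Unit = {
    val nested: Seq[Seq[Something]] = Seq(
      Seq(Something("animal_indexed", 2.0, "animal_indexed")),
      Seq(Something("talk_indexed", 3.0, "talk_indexed"))
    )

    // Two inner sequences become one flat sequence of their elements.
    val flat: Seq[Something] = nested.flatten
    println(flat.map(_.name))  // prints List(animal_indexed, talk_indexed)
  }
}
```

The same `flatten` semantics is what the built-in Spark SQL function and the UDF in the answer apply per row of the `features` column.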