Scala DataFrame: column type from List to String


I have a DataFrame with (String, List[String]). I want to split the List[String] and put each value of the list into its own field. For example:

String 1, [1, 2, 3, 4]    =>   String 1, 1, 2, 3, 4
Input: (String, List[String])

Output: (String, String, String, ..., String)

I am trying the following code:

df.withColumn("temp",split(col("fieldList"), ","))
  .select(col("*") +: (0 until 9).map(i => col("temp").getItem(i).as(s"col$i")):_*)
My problem is that when I run it, I get an error like this:

User class threw exception: org.apache.spark.sql.AnalysisException: cannot resolve 'split(`fieldList`, ',')' due to data type mismatch: argument 1 requires string type, however, '`fieldList`' is of array type.

Any idea how to convert the list to a string? I have tried .mkString(), but I am missing something.
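The AnalysisException is the key: `split` expects a string column, but `fieldList` is already an `array<string>`, so there is nothing to split. If a single comma-joined string is really wanted, Spark's `concat_ws` does at the column level what `mkString(",")` does on a plain Scala collection. A minimal sketch (the `local[*]` session and sample data are illustrative assumptions):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, concat_ws}

val spark = SparkSession.builder().master("local[*]").appName("sketch").getOrCreate()
import spark.implicits._

val df = Seq(("String 1", Seq("1", "2", "3", "4"))).toDF("field", "fieldList")

// concat_ws joins the array elements into one string, like mkString(",").
// Note that the individual values can also be read directly with getItem(i),
// with no split at all, since the column is already an array.
val joined = df.withColumn("fieldList", concat_ws(",", col("fieldList")))
joined.show(false)
```

After this, `split(col("fieldList"), ",")` would type-check again, but indexing the original array column directly is simpler.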

Thanks!

Check the code below.

scala> val df = Seq(("Hey",Seq("wooa","mmmm","ehhh")),("Hey1",Seq("woooe", "rrrr", "ough", "shhhhh"))).toDF("aa","bb")
df: org.apache.spark.sql.DataFrame = [aa: string, bb: array<string>]

scala> val max = df.withColumn("length",size($"bb")).orderBy($"length".desc).select($"length").head.getAs[Int](0)
max: Int = 4

scala> df.withColumn("length",size($"bb")).orderBy($"length".desc).select(col("*") +: (0 until max).map(i => col("bb")(i).as(s"col$i")):_*).show(false)
+----+---------------------------+------+-----+----+----+------+
|aa  |bb                         |length|col0 |col1|col2|col3  |
+----+---------------------------+------+-----+----+----+------+
|Hey1|[woooe, rrrr, ough, shhhhh]|4     |woooe|rrrr|ough|shhhhh|
|Hey |[wooa, mmmm, ehhh]         |3     |wooa |mmmm|ehhh|null  |
+----+---------------------------+------+-----+----+----+------+

scala> df.withColumn("length",size($"bb")).orderBy($"length".desc).select(col("*") +: (0 until max).map(i => col("bb")(i).as(s"col$i")):_*).drop("length","bb")show(false)
+----+-----+----+----+------+
|aa  |col0 |col1|col2|col3  |
+----+-----+----+----+------+
|Hey1|woooe|rrrr|ough|shhhhh|
|Hey |wooa |mmmm|ehhh|null  |
+----+-----+----+----+------+
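The `orderBy(...).head` step above sorts the whole DataFrame just to read a single length. As a possible variant (a sketch reusing the answer's `aa`/`bb` columns), an aggregate with `max(size(...))` gets the same number without the sort:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, max, size}

val spark = SparkSession.builder().master("local[*]").appName("sketch").getOrCreate()
import spark.implicits._

val df = Seq(("Hey",  Seq("wooa", "mmmm", "ehhh")),
             ("Hey1", Seq("woooe", "rrrr", "ough", "shhhhh"))).toDF("aa", "bb")

// One aggregate instead of a full sort: the max of the per-row array sizes.
val maxLen = df.agg(max(size(col("bb")))).head.getInt(0)

// Same fan-out as above: one column per index; rows whose array is shorter
// than maxLen get null in the trailing columns.
df.select(col("aa") +: (0 until maxLen).map(i => col("bb")(i).as(s"col$i")): _*)
  .show(false)
```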


Comments:

It would be cool if you provided the expected input and output. If you already have a list of strings, why do you need to do a split?

I need to take each element of the list and put it into its own column.

Hi @Srinivas, one last question: how can I save this result in a DataFrame? I tried `val dataframe = df.withColumn("length",size($"bb")).orderBy($"length".desc).select(col("*") +: (0 until max).map(i => col("bb")(i).as(s"col$i")):_*).drop("length","bb").show(false)` but I can't use it as a DataFrame.

Remove the last `show(false)` and it will save the result to the DF. :)
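On the follow-up question about keeping the result as a DataFrame: `show()` returns `Unit`, so ending the chain with `show(false)` throws the DataFrame away. Binding the transformation itself to a `val` and calling `show` separately keeps it usable. A sketch with the same sample data and `max` computation as the answer:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{col, size}

val spark = SparkSession.builder().master("local[*]").appName("sketch").getOrCreate()
import spark.implicits._

val df = Seq(("Hey",  Seq("wooa", "mmmm", "ehhh")),
             ("Hey1", Seq("woooe", "rrrr", "ough", "shhhhh"))).toDF("aa", "bb")
val max = df.withColumn("length", size($"bb"))
  .orderBy($"length".desc).select($"length").head.getAs[Int](0)

// No trailing show(false): `result` stays a DataFrame that can be reused.
val result = df
  .select(col("*") +: (0 until max).map(i => col("bb")(i).as(s"col$i")): _*)
  .drop("bb")

result.show(false)   // printing is a separate, side-effecting step
```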