Flattening a nested JSON string into multiple Spark DataFrames

Tags: dataframe, apache-spark, apache-spark-sql

I am trying to create a DataFrame from a nested JSON string and split it into multiple DataFrames, i.e. the outer elements go to one DataFrame and the nested child data goes to another. There may be multiple nested elements. I have looked at other posts, and none of them provides a working example for the scenario below. Below is an example where the number of states is dynamic, and I want to store the country information and the state information in two separate HDFS folders. So the parent DataFrame contains one row, as shown below:

val jsonStr = """{"country":"US","ISD":"001","states":[{"state1":"NJ","state2":"NY","state3":"PA"}]}"""


I have looked at the other Stack Overflow questions about flattening nested JSON. None of them has a working solution for this case.

Here is some code that does the job. You should consider performance if the number of columns is large, since this collects all the map keys to the driver and adds each of them back to the DataFrame as a column:

import spark.implicits._
import org.apache.spark.sql.Column
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types.StructType
import scala.collection.mutable

val jsonStr = """{"country":"US","ISD":"001","states":[{"state1":"NJ","state2":"NY","state3":"PA"}]}"""

val countryDf = spark.read.json(Seq(jsonStr).toDS)
countryDf.show(false)

// Explode the array of state structs into one row per struct
val statesDf = countryDf.select($"country", explode($"states").as("states"))

// Get the schema of the exploded struct column
val index = statesDf.schema.fieldIndex("states")
val stateSchema = statesDf.schema(index).dataType.asInstanceOf[StructType]

// Build an alternating (key literal, value column) sequence for map()
val columns = mutable.LinkedHashSet[Column]()
stateSchema.fields.foreach { field =>
  columns.add(lit(field.name))
  columns.add(col("states." + field.name)) // the exploded column is named "states", not "state"
}

val s2 = statesDf.withColumn("statesMap", map(columns.toSeq: _*))

// Collect the distinct map keys, then turn each key into its own column
val allMapKeys = s2.select(explode($"statesMap")).select($"key").distinct.collect().map(_.get(0).toString)

val s3 = allMapKeys.foldLeft(s2)((a, b) => a.withColumn(b, a("statesMap")(b)))
  .drop("statesMap")
s3.show(false)
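The question also asks to store the two results in separate HDFS folders. A minimal sketch of that last step, using the standard DataFrameWriter API (the paths are hypothetical, adjust to your cluster layout):

```scala
// Hypothetical HDFS target paths
val countryPath = "hdfs:///data/country"
val statesPath  = "hdfs:///data/states"

// Parent (outer) data goes to one folder...
countryDf.select($"ISD", $"country")
  .write.mode("overwrite").json(countryPath)

// ...and the flattened state data (s3 from above) to another
s3.write.mode("overwrite").json(statesPath)
```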

When you read nested JSON and convert it to a Dataset, the nested part is stored as a struct type. So you have to flatten that struct type in the DataFrame:

import spark.implicits._
import org.apache.spark.sql.functions.explode

val jsonStr = """{"country":"US","ISD":"001","states":[{"state1":"NJ","state2":"NY","state3":"PA"}]}"""
val countryDf = spark.read.json(Seq(jsonStr).toDS)

countryDf.show(false)
+---+-------+--------------+
|ISD|country|states        |
+---+-------+--------------+
|001|US     |[[NJ, NY, PA]]|
+---+-------+--------------+

val countryDfExploded = countryDf.withColumn("states",explode($"states"))
countryDfExploded.show(false)
+---+-------+------------+
|ISD|country|states      |
+---+-------+------------+
|001|US     |[NJ, NY, PA]|
+---+-------+------------+

val countrySelectDf = countryDfExploded.select($"ISD", $"country")
countrySelectDf.show(false)
+---+-------+
|ISD|country|
+---+-------+
|001|US     |
+---+-------+

val statesDf = countryDfExploded.select($"country", $"states.*")
statesDf.show(false)
+-------+------+------+------+
|country|state1|state2|state3|
+-------+------+------+------+
|US     |NJ    |NY    |PA    |
+-------+------+------+------+
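The `$"states.*"` selection above only flattens one struct level. Since the question says there may be multiple nested elements, a recursive flattening helper is sometimes useful. The sketch below is my own helper, not taken from the answers; it expands every StructType column (naming nested fields parent_child) until none remain, and assumes arrays have already been exploded:

```scala
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.StructType

// Recursively flatten all struct columns in a DataFrame.
def flattenStructs(df: DataFrame): DataFrame = {
  val hasStruct = df.schema.fields.exists(_.dataType.isInstanceOf[StructType])
  if (!hasStruct) df
  else {
    val cols = df.schema.fields.flatMap { f =>
      f.dataType match {
        case st: StructType =>
          // Pull each nested field up, prefixed with the parent name
          st.fields.map(sf => col(s"${f.name}.${sf.name}").as(s"${f.name}_${sf.name}"))
        case _ => Seq(col(f.name))
      }
    }
    flattenStructs(df.select(cols: _*))
  }
}
```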

Comment: Wouldn't it be better to have statesDf in columns rather than rows? Is there always a fixed number of states? Thanks, Shailesh. — No, the states are dynamic.
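Since the state keys are dynamic, another option (my suggestion, not from the answers above) is to parse the nested objects with an explicit MapType schema instead of letting Spark infer a struct. Exploding the map then yields one row per state without collecting the keys to the driver:

```scala
import spark.implicits._
import org.apache.spark.sql.functions.{col, explode}
import org.apache.spark.sql.types._

// Parse "states" as an array of string->string maps, so arbitrary
// state keys need no schema change.
val schema = StructType(Seq(
  StructField("country", StringType),
  StructField("ISD", StringType),
  StructField("states", ArrayType(MapType(StringType, StringType)))
))

val df = spark.read.schema(schema).json(Seq(jsonStr).toDS)
val statesLong = df
  .select(col("country"), explode(col("states")).as("stateMap"))
  .select(col("country"), explode(col("stateMap"))) // yields "key" and "value" columns
statesLong.show(false)
```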