Flatten JSON using Scala

I have a JSON file with the following data:

{
    "@odata.context": "XXXX",
    "value": [
        {
            "@odata.etag": "W/\"JzQ0OzlxaDNzLys1WXBPbWFXaE5MbFdKbVpNYjMrWDQ1MmJSeGdxVVhrTVRZUXc9MTswMDsn\"",
            "E_No": 345345,
            "G_Code": "007",
            "G_2_Code": ""
        },
        {
            "@odata.etag": "W/\"JzQ0O0ZNWkF2OGd1dVE2L21OQTdKR2g4YU05TldKMERpMUpMWTRSazFKQzZuTDQ9MTswMDsn\"",
            "E_No": 234543,
            "G_Code": "008",
            "G_2_Code": ""
        }
    ],
    "@odata.nextLink": "XXXX"
}
I'm trying to flatten it in Databricks using Scala. I created a DataFrame, DF:

val DF = spark.read.json(path)
When I feed it this JSON, I need a DataFrame built from only E_No, G_Code, and G_2_Code. The rest of the columns can be dropped from the DataFrame.
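For this specific shape, a plain explode-and-select would already get there without a generic flattener. Below is a minimal sketch (my assumption, not from the original post), reusing the same path; note that pretty-printed JSON like this usually needs Spark's multiLine option, since spark.read.json expects one JSON object per line by default:

import org.apache.spark.sql.functions.{col, explode}

// Pretty-printed JSON spans multiple lines, so enable multiLine parsing
val df = spark.read.option("multiLine", "true").json(path)

// Explode the "value" array and keep only the three wanted fields;
// the @odata.* columns are never selected, so they simply fall away
val wanted = df
  .select(explode(col("value")).as("v"))
  .select(col("v.E_No"), col("v.G_Code"), col("v.G_2_Code"))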

I tried feeding this JSON into some flattening code I found on a blog:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types.{ArrayType, StructType}

def flattenDataframe(df: DataFrame): DataFrame = {
  val fields = df.schema.fields
  val fieldNames = fields.map(x => x.name)

  for (i <- fields.indices) {
    val field = fields(i)
    val fieldtype = field.dataType
    val fieldName = field.name
    fieldtype match {
      case arrayType: ArrayType =>
        // Explode the array column and recurse on the result
        val fieldNamesExcludingArray = fieldNames.filter(_ != fieldName)
        val fieldNamesAndExplode = fieldNamesExcludingArray ++ Array(s"explode_outer($fieldName) as $fieldName")
        val explodedDf = df.selectExpr(fieldNamesAndExplode: _*)
        return flattenDataframe(explodedDf)
      case structType: StructType =>
        // Promote struct children to top-level columns, prefixing with the parent name
        val childFieldnames = structType.fieldNames.map(childname => fieldName + "." + childname)
        val newfieldNames = fieldNames.filter(_ != fieldName) ++ childFieldnames
        val renamedcols = newfieldNames.map(x => col(x).as(x.replace(".", "_")))
        val explodedDf = df.select(renamedcols: _*)
        return flattenDataframe(explodedDf)
      case _ =>
    }
  }
  df
}
I'm guessing it doesn't like the "@odata" columns, which I don't need anyway. I need to get rid of those columns and then see whether the rest can be flattened.
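One hedged way to do that is to drop every top-level column whose name starts with "@odata" before flattening (a sketch against the DF defined above; withoutOdata is a name of my choosing):

// drop() takes column names literally, so the dots in "@odata.context"
// and "@odata.nextLink" are not a problem here
val withoutOdata = DF.drop(DF.columns.filter(_.startsWith("@odata")): _*)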

If there is a better way to flatten this than the code I'm using, please let me know.


Thanks

Explode the nested array JSON and select the fields to write to a JSON-format file:

import org.apache.spark.sql.functions.{col, explode}
import org.apache.spark.sql.types.{ArrayType, StructType}

val jsonDF = spark.read.json(path)

val explodeColName = "value" // name of the column we want to explode
val flattenColName = explodeColName + "_flat" // temp name

// Pull the element field names out of the array-of-struct column
val listOfColsFromArrayType =
  jsonDF.schema
    .find(s => s.name == explodeColName && s.dataType.isInstanceOf[ArrayType])
    .map(
      _.dataType
        .asInstanceOf[ArrayType]
        .elementType
        .asInstanceOf[StructType]
        .fieldNames
    )

val filterColList =
  listOfColsFromArrayType.getOrElse(throw new Exception("explode col name not found")) // or handle the error as needed

// Backtick-quote any field name that contains a dot so Spark
// does not read it as struct access
val flattenFilterCols = filterColList.map { c =>
  if (c.contains(".")) col(s"$flattenColName.`$c`") else col(s"$flattenColName.$c")
}

val flattenDF = jsonDF
  .select(explode(col(explodeColName)).as(flattenColName))
  .select(flattenFilterCols: _*)

flattenDF
  .write
  .json(outputPath)
The result will be:

{"@odata.etag":"W/\"JzQ0OzlxaDNzLys1WXBPbWFXaE5MbFdKbVpNYjMrWDQ1MmJSeGdxVVhrTVRZUXc9MTswMDsn\"","E_No":345345,"G_2_Code":"","G_Code":"007"}
{"@odata.etag":"W/\"JzQ0O0ZNWkF2OGd1dVE2L21OQTdKR2g4YU05TldKMERpMUpMWTRSazFKQzZuTDQ9MTswMDsn\"","E_No":234543,"G_2_Code":"","G_Code":"008"}

I made a few modifications to your approach and it works now.

Note that I haven't renamed any of the underlying columns. If you want to refer to one of them in further processing, use backticks (`); see the usage sketch after the schema below.

Test data:

DF.show(false)
DF.printSchema()
/**
 * +--------------+---------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
 * |@odata.context|@odata.nextLink|value                                                                                                                                                                                         |
 * +--------------+---------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
 * |XXXX          |XXXX           |[[W/"JzQ0OzlxaDNzLys1WXBPbWFXaE5MbFdKbVpNYjMrWDQ1MmJSeGdxVVhrTVRZUXc9MTswMDsn", 345345, , 007], [W/"JzQ0O0ZNWkF2OGd1dVE2L21OQTdKR2g4YU05TldKMERpMUpMWTRSazFKQzZuTDQ9MTswMDsn", 234543, , 008]]|
 * +--------------+---------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
 *
 * root
 *  |-- @odata.context: string (nullable = true)
 *  |-- @odata.nextLink: string (nullable = true)
 *  |-- value: array (nullable = true)
 *  |    |-- element: struct (containsNull = true)
 *  |    |    |-- @odata.etag: string (nullable = true)
 *  |    |    |-- E_No: long (nullable = true)
 *  |    |    |-- G_2_Code: string (nullable = true)
 *  |    |    |-- G_Code: string (nullable = true)
 */
Flatten the nested columns of array and struct type:

def flattenDataframe(df: DataFrame): DataFrame = {
  val fields = df.schema.fields
  val fieldNames = fields.map(x => x.name)

  for (i <- fields.indices) {
    val field = fields(i)
    val fieldName = field.name
    field.dataType match {
      case arrayType: ArrayType =>
        // Backtick-quote the other columns so dotted names survive selectExpr
        val fieldNamesExcludingArray = fieldNames.filter(_ != fieldName)
        val fieldNamesAndExplode = fieldNamesExcludingArray.map(c => s"`$c`") ++
          Array(s"explode_outer($fieldName) as $fieldName")
        val explodedDf = df.selectExpr(fieldNamesAndExplode: _*)
        return flattenDataframe(explodedDf)
      case structType: StructType =>
        // Promote struct children to top-level columns, keeping their original names
        val childFieldnames = structType.fieldNames.map(childname => s"$fieldName.`$childname`")
        val newfieldNames = fieldNames.filter(_ != fieldName).map(c => s"`$c`") ++ childFieldnames
        val renamedcols = newfieldNames.map(x => col(x))
        val explodedDf = df.select(renamedcols: _*)
        return flattenDataframe(explodedDf)
      case _ =>
    }
  }
  df
}
val flattenedJSON = flattenDataframe(DF)
flattenedJSON.show(false)
flattenedJSON.printSchema()
/**
 * +--------------+---------------+----------------------------------------------------------------------------+------+--------+------+
 * |@odata.context|@odata.nextLink|@odata.etag                                                                 |E_No  |G_2_Code|G_Code|
 * +--------------+---------------+----------------------------------------------------------------------------+------+--------+------+
 * |XXXX          |XXXX           |W/"JzQ0OzlxaDNzLys1WXBPbWFXaE5MbFdKbVpNYjMrWDQ1MmJSeGdxVVhrTVRZUXc9MTswMDsn"|345345|        |007   |
 * |XXXX          |XXXX           |W/"JzQ0O0ZNWkF2OGd1dVE2L21OQTdKR2g4YU05TldKMERpMUpMWTRSazFKQzZuTDQ9MTswMDsn"|234543|        |008   |
 * +--------------+---------------+----------------------------------------------------------------------------+------+--------+------+
 *
 * root
 *  |-- @odata.context: string (nullable = true)
 *  |-- @odata.nextLink: string (nullable = true)
 *  |-- @odata.etag: string (nullable = true)
 *  |-- E_No: long (nullable = true)
 *  |-- G_2_Code: string (nullable = true)
 *  |-- G_Code: string (nullable = true)
 */
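As a small usage sketch of the backtick note above (assuming the flattenedJSON frame just shown), picking a dotted column back out looks like this:

// Backticks stop Spark from parsing the dots as struct access
flattenedJSON.select(col("`@odata.etag`"), col("E_No")).show(false)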

Quick question: don't you also need @odata.context? Can you paste your expected output?

I don't need any of the @odata fields. I just need a DataFrame with E_No, G_Code, and G_2_Code.

Thanks @vkt for the reply. There are actually 50+ columns; I mentioned three of them for context. Is there any way to make this work without naming them? I need to apply this logic to multiple tables.

Yes, it works if you pass the list of required columns; here you give the name of the array object ("value") along with the columns you want to select.

@kranny check the updated answer. I assumed you have the list of 50 column names from some filter or as an input parameter.

Those 50 columns need to come from jsonDF itself, and that's the key part: I need filterColList to be extracted from jsonDF automatically, without naming the columns. I'm new to Scala, so I'm trying to find the exact syntax/function. @kranny how would you do that? (One way is sketched below.)
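Regarding that last comment, the column list can be derived from the schema rather than typed out. A minimal sketch (my assumption, building on the jsonDF from the first answer): collect the element field names of every top-level array-of-struct column instead of hard-coding "value":

import org.apache.spark.sql.types.{ArrayType, StructField, StructType}

// Collect the child field names of every top-level array-of-struct column
val autoFilterCols: Seq[String] = jsonDF.schema.fields.toSeq.collect {
  case StructField(_, ArrayType(st: StructType, _), _, _) => st.fieldNames.toSeq
}.flatten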
{"@odata.etag":"W/\"JzQ0OzlxaDNzLys1WXBPbWFXaE5MbFdKbVpNYjMrWDQ1MmJSeGdxVVhrTVRZUXc9MTswMDsn\"","E_No":345345,"G_2_Code":"","G_Code":"007"}
{"@odata.etag":"W/\"JzQ0O0ZNWkF2OGd1dVE2L21OQTdKR2g4YU05TldKMERpMUpMWTRSazFKQzZuTDQ9MTswMDsn\"","E_No":234543,"G_2_Code":"","G_Code":"008"}