How to use explode in Spark/Scala


This is my parent dataframe:

|DataPartition|TimeStamp|_organizationId|_sourceId|sr:Auditors|sr:CapitalChangeAdjustmentDate|sr:ContainsPreliminaryData|sr:ContainsRestatement|sr:CumulativeAdjustmentFactor|sr:Dcn|sr:DocFormat|sr:DocumentId|sr:FilingDateTimeUTCOffset|sr:IsFilingDateTimeEstimated|sr:SourceTypeCode|sr:SourceTypeId|sr:StatementDate|sr:ThirdPartySourceCode|sr:ThirdPartySourceCodeId|sr:ThirdPartySourcePriority|FFAction|!||
|SelfSourcedPrivate|2017-11-02T10:23:59+00:00|4298009288|80|[WrappedArray([16165,null,UWE,30105473020538,true,false,true])]|2017-07-31T00:00:00+00:00|false|false|1.0|171105584|asfield|null|2017-09-28T23:00:00+00:00|-300|false|10K|3011835|2017-07-31T00:00:00+00:00|SS|1000716240|1|I|!||
|SelfSourcedPublic|2017-11-21T12:09:23+00:00|4295904170|364|null|2017-07-30T00:00:00+00:00|false|false|1.0|null|null|null|2017-08-08T17:00:00+00:00|-300|false|10Q|3011836|2017-07-30T00:00:00+00:00|SS|1000716240|1|I|!||
|SelfSourcedPublic|2017-11-21T12:09:23+00:00|4295904170|365|[WrappedArray([35413024068,UNQ,3010546,null,true,true,false])]|2017-09-30T00:00:00+00:00|false|false|1.0|null|null|null|2017-10-10T17:00:00+00:00|-300|false|10K|3011835|2017-09-30T00:00:00+00:00|SS|1000716240|1|I|!||
|SelfSourcedPublic|2017-11-21T12:17:49+00:00|4295904170|365|[WrappedArray([35413024068,UNQ,3010546,null,true,true,false])]|2017-09-30T00:00:00+00:00|false|false|1.0|null|null|null|2017-10-10T17:00:00+00:00|-300|false|10K|3011835|2017-09-30T00:00:00+00:00|SS|1000716240|1|I|!||
|SelfSourcedPublic|2017-11-21T12:18:55+00:00|4295904170|364|null|2017-07-30T00:00:00+00:00|false|false|1.0|null|null|null|2017-08-08T17:00:00+00:00|-300|false|10Q|3011836|2017-07-30T00:00:00+00:00|SS|1000716240|1|I|!||
|SelfSourcedPublic|2017-11-21T12:18:55+00:00|4295904170|365|[WrappedArray([35413024068,UNQ,3010546,null,true,true,false])]|2017-09-30T00:00:00+00:00|false|false|1.0|null|null|null|2017-10-10T17:00:00+00:00|-300|false|10K|3011835|2017-09-30T00:00:00+00:00|SS|1000716240|1|I|!||
|SelfSourcedPublic|2017-11-03T12:30:00+00:00|4295858941|10|null|2016-03-31T00:00:00+00:00|false|false|1.0|n
import org.apache.spark.sql.functions.explode
import sqlContext.implicits._ // for the $"..." column syntax

// Read the XML, producing one row per env:ContentEnvelope element
val dfContentEnvelope = sqlContext.read
  .format("com.databricks.spark.xml")
  .option("rowTag", "env:ContentEnvelope")
  .load("s3://trfsmallfffile/XML")

// Explode the array of ContentItem elements into one row each
val dfContentItem = dfContentEnvelope
  .withColumn("column1", explode(dfContentEnvelope("env:Body.env:ContentItem")))
  .select(
    $"env:Header.fun:DataPartitionId".as("DataPartition"),
    $"env:Header.env:info.env:TimeStamp".as("TimeStamp"),
    $"column1.*")
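For reference, `explode` produces one output row per element of an array column, repeating the other columns. A minimal sketch of that semantics on a toy DataFrame (local SparkSession and made-up column names, purely for illustration):

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.explode

// Hypothetical local session, just to demonstrate explode's behavior
val spark = SparkSession.builder.master("local[*]").appName("explode-demo").getOrCreate()
import spark.implicits._

val df = Seq((1, Seq("a", "b")), (2, Seq("c"))).toDF("id", "items")

// One row per array element: (1,a), (1,b), (2,c)
df.select($"id", explode($"items").as("item")).show()
```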


//val childDF=dfType.select($"_organizationId".as("organizationId"), $"_sourceId".as("sourceId"), explode($"sr:Auditors.sr:Auditor").as("Auditors"), getFFActionChild($"FFAction|!|").as("FFAction|!|"))

//childDF.show()
val sourceDF = dfContentItem.select($"DataPartition", $"TimeStamp", $"env:Data.sr:Source.*")

// Explode the nested sr:Auditor array into one row per auditor
val childDF = sourceDF.select(
  $"DataPartition", $"TimeStamp", $"_organizationId", $"_sourceId",
  explode($"sr:Auditors.sr:Auditor").as("Auditors"))

childDF.select($"DataPartition", $"TimeStamp", $"_organizationId", $"_sourceId", $"Auditors.*").show(false)
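One caveat worth noting: several rows in the sample data have a null `sr:Auditors` column, and `explode` drops those rows entirely. If they should survive with null auditor fields instead, `explode_outer` (available since Spark 2.2) is the drop-in alternative. A sketch under that assumption, reusing the `sourceDF` built above:

```scala
import org.apache.spark.sql.functions.explode_outer

// Keeps rows whose sr:Auditors array is null or empty,
// emitting a single row with a null Auditors struct for them
val childDFAll = sourceDF.select(
  $"DataPartition", $"TimeStamp", $"_organizationId", $"_sourceId",
  explode_outer($"sr:Auditors.sr:Auditor").as("Auditors"))
```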