Scala: How to create a Spark DataFrame from a nested array of struct elements?


I have read a JSON file into Spark. The file has the following structure:

scala> tweetBlob.printSchema
root
 |-- related: struct (nullable = true)
 |    |-- next: struct (nullable = true)
 |    |    |-- href: string (nullable = true)
 |-- search: struct (nullable = true)
 |    |-- current: long (nullable = true)
 |    |-- results: long (nullable = true)
 |-- tweets: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- cde: struct (nullable = true)
...
...
 |    |    |-- cdeInternal: struct (nullable = true)
...
...
 |    |    |-- message: struct (nullable = true)
...
...
Ideally, what I want is a DataFrame with the columns cde, cdeInternal, message, ... as shown below:

root
|-- cde: struct (nullable = true)
...
...
|-- cdeInternal: struct (nullable = true)
...
...
|-- message: struct (nullable = true)
...
...
I have managed to use explode to extract the elements from the tweets array into a column called tweets:

scala> val tweets = tweetBlob.select(explode($"tweets").as("tweets"))
tweets: org.apache.spark.sql.DataFrame = [tweets: struct<cde:struct<author:struct<gender:string,location:struct<city:string,country:string,state:string>,maritalStatus:struct<evidence:string,isMarried:string>,parenthood:struct<evidence:string,isParent:string>>,content:struct<sentiment:struct<evidence:array<struct<polarity:string,sentimentTerm:string>>,polarity:string>>>,cdeInternal:struct<compliance:struct<isActive:boolean,userProtected:boolean>,tracks:array<struct<id:string>>>,message:struct<actor:struct<displayName:string,favoritesCount:bigint,followersCount:bigint,friendsCount:bigint,id:string,image:string,languages:array<string>,link:string,links:array<struct<href:string,rel:string>>,listedCount:bigint,location:struct<displayName:string,objectType:string>,objectType:string,postedTime...
scala> tweets.printSchema
root
 |-- tweets: struct (nullable = true)
 |    |-- cde: struct (nullable = true)
...
...
 |    |-- cdeInternal: struct (nullable = true)
...
...
 |    |-- message: struct (nullable = true)
...
...
How can I select all the columns inside the struct and create a DataFrame from them? If my understanding is correct, explode does not work on structs.


Any help is much appreciated.

One possible way to handle this is to extract the required information from the schema. Let's start with some dummy data:

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.col
import org.apache.spark.sql.types._


case class Bar(x: Int, y: String)
case class Foo(bar: Bar)

val df = sc.parallelize(Seq(Foo(Bar(1, "first")), Foo(Bar(2, "second")))).toDF

df.printSchema

// root
//  |-- bar: struct (nullable = true)
//  |    |-- x: integer (nullable = false)
//  |    |-- y: string (nullable = true)
and a helper function:

def children(colname: String, df: DataFrame) = {
  // find the field with the given name in the top-level schema
  val parent = df.schema.fields.filter(_.name == colname).head
  // if it is a struct, collect its nested fields; otherwise there is nothing to expand
  val fields = parent.dataType match {
    case x: StructType => x.fields
    case _ => Array.empty[StructField]
  }
  // build a Column for each nested field using dot syntax: parent.child
  fields.map(x => col(s"$colname.${x.name}"))
}
and finally the result:

df.select(children("bar", df): _*).printSchema

// root
// |-- x: integer (nullable = true)
// |-- y: string (nullable = true)
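Applied to the exploded tweets DataFrame from the question, the same helper should produce the schema that was asked for (a sketch, not run against the real data):

// expand the "tweets" struct into top-level cde, cdeInternal, message, ... columns
val flattened = tweets.select(children("tweets", tweets): _*)

flattened.printSchema
// root
//  |-- cde: struct (nullable = true)
//  |-- cdeInternal: struct (nullable = true)
//  |-- message: struct (nullable = true)
// ...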
You can use:

df
  .select(explode(col("path_to_collection")).as("collection"))
  .select(col("collection.*"))`:
For example:

scala> val json = """{"name":"Michael", "schools":[{"sname":"stanford", "year":2010}, {"sname":"berkeley", "year":2012}]}"""

scala> val inline = sqlContext.read.json(sc.parallelize(json :: Nil)).select(explode(col("schools")).as("collection")).select(col("collection.*"))

scala> inline.printSchema
root
 |-- sname: string (nullable = true)
 |-- year: long (nullable = true)

scala> inline.show
+--------+----+
|   sname|year|
+--------+----+
|stanford|2010|
|berkeley|2012|
+--------+----+
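The same explode-plus-star pattern applied to the question's data would look roughly like this (a sketch, assuming the original tweetBlob DataFrame from the question):

import org.apache.spark.sql.functions.{col, explode}

// explode the array of structs, then expand the resulting struct with star syntax
val flatTweets = tweetBlob
  .select(explode(col("tweets")).as("collection"))
  .select(col("collection.*"))

flatTweets.printSchema
// root
//  |-- cde: struct (nullable = true)
//  |-- cdeInternal: struct (nullable = true)
//  |-- message: struct (nullable = true)
// ...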
Comments:

- Hi @zero323, this may be a silly question, but what does the syntax children("bar", df): _* mean? — children("bar", df) is just a call that returns a Seq[Column]; : _* performs varargs unpacking.
- Is there any way to do this in SparkR? — @nate It should be. SparkR schemas are equivalent to Scala schemas, and the dot syntax is common to all Spark SQL implementations.
- This approach is simpler. Consider updating the answer for Spark 2.0: spark.read.json(spark.createDataset(json :: Nil)).createOrReplaceTempView("tmp"); a query such as SELECT name FROM tmp may then be useful with the solution above.

Alternatively, you can also use the SQL function inline:
scala> val json = """{"name":"Michael", "schools":[{"sname":"stanford", "year":2010}, {"sname":"berkeley", "year":2012}]}"""

scala> sqlContext.read.json(sc.parallelize(json :: Nil)).registerTempTable("tmp")

scala> val inline = sqlContext.sql("SELECT inline(schools) FROM tmp")

scala> inline.printSchema
root
 |-- sname: string (nullable = true)
 |-- year: long (nullable = true)

scala> inline.show
+--------+----+
|   sname|year|
+--------+----+
|stanford|2010|
|berkeley|2012|
+--------+----+
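Picking up the Spark 2.0 suggestion from the comments above, the same inline query can be expressed through a SparkSession (a sketch; assumes a SparkSession named spark, and the Dataset[String] overload of read.json is available from Spark 2.2):

import spark.implicits._

// Spark 2.x: read the JSON through a Dataset[String] and register a temp view
val json = """{"name":"Michael", "schools":[{"sname":"stanford", "year":2010}, {"sname":"berkeley", "year":2012}]}"""
spark.read.json(spark.createDataset(json :: Nil)).createOrReplaceTempView("tmp")
spark.sql("SELECT inline(schools) FROM tmp").show
// +--------+----+
// |   sname|year|
// +--------+----+
// |stanford|2010|
// |berkeley|2012|
// +--------+----+

Finally, if all you need is to expand a single struct column, plain star expansion with select does the job, as the following spark-shell session shows: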
scala> import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.DataFrame

scala> import org.apache.spark.sql.types._
import org.apache.spark.sql.types._

scala> case class Bar(x: Int, y: String)
defined class Bar

scala> case class Foo(bar: Bar)
defined class Foo

scala> val df = sc.parallelize(Seq(Foo(Bar(1, "first")), Foo(Bar(2, "second")))).toDF
df: org.apache.spark.sql.DataFrame = [bar: struct<x: int, y: string>]


scala> df.printSchema
root
 |-- bar: struct (nullable = true)
 |    |-- x: integer (nullable = false)
 |    |-- y: string (nullable = true)


scala> df.select("bar.*").printSchema
root
 |-- x: integer (nullable = true)
 |-- y: string (nullable = true)


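Applied back to the question, tweets.select($"tweets.*") on the exploded DataFrame from above should likewise yield the desired schema with cde, cdeInternal, and message as top-level columns.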