How to set an array type in a Spark Scala Dataset
I have source data like this:
{A:123,B:"Hello world",C:[{D:123,E:"Spark"}]}
and I have this target:
case class TestClass(A: Int, B: String, C: ???)
val obj: Dataset[TestClass] = df.as[TestClass]
How should I define the type of C?

One option is a nested case class (note that spark.read.json infers JSON integers as Long, which is why A is declared as Long here rather than the Int in the question):
case class Nested(D: Long, E: String)
case class TestClass(A: Long, B: String, C: Seq[Nested])
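C does not have to be a Seq specifically: Spark can derive encoders for other standard Scala collections as well, so a variant like the one below (TestClassArr is a hypothetical name, invented for illustration) maps to the same Spark array column:

// Array[Nested] (or List[Nested]) is encoded as the same ArrayType column.
case class TestClassArr(A: Long, B: String, C: Array[Nested])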
Usage:
import spark.implicits._  // needed for the TestClass encoder used by .as[...]

spark.read.json(sc.parallelize(Seq(
  """{"A": 123, "B": "Hello world", "C": [{"D": 123, "E": "Spark"}]}"""
))).as[TestClass].show
+---+-----------+-------------+
| A| B| C|
+---+-----------+-------------+
|123|Hello world|[[123,Spark]]|
+---+-----------+-------------+
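Once the Dataset is typed, C comes back as an ordinary Seq[Nested], so the nested fields can be read with plain Scala. A minimal sketch, assuming a SparkSession named spark; it feeds the JSON in as a Dataset[String], which is the non-deprecated overload of spark.read.json since Spark 2.2:

import org.apache.spark.sql.Dataset
import spark.implicits._

val ds: Dataset[TestClass] = spark.read
  .json(Seq("""{"A": 123, "B": "Hello world", "C": [{"D": 123, "E": "Spark"}]}""").toDS())
  .as[TestClass]

// C shows up in the schema as an array of structs.
ds.printSchema()
// root
//  |-- A: long (nullable = true)
//  |-- B: string (nullable = true)
//  |-- C: array (nullable = true)
//  |    |-- element: struct (containsNull = true)
//  |    |    |-- D: long (nullable = true)
//  |    |    |-- E: string (nullable = true)

// Typed access: pull every E out of the nested array.
val names: Dataset[String] = ds.flatMap(_.C.map(_.E))
names.show()
// +-----+
// |value|
// +-----+
// |Spark|
// +-----+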