Parsing JSON-format RDD values in Spark into separate values


I'm trying to do a kind of flat map over an RDD using Spark (Scala). The RDD has N values, one of which is in JSON format.

For example, when I print the RDD, I get something like this:

myRDD.collect().foreach(println)

[2020,{'COL_A': 1064.3667, 'col_B': 14534.2}]
[2020,{'COL_A': 1064.3667, 'col_B': 145.2}]
[2020,{'COL_A': 1064.3667, 'col_B': 15576.2}]
I would like something like this:

[2020,1064.3667,14534.2]
[2020,1064.3667,145.2]
[2020,1064.3667,15576.2]
I don't know whether this can be done with a flatMap.


Thanks

Parse the JSON with the json4s library, which ships with Spark.

Import the required libraries:

scala> import org.json4s.jackson.JsonMethods._
import org.json4s.jackson.JsonMethods._

scala> import org.json4s._
import org.json4s._
scala> val rdd = spark.sparkContext.parallelize(
         Seq(
           (2020, """{"COL_A": 1064.3667, "col_B": 14534.2}"""),
           (2020, """{"COL_A": 1064.3667, "col_B": 145.2}"""),
           (2020, """{"COL_A": 1064.3667, "col_B": 15576.2}""")
         )
       )
scala> rdd.collect.foreach(println)
(2020,{"COL_A": 1064.3667, "col_B": 14534.2})
(2020,{"COL_A": 1064.3667, "col_B": 145.2})
(2020,{"COL_A": 1064.3667, "col_B": 15576.2})
scala> :paste
// Entering paste mode (ctrl-D to finish)

val transformedRdd = rdd.map { c =>
  implicit val formats = DefaultFormats
  // Parse the JSON string and collect its values as a list.
  // Note: head/last relies on the extracted Map preserving insertion
  // order, which holds for small maps but is fragile in general.
  val values = parse(c._2).extract[Map[String, Double]].values.toList
  (c._1, values.head, values.last)
}

// Exiting paste mode, now interpreting.

scala> transformedRdd.collect.foreach(println)
(2020,1064.3667,14534.2)
(2020,1064.3667,145.2)
(2020,1064.3667,15576.2)
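
If you'd rather not depend on the order of the extracted map's values, you can select each field by name with json4s' \ operator instead. A minimal sketch reusing the imports and rdd above (byKey is just an illustrative name; it assumes the same two keys, COL_A and col_B):

val byKey = rdd.map { case (year, json) =>
  implicit val formats = DefaultFormats
  val parsed = parse(json)
  // Select each field by name rather than relying on map ordering
  (year, (parsed \ "COL_A").extract[Double], (parsed \ "col_B").extract[Double])
}

byKey.collect.foreach(println) should then print the same (2020,1064.3667,...) tuples regardless of the key order in the JSON.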