How to convert JSON to RDD[json]


I want to write JSON objects in Spark, but when I try to convert them to an RDD using sc.parallelize, they get turned back into strings.

import scala.util.parsing.json._
import org.apache.spark.sql._
import org.apache.spark.sql.types._
import org.apache.spark.sql.functions.lit
import org.json4s._
import org.json4s.JsonDSL._
import org.json4s.jackson.JsonMethods._

val df = Seq((2012, 8, "Batman", 9.8), 
             (2012, 9, "Batman", 10.0), 
             (2012, 8, "Hero", 8.7),
             (2012, 10, "Hero", 5.7), 
             (2012, 2, "Robot", 5.5), 
             (2011, 7, "Git", 2.0),
             (2010, 1, "Dom", 2.0),
             (2019, 3, "Sri", 2.0)).toDF("year", "month", "title", "rating")

case class Rating(year:Int, month:Int, title:String, rating:Double)


import scala.collection.JavaConversions._
val ratingList = df.as[Rating].collectAsList  // collects the whole Dataset to the driver as a java.util.List

import java.io._
val output = for (c <- ratingList) yield
{
      val json = ("record" ->
              ("year" -> c.year) ~
              ("month" -> c.month) ~
              ("title" -> c.title) ~
              ("rating" -> c.rating))
      compact(render(json))
}

output.foreach(println)    
The output is:

{"test":{"json":"{\"record\":{\"year\":2012,\"month\":8,\"title\":\"Batman\",\"rating\":9.8}}"}}

When you call compact, you create a String from the rendered JSON.

That means your output is a collection of strings, so when you parallelize it you get an RDD[String].

You probably want to return the result of the render function instead, to get a collection of JSON objects, but you will need to check the json4s documentation.
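
To make the distinction concrete, here is a minimal sketch of how the types line up in json4s (same imports as in the question):

import org.json4s._
import org.json4s.JsonDSL._
import org.json4s.jackson.JsonMethods._

val json: JValue = render(("year" -> 2012) ~ ("title" -> "Batman")) // render returns a JSON AST value (JValue)
val str: String  = compact(json)                                    // compact turns it into a plain String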

Of course, Spark does not know how to turn JSON objects from a third-party library into a DataFrame with the toDF() function. Maybe you can do something like this instead:

val anotherPeopleRDD = sc.parallelize(
  """{"name":"Yin","address":{"city":"Columbus","state":"Ohio"}}""" :: Nil)
val anotherPeople = sqlContext.read.json(anotherPeopleRDD)
So basically: use an RDD[String] and then read it as JSON.
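
Applied to the ratings example, the same pattern would look roughly like this (a sketch assuming output is the collection of JSON strings built above):

val jsonRDD = sc.parallelize(output)
val ratingsDF = sqlContext.read.json(jsonRDD)
ratingsDF.printSchema() // the schema is inferred from the JSON strings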

And by the way:

Why do you do this first:

val ratingList = df.as[Rating].collectAsList
val output = for (c <- ratingList) yield
{
      val json = ("record" ->
              ("year" -> c.year) ~
              ("month" -> c.month) ~
              ("title" -> c.title) ~
              ("rating" -> c.rating))
      compact(render(json))
}
Why not process all the data on the cluster like this instead:

df.as[Rating].map{c =>
  val json = ("record" ->
    ("year" -> c.year) ~
      ("month" -> c.month) ~
      ("title" -> c.title) ~
      ("rating" -> c.rating))
  compact(render(json))
}

That would be more efficient.
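
As an aside, since the map above produces a Dataset[String] of JSON strings, the result can even be written straight to text files without collecting anything to the driver. A sketch (the output path is hypothetical):

df.as[Rating].map { c =>
  val json = ("record" ->
    ("year" -> c.year) ~
      ("month" -> c.month) ~
      ("title" -> c.title) ~
      ("rating" -> c.rating))
  compact(render(json))
}.write.text("/tmp/ratings-json") // hypothetical path; writes one JSON string per line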

Thanks to Vladislav, I solved the problem: I can now parallelize my output, but I could not write the data to a file with Spark using toDF.

I have extended my answer. An upvote and accept would be appreciated, by the way :)

Thank you so much, you made my day.

The final solution:
val ratingList = df.as[Rating].collectAsList
val output = for (c <- ratingList) yield
{
      val json = ("record" ->
              ("year" -> c.year) ~
              ("month" -> c.month) ~
              ("title" -> c.title) ~
              ("rating" -> c.rating))
      compact(render(json))
}
val outputDF = sc.parallelize(output).toDF("json")
And the fully distributed alternative:

df.as[Rating].map{c =>
  val json = ("record" ->
    ("year" -> c.year) ~
      ("month" -> c.month) ~
      ("title" -> c.title) ~
      ("rating" -> c.rating))
  compact(render(json))
}
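
If a DataFrame is what you are after, the distributed version can also be finished by chaining toDF directly, avoiding the collectAsList/parallelize round trip entirely. A sketch (the column name "json" is chosen to mirror the call above):

val outputDF = df.as[Rating].map { c =>
  val json = ("record" ->
    ("year" -> c.year) ~
      ("month" -> c.month) ~
      ("title" -> c.title) ~
      ("rating" -> c.rating))
  compact(render(json))
}.toDF("json")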