Apache Spark: how to use Spark Datasets with Thrift

Tags: apache-spark, thrift, apache-spark-sql, shapeless

My data format is defined with Apache Thrift, and the code is generated by Scrooge. I store the data in Spark using Parquet, much as explained in this post.

I can read the data back into a DataFrame quite easily, just by doing:

val df = sqlContext.read.parquet("/path/to/data")
And I can read it into an RDD with a bit more gymnastics:

import org.apache.parquet.hadoop.ParquetInputFormat
import org.apache.parquet.hadoop.thrift.{ParquetThriftInputFormat, ThriftReadSupport}
import org.apache.spark.rdd.{NewHadoopRDD, RDD}
import org.apache.thrift.TBase
import scala.reflect.ClassTag

// `sc` is the SparkContext and `jobConf` a Hadoop JobConf, both assumed in scope.
def loadRdd[V <: TBase[_, _]](inputDirectory: String, vClass: Class[V]): RDD[V] = {
  implicit val ctagV: ClassTag[V] = ClassTag(vClass)
  ParquetInputFormat.setReadSupportClass(jobConf, classOf[ThriftReadSupport[V]])
  ParquetThriftInputFormat.setThriftClass(jobConf, vClass)
  val rdd = sc.newAPIHadoopFile(
    inputDirectory, classOf[ParquetThriftInputFormat[V]], classOf[Void], vClass, jobConf)
  rdd.asInstanceOf[NewHadoopRDD[Void, V]].values
}
loadRdd("/path/to/data", classOf[MyThriftClass])

It should be possible to work around the missing encoder by passing Encoders.bean for the generated class explicitly, in place of the implicit. For example:

df.as[MyJavaThriftClass](Encoders.bean(classOf[MyJavaThriftClass]))
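A fuller sketch of that suggestion, assuming MyJavaThriftClass is the bean-style Java class emitted by the Thrift compiler (Encoders.bean relies on its getters and setters):

import org.apache.spark.sql.{Dataset, Encoder, Encoders}

// Supply the bean encoder explicitly, since no implicit can be derived for this class.
implicit val thriftEncoder: Encoder[MyJavaThriftClass] =
  Encoders.bean(classOf[MyJavaThriftClass])

val ds: Dataset[MyJavaThriftClass] = df.as[MyJavaThriftClass]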

您看过吗?您好@MilesSabin,这看起来很有希望,但通过查看代码,我无法确定如果没有case类,它是否可以工作。事实上,它似乎是唯一的公共api RichDataSet,已经开始使用Dataset。我将ping gitter频道,看看作者是否有任何好的建议。您知道吗明白了吗?
val df = sqlContext.read.parquet("/path/to/data")

df.as[MyJavaThriftClass]

<console>:25: error: Unable to find encoder for type stored in a Dataset.  Primitive types (Int, String, etc) and Product types (case classes) are supported by importing sqlContext.implicits._  Support for serializing other types will be added in future releases.
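As the message says, Product types do work once sqlContext.implicits._ is imported; a quick contrast sketch (Point is a made-up case class):

import sqlContext.implicits._

// Encoders for case classes are derived automatically.
case class Point(x: Int, y: Int)
val points = sqlContext.createDataset(Seq(Point(1, 2), Point(3, 4)))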

df.as[MyScalaThriftClass]

scala.ScalaReflectionException: <none> is not a term
  at scala.reflect.api.Symbols$SymbolApi$class.asTerm(Symbols.scala:199)
  at scala.reflect.internal.Symbols$SymbolContextApiImpl.asTerm(Symbols.scala:84)
  at org.apache.spark.sql.catalyst.ScalaReflection$.org$apache$spark$sql$catalyst$ScalaReflection$$extractorFor(ScalaReflection.scala:492)
  at org.apache.spark.sql.catalyst.ScalaReflection$.extractorsFor(ScalaReflection.scala:394)
  at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$.apply(ExpressionEncoder.scala:54)
  at org.apache.spark.sql.SQLImplicits.newProductEncoder(SQLImplicits.scala:41)
  ... 48 elided


df.as[MyScalaThriftClass.Immutable]

java.lang.UnsupportedOperationException: No Encoder found for org.apache.thrift.protocol.TField
- field (class: "org.apache.thrift.protocol.TField", name: "field")
- array element class: "com.twitter.scrooge.TFieldBlob"
- field (class: "scala.collection.immutable.Map", name: "_passthroughFields")
- root class: "com.worldsense.scalathrift.ThriftRange.Immutable"
  at org.apache.spark.sql.catalyst.ScalaReflection$.org$apache$spark$sql$catalyst$ScalaReflection$$extractorFor(ScalaReflection.scala:597)
  at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$org$apache$spark$sql$catalyst$ScalaReflection$$extractorFor$1.apply(ScalaReflection.scala:509)
  at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$org$apache$spark$sql$catalyst$ScalaReflection$$extractorFor$1.apply(ScalaReflection.scala:502)
  at scala.collection.immutable.List.flatMap(List.scala:327)
  at org.apache.spark.sql.catalyst.ScalaReflection$.org$apache$spark$sql$catalyst$ScalaReflection$$extractorFor(ScalaReflection.scala:502)
  at org.apache.spark.sql.catalyst.ScalaReflection$.toCatalystArray$1(ScalaReflection.scala:419)
  at org.apache.spark.sql.catalyst.ScalaReflection$.org$apache$spark$sql$catalyst$ScalaReflection$$extractorFor(ScalaReflection.scala:537)
  at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$org$apache$spark$sql$catalyst$ScalaReflection$$extractorFor$1.apply(ScalaReflection.scala:509)
  at org.apache.spark.sql.catalyst.ScalaReflection$$anonfun$org$apache$spark$sql$catalyst$ScalaReflection$$extractorFor$1.apply(ScalaReflection.scala:502)
  at scala.collection.immutable.List.flatMap(List.scala:327)
  at org.apache.spark.sql.catalyst.ScalaReflection$.org$apache$spark$sql$catalyst$ScalaReflection$$extractorFor(ScalaReflection.scala:502)
  at org.apache.spark.sql.catalyst.ScalaReflection$.extractorsFor(ScalaReflection.scala:394)
  at org.apache.spark.sql.catalyst.encoders.ExpressionEncoder$.apply(ExpressionEncoder.scala:54)
  at org.apache.spark.sql.SQLImplicits.newProductEncoder(SQLImplicits.scala:41)
  ... 48 elided
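One workaround, sketched here under assumptions: map each Thrift struct to a plain case class before building the Dataset, since Spark can derive encoders for Product types. MyRow and the getters below are hypothetical stand-ins for the real generated accessors:

import sqlContext.implicits._

// Hypothetical case-class mirror of the Thrift struct.
case class MyRow(id: Long, name: String)

val ds = loadRdd("/path/to/data", classOf[MyThriftClass])
  .map(v => MyRow(v.getId, v.getName)) // hypothetical getters
  .toDS()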