Scala - value toDF is not a member of org.apache.spark.rdd.RDD


I have read about this issue in other SO posts, but I still don't know what I am doing wrong. In principle, adding these two lines:

val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._
should do the trick, but the error persists.

Here is my build.sbt:

name := "PickACustomer"

version := "1.0"

scalaVersion := "2.11.7"


libraryDependencies ++= Seq("com.databricks" %% "spark-avro" % "2.0.1",
"org.apache.spark" %% "spark-sql" % "1.6.0",
"org.apache.spark" %% "spark-core" % "1.6.0")
My Scala code is:

import scala.collection.mutable.Map
import scala.collection.immutable.Vector

import org.apache.spark.rdd.RDD
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.spark.sql._


    object Foo{

    def reshuffle_rdd(rawText: RDD[String]): RDD[Map[String, (Vector[(Double, Double, String)], Map[String, Double])]]  = {...}

    def do_prediction(shuffled:RDD[Map[String, (Vector[(Double, Double, String)], Map[String, Double])]], prediction:(Vector[(Double, Double, String)] => Map[String, Double]) ) : RDD[Map[String, Double]] = {...}

    def get_match_rate_from_results(results : RDD[Map[String, Double]]) : Map[String, Double]  = {...}


    def retrieve_duid(element: Map[String,(Vector[(Double, Double, String)], Map[String,Double])]): Double = {...}




    def main(args: Array[String]){
        val conf = new SparkConf().setAppName(this.getClass.getSimpleName)
        if (!conf.getOption("spark.master").isDefined) conf.setMaster("local")

        val sc = new SparkContext(conf)

        //This should do the trick
        val sqlContext = new org.apache.spark.sql.SQLContext(sc)
        import sqlContext.implicits._

        val PATH_FILE = "/mnt/fast_export_file_clean.csv"
        val rawText = sc.textFile(PATH_FILE)
        val shuffled = reshuffle_rdd(rawText)

        // PREDICT AS A FUNCTION OF THE LAST SEEN UID
        val results = do_prediction(shuffled.filter(x => retrieve_duid(x) > 1) , predict_as_last_uid)
        results.cache()

        case class Summary(ismatch: Double, t_to_last:Double, nflips:Double,d_uid: Double, truth:Double, guess:Double)

        val summary = results.map(x => Summary(x("match"), x("t_to_last"), x("nflips"), x("d_uid"), x("truth"), x("guess")))


        //PROBLEMATIC LINE
        val sum_df = summary.toDF()

    }
    }
I always get:

value toDF is not a member of org.apache.spark.rdd.RDD[Summary]


I'm a bit lost now. Any ideas?

Move your case class out of main:

object Foo {

  case class Summary(ismatch: Double, t_to_last:Double, nflips:Double,d_uid: Double, truth:Double, guess:Double)

  def main(args: Array[String]){
    ...
  }

}
Something about its scope prevents Spark from handling the automatic derivation of the schema for Summary. FYI, I actually get a different error from sbt:

No TypeTag available for Summary
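
For reference, a minimal, self-contained sketch of that layout, assuming the Spark 1.6 dependencies from the build.sbt above; the parallelized dummy rows are only stand-ins so that toDF() can be shown compiling and running:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object Foo {

  // Case class at object level, outside main, so the compiler can
  // materialize a TypeTag and Spark can derive the schema.
  case class Summary(ismatch: Double, t_to_last: Double, nflips: Double,
                     d_uid: Double, truth: Double, guess: Double)

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName(this.getClass.getSimpleName)
    if (conf.getOption("spark.master").isEmpty) conf.setMaster("local")
    val sc = new SparkContext(conf)

    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    // Dummy data in place of the real pipeline, just to exercise toDF().
    val summary = sc.parallelize(Seq(Summary(1.0, 10.0, 2.0, 3.0, 1.0, 1.0)))
    val sum_df = summary.toDF()
    sum_df.show()
  }
}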

Great, you saved my life.


Move your case class outside of main:

object Foo {

    case class Summary(ismatch: Double, t_to_last:Double, nflips:Double,d_uid: Double, truth:Double, guess:Double)

    def main(args: Array[String]){
...
    }
}

Move your case class outside the function body. Then use:
import spark.implicits._
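
A minimal sketch of what that looks like, assuming Spark 2.x where spark is a SparkSession (on the Spark 1.6 pinned in the build.sbt above, the equivalent import is sqlContext.implicits._):

import org.apache.spark.sql.SparkSession

object Foo {

  // Declared outside main so toDF() can find an encoder for it.
  case class Summary(ismatch: Double, t_to_last: Double, nflips: Double,
                     d_uid: Double, truth: Double, guess: Double)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("PickACustomer")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Dummy RDD standing in for the real results.
    val summary = spark.sparkContext.parallelize(Seq(Summary(1.0, 10.0, 2.0, 3.0, 1.0, 1.0)))
    val sum_df = summary.toDF()
    sum_df.show()
  }
}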

Can you at least give the types of your values and the definitions of the methods you use? Case classes declared in other objects can still cause serialization problems.

Actually, you could boil the code in main down to: val sc = new SparkContext(conf); val sqlContext = new SQLContext(sc); import sqlContext.implicits._; val summary: RDD[Summary] = ???; case class Summary(ismatch: Double, t_to_last: Double, nflips: Double, d_uid: Double, truth: Double, guess: Double); val sum_df = summary.toDF(). Unfortunately, I cannot reproduce the error that way... The whole code is not needed; sometimes declaring the values to be used without actually computing them is enough, but in this case their types matter. Just try running sbt update.

In case this helps someone: moving the class outside of the object causes a similar problem. The class has to be inside Foo but outside main.