Apache Spark throws a deserialization error when using the take method on an RDD (Scala)

Tags: scala, apache-spark, apache-spark-mllib

I am new to Spark, using Scala 2.12.8 and Spark 2.4.0. I am trying to use the random forest classifier in Spark MLlib. I can build and train the classifier, and the classifier can predict if I use the first() function on the resulting RDD. However, if I try to use the take(n) function, I get a rather large, ugly stack trace. Does anyone know what I am doing wrong? The error occurs on the `.take(3)` line. I realize that this is the first real action I am performing on the RDD, so if anyone can explain why it fails and how to fix it, I would be very grateful.

import org.apache.spark.mllib.regression.LabeledPoint
import org.apache.spark.mllib.tree.RandomForest
import org.apache.spark.mllib.tree.model.RandomForestModel
import org.apache.spark.mllib.util.MLUtils
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.SparkSession

object ItsABreeze {
  def main(args: Array[String]): Unit = {
    val spark: SparkSession = SparkSession
      .builder()
      .appName("test")
      .getOrCreate()

    // Load the data from a LibSVM-format file
    val data: RDD[LabeledPoint] = MLUtils.loadLibSVMFile(spark.sparkContext, "file.svm")

    // Split the data into training and test sets (30% held out for testing)
    val splits: Array[RDD[LabeledPoint]] = data.randomSplit(Array(0.7, 0.3))
    val (trainingData, testData) = (splits(0), splits(1))

    // Train a RandomForest model.
    // Empty categoricalFeaturesInfo indicates all features are continuous
    val numClasses = 4
    val categoricalFeaturesInfo = Map[Int, Int]()
    val numTrees = 3
    val featureSubsetStrategy = "auto"
    val impurity = "gini"
    val maxDepth = 5
    val maxBins = 32

    val model: RandomForestModel = RandomForest.trainClassifier(
      trainingData,
      numClasses,
      categoricalFeaturesInfo,
      numTrees,
      featureSubsetStrategy,
      impurity,
      maxDepth,
      maxBins
    )

    // Predict on the test set; the error occurs at .take(3)
    testData
      .map((point: LabeledPoint) => model.predict(point.features))
      .take(3)
      .foreach(println)

    spark.stop()
  }
}
The top of the stack trace looks like this:

java.io.IOException: unexpected exception type
    at java.io.ObjectStreamClass.throwMiscException(ObjectStreamClass.java:1736)
    at java.io.ObjectStreamClass.invokeReadResolve(ObjectStreamClass.java:1266)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2078)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:75)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:114)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:83)
    at org.apache.spark.scheduler.Task.run(Task.scala:121)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.reflect.InvocationTargetException
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at java.lang.invoke.SerializedLambda.readResolve(SerializedLambda.java:230)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at java.io.ObjectStreamClass.invokeReadResolve(ObjectStreamClass.java:1260)
    ... 25 more
Caused by: java.lang.BootstrapMethodError: java.lang.NoClassDefFoundError: scala/runtime/LambdaDeserialize
    at ItsABreeze$.$deserializeLambda$(ItsABreeze.scala)
    ... 35 more
Caused by: java.lang.NoClassDefFoundError: scala/runtime/LambdaDeserialize
    ... 36 more
Caused by: java.lang.ClassNotFoundException: scala.runtime.LambdaDeserialize
    at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)

The code I am trying to run is a slightly modified version of the code on this page (from the Spark Machine Learning Library documentation).

Both commenters on my original question were correct: I changed the Scala version I was using from 2.12.8 to 2.11.12, reverted Spark to 2.2.1, and the code ran as-is. In hindsight, the ClassNotFoundException for scala.runtime.LambdaDeserialize at the bottom of the trace is the giveaway: that class only exists in the Scala 2.12+ runtime, which strongly suggests the executors were running Spark on a Scala 2.11 library while my application was compiled against 2.12.
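For reference, a minimal build.sbt for the combination that ended up working would look something like the sketch below. The provided scope and artifact layout are my assumptions about a typical sbt project, not taken from the original build:

// build.sbt -- hypothetical minimal build matching the working versions
scalaVersion := "2.11.12"

libraryDependencies ++= Seq(
  // %% picks the _2.11 artifacts to match scalaVersion above.
  // "provided" assumes the cluster supplies Spark at runtime;
  // drop it if you run the job locally with sbt run.
  "org.apache.spark" %% "spark-core"  % "2.2.1" % "provided",
  "org.apache.spark" %% "spark-mllib" % "2.2.1" % "provided"
)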


A follow-up question for anyone qualified to answer it: Spark 2.4.0 claims new experimental support for Scala 2.12.x. Are there many known issues with the 2.12.x support?
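If you do want to try the experimental Scala 2.12 support instead of downgrading, the dependency side would presumably look like the sketch below. The easy-to-miss part is that the Spark distribution that actually runs the job (the one spark-submit comes from) must itself be a Scala 2.12 build of 2.4.0; if the executors run on a 2.11 build, scala.runtime.LambdaDeserialize is missing from their classpath and you get exactly the trace above. This is an untested sketch, not a verified configuration:

// build.sbt -- hypothetical configuration for the experimental 2.12 support
scalaVersion := "2.12.8"

libraryDependencies ++= Seq(
  // %% resolves to the _2.12 artifacts here; the cluster's Spark
  // must also be a Scala 2.12 build of 2.4.0 for this to work.
  "org.apache.spark" %% "spark-core"  % "2.4.0" % "provided",
  "org.apache.spark" %% "spark-mllib" % "2.4.0" % "provided"
)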

This is most likely an issue with the Scala version. See the following possible duplicate.

@DemetriKots is correct, this was a versioning issue. I scrapped the project and rebuilt it with Scala 2.11.12 and the matching resolvers, and the code ran as-is. Thanks for the help, and apologies for the slow response; I think this was the first question I ever posted.
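As an aside for anyone debugging the same mismatch: a quick sanity check is to compare the Scala runtime on the driver with the one on an executor. From spark-shell (whose REPL always matches the distribution's own Scala build), something like this sketch should work:

// Compare the Scala runtime on the driver and on an executor
val driverScala = scala.util.Properties.versionString
val executorScala = spark.sparkContext
  .parallelize(Seq(0), 1)                         // one-element RDD, a single task
  .map(_ => scala.util.Properties.versionString)  // evaluated on an executor JVM
  .first()
println(s"Driver:   $driverScala")
println(s"Executor: $executorScala")

If either of these differs from the scalaVersion your application was compiled with, the deserialization error above is expected.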