Java SparkException: local class incompatible

Tags: Java, Hadoop, Apache Spark, Cloudera, Cloudera Manager

I am trying to submit a Spark job from a client machine to a Cloudera cluster. The cluster runs CDH-5.3.2, which ships Spark 1.2.0 and Hadoop 2.5.0. To test the cluster, we submitted the word-count example from the Spark website. We can submit the Spark job written in Java successfully, but we cannot write the result to a file on HDFS. We get the following error:

20/06/25 09:38:16 INFO DAGScheduler: Job 0 failed: saveAsTextFile at SimpleWordCount.java:36, took 5.450531 s
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task 1 in stage 1.0 failed 4 times, most recent failure: Lost task 1.3 in stage 1.0 (TID 8, obelix2): java.io.InvalidClassException: org.apache.spark.rdd.PairRDDFunctions; local class incompatible: stream classdesc serialVersionUID = 8789839749593513237, local class serialVersionUID = -4145741279224749316
    at java.io.ObjectStreamClass.initNonProxy(ObjectStreamClass.java:617)
    at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1622)
    at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1517)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1771)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1990)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1915)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1798)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1350)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:370)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:62)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:87)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:57)
    at org.apache.spark.scheduler.Task.run(Task.scala:56)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:196)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

Driver stacktrace:
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1214)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1203)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1202)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1202)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:696)
    at scala.Option.foreach(Option.scala:236)
    at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:696)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1420)
    at akka.actor.Actor$class.aroundReceive(Actor.scala:465)
    at org.apache.spark.scheduler.DAGSchedulerEventProcessActor.aroundReceive(DAGScheduler.scala:1375)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:516)
    at akka.actor.ActorCell.invoke(ActorCell.scala:487)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:238)
    at akka.dispatch.Mailbox.run(Mailbox.scala:220)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Here is our code sample:

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;

import scala.Tuple2;

public class SimpleWordCount {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("Simple Application");
        JavaSparkContext spark = new JavaSparkContext(conf);
        JavaRDD<String> textFile = spark.textFile("hdfs://obelix1:8022/user/U079681/deneme/example.txt");
        JavaRDD<String> words = textFile
                .flatMap(new FlatMapFunction<String, String>() {
                    public Iterable<String> call(String s) {
                        return Arrays.asList(s.split(" "));
                    }
                });
        JavaPairRDD<String, Integer> pairs = words
                .mapToPair(new PairFunction<String, String, Integer>() {
                    public Tuple2<String, Integer> call(String s) {
                        return new Tuple2<String, Integer>(s, 1);
                    }
                });
        JavaPairRDD<String, Integer> counts = pairs
                .reduceByKey(new Function2<Integer, Integer, Integer>() {
                    public Integer call(Integer a, Integer b) {
                        return a + b;
                    }
                });
//      System.out.println(counts.collect());
        counts.saveAsTextFile("hdfs://obelix1:8022/user/U079681/deneme/result");
    }
}

And the Maven dependencies are:

        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>2.10.5</version>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.10</artifactId>
            <version>1.2.0-cdh5.3.2</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.5.0-mr1-cdh5.3.2</version>
        </dependency>

I have no idea where the error comes from, because as far as I can tell the application's Spark version and Cloudera's Spark version are the same. Any ideas are very welcome.


Note: we can see the result when it is written to the console.

As you state in your note, the application works fine when the result is printed to the console, but the error appears when you try to save the result to the underlying HDFS.

If I am not mistaken, that means:

  • when the result is written to the console, Spark probably does not touch the underlying Hadoop infrastructure

  • when the result is saved to HDFS, Spark does use the underlying Hadoop infrastructure

These two observations make me think a Hadoop version mismatch is happening somewhere. The Spark versions may well match between the application node and the cluster nodes, but the Hadoop versions in use may differ.

You should look at the libraries shipped with CDH-5.3.2 and check whether they match the ones your application is built against.
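
To make that comparison concrete, here is a minimal diagnostic sketch (my own illustration, not code from the original post). It prints the serialVersionUID and the jar location of org.apache.spark.rdd.PairRDDFunctions, the class named in the stack trace, once on the driver and once inside an executor task; if the two sides report different values, the client and the cluster are running different Spark builds. The class name VersionCheck and the tiny two-element RDD are placeholders, and if the builds really are incompatible even this small job may die with the same InvalidClassException, which by itself confirms the mismatch.

import java.io.ObjectStreamClass;
import java.security.CodeSource;
import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;

public class VersionCheck {
    // Reports the serialVersionUID of PairRDDFunctions in the current JVM
    // and the jar it was loaded from.
    static String describeSparkClass() throws ClassNotFoundException {
        Class<?> cls = Class.forName("org.apache.spark.rdd.PairRDDFunctions");
        long uid = ObjectStreamClass.lookupAny(cls).getSerialVersionUID();
        CodeSource source = cls.getProtectionDomain().getCodeSource();
        return "serialVersionUID=" + uid + ", loaded from "
                + (source == null ? "unknown" : source.getLocation());
    }

    public static void main(String[] args) throws Exception {
        JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().setAppName("Spark build check"));

        // Driver side: runs with the client's jars.
        System.out.println("driver:   " + describeSparkClass());

        // Executor side: one trivial task per partition, each reporting the
        // Spark build visible on its own classpath.
        for (String report : sc.parallelize(Arrays.asList(1, 2), 2)
                .map(new Function<Integer, String>() {
                    public String call(Integer ignored) throws Exception {
                        return describeSparkClass();
                    }
                }).collect()) {
            System.out.println("executor: " + report);
        }

        sc.stop();
    }
}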

Also, take a look at this question:


After spending a few hours on it, we solved the problem. The root cause was that we had downloaded Apache Spark from the official website and built it ourselves, so some of the JARs were not compatible with the Cloudera distribution. Today we finally learned that Cloudera's Spark distribution is available on GitHub (), and after building it we were able to save the job result to HDFS.
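
For anyone who wants to double-check the fix on the client machine, here is a tiny offline check of my own (not part of the original answer); it only needs the application's classpath, no cluster. If I read the stack trace above correctly, the "local class serialVersionUID" there (-4145741279224749316) belongs to the CDH build running on the executors, while the "stream classdesc serialVersionUID" (8789839749593513237) is what the self-built Apache jars on the client were writing, so after switching to the Cloudera build this check should print the former.

import java.io.ObjectStreamClass;
import java.security.CodeSource;

import org.apache.spark.rdd.PairRDDFunctions;

public class LocalUidCheck {
    public static void main(String[] args) {
        Class<?> cls = PairRDDFunctions.class;
        // serialVersionUID of the PairRDDFunctions class on the local classpath.
        System.out.println("local serialVersionUID = "
                + ObjectStreamClass.lookupAny(cls).getSerialVersionUID());
        // Which jar it came from, to spot a stray Apache build on the classpath.
        CodeSource source = cls.getProtectionDomain().getCodeSource();
        System.out.println("loaded from "
                + (source == null ? "unknown" : source.getLocation()));
    }
}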

Thanks for sharing. I am facing the same problem. It would be so much easier if the client and the server negotiated a version and, on a mismatch, disconnected gracefully and printed a warning.