Apache Spark: Task not serializable with the Hadoop MongoDB connector Enron example
I am trying to run the EnronMail example of the Hadoop MongoDB connector for Spark, using the Java code example from GitHub. I adjusted the server name as needed and added a username and password. The error message I get is the following:
Exception in thread "main" org.apache.spark.SparkException: Task not serializable
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:304)
at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294)
at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122)
at org.apache.spark.SparkContext.clean(SparkContext.scala:2066)
at org.apache.spark.rdd.RDD$$anonfun$flatMap$1.apply(RDD.scala:333)
at org.apache.spark.rdd.RDD$$anonfun$flatMap$1.apply(RDD.scala:332)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.flatMap(RDD.scala:332)
at org.apache.spark.api.java.JavaRDDLike$class.flatMap(JavaRDDLike.scala:130)
at org.apache.spark.api.java.AbstractJavaRDDLike.flatMap(JavaRDDLike.scala:46)
at Enron.run(Enron.java:43)
at Enron.main(Enron.java:104)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:731)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:181)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:206)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:121)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.io.NotSerializableException: Enron
Serialization stack:
- object not serializable (class: Enron, value: Enron@62b09715)
- field (class: Enron$1, name: this$0, type: class Enron)
- object (class Enron$1, Enron$1@ee8e7ff)
- field (class: org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1, name: f$3, type: interface org.apache.spark.api.java.function.FlatMapFunction)
- object (class org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1, <function1>)
at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:101)
at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:301)
... 22 more
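As far as I can tell from the Caused by section, the anonymous FlatMapFunction (Enron$1) keeps a reference to the enclosing Enron instance through its this$0 field, and Enron itself does not implement Serializable, so the whole closure fails to serialize. A simplified sketch of that pattern (an assumption about the shape of the sample code, not the exact GitHub source) looks like this:

import java.util.ArrayList;

import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.bson.BSONObject;

import scala.Tuple2;

// Sketch only: Enron does not implement Serializable, and the anonymous
// FlatMapFunction below is an inner class of Enron, so it captures the
// Enron instance via this$0 and drags it into the serialized closure.
public class Enron {
    public void run(JavaPairRDD<Object, BSONObject> mongoRDD) {
        JavaRDD<String> pairs = mongoRDD.flatMap(
                new FlatMapFunction<Tuple2<Object, BSONObject>, String>() {
                    @Override
                    public Iterable<String> call(final Tuple2<Object, BSONObject> t) throws Exception {
                        // ... build "from|to" strings from the message headers ...
                        return new ArrayList<String>();
                    }
                });
    }
}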
I then created a new class for the FlatMapFunction and extended the Enron class with it. This does not solve the problem either. Is there a way to fix this?
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

import org.apache.spark.api.java.function.FlatMapFunction;
import org.bson.BSONObject;

import scala.Tuple2;

class FlatMapFunctionSer implements Serializable {
    static FlatMapFunction<Tuple2<Object, BSONObject>, String> flatFunc =
            new FlatMapFunction<Tuple2<Object, BSONObject>, String>() {
                @Override
                public Iterable<String> call(final Tuple2<Object, BSONObject> t) throws Exception {
                    BSONObject header = (BSONObject) t._2().get("headers");
                    String to = (String) header.get("To");
                    String from = (String) header.get("From");
                    // each tuple in the set is an individual from|to pair
                    //JavaPairRDD<String, Integer> tuples = new JavaPairRDD<String, Integer>();
                    List<String> tuples = new ArrayList<String>();
                    if (to != null && !to.isEmpty()) {
                        for (String recipient : to.split(",")) {
                            String s = recipient.trim();
                            if (s.length() > 0) {
                                tuples.add(from + "|" + s);
                            }
                        }
                    }
                    return tuples;
                }
            };
}
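For reference, this is roughly how I wire the static function into the flatMap call instead of the anonymous inner class (a simplified sketch; the class and variable names are placeholders, not the exact code from the GitHub example):

import org.apache.hadoop.conf.Configuration;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.bson.BSONObject;

import com.mongodb.hadoop.MongoInputFormat;

public class EnronRun {
    public static JavaRDD<String> fromToPairs(JavaSparkContext sc, Configuration mongodbConfig) {
        // mongodbConfig is expected to carry mongo.input.uri for the enron messages collection
        JavaPairRDD<Object, BSONObject> mongoRDD = sc.newAPIHadoopRDD(
                mongodbConfig, MongoInputFormat.class, Object.class, BSONObject.class);
        // The function is a static field of a Serializable holder class,
        // so nothing from the driver class is captured in the closure.
        return mongoRDD.flatMap(FlatMapFunctionSer.flatFunc);
    }
}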
The problem was solved by including mongo-hadoop-spark-2.0.2.jar in the call (see the spark-submit sketch after the pom below). The following pom can also be used:
<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>3.8.1</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.11</artifactId>
        <version>1.5.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>1.5.1</version>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.14</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.mongodb.mongo-hadoop/mongo-hadoop-core -->
    <dependency>
        <groupId>org.mongodb.mongo-hadoop</groupId>
        <artifactId>mongo-hadoop-core</artifactId>
        <version>1.4.1</version>
    </dependency>
    <!-- https://mvnrepository.com/artifact/org.mongodb/bson -->
    <dependency>
        <groupId>org.mongodb</groupId>
        <artifactId>bson</artifactId>
        <version>3.4.2</version>
    </dependency>
</dependencies>
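For reference, a minimal sketch of how the connector jar can be shipped with the job at submit time (the paths, master URL, and main class below are placeholders, not taken from the original setup):

# Sketch only: placeholder paths and master URL
spark-submit \
  --class Enron \
  --master local[*] \
  --jars /path/to/mongo-hadoop-spark-2.0.2.jar \
  /path/to/enron-example.jar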