org.apache.spark.SparkException: Task not serializable (Java)


I am trying to insert the results into MySQL through foreachPartition, but I get the error org.apache.spark.SparkException: Task not serializable.

public class Insert implements Serializable {

    transient static JavaSparkContext spc;

    public static void main(String gg[])
    {
        Map<String, String> options = new HashMap<String, String>();
        options.put("url", "jdbc:mysql://localhost:3306/testing?user=root&password=pwd");
        options.put("dbtable", "rtl");

        SparkConf ss = new SparkConf().setAppName("insert").setMaster("local");
        spc = new JavaSparkContext(ss);

        JavaRDD<String> rbm = spc.textFile(path);
        // DataFrame jdbcDF = sqlContext.jdbc(options.get("url"), options.get("dbtable"));
        // System.out.println("Data------------------->" + jdbcDF.toJSON().first());

        JavaRDD<String> file = rbm.flatMap(new FlatMapFunction<String, String>() {
            NotSerializableException nn = new NotSerializableException();

            public Iterable<String> call(String x) {
                return Arrays.asList(x.split("  ")[0]);
            }
        });

        try {
            file.foreachPartition(new VoidFunction<Iterator<String>>() {
                Connection conn = (Connection) DriverManager.getConnection(
                        "jdbc:mysql://localhost/testing", "root", "amd@123");
                PreparedStatement del = (PreparedStatement) conn.prepareStatement(
                        "INSERT INTO rtl (rtl_s) VALUES (?) ");
                NotSerializableException nn = new NotSerializableException();

                public void call(Iterator<String> x) throws Exception {
                    while (x.hasNext()) {
                        String y = x.toString();
                        del.setString(1, y);
                        del.executeUpdate();
                    }
                }
            });
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
I am getting this error:

6/09/20 12:37:58 INFO SparkContext: Created broadcast 0 from textFile at Insert.java:41
org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:304)
    at org.apache.spark.util.ClosureCleaner$.org$apache$spark$util$ClosureCleaner$$clean(ClosureCleaner.scala:294)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:122)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:2055)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:919)
    at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1.apply(RDD.scala:918)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
    at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
    at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
    at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:918)
    at org.apache.spark.api.java.JavaRDDLike$class.foreachPartition(JavaRDDLike.scala:225)
    at org.apache.spark.api.java.AbstractJavaRDDLike.foreachPartition(JavaRDDLike.scala:46)
    at final_file.Insert.main(Insert.java:59)
Caused by: java.io.NotSerializableException: java.lang.Object
Serialization stack:
    - object not serializable (class: java.lang.Object, value: java.lang.Object@4395342)
    - writeObject data (class: java.util.HashMap)
    - object (class java.util.HashMap, {UTF-8=java.lang.Object@4395342, WINDOWS-1252=com.mysql.jdbc.SingleByteCharsetConverter@72ffabab, US-ASCII=com.mysql.jdbc.SingleByteCharsetConverter@6f5fa288})
    - field (class: com.mysql.jdbc.ConnectionImpl, name: charsetConverterMap, type: interface java.util.Map)
    - object (class com.mysql.jdbc.JDBC4Connection, com.mysql.jdbc.JDBC4Connection@6761e52a)
    - field (class: final_file.Insert$2, name: conn, type: interface com.mysql.jdbc.Connection)
    - object (class final_file.Insert$2, final_file.Insert$2@45436e66)
    - field (class: org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1, name: f$12, type: interface org.apache.spark.api.java.function.VoidFunction)
    - object (class org.apache.spark.api.java.JavaRDDLike$$anonfun$foreachPartition$1, <function1>)
    at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:47)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:101)
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:301)
    ... 12 more

I get the above error while trying to write the results to MySQL.

When you use one of Spark's operations (map, flatMap, ...), Spark tries to serialize every function, method, and field that the operation uses.

A method or field cannot be serialized on its own, so the whole class that the method or field belongs to gets serialized instead.

If that class does not implement java.io.Serializable, this exception is thrown. You can find the object that triggered the NotSerializableException by reading the serialization path in the stack trace.

In your case, look at the following:

Caused by: java.io.NotSerializableException: java.lang.Object
Serialization stack:
    - object not serializable (class: java.lang.Object, value: java.lang.Object@4395342)
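
The path above shows that the conn field of the anonymous VoidFunction pulls the whole MySQL connection object into the serialized closure. Below is a minimal sketch of one way to restructure the write so that nothing non-serializable is captured: the Connection and PreparedStatement are opened inside call(), on the executor. The class name InsertFixed, the writePartitions helper, and the hard-coded credentials are placeholders for illustration, not taken from the original code.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.Iterator;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.VoidFunction;

public class InsertFixed {

    // Opens the JDBC resources inside call(), on the executor, so the closure
    // that Spark serializes contains no Connection or PreparedStatement fields.
    static void writePartitions(JavaRDD<String> rows) {
        rows.foreachPartition(new VoidFunction<Iterator<String>>() {
            public void call(Iterator<String> it) throws Exception {
                Connection conn = DriverManager.getConnection(
                        "jdbc:mysql://localhost:3306/testing", "root", "pwd");
                PreparedStatement stmt = conn.prepareStatement(
                        "INSERT INTO rtl (rtl_s) VALUES (?)");
                try {
                    while (it.hasNext()) {
                        // it.next() reads each record; the original x.toString()
                        // would insert the iterator object itself.
                        stmt.setString(1, it.next());
                        stmt.executeUpdate();
                    }
                } finally {
                    stmt.close();
                    conn.close();
                }
            }
        });
    }
}

Opening one connection per partition rather than per record keeps the JDBC overhead low while still leaving all driver-side state out of the closure.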

What does DriverManager give you here? The connection it returns does not appear to be serializable. In fact it holds the MySQL connection properties: the username, the password, and the database name.
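
If a DataFrame is an option, Spark's built-in JDBC writer avoids hand-written JDBC closures altogether. This is only a sketch, assuming the Spark 1.6-style SQLContext API that the stack trace suggests; the one-column schema, the table name rtl, and the credentials mirror the question and may need adjusting.

import java.util.Properties;

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

public class JdbcWriteSketch {

    // Wraps each line in a Row, attaches a one-column schema, and lets
    // the DataFrame writer perform the JDBC inserts on the executors.
    static void write(SQLContext sqlContext, JavaRDD<String> lines) {
        JavaRDD<Row> rows = lines.map(new Function<String, Row>() {
            public Row call(String s) {
                return RowFactory.create(s);
            }
        });

        StructType schema = DataTypes.createStructType(new StructField[] {
                DataTypes.createStructField("rtl_s", DataTypes.StringType, true)
        });

        DataFrame df = sqlContext.createDataFrame(rows, schema);

        Properties props = new Properties();
        props.setProperty("user", "root");
        props.setProperty("password", "pwd");

        df.write().mode(SaveMode.Append)
          .jdbc("jdbc:mysql://localhost:3306/testing", "rtl", props);
    }
}

SaveMode.Append adds rows to the existing rtl table instead of failing when the table already exists.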