Java Spark - serialization problem when parsing a file with OpenCSV


I am using Spark to process CSV files. I recently replaced my manual CSV line parsing with opencsv. Here is the simplified code:

import java.util.List;

import com.opencsv.CSVParser;
import com.opencsv.CSVParserBuilder;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class Main {

    public static void main(String[] args) {

        CSVParser parser = new CSVParserBuilder()
                .withSeparator(';')
                .build();

        SparkConf cfg = new SparkConf()
                .setMaster("local[4]")
                .setAppName("Testapp");
        JavaSparkContext sc = new JavaSparkContext(cfg);

        JavaRDD<String> textFile = sc.textFile("testdata.csv", 1);

        List<String> categories = textFile
                .map(line -> parser.parseLine(line)[10])
                .collect();
        System.out.println(categories);
    }
}
Unfortunately, this code does not work; it throws an exception:

Caused by: java.io.NotSerializableException: com.opencsv.CSVParser
Serialization stack:
    - object not serializable (class: com.opencsv.CSVParser, value: com.opencsv.CSVParser@1290c49)
    - element of array (index: 0)
    - array (class [Ljava.lang.Object;, size 1)
    - field (class: java.lang.invoke.SerializedLambda, name: capturedArgs, type: class [Ljava.lang.Object;)
    - object (class java.lang.invoke.SerializedLambda, SerializedLambda[capturingClass=class test.Main, functionalInterfaceMethod=org/apache/spark/api/java/function/Function.call:(Ljava/lang/Object;)Ljava/lang/Object;, implementation=invokeStatic test/Main.lambda$main$49bd2722$1:(Lcom/opencsv/CSVParser;Ljava/lang/String;)Ljava/lang/String;, instantiatedMethodType=(Ljava/lang/String;)Ljava/lang/String;, numCaptured=1])
    - writeReplace data (class: java.lang.invoke.SerializedLambda)
    - object (class test.Main$$Lambda$19/429639728, test.Main$$Lambda$19/429639728@72456279)
    - field (class: org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1, name: fun$1, type: interface org.apache.spark.api.java.function.Function)
    - object (class org.apache.spark.api.java.JavaPairRDD$$anonfun$toScalaFunction$1, <function1>)
    at org.apache.spark.serializer.SerializationDebugger$.improveException(SerializationDebugger.scala:40)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:46)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:100)
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:400)
    ... 12 more
Spark apparently tries to serialize the lambda expression, and the lambda somehow keeps a reference to parser, which leads to the error above.

The question is: is there a way to avoid this exception and still use a non-serializable library inside a lambda expression passed to Spark? I really don't want to implement my own CSV parser.

Spark supports CSV files out of the box:

import org.apache.spark.sql.Row;
import org.apache.spark.sql.Dataset;

Dataset<Row> df = spark.read().format("csv")
                      .option("sep", ";")
                      .option("header", "true") //or "false" if no headers
                      .load("filename.csv");
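Applied to this question, a minimal self-contained sketch of the Dataset approach might look like the following (assumptions: Spark 2.x, a hypothetical class name DatasetMain, no header row, so Spark auto-names the columns _c0, _c1, ... and index 10 from the original code becomes column _c10):

import java.util.List;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Encoders;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class DatasetMain {

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .master("local[4]")
                .appName("Testapp")
                .getOrCreate();

        // Spark's built-in CSV source does the parsing, so no external
        // parser object ever has to be captured by a closure.
        Dataset<Row> df = spark.read().format("csv")
                .option("sep", ";")
                .load("testdata.csv");

        // Without a header row the columns are named _c0, _c1, ...;
        // index 10 in the original code corresponds to column _c10.
        List<String> categories = df.select("_c10")
                .as(Encoders.STRING())
                .collectAsList();
        System.out.println(categories);
    }
}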
Edit (promoting a comment to the main answer):

If you really do need an RDD, you can get one from the Dataset with df.javaRDD(), although it is better to stick with the Dataset/DataFrame API (see the example above).
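For illustration, a minimal sketch of that conversion, reusing the hypothetical df and column name _c10 from the sketch above:

import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.sql.Row;

// df is the Dataset<Row> loaded with spark.read() in the sketch above;
// getString(10) reads the 11th field, i.e. column _c10.
JavaRDD<String> categoriesRdd = df.javaRDD()
        .map(row -> row.getString(10));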

I realized there is a very simple solution to my own problem: any use of an external library that causes serialization problems can be wrapped in a static method. The method parse hides the reference to parser, so the lambda no longer captures it. This approach is obviously not a perfect solution, but it works:

import java.io.IOException;
import java.util.List;

import com.opencsv.CSVParser;
import com.opencsv.CSVParserBuilder;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class Main {

    // A static field is not captured by the lambda below, so Spark never
    // tries to serialize the parser; it is created when the class is
    // loaded in each JVM.
    private static CSVParser parser = new CSVParserBuilder()
            .withSeparator(';')
            .build();

    public static void main(String[] args) {
        SparkConf cfg = new SparkConf()
                .setMaster("local[4]")
                .setAppName("Testapp");
        JavaSparkContext sc = new JavaSparkContext(cfg);

        JavaRDD<String> textFile = sc.textFile("testdata.csv", 1);

        List<String> categories = textFile
                .map(line -> parse(line)[0])
                .collect();
        System.out.println(categories);
    }

    static String[] parse(String line) throws IOException {
        return parser.parseLine(line);
    }
}
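Another common way to keep a non-serializable helper out of the closure is to build it inside mapPartitions, so it is instantiated on the executors and never captured by the driver-side closure. A minimal sketch under the same assumptions as the question (Spark 2.x, where the mapPartitions function returns an Iterator; field index 10 as in the original code):

import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import com.opencsv.CSVParser;
import com.opencsv.CSVParserBuilder;
import org.apache.spark.api.java.function.FlatMapFunction;

// ...inside main(), reusing the JavaRDD<String> textFile from the question:
FlatMapFunction<Iterator<String>, String> parseFn = lines -> {
    // Built once per partition on the executor, so the parser itself
    // never has to be serializable.
    CSVParser localParser = new CSVParserBuilder()
            .withSeparator(';')
            .build();
    List<String> result = new ArrayList<>();
    while (lines.hasNext()) {
        result.add(localParser.parseLine(lines.next())[10]);
    }
    return result.iterator();
};

List<String> categories = textFile.mapPartitions(parseFn).collect();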

Is there a similar solution in the RDD API? No, but you can get an RDD from the Dataset with df.javaRDD() (although the Dataset/DataFrame API is recommended). I changed my approach to use the built-in parsing. By the way, I found a different solution as well; see my answer.