
Deserialization problem with a Scala class under spark-submit (Java, Scala, spark-submit)


I'm working on a project that combines Scala and Java. I have a Scala class whose abbreviated structure is as follows:

 case class Dl(name: String, length: Int) extends Serializable

 class DlStruct private (xs: List[Dl]) extends Serializable {
   def this() = this(Nil)

   // `Dl`/`DlStruct` here stand in for `DataLayout`/`RowSchema` so the
   // abbreviated names are used consistently throughout the snippet.
   private def +=(dl: Dl): DlStruct =
     new DlStruct(xs :+ dl)

   def appendDl(fieldName: String, fieldLength: Int): DlStruct =
     this += Dl(fieldName, fieldLength)
 }
The class above is called from a Java object to populate a DlStruct; once that is done, I write the object out as a serialized file.
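The write side isn't shown in the question; a minimal sketch of what it might look like with `ObjectOutputStream` is below. The `Holder` class and the temp-file path are placeholders for illustration, not names from the original project:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializeDemo {
    // Stand-in for the Java object that wraps the Scala DlStruct in the question.
    static class Holder implements Serializable {
        private static final long serialVersionUID = 1L;
        final String name;
        Holder(String name) { this.name = name; }
    }

    // Write the object graph to a file; every class reachable from it must be Serializable.
    static void writeObject(Serializable obj, String path) throws IOException {
        try (ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream(path))) {
            out.writeObject(obj);
        }
    }

    // Read the object graph back from the file.
    static Object readObject(String path) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new FileInputStream(path))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("holder", ".ser");
        writeObject(new Holder("demo"), f.getPath());
        // Round-trip in the same JVM to confirm the file is readable.
        Holder h = (Holder) readObject(f.getPath());
        System.out.println(h.name); // prints "demo"
        f.delete();
    }
}
```

A round-trip like this succeeding in one JVM (as in IntelliJ) does not guarantee it succeeds under spark-submit, because the reading side may run with a different classloader setup.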

When I deserialize the file and cast it back to the object, it works perfectly fine from IntelliJ, but if I try to run the same code via spark-submit, it throws the following error:

java.lang.ClassCastException: cannot assign instance of scala.collection.immutable.List$SerializationProxy to field somepackage.DlStruct.xs of type scala.collection.immutable.List in instance of somepackage.DlStruct
at java.base/java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2205)
at java.base/java.io.ObjectStreamClass$FieldReflector.checkObjectFieldValueTypes(ObjectStreamClass.java:2168)
at java.base/java.io.ObjectStreamClass.checkObjFieldValueTypes(ObjectStreamClass.java:1422)
at java.base/java.io.ObjectInputStream.defaultCheckFieldValues(ObjectInputStream.java:2450)
at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2357)
at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2166)
at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1668)
at java.base/java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2434)
at java.base/java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2328)
at java.base/java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2166)
at java.base/java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1668)
at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:482)
at java.base/java.io.ObjectInputStream.readObject(ObjectInputStream.java:440)
Deserialization of plain Java objects works without any problem.

The snippet used for deserialization:

 File file = new File(serializedFilePath);
 FileInputStream fin = new FileInputStream(file);
 ObjectInputStream in = new ObjectInputStream(fin);

 infoHolder = (ObjectCarrier) in.readObject(); // <- this line gives error if it has a scala object, else runs smoothly

 in.close();
 fin.close();
I had to convert the Scala class to Java to finally get this working under spark-submit. I hope someone can come up with a better answer.
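The `List$SerializationProxy` ClassCastException is a known symptom of a classloader mismatch: under spark-submit, user classes live in Spark's own classloader, while `ObjectInputStream` resolves classes through the JDK's "latest user-defined loader". A commonly suggested workaround (a sketch only, not verified against the original project) is to subclass `ObjectInputStream` so classes are resolved through the thread context classloader:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.ObjectInputStream;
import java.io.ObjectStreamClass;

// ObjectInputStream that resolves classes via the context classloader,
// which under spark-submit is typically the loader that holds the user jar.
public class ContextClassLoaderObjectInputStream extends ObjectInputStream {

    public ContextClassLoaderObjectInputStream(InputStream in) throws IOException {
        super(in);
    }

    @Override
    protected Class<?> resolveClass(ObjectStreamClass desc)
            throws IOException, ClassNotFoundException {
        try {
            // Look the class up in the context classloader first.
            return Class.forName(desc.getName(), false,
                    Thread.currentThread().getContextClassLoader());
        } catch (ClassNotFoundException e) {
            // Fall back to the default resolution if it isn't found there.
            return super.resolveClass(desc);
        }
    }
}
```

Swapping `new ObjectInputStream(fin)` for this class in the snippet above is the usual suggestion for this error; whether it fixes a particular job depends on how that job's classloaders are actually arranged.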

Which Spark and Scala versions are you using? Are they the same versions spark-submit uses? By the way, case classes are serializable by default.

The versions specified in my POM are Scala 2.12.8 and Spark 2.4.4.

Which JDK version? Based on the stack trace it looks fairly recent. Not every recent JDK version is guaranteed to work with every older Scala version (OpenJDK 1.8.0_…), but I'm fairly sure the JVM that produced that stack trace is newer than 1.8.
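Since the comments revolve around a possible JDK mismatch between the IDE and the cluster, one quick check is to print the standard JVM system properties from inside the job itself and compare the output between an IntelliJ run and a spark-submit run:

```java
public class JvmVersionCheck {
    public static void main(String[] args) {
        // Standard JVM system properties; compare these across the two environments.
        System.out.println("java.version = " + System.getProperty("java.version"));
        System.out.println("java.vendor  = " + System.getProperty("java.vendor"));
        System.out.println("java.home    = " + System.getProperty("java.home"));
    }
}
```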