
How do I read a Scala serialization stack (from Spark)?


How do I read the serialization stack?

I'm building a distributed NLP application on Spark. I periodically run into these NotSerializable exceptions and always muddle through them, but I've never found good documentation on what everything in the serialization stack means.

How do I read the serialization stack that accompanies a NotSerializable error in Scala? How do I determine which class or object caused the error? What is the significance of the "field", "object", "writeObject", and "writeReplace" entries in the stack?

An example follows:

Caused by: java.io.NotSerializableException: MyPackage.testing.PreprocessTest$$typecreator1$1
Serialization stack:
        - object not serializable (class: MyPackage.testing.PreprocessTest$$typecreator1$1, value: MyPackage.testing.PreprocessTest$$typecreator1$1@27f6854b)
        - writeObject data (class: scala.reflect.api.SerializedTypeTag)
        - object (class scala.reflect.api.SerializedTypeTag, scala.reflect.api.SerializedTypeTag@4a571516)
        - writeReplace data (class: scala.reflect.api.SerializedTypeTag)
        - object (class scala.reflect.api.TypeTags$TypeTagImpl, TypeTag[String])
        - field (class: MyPackage.package$$anonfun$deserializeMaps$1, name: evidence$1$1, type: interface scala.reflect.api.TypeTags$TypeTag)
        - object (class MyPackage.package$$anonfun$deserializeMaps$1, <function1>)
        - field (class: MyPackage.package$$anonfun$deserializeMaps$1$$anonfun$apply$4, name: $outer, type: class MyPackage.package$$anonfun$deserializeMaps$1)
        - object (class MyPackage.package$$anonfun$deserializeMaps$1$$anonfun$apply$4, <function1>)
        - field (class: MyPackage.package$$anonfun$deserializeMaps$1$$anonfun$apply$4$$anonfun$apply$5, name: $outer, type: class MyPackage.package$$anonfun$deserializeMaps$1$$anonfun$apply$4)
        - object (class MyPackage.package$$anonfun$deserializeMaps$1$$anonfun$apply$4$$anonfun$apply$5, <function1>)
        - field (class: org.apache.spark.sql.catalyst.expressions.ScalaUDF$$anonfun$2, name: func$2, type: interface scala.Function1)
        - object (class org.apache.spark.sql.catalyst.expressions.ScalaUDF$$anonfun$2, <function1>)
        - field (class: org.apache.spark.sql.catalyst.expressions.ScalaUDF, name: f, type: interface scala.Function1)
        - object (class org.apache.spark.sql.catalyst.expressions.ScalaUDF, UDF(UDF(tokenMap#149)))
        - field (class: org.apache.spark.sql.catalyst.expressions.Alias, name: child, type: class org.apache.spark.sql.catalyst.expressions.Expression)
        - object (class org.apache.spark.sql.catalyst.expressions.Alias, UDF(UDF(tokenMap#149)) AS tokenMap#3131)
        - writeObject data (class: scala.collection.immutable.$colon$colon)
        - object (class scala.collection.immutable.$colon$colon, List(id#148, UDF(UDF(tokenMap#149)) AS tokenMap#3131, UDF(UDF(bigramMap#150)) AS bigramMap#3132, sentences#151, se_sentence_count#152, se_word_count#153, se_subjective_count#154, se_objective_count#155, se_document_sentiment#156, UDF(UDF(se_category#157)) AS se_category#3133))
        - field (class: org.apache.spark.sql.execution.Project, name: projectList, type: interface scala.collection.Seq)
        - object (class org.apache.spark.sql.execution.Project, Project [id#148,UDF(UDF(tokenMap#149)) AS tokenMap#3131,UDF(UDF(bigramMap#150)) AS bigramMap#3132,sentences#151,se_sentence_count#152,se_word_count#153,se_subjective_count#154,se_objective_count#155,se_document_sentiment#156,UDF(UDF(se_category#157)) AS se_category#3133]
+- InMemoryColumnarTableScan [se_sentence_count#152,bigramMap#150,id#148,tokenMap#149,se_word_count#153,sentences#151,se_document_sentiment#156,se_subjective_count#154,se_category#157,se_objective_count#155], InMemoryRelation [id#148,tokenMap#149,bigramMap#150,sentences#151,se_sentence_count#152,se_word_count#153,se_subjective_count#154,se_objective_count#155,se_document_sentiment#156,se_category#157], true, 10000, StorageLevel(true, true, false, true, 1), Union, None
)
        - field (class: org.apache.spark.sql.execution.ConvertToSafe, name: child, type: class org.apache.spark.sql.execution.SparkPlan)
        - object (class org.apache.spark.sql.execution.ConvertToSafe, ConvertToSafe
+- Project [id#148,UDF(UDF(tokenMap#149)) AS tokenMap#3131,UDF(UDF(bigramMap#150)) AS bigramMap#3132,sentences#151,se_sentence_count#152,se_word_count#153,se_subjective_count#154,se_objective_count#155,se_document_sentiment#156,UDF(UDF(se_category#157)) AS se_category#3133]
   +- InMemoryColumnarTableScan [se_sentence_count#152,bigramMap#150,id#148,tokenMap#149,se_word_count#153,sentences#151,se_document_sentiment#156,se_subjective_count#154,se_category#157,se_objective_count#155], InMemoryRelation [id#148,tokenMap#149,bigramMap#150,sentences#151,se_sentence_count#152,se_word_count#153,se_subjective_count#154,se_objective_count#155,se_document_sentiment#156,se_category#157], true, 10000, StorageLevel(true, true, false, true, 1), Union, None
)
        - field (class: org.apache.spark.sql.execution.ConvertToSafe$$anonfun$2, name: $outer, type: class org.apache.spark.sql.execution.ConvertToSafe)
        - object (class org.apache.spark.sql.execution.ConvertToSafe$$anonfun$2, <function1>)
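For context, the alternating "- field" / "- object" entries in a trace like this describe one hop each in the object-reference chain that Java serialization walked, from the outermost object Spark tried to ship down to the leaf that is not serializable (the first entry). The same shape can be reproduced in miniature with plain Java serialization, which Spark uses by default; the class names below are hypothetical, not taken from the trace above:

```scala
import java.io.{ByteArrayOutputStream, NotSerializableException, ObjectOutputStream}

// A leaf object that is not Serializable, playing the role of the
// $$typecreator1$1 class named in the first line of the stack above.
class NotSer {
  override def toString = "NotSer"
}

// The outer object being serialized. Its `inner` field corresponds to
// one "- field (...)" / "- object (...)" pair in a Spark trace: the
// field entry names the reference, the object entry names its value.
case class Wrapper(inner: NotSer)

object SerDemo {
  def main(args: Array[String]): Unit = {
    val out = new ObjectOutputStream(new ByteArrayOutputStream())
    try {
      out.writeObject(Wrapper(new NotSer))
    } catch {
      case e: NotSerializableException =>
        // The exception message (and the first entry of Spark's
        // serialization stack) names the leaf class; the remaining
        // entries walk outward through the fields that reach it.
        println(s"leaf cause: ${e.getMessage}")
    }
  }
}
```

Under this reading, "writeObject data" and "writeReplace data" entries mark hops where a class defines a custom `writeObject` or `writeReplace` method (as `scala.reflect.api.SerializedTypeTag` does in the trace above), rather than a plain field reference.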