
Kryo class error in Apache Spark


I have some Spark code that uses Kryo serialization. Everything runs fine as long as no server fails, but when a server goes down I run into big problems while Spark tries to recover itself. Basically, the error message says the server does not know my Article class:

Job aborted due to stage failure: Task 29 in stage 4.0 failed 4 times, most recent failure: Lost task 29.3 in stage 4.0 (TID 316, DATANODE-3): com.esotericsoftware.kryo.KryoException: Unable to find class: $line50.$read$$iwC$$iwC$Article
        com.esotericsoftware.kryo.util.DefaultClassResolver.readName(DefaultClassResolver.java:138)
        com.esotericsoftware.kryo.util.DefaultClassResolver.readClass(DefaultClassResolver.java:115)
        com.esotericsoftware.kryo.Kryo.readClass(Kryo.java:610)
        com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:721)
        org.apache.spark.serializer.KryoDeserializationStream.readObject(KryoSerializer.scala:133)
        org.apache.spark.serializer.DeserializationStream$$anon$1.getNext(Serializer.scala:133)
        org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)
        org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
        scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
        org.apache.spark.storage.MemoryStore.unrollSafely(MemoryStore.scala:235)
        org.apache.spark.CacheManager.putInBlockManager(CacheManager.scala:163)
        org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:70)
        org.apache.spark.rdd.RDD.iterator(RDD.scala:227)
        org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:68)
        org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
        org.apache.spark.scheduler.Task.run(Task.scala:54)
        org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:177)
        java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        java.lang.Thread.run(Thread.java:745)
I really struggle to understand what I am doing wrong.
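
For reference, enabling Kryo in Spark 1.1 typically looks like the following minimal sketch; the question does not show the actual configuration used, and the app name here is hypothetical:

import org.apache.spark.{SparkConf, SparkContext}

// Switch the serializer used for shuffles and caching over to Kryo.
val conf = new SparkConf()
  .setAppName("articles")  // hypothetical app name
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
val sc = new SparkContext(conf)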

I declare these classes outside of the map:

// Class to hold contributors
case class Contrib(contribType: Option[String], surname: Option[String],
                   givenNames: Option[String], phone: Option[String],
                   email: Option[String], fax: Option[String])

// Class to hold references
case class Reference( idRef:Option[String], articleNameRef:Option[String], pmIDFrom: Option[Long], pmIDRef:Option[Long])

// Class to hold articles
case class Article(articleName:String, articleAbstract: Option[String],
                   pmID:Option[Long], doi:Option[String],
                   references: Iterator[Reference],
                   contribs: Iterator[Contrib],
                   keywords: List[String])
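
In Spark 1.1, classes like these are usually registered with Kryo through a custom KryoRegistrator wired in via the spark.kryo.registrator setting. The question does not show whether the classes were registered this way, so the sketch below is only an illustration and the registrator name is hypothetical:

import com.esotericsoftware.kryo.Kryo
import org.apache.spark.serializer.KryoRegistrator

// Hypothetical registrator: register every class that is shuffled or cached.
class ArticleRegistrator extends KryoRegistrator {
  override def registerClasses(kryo: Kryo) {
    kryo.register(classOf[Contrib])
    kryo.register(classOf[Reference])
    kryo.register(classOf[Article])
  }
}

// Enabled with: conf.set("spark.kryo.registrator", "ArticleRegistrator")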
It seems that some executors no longer know what an Article is...
How can I solve this?

Thanks,
Stephane

Comment: For clarification, which version of Spark are you using? Does this exception only occur on new executors added after the initial set of executors died?

Reply: Using the HDP 2.1 preview with Spark 1.1.0, on Rackspace. It happens randomly (no executors died). Basically, at some point my NODE-3 forgot what an "Article" is and gave that error. All my executors start at the same time when I create the spark-shell.