Apache Spark mapWithState gives java.lang.ClassCastException: org.apache.spark.util.SerializableConfiguration cannot be cast when recovering from checkpoint

Tags: apache-spark, serialization, spark-streaming, broadcast, checkpointing

I am facing an issue with a Spark Streaming job in which I am trying to use broadcast, mapWithState and checkpointing together in Spark.

The usage is as follows:

  • Since I have to pass some connection objects (which are not serializable) to the executors, I am using org.apache.spark.broadcast.Broadcast
  • Since we have to maintain some cached information, I am using stateful streaming with mapWithState
  • I am also using checkpointing of the streaming context
I also need to pass the broadcasted connection object into mapWithState in order to fetch some data from an external source.

The flow works fine when the context is newly created. However, when I crash the application and try to recover from the checkpoint, I get a ClassCastException.

I have written a small snippet based on an example to reproduce the issue in the following:

  • My broadcast logic is in yuvalitzchakov.utils.KafkaWriter.scala (a rough, illustrative sketch of such a wrapper follows this list)
  • The dummy logic of the application is in yuvalitzchakov.stateful.SparkStatefulRunnerWithBroadcast.scala
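The actual KafkaWriter.scala is not shown here; purely as an illustrative sketch (class name reused, fields and methods assumed), a broadcast-friendly wrapper around a non-serializable Kafka producer usually keeps the client in a transient lazy field so that only the serializable configuration travels with the broadcast:

import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

// Sketch only: the producer itself is never serialized; it is created lazily
// inside each executor JVM from the (serializable) Properties.
class KafkaWriter(producerProps: Properties) extends Serializable {

    @transient private lazy val producer = new KafkaProducer[String, String](producerProps)

    def write(topic: String, message: String): Unit =
        producer.send(new ProducerRecord[String, String](topic, message))
}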
Dummy code snippet:

val sparkConf = new SparkConf().setMaster("local[*]").setAppName("spark-stateful-example")

...
val prop = new Properties()
...

val config: Config = ConfigFactory.parseString(prop.toString)
val sc = new SparkContext(sparkConf)
val ssc = StreamingContext.getOrCreate(checkpointDir, () =>  {

    println("creating context newly")

    clearCheckpoint(checkpointDir)

    val streamingContext = new StreamingContext(sc, Milliseconds(batchDuration))
    streamingContext.checkpoint(checkpointDir)

    ...
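    // broadcast the (non-serializable) Kafka writer so that executors can reuse it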
    val kafkaWriter = SparkContext.getOrCreate().broadcast(kafkaErrorWriter)
    ...
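    // the broadcast handle is captured in the closure of the mapWithState function below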
    val stateSpec = StateSpec.function((key: Int, value: Option[UserEvent], state: State[UserSession]) =>
        updateUserEvents(key, value, state, kafkaWriter)).timeout(Minutes(jobConfig.getLong("timeoutInMinutes")))

    kafkaTextStream
    .transform(rdd => {
        offsetsQueue.enqueue(rdd.asInstanceOf[HasOffsetRanges].offsetRanges)
        rdd
    })
    .map(deserializeUserEvent)
    .filter(_ != UserEvent.empty)
    .mapWithState(stateSpec)
    .foreachRDD { rdd =>
        ...
        // some logic
        ...
    }

    streamingContext
})

ssc.start()
ssc.awaitTermination()


def updateUserEvents(key: Int,
                     value: Option[UserEvent],
                     state: State[UserSession],
                     kafkaWriter: Broadcast[KafkaWriter]): Option[UserSession] = {

    ...
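    // after recovery from the checkpoint, .value no longer returns the KafkaWriter here (see the exception below)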
    kafkaWriter.value.someMethodCall()
    ...
}
The error occurs when

kafkaWriter.value.someMethodCall()

is executed:

17/08/01 21:20:38 ERROR Executor: Exception in task 2.0 in stage 3.0 (TID 4)
java.lang.ClassCastException: org.apache.spark.util.SerializableConfiguration cannot be cast to yuvalitzchakov.utils.KafkaWriter
    at yuvalitzchakov.stateful.SparkStatefulRunnerWithBroadcast$.updateUserSessions$1(SparkStatefulRunnerWithBroadcast.scala:144)
    at yuvalitzchakov.stateful.SparkStatefulRunnerWithBroadcast$.updateUserEvents(SparkStatefulRunnerWithBroadcast.scala:150)
    at yuvalitzchakov.stateful.SparkStatefulRunnerWithBroadcast$$anonfun$2.apply(SparkStatefulRunnerWithBroadcast.scala:78)
    at yuvalitzchakov.stateful.SparkStatefulRunnerWithBroadcast$$anonfun$2.apply(SparkStatefulRunnerWithBroadcast.scala:77)
    at org.apache.spark.streaming.StateSpec$$anonfun$1.apply(StateSpec.scala:181)
    at org.apache.spark.streaming.StateSpec$$anonfun$1.apply(StateSpec.scala:180)
    at org.apache.spark.streaming.rdd.MapWithStateRDDRecord$$anonfun$updateRecordWithData$1.apply(MapWithStateRDD.scala:57)
    at org.apache.spark.streaming.rdd.MapWithStateRDDRecord$$anonfun$updateRecordWithData$1.apply(MapWithStateRDD.scala:55)
    at scala.collection.Iterator$class.foreach(Iterator.scala:893)
    at org.apache.spark.InterruptibleIterator.foreach(InterruptibleIterator.scala:28)
    at org.apache.spark.streaming.rdd.MapWithStateRDDRecord$.updateRecordWithData(MapWithStateRDD.scala:55)
    at org.apache.spark.streaming.rdd.MapWithStateRDD.compute(MapWithStateRDD.scala:159)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD$$anonfun$8.apply(RDD.scala:336)
    at org.apache.spark.rdd.RDD$$anonfun$8.apply(RDD.scala:334)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:1005)
    at org.apache.spark.storage.BlockManager$$anonfun$doPutIterator$1.apply(BlockManager.scala:996)
    at org.apache.spark.storage.BlockManager.doPut(BlockManager.scala:936)
    at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:996)
    at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:700)
    at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:334)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:285)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:323)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:287)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
    at org.apache.spark.scheduler.Task.run(Task.scala:99)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:322)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Basically, kafkaWriter is the broadcast variable, and kafkaWriter.value should return the object we broadcast, but instead it returns a SerializableConfiguration, which cannot be cast to the desired type.


Thanks in advance for any help.

Broadcast variables cannot be used with mapWithState (or with transformation operations in general) if you need to recover from a checkpoint directory in Spark Streaming. In that case they can only be used inside output operations, because the broadcast needs the Spark context to lazily re-initialize it, for example:

class JavaWordBlacklist {

    private static volatile Broadcast<List<String>> instance = null;

    public static Broadcast<List<String>> getInstance(JavaSparkContext jsc) {
        if (instance == null) {
            synchronized (JavaWordBlacklist.class) {
                if (instance == null) {
                    List<String> wordBlacklist = Arrays.asList("a", "b", "c");
                    instance = jsc.broadcast(wordBlacklist);
                }
            }
        }
        return instance;
    }
}

class JavaDroppedWordsCounter {

    private static volatile LongAccumulator instance = null;

    public static LongAccumulator getInstance(JavaSparkContext jsc) {
        if (instance == null) {
            synchronized (JavaDroppedWordsCounter.class) {
                if (instance == null) {
                    instance = jsc.sc().longAccumulator("WordsInBlacklistCounter");
                }
            }
        }
        return instance;
    }
}

wordCounts.foreachRDD((rdd, time) -> {
    // Get or register the blacklist Broadcast
    Broadcast<List<String>> blacklist = JavaWordBlacklist.getInstance(new JavaSparkContext(rdd.context()));
    // Get or register the droppedWordsCounter Accumulator
    LongAccumulator droppedWordsCounter = JavaDroppedWordsCounter.getInstance(new JavaSparkContext(rdd.context()));
    // Use blacklist to drop words and use droppedWordsCounter to count them
    String counts = rdd.filter(wordCount -> {
        if (blacklist.value().contains(wordCount._1())) {
            droppedWordsCounter.add(wordCount._2());
            return false;
        } else {
            return true;
        }
    }).collect().toString();
    String output = "Counts at time " + time + " " + counts;
});
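Roughly the same lazily (re)registered pattern, translated to the Scala code from the question (a sketch under the assumption that the KafkaWriter is only needed inside an output operation such as foreachRDD, not inside mapWithState; names are illustrative):

import org.apache.spark.SparkContext
import org.apache.spark.broadcast.Broadcast

// Lazily (re)registered broadcast: after a restart from the checkpoint it is
// simply re-broadcast on first use inside an output operation.
object KafkaWriterBroadcast {
    @volatile private var instance: Broadcast[KafkaWriter] = _

    def getInstance(sc: SparkContext, writer: => KafkaWriter): Broadcast[KafkaWriter] = {
        if (instance == null) {
            synchronized {
                if (instance == null) {
                    instance = sc.broadcast(writer)
                }
            }
        }
        instance
    }
}

// Usage inside an output operation only, e.g.:
// stream.foreachRDD { rdd =>
//     val writer = KafkaWriterBroadcast.getInstance(rdd.sparkContext, kafkaErrorWriter)
//     rdd.foreachPartition(_.foreach(event => writer.value.write("errors", event.toString)))
// }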

Why do you need the KafkaWriter inside mapWithState? Is it possible to make the call prior to updating the state, perhaps something that runs inside mapPartitions? By the way, your example seems to have a copy/paste error, as some code is duplicated twice.

Thanks for the reply, Yuval. This is a made-up example just to reproduce the issue. In our actual use case we have to make a DB call over JDBC to fetch some data, which we then use to update the state, so we have to pass the broadcast into mapWithState. Also, if you are referring to SparkStatefulRunner and SparkStatefulRunnerWithBroadcast as being copies, the former does not pass a broadcast into mapWithState while the latter does.

I see. Have you considered making the JDBC call before invoking mapWithState?

We only have to fetch data from the external source for certain events, and we do not want to make the DB call every time because it is expensive, so making the JDBC call before mapWithState does not make sense.
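Not part of the original thread, but a common workaround for exactly this situation is to drop the broadcast and keep the expensive client in a per-executor lazy singleton, which can be called from inside the mapWithState function even after recovering from a checkpoint. A minimal sketch, assuming a JDBC lookup with purely illustrative connection details:

import java.sql.{Connection, DriverManager}

// Per-JVM (per-executor) lazy singleton: nothing but the object reference ends
// up in the mapWithState closure, so checkpoint recovery is unaffected.
// URL, credentials and query are illustrative placeholders.
object ExternalLookup {
    private lazy val connection: Connection =
        DriverManager.getConnection("jdbc:postgresql://db-host/sessions", "user", "password")

    def fetchUserProfile(userId: Int): Option[String] = {
        val stmt = connection.prepareStatement("SELECT profile FROM users WHERE id = ?")
        try {
            stmt.setInt(1, userId)
            val rs = stmt.executeQuery()
            if (rs.next()) Some(rs.getString("profile")) else None
        } finally {
            stmt.close()
        }
    }
}

// Inside updateUserEvents, called only for the events that actually need it:
// if (eventNeedsEnrichment) ExternalLookup.fetchUserProfile(key)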