Apache Spark Structured Streaming: ClassCastException: org.apache.spark.sql.execution.streaming.SerializedOffset cannot be cast to com.couchbase.spark.sql.streaming.CouchbaseSourceOffset
Tags: apache-spark, couchbase, spark-structured-streaming

I am using the Couchbase Spark connector with Spark Structured Streaming, and I have enabled checkpointing on the streaming query. When I rerun the application against the previous checkpoint location, it fails with "java.lang.ClassCastException: class org.apache.spark.sql.execution.streaming.SerializedOffset cannot be cast to class com.couchbase.spark.sql.streaming.CouchbaseSourceOffset". If I delete the contents of the checkpoint directory, it runs fine. Is this a bug in Spark? I am using Spark 2.4.5.
20/04/23 19:11:29 ERROR MicroBatchExecution: Query [id = 1ce2e002-20ee-401e-98de-27e70b27f1a4, runId = 0b89094f-3bae-4927-b09c-24d9deaf5901] terminated with error
java.lang.ClassCastException: class org.apache.spark.sql.execution.streaming.SerializedOffset cannot be cast to class com.couchbase.spark.sql.streaming.CouchbaseSourceOffset (org.apache.spark.sql.execution.streaming.SerializedOffset and com.couchbase.spark.sql.streaming.CouchbaseSourceOffset are in unnamed module of loader 'app')
at com.couchbase.spark.sql.streaming.CouchbaseSource.$anonfun$getBatch$2(CouchbaseSource.scala:172)
at scala.Option.map(Option.scala:230)
at com.couchbase.spark.sql.streaming.CouchbaseSource.getBatch(CouchbaseSource.scala:172)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$populateStartOffsets$3(MicroBatchExecution.scala:284)
at scala.collection.Iterator.foreach(Iterator.scala:943)
at scala.collection.Iterator.foreach$(Iterator.scala:943)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
at scala.collection.IterableLike.foreach(IterableLike.scala:74)
at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
at org.apache.spark.sql.execution.streaming.StreamProgress.foreach(StreamProgress.scala:25)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.populateStartOffsets(MicroBatchExecution.scala:281)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$2(MicroBatchExecution.scala:169)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken(ProgressReporter.scala:351)
at org.apache.spark.sql.execution.streaming.ProgressReporter.reportTimeTaken$(ProgressReporter.scala:349)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.$anonfun$runActivatedStream$1(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:160)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:281)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:193)
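The trace shows the failure happens in the connector's `getBatch` while Spark replays offsets from the checkpoint (`populateStartOffsets`). On restart, Spark reads offsets back from the checkpoint log as a generic `SerializedOffset` wrapping the raw JSON, not as the source's own offset class, so a source that casts the incoming `Offset` directly will throw exactly this `ClassCastException`. Below is a minimal, Spark-free sketch of that pattern and the usual fix (reparse from JSON instead of casting); all class names are simplified stand-ins for the real Spark and Couchbase types, not the actual connector API.

```java
// Sketch: why a blind cast fails after checkpoint recovery, and the
// safe alternative. Offset/SerializedOffset/SourceOffset here are
// simplified stand-ins for the Spark and connector classes.
interface Offset {
    String json();
}

// What Spark hands a source when restoring from a checkpoint:
// just the serialized JSON, with no knowledge of the concrete type.
final class SerializedOffset implements Offset {
    private final String json;
    SerializedOffset(String json) { this.json = json; }
    public String json() { return json; }
}

// Stand-in for a connector-specific offset (here a single sequence number).
final class SourceOffset implements Offset {
    final long seq;
    SourceOffset(long seq) { this.seq = seq; }
    public String json() { return "{\"seq\":" + seq + "}"; }

    // Safe conversion: only cast when the runtime type already matches;
    // otherwise reparse from the JSON representation. This is the shape
    // of fix a source needs to survive checkpoint recovery.
    static SourceOffset from(Offset o) {
        if (o instanceof SourceOffset) {
            return (SourceOffset) o;
        }
        // Crude JSON extraction, sufficient for this one-field sketch.
        long seq = Long.parseLong(o.json().replaceAll("[^0-9]", ""));
        return new SourceOffset(seq);
    }
}

public class OffsetDemo {
    public static void main(String[] args) {
        // Simulate recovery: the offset comes back as SerializedOffset.
        Offset restored = new SerializedOffset(new SourceOffset(42).json());
        // A direct (SourceOffset) cast here would throw ClassCastException;
        // SourceOffset.from handles both the live and the restored case.
        System.out.println(SourceOffset.from(restored).seq);
    }
}
```

In other words, the cast in `CouchbaseSource.getBatch` only works while the query runs in one JVM session; after a restart the connector must deserialize the checkpointed JSON itself, which is why the error looks like a connector bug rather than a Spark one.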