
Scala: AsynchronousException during stateful processing with KeyedProcessFunction and the RocksDB state backend on Flink 1.7.2

Tags: scala, apache-kafka, apache-flink, rocksdb

I have written a simple word-count application with Flink 1.7.2, using Kafka 2.2 as both consumer and producer. I use exactly-once semantics for the Kafka producer, a KeyedProcessFunction for the stateful processing, and RocksDB with incremental checkpointing as my state backend.

The application runs perfectly well when I launch it from IntelliJ, but when I submit it to my local Flink cluster I get an AsynchronousException, and the Flink application retries every 0-20 seconds. Has anyone run into this problem before? Am I missing something on the configuration side?
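For reference, the per-key counting semantics used in the job (read the current count for a word, increment it, write it back) can be sketched without any Flink dependencies. This is just an illustration of the logic, not part of the job; the object and method names here are hypothetical:

```scala
object WordCountSketch {
  // Illustration of the per-key counting logic, using an immutable Map
  // in place of Flink's keyed MapState.
  def update(state: Map[String, Int], word: String): (Map[String, Int], Int) = {
    val newSum = state.getOrElse(word, 0) + 1
    (state.updated(word, newSum), newSum)
  }

  def main(args: Array[String]): Unit = {
    val (s1, c1) = update(Map.empty, "flink")
    val (_, c2)  = update(s1, "flink")
    println(s"first=$c1 second=$c2") // first=1 second=2
  }
}
```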

Here is my code:

    import java.util.{Optional, Properties}

    import org.apache.flink.api.common.serialization.SimpleStringSchema
    import org.apache.flink.api.common.state.{MapState, MapStateDescriptor}
    import org.apache.flink.configuration.Configuration
    import org.apache.flink.contrib.streaming.state.RocksDBStateBackend
    import org.apache.flink.streaming.api.CheckpointingMode
    import org.apache.flink.streaming.api.functions.KeyedProcessFunction
    import org.apache.flink.streaming.api.scala._
    import org.apache.flink.streaming.connectors.kafka.partitioner.FlinkFixedPartitioner
    import org.apache.flink.streaming.connectors.kafka.{FlinkKafkaConsumer011, FlinkKafkaProducer011}
    import org.apache.flink.streaming.util.serialization.KeyedSerializationSchemaWrapper
    import org.apache.flink.util.Collector
    import org.apache.kafka.clients.consumer.ConsumerConfig
    import org.apache.kafka.clients.producer.ProducerConfig

    class KeyedProcFuncWordCount extends KeyedProcessFunction[String, String, (String, Int)] {

      private var state: MapState[String, Int] = _

      override def open(parameters: Configuration): Unit = {
        state = getRuntimeContext
          .getMapState(new MapStateDescriptor[String, Int]("wordCountState",
            createTypeInformation[String], createTypeInformation[Int]))
      }

      override def processElement(value: String,
                                  ctx: KeyedProcessFunction[String, String, (String, Int)]#Context,
                                  out: Collector[(String, Int)]): Unit = {
        val currentSum =
          if (state.contains(value)) state.get(value)
          else 0
        val newSum = currentSum + 1
        state.put(value, newSum)
        out.collect((value, newSum))
      }
    }

    object KafkaProcFuncWordCount {

      val bootstrapServers = "localhost:9092"
      val inTopic = "test"
      val outTopic = "test-out"

      def main(args: Array[String]): Unit = {
        val env = StreamExecutionEnvironment.getExecutionEnvironment
        env.enableCheckpointing(30000)
        env.setStateBackend(new RocksDBStateBackend("file:///tmp/data/db.rdb", true))
        env.getCheckpointConfig.setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE)

        val consumerProps = new Properties
        consumerProps.setProperty(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers)
        consumerProps.setProperty(ConsumerConfig.GROUP_ID_CONFIG, "KafkaProcFuncWordCount")
        consumerProps.setProperty(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed")
        val kafkaConsumer = new FlinkKafkaConsumer011[String](inTopic, new SimpleStringSchema, consumerProps)

        val producerProps = new Properties
        producerProps.setProperty(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers)
        producerProps.setProperty(ProducerConfig.RETRIES_CONFIG, "2147483647")
        producerProps.setProperty(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "1")
        producerProps.setProperty(ProducerConfig.ACKS_CONFIG, "all")
        producerProps.setProperty(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "true")
        val kafkaProducer = new FlinkKafkaProducer011[String](
          outTopic,
          new KeyedSerializationSchemaWrapper[String](new SimpleStringSchema),
          producerProps,
          Optional.of(new FlinkFixedPartitioner[String]),
          FlinkKafkaProducer011.Semantic.EXACTLY_ONCE,
          5)

        val text = env.addSource(kafkaConsumer)
        val runningCounts = text
          .keyBy(_.toString)
          .process(new KeyedProcFuncWordCount)
          .map(_.toString)
        runningCounts
          .addSink(kafkaProducer)

        env.execute("KafkaProcFuncWordCount")
      }
    }

Here is the part of the flink-taskexecutor log that keeps repeating:

2019-07-05 14:05:47,548 INFO  org.apache.flink.streaming.connectors.kafka.internal.FlinkKafkaProducer  - Flushing new partitions
2019-07-05 14:05:47,552 INFO  org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011  - Starting FlinkKafkaProducer (1/1) to produce into default topic test-out
2019-07-05 14:05:47,775 INFO  org.apache.flink.runtime.taskmanager.Task                     - Attempting to fail task externally KeyedProcess -> Map -> Sink: Unnamed (1/1) (f61d24c993f400394eaa028981a26bfe).
2019-07-05 14:05:47,776 INFO  org.apache.flink.runtime.taskmanager.Task                     - KeyedProcess -> Map -> Sink: Unnamed (1/1) (f61d24c993f400394eaa028981a26bfe) switched from RUNNING to FAILED.
AsynchronousException{java.lang.Exception: Could not materialize checkpoint 6 for operator KeyedProcess -> Map -> Sink: Unnamed (1/1).}
    at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointExceptionHandler.tryHandleCheckpointException(StreamTask.java:1153)
    at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.handleExecutionException(StreamTask.java:947)
    at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:884)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.Exception: Could not materialize checkpoint 6 for operator KeyedProcess -> Map -> Sink: Unnamed (1/1).
    at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.handleExecutionException(StreamTask.java:942)
    ... 6 more
Caused by: java.util.concurrent.ExecutionException: java.lang.NoSuchMethodError: org.apache.flink.api.common.typeutils.SimpleTypeSerializerSnapshot.<init>(Ljava/util/function/Supplier;)V
    at java.util.concurrent.FutureTask.report(FutureTask.java:122)
    at java.util.concurrent.FutureTask.get(FutureTask.java:192)
    at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:53)
    at org.apache.flink.streaming.api.operators.OperatorSnapshotFinalizer.<init>(OperatorSnapshotFinalizer.java:53)
    at org.apache.flink.streaming.runtime.tasks.StreamTask$AsyncCheckpointRunnable.run(StreamTask.java:853)
    ... 5 more
Caused by: java.lang.NoSuchMethodError: org.apache.flink.api.common.typeutils.SimpleTypeSerializerSnapshot.<init>(Ljava/util/function/Supplier;)V
    at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011$TransactionStateSerializer$TransactionStateSerializerSnapshot.<init>(FlinkKafkaProducer011.java:1244)
    at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer011$TransactionStateSerializer.snapshotConfiguration(FlinkKafkaProducer011.java:1235)
    at org.apache.flink.api.common.typeutils.CompositeTypeSerializerConfigSnapshot.<init>(CompositeTypeSerializerConfigSnapshot.java:53)
    at org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction$StateSerializerConfigSnapshot.<init>(TwoPhaseCommitSinkFunction.java:847)
    at org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction$StateSerializer.snapshotConfiguration(TwoPhaseCommitSinkFunction.java:792)
    at org.apache.flink.streaming.api.functions.sink.TwoPhaseCommitSinkFunction$StateSerializer.snapshotConfiguration(TwoPhaseCommitSinkFunction.java:615)
    at org.apache.flink.runtime.state.RegisteredOperatorStateBackendMetaInfo.computeSnapshot(RegisteredOperatorStateBackendMetaInfo.java:170)
    at org.apache.flink.runtime.state.RegisteredOperatorStateBackendMetaInfo.snapshot(RegisteredOperatorStateBackendMetaInfo.java:103)
    at org.apache.flink.runtime.state.DefaultOperatorStateBackend$DefaultOperatorStateBackendSnapshotStrategy$1.callInternal(DefaultOperatorStateBackend.java:711)
    at org.apache.flink.runtime.state.DefaultOperatorStateBackend$DefaultOperatorStateBackendSnapshotStrategy$1.callInternal(DefaultOperatorStateBackend.java:696)
    at org.apache.flink.runtime.state.AsyncSnapshotCallable.call(AsyncSnapshotCallable.java:76)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at org.apache.flink.util.FutureUtil.runIfNotDoneAndGet(FutureUtil.java:50)
    ... 7 more


Any help is much appreciated.

Could you double-check that you did not package the Flink core dependencies (flink-java, flink-streaming-java, flink-runtime, ...) into your jar? Also, please double-check that the Flink version running in your cluster is the same as the one the Kafka connector dependency (flink-connector-kafka) was built against. The flink-kafka connector, like all connectors, needs to be part of the fat jar.
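On the packaging point: in an sbt build this separation could look roughly like the sketch below, where the Flink core modules are `provided` (supplied by the cluster classpath) and only the connector is bundled by the assembly plugin. Module names follow the Flink 1.7 naming; treat the exact dependency list as an assumption for your project:

```scala
// build.sbt sketch (assumption: sbt-assembly produces the fat jar)
val flinkVersion = "1.7.2"

libraryDependencies ++= Seq(
  // core Flink: already on the cluster classpath, so do not bundle it
  "org.apache.flink" %% "flink-scala"                % flinkVersion % "provided",
  "org.apache.flink" %% "flink-streaming-scala"      % flinkVersion % "provided",
  // the Kafka connector is NOT on the cluster classpath and must ship in the jar
  "org.apache.flink" %% "flink-connector-kafka-0.11" % flinkVersion
)
```

With this layout, the `NoSuchMethodError` pattern in the trace above (a core class like `SimpleTypeSerializerSnapshot` resolving to a different version than the connector expects) cannot arise from the job jar shadowing the cluster's own Flink classes.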

Hope this helps.

Cheers,

Konstantin