Apache Kafka Streams: Retries

Kafka version - 1.0.1
I am getting the exception below at random intervals. I tried increasing request.timeout.ms to 5 minutes, but it still timed out again at random intervals (every few hours). It is not clear why the exception occurs; a restart appears to resume from where it left off, but has to be done manually. So I tried enabling retries, but that seems to have no effect, since I see no retries in the logs (meaning: a failure, then a first attempt, another failure, then a second attempt, up to the maximum number of retries). Could you explain the exception below and suggest how we can keep the Kafka Streams application running, or retrying, when it occurs? If we need to raise request.timeout.ms towards its maximum, what downsides should we be aware of, given that we should not leave threads hanging indefinitely when a broker fails?

props.put(ProducerConfig.RETRIES_CONFIG, 3);

I tried increasing the request timeout to the maximum integer value, but ran into another timeout exception (the second stack trace below).
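
For context, here is a minimal sketch (not from the original post) of how these producer-level settings can be passed to the internal producer of a Kafka Streams application. The application id and broker address are placeholders; StreamsConfig.producerPrefix simply prepends "producer." so that a setting unambiguously targets the embedded producer:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.streams.StreamsConfig;

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "housestream");    // placeholder
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092"); // placeholder
    // Retry a failed send instead of failing the task on a transient broker hiccup.
    props.put(StreamsConfig.producerPrefix(ProducerConfig.RETRIES_CONFIG), 3);
    // Pause between retries so a struggling broker has time to recover.
    props.put(StreamsConfig.producerPrefix(ProducerConfig.RETRY_BACKOFF_MS_CONFIG), 1000);
    // The 5-minute request timeout mentioned above.
    props.put(StreamsConfig.producerPrefix(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG), 300000);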


Comments:

"I guess you ran into a known error. KIP-91 explains the background."

"Hi Matthias, thanks. Yes, I had already gone through that issue. However, the workaround is not clear to me, since the problem above is a known one. Please see the updated timeouts above. Could you suggest a workaround we can use until the KIP is implemented?"

"We usually recommend increasing request.timeout.ms, and you have already done that... Without a deeper analysis (which I cannot do here), it is hard to say what the problem is or how to resolve it. Maybe ask for help on the Kafka mailing list or in the Confluent Community Slack."
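
Until the root cause is found, one way to keep the application from dying silently is to register an uncaught-exception handler on the KafkaStreams instance, a hook that does exist in 1.0.x. The sketch below is only an outline: the recovery action is a placeholder, since 1.0.x has no built-in thread-replacement option and a dead stream thread can only be replaced by closing and recreating the whole instance.

    import org.apache.kafka.streams.KafkaStreams;

    KafkaStreams streams = new KafkaStreams(topology, props); // topology built elsewhere
    streams.setUncaughtExceptionHandler((thread, throwable) -> {
        // A StreamsException like the ones below lands here after the stream thread dies.
        System.err.println("Stream thread " + thread.getName() + " died: " + throwable);
        // Hypothetical recovery: signal an external supervisor to close() this
        // instance and start a fresh one; the dead thread itself cannot be revived.
    });
    streams.start();

The two stack traces in question follow.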
2018-07-05 06:04:25 ERROR Housestream:91 - Unknown Exception occurred
org.apache.kafka.streams.errors.StreamsException: task [1_1] Abort sending since an error caught with a previous record (key GCB21K1X value [L@5e86f18a timestamp 1530783812110) to topic housestream-digitstore-changelog due to org.apache.kafka.common.errors.TimeoutException: Expiring 201 record(s) for housestream-digitstore-changelog: 30144 ms has passed since last append.
        at org.apache.kafka.streams.processor.internals.RecordCollectorImpl$1.onCompletion(RecordCollectorImpl.java:118)
        at org.apache.kafka.clients.producer.internals.ProducerBatch.completeFutureAndFireCallbacks(ProducerBatch.java:204)
        at org.apache.kafka.clients.producer.internals.ProducerBatch.done(ProducerBatch.java:187)
        at org.apache.kafka.clients.producer.internals.Sender.failBatch(Sender.java:627)
        at org.apache.kafka.clients.producer.internals.Sender.sendProducerData(Sender.java:287)
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:238)
        at org.apache.kafka.clients.producer.internals.Sender.run(Sender.java:163)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.common.errors.TimeoutException: Expiring 201 record(s) for housestream-digitstore-changelog: 30144 ms has passed since last append
2018-07-05 12:22:15 ERROR Housestream:179 - Unknown Exception occurred
org.apache.kafka.streams.errors.StreamsException: task [1_0] Exception caught while punctuating processor 'validatequote'
        at org.apache.kafka.streams.processor.internals.StreamTask.punctuate(StreamTask.java:267)
        at org.apache.kafka.streams.processor.internals.PunctuationQueue.mayPunctuate(PunctuationQueue.java:54)
        at org.apache.kafka.streams.processor.internals.StreamTask.maybePunctuateSystemTime(StreamTask.java:619)
        at org.apache.kafka.streams.processor.internals.AssignedTasks.punctuate(AssignedTasks.java:430)
        at org.apache.kafka.streams.processor.internals.TaskManager.punctuate(TaskManager.java:324)
        at org.apache.kafka.streams.processor.internals.StreamThread.punctuate(StreamThread.java:969)
        at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:834)
        at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:774)
        at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:744)
Caused by: org.apache.kafka.streams.errors.StreamsException: task [1_1] Abort sending since an error caught with a previous record (key 32342 value com.int.digital.QUOTE@2c73fa63 timestamp 153083237883) to topic digital_quote due to org.apache.kafka.common.errors.TimeoutException: Failed to allocate memory within the configured max blocking time 60000 ms..
        at org.apache.kafka.streams.processor.internals.RecordCollectorImpl$1.onCompletion(RecordCollectorImpl.java:118)
        at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:819)
        at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:760)
        at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:100)
        at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:78)
        at org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:87)
        at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:113)
        at org.cox.processor.CheckQuote.handleTasks(CheckQuote.java:122)
        at org.cox.processor.CheckQuote$1.punctuate(CheckQuote.java:145)
        at org.apache.kafka.streams.processor.internals.ProcessorNode$4.run(ProcessorNode.java:131)
        at org.apache.kafka.streams.processor.internals.StreamsMetricsImpl.measureLatencyNs(StreamsMetricsImpl.java:208)
        at org.apache.kafka.streams.processor.internals.ProcessorNode.punctuate(ProcessorNode.java:134)
        at org.apache.kafka.streams.processor.internals.StreamTask.punctuate(StreamTask.java:263)
        ... 8 more
Caused by: org.apache.kafka.common.errors.TimeoutException: Failed to allocate memory within the configured max blocking time 60000 ms.
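
The second failure ("Failed to allocate memory within the configured max blocking time 60000 ms") means the producer's record buffer filled up faster than the broker acknowledged batches, so send() gave up after the default 60-second max.block.ms. If the broker-side slowness cannot be removed, a possible mitigation, again only a sketch with placeholder values, is to enlarge the buffer or let the producer block longer:

    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.streams.StreamsConfig;

    // Placeholder values; tune against actual throughput and available heap.
    props.put(StreamsConfig.producerPrefix(ProducerConfig.BUFFER_MEMORY_CONFIG), 64 * 1024 * 1024L); // default 32 MB
    props.put(StreamsConfig.producerPrefix(ProducerConfig.MAX_BLOCK_MS_CONFIG), 120000);             // default 60000 ms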