
Apache Kafka: Kafka Streams throws RecordTooLargeException when forwarding messages from a processor


Since we migrated to Kafka client 2.3 (previously 1.1), we have been intermittently getting a RecordTooLargeException when messages are sent to a Kafka topic. The producer's max.request.size is set to 524288. As you can see below, the Event object is nowhere near the max.request.size limit:


(key 4bc2eef4-ac1c-97bf-518b-4a32c38b9e4f value Event(transactionId=88834013-28c3-405d-9f69-81089dfa9246, action=RECORDING_REQUESTED, dataCenter=NEWPORT, ccid=2455, resourceId=4bc2eef4-ac1c-97bf-518b-4a32c38b9e4f, channelId=721ab65b-5333-8c23-4a03-3fff869176c9, canonicalId=462a90a8-7e1e-71a5-4859-eb076e5397ba, seriesId=c8ff610c-77f6-713a-32c3-eac8f6e632fa, externalId=EP001890580021, scheduleDurationInSec=2160, scheduleStartTimeMillisecs=1569356100000, recordingCount=38, dataCenterPath=null, startIndex=0, relativeFolderPath=4bc2eef4-ac1c-97bf-518b-4a32c38b9e4f-0, compressionModel=COMPRESSED, dataPlaneStatus=null, extraProperties=null, error=null, errorDesc=null) timestamp 1569356100063)
This usually happens when the application is under load and producing many messages (hundreds per second or more). I am not sure whether max.request.size includes the size of the message headers, but each message can carry a few headers with a combined size under 100 bytes.

org.apache.kafka.streams.errors.StreamsException: task [0_3] Abort sending since an error caught with a previous record (key 4bc2eef4-ac1c-97bf-518b-4a32c38b9e4f value Event(transactionId=88834013-28c3-405d-9f69-81089dfa9246, action=RECORDING_REQUESTED, dataCenter=NEWPORT, ccid=2455, resourceId=4bc2eef4-ac1c-97bf-518b-4a32c38b9e4f, channelId=721ab65b-5333-8c23-4a03-3fff869176c9, canonicalId=462a90a8-7e1e-71a5-4859-eb076e5397ba, seriesId=c8ff610c-77f6-713a-32c3-eac8f6e632fa, externalId=EP001890580021, scheduleDurationInSec=2160, scheduleStartTimeMillisecs=1569356100000, recordingCount=38, dataCenterPath=null, startIndex=0, relativeFolderPath=4bc2eef4-ac1c-97bf-518b-4a32c38b9e4f-0, compressionModel=COMPRESSED, dataPlaneStatus=null, extraProperties=null, error=null, errorDesc=null) timestamp 1569356100063) to topic datacenter due to org.apache.kafka.common.errors.RecordTooLargeException: The message is 716830 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.   
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.recordSendError(RecordCollectorImpl.java:138)   
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.access$500(RecordCollectorImpl.java:50)   
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl$1.onCompletion(RecordCollectorImpl.java:201)  
    at org.apache.kafka.clients.producer.KafkaProducer.doSend(KafkaProducer.java:930)   
    at org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:856)   
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:167)   
    at org.apache.kafka.streams.processor.internals.RecordCollectorImpl.send(RecordCollectorImpl.java:102)   
    at org.apache.kafka.streams.processor.internals.SinkNode.process(SinkNode.java:89)   
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:201)   
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:180)   
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:133)   
    at com.lnt.eg.kafka.RecordingRequestProcessor.lambda$processPayloadForDataCenter$0(RecordingRequestProcessor.java:116)   
    at java.util.ArrayList.forEach(ArrayList.java:1257)   
    at com.lnt.eg.kafka.RecordingRequestProcessor.processPayloadForDataCenter(RecordingRequestProcessor.java:116)   
    at com.lnt.eg.kafka.RecordingRequestProcessor.transform(RecordingRequestProcessor.java:85)   
    at com.lnt.eg.kafka.RecordingRequestProcessor.transform(RecordingRequestProcessor.java:27)   
    at org.apache.kafka.streams.kstream.internals.TransformerSupplierAdapter$1.transform(TransformerSupplierAdapter.java:47)   
    at org.apache.kafka.streams.kstream.internals.TransformerSupplierAdapter$1.transform(TransformerSupplierAdapter.java:36)   
    at org.apache.kafka.streams.kstream.internals.KStreamFlatTransform$KStreamFlatTransformProcessor.process(KStreamFlatTransform.java:56)   
    at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:117)   
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:201)   
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:180)   
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:133)   
    at org.apache.kafka.streams.kstream.internals.KStreamFilter$KStreamFilterProcessor.process(KStreamFilter.java:43)   
    at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:117)   
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:201)   
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:180)   
    at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:133)   
    at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:87)   
    at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:366)   
    at org.apache.kafka.streams.processor.internals.AssignedStreamsTasks.process(AssignedStreamsTasks.java:199)   
    at org.apache.kafka.streams.processor.internals.TaskManager.process(TaskManager.java:420)   
    at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:890)   
    at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:805)   
    at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:774)   
Caused by: org.apache.kafka.common.errors.RecordTooLargeException: The message is 716830 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
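
For context, a minimal sketch of how a 524288-byte max.request.size would typically be passed to the producer embedded in a Kafka Streams application; the application id and bootstrap servers below are placeholders, not taken from the question:

```
import java.util.Properties;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class StreamsProducerConfigExample {

    public static Properties streamsConfig() {
        final Properties props = new Properties();
        // Placeholder application id and broker list.
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "recording-request-processor");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Forward max.request.size (in bytes) to the internal producer.
        props.put(StreamsConfig.producerPrefix(ProducerConfig.MAX_REQUEST_SIZE_CONFIG), 524288);
        return props;
    }
}
```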



Record header support was added to the Streams API in version 2.0; before that, headers were dropped. Since 2.0 they are passed through by copying, and yes, headers do count toward the message size. If you do not need the headers, you may want to strip them manually via transformValues() using context.headers().remove().

@MatthiasJ.Sax I don't think the headers could make the message that large. As I mentioned, the total header size is under 100 bytes and the message itself is under 1 KB, so how could RecordCollectorImpl complain that the record is too large? We are also using the default batch.size (16 KB), so even a full batch should not be anywhere near that size.

Not sure. But the error says the message is 716830 bytes when serialized, so it may be worth stepping into the serializer and verifying what bytes it returns for the offending record; that is the place to start when tracking down where those bytes come from.
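
A minimal sketch of the header-stripping approach suggested above, using transformValues() with a ValueTransformerWithKey so that context.headers() is available; the class and method names are illustrative, not code from the original application:

```
import org.apache.kafka.common.header.Header;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.ValueTransformerWithKey;
import org.apache.kafka.streams.kstream.ValueTransformerWithKeySupplier;
import org.apache.kafka.streams.processor.ProcessorContext;

public final class HeaderStripping {

    // Removes all headers from the record currently being processed and
    // forwards the value unchanged.
    public static final class StripHeaders<K, V> implements ValueTransformerWithKey<K, V, V> {
        private ProcessorContext context;

        @Override
        public void init(final ProcessorContext context) {
            this.context = context;
        }

        @Override
        public V transform(final K key, final V value) {
            // Drop every header attached to the current record.
            for (final Header header : context.headers().toArray()) {
                context.headers().remove(header.key());
            }
            return value;
        }

        @Override
        public void close() { }
    }

    // Wires the transformer into a stream, e.g. before writing to the sink topic.
    public static <K, V> KStream<K, V> stripHeaders(final KStream<K, V> stream) {
        return stream.transformValues(
            (ValueTransformerWithKeySupplier<K, V, V>) StripHeaders::new);
    }
}
```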