Kafka Connect sink to S3: `AmazonS3Exception: We encountered an internal error`

Tags: amazon-s3, apache-kafka, apache-kafka-connect, confluent-platform

I have a Kafka Connect S3 sink that writes records to Amazon S3. This particular sink writes roughly 4k records per second. Every few days, one of the Kafka Connect worker tasks fails with the error below (full stack trace at the end of this post). A manual restart fixes the problem completely, until it recurs a few days later.

I have also increased `s3.part.retries` from its default of 3 to 10, but that seems to have had no effect.
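For context, a minimal sketch of what the sink configuration looks like with that setting raised (standalone `.properties` form). The connector name, topic, bucket, and region are placeholders, not values from the original post; the Avro format class matches the AvroRecordWriterProvider visible in the stack trace:

    # Hypothetical S3 sink config; all names/bucket/region are placeholders.
    name=s3-sink
    connector.class=io.confluent.connect.s3.S3SinkConnector
    tasks.max=1
    topics=my-topic
    s3.bucket.name=my-bucket
    s3.region=us-east-1
    storage.class=io.confluent.connect.s3.storage.S3Storage
    # Avro output, matching AvroRecordWriterProvider in the trace below.
    format.class=io.confluent.connect.s3.format.avro.AvroFormat
    flush.size=10000
    # Raised from the default of 3; per the question, this did not help.
    s3.part.retries=10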

Are there any other workarounds?

I am running Confluent 5.0.1 with Kafka 2.0.1. I don't see any relevant changes in the latest and greatest, Confluent 5.1.2.


It's probably best to contact AWS Support and raise these requests with them.

Did you ever resolve this? I'm facing the same issue. The best solution I have is monitoring + alerting: the error occurs roughly once a month, an alert fires, and someone has to restart the connector by hand. Lately it seems to happen less often, so it's tolerable.
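If manual restarts become a chore, one option is to automate them with Kafka Connect's REST API, which exposes task state via GET /connectors/{name}/status and a per-task restart via POST /connectors/{name}/tasks/{id}/restart. A minimal sketch, assuming the worker's REST interface is reachable at localhost:8083 and the connector is named s3-sink (both placeholders):

    # restart_failed_tasks.py -- minimal sketch; worker URL and connector name are assumptions.
    import requests  # third-party: pip install requests

    CONNECT_URL = "http://localhost:8083"  # Kafka Connect worker REST endpoint (placeholder)
    CONNECTOR = "s3-sink"                  # connector name (placeholder)

    def restart_failed_tasks():
        # Fetch connector and task status from the Connect REST API.
        status = requests.get(f"{CONNECT_URL}/connectors/{CONNECTOR}/status").json()
        for task in status.get("tasks", []):
            if task["state"] == "FAILED":
                task_id = task["id"]
                # Restart only the failed task; healthy tasks keep running.
                requests.post(
                    f"{CONNECT_URL}/connectors/{CONNECTOR}/tasks/{task_id}/restart"
                ).raise_for_status()
                print(f"restarted task {task_id} of {CONNECTOR}")

    if __name__ == "__main__":
        restart_failed_tasks()

Run from cron every few minutes and pair it with the alerting above, so restarts are logged rather than silent.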
"org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask due to unrecoverable exception.
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:586)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:322)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:225)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.kafka.connect.errors.DataException: Multipart upload failed to complete.
    at io.confluent.connect.s3.storage.S3OutputStream.commit(S3OutputStream.java:160)
    at io.confluent.connect.s3.format.avro.AvroRecordWriterProvider$1.commit(AvroRecordWriterProvider.java:97)
    at io.confluent.connect.s3.TopicPartitionWriter.commitFile(TopicPartitionWriter.java:505)
    at io.confluent.connect.s3.TopicPartitionWriter.commitFiles(TopicPartitionWriter.java:485)
    at io.confluent.connect.s3.TopicPartitionWriter.executeState(TopicPartitionWriter.java:223)
    at io.confluent.connect.s3.TopicPartitionWriter.write(TopicPartitionWriter.java:176)
    at io.confluent.connect.s3.S3SinkTask.put(S3SinkTask.java:195)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:564)
    ... 10 more
Caused by: com.amazonaws.services.s3.model.AmazonS3Exception: We encountered an internal error. Please try again. (Service: null; Status Code: 0; Error Code: InternalError; Request ID: 08C85ACBE86D55E1), S3 Extended Request ID: 04dLa3n9JpDNKdGesZc9jrNg1Jstx5mwMB6fFEm+7ZpkFz+ivkn4IN7AlRsq894+YuaLbc2BHuM=
    at com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$CompleteMultipartUploadHandler.doEndElement(XmlResponsesSaxParser.java:1773)
    at com.amazonaws.services.s3.model.transform.AbstractHandler.endElement(AbstractHandler.java:52)
    at org.apache.xerces.parsers.AbstractSAXParser.endElement(Unknown Source)
    at org.apache.xerces.impl.XMLNSDocumentScannerImpl.scanEndElement(Unknown Source)
    at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl$FragmentContentDispatcher.dispatch(Unknown Source)
    at org.apache.xerces.impl.XMLDocumentFragmentScannerImpl.scanDocument(Unknown Source)
    at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
    at org.apache.xerces.parsers.XML11Configuration.parse(Unknown Source)
    at org.apache.xerces.parsers.XMLParser.parse(Unknown Source)
    at org.apache.xerces.parsers.AbstractSAXParser.parse(Unknown Source)
    at com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.parseXmlInputStream(XmlResponsesSaxParser.java:142)
    at com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser.parseCompleteMultipartUploadResponse(XmlResponsesSaxParser.java:462)
    at com.amazonaws.services.s3.model.transform.Unmarshallers$CompleteMultipartUploadResultUnmarshaller.unmarshall(Unmarshallers.java:230)
    at com.amazonaws.services.s3.model.transform.Unmarshallers$CompleteMultipartUploadResultUnmarshaller.unmarshall(Unmarshallers.java:227)
    at com.amazonaws.services.s3.internal.S3XmlResponseHandler.handle(S3XmlResponseHandler.java:62)
    at com.amazonaws.services.s3.internal.ResponseHeaderHandlerChain.handle(ResponseHeaderHandlerChain.java:44)
    at com.amazonaws.services.s3.internal.ResponseHeaderHandlerChain.handle(ResponseHeaderHandlerChain.java:30)
    at com.amazonaws.http.response.AwsResponseHandlerAdapter.handle(AwsResponseHandlerAdapter.java:70)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleResponse(AmazonHttpClient.java:1501)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1222)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1035)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:747)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:721)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:704)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$500(AmazonHttpClient.java:672)
    at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:654)
    at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:518)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4185)
    at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4132)
    at com.amazonaws.services.s3.AmazonS3Client.completeMultipartUpload(AmazonS3Client.java:2933)
    at io.confluent.connect.s3.storage.S3OutputStream$MultipartUpload.complete(S3OutputStream.java:246)
    at io.confluent.connect.s3.storage.S3OutputStream.commit(S3OutputStream.java:156)
    ... 17 more