Amazon S3 Flink S3 read error: Data read has a different length than the expected

We are using Flink 1.7.0 (the same problem also shows up on Flink 1.8.0). When reading gzipped objects from S3 through Flink's readFile source, we frequently hit seemingly random errors like this:

org.apache.flink.fs.s3base.shaded.com.amazonaws.SdkClientException: Data read has a different length than the expected: dataLength=9713156; expectedLength=9770429; includeSkipped=true; in.getClass()=class org.apache.flink.fs.s3base.shaded.com.amazonaws.services.s3.AmazonS3Client$2; markedSupported=false; marked=0; resetSinceLastMarked=false; markCount=0; resetCount=0
    at org.apache.flink.fs.s3base.shaded.com.amazonaws.util.LengthCheckInputStream.checkLength(LengthCheckInputStream.java:151)
    at org.apache.flink.fs.s3base.shaded.com.amazonaws.util.LengthCheckInputStream.read(LengthCheckInputStream.java:93)
    at org.apache.flink.fs.s3base.shaded.com.amazonaws.internal.SdkFilterInputStream.read(SdkFilterInputStream.java:76)
    at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AInputStream.closeStream(S3AInputStream.java:529)
    at org.apache.flink.fs.shaded.hadoop3.org.apache.hadoop.fs.s3a.S3AInputStream.close(S3AInputStream.java:490)
    at java.io.FilterInputStream.close(FilterInputStream.java:181)
    at org.apache.flink.fs.s3.common.hadoop.HadoopDataInputStream.close(HadoopDataInputStream.java:89)
    at java.util.zip.InflaterInputStream.close(InflaterInputStream.java:227)
    at java.util.zip.GZIPInputStream.close(GZIPInputStream.java:136)
    at org.apache.flink.api.common.io.InputStreamFSInputWrapper.close(InputStreamFSInputWrapper.java:46)
    at org.apache.flink.api.common.io.FileInputFormat.close(FileInputFormat.java:861)
    at org.apache.flink.api.common.io.DelimitedInputFormat.close(DelimitedInputFormat.java:536)
    at org.apache.flink.streaming.api.functions.source.ContinuousFileReaderOperator$SplitReader.run(ContinuousFileReaderOperator.java:336)
In any given job we usually see many/most of the reads succeed, but almost always at least one fails (say, 1 out of 50 files).
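For reference, the source is set up roughly like this (the bucket, path, and scan interval below are placeholders, not our real values):

    import org.apache.flink.api.java.io.TextInputFormat;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.api.functions.source.FileProcessingMode;

    public class S3GzipReadJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Placeholder path; FileInputFormat decompresses *.gz transparently
            // based on the file extension, which is why GZIPInputStream shows
            // up in the stack trace above.
            String path = "s3://my-bucket/incoming/";
            TextInputFormat format = new TextInputFormat(new Path(path));

            DataStream<String> lines = env.readFile(
                    format,
                    path,
                    FileProcessingMode.PROCESS_CONTINUOUSLY,
                    60_000L); // re-scan the directory every 60s

            lines.print();
            env.execute("s3-gzip-read");
        }
    }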

It looks like the error actually originates in the AWS client, so Flink itself may not be at fault, but I'm hoping someone can offer insight into how to make this work reliably.


When the error occurs, it kills the source and cancels all the connected operators. I'm still new to Flink, but I would think this is something that could be recovered from a previous snapshot? Should I expect Flink to re-read the file when this kind of exception happens?
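For what it's worth, my understanding is that with checkpointing plus a restart strategy the reader should resume from its checkpointed splits and retry. A minimal sketch of what I mean (intervals and attempt counts are illustrative, not recommendations):

    import java.util.concurrent.TimeUnit;

    import org.apache.flink.api.common.restartstrategy.RestartStrategies;
    import org.apache.flink.api.common.time.Time;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class RestartConfig {
        static StreamExecutionEnvironment configure() {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Snapshot state (including the file reader's pending splits) every 60s
            env.enableCheckpointing(60_000L);

            // On failure, restart up to 3 times with 10s between attempts;
            // after a restore, the ContinuousFileReaderOperator continues from
            // the splits recorded in the last successful checkpoint.
            env.setRestartStrategy(
                    RestartStrategies.fixedDelayRestart(3, Time.of(10, TimeUnit.SECONDS)));
            return env;
        }
    }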

Maybe you can try allowing more connections for s3a, like:

flink:
...
    config: |
      fs.s3a.connection.maximum: 320
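If you're not going through a wrapper like the one above, the same key can, as far as I know, be set directly in flink-conf.yaml; the flink-s3-fs-hadoop filesystem forwards fs.s3a.* options to the underlying S3A client. For example (the timeout value below is just illustrative):

    # flink-conf.yaml -- fs.s3a.* keys are passed through to the S3A client
    fs.s3a.connection.maximum: 320
    # Related knob you could also experiment with:
    fs.s3a.connection.timeout: 200000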