Java: AmazonS3.getObject(request).getObjectContent() frequently throws NoHttpResponseException


I have a helper routine that attempts a multithreaded download from S3. Fairly often (for roughly 1% of requests) I get a log message about a NoHttpResponseException, which after a while turns into a SocketTimeoutException when reading from the S3ObjectInputStream.

Am I doing something wrong, or is it just my router/internet? Or is this to be expected from S3? I haven't noticed problems anywhere else.

  public void
fastRead(final String key, Path path) throws StorageException 
    {
        final int pieceSize = 1<<20;
        final int threadCount = 8;

        try (FileChannel channel = (FileChannel) Files.newByteChannel( path, WRITE, CREATE, TRUNCATE_EXISTING ))
        {
            final long size = s3.getObjectMetadata(bucket, key).getContentLength();
            final long pieceCount = (size - 1) / pieceSize + 1;

            ThreadPool pool = new ThreadPool (threadCount);
            final AtomicInteger progress = new AtomicInteger();

            for(int i = 0; i < size; i += pieceSize)
            {
                final int start = i;
                final long end = Math.min(i + pieceSize, size);

                pool.submit(() ->
                {
                    boolean retry;
                    do
                    {
                        retry = false;
                        try
                        {
                            GetObjectRequest request = new GetObjectRequest(bucket, key);
                            request.setRange(start, end - 1);
                            S3Object piece = s3.getObject(request);
                            ByteBuffer buffer = ByteBuffer.allocate ((int)(end - start));
                            try(InputStream stream = piece.getObjectContent())
                            {
                                IOUtils.readFully( stream, buffer.array() );
                            }
                            channel.write( buffer, start );
                            double percent = (double) progress.incrementAndGet() / pieceCount * 100.0;
                            System.err.printf("%.1f%%\n", percent);
                        }
                        catch(java.net.SocketTimeoutException | java.net.SocketException e)
                        {
                            System.err.println("Read timed out. Retrying...");
                            retry = true;
                        }
                    }
                    while (retry);

                });
            }

            pool.<IOException>await();
        }
        catch(AmazonClientException | IOException | InterruptedException e)
        {
            throw new StorageException (e);
        }
    }

2014-05-28 08:49:58 INFO com.amazonaws.http.AmazonHttpClient executeHelper Unable to execute HTTP request: The target server failed to respond
org.apache.http.NoHttpResponseException: The target server failed to respond
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:95)
at org.apache.http.impl.conn.DefaultHttpResponseParser.parseHead(DefaultHttpResponseParser.java:62)
at org.apache.http.impl.io.AbstractMessageParser.parse(AbstractMessageParser.java:254)
at org.apache.http.impl.AbstractHttpClientConnection.receiveResponseHeader(AbstractHttpClientConnection.java:289)
at org.apache.http.impl.conn.DefaultClientConnection.receiveResponseHeader(DefaultClientConnection.java:252)
at org.apache.http.impl.conn.ManagedClientConnectionImpl.receiveResponseHeader(ManagedClientConnectionImpl.java:191)
at org.apache.http.protocol.HttpRequestExecutor.doReceiveResponse(HttpRequestExecutor.java:300)
at com.amazonaws.http.protocol.SdkHttpRequestExecutor.doReceiveResponse(SdkHttpRequestExecutor.java:66)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:127)
at org.apache.http.impl.client.DefaultRequestDirector.tryExecute(DefaultRequestDirector.java:713)
at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:518)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:906)
at org.apache.http.impl.client.AbstractHttpClient.execute(AbstractHttpClient.java:805)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:385)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:233)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3569)
at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:1130)
at com.syncwords.files.S3Storage.lambda$fastRead$0(S3Storage.java:123)
at com.syncwords.files.S3Storage$$Lambda$3/1397088232.run(Unknown Source)
at net.almson.util.ThreadPool.lambda$submit$8(ThreadPool.java:61)
at net.almson.util.ThreadPool$$Lambda$4/1980698753.call(Unknown Source)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:744)
I've run into a similar problem before. I found that every time you are done with an S3Object, you need to close() it in order to release some resources back to the pool.
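For example, something along these lines (a minimal sketch, assuming AWS SDK for Java 1.x where S3Object implements Closeable; the helper name and parameters are illustrative):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.util.IOUtils;
import java.io.IOException;

// Illustrative helper: try-with-resources closes the S3Object, which closes the
// content stream and returns the underlying HTTP connection to the pool.
byte[] readAll(AmazonS3 s3, String bucket, String key) throws IOException
{
    try (S3Object object = s3.getObject(new GetObjectRequest(bucket, key)))
    {
        return IOUtils.toByteArray(object.getObjectContent());
    }
}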

Thanks for adding the link. By the way, I think increasing the maximum connections, retries, and timeouts (the default maximum connections is 50) may also help with the problem, like this:

AmazonS3 s3Client = new AmazonS3Client(aws_credential,
                       new ClientConfiguration().withMaxConnections(100)
                                      .withConnectionTimeout(120 * 1000)
                                      .withMaxErrorRetry(15));

UPDATE: The AWS SDK has been updated in response to an issue I created on GitHub. I'm not sure exactly how things have changed; the second part of this answer (the criticism of getObject) may (hopefully?) now be wrong.


S3 is designed to fail, and it fails often

Fortunately, the AWS SDK for Java has built-in facilities for retrying requests. Unfortunately, they do not cover the SocketExceptions that occur while downloading S3 objects (they do work when uploading and performing other operations). That is why code similar to the one in the question is needed (see below).

Even when that mechanism works as intended, you will still see messages in your log. You can choose to hide them by filtering out INFO log events from com.amazonaws.http.AmazonHttpClient (the AWS SDK uses Apache Commons Logging).
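For example, if log4j 1.x happens to be the logging backend behind Commons Logging (an assumption about your setup), you could raise that logger's level, either in log4j.properties (log4j.logger.com.amazonaws.http.AmazonHttpClient=WARN) or programmatically:

// Assumes log4j 1.x as the Commons Logging backend; adjust for logback/JUL setups.
org.apache.log4j.Logger.getLogger("com.amazonaws.http.AmazonHttpClient")
        .setLevel(org.apache.log4j.Level.WARN);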

Depending on your network connection and the health of Amazon's servers, the retry mechanism may still fail. As lvlv pointed out, the relevant parameters are configured through ClientConfiguration. The parameter I suggest changing is the number of retries, which is 3 by default. Other things you can try are increasing or decreasing the connection and socket timeouts (the default is 50 seconds, which is not only long enough but probably too long, given that you are going to time out often no matter what) and enabling TCP KeepAlive (off by default).
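For example:

ClientConfiguration cc = new ClientConfiguration()
    .withMaxErrorRetry (10)
    .withConnectionTimeout (10_000)
    .withSocketTimeout (10_000)
    .withTcpKeepAlive (true);
AmazonS3 s3Client = new AmazonS3Client (credentials, cc);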

The retry mechanism can even be overridden by setting a RetryPolicy (again, on the ClientConfiguration). Its most interesting element is the RetryCondition, which by default:

checks various conditions in the following order:

  • Retry on AmazonClientException exceptions caused by IOException;
  • Retry on AmazonServiceException exceptions that are 500 internal server errors, 503 service unavailable errors, service throttling errors, or clock skew errors.

See SDKDefaultRetryCondition.
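A sketch of overriding it, reusing the default condition and backoff strategy but allowing more attempts (class and constant names are from com.amazonaws.retry in SDK 1.x; the retry count is illustrative, and the constructor signature should be checked against your SDK version):

import com.amazonaws.ClientConfiguration;
import com.amazonaws.retry.PredefinedRetryPolicies;
import com.amazonaws.retry.RetryPolicy;

RetryPolicy retryPolicy = new RetryPolicy(
        PredefinedRetryPolicies.DEFAULT_RETRY_CONDITION,   // decides whether to retry
        PredefinedRetryPolicies.DEFAULT_BACKOFF_STRATEGY,  // decides how long to wait between tries
        10,                                                // max error retries (illustrative)
        true);                                             // honor maxErrorRetry from ClientConfiguration
ClientConfiguration cc = new ClientConfiguration().withRetryPolicy(retryPolicy);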

Half-automatic retrying hidden elsewhere in the SDK

What the built-in mechanism (which is used throughout the AWS SDK) does not handle is reading the S3 object data.

If you call AmazonS3.getObject(GetObjectRequest getObjectRequest, File destinationFile), AmazonS3Client uses its own retry mechanism. That mechanism lives in ServiceUtils.retryableDownloadS3ObjectToFile(), which uses sub-optimal, hard-wired retry behavior (it retries only once, and never on a SocketException!). All of the code in ServiceUtils seems poorly engineered.
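For reference, that is the overload called like this (the destination file here is just an example):

// Downloads straight to a file; retries are handled by
// ServiceUtils.retryableDownloadS3ObjectToFile() rather than AmazonHttpClient.
ObjectMetadata meta = s3.getObject(new GetObjectRequest(bucket, key),
                                   new File("/tmp/example-download"));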

The code I use looks something like this:

  public void
read(String key, Path path) throws StorageException
    {
        GetObjectRequest request = new GetObjectRequest (bucket, key);

        for (int retries = 5; retries > 0; retries--) 
        try (S3Object s3Object = s3.getObject (request))
        {
            if (s3Object == null)
                return; // occurs if we set GetObjectRequest constraints that aren't satisfied

            try (OutputStream outputStream = Files.newOutputStream (path, WRITE, CREATE, TRUNCATE_EXISTING))
            {
                byte[] buffer = new byte [16_384];
                int bytesRead;
                while ((bytesRead = s3Object.getObjectContent().read (buffer)) > -1) {
                    outputStream.write (buffer, 0, bytesRead);
                }
            }
            catch (SocketException | SocketTimeoutException e)
            {
                // We retry exceptions that happen during the actual download
                // Errors that happen earlier are retried by AmazonHttpClient
                try { Thread.sleep (1000); } catch (InterruptedException i) { throw new StorageException (i); }
                log.log (Level.INFO, "Retrying...", e);
                continue;
            }
            catch (IOException e)
            {
                // There must have been a filesystem problem
                // We call `abort` to save bandwidth
                s3Object.getObjectContent().abort();
                throw new StorageException (e);
            }

            return; // Success
        }
        catch (AmazonClientException | IOException e)
        {
            // Either we couldn't connect to S3
            // or AmazonHttpClient ran out of retries
            // or s3Object.close() threw an exception
            throw new StorageException (e);
        }

        throw new StorageException ("Ran out of retries.");
    }

Closing the S3Object seems to be the same as closing the stream returned by getObjectContent(). Setting up a ClientConfiguration is a good idea. Still, I often get other errors such as "host s3.amazonaws.com not found", even when running on EC2. O.o

If you're not happy with the retry mechanism in the AWS SDK, you might also want to take a look at this; it should work well for this use case.