Java: Connection reset when sending data from AWS Lambda to an SQS queue

I'm using the AWS SDK for Java to send data from AWS Lambda to SQS.

We are hitting this exception:

Caused by: java.net.SocketException: Connection reset
at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:115)
at java.net.SocketOutputStream.write(SocketOutputStream.java:155)
at sun.security.ssl.OutputRecord.writeBuffer(OutputRecord.java:431)
at sun.security.ssl.OutputRecord.write(OutputRecord.java:417)
at sun.security.ssl.SSLSocketImpl.writeRecordInternal(SSLSocketImpl.java:886)
at sun.security.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:857)
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:123)
at org.apache.http.impl.io.SessionOutputBufferImpl.streamWrite(SessionOutputBufferImpl.java:124)
at org.apache.http.impl.io.SessionOutputBufferImpl.write(SessionOutputBufferImpl.java:160)
at org.apache.http.impl.io.ContentLengthOutputStream.write(ContentLengthOutputStream.java:113)
at org.apache.http.impl.io.ContentLengthOutputStream.write(ContentLengthOutputStream.java:120)
at org.apache.http.entity.StringEntity.writeTo(StringEntity.java:167)
at org.apache.http.impl.DefaultBHttpClientConnection.sendRequestEntity(DefaultBHttpClientConnection.java:156)
at org.apache.http.impl.conn.CPoolProxy.sendRequestEntity(CPoolProxy.java:160)
at org.apache.http.protocol.HttpRequestExecutor.doSendRequest(HttpRequestExecutor.java:238)
at com.amazonaws.http.protocol.SdkHttpRequestExecutor.doSendRequest(SdkHttpRequestExecutor.java:63)
at org.apache.http.protocol.HttpRequestExecutor.execute(HttpRequestExecutor.java:123)
at org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:272)
at org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
at org.apache.http.impl.client.InternalHttpClient.doExecute(InternalHttpClient.java:185)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:83)
at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:56)
at com.amazonaws.http.apache.client.impl.SdkHttpClient.execute(SdkHttpClient.java:72)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1236)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1056)
Code:

List<SendMessageBatchRequestEntry> sqsList = new LinkedList<>();
    int batchId = 0; // To send a unique batchId for each msg in the batch
    for (Metadata metadata : metadataList) {
        String jsonString = new Gson().toJson(metadata);
        sqsList.add(new SendMessageBatchRequestEntry(batchId + "", jsonString));
        batchId++;
    }
    amazonSqs.sendMessageBatch(new SendMessageBatchRequest(queueUrl, sqsList));
Background on what we're trying to do:

We have a main Lambda function that creates and populates an SQS queue with the details of every record that should be processed.
The SQS queue then needs to be set up to form batches of X messages and automatically invoke another worker Lambda function for each batch (a sketch of that consuming side follows below).
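
For reference, a minimal sketch of what that consuming side could look like, assuming an SQS event source mapping with batch size X and the aws-lambda-java-events library (the class and method names below are illustrative, not part of the original question):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.SQSEvent;

// Hypothetical worker Lambda: the event source mapping delivers up to
// X queue messages per invocation as a single SQSEvent.
public class BatchWorkerHandler implements RequestHandler<SQSEvent, Void> {
    @Override
    public Void handleRequest(SQSEvent event, Context context) {
        for (SQSEvent.SQSMessage message : event.getRecords()) {
            // Each record body is one JSON-serialized Metadata entry.
            process(message.getBody());
        }
        return null;
    }

    private void process(String body) {
        // Placeholder for the per-record work (hypothetical).
    }
}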

Your code seems fine, and as far as I remember (I've seen this error many times myself), it happens frequently when using the SDK because of how the SDK reuses HTTP connections. The error just tells you that the HTTP connection was reset, but the SDK has built-in retry logic for failed requests, so unless you see this error on every request, you should not have a problem.
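
As a side note, if the resets do keep surfacing, the SDK's retry count can be tuned; here is a minimal sketch assuming AWS SDK for Java v1 (which the com.amazonaws.http classes in the stack trace suggest):

import com.amazonaws.ClientConfiguration;
import com.amazonaws.services.sqs.AmazonSQS;
import com.amazonaws.services.sqs.AmazonSQSClientBuilder;

// Raise the retry count so transient connection resets are retried a few
// more times before the SDK gives up and surfaces the SocketException.
ClientConfiguration clientConfig = new ClientConfiguration()
        .withMaxErrorRetry(5); // the SDK default is 3 for most services

AmazonSQS amazonSqs = AmazonSQSClientBuilder.standard()
        .withClientConfiguration(clientConfig)
        .build();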

The maximum number of messages per batch is 10. You can't fill a single request with 20k messages and send it to the SQS queue in one go. Try splitting it into batches of 10.


We are able to send them in batches of 10. Working code:

List<SendMessageBatchRequestEntry> sqsList = new LinkedList<>();
    int batchId = 1; // To send a unique batchId for each msg in a batch
    for (Metadata metadata : metadataList) {
        String jsonString = new Gson().toJson(metadata);
        if (sqsList.size() == 10) {
            amazonSqs.sendMessageBatch(new SendMessageBatchRequest(queueUrl, sqsList));
            sqsList.clear();
        }
        sqsList.add(new SendMessageBatchRequestEntry(batchId + "", jsonString));
        batchId++;
    }
    if (sqsList.size() > 0) {
        amazonSqs.sendMessageBatch(new SendMessageBatchRequest(queueUrl, sqsList));
    }
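
One note on the snippet above: the ID passed to SendMessageBatchRequestEntry only has to be unique within a single batch request, so an ever-incrementing counter across batches works fine. Keep in mind that, besides the 10-entry cap, the combined payload of all entries in one SendMessageBatch call is also limited to 256 KB.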

Thanks Matus... I wouldn't even mind if the SDK failed over for this, but it happens every single time. The total number of records I'm trying to send is about 20k. Could the amount of data be causing the problem?
@Raushan Are you using a FIFO or a standard queue, and how large are the messages you are sending to it?
It's a standard queue, and the size is 6800 KB :(. How do I break it up? I mean, it should be sent to the queue in batches so that another Lambda can process it batch by batch (since we're limited to 15 minutes).
@Raushan The maximum size of a message you can send to the queue is 256 KB; anything larger is rejected. There are workarounds, but I'd need to know how the messages are used and what they contain. Why are the messages so large?
Hmm... we're moving CRON jobs from EC2 instances to Lambda. One of the jobs has 20k records, and for each record we need to collect metadata from a service, so 15 minutes is too short. We then want to push that data into a queue and process it from another Lambda that handles X records per invocation. That's why the SQS queue needs to form batches of X messages. Hope I'm heading in the right direction!!!
Thanks Dejan. A simple code tweak did the trick; I've added the code to the answer to this question.
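
Regarding the 256 KB limit discussed in the comments above, a minimal sketch of guarding against oversized payloads before enqueueing (the helper name and the skip-handling are hypothetical; 262,144 bytes is SQS's documented per-message maximum):

import java.nio.charset.StandardCharsets;

// Hypothetical guard: SQS rejects any message body larger than 256 KB
// (262,144 bytes), so oversized records must be skipped, split, or
// offloaded (e.g. to S3) before they are added to a batch.
private static boolean fitsInSqsMessage(String jsonString) {
    return jsonString.getBytes(StandardCharsets.UTF_8).length <= 262_144;
}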