Java Netty: custom handler for HTTP PUT requests fails with "cannot send more responses than requests"


I am working on a Netty server and have written a custom handler that receives file uploads via HTTP PUT requests, but I am having a problem with it. When I only send a few files at a time everything seems fine, but after roughly 300 connections the server appears to break down. From then on it throws the following exception for every request it receives, stops processing requests altogether, and has to be restarted:

    java.lang.IllegalStateException: cannot send more responses than requests
        at org.jboss.netty.handler.codec.http.HttpContentEncoder.writeRequested(HttpContentEncoder.java:104)
        at org.jboss.netty.handler.execution.ExecutionHandler.handleDownstream(ExecutionHandler.java:165)
        at org.jboss.netty.channel.Channels.write(Channels.java:605)
        at org.jboss.netty.channel.Channels.write(Channels.java:572)
....
Here is the messageReceived method from my handler. All of the requests I am dealing with are chunked, so I will include the whole method:

@Override
public void messageReceived(ChannelHandlerContext context, MessageEvent event) throws Exception {
    try {
        log.trace("Message recieved");
        if (newMessage) {
            log.trace("New message");
            HttpRequest request = (HttpRequest) event.getMessage();
            setDestinationFile(context, request);
            newMessage = false;
            if (request.isChunked()) {
                log.trace("Chunked request, set readingChunks true and create byte buffer");
                requestContentStream = new ByteArrayOutputStream();
                readingChunks = true;
                return;
            } else {
                log.trace("Request not chunked");
                writeNonChunkedFile(request);
                requestComplete(event);
                return;
            }
        } else if (readingChunks){
            log.trace("Reading chunks");
            HttpChunk chunk = (HttpChunk) event.getMessage();
            if (chunk.isLast()) {
                log.trace("Read last chunk");
                readingChunks = false;
                writeChunkedFile();
                requestComplete(event);
                return;
            } else {
                log.trace("Buffering chunk content to byte buffer");
                requestContentStream.write(chunk.getContent().array());
                return;
            }
        } else {
            // should not happen
            log.error("Error handling of MessageEvent, expecting a new message or a chunk from a previous message");
        }
    } catch (Exception ex) {
        log.error("Exception: [" + ex + "]");
        sendError(context, INTERNAL_SERVER_ERROR);
    }
}
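The log entries in the edit further down show curl sending an Expect: 100-continue header and a 100 Continue response going back out, but the post does not show where that response is written. A minimal sketch of acknowledging the header early in messageReceived (an assumption for illustration, not the poster's code; it uses the same org.jboss.netty.handler.codec.http classes as the handler above) could look like this. Per the comments at the end, HttpContentEncoder excludes CONTINUE from its one-response-per-request accounting, so this early write does not count against the limit:

// Sketch only: acknowledge "Expect: 100-continue" before buffering the body.
// This is an assumption about where the 100 Continue seen in the logs comes from;
// the original post does not include this code.
HttpRequest request = (HttpRequest) event.getMessage();
String expect = request.getHeader(HttpHeaders.Names.EXPECT);
if (expect != null && "100-continue".equalsIgnoreCase(expect)) {
    // CONTINUE responses are not counted by HttpContentEncoder's bookkeeping.
    event.getChannel().write(
            new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.CONTINUE));
}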
Here is how I write out a chunked request once all chunks have arrived:

private void writeChunkedFile() throws IOException {
    log.trace("Writing chunked file");
    byte[] data = requestContentStream.toByteArray();
    FileOutputStream fos = new FileOutputStream(destinationFile);
    fos.write(data);
    fos.close();
    log.debug("File upload complete, [chunked], path: [" + destinationFile.getAbsolutePath() + "] size: [" + destinationFile.length() + "] bytes");
}
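The handler accumulates every chunk in a ByteArrayOutputStream and only writes the file when the last chunk arrives, so the whole upload sits in memory. As a hedged alternative (not part of the original post; the field and method names below are illustrative), each chunk could be streamed to disk as it arrives:

// Hypothetical streaming variant, not the poster's code.
private FileOutputStream destinationStream; // assumed to be opened when the request starts

private void writeChunk(HttpChunk chunk) throws IOException {
    ChannelBuffer content = chunk.getContent();
    // Copy only the readable bytes; array() can expose more than the chunk's
    // payload, or be unsupported for buffers without a backing array.
    byte[] data = new byte[content.readableBytes()];
    content.readBytes(data);
    destinationStream.write(data);
}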
And here is how I send the response and close the connection:

private void requestComplete(MessageEvent event) {
    log.trace("Request complete");
    HttpResponse response = new DefaultHttpResponse(HTTP_1_1, OK);
    Channel channel = event.getChannel();
    ChannelFuture cf = channel.write(response);
    cf.addListener(ChannelFutureListener.CLOSE);
}
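requestComplete writes a bare 200 OK and then closes the channel once the write completes. A small variant (an assumption, not from the post) that declares an empty body explicitly, so the client does not have to wait for the close to know the response is finished, would be:

// Sketch of a variant with an explicit Content-Length header (assumption, not the poster's code).
private void requestComplete(MessageEvent event) {
    HttpResponse response = new DefaultHttpResponse(HttpVersion.HTTP_1_1, HttpResponseStatus.OK);
    // Explicit empty body; without it the client only learns the response has
    // ended when the connection is closed.
    response.setHeader(HttpHeaders.Names.CONTENT_LENGTH, 0);
    event.getChannel().write(response).addListener(ChannelFutureListener.CLOSE);
}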
I have tried a few things in requestComplete, one of them being channel.close(), but that did not seem to help. Any other ideas?

Here is my pipeline:

@Override
public ChannelPipeline getPipeline() throws Exception {
    final ChannelPipeline pipeline = pipeline();
    pipeline.addLast("decoder", new HttpRequestDecoder());
    pipeline.addLast("encoder", new HttpResponseEncoder());
    pipeline.addLast("deflater", new HttpContentCompressor());
    pipeline.addLast("ExecutionHandler", executionHandler);
    pipeline.addLast("handler", new FileUploadHandler());
    return pipeline;
}
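One of the comments below asks what the executionHandler does in this pipeline; its construction is not shown in the post. A typical Netty 3 setup (purely an assumption here, with illustrative pool sizes) wraps an OrderedMemoryAwareThreadPoolExecutor so channel events are processed off the I/O worker threads while staying in order per channel:

// Hypothetical construction of the executionHandler field (not shown in the post).
ExecutionHandler executionHandler = new ExecutionHandler(
        new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576));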

Thanks for any thoughts or ideas.

Edit: Sample log entries when logging between the deflater and the handler in the pipeline:

2012-03-23T07:46:40.993 [New I/O server worker #1-6] WARN  NbEvents [c.c.c.r.d.l.s.h.SbApiMessageLogger.writeRequested] [] - Sending [DefaultHttpResponse(chunked: false)
HTTP/1.1 100 Continue]
2012-03-23T07:46:40.995 [New I/O server worker #1-6] WARN  NbEvents [c.c.c.r.d.l.s.h.SbApiMessageLogger.writeRequested] [] - Sending [DefaultHttpResponse(chunked: false)
HTTP/1.1 500 Internal Server Error
Content-Type: text/plain; charset=UTF-8]
2012-03-23T07:46:41.000 [New I/O server worker #1-7] DEBUG NbEvents [c.c.c.r.d.l.s.h.SbApiMessageLogger.messageReceived] [] - Received [PUT /a/deeper/path/testFile.txt HTTP/1.1
User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.12.9.0 zlib/1.2.3 libidn/1.18 libssh2/1.2.2
Host: 192.168.0.1:8080
Accept: */*
Content-Length: 256000
Expect: 100-continue
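These entries show a 100 Continue followed by a 500 Internal Server Error being written on the same worker, and the comments below point out that HttpContentEncoder only allows one response per received request (100 Continue excluded). If an error response and a normal response could ever both be written for one request, a simple guard (a sketch of the general idea only, not the poster's eventual fix) would keep the second write from tripping the "cannot send more responses than requests" check:

// Sketch only: ensure at most one real response is written per request.
// "responseSent" is a hypothetical per-request flag, reset when a new request begins.
private boolean responseSent;

private void writeResponseOnce(Channel channel, HttpResponse response) {
    if (responseSent) {
        return; // a response for this request has already been written
    }
    responseSent = true;
    channel.write(response).addListener(ChannelFutureListener.CLOSE);
}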

This turned out to be a problem elsewhere in my implementation, unrelated to any of the code posted here; the logic posted above is sound and works fine. Nonetheless, many thanks to everyone for the valuable input.

Somewhat unrelated to the question, but adding the ExecutionHandler at the end of the ChannelPipeline gains you nothing: an ExecutionHandler only applies to the ChannelHandlers that come after it in the ChannelPipeline.

How is the HttpContentEncoder being added to your ChannelPipeline? I don't see it in your code. Any chance you could share an example?

Thanks, I have updated my pipeline so that pipeline.addLast("handler", new FileUploadHandler()) comes after the executionHandler. As for HttpContentEncoder, I don't use it anywhere in my code; do I need it? I thought response encoding was handled by the HttpResponseEncoder. I also tried adding pipeline.addLast("chunkedWriter", new ChunkedWriteHandler()). With the ChunkedWriteHandler I still have the same problem, but the exception on the server is now different: org.jboss.netty.handler.stream.ChunkedWriteHandler.discard(ChunkedWriteHandler.java:171).

HttpContentCompressor is a subclass of HttpContentEncoder; that is how an HttpContentEncoder ends up in your pipeline. HttpContentEncoder's state only allows one response to be sent for each request received (HTTP CONTINUE responses excluded). Could any handler be sending more than one response for a single request? And what does the executionHandler do in the pipeline?

If anyone else runs into the same error, it would be helpful to post what you found to be the problem in your implementation.