Java: Increase the limit on JSON uploaded to the Play Framework


I am using Play Framework v2.5 (Java API). In a POST handler I get the exception shown below, which leads me to believe it may be related to the size of the JSON data being posted. I added the following to the top of the controller action, but it did not solve the problem:

@BodyParser.Of(value = BodyParser.Json.class, maxLength = 1024 * 1024 * 10 * 10)
Is there another configuration (perhaps a setting in the application.conf file) or some other place where I can increase the size limit allowed for this POST request? I am using the Java API.

17:01:23.603 43598 [New I/O worker #3] RequestBodyHandler ERROR - Exception caught in RequestBodyHandler
java.nio.channels.ClosedChannelException: null
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.cleanUpWriteBuffer(AbstractNioWorker.java:433) ~[netty-3.10.4.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.writeFromUserCode(AbstractNioWorker.java:128) ~[netty-3.10.4.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java:99) ~[netty-3.10.4.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:36) ~[netty-3.10.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:779) ~[netty-3.10.4.Final.jar:na]
    at org.jboss.netty.channel.Channels.write(Channels.java:725) ~[netty-3.10.4.Final.jar:na]
    at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.doEncode(OneToOneEncoder.java:71) ~[netty-3.10.4.Final.jar:na]
    at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:59) ~[netty-3.10.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591) ~[netty-3.10.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:784) ~[netty-3.10.4.Final.jar:na]
    at com.typesafe.netty.http.pipelining.HttpPipeliningHandler.handleDownstream(HttpPipeliningHandler.java:88) ~[netty-http-pipelining-1.1.4.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591) ~[netty-3.10.4.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:784) ~[netty-3.10.4.Final.jar:na]
    at play.core.server.netty.NettyResultStreamer$.sendDownstream(NettyResultStreamer.scala:182) ~[play-netty-server_2.11-2.4.6.jar:2.4.6]
    at play.core.server.netty.NettyResultStreamer$.play$core$server$netty$NettyResultStreamer$$nettyStreamIteratee(NettyResultStreamer.scala:140) ~[play-netty-server_2.11-2.4.6.jar:2.4.6]
    at play.core.server.netty.NettyResultStreamer$$anonfun$play$core$server$netty$NettyResultStreamer$$send$1$1.streamEnum$1(NettyResultStreamer.scala:79) ~[play-netty-server_2.11-2.4.6.jar:2.4.6]
    at play.core.server.netty.NettyResultStreamer$$anonfun$play$core$server$netty$NettyResultStreamer$$send$1$1.apply(NettyResultStreamer.scala:86) ~[play-netty-server_2.11-2.4.6.jar:2.4.6]
    at play.core.server.netty.NettyResultStreamer$$anonfun$play$core$server$netty$NettyResultStreamer$$send$1$1.apply(NettyResultStreamer.scala:60) ~[play-netty-server_2.11-2.4.6.jar:2.4.6]
    at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:251) ~[scala-library-2.11.7.jar:na]
    at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:249) ~[scala-library-2.11.7.jar:na]
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) ~[scala-library-2.11.7.jar:na]
    at play.api.libs.iteratee.Execution$trampoline$.executeScheduled(Execution.scala:109) ~[play-iteratees_2.11-2.4.6.jar:2.4.6]
    at play.api.libs.iteratee.Execution$trampoline$.execute(Execution.scala:71) ~[play-iteratees_2.11-2.4.6.jar:2.4.6]
    at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40) ~[scala-library-2.11.7.jar:na]
    at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248) ~[scala-library-2.11.7.jar:na]
    at scala.concurrent.impl.Promise$DefaultPromise.link(Promise.scala:304) ~[scala-library-2.11.7.jar:na]
    at scala.concurrent.impl.Promise$DefaultPromise.linkRootOf(Promise.scala:289) ~[scala-library-2.11.7.jar:na]
    at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:253) ~[scala-library-2.11.7.jar:na]
    at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:249) ~[scala-library-2.11.7.jar:na]
    at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) ~[scala-library-2.11.7.jar:na]
    at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55) ~[akka-actor_2.11-2.3.13.jar:na]
    at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91) ~[akka-actor_2.11-2.3.13.jar:na]
    at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91) ~[akka-actor_2.11-2.3.13.jar:na]
    at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91) ~[akka-actor_2.11-2.3.13.jar:na]
    at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72) ~[scala-library-2.11.7.jar:na]
    at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90) ~[akka-actor_2.11-2.3.13.jar:na]
    at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40) ~[akka-actor_2.11-2.3.13.jar:na]
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397) ~[akka-actor_2.11-2.3.13.jar:na]
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) ~[scala-library-2.11.7.jar:na]
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339) ~[scala-library-2.11.7.jar:na]
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) ~[scala-library-2.11.7.jar:na]
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107) ~[scala-library-2.11.7.jar:na]

I am going to use play.http.parser.maxMemoryBuffer in application.conf to increase the data limit. I will update this answer once I get it working.

The various buffers in Play Framework can be adjusted with the following fields in the application.conf file:

parsers.text.maxLength=
play.http.parser.maxDiskBuffer=
play.http.parser.maxMemoryBuffer=

Increasing the buffer size solved this problem for me.
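
For reference, a minimal application.conf sketch using these settings might look like the snippet below; the values are purely illustrative and should be sized to the payloads you actually expect:

# application.conf (illustrative values only)
parsers.text.maxLength = 10MB
play.http.parser.maxMemoryBuffer = 10MB
play.http.parser.maxDiskBuffer = 20MB

maxMemoryBuffer governs bodies that are buffered in memory (such as JSON or form data), while maxDiskBuffer applies to parsers that buffer to disk (such as multipart file uploads).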

FYI, the built-in body parsers are described in the Play documentation.

So you can either use play.http.parser.maxMemoryBuffer, or define the limit in the action itself:

def save = Action(parse.maxLength(1024 * 10, storeInUserFile)) {  request =>
  Ok("Saved the request content to " + request.body)
}
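
Since the question uses the Java API, here is a rough Java sketch of the same idea, assuming play.http.parser.maxMemoryBuffer has been raised in application.conf; the UploadController class and save action are hypothetical names, not from the original post:

import com.fasterxml.jackson.databind.JsonNode;
import play.mvc.BodyParser;
import play.mvc.Controller;
import play.mvc.Result;

public class UploadController extends Controller {

    // Parse the request body as JSON; the size limit is taken from
    // play.http.parser.maxMemoryBuffer in application.conf.
    @BodyParser.Of(BodyParser.Json.class)
    public Result save() {
        JsonNode json = request().body().asJson();
        if (json == null) {
            return badRequest("Expecting a JSON request body");
        }
        return ok("Received JSON with " + json.size() + " top-level fields");
    }
}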
