Optimizing TCP/SSL server performance with Netty

Tags: ssl, networking, server, tcp, netty

I'm developing a high-performance data-processing server that uses TCP/SSL as its transport. The server is designed to process data blocks between 0.8 MB and 4 MB in size; clients send each block as a series of 10 KB chunks.

To assemble a complete chunk before deserialization, I use a ReplayingDecoder. However, I've found that it sometimes takes 10-40 ms just to receive one 10 KB chunk. On closer inspection, the TCP data arrives in segments of roughly 1408 bytes. I'd like to know whether I'm missing an optimization that would increase the number of bytes consumed per read.
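For reference, the assembly step works roughly like this (a minimal sketch assuming a fixed 10 KB frame; my actual NettyRequestAssembleHandler has more logic, and the class name here is made up):

```java
import java.util.List;

import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.handler.codec.ReplayingDecoder;

// Hypothetical simplification of NettyRequestAssembleHandler:
// assumes every frame is exactly 10 KB.
public class ChunkAssembler extends ReplayingDecoder<Void> {
    private static final int CHUNK_SIZE = 10 * 1024;

    @Override
    protected void decode(ChannelHandlerContext ctx, ByteBuf in, List<Object> out) {
        // ReplayingDecoder aborts and replays this call until CHUNK_SIZE
        // bytes are buffered, so decode() re-runs on every ~1408-byte TCP
        // segment that arrives -- which is why small segments show up as
        // repeated handler invocations rather than one large read.
        out.add(in.readBytes(CHUNK_SIZE));
    }
}
```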

Here is my server setup. I've tried TCP_NODELAY and also a custom RCVBUF_ALLOCATOR, but the incoming TCP segments are still small, which increases the latency of processing each data block.

ServerBootstrap bootstrap = new ServerBootstrap();

bootstrap.group(bossGroup, workerGroup)
         .channel(getServerSocketChannelClass())
         .option(ChannelOption.SO_BACKLOG, 4096)
         .childOption(ChannelOption.RCVBUF_ALLOCATOR,
                 new FixedRecvByteBufAllocator(64 * 1024))
         .childOption(ChannelOption.TCP_NODELAY, true)
         .childOption(ChannelOption.SO_KEEPALIVE, true)
         .childOption(ChannelOption.SO_RCVBUF, 8388608)
         .childOption(ChannelOption.SO_SNDBUF, 8388608)
         .childHandler(channelInitializer);
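One variant I've considered (shown here as a sketch, not a confirmed fix) is swapping the fixed allocator for an adaptive one, so the per-read buffer can grow with traffic instead of being pinned at 64 KB. The min/initial/max parameters below are values I picked for experimentation, not recommendations:

```java
// Alternative to FixedRecvByteBufAllocator(64 * 1024): let Netty size the
// per-read buffer adaptively (min 512 B, initial 64 KB, max 2 MB).
bootstrap.childOption(ChannelOption.RCVBUF_ALLOCATOR,
        new AdaptiveRecvByteBufAllocator(512, 64 * 1024, 2 * 1024 * 1024));
```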
Here is my pipeline:

/* SSL and Connection stats handlers */
pipeline.addLast("ConnectionStatsHandler", statsHandler);
SslHandler sslHandler = new SslHandler(sslService.createSslEngine());
   
pipeline.addLast("SSLHandler", sslHandler);

/* Data Assembler */
pipeline.addLast("RequestAssemblerHandler", new NettyRequestAssembleHandler());
Below is the data I'm capturing from the "ConnectionStatsHandler" (before SSL) and the "RequestAssemblerHandler" (after SSL):

I could use help from a Netty expert: which TCP settings would let the application receive larger chunks per read? Does this depend on the TCP window size or on the receive-buffer settings?

[INFO ] 2020-09-25 01:51:51.437 [epoll-worker-group-1] NettyRequestAssembleHandler - Received 10240 bytes after SSL over Channel 815bc3a4
[INFO ] 2020-09-25 01:51:51.437 [epoll-worker-group-1] ConnStatsHandler - Received 1408 bytes over Channel 815bc3a4
[INFO ] 2020-09-25 01:51:51.438 [epoll-worker-group-1] ConnStatsHandler - Received 1408 bytes over Channel 815bc3a4
[INFO ] 2020-09-25 01:51:51.438 [epoll-worker-group-1] ConnStatsHandler - Received 1408 bytes over Channel 815bc3a4
[INFO ] 2020-09-25 01:51:51.440 [epoll-worker-group-1] ConnStatsHandler - Received 1408 bytes over Channel 815bc3a4
[INFO ] 2020-09-25 01:51:51.441 [epoll-worker-group-1] ConnStatsHandler - Received 1408 bytes over Channel 815bc3a4
[INFO ] 2020-09-25 01:51:51.444 [epoll-worker-group-1] ConnStatsHandler - Received 1408 bytes over Channel 815bc3a4
[INFO ] 2020-09-25 01:51:51.444 [epoll-worker-group-1] ConnStatsHandler - Received 1408 bytes over Channel 815bc3a4
[INFO ] 2020-09-25 01:51:51.445 [epoll-worker-group-1] NettyRequestAssembleHandler - Received 10240 bytes after SSL over Channel 815bc3a4
[INFO ] 2020-09-25 01:51:51.445 [epoll-worker-group-1] ConnStatsHandler - Received 1408 bytes over Channel 815bc3a4
[INFO ] 2020-09-25 01:51:51.445 [epoll-worker-group-1] ConnStatsHandler - Received 1408 bytes over Channel 815bc3a4
[INFO ] 2020-09-25 01:51:51.446 [epoll-worker-group-1] ConnStatsHandler - Received 1408 bytes over Channel 815bc3a4
[INFO ] 2020-09-25 01:51:51.446 [epoll-worker-group-1] ConnStatsHandler - Received 1408 bytes over Channel 815bc3a4
[INFO ] 2020-09-25 01:51:51.446 [epoll-worker-group-1] ConnStatsHandler - Received 1408 bytes over Channel 815bc3a4
[INFO ] 2020-09-25 01:51:51.446 [epoll-worker-group-1] ConnStatsHandler - Received 1408 bytes over Channel 815bc3a4
[INFO ] 2020-09-25 01:51:51.461 [epoll-worker-group-1] ConnStatsHandler - Received 1408 bytes over Channel 815bc3a4
[INFO ] 2020-09-25 01:51:51.461 [epoll-worker-group-1] NettyRequestAssembleHandler - Received 10240 bytes after SSL over Channel 815bc3a4
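The log pattern above is consistent with simple arithmetic: if each segment carries ~1408 bytes, a 10 KB chunk needs up to ceil(10240 / 1408) = 8 segment-sized reads (adjacent segments can coalesce into a single read, so fewer log lines may appear). The explanation that 1408 bytes reflects per-segment payload after IP/TCP/TLS overhead on this path is my guess, not something the logs confirm. A quick sanity check:

```java
// Back-of-envelope check: how many ~1408-byte reads does one 10 KB
// chunk require? (The 1408 figure comes from the logs above; the
// overhead explanation is an assumption.)
public class SegmentMath {
    static int segmentsPerChunk(int chunkBytes, int segmentBytes) {
        // ceiling division: segment-sized reads needed per chunk
        return (chunkBytes + segmentBytes - 1) / segmentBytes;
    }

    public static void main(String[] args) {
        System.out.println(segmentsPerChunk(10 * 1024, 1408)); // prints 8
    }
}
```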