Java Netty server - connection count vs. worker count

I tried the following combinations of connections and workers:

  • 4 workers, 16 connections: traffic on only 4 connections, none on the other 12
  • 8 workers, 16 connections: traffic on only 8 connections, none on the other 8
  • 0 workers (I simply left the optimal worker count up to Netty), 8 connections: traffic on only 4 connections, none on the other 4

My software configuration: Netty 3.2.6 with NioServerSocketChannelFactory

Pipeline snippet:

    public void startServer(int numWorkerThreads, String openFlowHost, int openFlowPort, OFChannelHandler ofchan) {
        this.workerThreads = numWorkerThreads;
        try {
            final ServerBootstrap bootstrap = createServerBootStrap();

            // Note: the Netty 3 option name is "reuseAddress"; "reuseAddr" is
            // silently ignored.
            bootstrap.setOption("reuseAddress", true);
            bootstrap.setOption("child.keepAlive", true);
            bootstrap.setOption("child.tcpNoDelay", true);
            bootstrap.setOption("child.receiveBufferSize", EnhancedController.RECEIVE_BUFFER_SIZE);
            bootstrap.setOption("child.sendBufferSize", EnhancedController.SEND_BUFFER_SIZE);

            // Better to have a receive buffer predictor:
            //bootstrap.setOption("receiveBufferSizePredictorFactory",
            //        new AdaptiveReceiveBufferSizePredictorFactory());

            // If the server is sending 1000 messages per second, optimal write
            // buffer water marks will prevent unnecessary throttling; see the
            // NioSocketChannelConfig docs.
            //bootstrap.setOption("writeBufferLowWaterMark", WRITE_BUFFER_LOW_WATERMARK);
            //bootstrap.setOption("writeBufferHighWaterMark", WRITE_BUFFER_HIGH_WATERMARK);

            // TODO: IMPORTANT: If the thread pool is supplied as null, no
            // ExecutionHandler is added to the pipeline. If load increases and
            // ordering is required, pass an OrderedMemoryAwareThreadPoolExecutor
            // instead of null.
            execHandler = new OrderedMemoryAwareThreadPoolExecutor(
                    OMATPE_CORE_POOL_SIZE,
                    OMATPE_PER_CHANNEL_SIZE,
                    OMATPE_POOL_WIDE_SIZE,
                    OMATPE_THREAD_KEEP_ALIVE_IN_MILLISECONDS,
                    TimeUnit.MILLISECONDS);

            ChannelPipelineFactory pfact =
                    new OpenflowPipelineFactory(controller, execHandler);
            bootstrap.setPipelineFactory(pfact);

            InetSocketAddress sa =
                    (openFlowHost == null)
                            ? new InetSocketAddress(openFlowPort)
                            : new InetSocketAddress(openFlowHost, openFlowPort);

            final ChannelGroup cg = new DefaultChannelGroup();
            cg.add(bootstrap.bind(sa));

        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    private ServerBootstrap createServerBootStrap() {
        if (workerThreads == 0) {
            return new ServerBootstrap(
                    new NioServerSocketChannelFactory(
                            Executors.newCachedThreadPool(),
                            Executors.newCachedThreadPool()));
        } else {
            return new ServerBootstrap(
                    new NioServerSocketChannelFactory(
                            Executors.newCachedThreadPool(),
                            Executors.newCachedThreadPool(), workerThreads));
        }
    }
Hardware configuration: quad-core Intel i5 vPro running Ubuntu 11.x
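For the "0 workers" case above, my understanding is that Netty 3's NioServerSocketChannelFactory (when no worker count is passed) defaults to 2 × the number of available processors, which would mean 8 workers on a quad-core machine. A trivial stdlib check of that arithmetic (the 2× factor is an assumption based on Netty 3's documented default, not taken from this code):

```java
public class DefaultWorkerCount {
    // Netty 3's NioServerSocketChannelFactory two-argument constructor is
    // documented to default the worker count to 2 * available processors.
    static int defaultWorkers() {
        return Runtime.getRuntime().availableProcessors() * 2;
    }

    public static void main(String[] args) {
        System.out.println("cores=" + Runtime.getRuntime().availableProcessors()
                + ", default workers=" + defaultWorkers());
    }
}
```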


Please let me know if I am missing something obvious.

Can you rephrase that? I don't really understand what you are asking.

Apologies, Norman. I noticed that message exchange happens on only some of the established connections. Since this behavior was observed during performance testing, I added counters on a per-connection basis to isolate the problem. The counters indicate that the server receives no messages at all from certain connections. For example, with a worker pool size of 8 and 16 established connections, channelConnected fires 16 times and I get 16 established connections, but the server receives messages over only 8 of them, not all 16.
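The per-connection counters mentioned above were roughly the following shape. This is a minimal JDK-only sketch, not the actual handler: integer channel ids stand in for Netty 3's Channel.getId(), and the two callbacks would be invoked from channelConnected and messageReceived in a SimpleChannelHandler.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;

// Per-connection message counters used to isolate the problem: one AtomicLong
// per channel id, incremented on every received message.
public class ConnectionCounters {
    private final Map<Integer, AtomicLong> received =
            new ConcurrentHashMap<Integer, AtomicLong>();

    // Called from channelConnected: register the connection with a zero count.
    public void onConnect(int channelId) {
        received.putIfAbsent(channelId, new AtomicLong());
    }

    // Called from messageReceived: bump this connection's counter.
    public void onMessage(int channelId) {
        received.get(channelId).incrementAndGet();
    }

    // Connections that never delivered a single message.
    public long idleConnections() {
        long idle = 0;
        for (AtomicLong count : received.values()) {
            if (count.get() == 0) {
                idle++;
            }
        }
        return idle;
    }

    public int establishedConnections() {
        return received.size();
    }
}
```

With 16 connections registered but messages arriving on only 8 of them, establishedConnections() reports 16 while idleConnections() reports 8, which is exactly the pattern described above.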