Java ZeroMQ recv latency is too large
I wrote a ROUTER/DEALER example. This is the server code: it receives data on a ROUTER socket and load-balances it to workers through a DEALER socket. When a worker receives a message, it submits a Runnable to an ExecutorService to compute pi:
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.zeromq.ZMQ;

public class MtServer {
    private static Logger logger = LoggerFactory.getLogger(MtServer.class);
    private static ExecutorService executors = Executors.newFixedThreadPool(2);

    static class Worker extends Thread {
        private ZMQ.Socket worker;

        public Worker(ZMQ.Context context) {
            worker = context.socket(ZMQ.REP);
            worker.connect("inproc://workers");
        }

        public void run() {
            while (true) {
                byte[] recs = worker.recv();
                String message = new String(recs);
                String threadName = Thread.currentThread().getName();
                logger.info("thread[{}] receive [{}]", threadName, message);
                executors.submit(new Runnable() {
                    public void run() {
                        // Leibniz series: pi = 4 * sum((-1)^i / (2i + 1))
                        double pi = 0;
                        for (int i = 0; i < 100000000; i++) {
                            pi += Math.pow(-1, i) / (2 * i + 1) * 4;
                        }
                        logger.info("pi = {}", pi);
                    }
                });
                logger.info("server added [{}]", message);
                worker.send(message.getBytes(), 0);
                logger.info("server send [{}] back to client", message);
            }
        }
    }

    public static void main(String[] args) {
        ZMQ.Context context = ZMQ.context(1);
        ZMQ.Socket clients = context.socket(ZMQ.ROUTER);
        clients.bind("tcp://*:5556");
        ZMQ.Socket workers = context.socket(ZMQ.DEALER);
        workers.bind("inproc://workers");
        for (int i = 0; i < 2; ++i) {
            Worker worker = new Worker(context);
            worker.start();
        }
        // Blocks, shuttling messages between the ROUTER and DEALER sockets
        ZMQ.proxy(clients, workers, null);
        clients.close();
        workers.close();
        context.term();
    }
}
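For scale, the Runnable above is pure CPU work. Run in isolation (same Leibniz series, same 100,000,000 iterations), it takes on the order of seconds per task on a typical core, which matters because the shared pool has only two threads:

```java
public class PiLeibniz {
    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        // Same series as the Worker's Runnable: pi = 4 * sum((-1)^i / (2i + 1))
        double pi = 0;
        for (int i = 0; i < 100000000; i++) {
            pi += Math.pow(-1, i) / (2 * i + 1) * 4;
        }
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("pi = " + pi + " in " + elapsed + "ms");
    }
}
```

The printed value matches the `pi = 3.141592643589326` seen in the server logs below.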
My problem is that the client's recv latency (t2 - t1) is sometimes far too large; in my example it reaches almost 8 s. The latency is correlated with the execution time of the pi computation: if I reduce the loop count in the worker, the latency drops too, even though pi is computed on other threads. Looking at the logs, most of the time when a large delay occurs it happens at worker.recv in MtServer (a long gap between the client sending the data and the worker receiving it), but sometimes it occurs while submitting the Runnable to the executor (a long gap between the worker receiving the data and the worker sending it back). Some logs:
[test] 2016-02-17 15:19:11:705 INFO [Thread-0] org.test.HwClient.run(26) | client[f11d3f6f6e9b4476a67053828c214c6e] send [f11d3f6f6e9b4476a67053828c214c6e-3] to server
[test] 2016-02-17 15:19:19:563 INFO [pool-2-thread-2] org.test.MtServer$Worker.run(30) | thread[pool-2-thread-2] receive [f11d3f6f6e9b4476a67053828c214c6e-3]
[test] 2016-02-17 15:19:19:563 INFO [pool-2-thread-2] org.test.MtServer$Worker.run(42) | server added [f11d3f6f6e9b4476a67053828c214c6e-3]
[test] 2016-02-17 15:19:19:563 INFO [pool-2-thread-2] org.test.MtServer$Worker.run(44) | server send [f11d3f6f6e9b4476a67053828c214c6e-3] back to client
[test] 2016-02-17 15:19:19:565 INFO [pool-1-thread-2] org.test.MtServer$Worker$1.run(38) | pi = 3.141592643589326
[test] 2016-02-17 15:19:19:565 INFO [pool-1-thread-1] org.test.MtServer$Worker$1.run(38) | pi = 3.141592643589326
[test] 2016-02-17 15:19:19:563 INFO [Thread-0] org.test.HwClient.run(30) | f11d3f6f6e9b4476a67053828c214c6e receive [f11d3f6f6e9b4476a67053828c214c6e-3] back from server spend 7858ms
[test] 2016-02-17 15:19:19:666 INFO [Thread-0] org.test.HwClient.run(26) | client[f11d3f6f6e9b4476a67053828c214c6e] send [f11d3f6f6e9b4476a67053828c214c6e-4] to server
[test] 2016-02-17 15:19:19:666 INFO [pool-2-thread-1] org.test.MtServer$Worker.run(30) | thread[pool-2-thread-1] receive [f11d3f6f6e9b4476a67053828c214c6e-4]
Can anyone help me? Thanks.

Comment: It sounds like executors.submit() is blocking... is that possible? Is the thread pool or its queue getting overwhelmed? How many tasks are in the queue when this happens?

Reply: @Jason From the logs, the roughly 8 s gap is between the client sending the message and the server receiving it, i.e. before executors.submit(). The high latency occurs periodically; on my machine, roughly one in every 30 messages the client sends hits a high-latency round trip.
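One way to answer the queue question directly is to cast the pool to its concrete ThreadPoolExecutor type and inspect the backlog. This is a minimal standalone sketch (a hypothetical PoolMonitor class, not part of MtServer) of the idea:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolMonitor {
    public static void main(String[] args) throws InterruptedException {
        // The concrete ThreadPoolExecutor type (rather than the
        // ExecutorService interface) exposes the backlog via getQueue().
        ThreadPoolExecutor pool =
                (ThreadPoolExecutor) Executors.newFixedThreadPool(2);
        for (int i = 0; i < 6; i++) {
            pool.submit(() -> {
                try {
                    Thread.sleep(500); // stand-in for the pi computation
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }
        // Two tasks run on the two pool threads; the other four wait.
        System.out.println("queued = " + pool.getQueue().size());
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```

Since newFixedThreadPool uses an unbounded LinkedBlockingQueue, submit() itself should not block; a growing getQueue().size() would instead show work piling up behind the two CPU-bound threads.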