Defuncting connection to *.* (java.io.IOException: An existing connection was forcibly closed by the remote host)

Tags: java, cassandra, datastax-java-driver

I have a simple MVC application that stores an instance of the com.datastax.driver.core.Session class. This application connects to a cluster of 10 nodes: 6 Cassandra nodes, 2 Hadoop nodes, and 2 Solr nodes. When the nodes are added to the list of queried hosts, I can see all of the nodes' IP addresses.

This all works fine, but when I leave the application running for a while without using it, I get the following:

2013-12-18 14:07:53,589 [New I/O worker #6] DEBUG [                      c.d.d.c.Connection] - Defuncting connection to /10.201.39.25
com.datastax.driver.core.TransportException: [/10.201.39.25] Unexpected exception triggered (java.io.IOException: An existing connection was forcibly closed by the remote host)
    at com.datastax.driver.core.Connection$Dispatcher.exceptionCaught(Connection.java:581) [cassandra-driver-core-1.0.4-dse.jar:na]
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:112) [netty-3.7.0.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty-3.7.0.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [netty-3.7.0.Final.jar:na]
    at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:60) [netty-3.7.0.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty-3.7.0.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [netty-3.7.0.Final.jar:na]
    at org.jboss.netty.handler.codec.oneone.OneToOneDecoder.handleUpstream(OneToOneDecoder.java:60) [netty-3.7.0.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty-3.7.0.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791) [netty-3.7.0.Final.jar:na]
    at org.jboss.netty.handler.codec.frame.FrameDecoder.exceptionCaught(FrameDecoder.java:377) [netty-3.7.0.Final.jar:na]
    at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:112) [netty-3.7.0.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564) [netty-3.7.0.Final.jar:na]
    at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559) [netty-3.7.0.Final.jar:na]
    at org.jboss.netty.channel.Channels.fireExceptionCaught(Channels.java:525) [netty-3.7.0.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:74) [netty-3.7.0.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109) [netty-3.7.0.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312) [netty-3.7.0.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90) [netty-3.7.0.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) [netty-3.7.0.Final.jar:na]
    at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [netty-3.7.0.Final.jar:na]
    at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) [netty-3.7.0.Final.jar:na]
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) [na:1.6.0_25-ea]
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) [na:1.6.0_25-ea]
    at java.lang.Thread.run(Thread.java:662) [na:1.6.0_25-ea]
Caused by: java.io.IOException: An existing connection was forcibly closed by the remote host
    at sun.nio.ch.SocketDispatcher.read0(Native Method) ~[na:1.6.0_25-ea]
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:25) ~[na:1.6.0_25-ea]
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:202) ~[na:1.6.0_25-ea]
    at sun.nio.ch.IOUtil.read(IOUtil.java:169) ~[na:1.6.0_25-ea]
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:243) ~[na:1.6.0_25-ea]
    at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:64) [netty-3.7.0.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109) [netty-3.7.0.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312) [netty-3.7.0.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90) [netty-3.7.0.Final.jar:na]
    at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178) [netty-3.7.0.Final.jar:na]
    ... 3 common frames omitted
2013-12-18 14:07:53,698 [New I/O worker #6] DEBUG [                         c.d.d.c.Cluster] - /10.201.39.25 is down, scheduling connection retries
2013-12-18 14:07:53,729 [Cassandra Java Driver worker-2] DEBUG [              c.d.d.c.HostConnectionPool] - Shutting down pool
2013-12-18 14:07:53,823 [New I/O worker #6] DEBUG [     c.d.d.c.AbstractReconnectionHandler] - First reconnection scheduled in 1000ms
2013-12-18 14:07:53,901 [New I/O worker #6] DEBUG [                  c.d.d.c.RequestHandler] - Error querying /10.201.39.25, trying next host (error is: [/10.201.39.25] Unexpected exception triggered (java.io.IOException: An existing connection was forcibly closed by the remote host))
This repeats for all of the nodes. After all of that, I get:

SEVERE: Servlet.service() for servlet [appServlet] in context with path [/logging] threw exception [Request processing failed; nested exception is com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: [/10.201.39.26, /10.201.39.25, /10.201.39.15, /10.201.39.19, /10.201.39.16, /10.201.39.17, /10.201.39.22, /10.201.39.21] - use getErrors() for details)] with root cause
com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: [/10.201.39.26, /10.201.39.25, /10.201.39.15, /10.201.39.19, /10.201.39.16, /10.201.39.17, /10.201.39.22, /10.201.39.21] - use getErrors() for details)
    at com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:102)
    at com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:173)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
2013-12-18 14:07:54,620 [Cassandra Java Driver worker-5] DEBUG [               c.d.d.c.ControlConnection] - [Control connection] Refreshing node list and token map
2013-12-18 14:07:54,885 [Reconnection-1] DEBUG [                         c.d.d.c.Cluster] - Successful reconnection to /10.201.39.25, setting host UP
2013-12-18 14:07:54,885 [Reconnection-1] DEBUG [                         c.d.d.c.Cluster] - Cancelling reconnection attempt since node is UP
2013-12-18 14:07:54,885 [Cassandra Java Driver worker-10] DEBUG [                         c.d.d.c.Session] - Adding /10.201.39.25 to list of queried hosts
2013-12-18 14:07:54,948 [Cassandra Java Driver worker-5] DEBUG [               c.d.d.c.ControlConnection] - [Control connection] Refreshing schema
2013-12-18 14:07:55,213 [Reconnection-0] DEBUG [                         c.d.d.c.Cluster] - Successful reconnection to /10.201.39.15, setting host UP
2013-12-18 14:07:55,213 [Reconnection-0] DEBUG [                         c.d.d.c.Cluster] - Cancelling reconnection attempt since node is UP
2013-12-18 14:07:55,213 [Cassandra Java Driver worker-10] DEBUG [                         c.d.d.c.Session] - Adding /10.201.39.15 to list of queried hosts
2013-12-18 14:07:55,463 [Reconnection-1] DEBUG [                         c.d.d.c.Cluster] - Successful reconnection to /10.201.39.26, setting host UP
2013-12-18 14:07:55,463 [Reconnection-1] DEBUG [                         c.d.d.c.Cluster] - Cancelling reconnection attempt since node is UP
2013-12-18 14:07:55,463 [Cassandra Java Driver worker-10] DEBUG [                         c.d.d.c.Session] - Adding /10.201.39.26 to list of queried hosts
2013-12-18 14:07:55,463 [Reconnection-1] DEBUG [                         c.d.d.c.Cluster] - Successful reconnection to /10.201.39.19, setting host UP
2013-12-18 14:07:55,463 [Reconnection-1] DEBUG [                         c.d.d.c.Cluster] - Cancelling reconnection attempt since node is UP
2013-12-18 14:07:55,463 [Cassandra Java Driver worker-9] DEBUG [                         c.d.d.c.Session] - Adding /10.201.39.19 to list of queried hosts
2013-12-18 14:07:55,463 [Reconnection-0] DEBUG [                         c.d.d.c.Cluster] - Successful reconnection to /10.201.39.16, setting host UP
2013-12-18 14:07:55,463 [Reconnection-0] DEBUG [                         c.d.d.c.Cluster] - Cancelling reconnection attempt since node is UP
2013-12-18 14:07:55,463 [Cassandra Java Driver worker-6] DEBUG [                         c.d.d.c.Session] - Adding /10.201.39.16 to list of queried hosts
2013-12-18 14:07:55,494 [Reconnection-0] DEBUG [                         c.d.d.c.Cluster] - Successful reconnection to /10.201.39.18, setting host UP
2013-12-18 14:07:55,494 [Reconnection-1] DEBUG [                         c.d.d.c.Cluster] - Successful reconnection to /10.201.39.20, setting host UP
2013-12-18 14:07:55,494 [Reconnection-0] DEBUG [                         c.d.d.c.Cluster] - Cancelling reconnection attempt since node is UP
2013-12-18 14:07:55,494 [Reconnection-1] DEBUG [                         c.d.d.c.Cluster] - Cancelling reconnection attempt since node is UP
2013-12-18 14:07:55,494 [Cassandra Java Driver worker-10] DEBUG [                         c.d.d.c.Session] - Adding /10.201.39.18 to list of queried hosts
2013-12-18 14:07:55,494 [Cassandra Java Driver worker-3] DEBUG [                         c.d.d.c.Session] - Adding /10.201.39.20 to list of queried hosts
2013-12-18 14:07:55,666 [Cassandra Java Driver worker-5] DEBUG [               c.d.d.c.ControlConnection] - [Control connection] Successfully connected to /10.201.39.21
2013-12-18 14:08:03,728 [Hashed wheel timer #1] DEBUG [                         c.d.d.c.Cluster] - /10.201.39.18 is down, scheduling connection retries
2013-12-18 14:08:03,728 [Cassandra Java Driver worker-5] DEBUG [              c.d.d.c.HostConnectionPool] - Shutting down pool
2013-12-18 14:08:03,728 [Hashed wheel timer #1] DEBUG [     c.d.d.c.AbstractReconnectionHandler] - First reconnection scheduled in 1000ms
2013-12-18 14:08:03,728 [Hashed wheel timer #1] DEBUG [                  c.d.d.c.RequestHandler] - Error querying /10.201.39.25, trying next host (error is: Timeout during read)
2013-12-18 14:08:04,774 [Reconnection-1] DEBUG [                         c.d.d.c.Cluster] - Successful reconnection to /10.201.39.18, setting host UP
2013-12-18 14:08:04,774 [Reconnection-1] DEBUG [                         c.d.d.c.Cluster] - Cancelling reconnection attempt since node is UP
2013-12-18 14:08:04,774 [Cassandra Java Driver worker-3] DEBUG [                         c.d.d.c.Session] - Adding /10.201.39.18 to list of queried hosts
2013-12-18 14:08:05,930 [Hashed wheel timer #1] DEBUG [                         c.d.d.c.Cluster] - /10.201.39.18 is down, scheduling connection retries
2013-12-18 14:08:05,930 [Cassandra Java Driver worker-3] DEBUG [              c.d.d.c.HostConnectionPool] - Shutting down pool
2013-12-18 14:08:05,930 [Hashed wheel timer #1] DEBUG [     c.d.d.c.AbstractReconnectionHandler] - First reconnection scheduled in 1000ms
2013-12-18 14:08:05,930 [Hashed wheel timer #1] DEBUG [                  c.d.d.c.RequestHandler] - Error querying /10.201.39.16, trying next host (error is: Timeout during read)
2013-12-18 14:08:06,040 [Hashed wheel timer #1] DEBUG [                  c.d.d.c.RequestHandler] - Error querying /10.201.39.15, trying next host (error is: Timeout during read)
2013-12-18 14:08:06,134 [Hashed wheel timer #1] DEBUG [                  c.d.d.c.RequestHandler] - Error querying /10.201.39.20, trying next host (error is: Timeout during read)
2013-12-18 14:08:06,134 [Hashed wheel timer #1] DEBUG [                  c.d.d.c.RequestHandler] - Error querying /10.201.39.21, trying next host (error is: Timeout during read)
2013-12-18 14:08:06,134 [Hashed wheel timer #1] DEBUG [                  c.d.d.c.RequestHandler] - Error querying /10.201.39.26, trying next host (error is: Timeout during read)
2013-12-18 14:08:06,134 [Hashed wheel timer #1] DEBUG [                  c.d.d.c.RequestHandler] - Error querying /10.201.39.17, trying next host (error is: Timeout during read)
2013-12-18 14:08:06,134 [Hashed wheel timer #1] DEBUG [                  c.d.d.c.RequestHandler] - Error querying /10.201.39.19, trying next host (error is: Timeout during read)
2013-12-18 14:08:06,134 [Hashed wheel timer #1] DEBUG [                  c.d.d.c.RequestHandler] - Error querying /10.201.39.22, trying next host (error is: Timeout during read)
2013-12-18 14:08:06,227 [Hashed wheel timer #1] DEBUG [                  c.d.d.c.RequestHandler] - Error querying /10.201.39.18, trying next host (error is: Timeout during read)
2013-12-18 14:08:06,946 [Reconnection-0] DEBUG [                         c.d.d.c.Cluster] - Successful reconnection to /10.201.39.18, setting host UP
2013-12-18 14:08:06,946 [Reconnection-0] DEBUG [                         c.d.d.c.Cluster] - Cancelling reconnection attempt since node is UP
2013-12-18 14:08:06,946 [Cassandra Java Driver worker-6] DEBUG [                         c.d.d.c.Session] - Adding /10.201.39.18 to list of queried hosts
After that, it seems to recover on its own.

To create the session object I used the following code:

// Split the comma-separated host list and use each entry as a contact point.
String[] hostsArray = hosts.split(",");
Cluster cluster = Cluster.builder()
        .addContactPoints(hostsArray)
        .withPort(port)
        .build();
session = cluster.connect(keyspace);
The session object is stored as a member of the class. On the Cassandra nodes I don't see any exceptions in the logs around the same time period. Am I missing something when setting up the Cluster and Session objects? Should I add more configuration?
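One setting worth checking when idle connections get dropped is TCP keep-alive on the driver's sockets. A minimal sketch, assuming a driver version whose `Cluster.Builder` exposes `withSocketOptions` (verify against the 1.0.4-dse javadoc before relying on it; `hostsArray`, `port`, and `keyspace` are the same variables as in the snippet above):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.SocketOptions;

// Sketch, not a verified fix: ask the OS to send keep-alive probes on the
// driver's connections so that idle sessions are less likely to be silently
// dropped by a firewall or NAT device between the client and the cluster.
SocketOptions socketOptions = new SocketOptions().setKeepAlive(true);

Cluster cluster = Cluster.builder()
        .addContactPoints(hostsArray)
        .withPort(port)
        .withSocketOptions(socketOptions)
        .build();
Session session = cluster.connect(keyspace);
```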

I should add that I was running the client application from a Windows machine. When I moved the client application to a Linux machine, I did not experience the same problem. This suggests to me that the problem is at the network level rather than in the application.
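The Windows-vs-Linux difference is consistent with something between the client and the cluster (a firewall, NAT device, or the OS itself) silently dropping idle TCP sessions; the next read on the defunct socket then fails with "An existing connection was forcibly closed by the remote host". The underlying socket option can be illustrated with the plain JDK (a standalone sketch, not driver code; `KeepAliveDemo` is a hypothetical class name):

```java
import java.net.ServerSocket;
import java.net.Socket;

public class KeepAliveDemo {
    public static void main(String[] args) throws Exception {
        // Loopback listener so the client socket has a live peer to connect to.
        try (ServerSocket server = new ServerSocket(0);
             Socket client = new Socket("127.0.0.1", server.getLocalPort())) {
            // SO_KEEPALIVE makes the OS send periodic probes on an otherwise
            // idle connection, so a dead or half-open connection is detected
            // instead of only failing on the next read or write.
            client.setKeepAlive(true);
            System.out.println("keepAlive=" + client.getKeepAlive());
        }
    }
}
```

Keep-alive probe intervals are controlled by the operating system, not the JVM, which is another reason the same client code can behave differently on Windows and Linux.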