Docker Cassandra connections idling and timing out


I'm trying to use the Cassandra Python driver. I've tried two setups: running Cassandra in a Docker container, and running it locally after the Docker version gave me problems. Here's an example of what I'm doing:

from cassandra import ConsistencyLevel
from cassandra.cluster import Cluster
from cassandra.concurrent import execute_concurrent_with_args


class Controller(object):
    def __init__(self):
        self.cluster = Cluster()
        self.session = self.cluster.connect('mykeyspace')

    def insert_into_cassandra(self, fname):
        query = 'INSERT INTO mytable (mykey, indexed_key) VALUES (?, ?)'
        prepared = self.session.prepare(query)
        prepared.consistency_level = ConsistencyLevel.QUORUM
        # params_generator (not shown) yields one (mykey, indexed_key) tuple per row
        params_gen = self.params_generator(fname)
        execute_concurrent_with_args(self.session, prepared, params_gen, concurrency=50)

    def delete_param_gen(self, results):
        for r in results:
            yield [r.mykey]

    def delete_by_index(self, value):
        query = "SELECT mykey from mytable where indexed_key = '%s'" % value
        res = self.session.execute(query)
        delete_query = "DELETE from mytable where mykey = ?"
        prepared = self.session.prepare(delete_query)
        prepared.consistency_level = ConsistencyLevel.QUORUM
        params_gen = self.delete_param_gen(res)
        execute_concurrent_with_args(self.session, prepared, params_gen, concurrency=50)
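For completeness, params_generator is not shown in the question. A minimal sketch of what such a generator might look like is below; the tab-separated file format is my assumption, not part of the original code:

def params_generator(self, fname):
    # Hypothetical helper: stream parameter tuples out of the input file so
    # execute_concurrent_with_args never holds the whole dataset in memory.
    with open(fname) as f:
        for line in f:
            mykey, indexed_key = line.rstrip('\n').split('\t')
            yield (mykey, indexed_key)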
Nothing crazy. When loading or deleting data, I frequently see messages like the following:

Sending options message heartbeat on idle connection (4422117360) 127.0.0.1
Heartbeat failed for connection (4422117360) to 127.0.0.1
Here are some logs from deleting data:

[2017-02-28 08:37:20,562] [DEBUG] [cassandra.connection] Defuncting connection (4422117360) to 127.0.0.1: errors=Connection heartbeat timeout after 30 seconds, last_host=127.0.0.1
[2017-02-28 08:37:20,563] [DEBUG] [cassandra.io.libevreactor] Closing connection (4422117360) to 127.0.0.1
[2017-02-28 08:37:20,563] [DEBUG] [cassandra.io.libevreactor] Closed socket to 127.0.0.1
[2017-02-28 08:37:20,564] [DEBUG] [cassandra.pool] Defunct or closed connection (4422117360) returned to pool, potentially marking host 127.0.0.1 as down
[2017-02-28 08:37:20,566] [DEBUG] [cassandra.pool] Replacing connection (4422117360) to 127.0.0.1
[2017-02-28 08:37:20,567] [DEBUG] [cassandra.connection] Defuncting connection (4426057600) to 127.0.0.1: errors=Connection heartbeat timeout after 30 seconds, last_host=127.0.0.1
[2017-02-28 08:37:20,567] [DEBUG] [cassandra.io.libevreactor] Closing connection (4426057600) to 127.0.0.1
[2017-02-28 08:37:20,567] [DEBUG] [cassandra.io.libevreactor] Closed socket to 127.0.0.1
[2017-02-28 08:37:20,568] [ERROR] [cassandra.cluster] Unexpected exception while handling result in ResponseFuture:
Traceback (most recent call last):
  File "cassandra/cluster.py", line 3536, in cassandra.cluster.ResponseFuture._set_result (cassandra/cluster.c:67556)
  File "cassandra/cluster.py", line 3711, in cassandra.cluster.ResponseFuture._set_final_result (cassandra/cluster.c:71769)
  File "cassandra/concurrent.py", line 154, in cassandra.concurrent._ConcurrentExecutor._on_success (cassandra/concurrent.c:3357)
  File "cassandra/concurrent.py", line 203, in cassandra.concurrent.ConcurrentExecutorListResults._put_result (cassandra/concurrent.c:5539)
  File "cassandra/concurrent.py", line 209, in cassandra.concurrent.ConcurrentExecutorListResults._put_result (cassandra/concurrent.c:5427)
  File "cassandra/concurrent.py", line 123, in cassandra.concurrent._ConcurrentExecutor._execute_next (cassandra/concurrent.c:2369)
  File "load_cassandra.py", line 148, in delete_param_gen
    for r in rows:
  File "cassandra/cluster.py", line 3991, in cassandra.cluster.ResultSet.next (cassandra/cluster.c:76025)
  File "cassandra/cluster.py", line 4006, in cassandra.cluster.ResultSet.fetch_next_page (cassandra/cluster.c:76193)
  File "cassandra/cluster.py", line 3781, in cassandra.cluster.ResponseFuture.result (cassandra/cluster.c:73073)
cassandra.cluster.NoHostAvailable: ('Unable to complete the operation against any hosts', {})
And here are some from inserting data:

[2017-02-28 16:50:25,594] [DEBUG] [cassandra.connection] Sending options message heartbeat on idle connection (140301574604448) 127.0.0.1
[2017-02-28 16:50:25,595] [DEBUG] [cassandra.cluster] [control connection] Attempting to reconnect
[2017-02-28 16:50:25,596] [DEBUG] [cassandra.cluster] [control connection] Opening new connection to 127.0.0.1
[2017-02-28 16:50:25,596] [DEBUG] [cassandra.connection] Not sending options message for new connection(140301347717016) to 127.0.0.1 because compression is disabled and a cql version was not specified
[2017-02-28 16:50:25,596] [DEBUG] [cassandra.connection] Sending StartupMessage on <AsyncoreConnection(140301347717016) 127.0.0.1:9042>
[2017-02-28 16:50:25,596] [DEBUG] [cassandra.connection] Sent StartupMessage on <AsyncoreConnection(140301347717016) 127.0.0.1:9042>
[2017-02-28 16:50:30,596] [DEBUG] [cassandra.io.asyncorereactor] Closing connection (140301347717016) to 127.0.0.1
[2017-02-28 16:50:30,596] [DEBUG] [cassandra.io.asyncorereactor] Closed socket to 127.0.0.1
[2017-02-28 16:50:30,596] [DEBUG] [cassandra.connection] Connection to 127.0.0.1 was closed during the startup handshake
[2017-02-28 16:50:30,597] [WARNING] [cassandra.cluster] [control connection] Error connecting to 127.0.0.1:
Traceback (most recent call last):
  File "cassandra/cluster.py", line 2623, in cassandra.cluster.ControlConnection._reconnect_internal (cassandra/cluster.c:47899)
  File "cassandra/cluster.py", line 2645, in cassandra.cluster.ControlConnection._try_connect (cassandra/cluster.c:48416)
  File "cassandra/cluster.py", line 1119, in cassandra.cluster.Cluster.connection_factory (cassandra/cluster.c:15085)
  File "cassandra/connection.py", line 333, in cassandra.connection.Connection.factory (cassandra/connection.c:5790)
cassandra.OperationTimedOut: errors=Timed out creating connection (5 seconds), last_host=None
[2017-02-28 16:50:39,309] [ERROR] [root] Exception inserting data into cassandra
Traceback (most recent call last):
  File "load_cassandra.py", line 54, in run
    controller.insert_into_cassandra(filename)
  File "extract_to_cassandra.py", line 141, in insert_into_cassandra
    for success, result in results:
  File "cassandra/concurrent.py", line 177, in _results (cassandra/concurrent.c:4856)
  File "cassandra/concurrent.py", line 186, in cassandra.concurrent.ConcurrentExecutorGenResults._results (cassandra/concurrent.c:4622)
  File "cassandra/concurrent.py", line 165, in cassandra.concurrent._ConcurrentExecutor._raise (cassandra/concurrent.c:3745)
cassandra.WriteTimeout: Error from server: code=1100 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out - received only 0 responses." info={'consistency': 'QUORUM', 'required_responses': 1, 'received_responses': 0}
[2017-02-28 16:50:39,465] [DEBUG] [cassandra.connection] Received options response on connection (140301574604448) from 127.0.0.1
[2017-02-28 16:50:39,466] [DEBUG] [cassandra.cluster] Shutting down Cluster Scheduler
[2017-02-28 16:50:39,467] [DEBUG] [cassandra.cluster] Shutting down control connection
[2017-02-28 16:50:39,467] [DEBUG] [cassandra.io.asyncorereactor] Closing connection (140301574604448) to 127.0.0.1
[2017-02-28 16:50:39,467] [DEBUG] [cassandra.io.asyncorereactor] Closed socket to 127.0.0.1
[2017-02-28 16:50:39,468] [DEBUG] [cassandra.pool] Defunct or closed connection (140301574604448) returned to pool, potentially marking host 127.0.0.1 as down
I also see messages like this:

Out of 29 commit log syncs over the past 248s with average duration of 1596.14ms, 1 have exceeded the configured commit interval by an average of 18231.00ms

You can try lowering the idle_heartbeat_interval setting on the connection. It defaults to 30 seconds, but you can configure it when you instantiate the Cluster class. In this example I set it to 10 seconds:

def __init__(self):
    self.cluster = Cluster(idle_heartbeat_interval=10)
    self.session = self.cluster.connect('mykeyspace')

If that doesn't help, it may be time to check your data model for anti-patterns.

Unfortunately, that didn't work. I did notice that this only happens when my data contains a blob field (each item is roughly 80 KB); if I swap it out for something smaller, everything works fine. I increased write_request_timeout_in_ms in cassandra.yaml (and restarted, of course), but that didn't help.
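Since the failures only appear once the ~80 KB blob column is involved, two driver-side knobs may also be worth a try. This is a sketch of my own suggestion, not something from the answer above, and it reuses the same hypothetical params_generator as earlier:

def __init__(self):
    self.cluster = Cluster(idle_heartbeat_interval=10)
    self.session = self.cluster.connect('mykeyspace')
    # Client-side per-request timeout (driver default is 10 seconds); gives
    # slow blob writes more time before the driver gives up on the response.
    self.session.default_timeout = 30

def insert_into_cassandra(self, fname):
    prepared = self.session.prepare('INSERT INTO mytable (mykey, indexed_key) VALUES (?, ?)')
    prepared.consistency_level = ConsistencyLevel.QUORUM
    # Lower concurrency: fewer large writes in flight at once puts less
    # pressure on the single node's coordinator and its commit log.
    execute_concurrent_with_args(self.session, prepared,
                                 self.params_generator(fname), concurrency=10)

Whether this avoids the WriteTimeout here is untested; the commit log sync warning suggests the node's disk is the bottleneck, so reducing the number of in-flight large writes is the part more likely to matter.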