
How to handle an AllServersUnavailable exception in Python


I want to perform simple writes against a single-node Cassandra instance (v1.1.10). I just want to see how it handles a constant stream of writes and whether it can keep up with the write rate.

import random
import string
import sys
import uuid

from pycassa.pool import ConnectionPool
from pycassa.columnfamily import ColumnFamily

pool = ConnectionPool('testdb')
test_cf = ColumnFamily(pool, 'test')
test2_cf = ColumnFamily(pool, 'test2')
test3_cf = ColumnFamily(pool, 'test3')
test_batch = test_cf.batch(queue_size=1000)
test2_batch = test2_cf.batch(queue_size=1000)
test3_batch = test3_cf.batch(queue_size=1000)

chars = string.ascii_uppercase
counter = 0
while True:
    counter += 1
    uid = uuid.uuid1()
    junk = ''.join(random.choice(chars) for x in range(50))
    test_batch.insert(uid, {'junk': junk})
    test2_batch.insert(uid, {'junk': junk})
    test3_batch.insert(uid, {'junk': junk})
    sys.stdout.write(str(counter) + '\n')

pool.dispose()  # never reached: the loop above runs forever
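For context on what batch(queue_size=N) does here, the sketch below mimics, in pure Python, the queue-and-flush behaviour that pycassa's batch provides: mutations accumulate locally and are shipped as one batch whenever the queue fills, with a final send() flushing any partial batch. The QueuedBatch class is an illustration written for this answer, not pycassa's actual implementation.

```python
class QueuedBatch:
    """Minimal illustration of queue-based batching (not pycassa's real Batch)."""

    def __init__(self, queue_size, sender):
        self.queue_size = queue_size
        self.sender = sender  # callable that performs the actual batched write
        self.queue = []

    def insert(self, key, columns):
        self.queue.append((key, columns))
        if len(self.queue) >= self.queue_size:
            self.send()

    def send(self):
        # Flush whatever is queued, even a partial batch.
        if self.queue:
            self.sender(self.queue)
            self.queue = []

# Demo: 7 inserts with queue_size=3 produce two full batches plus one
# partial batch flushed by the explicit send().
sent = []
batch = QueuedBatch(queue_size=3, sender=sent.append)
for i in range(7):
    batch.insert(i, {'junk': 'x'})
batch.send()
```

One practical consequence: if the script exits (or crashes) without a final send(), up to queue_size - 1 queued mutations per batch are silently dropped.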
After a long run of writes (counter around 10M+), the script consistently crashes with the following message:

pycassa.pool.AllServersUnavailable: An attempt was made to connect to each of the servers twice, but none of the attempts succeeded. The last failure was timeout: timed out

I set queue_size=100, which did not help. Also, after the script crashed with the error above, I started the cqlsh -3 console to truncate the tables and got:

Unable to complete request: one or more nodes were unavailable.


Tailing /var/log/cassandra/system.log shows only informational entries (compaction, FlushWriter, and so on), with nothing flagged as an error. What am I doing wrong?

I ran into this problem too. As @Tyler Hobbs said in his comment, the node is probably overloaded (it was for me). A simple fix I used is to back off and let the node catch up. I rewrote the loop above to catch the error, sleep for a while, and then try again. I have run this against a single-node cluster and it copes fine, pausing (for a minute) and backing off periodically (no more than 5 times per row). No data is lost with this script unless the error is thrown five times in a row (in which case you probably want to fail hard rather than return to the loop).


(Comment from @Tyler Hobbs: Are you seeing excessive CPU or disk usage on that node? JVM garbage collection may not be coping well, though I would expect the logs to show something about that.)

I added:
import time
import pycassa  # for pycassa.pool.AllServersUnavailable

# Continues the script above (counter, chars, and the batches are already defined).
while True:
    counter += 1
    uid = uuid.uuid1()
    junk = ''.join(random.choice(chars) for x in range(50))
    try_count = 5  # 5 is probably unnecessarily high
    while try_count > 0:
        try:
            test_batch.insert(uid, {'junk': junk})
            test2_batch.insert(uid, {'junk': junk})
            test3_batch.insert(uid, {'junk': junk})
            try_count = -1  # success: leave the retry loop
        except pycassa.pool.AllServersUnavailable as e:
            print("Trying to insert [%s] but got error %s (attempt %d). "
                  "Backing off for a minute to let Cassandra settle down"
                  % (uid, e, try_count))
            time.sleep(60)  # a delay of 60s is probably unnecessarily high
            try_count -= 1
    sys.stdout.write(str(counter) + '\n')
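If you use this back-off pattern in more than one place, it can be factored into a small helper. The sketch below is pure Python so it can be exercised without a running cluster; the names retry_with_backoff, max_tries, and delay are illustrative, not pycassa API.

```python
import time

def retry_with_backoff(fn, max_tries=5, delay=60,
                       exceptions=(Exception,), sleep=time.sleep):
    """Call fn(); on a listed exception, sleep and retry, up to max_tries calls."""
    for attempt in range(1, max_tries + 1):
        try:
            return fn()
        except exceptions:
            if attempt == max_tries:
                raise      # out of attempts: fail hard instead of dropping the write
            sleep(delay)   # back off and let the node catch up

# Demo: a flaky operation that fails twice, then succeeds on the third call.
calls = []
def flaky():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("AllServersUnavailable (simulated)")
    return "ok"

result = retry_with_backoff(flaky, max_tries=5, delay=0, sleep=lambda s: None)
```

In the loop above you would wrap each insert, e.g. retry_with_backoff(lambda: test_batch.insert(uid, {'junk': junk}), exceptions=(pycassa.pool.AllServersUnavailable,)), so a persistent outage raises instead of being silently swallowed.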