
Operation timeout error in a Python Cassandra cluster

Tags: python, cassandra, datastax, datastax-enterprise

My cluster has 6 machines, and I frequently get this error message, but I really don't know how to fix it:

code=1100 [Coordinator node timed out waiting for replica nodes' responses] message="Operation timed out - received only 0 responses." info={'received_responses': 0, 'required_responses': 1, 'consistency': 'LOCAL_ONE'}
Here is my full code; the part that produces the error message is:

batch.add(schedule_remove_stmt, (source, type, row['scheduled_for'], row['id']))
session.execute(batch, timeout=30)
Full code:

import logging
from datetime import datetime, timedelta

from cassandra.cluster import Cluster
from cassandra.query import BatchStatement

log = logging.getLogger(__name__)

cluster = Cluster(['localhost'])
session = cluster.connect('keyspace')
d = datetime.utcnow()
scheduled_for = d.replace(second=0, microsecond=0)
rowid=[]
stmt = session.prepare('SELECT * FROM schedules WHERE source=? AND type= ? AND scheduled_for = ?')
schedule_remove_stmt = session.prepare("DELETE FROM schedules WHERE source = ? AND type = ? AND scheduled_for = ? AND id = ?")
schedule_insert_stmt = session.prepare("INSERT INTO schedules(source, type, scheduled_for, id) VALUES (?, ?, ?, ?)")
schedules_to_delete = []
articles={}
source=''
type=''
try:
    rows = session.execute(stmt, [source,type, scheduled_for])
    article_schedule_delete = ''
    for row in rows:
        schedules_to_delete.append({'id':row.id,'scheduled_for':row.scheduled_for})
        article_schedule_delete=article_schedule_delete+'\''+row.id+'\','
        rowid.append(row.id)
    article_schedule_delete = article_schedule_delete[0:-1]
    cql = 'SELECT * FROM articles WHERE id in (%s)' % article_schedule_delete
    articles_row = session.execute(cql)
    for row in articles_row:
        articles[row.id]=row.created_at
except Exception as e:
    print(e)
    log.info('select error is:%s' % e)
try:
    for row in schedules_to_delete:
        batch = BatchStatement()
        batch.add(schedule_remove_stmt, (source, type, row['scheduled_for'],row['id']))
        try:
            if row['id'] in articles:
                next_schedule = d
                elapsed = datetime.utcnow() - articles[row['id']]
                if elapsed <= timedelta(hours=1):
                    next_schedule += timedelta(minutes=6)
                elif elapsed <= timedelta(hours=3):
                    next_schedule += timedelta(minutes=18)
                elif elapsed <= timedelta(hours=6):
                    next_schedule += timedelta(minutes=36)
                elif elapsed <= timedelta(hours=12):
                    next_schedule += timedelta(minutes=72)
                elif elapsed <= timedelta(days=1):
                    next_schedule += timedelta(minutes=144)
                elif elapsed <= timedelta(days=3):
                    next_schedule += timedelta(minutes=432)
                elif elapsed <= timedelta(days=30) :
                    next_schedule += timedelta(minutes=1440)
                if next_schedule != d:
                    batch.add(schedule_insert_stmt, (source,type, next_schedule.replace(second=0, microsecond=0),row['id']))
                    #log.info('schedule id:%s' % row['id'])
        except Exception as e:
            print('key error:', e)
            log.info('HOW IT CHANGES %s %s %s %s ERROR:%s' % (source, type, next_schedule.replace(second=0, microsecond=0), row['id'], e))
        session.execute(batch, timeout=30)
except Exception as e:
    print('schedules error is =======================>', e)
    log.info('schedules error is:%s' % e)
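The if/elif ladder in the middle of the loop is an age-based backoff: the older the article, the longer the delay until its next scheduled check. As a self-contained helper (the name `next_check_delay` and the table layout are my own; the thresholds and delays are taken from the code above), it could be sketched as:

```python
from datetime import timedelta

# Age thresholds paired with the re-check delay from the code above.
# An article older than 30 days gets no new schedule (None).
BACKOFF = [
    (timedelta(hours=1),  timedelta(minutes=6)),
    (timedelta(hours=3),  timedelta(minutes=18)),
    (timedelta(hours=6),  timedelta(minutes=36)),
    (timedelta(hours=12), timedelta(minutes=72)),
    (timedelta(days=1),   timedelta(minutes=144)),
    (timedelta(days=3),   timedelta(minutes=432)),
    (timedelta(days=30),  timedelta(minutes=1440)),
]


def next_check_delay(elapsed):
    """Return the delay before the next check, or None if the article is too old."""
    for threshold, delay in BACKOFF:
        if elapsed <= threshold:
            return delay
    return None
```

A table like this keeps the schedule in one place, so adding or tuning an interval is a one-line change instead of another elif branch.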
I don't think you should use a batch statement in this case, because you are using the batch to perform a large number of operations on different partition keys, and that is what causes the timeout exception. Batches should be used to keep tables in sync, not as a performance optimization. You can find more information about misuse of batches here.

Asynchronous queries are better suited to executing a large number of delete queries in your case. They will keep your code performant and avoid overloading the coordinator.
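A minimal sketch of that suggestion, using the Python driver's `Session.execute_async`. The function names `delete_schedules` and `chunked` and the window size of 100 are my own illustrative choices; the prepared statement and row dicts are the ones from the question:

```python
def chunked(items, size):
    """Split items into lists of at most `size` elements."""
    return [items[i:i + size] for i in range(0, len(items), size)]


def delete_schedules(session, schedule_remove_stmt, source, type_, rows, window=100):
    """Issue the deletes as individual async statements instead of one batch.

    `window` bounds how many requests are in flight at once so the
    coordinator is not flooded; 100 is an arbitrary example value.
    """
    for group in chunked(rows, window):
        futures = [
            session.execute_async(
                schedule_remove_stmt,
                (source, type_, row['scheduled_for'], row['id']))
            for row in group
        ]
        # Block until this window completes; .result() re-raises any error.
        for future in futures:
            future.result()
```

Each delete targets its own partition, so issuing them individually lets every replica answer its own coordinator directly, instead of one coordinator waiting on all of them inside a single batch.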
