
Python: non-blocking 'while True' with asyncio

Tags: python, apache-kafka, python-asyncio, confluent-platform

Using the following code, I am trying to start two infinite loops with asyncio:

import asyncio

async def do_job_1():
    while True:
        print('do_job_1')
        await asyncio.sleep(5)

async def do_job_2():
    while True:
        print('do_job_2')
        await asyncio.sleep(5)

if __name__ == '__main__':
    asyncio.run(do_job_1())
    asyncio.run(do_job_2())
do_job_1 blocks do_job_2, since do_job_2 is never printed. What mistake am I making?

Ultimately, I am trying to convert this Kafka consumer code:

from confluent_kafka import Consumer, KafkaError

settings = {
    'bootstrap.servers': 'localhost:9092',
    'group.id': 'mygroup',
    'client.id': 'client-1',
    'enable.auto.commit': True,
    'session.timeout.ms': 6000,
    'default.topic.config': {'auto.offset.reset': 'smallest'}
}

c = Consumer(settings)

c.subscribe(['mytopic'])

try:
    while True:
        msg = c.poll(0.1)
        if msg is None:
            continue
        elif not msg.error():
            print('Received message: {0}'.format(msg.value()))
        elif msg.error().code() == KafkaError._PARTITION_EOF:
            print('End of partition reached {0}/{1}'
                  .format(msg.topic(), msg.partition()))
        else:
            print('Error occurred: {0}'.format(msg.error().str()))

except KeyboardInterrupt:
    pass

finally:
    c.close()
so that the messages fetched from it are consumed concurrently, letting me process Kafka messages in parallel.

From help(asyncio.run):

It should be used as a main entry point for asyncio programs, and should ideally only be called once.

But you can use asyncio.gather to join the tasks:

import asyncio

async def do_job_1():
    while True:
        print('do_job_1')
        await asyncio.sleep(5)

async def do_job_2():
    while True:
        print('do_job_2')
        await asyncio.sleep(5)

async def main():
    await asyncio.gather(do_job_1(), do_job_2())

if __name__ == '__main__':
    asyncio.run(main())
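Here asyncio.run is called exactly once, and asyncio.gather runs both coroutines concurrently on the same event loop: each await asyncio.sleep(5) suspends one coroutine and gives the other a chance to run, so neither loop blocks the other.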
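That still leaves the Kafka loop itself: Consumer.poll() in confluent_kafka is a blocking call, so it cannot simply be awaited. Below is a minimal sketch, not taken from the original answer, of one common workaround: hand each poll() call to a worker thread with asyncio.to_thread (Python 3.9+), so the event loop stays free for the other coroutines. The settings, topic, and the do_job_1/do_job_2 coroutines are the ones from the snippets above.

import asyncio
from confluent_kafka import Consumer, KafkaError

settings = {
    'bootstrap.servers': 'localhost:9092',
    'group.id': 'mygroup',
    'client.id': 'client-1',
    'enable.auto.commit': True,
    'session.timeout.ms': 6000,
    'default.topic.config': {'auto.offset.reset': 'smallest'}
}

async def consume():
    c = Consumer(settings)
    c.subscribe(['mytopic'])
    try:
        while True:
            # poll() blocks, so run it in a worker thread instead of
            # blocking the event loop
            msg = await asyncio.to_thread(c.poll, 0.1)
            if msg is None:
                continue
            elif not msg.error():
                print('Received message: {0}'.format(msg.value()))
            elif msg.error().code() == KafkaError._PARTITION_EOF:
                print('End of partition reached {0}/{1}'
                      .format(msg.topic(), msg.partition()))
            else:
                print('Error occurred: {0}'.format(msg.error().str()))
    finally:
        c.close()

async def main():
    # the consumer now interleaves with do_job_1 and do_job_2
    await asyncio.gather(consume(), do_job_1(), do_job_2())

if __name__ == '__main__':
    asyncio.run(main())

On Python 3.8 and earlier, the equivalent is asyncio.get_running_loop().run_in_executor(None, c.poll, 0.1).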