Python: why doesn't gevent fill the full pool when consuming from RabbitMQ?


I want to fetch messages from my RabbitMQ server and then spawn gevent greenlets to process them. My code is:

import gevent.monkey
gevent.monkey.patch_all()   # patch the stdlib before anything else imports socket

import pika
import gevent
from gevent.pool import Pool
from gevent import Timeout
from meliae import scanner
import MySQLdb
import urllib2
from MySQLdb.cursors import SSCursor

# MySQLdb is a C extension, so its calls block the whole event loop even after
# monkey-patching, and all greenlets share this single connection and cursor.
db = MySQLdb.connect(host='125.221.225.12', user='root', passwd='young001',
                     charset='utf8', db='delicious', use_unicode=True)
cur = db.cursor()

p = Pool(300)
success_count = 0
fail_count = 0


connection = pika.BlockingConnection(pika.ConnectionParameters(host='125.221.225.12'))
channel = connection.channel()
channel.queue_declare(queue='url_queue')

#logfile = file("log.txt",'aw',buffering=0)

def insert_into_avail(url):
    # pass parameters as a tuple so the driver escapes them properly
    cur.execute("insert into avail_urls(url) values (%s)", (url,))
    db.commit()

def insert_into_fail(url):
    cur.execute("insert into fail_urls(url) values (%s)", (url,))
    db.commit()


class TooLong(Exception):
    pass

def down(url):
    global success_count
    global fail_count
    print 'the free pool is', p.free_count()
    try:
        with Timeout(30, TooLong):   # raise TooLong if the download takes over 30s
            url_data = urllib2.urlopen(url)
            if url_data and url_data.getcode() == 200:
                insert_into_avail(url)
                success_count = success_count + 1
            else:
                print 'the code is ', url_data.getcode()
            print 'success count is', success_count
    except Exception:   # catches TooLong, urllib2 errors and MySQL errors alike
        insert_into_fail(url)
        fail_count = fail_count + 1
        print 'the fail_count is', fail_count


for method, properties, body in channel.consume('url_queue'):
    channel.basic_ack(method.delivery_tag)
    # Dumping the whole heap to disk on every message is a blocking write and
    # starves the pool (see the update below); throttle it or remove it.
    scanner.dump_all_objects("dump.txt")
    p.spawn(down, body)
p.join()
It runs very slowly; some of the output looks like this:

the fail_count is 30
the free pool is 295
the fail_count is 31
the free pool is 295
the fail_count is 32
the free pool is 295
the fail_count is 33
the free pool is 295
the fail_count is 34
the free pool is 295
the fail_count is 35
the free pool is 295
There are only about 5 greenlets running in the pool at a time, but I want to get 300 greenlets into the pool so the program runs fast. What is wrong? How can I debug it? Is it rabbitmq, gevent, or my code? thx
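One way to watch pool occupancy while debugging (my own suggestion, not something from the original post) is a small monitor greenlet that reports the pool state once per second:

import gevent

def monitor(pool, interval=1):
    # Periodically report how many greenlets are running vs. free.
    while True:
        print 'running:', len(pool), 'free:', pool.free_count()
        gevent.sleep(interval)

gevent.spawn(monitor, p)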

*Update:* When I comment out scanner.dump_all_objects("dump.txt"), it runs with a full pool. It is probably because writing all the objects out to a file takes time, so fetching messages from rabbitmq becomes very slow.
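If occasional heap dumps are still wanted for leak hunting, one option is to dump every N messages instead of on every single one; a minimal sketch (the DUMP_EVERY interval of 1000 is an arbitrary choice of mine):

from meliae import scanner

DUMP_EVERY = 1000   # arbitrary interval; tune to taste
message_count = 0

for method, properties, body in channel.consume('url_queue'):
    channel.basic_ack(method.delivery_tag)
    message_count += 1
    if message_count % DUMP_EVERY == 0:
        # one blocking dump per DUMP_EVERY messages instead of one per message
        scanner.dump_all_objects("dump-%d.txt" % message_count)
    p.spawn(down, body)
p.join()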


But my code seems to have a memory leak: the program takes up hundreds of MB of memory, which is why I wanted to dump all the objects and look for the leak. How can I fix it? Is there a better way?
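A lighter-weight alternative for spotting a leak (my suggestion, not from the post) is to count live objects by type with the standard gc module; it avoids meliae's large file writes:

import gc
from collections import Counter

def top_types(limit=10):
    # Count live objects per type; cheap enough to call periodically.
    counts = Counter(type(o).__name__ for o in gc.get_objects())
    for name, n in counts.most_common(limit):
        print name, n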

Note that your code keeps timing out; the failure count keeps climbing. You should look into that and fix it first.

Many URLs are unreachable from my country; that is what causes the timeouts.

Writing to a file is a blocking operation, and it will certainly slow the code down.

Your update also fixed my problem; I suggest you write it up as an answer and mark it as accepted.
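On the blocking-write point above: gevent ships a FileObjectThread wrapper that hands file I/O off to a worker thread so other greenlets keep running during writes; a sketch (the log.txt path is just an example):

from gevent.fileobject import FileObjectThread

logfile = FileObjectThread(open('log.txt', 'a'))   # writes run in a worker thread
logfile.write('some failed url\n')                 # other greenlets are not blocked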