Python Scrapy: why does the log endlessly repeat "Crawled 14091 pages (at 0 pages/min)"?


After the program has been running for a while, CPU usage sits at 100% and it no longer seems to crawl any pages.

When I terminate the program, I get the following log:

2018-05-06 18:21:43 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/exception_count': 479,
 'downloader/exception_type_count/twisted.internet.error.ConnectError': 1,
 'downloader/exception_type_count/twisted.internet.error.ConnectionRefusedError': 146,
 'downloader/exception_type_count/twisted.internet.error.TimeoutError': 332,
 'downloader/request_bytes': 7834053,
 'downloader/request_count': 14825,
 'downloader/request_method_count/GET': 14825,
 'downloader/response_bytes': 155349329,
 'downloader/response_count': 14346,
 'downloader/response_status_count/200': 14316,
 'downloader/response_status_count/302': 21,
 'downloader/response_status_count/400': 9,
 'dupefilter/filtered': 90830,
 'finish_reason': 'shutdown',
 'finish_time': datetime.datetime(2018, 5, 6, 10, 21, 43, 859725),
 'item_scraped_count': 1781,
 'log_count/DEBUG': 15880,
 'log_count/INFO': 14672,
 'memusage/max': 601972736,
 'memusage/startup': 55906304,
 'request_depth_max': 428,
 'response_received_count': 14091,
 'scheduler/dequeued': 14825,
 'scheduler/dequeued/memory': 14825,
 'scheduler/enqueued': 143352,
 'scheduler/enqueued/memory': 143352,
 'start_time': datetime.datetime(2018, 5, 6, 9, 8, 22, 330943)}
2018-05-06 18:21:44 [scrapy.core.engine] INFO: Spider closed (shutdown)
Could you check your spider file (or include it here) for:

dont_filter=True

If that is the case, Scrapy will not filter out duplicate requests.
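For illustration, here is a minimal, hypothetical spider sketch (the class name and URL are placeholders, not taken from the question) showing where dont_filter comes into play:

import scrapy

class ExampleSpider(scrapy.Spider):
    name = "example"
    start_urls = ["http://example.com/"]

    def parse(self, response):
        for href in response.css("a::attr(href)").extract():
            url = response.urljoin(href)
            # dont_filter=True bypasses Scrapy's default RFPDupeFilter,
            # so already-seen URLs keep getting re-queued:
            #   yield scrapy.Request(url, callback=self.parse, dont_filter=True)
            # With the default dont_filter=False, duplicate requests are
            # dropped and counted in the 'dupefilter/filtered' stat.
            yield scrapy.Request(url, callback=self.parse)

When duplicates are being filtered, they show up in the stats dump as 'dupefilter/filtered' (90830 in your log above).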

Please attach your spider code.