Python: Scrapy crawl from a script always blocks script execution after scraping

Tags: python, twisted, scrapy

I followed this guide to run Scrapy from my script. Here is the relevant part of my script:

    # imports elided in the original snippet, added here for completeness
    from twisted.internet import reactor
    from scrapy import log
    from scrapy.crawler import Crawler
    from scrapy.settings import Settings

    crawler = Crawler(Settings(settings))
    crawler.configure()
    spider = crawler.spiders.create(spider_name)
    crawler.crawl(spider)
    crawler.start()
    log.start()
    reactor.run()
    print "It can't be printed out!"
It works: it visits the pages, scrapes the information I need, and stores the output JSON where I told it to (via FEED_URI). But when the spider finishes its work (I can see that from the count in the output JSON), execution of my script doesn't resume. It probably isn't a Scrapy problem; the answer should be somewhere in Twisted's reactor.
How can I release the thread's execution?

You will need to stop the reactor when the spider finishes. You can accomplish this by listening for the spider_closed signal:

from twisted.internet import reactor

from scrapy import log, signals
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy.xlib.pydispatch import dispatcher

from testspiders.spiders.followall import FollowAllSpider

def stop_reactor():
    reactor.stop()

dispatcher.connect(stop_reactor, signal=signals.spider_closed)
spider = FollowAllSpider(domain='scrapinghub.com')
crawler = Crawler(Settings())
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
log.msg('Running reactor...')
reactor.run()  # the script will block here until the spider is closed
log.msg('Reactor stopped.')
The command-line log output might look something like this:

stav@maia:/srv/scrapy/testspiders$ ./api
2013-02-10 14:49:38-0600 [scrapy] INFO: Running reactor...
2013-02-10 14:49:47-0600 [followall] INFO: Closing spider (finished)
2013-02-10 14:49:47-0600 [followall] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 23934,...}
2013-02-10 14:49:47-0600 [followall] INFO: Spider closed (finished)
2013-02-10 14:49:47-0600 [scrapy] INFO: Reactor stopped.
stav@maia:/srv/scrapy/testspiders$

In Scrapy 0.19.x, you should do this:

from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
from testspiders.spiders.followall import FollowAllSpider
from scrapy.utils.project import get_project_settings

spider = FollowAllSpider(domain='scrapinghub.com')
settings = get_project_settings()
crawler = Crawler(settings)
crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
reactor.run() # the script will block here until the spider_closed signal is sent
Note these lines:

settings = get_project_settings()
crawler = Crawler(settings)
Without them, your spider won't use your settings and won't save the items. It took me a while to figure out why the example in the documentation wasn't saving my items. I've sent a pull request to fix the doc example.
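
To make that concrete, here is a minimal sketch of setting the feed options programmatically before building the Crawler. It assumes the 0.19-era settings.overrides dict; the format and file name are placeholders, not part of the original answer:

from scrapy.crawler import Crawler
from scrapy.utils.project import get_project_settings

settings = get_project_settings()
# example overrides (0.19-era API): write scraped items to a local JSON feed
settings.overrides['FEED_FORMAT'] = 'json'      # placeholder format
settings.overrides['FEED_URI'] = 'output.json'  # placeholder file name
crawler = Crawler(settings)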

An alternative is to call the command directly from your script:

from scrapy import cmdline
cmdline.execute("scrapy crawl followall".split())  # followall is the spider's name
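
A commenter below asks how to pass Scrapy's command-line arguments, such as -o output.json -t json, with this approach. Since cmdline.execute just takes a shell-style argument list, the options can be appended as they would be on the command line; a minimal sketch (the output file name is a placeholder):

from scrapy import cmdline

# feed options are passed exactly as on the shell; note that execute()
# normally does not return: it ends the process via sys.exit once the
# command finishes, so put nothing important after this call
cmdline.execute("scrapy crawl followall -o output.json -t json".split())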

Wrap this code in a new script and call it?

I'm not sure your comment is correct. What do you mean by "call the script"? It just hangs in reactor.run(), and the log says "INFO: Closing spider (finished)", so the spider seems to be done.

This should definitely be described in the documentation. Thanks.

I've submitted a pull request to the Scrapy docs describing how to stop the reactor; it should show up soon :)

How do you pass Scrapy's arguments when running Scrapy from a script like this? Something like -o output.json -t json? And where should I put the script?

Instead of an extra stop_reactor function, you can simply use:

crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
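
As a closing note for readers on newer Scrapy versions: Scrapy 1.x and later provide scrapy.crawler.CrawlerProcess, which starts and stops the reactor for you, so none of the manual signal wiring above is needed. A minimal sketch, assuming a project with a spider named followall:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

process = CrawlerProcess(get_project_settings())
process.crawl('followall', domain='scrapinghub.com')
process.start()  # blocks here and returns once the crawl finishes
print("This line is reached after the spider closes.")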