Python: only the first spider runs when using `from scrapy import cmdline`


This is the Python script I use to run my spiders, named ndtv, republic, thehindu, zee, and indiatv:

from scrapy import cmdline
cmdline.execute("scrapy crawl ndtv".split())
cmdline.execute("scrapy crawl republic".split())
cmdline.execute("scrapy crawl thehindu".split())
cmdline.execute("scrapy crawl zee".split())
cmdline.execute("scrapy crawl indiatv".split())


When I run the script, only the first spider (ndtv) runs; the script then exits without starting any of the others. I need a way to run all of the spiders.

As explained in the Scrapy documentation, `cmdline.execute` is designed to run a single Scrapy command and then exit the Python process (it calls `sys.exit` internally), so the lines after your first `execute()` call are never reached. To run several spiders from one script, use `CrawlerProcess` instead, as described in the "Running multiple spiders in the same process" section of the docs.