Python: Scrapy with two spiders in one file
I wrote two spiders in one file. When I run

scrapy runspider two_Spider.py

only the first spider is executed. How can I run both spiders without splitting the file into two files?
two_Spider.py:
import scrapy

class MySpider1(scrapy.Spider):
    # first spider definition
    ...

class MySpider2(scrapy.Spider):
    # second spider definition
    ...
Let's read the docs:

    Running multiple spiders in the same process

    By default, Scrapy runs a single spider per process when you run
    scrapy crawl. However, Scrapy supports running multiple spiders
    per process using the internal API.

    Here is an example that runs multiple spiders simultaneously:

(the example code is omitted here; see the docs)
From your question it is not clear how you put the two spiders into one file. Simply concatenating the contents of two files that each contain a single spider is not enough.

Try doing what is written in the docs. Or at least show us your code; without it we can't help you. Here is a complete Scrapy project with two spiders in one file:
# quote_spiders.py
import json
import string

import scrapy
from scrapy.crawler import CrawlerProcess
from scrapy.item import Item, Field


class TextCleaningPipeline(object):

    def _clean_text(self, text):
        text = text.replace('“', '').replace('”', '')
        table = str.maketrans({key: None for key in string.punctuation})
        clean_text = text.translate(table)
        return clean_text.lower()

    def process_item(self, item, spider):
        item['text'] = self._clean_text(item['text'])
        return item


class JsonWriterPipeline(object):

    def open_spider(self, spider):
        self.file = open(spider.settings['JSON_FILE'], 'a')

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        line = json.dumps(dict(item)) + "\n"
        self.file.write(line)
        return item


class QuoteItem(Item):
    text = Field()
    author = Field()
    tags = Field()
    spider = Field()


class QuotesSpiderOne(scrapy.Spider):
    name = "quotes1"

    def start_requests(self):
        urls = ['http://quotes.toscrape.com/page/1/', ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        for quote in response.css('div.quote'):
            item = QuoteItem()
            item['text'] = quote.css('span.text::text').get()
            item['author'] = quote.css('small.author::text').get()
            item['tags'] = quote.css('div.tags a.tag::text').getall()
            item['spider'] = self.name
            yield item


class QuotesSpiderTwo(scrapy.Spider):
    name = "quotes2"

    def start_requests(self):
        urls = ['http://quotes.toscrape.com/page/2/', ]
        for url in urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse(self, response):
        for quote in response.css('div.quote'):
            item = QuoteItem()
            item['text'] = quote.css('span.text::text').get()
            item['author'] = quote.css('small.author::text').get()
            item['tags'] = quote.css('div.tags a.tag::text').getall()
            item['spider'] = self.name
            yield item


if __name__ == '__main__':
    settings = dict()
    settings['USER_AGENT'] = 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
    settings['HTTPCACHE_ENABLED'] = True
    settings['JSON_FILE'] = 'items.jl'
    settings['ITEM_PIPELINES'] = dict()
    settings['ITEM_PIPELINES']['__main__.TextCleaningPipeline'] = 800
    settings['ITEM_PIPELINES']['__main__.JsonWriterPipeline'] = 801

    process = CrawlerProcess(settings=settings)
    process.crawl(QuotesSpiderOne)
    process.crawl(QuotesSpiderTwo)
    process.start()
Install Scrapy and run the script:

$ pip install Scrapy
$ python quote_spiders.py

No other files are needed.
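The text cleaning in TextCleaningPipeline uses only the standard library, so the same logic can be tried outside Scrapy. A minimal standalone sketch of what _clean_text does (the sample sentence is made up for illustration):

```python
import string

def clean_text(text):
    """Mirror of TextCleaningPipeline._clean_text: drop curly quotes,
    remove ASCII punctuation via str.translate, and lowercase."""
    text = text.replace('“', '').replace('”', '')
    # Map every punctuation character to None so translate() deletes it.
    table = str.maketrans({key: None for key in string.punctuation})
    return text.translate(table).lower()

print(clean_text('“Hello, World!”'))  # hello world
```

Because str.maketrans builds the deletion table once per call, moving the table to module level would be a small optimization if the pipeline processes many items.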
This example, combined with the graphical debugger of PyCharm or VS Code, also helps to understand the Scrapy workflow and simplifies debugging.

Please see my update. I want to run it using scrapy runspider. Even when I crawl only one spider, it throws twisted.internet.error.ReactorNotRestartable.

Why is using scrapy runspider important to you? Why don't you want to run the pair of spiders the way the docs describe?

I'm new to Scrapy and only knew about runspider. I should have read the docs more carefully first. Thank you for your help.

Why do you want to keep them in one file?
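As a side note on the output: the JSON_FILE setting points JsonWriterPipeline at items.jl, a JSON Lines file with one JSON object per line. A small sketch of that format, independent of Scrapy (the file name items_demo.jl and the sample items are made up for illustration):

```python
import json
import os
import tempfile

# JsonWriterPipeline appends one JSON object per line as items
# arrive from either spider; this imitates the resulting file.
items = [
    {'text': 'quote one', 'author': 'A', 'spider': 'quotes1'},
    {'text': 'quote two', 'author': 'B', 'spider': 'quotes2'},
]

path = os.path.join(tempfile.gettempdir(), 'items_demo.jl')
with open(path, 'w') as f:
    for item in items:
        f.write(json.dumps(item) + '\n')

# Each line is an independent JSON document, so the file can be
# parsed line by line without loading it whole.
with open(path) as f:
    loaded = [json.loads(line) for line in f]

assert loaded == items
```

This is also why both spiders can safely share one output file: the pipeline opens it in append mode, and every item becomes a self-contained line tagged with the spider's name.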