Python scrapy project middleware - TypeError: process_start_requests() takes 2 positional arguments but 3 were given
As soon as I uncomment the spider middleware in my settings, I get the error above:
SPIDER_MIDDLEWARES = {
    'scrapyspider.middlewares.ScrapySpiderProjectMiddleware': 543,
}
Here is my spider:
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors.lxmlhtml import LxmlLinkExtractor
from scrapy.item import Item, Field

class DomainLinks(Item):
    links = Field()

class ScrapyProject(CrawlSpider):
    name = 'scrapyspider'
    #allowed_domains = []
    start_urls = ['http://www.example.com']

    rules = (Rule(LxmlLinkExtractor(allow=()), callback='parse_links', follow=True),)

    def parse_start_url(self, response):
        self.parse_links(response)

    def parse_links(self, response):
        item = DomainLinks()
        item['links'] = []
        links = LxmlLinkExtractor(allow=(), deny=()).extract_links(response)
        for link in links:
            if link.url not in item['links']:
                item['links'].append(link.url)
        return item
Below is an excerpt from the project middleware file. process_spider_output is where internal links are filtered, and calling process_start_requests causes the error:
def process_spider_output(response, result, spider):
    # Called with the results returned from the Spider, after
    # it has processed the response.
    domain = response.url.strip("http://","").strip("https://","").strip("www.").strip("ww2.").split("/")[0]
    filtered_result = []
    for i in result:
        if domain in i:
            filtered_result.append(i)

    # Must return an iterable of Request, dict or Item objects.
    for i in filtered_result:
        yield i

def process_start_requests(start_requests, spider):
    # Called with the start requests of the spider, and works
    # similarly to the process_spider_output() method, except
    # that it doesn't have a response associated.

    # Must return only requests (not items).
    for r in start_requests:
        yield r
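As an aside, the chained str.strip calls above are fragile: strip treats its argument as a set of characters (not a prefix), and the two-argument form raises a TypeError of its own. A minimal sketch of a safer way to pull the host out of a URL with the standard library (the helper name extract_domain is my own, not from the question):

```python
from urllib.parse import urlparse

def extract_domain(url):
    # urlparse splits out the network location cleanly, unlike
    # chained str.strip calls, which strip character sets from
    # both ends rather than removing a literal prefix.
    netloc = urlparse(url).netloc
    for prefix in ("www.", "ww2."):
        if netloc.startswith(prefix):
            netloc = netloc[len(prefix):]
    return netloc

print(extract_domain("http://www.example.com/page"))  # example.com
```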
Traceback:
2017-05-01 12:30:55 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapyproject.middlewares.scrapyprojectSpiderMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-05-01 12:30:55 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-05-01 12:30:55 [scrapy.core.engine] INFO: Spider opened
Unhandled error in Deferred:
2017-05-01 12:30:55 [twisted] CRITICAL: Unhandled error in Deferred:
2017-05-01 12:30:55 [twisted] CRITICAL:
Traceback (most recent call last):
File "/home/matt/.local/lib/python3.5/site-packages/twisted/internet/defer.py", line 1301, in _inlineCallbacks
result = g.send(result)
File "/home/matt/.local/lib/python3.5/site-packages/scrapy/crawler.py", line 74, in crawl
yield self.engine.open_spider(self.spider, start_requests)
TypeError: process_start_requests() takes 2 positional arguments but 3 were given
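The "3 were given" in that TypeError can be reproduced outside Scrapy: when a method is defined inside a class without `self`, Python still passes the bound instance as the first positional argument. A minimal sketch (the class name Middleware is illustrative only):

```python
# When Scrapy calls middleware.process_start_requests(start_requests, spider)
# on an instance, the instance itself is the hidden third argument.
class Middleware:
    def process_start_requests(start_requests, spider):  # `self` missing
        yield from start_requests

mw = Middleware()
try:
    list(mw.process_start_requests([], None))
except TypeError as e:
    print(e)  # ... takes 2 positional arguments but 3 were given
```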
I'm trying to filter links so that only internal links are followed/extracted.

The documentation is incomplete and not very clear.

Thanks.

Since all the scrapy middlewares I've seen are defined inside classes, I suspect the `self` parameter is missing:
def process_spider_output(self, response, result, spider):
    # ...

def process_start_requests(self, start_requests, spider):
    # ...
Hope this helps. If not, please post the full middleware file.

Nevermind - just uncomment the other middleware class methods too ("Not all methods need to be defined. If a method is not defined, scrapy acts as if the spider middleware does not modify the passed objects."), or add self to the class methods, e.g. process_spider_output(self, response, result, spider).
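Putting the fix together, a minimal sketch of the corrected middleware with both methods inside a class and `self` as the first parameter (the domain check here is a simplified stand-in for the asker's filtering logic, not their exact code):

```python
class ScrapySpiderProjectMiddleware:
    def process_spider_output(self, response, result, spider):
        # Keep only results that mention the response's domain.
        domain = response.url.split("//")[-1].split("/")[0]
        for i in result:
            if domain in str(i):
                yield i

    def process_start_requests(self, start_requests, spider):
        # With `self` present, Scrapy's (start_requests, spider)
        # call binds correctly and the TypeError goes away.
        for r in start_requests:
            yield r
```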