Python Scrapy request.meta not updated correctly

I am trying to record the crawl path in the request's meta attribute:

import scrapy
from scrapy.linkextractors import LinkExtractor

class ExampleSpider(scrapy.Spider):
    name = "example"
    allowed_domains = ["www.iana.org"]
    start_urls = ['http://www.iana.org/']
    request_path_css = dict(
        main_menu = r'#home-panel-domains > h2',
        domain_names = r'#main_right > p',
    )

    def links(self, response, restrict_css=None):
        lex = LinkExtractor(
            allow_domains=self.allowed_domains,
            restrict_css=restrict_css)
        return lex.extract_links(response)

    def requests(self, response, css, cb, append=True):
        links = self.links(response, css)
        for link in links:
            request = scrapy.Request(
                url=link.url,
                callback=cb)
            if append:
                request.meta['req_path'] = response.meta['req_path']
                request.meta['req_path'].append(dict(txt=link.text, url=link.url))
            else:
                request.meta['req_path'] = [dict(txt=link.text, url=link.url)]
            yield request

    def parse(self, response):
        #self.logger.warn('## Request path: %s', response.meta['req_path'])
        css = self.request_path_css['main_menu']
        return self.requests(response, css, self.domain_names, False)

    def domain_names(self, response):
        #self.logger.warn('## Request path: %s', response.meta['req_path'])
        css = self.request_path_css['domain_names']
        return self.requests(response, css, self.domain_names_parser)

    def domain_names_parser(self, response):
        self.logger.warn('## Request path: %s', response.meta['req_path'])
Output:

$ scrapy crawl -L WARN example
2017-02-13 11:06:37 [example] WARNING: ## Request path: [{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/root', 'txt': 'The DNS Root Zone'}, {'url': 'http://www.iana.org/domains/int', 'txt': '.INT'}, {'url': 'http://www.iana.org/domains/arpa', 'txt': '.ARPA'}, {'url': 'http://www.iana.org/domains/idn-tables', 'txt': 'IDN Practices Repository'}, {'url': 'http://www.iana.org/dnssec', 'txt': 'Root Key Signing Key'}, {'url': 'http://www.iana.org/domains/special', 'txt': 'Special Purpose Domains'}]
2017-02-13 11:06:37 [example] WARNING: ## Request path: [{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/root', 'txt': 'The DNS Root Zone'}, {'url': 'http://www.iana.org/domains/int', 'txt': '.INT'}, {'url': 'http://www.iana.org/domains/arpa', 'txt': '.ARPA'}, {'url': 'http://www.iana.org/domains/idn-tables', 'txt': 'IDN Practices Repository'}, {'url': 'http://www.iana.org/dnssec', 'txt': 'Root Key Signing Key'}, {'url': 'http://www.iana.org/domains/special', 'txt': 'Special Purpose Domains'}]
2017-02-13 11:06:37 [example] WARNING: ## Request path: [{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/root', 'txt': 'The DNS Root Zone'}, {'url': 'http://www.iana.org/domains/int', 'txt': '.INT'}, {'url': 'http://www.iana.org/domains/arpa', 'txt': '.ARPA'}, {'url': 'http://www.iana.org/domains/idn-tables', 'txt': 'IDN Practices Repository'}, {'url': 'http://www.iana.org/dnssec', 'txt': 'Root Key Signing Key'}, {'url': 'http://www.iana.org/domains/special', 'txt': 'Special Purpose Domains'}]
2017-02-13 11:06:37 [example] WARNING: ## Request path: [{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/root', 'txt': 'The DNS Root Zone'}, {'url': 'http://www.iana.org/domains/int', 'txt': '.INT'}, {'url': 'http://www.iana.org/domains/arpa', 'txt': '.ARPA'}, {'url': 'http://www.iana.org/domains/idn-tables', 'txt': 'IDN Practices Repository'}, {'url': 'http://www.iana.org/dnssec', 'txt': 'Root Key Signing Key'}, {'url': 'http://www.iana.org/domains/special', 'txt': 'Special Purpose Domains'}]
2017-02-13 11:06:37 [example] WARNING: ## Request path: [{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/root', 'txt': 'The DNS Root Zone'}, {'url': 'http://www.iana.org/domains/int', 'txt': '.INT'}, {'url': 'http://www.iana.org/domains/arpa', 'txt': '.ARPA'}, {'url': 'http://www.iana.org/domains/idn-tables', 'txt': 'IDN Practices Repository'}, {'url': 'http://www.iana.org/dnssec', 'txt': 'Root Key Signing Key'}, {'url': 'http://www.iana.org/domains/special', 'txt': 'Special Purpose Domains'}]
2017-02-13 11:06:38 [example] WARNING: ## Request path: [{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/root', 'txt': 'The DNS Root Zone'}, {'url': 'http://www.iana.org/domains/int', 'txt': '.INT'}, {'url': 'http://www.iana.org/domains/arpa', 'txt': '.ARPA'}, {'url': 'http://www.iana.org/domains/idn-tables', 'txt': 'IDN Practices Repository'}, {'url': 'http://www.iana.org/dnssec', 'txt': 'Root Key Signing Key'}, {'url': 'http://www.iana.org/domains/special', 'txt': 'Special Purpose Domains'}]
This is not what I expected: I wanted response.meta['req_path'][1] to contain only the last url, but somehow all the urls from the last page end up in the list.

In other words, the expected output would look like this:

[{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/root', 'txt': 'The DNS Root Zone'}]
[{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/int', 'txt': '.INT'}]
[{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/arpa', 'txt': '.ARPA'}]
[{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/idn-tables', 'txt': 'IDN Practices Repository'}]
[{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/dnssec', 'txt': 'Root Key Signing Key'}]
[{'url': 'http://www.iana.org/domains', 'txt': 'Domain Names'}, {'url': 'http://www.iana.org/domains/special', 'txt': 'Special Purpose Domains'}]

After the second request, when you parse and call self.requests() with append=True (since that is the default), this line:

request.meta['req_path'] = response.meta['req_path']
does not copy the list. Instead, it gets a reference to the original list. The next line then appends to it (the original list!):

request.meta['req_path'].append(dict(txt=link.text, url=link.url))
On the next loop iteration you again get a reference to the same original list (which by now already has two entries), append to it again, and so on.
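
The effect is easy to reproduce in plain Python, independent of Scrapy (the urls below are just example entries taken from the crawl above):

original = [dict(txt='Domain Names', url='http://www.iana.org/domains')]

alias = original                    # no copy: both names point to the same list object
alias.append(dict(txt='.INT', url='http://www.iana.org/domains/int'))
print(len(original))                # 2 -- the original was mutated through the alias

copied = original.copy()            # shallow copy: a new, independent list object
copied.append(dict(txt='.ARPA', url='http://www.iana.org/domains/arpa'))
print(len(original))                # still 2 -- appending to the copy leaves it alone
print(len(copied))                  # 3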

What you want is to create a new list for each request. You can do that, for example, by adding .copy() to the first line:

request.meta['req_path'] = response.meta['req_path'].copy()
Or you can save a line by doing:

request.meta['req_path'] = response.meta['req_path'] + [dict(txt=link.text, url=link.url)]
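
Applied to the spider, the requests() method could then look like this (a sketch using the one-line variant; passing meta to the Request constructor instead of assigning it afterwards is equivalent here):

    def requests(self, response, css, cb, append=True):
        for link in self.links(response, css):
            entry = dict(txt=link.text, url=link.url)
            if append:
                # old path + new entry builds a fresh list for every request,
                # so no two requests share the same list object
                req_path = response.meta['req_path'] + [entry]
            else:
                req_path = [entry]
            yield scrapy.Request(url=link.url, callback=cb,
                                 meta=dict(req_path=req_path))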