Python TypeError: unsupported operand type(s) for +: 'NoneType' and 'str' in Scrapy's dupefilter


I'm running a scraper with a JOBDIR (cf. the Scrapy docs on pausing and resuming crawls) so that the crawl can be paused and resumed. The scraper had been running successfully for a while, but now when I run the spider the log ends with the following:

scraper_1  | 2017-06-21 14:53:10 [scrapy.middleware] INFO: Enabled item pipelines:
scraper_1  | ['scrapy.pipelines.images.ImagesPipeline',
scraper_1  |  'scrapy.pipelines.files.FilesPipeline']
scraper_1  | 2017-06-21 14:53:10 [scrapy.core.engine] INFO: Spider opened
scraper_1  | 2017-06-21 14:53:10 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
scraper_1  | 2017-06-21 14:53:10 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
scraper_1  | 2017-06-21 14:53:12 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://www.apkmirror.com/sitemap_index.xml> (referer: None)
scraper_1  | 2017-06-21 14:53:13 [scrapy.core.scraper] ERROR: Spider error processing <GET http://www.apkmirror.com/sitemap_index.xml> (referer: None)
scraper_1  | Traceback (most recent call last):
scraper_1  |   File "/usr/local/lib/python3.6/site-packages/scrapy/utils/defer.py", line 102, in iter_errback
scraper_1  |     yield next(it)
scraper_1  | GeneratorExit
scraper_1  | 2017-06-21 14:53:13 [scrapy.core.engine] INFO: Closing spider (closespider_errorcount)
scraper_1  | Exception ignored in: <generator object iter_errback at 0x7f4cc3a754c0>
scraper_1  | RuntimeError: generator ignored GeneratorExit
scraper_1  | 2017-06-21 14:53:13 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
scraper_1  | {'downloader/request_bytes': 306,
scraper_1  |  'downloader/request_count': 1,
scraper_1  |  'downloader/request_method_count/GET': 1,
scraper_1  |  'downloader/response_bytes': 2498,
scraper_1  |  'downloader/response_count': 1,
scraper_1  |  'downloader/response_status_count/200': 1,
scraper_1  |  'finish_reason': 'closespider_errorcount',
scraper_1  |  'finish_time': datetime.datetime(2017, 6, 21, 14, 53, 13, 139012),
scraper_1  |  'log_count/DEBUG': 26,
scraper_1  |  'log_count/ERROR': 1,
scraper_1  |  'log_count/INFO': 10,
scraper_1  |  'memusage/max': 75530240,
scraper_1  |  'memusage/startup': 75530240,
scraper_1  |  'request_depth_max': 1,
scraper_1  |  'response_received_count': 1,
scraper_1  |  'scheduler/dequeued': 1,
scraper_1  |  'scheduler/dequeued/disk': 1,
scraper_1  |  'scheduler/enqueued': 1,
scraper_1  |  'scheduler/enqueued/disk': 1,
scraper_1  |  'spider_exceptions/GeneratorExit': 1,
scraper_1  |  'start_time': datetime.datetime(2017, 6, 21, 14, 53, 10, 532154)}
scraper_1  | 2017-06-21 14:53:13 [scrapy.core.engine] INFO: Spider closed (closespider_errorcount)
scraper_1  | Unhandled error in Deferred:
scraper_1  | 2017-06-21 14:53:13 [twisted] CRITICAL: Unhandled error in Deferred:
scraper_1  | 
scraper_1  | 2017-06-21 14:53:13 [twisted] CRITICAL: 
scraper_1  | Traceback (most recent call last):
scraper_1  |   File "/usr/local/lib/python3.6/site-packages/twisted/internet/task.py", line 517, in _oneWorkUnit
scraper_1  |     result = next(self._iterator)
scraper_1  |   File "/usr/local/lib/python3.6/site-packages/scrapy/utils/defer.py", line 63, in <genexpr>
scraper_1  |     work = (callable(elem, *args, **named) for elem in iterable)
scraper_1  |   File "/usr/local/lib/python3.6/site-packages/scrapy/core/scraper.py", line 183, in _process_spidermw_output
scraper_1  |     self.crawler.engine.crawl(request=output, spider=spider)
scraper_1  |   File "/usr/local/lib/python3.6/site-packages/scrapy/core/engine.py", line 210, in crawl
scraper_1  |     self.schedule(request, spider)
scraper_1  |   File "/usr/local/lib/python3.6/site-packages/scrapy/core/engine.py", line 216, in schedule
scraper_1  |     if not self.slot.scheduler.enqueue_request(request):
scraper_1  |   File "/usr/local/lib/python3.6/site-packages/scrapy/core/scheduler.py", line 54, in enqueue_request
scraper_1  |     if not request.dont_filter and self.df.request_seen(request):
scraper_1  |   File "/usr/local/lib/python3.6/site-packages/scrapy/dupefilters.py", line 53, in request_seen
scraper_1  |     self.file.write(fp + os.linesep)
scraper_1  | TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'
apkmirrorscrapercompose_scraper_1 exited with code 0
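
For context, JOBDIR is what makes the dupefilter persist its fingerprints to a requests.seen file inside the job directory, which is why the self.file.write(...) call is reached at all. As a rough sketch (not taken from the question), such a resumable crawl can be launched like this, where the job directory path crawls/apkmirror-1 is an arbitrary example:

# Equivalent to running: scrapy crawl apkmirror -s JOBDIR=crawls/apkmirror-1
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

settings = get_project_settings()
settings.set('JOBDIR', 'crawls/apkmirror-1')   # persists the scheduler queue and requests.seen

process = CrawlerProcess(settings)
process.crawl('apkmirror')                     # spider name from the question
process.start()
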
The spider is defined as follows, where the parse callback referenced in sitemap_rules is defined in the BaseSpider class:

import scrapy
from scrapy.spiders import SitemapSpider
from apkmirror_scraper.spiders.base_spider import BaseSpider


class ApkmirrorSitemapSpider(SitemapSpider, BaseSpider):
    name = 'apkmirror'
    sitemap_urls = ['http://www.apkmirror.com/sitemap_index.xml']
    sitemap_rules = [(r'.*-android-apk-download/$', 'parse')]

    custom_settings = {
        'CLOSESPIDER_PAGECOUNT': 0,
        'CLOSESPIDER_ERRORCOUNT': 1,
        'CONCURRENT_REQUESTS': 32,
        'CONCURRENT_REQUESTS_PER_DOMAIN': 16,
        'TOR_RENEW_IDENTITY_ENABLED': True,
        'TOR_ITEMS_TO_SCRAPE_PER_IDENTITY': 50,
        'FEED_URI': '/scraper/apkmirror_scraper/data/apkmirror.json',
        'FEED_FORMAT': 'json',
        'DUPEFILTER_CLASS': 'apkmirror_scraper.dupefilters.URLDupefilter',
    }

    download_timeout = 60 * 15.0        # Allow 15 minutes for downloading APKs

    def start_requests(self):
        for url in self.sitemap_urls:
            yield scrapy.Request(url, self._parse_sitemap, dont_filter=True)

I have defined a custom URLDupefilter as follows:

import os

from scrapy.dupefilters import RFPDupeFilter


class URLDupefilter(RFPDupeFilter):
    def request_fingerprint(self, request):
        '''Simply use the URL as fingerprint. (Scrapy's default is a hash containing the request's canonicalized URL, method, body, and (optionally) headers). Omit sitemap pages, which end with ".xml".'''
        if not request.url.endswith('.xml'):
            return request.url

    def request_seen(self, request):
        '''Same as the RFPDupeFilter's request_seen method, except that a fingerprint of "None" is viewed as 'not seen' (cf. https://stackoverflow.com/questions/44370949/is-it-ok-for-scrapys-request-fingerprint-method-to-return-none).'''
        fp = self.request_fingerprint(request)
        if fp is None:
            return False                            # These two lines are added to the original
        if fp in self.fingerprints:
            return True
        self.fingerprints.add(fp)
        if self.file:
            self.file.write(fp + os.linesep)
However, the error appears to come from Scrapy's built-in RFPDupeFilter class rather than my custom one. I don't understand why it would still be in effect when I have set DUPEFILTER_CLASS to my custom class.

The error actually originates in your request_fingerprint returning None:

class URLDupefilter(RFPDupeFilter):
    def request_fingerprint(self, request):
        '''Simply use the URL as fingerprint. (Scrapy's default is a hash containing the request's canonicalized URL, method, body, and (optionally) headers). Omit sitemap pages, which end with ".xml".'''
        if not request.url.endswith('.xml'):
            return request.url
If the request URL ends with .xml, this returns None. The dupefilter then tries to write what it assumes is a string to its file, concatenating it with a line separator. Because it is concatenating None with a string, you get the TypeError.
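
The failing operation is easy to reproduce in isolation; this is just the concatenation the dupefilter performs, with fp standing in for the None returned for a sitemap URL:

import os

fp = None               # what request_fingerprint returns for a .xml URL
line = fp + os.linesep  # TypeError: unsupported operand type(s) for +: 'NoneType' and 'str'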

To fix this, simply return a string for .xml pages as well:

    def request_fingerprint(self, request):
        if not request.url.endswith('.xml'):
            return request.url
        return ''

As a side note, you shouldn't use request.url as the fingerprint, because it isn't accurate. For example, if you request page x and it redirects to page y, the following request to page y won't be filtered, since your fingerprint would be the URL of x. It's little cases like these that are the reason you should call the super method to generate the fingerprint.
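
Putting the two points together: one possible variant, sketched here rather than taken from either answer, keeps the asker's "never filter sitemap pages" behaviour (via the overridden request_seen that treats a None fingerprint as not seen) but delegates all other fingerprints to Scrapy's default instead of the raw URL. It assumes the Scrapy 1.x API from the question, where RFPDupeFilter still exposes a request_fingerprint method (newer Scrapy releases moved fingerprinting to a separate component):

import os

from scrapy.dupefilters import RFPDupeFilter


class URLDupefilter(RFPDupeFilter):
    def request_fingerprint(self, request):
        # No fingerprint for sitemap pages: request_seen() treats None
        # as "not seen", so .xml pages are never filtered out.
        if request.url.endswith('.xml'):
            return None
        # Delegate to Scrapy's default fingerprint (canonicalized URL,
        # method, body) rather than the raw URL, so redirected or
        # otherwise equivalent URLs are still deduplicated.
        return super().request_fingerprint(request)

    def request_seen(self, request):
        fp = self.request_fingerprint(request)
        if fp is None:
            return False
        if fp in self.fingerprints:
            return True
        self.fingerprints.add(fp)
        if self.file:
            # With JOBDIR set, fp is always a str here, so this write
            # can no longer raise the TypeError from the question.
            self.file.write(fp + os.linesep)
        return False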