
Python Scrapy SitemapSpider dupe-filters only one item and finishes


I'm running a scraper with a FilesPipeline which has downloaded 14,550 items so far. At some point, however, it seemed to get 'stuck', with the log mentioning 'lost' downloads. Since the scraper has a WORKDIR specified in its settings, I tried stopping and restarting it.

Strangely, however, upon restarting it hit a single item in the dupefilter and finished (see the log below). I have no idea why the spider is behaving this way; can anyone point me in the right direction for debugging it?

scraper_1  | Tor appears to be working. Proceeding with command...
scraper_1  | 2017-06-02 11:38:20 [scrapy.utils.log] INFO: Scrapy 1.4.0 started (bot: apkmirror_scraper)
scraper_1  | 2017-06-02 11:38:20 [scrapy.utils.log] INFO: Overridden settings: {'BOT_NAME': 'apkmirror_scraper', 'NEWSPIDER_MODULE': 'apkmirror_scraper.spiders', 'SPIDER_MODULES': ['apkmirror_scraper.spiders']}
scraper_1  | 2017-06-02 11:38:20 [apkmirror_scraper.extensions] INFO: The crawler will scrape the following (randomized) number of items before changing identity: 32
scraper_1  | 2017-06-02 11:38:20 [scrapy.middleware] INFO: Enabled extensions:
scraper_1  | ['scrapy.extensions.corestats.CoreStats',
scraper_1  |  'scrapy.extensions.telnet.TelnetConsole',
scraper_1  |  'scrapy.extensions.memusage.MemoryUsage',
scraper_1  |  'scrapy.extensions.closespider.CloseSpider',
scraper_1  |  'scrapy.extensions.feedexport.FeedExporter',
scraper_1  |  'scrapy.extensions.logstats.LogStats',
scraper_1  |  'scrapy.extensions.spiderstate.SpiderState',
scraper_1  |  'apkmirror_scraper.extensions.TorRenewIdentity']
scraper_1  | 2017-06-02 11:38:20 [scrapy.middleware] INFO: Enabled downloader middlewares:
scraper_1  | ['scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
scraper_1  |  'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
scraper_1  |  'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
scraper_1  |  'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
scraper_1  |  'scrapy_fake_useragent.middleware.RandomUserAgentMiddleware',
scraper_1  |  'scrapy.downloadermiddlewares.retry.RetryMiddleware',
scraper_1  |  'apkmirror_scraper.downloadermiddlewares.TorRetryMiddleware',
scraper_1  |  'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
scraper_1  |  'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
scraper_1  |  'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
scraper_1  |  'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
scraper_1  |  'scrapy.downloadermiddlewares.stats.DownloaderStats']
scraper_1  | 2017-06-02 11:38:20 [scrapy.middleware] INFO: Enabled spider middlewares:
scraper_1  | ['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
scraper_1  |  'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
scraper_1  |  'scrapy.spidermiddlewares.referer.RefererMiddleware',
scraper_1  |  'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
scraper_1  |  'scrapy.spidermiddlewares.depth.DepthMiddleware']
scraper_1  | 2017-06-02 11:38:21 [botocore.credentials] DEBUG: Looking for credentials via: env
scraper_1  | 2017-06-02 11:38:21 [botocore.credentials] DEBUG: Looking for credentials via: assume-role
scraper_1  | 2017-06-02 11:38:21 [botocore.credentials] DEBUG: Looking for credentials via: shared-credentials-file
scraper_1  | 2017-06-02 11:38:21 [botocore.credentials] INFO: Found credentials in shared credentials file: ~/.aws/credentials
scraper_1  | 2017-06-02 11:38:21 [botocore.loaders] DEBUG: Loading JSON file: /usr/local/lib/python3.6/site-packages/botocore/data/endpoints.json
scraper_1  | 2017-06-02 11:38:21 [botocore.loaders] DEBUG: Loading JSON file: /usr/local/lib/python3.6/site-packages/botocore/data/s3/2006-03-01/service-2.json
scraper_1  | 2017-06-02 11:38:21 [botocore.loaders] DEBUG: Loading JSON file: /usr/local/lib/python3.6/site-packages/botocore/data/_retry.json
scraper_1  | 2017-06-02 11:38:21 [botocore.client] DEBUG: Registering retry handlers for service: s3
scraper_1  | 2017-06-02 11:38:21 [botocore.hooks] DEBUG: Event creating-client-class.s3: calling handler <function add_generate_presigned_post at 0x7f9739657a60>
scraper_1  | 2017-06-02 11:38:21 [botocore.hooks] DEBUG: Event creating-client-class.s3: calling handler <function add_generate_presigned_url at 0x7f9739657840>
scraper_1  | 2017-06-02 11:38:21 [botocore.client] DEBUG: Switching signature version for service s3 to version s3v4 based on config file override.
scraper_1  | 2017-06-02 11:38:21 [botocore.endpoint] DEBUG: Setting s3 timeout as (60, 60)
scraper_1  | 2017-06-02 11:38:21 [botocore.client] DEBUG: Defaulting to S3 virtual host style addressing with path style addressing fallback.
scraper_1  | 2017-06-02 11:38:21 [botocore.credentials] DEBUG: Looking for credentials via: env
scraper_1  | 2017-06-02 11:38:21 [botocore.credentials] DEBUG: Looking for credentials via: assume-role
scraper_1  | 2017-06-02 11:38:21 [botocore.credentials] DEBUG: Looking for credentials via: shared-credentials-file
scraper_1  | 2017-06-02 11:38:21 [botocore.credentials] INFO: Found credentials in shared credentials file: ~/.aws/credentials
scraper_1  | 2017-06-02 11:38:21 [botocore.loaders] DEBUG: Loading JSON file: /usr/local/lib/python3.6/site-packages/botocore/data/endpoints.json
scraper_1  | 2017-06-02 11:38:21 [botocore.loaders] DEBUG: Loading JSON file: /usr/local/lib/python3.6/site-packages/botocore/data/s3/2006-03-01/service-2.json
scraper_1  | 2017-06-02 11:38:21 [botocore.loaders] DEBUG: Loading JSON file: /usr/local/lib/python3.6/site-packages/botocore/data/_retry.json
scraper_1  | 2017-06-02 11:38:21 [botocore.client] DEBUG: Registering retry handlers for service: s3
scraper_1  | 2017-06-02 11:38:21 [botocore.hooks] DEBUG: Event creating-client-class.s3: calling handler <function add_generate_presigned_post at 0x7f9739657a60>
scraper_1  | 2017-06-02 11:38:21 [botocore.hooks] DEBUG: Event creating-client-class.s3: calling handler <function add_generate_presigned_url at 0x7f9739657840>
scraper_1  | 2017-06-02 11:38:21 [botocore.client] DEBUG: Switching signature version for service s3 to version s3v4 based on config file override.
scraper_1  | 2017-06-02 11:38:21 [botocore.endpoint] DEBUG: Setting s3 timeout as (60, 60)
scraper_1  | 2017-06-02 11:38:21 [botocore.client] DEBUG: Defaulting to S3 virtual host style addressing with path style addressing fallback.
scraper_1  | 2017-06-02 11:38:21 [scrapy.middleware] INFO: Enabled item pipelines:
scraper_1  | ['scrapy.pipelines.images.ImagesPipeline',
scraper_1  |  'scrapy.pipelines.files.FilesPipeline']
scraper_1  | 2017-06-02 11:38:21 [scrapy.core.engine] INFO: Spider opened
scraper_1  | 2017-06-02 11:38:21 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
scraper_1  | 2017-06-02 11:38:21 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
scraper_1  | 2017-06-02 11:38:21 [scrapy.dupefilters] DEBUG: Filtered duplicate request: <GET http://www.apkmirror.com/sitemap_index.xml> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
scraper_1  | 2017-06-02 11:38:21 [scrapy.core.engine] INFO: Closing spider (finished)
scraper_1  | 2017-06-02 11:38:21 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
scraper_1  | {'dupefilter/filtered': 1,
scraper_1  |  'finish_reason': 'finished',
scraper_1  |  'finish_time': datetime.datetime(2017, 6, 2, 11, 38, 21, 946421),
scraper_1  |  'log_count/DEBUG': 26,
scraper_1  |  'log_count/INFO': 10,
scraper_1  |  'memusage/max': 73805824,
scraper_1  |  'memusage/startup': 73805824,
scraper_1  |  'start_time': datetime.datetime(2017, 6, 2, 11, 38, 21, 890151)}
scraper_1  | 2017-06-02 11:38:21 [scrapy.core.engine] INFO: Spider closed (finished)
apkmirrorscrapercompose_scraper_1 exited with code 0
Here, I have overridden the dupefilter class as follows:

from scrapy.dupefilters import RFPDupeFilter

class URLDupefilter(RFPDupeFilter):

    def request_fingerprint(self, request):
        '''Simply use the URL as fingerprint. (Scrapy's default is a hash containing the request's canonicalized URL, method, body, and (optionally) headers).'''
        return request.url
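
For context, a custom dupefilter like this is enabled via Scrapy's DUPEFILTER_CLASS setting, and its fingerprints only survive a restart when crawl state is persisted to disk. A minimal settings sketch, assuming the class lives in apkmirror_scraper/dupefilters.py (stock Scrapy calls the persistence directory JOBDIR; the question refers to it as a WORKDIR):

# settings.py (sketch; the module path is assumed)
DUPEFILTER_CLASS = 'apkmirror_scraper.dupefilters.URLDupefilter'

# Persisting crawl state is what lets the dupefilter remember requests
# from a previous run; stock Scrapy uses JOBDIR for this.
JOBDIR = 'crawls/apkmirror-1'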

It looks like SitemapSpider's start_requests() yields plain Request objects, in contrast with the default Spider.start_requests(), which sets dont_filter=True on the requests it generates.

So effectively, when restarting the crawl, http://www.apkmirror.com/sitemap_index.xml is probably already 'seen' in your workdir and is therefore filtered out.
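
The difference is visible in Scrapy's own code; the following is a simplified paraphrase of the two start_requests() implementations in Scrapy 1.4 (not the verbatim source):

from scrapy import Request

# scrapy.spiders.Spider -- the default start requests bypass the dupefilter:
def start_requests(self):
    for url in self.start_urls:
        yield Request(url, dont_filter=True)

# scrapy.spiders.SitemapSpider -- no dont_filter here, so a dupefilter that
# persisted fingerprints from a previous run can drop the sitemap request:
def start_requests(self):
    for url in self.sitemap_urls:
        yield Request(url, self._parse_sitemap)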


You can override your ApkmirrorSitemapSpider's start_requests() to set dont_filter=True on these requests. You may also want to open a bug report against Scrapy.
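
A minimal sketch of that override, assuming the spider name and sitemap URL from the question; note that _parse_sitemap is SitemapSpider's internal callback, so this leans on a private API:

import scrapy
from scrapy.spiders import SitemapSpider

class ApkmirrorSitemapSpider(SitemapSpider):
    name = 'apkmirror'  # assumed; use your spider's actual name
    sitemap_urls = ['http://www.apkmirror.com/sitemap_index.xml']

    def start_requests(self):
        # Same requests SitemapSpider would generate, but with
        # dont_filter=True so a restarted crawl is not stopped by
        # fingerprints persisted in the workdir.
        for url in self.sitemap_urls:
            yield scrapy.Request(url, callback=self._parse_sitemap,
                                 dont_filter=True)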
