
Python scrape finishes early, not getting all the links

Tags: python, web-scraping, scrapy, scrapy-spider

I'm trying to run a web spider that gets all the URLs for a given URL. Right now it returns about 64 URLs, when I know there are thousands more. Does anyone know why it finishes early?

from scrapy import Request
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import Spider as BaseSpider  # BaseSpider is the legacy name for scrapy.Spider


class MySpider(BaseSpider):
    custom_settings = {
        'AUTOTHROTTLE_ENABLED': True,
        'DOWNLOAD_DELAY': 1.5
    }

    name = 'www.shopgoodwill.com'
    allowed_domains = ['www.shopgoodwill.com']
    start_urls = [
        'https://www.shopgoodwill.com'
    ]

    def __init__(self, alexa_site_id, *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)
        self.alexa_site_id = alexa_site_id

    def parse(self, response):
        le = LinkExtractor()
        for link in le.extract_links(response):
            # parse_item (not shown in the question) handles the followed pages
            yield Request(link.url, callback=self.parse_item)
Here are the results. I notice it says request_depth_max: 1, but I have DEPTH_LIMIT = 0 in my settings:

2019-02-19 23:31:03 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 14739,
 'downloader/request_count': 32,
 'downloader/request_method_count/GET': 32,
 'downloader/response_bytes': 336986,
 'downloader/response_count': 32,
 'downloader/response_status_count/200': 23,
 'downloader/response_status_count/302': 9,
 'dupefilter/filtered': 11,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2019, 2, 19, 23, 31, 3, 824302),
 'log_count/DEBUG': 38,
 'log_count/INFO': 22,
 'memusage/max': 108908544,
 'memusage/startup': 108908544,
 'offsite/domains': 5,
 'offsite/filtered': 5,
 'request_depth_max': 1,
 'response_received_count': 23,
 'scheduler/dequeued': 32,
 'scheduler/dequeued/memory': 32,
 'scheduler/enqueued': 32,
 'scheduler/enqueued/memory': 32,
 'start_time': datetime.datetime(2019, 2, 19, 23, 30, 4, 918201)}
2019-02-19 23:31:03 [scrapy.core.engine] INFO: Spider closed (finished)
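A note on the stats above: DEPTH_LIMIT = 0 is Scrapy's default and means "no depth limit", so the limit is not what cuts the crawl short; request_depth_max: 1 just means nothing deeper than the first level of links was ever scheduled. One way to confirm this, not part of the original post, is to log the depth that Scrapy's DepthMiddleware stores in each response's meta:

def parse_item(self, response):
    # Pages reached via links yielded from parse() arrive here at depth 1.
    # Since this callback yields no further Requests, request_depth_max
    # never goes past 1, no matter what DEPTH_LIMIT is set to.
    self.logger.info('depth=%s url=%s', response.meta.get('depth'), response.url)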

Following up on our comments under the question: you also need to extract links in parse_item(). If you only extract them in parse(), the links on the pages you follow will never be followed any further.
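
A minimal sketch of what that could look like, assuming parse_item is meant to both scrape the page and keep the crawl going (the yielded field is a placeholder, not the original poster's code):

def parse_item(self, response):
    # Scrape whatever you need from this page (placeholder field).
    yield {'url': response.url}

    # Keep extracting links here too, otherwise the crawl stops after depth 1.
    le = LinkExtractor()
    for link in le.extract_links(response):
        yield Request(link.url, callback=self.parse_item)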

Show your parse_item function.
If you only extract links in parse(), the pages reached from there will not have their links followed in parse_item unless you also extract links there. If you are going to follow every link like this, you should consider using a CrawlSpider.
Great, thanks! My problem was that parse_item was not extracting links.
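
For reference, a rough sketch of the CrawlSpider approach mentioned above (the class name, callback and yielded field are illustrative, not the asker's code):

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor


class MyCrawlSpider(CrawlSpider):
    name = 'shopgoodwill'
    allowed_domains = ['www.shopgoodwill.com']
    start_urls = ['https://www.shopgoodwill.com']

    # follow=True tells CrawlSpider to keep extracting links from every
    # matched response, so link following no longer depends on each
    # callback doing it by hand.
    rules = (
        Rule(LinkExtractor(), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        yield {'url': response.url}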