Python Scrapy: crawling multiple pages fails with "Filtered duplicate request"


I just started using Scrapy. I'm trying to go through the whole database page by page and grab the links whose entries contain what I need, but when I try to move on to the next page I run into the error below. I'm also not sure how to go to the next page properly, so any help with the correct approach would be appreciated.

Here is my code:

import scrapy


class TestSpider(scrapy.Spider):

    name = "PLC"
    allowed_domains = ["exploit-db.com"]

    start_urls = [
        "https://www.exploit-db.com/local/"
    ]

    def parse(self, response):
        filename = response.url.split("/")[-2] + '.html'
        # Link and description both sit in the fifth column of each table row.
        links = response.xpath('//tr/td[5]/a/@href').extract()
        description = response.xpath('//tr/td[5]/a[@href]/text()').extract()

        for data, link in zip(description, links):
            # Keep only entries whose description mentions "PLC".
            if "PLC" in data:
                # The with-statement closes the file, so no explicit close() is needed.
                with open(filename, "a") as f:
                    f.write(data + '\n')
                    f.write(link + '\n\n')

        # Follow the first link found in the pagination block.
        next_page = response.xpath('//div[@class="pagination"][1]//a/@href').extract()
        if next_page:
            url = response.urljoin(next_page[0])
            yield scrapy.Request(url, callback=self.parse)
But I get this error(?) in the console:

2016-06-08 16:05:21 [scrapy] INFO: Enabled item pipelines:
[]
2016-06-08 16:05:21 [scrapy] INFO: Spider opened
2016-06-08 16:05:21 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-06-08 16:05:21 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2016-06-08 16:05:22 [scrapy] DEBUG: Crawled (200) <GET https://www.exploit-db.com/robots.txt> (referer: None)
2016-06-08 16:05:22 [scrapy] DEBUG: Crawled (200) <GET https://www.exploit-db.com/local/> (referer: None)
2016-06-08 16:05:23 [scrapy] DEBUG: Crawled (200) <GET https://www.exploit-db.com/local/?order_by=date&order=desc&pg=2> (referer: https://www.exploit-db.com/local/)
2016-06-08 16:05:23 [scrapy] DEBUG: Crawled (200) <GET https://www.exploit-db.com/local/?order_by=date&order=desc&pg=1> (referer: https://www.exploit-db.com/local/?order_by=date&order=desc&pg=2)
2016-06-08 16:05:23 [scrapy] DEBUG: Filtered duplicate request: <GET https://www.exploit-db.com/local/?order_by=date&order=desc&pg=2> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2016-06-08 16:05:23 [scrapy] INFO: Closing spider (finished)
2016-06-08 16:05:23 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 1162,
 'downloader/request_count': 4,
 'downloader/request_method_count/GET': 4,
 'downloader/response_bytes': 40695,
 'downloader/response_count': 4,
 'downloader/response_status_count/200': 4,
 'dupefilter/filtered': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 6, 8, 8, 5, 23, 514161),
 'log_count/DEBUG': 6,
 'log_count/INFO': 7,
 'request_depth_max': 3,
 'response_received_count': 4,
 'scheduler/dequeued': 3,
 'scheduler/dequeued/memory': 3,
 'scheduler/enqueued': 3,
 'scheduler/enqueued/memory': 3,
 'start_time': datetime.datetime(2016, 6, 8, 8, 5, 21, 561678)}
2016-06-08 16:05:23 [scrapy] INFO: Spider closed (finished)

It fails to crawl the next pages, and I would appreciate an explanation of why. T.T

You can use the parameter dont_filter=True on the request, which tells Scrapy's scheduler not to apply the duplicate filter to it:

if next_page:
    url = response.urljoin(next_page[0])
    yield scrapy.Request(url, callback=self.parse, dont_filter=True)
But then you will end up in an infinite loop, because the XPath appears to pick up the same link twice (check the pager on each page: the second element inside .pagination may not always be the "next page" link).
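
A more robust alternative to dont_filter is to follow only the link that actually points to the next page, so the spider never re-requests a page it has already seen. Here is a minimal sketch, assuming the pager's "next" link can be recognised by its text (inspect the real pagination HTML and adjust the condition accordingly):

    # Inside parse(), replace the pagination block with something like this.
    # The text match "next" is an assumption about the pager's markup.
    next_page = response.xpath(
        '//div[contains(@class, "pagination")]//a[contains(., "next")]/@href'
    ).extract_first()
    if next_page:
        yield scrapy.Request(response.urljoin(next_page), callback=self.parse)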

Also, what if they start using Bootstrap or something similar and add the classes btn btn-default to the links?

I would suggest using

selector.css(".pagination").xpath('.//a/@href')
instead. A CSS selector like .pagination matches any element whose class attribute contains pagination as one of its tokens, whereas the XPath test @class="pagination" only matches an exact attribute value, so the CSS version keeps working when extra classes such as btn btn-default are added.

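Applied to the spider, the pagination block of parse() would then read roughly as below; note this only swaps the selector, so you would still combine it with dont_filter=True or, better, with the next-link-only approach sketched above:

    # Locate the pager by class token, so extra classes on the div
    # (e.g. from Bootstrap) no longer break the selector.
    next_page = response.css('.pagination').xpath('.//a/@href').extract()
    if next_page:
        url = response.urljoin(next_page[0])
        yield scrapy.Request(url, callback=self.parse, dont_filter=True)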