How do I fix CrawlSpider redirects in Python?


I am trying to write a crawl spider for this website. This is my code:

import urlparse
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from project.items import Product
import re



class ShamsStoresSpider(CrawlSpider):
    name = "shamsstores2"
    domain_name = "shams-stores.com"
    CONCURRENT_REQUESTS = 1

    start_urls = ["http://www.shams-stores.com/shop/index.php"]

    rules = (
            #categories
            Rule(SgmlLinkExtractor(restrict_xpaths=('//div[@id="categories_block_left"]/div/ul/li/a'), unique=False), callback='process', follow=True),
            )

    def process(self, response):
        print response
This is what I get when I run scrapy crawl shamsstores2:

2013-11-05 22:56:36+0200 [scrapy] DEBUG: Web service listening on 0.0.0.0:6081
2013-11-05 22:56:41+0200 [shamsstores2] DEBUG: Crawled (200) <GET http://www.shams-stores.com/shop/index.php> (referer: None)
2013-11-05 22:56:42+0200 [shamsstores2] DEBUG: Redirecting (301) to <GET http://www.shams-stores.com/shop/index.php?id_category=14&controller=category&id_lang=1> from <GET http://www.shams-stores.com/shop/index.php?controller=category&id_category=14&id_lang=1>
2013-11-05 22:56:42+0200 [shamsstores2] DEBUG: Filtered duplicate request: <GET http://www.shams-stores.com/shop/index.php?id_category=14&controller=category&id_lang=1> - no more duplicates will be shown (see DUPEFILTER_CLASS)
2013-11-05 22:56:43+0200 [shamsstores2] DEBUG: Redirecting (301) to <GET http://www.shams-stores.com/shop/index.php?id_category=13&controller=category&id_lang=1> from <GET http://www.shams-stores.com/shop/index.php?controller=category&id_category=13&id_lang=1>
2013-11-05 22:56:43+0200 [shamsstores2] DEBUG: Redirecting (301) to <GET http://www.shams-stores.com/shop/index.php?id_category=12&controller=category&id_lang=1> from <GET http://www.shams-stores.com/shop/index.php?controller=category&id_category=12&id_lang=1>
2013-11-05 22:56:43+0200 [shamsstores2] DEBUG: Redirecting (301) to <GET http://www.shams-stores.com/shop/index.php?id_category=10&controller=category&id_lang=1> from <GET http://www.shams-stores.com/shop/index.php?controller=category&id_category=10&id_lang=1>
2013-11-05 22:56:43+0200 [shamsstores2] DEBUG: Redirecting (301) to <GET http://www.shams-stores.com/shop/index.php?id_category=9&controller=category&id_lang=1> from <GET http://www.shams-stores.com/shop/index.php?controller=category&id_category=9&id_lang=1>
2013-11-05 22:56:44+0200 [shamsstores2] DEBUG: Redirecting (301) to <GET http://www.shams-stores.com/shop/index.php?id_category=8&controller=category&id_lang=1> from <GET http://www.shams-stores.com/shop/index.php?controller=category&id_category=8&id_lang=1>
2013-11-05 22:56:44+0200 [shamsstores2] DEBUG: Redirecting (301) to <GET http://www.shams-stores.com/shop/index.php?id_category=7&controller=category&id_lang=1> from <GET http://www.shams-stores.com/shop/index.php?controller=category&id_category=7&id_lang=1>
2013-11-05 22:56:44+0200 [shamsstores2] DEBUG: Redirecting (301) to <GET http://www.shams-stores.com/shop/index.php?id_category=6&controller=category&id_lang=1> from <GET http://www.shams-stores.com/shop/index.php?controller=category&id_category=6&id_lang=1>
2013-11-05 22:56:44+0200 [shamsstores2] INFO: Closing spider (finished)
The spider hits the links extracted by the rule, those links get redirected to other URLs, and then it stops without ever calling the callback function process.
I can work around this by using a BaseSpider instead, but can I fix it and still use CrawlSpider?

The problem is not the redirects. Scrapy follows the alternate location suggested by the server and fetches the page from there.

The extractor restrict_xpaths=('//div[@id="categories_block_left"]/div/ul/li/a') extracts the same set of 8 URLs from every page it visits, and those requests are filtered as duplicates.

The only thing I do not understand yet is why scrapy reports the message for only one page. I will update this answer if I find the reason.

Edit: see github.com/scrapy/scrapy/blob/master/scrapy/utils/request.py

Basically, the first request is queued and its fingerprint is stored. Then the redirected URL is generated, and when scrapy checks whether it is a duplicate by comparing fingerprints, it finds the same fingerprint. Scrapy finds the same fingerprint because the fingerprint is computed over the canonicalized URL, which sorts the query parameters, so the redirected URL and the original URL with its reordered query string are the same request as far as scrapy is concerned.
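
You can check this for yourself with a minimal sketch (assuming the request_fingerprint utility from the file linked above):

from scrapy.http import Request
from scrapy.utils.request import request_fingerprint

# the original URL and the server's 301 target differ only in parameter order
original = Request("http://www.shams-stores.com/shop/index.php?controller=category&id_category=14&id_lang=1")
redirected = Request("http://www.shams-stores.com/shop/index.php?id_category=14&controller=category&id_lang=1")

# both canonicalize to the same URL, so the dupefilter sees a single request
print request_fingerprint(original) == request_fingerprint(redirected)  # prints True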

A solution that "exploits" this behavior:

rules = (
    # categories
    Rule(SgmlLinkExtractor(restrict_xpaths=('//div[@id="categories_block_left"]/div/ul/li/a')),
         callback='process', process_links='appendDummy', follow=True),
)

def process(self, response):
    print 'response is called'
    print response

def appendDummy(self, links):
    # the dummy parameter changes the request fingerprint, so the server's
    # redirect target no longer collides with an already-seen request
    for link in links:
        link.url = link.url + "?dummy=true"
    return links
Because the server ignores the dummy parameter appended to the URL, we are essentially fooling the fingerprinting into treating the original request and the redirected request as different requests.

Another solution is to reorder the query parameters yourself in the process_links callback (appendDummy in the example above); a sketch follows.
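
Something like this, assuming the parameter order can be read off the 301 targets in the log (the helper name reorderParams and DESIRED_ORDER are mine):

import urlparse
import urllib

# target order, taken from the server's 301 targets in the log above
DESIRED_ORDER = ['id_category', 'controller', 'id_lang']

def reorderParams(self, links):
    # hypothetical process_links callback: rewrite each link so its query
    # string already matches the order the server redirects to, so no 301
    # (and therefore no duplicate fingerprint) ever occurs
    for link in links:
        parts = urlparse.urlparse(link.url)
        params = dict(urlparse.parse_qsl(parts.query))
        ordered = [(key, params[key]) for key in DESIRED_ORDER if key in params]
        link.url = urlparse.urlunparse(parts._replace(query=urllib.urlencode(ordered)))
    return links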

Other options would be to override the request fingerprint so that it distinguishes these kinds of URLs (wrong in the general case, I think, but acceptable here), or to use a simple fingerprint based on the raw URL (again, only suitable for a case like this one); see the sketch below.
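
A minimal sketch of the raw-URL variant via a custom dupefilter (the class name RawUrlDupeFilter and the settings path are assumptions; the module path scrapy.dupefilter matches scrapy versions of that era):

import hashlib

from scrapy.dupefilter import RFPDupeFilter

class RawUrlDupeFilter(RFPDupeFilter):
    def request_fingerprint(self, request):
        # hash the URL exactly as requested, without canonicalization, so a URL
        # and its query-reordered redirect target get different fingerprints
        return hashlib.sha1(request.url).hexdigest()

Then point DUPEFILTER_CLASS at it in settings.py, e.g. DUPEFILTER_CLASS = 'project.dupefilters.RawUrlDupeFilter'.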

Please let me know whether this solution works for you.


P.S. scrapy's handling of the reordered original URL is correct behavior. I do not understand why the server redirects to a reordered query string in the first place.

Comments:

- It is not just one page; you can see in the log that the pages are different, each with a different ?id_category number.
- I really do not understand. Is the problem with the xpath? These 8 URLs are different, so why does scrapy filter them as duplicates?
- One URL redirects to another, so scrapy filters it as a duplicate. How can I stop it from filtering them while still using a CrawlSpider? Sorry for all these comments.
- It only shows the message for one page because it only reports the first duplicate: "no more duplicates will be shown (see DUPEFILTER_CLASS)".
- Sorry if my answer was not clear enough. Did you solve your problem, or are you still working on it? Did the solution I posted work for you? If not, I would love to know what alternative worked.
- It works, thank you :)