Python 2.7 Scrapy not entering the parse function


I am running the spider below, but it never enters the parse method and I can't figure out why. Could someone please help?

My code is below:

    from scrapy.item import Item, Field
    from scrapy.selector import Selector
    from scrapy.spider import BaseSpider
    from scrapy.selector import HtmlXPathSelector


    class MyItem(Item):
        reviewer_ranking = Field()
        print "asdadsa"


    class MySpider(BaseSpider):
        name = 'myspider'
        allowed_domains = ["amazon.com"]
        start_urls = ["http://www.amazon.com/gp/pdp/profile/A28XDLTGHPIWE1/ref=cm_cr_pr_pdp"]
        print"sadasds"
        def parse(self, response):
            print"fggfggftgtr"
            sel = Selector(response)
            hxs = HtmlXPathSelector(response)
            item = MyItem()
            item["reviewer_ranking"] = hxs.select('//span[@class="a-size-small a-color-secondary"]/text()').extract()
            return item
The output I get is as follows:

    $ scrapy runspider crawler_reviewers_data.py
    asdadsa
    sadasds
    /home/raj/Documents/IIM A/Daily sales rank/Daily reviews/Reviews_scripts/Scripts_review/Reviews/Reviewer/crawler_reviewers_data.py:18: ScrapyDeprecationWarning: crawler_reviewers_data.MySpider inherits from deprecated class scrapy.spider.BaseSpider, please inherit from scrapy.spider.Spider. (warning only on first subclass, there may be others)
    class MySpider(BaseSpider):
    2014-06-24 19:21:35+0530 [scrapy] INFO: Scrapy 0.22.2 started (bot: scrapybot)
    2014-06-24 19:21:35+0530 [scrapy] INFO: Optional features available: ssl, http11
    2014-06-24 19:21:35+0530 [scrapy] INFO: Overridden settings: {}
    2014-06-24 19:21:35+0530 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
    2014-06-24 19:21:35+0530 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, HttpProxyMiddleware, ChunkedTransferMiddleware, DownloaderStats
    2014-06-24 19:21:35+0530 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
    2014-06-24 19:21:35+0530 [scrapy] INFO: Enabled item pipelines: 
    2014-06-24 19:21:35+0530 [myspider] INFO: Spider opened
    2014-06-24 19:21:35+0530 [myspider] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
    2014-06-24 19:21:35+0530 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6027
    2014-06-24 19:21:35+0530 [scrapy] DEBUG: Web service listening on 0.0.0.0:6084
    2014-06-24 19:21:36+0530 [myspider] DEBUG: Crawled (403) <GET http://www.amazon.com/gp/pdp/profile/A28XDLTGHPIWE1/ref=cm_cr_pr_pdp> (referer: None) ['partial']
    2014-06-24 19:21:36+0530 [myspider] INFO: Closing spider (finished)
    2014-06-24 19:21:36+0530 [myspider] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 259,
     'downloader/request_count': 1,
     'downloader/request_method_count/GET': 1,
     'downloader/response_bytes': 28487,
     'downloader/response_count': 1,
     'downloader/response_status_count/403': 1,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2014, 6, 24, 13, 51, 36, 631236),
     'log_count/DEBUG': 3,
     'log_count/INFO': 7,
     'response_received_count': 1,
     'scheduler/dequeued': 1,
     'scheduler/dequeued/memory': 1,
     'scheduler/enqueued': 1,
     'scheduler/enqueued/memory': 1,
     'start_time': datetime.datetime(2014, 6, 24, 13, 51, 35, 472849)}
    2014-06-24 19:21:36+0530 [myspider] INFO: Spider closed (finished)

Please help me out - I am stuck at this point.

This is an anti-crawling technique used by Amazon - you are getting the 403 response because it requires a User-Agent header to be sent along with the request.

One option is to add the header manually to each request, as shown in the spider code at the end of this post.

Another option is to set it project-wide via the DEFAULT_REQUEST_HEADERS setting.
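A minimal sketch of what that could look like in the project's settings.py (the header value here is just an illustrative browser User-Agent string; Scrapy also exposes a dedicated USER_AGENT setting for this particular header):

    # settings.py - headers merged into every request the project makes
    DEFAULT_REQUEST_HEADERS = {
        'User-Agent': ("Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 "
                       "(KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1"),
    }

    # Alternatively, the setting dedicated to the User-Agent header:
    USER_AGENT = ("Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 "
                  "(KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1")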

Also note that Amazon provides an API - consider using it instead.


Hope that helps.

Thanks a lot for the quick reply. The manual approach didn't work - I got the same 403 error. Could you tell me how to set DEFAULT_REQUEST_HEADERS in the spider?

@user2019135 Did you remove the start_urls attribute? I tested the code before posting - it works for me. @user2019135 Here it is.

Yes, it works now, thank you so much. One more thing, if you don't mind: how do I pass in a file containing the list of URLs I want to crawl?

For reference, the spider from the answer, with the User-Agent header added manually via start_requests():
    from scrapy.http import Request
    from scrapy.spider import BaseSpider


    class MySpider(BaseSpider):
        name = 'myspider'
        allowed_domains = ["amazon.com"]

        def start_requests(self):
            # Issue the request with an explicit browser User-Agent header
            # instead of relying on start_urls (which would use Scrapy's default).
            yield Request("https://www.amazon.com/gp/pdp/profile/A28XDLTGHPIWE1/ref=cm_cr_pr_pdp",
                          headers={'User-Agent': "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1"})

        ...
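As for the follow-up question in the comments, a minimal sketch of one way to feed the spider a file of URLs - assuming a hypothetical plain-text file urls.txt with one URL per line:

    from scrapy.http import Request
    from scrapy.spider import BaseSpider


    class MySpider(BaseSpider):
        name = 'myspider'
        allowed_domains = ["amazon.com"]

        def start_requests(self):
            # Read one URL per line, skipping blanks, and request each one
            # with the same explicit User-Agent header as above.
            with open("urls.txt") as f:
                for line in f:
                    url = line.strip()
                    if url:
                        yield Request(url, headers={'User-Agent': "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1"})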