Python: Can't follow links with Scrapy


I can't follow a link and extract values from the target page.

I tried the code below: I'm able to crawl the first link, but after that the spider never moves on to the second, follow-up link (the callback function).


You forgot to return the request from your
parse()
method. Try the following code:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.http.request import Request


class ScrapyOrgSpider(BaseSpider):
    name = "example.com"
    allowed_domains = ["example.com"]
    start_urls = ["http://www.example.com/abcd"]

    def parse(self, response):
        self.log('@@ Original response: %s' % response)
        # Returning the Request hands it back to the engine for scheduling;
        # merely creating (or printing) it would do nothing.
        req = Request("http://www.example.com/follow", callback=self.a_1)
        self.log('@@ Next request: %s' % req)
        return req

    def a_1(self, response):
        hxs = HtmlXPathSelector(response)
        self.log('@@ extraction: %s' %
            hxs.select("//a[@class='channel-link']").extract())
Log output:

2012-11-22 12:20:06-0600 [scrapy] INFO: Scrapy 0.17.0 started (bot: oneoff)
2012-11-22 12:20:06-0600 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2012-11-22 12:20:06-0600 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2012-11-22 12:20:06-0600 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2012-11-22 12:20:06-0600 [scrapy] DEBUG: Enabled item pipelines:
2012-11-22 12:20:06-0600 [example.com] INFO: Spider opened
2012-11-22 12:20:06-0600 [example.com] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2012-11-22 12:20:06-0600 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2012-11-22 12:20:06-0600 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2012-11-22 12:20:07-0600 [example.com] DEBUG: Redirecting (302) to <GET http://www.iana.org/domains/example/> from <GET http://www.example.com/abcd>
2012-11-22 12:20:07-0600 [example.com] DEBUG: Crawled (200) <GET http://www.iana.org/domains/example/> (referer: None)
2012-11-22 12:20:07-0600 [example.com] DEBUG: @@ Original response: <200 http://www.iana.org/domains/example/>
2012-11-22 12:20:07-0600 [example.com] DEBUG: @@ Next request: <GET http://www.example.com/follow>
2012-11-22 12:20:07-0600 [example.com] DEBUG: Redirecting (302) to <GET http://www.iana.org/domains/example/> from <GET http://www.example.com/follow>
2012-11-22 12:20:08-0600 [example.com] DEBUG: Crawled (200) <GET http://www.iana.org/domains/example/> (referer: http://www.iana.org/domains/example/)
2012-11-22 12:20:08-0600 [example.com] DEBUG: @@ extraction: []
2012-11-22 12:20:08-0600 [example.com] INFO: Closing spider (finished)
As the log shows, the second request *is* followed (note the second `Redirecting (302)` line); the extraction comes back empty only because example.com redirects to the iana.org page, which contains no `channel-link` anchors.

The parse function must return the request, not just print it:

def parse(self, response):
    hxs = HtmlXPathSelector(response)
    res1 = Request("http://www.example.com/follow", callback=self.a_1)
    print res1  # if you want
    return res1
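To see why returning matters, here is a minimal sketch (plain Python, not Scrapy itself; the tiny `crawl` engine and the fake page dict are hypothetical stand-ins) of how a callback-driven crawler works: the engine only schedules what the callback hands back, so a request that is printed but never returned is simply discarded.

```python
class Request:
    """Stand-in for scrapy's Request: a URL plus the callback to parse it."""
    def __init__(self, url, callback=None):
        self.url = url
        self.callback = callback

def crawl(start_request, fetch):
    """Tiny engine: fetch a URL, pass the body to its callback,
    and schedule whatever Request the callback RETURNS."""
    visited = []
    req = start_request
    while req is not None:
        body = fetch(req.url)
        visited.append(req.url)
        # If the callback returns None (or only prints), crawling stops here.
        req = req.callback(body) if req.callback else None
    return visited

# Fake pages standing in for example.com
pages = {"/start": "page one", "/follow": "page two"}

def parse(body):
    # Returning the next Request lets the engine follow it.
    return Request("/follow", callback=parse_follow)

def parse_follow(body):
    return None  # no further links

print(crawl(Request("/start", callback=parse), pages.get))
# → ['/start', '/follow']
```

If `parse` printed the `Request` instead of returning it, `crawl` would stop after `/start`, which is exactly the behavior in the original question.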