Python Scrapy - How to redirect to another page after implementing user login


I have recently been learning Scrapy and want to scrape zoominfo. I have written the user login function, but I cannot redirect to the search page where the data scraping is done. The URL I want to redirect to is
http://subscriber.zoominfo.com/zoominfo/

Here is my code:

#!/usr/bin/env python
# -*- coding:utf-8 -*-
import scrapy
from scrapy.selector import Selector
from scrapy.http import Request, FormRequest
from tutorial.items import TutorialItem


class LoginSpider(scrapy.Spider):
    name = 'zoominfo'
    login_page = ['https://www.zoominfo.com/login']
    start_urls = [
    'http://subscriber.zoominfo.com/zoominfo/',
    ]
    headers = {
        "Accept":"text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "Accept-Encoding":"gzip, deflate, br",
        "Accept-Language":"en-US,en;q=0.5",
        "Connectionc":"keep-alive",
        "User-Agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10.12; rv:50.0) Gecko/20100101 Firefox/50.0",
        "Referer":"https://www.zoominfo.com/login/"
    }   
    def init_request(self):
        return Request(url=self.login_page, callback=self.login)

    def login(self, response):
        print "Preparing Login"
        return FormRequest.from_response(
            response,
            headers=self.headers,
            formdata={
            'username':username, 
            'password':password},
            callback=self.after_login,
            dont_filter = True,
        )

    def after_login(self, response):
        if username in response.body:
            self.log("Success")
            self.initialized()
        else:
            self.log("Bad times")

    def parse(self, response):
        base_url = 'http://subscriber.zoominfo.com/zoominfo/'
        text = Selector(response)
        item = TutorialItem()
        item['title'] = text.xpath('//title/text()').extract()
        print {'title':item["title"]}
        request = Request(base_url, callback=self.parse)

The main output is as follows:

2017-01-09 16:52:58 [scrapy.core.engine] INFO: Spider opened
2017-01-09 16:52:59 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-01-09 16:52:59 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://subscriber.zoominfo.com/zoominfo/> (referer: None)
{'title': []}
2017-01-09 16:52:59 [scrapy.core.engine] INFO: Closing spider (finished)

The output prints neither "Preparing Login" nor the correct title. I hope someone can give me some hints. Thanks a lot.

Judging from the log, init_request() is not being executed. There is a reason for that: init_request() only works with InitSpider, not with the regular Spider class:

from scrapy.spiders.init import InitSpider

class LoginSpider(InitSpider):
    # ...
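
For completeness, here is a minimal sketch of what the spider could look like once it inherits from InitSpider. This is an assumption-laden illustration rather than a verified working crawler: USERNAME and PASSWORD are placeholders, the form field names 'username' and 'password' depend on ZoomInfo's actual login form, and the success check simply reuses the original "username appears in the response" heuristic.

#!/usr/bin/env python
# -*- coding:utf-8 -*-
from scrapy.http import Request, FormRequest
from scrapy.spiders.init import InitSpider

USERNAME = 'your-username'  # placeholder credential
PASSWORD = 'your-password'  # placeholder credential


class LoginSpider(InitSpider):
    name = 'zoominfo'
    # a single URL string, not a list
    login_page = 'https://www.zoominfo.com/login'
    start_urls = ['http://subscriber.zoominfo.com/zoominfo/']

    def init_request(self):
        # InitSpider schedules this request before any of the start_urls
        return Request(url=self.login_page, callback=self.login)

    def login(self, response):
        self.logger.info("Preparing Login")
        return FormRequest.from_response(
            response,
            formdata={'username': USERNAME, 'password': PASSWORD},
            callback=self.after_login,
            dont_filter=True,
        )

    def after_login(self, response):
        if USERNAME in response.text:
            self.logger.info("Login succeeded")
            # returning self.initialized() releases the start_urls requests,
            # which are then handled by parse()
            return self.initialized()
        self.logger.error("Login failed")

    def parse(self, response):
        self.logger.info("Page title: %s",
                         response.xpath('//title/text()').extract_first())

The important differences from the original spider are the InitSpider base class (so init_request() actually runs), login_page defined as a plain string instead of a list, and after_login() returning self.initialized() so that the start_urls requests are released and eventually reach parse().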

Hey bro, I followed your method and the output now shows "Preparing Login", but I still could not log in successfully. Do you know which step I got wrong?