Scrapy FormRequest can't handle a credit card login form

I can't get my Scrapy spider to crawl my Discover account page.

I'm new to this. I've read all the relevant documentation, but I can't seem to get the form request submitted correctly. I've supplied the formname, userID, and password:

import scrapy

class DiscoverSpider(scrapy.Spider):
    name = "Discover"
    start_urls = ['https://www.discover.com']

    def parse(self, response):
        return scrapy.FormRequest.from_response(
            response,
            formname='loginForm',
            formdata={'userID': 'userID', 'password': 'password'},
            callback=self.after_login
        )

    def after_login(self, response):
        # check that login succeeded before going on
        # (response.body is bytes in Python 3, so compare with a bytes literal)
        if b"authentication failed" in response.body:
            self.logger.error("Login failed")
        return
After submitting the form, I expected the spider to crawl my account page. Instead, the spider is redirected to https://portal.discover.com/psv1/notification.html. Here is the spider console output:

2018-12-26 11:39:46 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: 
MoneySpiders)
2018-12-26 11:39:46 [scrapy.utils.log] INFO: Versions: lxml 4.2.5.0, 
libxml2 2.9.8, cssselect 1.0.3, parsel 1.5.1, w3lib 1.19.0, Twisted 18.7.0, 
Python 3.7.0 (default, Jun 28 2018, 08:04:48) [MSC v.1912 64 bit (AMD64)], 
pyOpenSSL 18.0.0 (OpenSSL 1.0.2p  14 Aug 2018), cryptography 2.3.1, 
Platform Windows-10-10.0.17134-SP0
2018-12-26 11:39:46 [scrapy.crawler] INFO: Overridden settings: 
{'BOT_NAME': 'MoneySpiders', 'NEWSPIDER_MODULE': 'MoneySpiders.spiders', 
'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['MoneySpiders.spiders']}
2018-12-26 11:39:46 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
'scrapy.extensions.telnet.TelnetConsole',
'scrapy.extensions.logstats.LogStats']
2018-12-26 11:39:46 [scrapy.middleware] INFO: Enabled downloader 
middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
'scrapy.downloadermiddlewares.retry.RetryMiddleware',
'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
'scrapy.downloadermiddlewares.stats.DownloaderStats']
2018-12-26 11:39:46 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
'scrapy.spidermiddlewares.referer.RefererMiddleware',
'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
'scrapy.spidermiddlewares.depth.DepthMiddleware']
2018-12-26 11:39:47 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-12-26 11:39:47 [scrapy.core.engine] INFO: Spider opened
2018-12-26 11:39:47 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 
0 pages/min), scraped 0 items (at 0 items/min)
2018-12-26 11:39:47 [scrapy.extensions.telnet] DEBUG: Telnet console 
listening on 
2018-12-26 11:39:47 [scrapy.core.engine] DEBUG: Crawled (200) <GET 
https://www.discover.com/robots.txt> (referer: None)
2018-12-26 11:39:47 [scrapy.core.engine] DEBUG: Crawled (200) <GET 
https://www.discover.com> (referer: None)
2018-12-26 11:39:48 [scrapy.core.engine] DEBUG: Crawled (200) <GET 
https://portal.discover.com/robots.txt> (referer: None)
2018-12-26 11:39:48 [scrapy.downloadermiddlewares.redirect] DEBUG: 
Redirecting (302) to <GET 
https://portal.discover.com/psv1/notification.html> from <POST 
https://portal.discover.com/customersvcs/universalLogin/signin>
2018-12-26 11:39:48 [scrapy.core.engine] DEBUG: Crawled (200) <GET 
https://portal.discover.com/psv1/notification.html> (referer: 
https://www.discover.com)
2018-12-26 11:39:48 [scrapy.core.scraper] ERROR: Spider error processing 
<GET https://portal.discover.com/psv1/notification.html> (referer: 
https://www.discover.com)

From the response I'm getting:

Your account cannot be accessed at this time. Outdated browsers can place your computer at a security risk. For the best experience on Discover.com, you may need to update your browser to the latest version and try again.


So it seems the website does not recognize your spider as a valid browser. To get around this, you need to set a suitable User-Agent, and probably some of the other headers that browser normally sends.
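For example, a minimal sketch of what that could look like (the User-Agent string and extra headers below are only illustrative values for a desktop Chrome of that era; copy whatever your real browser actually sends, visible in its developer tools):

import scrapy

class DiscoverSpider(scrapy.Spider):
    name = "Discover"
    start_urls = ['https://www.discover.com']

    # Per-spider settings that make the requests look like a regular
    # desktop browser. USER_AGENT and DEFAULT_REQUEST_HEADERS are
    # standard Scrapy settings.
    custom_settings = {
        'USER_AGENT': ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                       'AppleWebKit/537.36 (KHTML, like Gecko) '
                       'Chrome/71.0.3578.98 Safari/537.36'),
        'DEFAULT_REQUEST_HEADERS': {
            'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
            'Accept-Language': 'en-US,en;q=0.9',
        },
    }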

Without seeing the content of the notification page it's hard to know what the problem is. Have you inspected the response content, perhaps with errback= or with the HTTP cache turned on? Also, have you set a proper User-Agent so that they can't tell you're a spider?
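For reference, a minimal sketch combining both suggestions (errback= is the standard scrapy.Request argument for handling request failures, HTTPCACHE_ENABLED turns on Scrapy's built-in HTTP cache, and the after_login.html file name is just an illustration):

import scrapy

class DiscoverSpider(scrapy.Spider):
    name = "Discover"
    start_urls = ['https://www.discover.com']

    # Cache every response on disk (.scrapy/httpcache/) so pages like
    # the notification page can be re-inspected without re-crawling.
    custom_settings = {'HTTPCACHE_ENABLED': True}

    def parse(self, response):
        return scrapy.FormRequest.from_response(
            response,
            formname='loginForm',
            formdata={'userID': 'userID', 'password': 'password'},
            callback=self.after_login,
            errback=self.on_error,  # called on network-level failures
        )

    def after_login(self, response):
        # Dump the body to a file so the returned page can be inspected.
        with open('after_login.html', 'wb') as f:
            f.write(response.body)

    def on_error(self, failure):
        # failure is a twisted.python.failure.Failure
        self.logger.error('Request failed: %r', failure)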