Python Scrapy - login not working

I have a site with a single login. I want to log in at http://145.100.108.148/login2/login.php and then scrape the next page, http://145.100.108.148/login2/index.php. Both .html pages must be saved to disk.

from scrapy.http import Request, FormRequest
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from scrapy.selector import HtmlXPathSelector
from scrapy.http import Request

class TestSpider(CrawlSpider):
    name = 'testspider'
    login_page = 'http://145.100.108.148/login2/login.php'
    start_urls = ['http://145.100.108.148/login2/index.php']
    rules = (
        Rule(LinkExtractor(allow=r'.*'),
             callback='parse_item', follow=True),
    )
    login_user = 'test@hotmail.com'
    login_pass = 'test'

    def start_request(self):
        """This function is called before crawling starts"""
        return [Request(url=self.login_page, callback=self.login)]

    def login(self, response):
        """Generate a login request"""
        return FormRequest.from_response(response,
                    formdata={
                    'email': self.login_user,
                    'pass': self.login_pass},
                    callback=self.check_login_response)

    def check_login_response(self, response):
        """Check the response returned by a login request to see if we are
        successfully logged in"""
        if b"Dashboard" in response.body:
            self.logger.info("successfully logged in. Let's start crawling!")
            return self.initialized()
        else:
            self.logger.info("NOT LOGGED IN :(")
            # Something went wrong, we couldn't log in, so nothing happens.
            return

    def parse_item(self, response):
        """Save pages to disk"""
        self.logger.info('Hi, this is an item page! %s', response.url)
        page = response.url.split("/")[-2]
        filename = 'scraped-%s.html' % page
        with open(filename, 'wb') as f:
            f.write(response.body)
        self.log('Saved file %s' % filename)
Output

2018-01-16 10:32:14 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2018-01-16 10:32:14 [scrapy.core.engine] INFO: Spider opened
2018-01-16 10:32:14 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2018-01-16 10:32:14 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2018-01-16 10:32:14 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://145.100.108.148/robots.txt> (referer: None)
2018-01-16 10:32:14 [scrapy.downloadermiddlewares.cookies] DEBUG: Received cookies from: <302 http://145.100.108.148/login2/index.php>
Set-Cookie: PHPSESSID=4oeh65l59aeutc2qetvgtpn0c6; path=/

2018-01-16 10:32:14 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET http://145.100.108.148/login2/login.php> from <GET http://145.100.108.148/login2/index.php>
2018-01-16 10:32:14 [scrapy.downloadermiddlewares.cookies] DEBUG: Sending cookies to: <GET http://145.100.108.148/login2/login.php>
Cookie: PHPSESSID=4oeh65l59aeutc2qetvgtpn0c6

2018-01-16 10:32:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://145.100.108.148/login2/login.php> (referer: None)
2018-01-16 10:32:14 [scrapy.downloadermiddlewares.cookies] DEBUG: Sending cookies to: <GET http://145.100.108.148/login2/register.php>
Cookie: PHPSESSID=4oeh65l59aeutc2qetvgtpn0c6

2018-01-16 10:32:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://145.100.108.148/login2/register.php> (referer: http://145.100.108.148/login2/login.php)
2018-01-16 10:32:14 [testspider] INFO: Hi, this is an item page! http://145.100.108.148/login2/register.php
2018-01-16 10:32:14 [testspider] DEBUG: Saved file scraped-login2.html
2018-01-16 10:32:14 [scrapy.dupefilters] DEBUG: Filtered duplicate request: <GET http://145.100.108.148/login2/register.php> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2018-01-16 10:32:14 [scrapy.downloadermiddlewares.cookies] DEBUG: Sending cookies to: <GET http://145.100.108.148/login2/login.php>
Cookie: PHPSESSID=4oeh65l59aeutc2qetvgtpn0c6

2018-01-16 10:32:14 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://145.100.108.148/login2/login.php> (referer: http://145.100.108.148/login2/register.php)
2018-01-16 10:32:14 [testspider] INFO: Hi, this is an item page! http://145.100.108.148/login2/login.php
2018-01-16 10:32:14 [testspider] DEBUG: Saved file scraped-login2.html
2018-01-16 10:32:14 [scrapy.core.engine] INFO: Closing spider (finished)
So when crawling, the output is the same whether or not the spider is logged in, even with the if/else statement in check_login_response that is supposed to verify the login.

I am also not sure whether the crawler ever has an authenticated session. There is only one saved file, named scraped-login2.html, whereas I expected at least 3 files: the register page, the login page, and the index.php page.
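Part of the missing-files symptom is the filename itself: response.url.split("/")[-2] evaluates to 'login2' for every URL under /login2/, so each saved page overwrites the previous one (the log above indeed shows both register.php and login.php going to scraped-login2.html). A minimal sketch of parse_item keyed on the last path segment instead (the fallback for URLs ending in '/' is an assumption):

def parse_item(self, response):
    """Save each page to disk under its own name"""
    # use the last path segment, e.g. 'login.php'; fall back when the URL ends in '/'
    page = response.url.split("/")[-1] or "index"
    filename = 'scraped-%s.html' % page
    with open(filename, 'wb') as f:
        f.write(response.body)
    self.logger.info('Saved file %s', filename)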

CrawlSpider inherits from Spider, and init_request only takes effect when inheriting from InitSpider. So you need to change the following

def init_request(self):
    """This function is called before crawling starts"""
    return Request(url=self.login_page, callback=self.login)
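to Spider's standard entry point, start_requests, which CrawlSpider does call at startup and which must return an iterable of requests (a list or a generator):

def start_requests(self):
    """This function is called before crawling starts"""
    return [Request(url=self.login_page, callback=self.login)]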

Next, the response you get in response.body will be bytes, so you need to change

if "Dashboard" in response.body:

to

if b"Dashboard" in response.body:


Thanks to @Tarun Lalwani and some trial and error, this is the result:

from scrapy.http import Request, FormRequest
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from scrapy.selector import HtmlXPathSelector
from scrapy.http import FormRequest

class LoginSpider(CrawlSpider):
    name = 'loginspider'
    login_page = 'http://145.100.108.148/login2/login.php'
    start_urls = ['http://145.100.108.148/login2/index.php']
    username = 'test@hotmail.com'
    password = 'test'

    def init_request(self):
        return Request(url=self.login_page, callback=self.start_requests)

    def start_requests(self):
        print("\nstart_requests is here\n")
        yield Request(
            url=self.login_page,
            callback=self.login,
            dont_filter=True
        )

    def login(self, response):
        print("\nlogin is here!\n")
        return FormRequest.from_response(response,
                    formdata={'email': self.username,
                              'pass': self.password},
                    callback=self.check_login_response)

    def check_login_response(self, response):
        print("\ncheck_login_response\n")
        if b"Learn" in response.body:
            print("Worked, logged in")
            # return self.parse_item
        else:
            print("Not logged in")
            return

This still doesn't work as expected. I have edited the main post.
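For reference, the usual way to combine a login step with CrawlSpider is to log in from start_requests (dropping init_request, which CrawlSpider never calls) and, once check_login_response confirms success, re-issue the start URLs with no explicit callback so they flow through CrawlSpider's default parse and the Rule takes over. A minimal sketch, keeping the b"Learn" marker and form field names from the snippet above:

def start_requests(self):
    # log in before anything else; dont_filter in case login.php was already seen
    yield Request(url=self.login_page, callback=self.login, dont_filter=True)

def login(self, response):
    # submit the login form found on login.php
    return FormRequest.from_response(
        response,
        formdata={'email': self.username, 'pass': self.password},
        callback=self.check_login_response)

def check_login_response(self, response):
    if b"Learn" in response.body:
        self.logger.info("Logged in, handing the start URLs to the crawl rules")
        # no callback here: these requests go to CrawlSpider.parse,
        # which applies the Rule/LinkExtractor machinery
        for url in self.start_urls:
            yield Request(url, dont_filter=True)
    else:
        self.logger.error("Login failed")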
if "Dashboard" in response.body:
if b"Dashboard" in response.body: