Python: re-logging in with multiple accounts, or logging in again with a different account (different cookies)

I am scraping a website where each user can download roughly 1,000 pages per day; after that, the user cannot log in again until 0:00 the next day.

So I registered many accounts to work around this. The site does use cookies.

My question is: when an account's quota runs out, how do I log in with another account and continue scraping the pages still queued in the "stack"? Here is my code, to help you understand my problem:

def start_requests(self):
    return [Request(self.start_urls[0], meta = {'cookiejar' : 1}, callback = self.login,dont_filter=True)]

def login(self, response):
    self.account = self.accounts[self.line_count].split(",")
    self.line_count = self.line_count+1
    if(len(self.accounts)<=self.line_count):
        self.line_count = 0;
    self.log('Preparing login:'+self.account[0]+":"+self.account[1].rstrip())
    return [FormRequest.from_response(response,   
                        meta = {'cookiejar' : response.meta['cookiejar']},
                        headers = self.headers, 
                        formdata = {
                        'j_email': self.account[0],
                        'j_password': self.account[1].rstrip(),
                        'submit': 'Ok'
                        },
                        callback = self.parse_url,
                        dont_filter = True,
                        )]
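As an aside, the manual line_count wrap-around in login() above can be sketched more compactly with itertools.cycle; load_accounts below is a hypothetical helper assuming the account.txt format is one email,password pair per line:

```python
from itertools import cycle

def load_accounts(lines):
    """Parse 'email,password' lines into (email, password) tuples.

    Hypothetical helper mirroring the account.txt format used above.
    """
    accounts = []
    for line in lines:
        email, password = line.rstrip("\n").split(",", 1)
        accounts.append((email, password.rstrip()))
    return accounts

# cycle() wraps around automatically, replacing the manual
# line_count bookkeeping in login() above.
account_pool = cycle(load_accounts([
    "alice@example.com,secret1\n",
    "bob@example.com,secret2\n",
]))

email, password = next(account_pool)   # alice@example.com
email, password = next(account_pool)   # bob@example.com
email, password = next(account_pool)   # wraps back to alice@example.com
```

Each call to next() picks the next account, so the spider can pull a fresh account every time a login is needed without index arithmetic.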
Here is my whole code:

# -*- coding:utf-8 -*-
from scrapy.contrib.spiders import CrawlSpider
from scrapy.http import Request, FormRequest
from imo_dlcosco_ships.settings import URLS, COOKIES, HEADER
from imo_dlcosco_ships.items import ShipListItem
from scrapy.selector import Selector
import time

class EquasisSpider(CrawlSpider):
    name = 'imo_202'
    allowed_domains = ["www.equasis.org"]
    start_urls = [
        "http://www.equasis.org/EquasisWeb/public/HomePage",
    ]

    def __init__(self): 
        self.headers = HEADER
        self.cookies = COOKIES
        self.urls = URLS
        f = open("account.txt", "r")
        self.accounts = f.readlines()
        f.close()
        self.line_count = 0



    #login
    def start_requests(self):
        return [Request(self.start_urls[0], meta = {'cookiejar' : 1}, callback = self.login,dont_filter=True)]

    def login(self, response):
        self.account = self.accounts[self.line_count].split(",")
        self.line_count = self.line_count+1
        if(len(self.accounts)<=self.line_count):
            self.line_count = 0;
        self.log('Preparing login:'+self.account[0]+":"+self.account[1].rstrip())
        return [FormRequest.from_response(response,   
                            meta = {'cookiejar' : response.meta['cookiejar']},
                            headers = self.headers, 
                            formdata = {
                            'j_email': self.account[0],
                            'j_password': self.account[1].rstrip(),
                            'submit': 'Ok'
                            },
                            callback = self.parse_url,
                            dont_filter = True,
                            )]


    def parse_url(self, response):
        return [FormRequest(url="http://www.equasis.org/EquasisWeb/restricted/ShipSearchAdvanced?fs=ShipSearch",   
                           meta = {'cookiejar' : response.meta['cookiejar']},
                            headers = self.headers, 
                            cookies = self.cookies,
                            formdata = {
                            'P_PAGE': '1'
                            },
                            dont_filter = True,
                            callback = self.parse_imo_url,
                            )]
    def parse_imo_url(self, response):
        return [FormRequest(url="http://www.equasis.org/EquasisWeb/restricted/ShipList?fs=ShipSearch",   
                            meta = {'cookiejar' : response.meta['cookiejar']},
                            headers = self.headers, 
                            cookies = self.cookies,
                            formdata = {
                            'P_CLASS_ST_rb':'HC',
                            'P_CLASS_rb':'HC',
                            'P_CatTypeShip':'6',
                            'P_CatTypeShip_p2':'6',
                            'P_CatTypeShip_rb':'CM',
                            'P_DW_GT':'250000',
                            'P_DW_LT':'999999',
                            'P_FLAG_rb':'HC',
                            'P_PAGE':'1',
                            'Submit':'SEARCH'
                            },
                            dont_filter = True,
                            callback = self.parse_page_num,
                            )]

    def parse_page_num(self,response):
        hxs = Selector(response)
        loginfail = hxs.xpath('//table[@class="tab"]/tbody/tr/td/div[@id="encart"]/li/text()').extract()

        if loginfail==([u'Your login (e-mail) or/and password are unknown in Equasis. Please, try again']):
            print "relogin"
            self.start_requests()
        if loginfail==([u'Your session has expired, please try to login again']):
            print "relogin"
            self.start_requests()
        if loginfail==([u'You have been disconnected or your login/password is unknown in Equasis. Please, try again.']):
            print "relogin"
            self.start_requests()
        if loginfail==([u'By security, your session has been cancelled.']):
            print "relogin"
            self.start_requests()
        htmlurl = response.url.split('?')[0]
        f = open('page.html','a')
        f.write(response.body)
        f.close()   
        if(htmlurl=='http://www.equasis.org/EquasisWeb/restricted/ShipList'):
            temp1 = hxs.xpath('//form[@name="form"]/table[@class="tab"]/tbody/tr/td[@align="right"]/span/a/@onclick').extract()
            temp2 = temp1[len(temp1)-1].split(";document")[0]
            PageNum = temp2.split("P_PAGE.value=")[1].encode("utf-8")

            for h in range(int(PageNum)):
                yield FormRequest(url="http://www.equasis.org/EquasisWeb/restricted/ShipList?fs=ShipList",
                                meta={'cookiejar' : response.meta['cookiejar'],'pageNum':str(h+1)},
                                headers = self.headers, 
                                cookies = self.cookies,
                                formdata = {
                                'P_CALLSIGN':'',
                                'P_IMO':'',
                                'P_NAME':'',
                                'P_PAGE':'%d' %(h+1)         
                                },
                                dont_filter = True,
                                callback = self.parse_page_imo     
                                )
    def parse_page_imo(self, response):
        hxs = Selector(response)
        loginfail = hxs.xpath('//table[@class="tab"]/tbody/tr/td/div[@id="encart"]/li/text()').extract()
        if(loginfail==([u'Your login (e-mail) or/and password are unknown in Equasis. Please, try again'])):
            print "relogin"
            self.start_requests()
        if(loginfail == [u'Your session has expired, please try to login again']):
            print "relogin"
            self.start_requests()
        if(loginfail == [u'You have been disconnected or your login/password is unknown in Equasis. Please, try again.']):
            print "relogin"
            self.start_requests()
        if(loginfail == [u'By security, your session has been cancelled.']):
            print "relogin"
            self.start_requests()


        htmlurl = response.url.split('?')[0]
        if(htmlurl=='http://www.equasis.org/EquasisWeb/restricted/ShipList'):

            item = ShipListItem()
            shipNameHtml = hxs.xpath('//form[@name="formShip"]/table[@class="tab"]/tbody/tr/td[1]').extract()
            shipHtmlTitle = Selector(text=shipNameHtml[0]).xpath('//text()').extract()
            if(shipHtmlTitle[0].find('Name of ship')>-1):
                item['ship_name'] = hxs.xpath('//form[@name="formShip"]/table[@class="tab"]/tbody/tr/td[1]/a/text()').extract()
            onclickValue = hxs.xpath('//form[@name="formShip"]/table[@class="tab"]/tbody/tr/td[1]/a/@onclick').extract()
            for i in range(len(onclickValue)):
                onclickValue2 = onclickValue[i].split(";document")[0]
                onclickValue3 = onclickValue2.split("P_IMO.value=")[1].encode("utf-8")
                onclickValue[i] = onclickValue3.strip('\'')
            item['imo'] = onclickValue


            for h in range(len(item['imo'])):
                p_imo = item['imo'][h]
                ShipName = item['ship_name'][h]
                p_imo = p_imo.rstrip()
                yield FormRequest("http://www.equasis.org/EquasisWeb/restricted/ShipInfo?fs=ShipList",   
                                meta = {'cookiejar' : response.meta['cookiejar'],'P_imo':p_imo,'ShipName':ShipName},
                                headers = self.headers, 
                                cookies = self.cookies,
                                formdata = {
                                'P_IMO': p_imo
                                },
                                dont_filter = True,
                                callback = self.parse_page_mmsi,
                                )

    def parse_page_mmsi(self,response):
        hxs = Selector(response)
        loginfail = hxs.xpath('//table[@class="tab"]/tbody/tr/td/div[@id="encart"]/li/text()').extract()
        if(loginfail==([u'Your login (e-mail) or/and password are unknown in Equasis. Please, try again'])):
            print "relogin"
            self.start_requests()
        if(loginfail == [u'Your session has expired, please try to login again']):
            print "relogin"
            self.start_requests()
        if(loginfail == [u'You have been disconnected or your login/password is unknown in Equasis. Please, try again.']):
            print "relogin"
            self.start_requests()
        if(loginfail == [u'By security, your session has been cancelled.']):
            print "relogin"
            self.start_requests()
        shipHtml = hxs.xpath('//table[@class="encart"]/tbody/tr').extract()
        item=ShipListItem()
        item['mmsi'] = [u'']
        for j in range(len(shipHtml)):  
            shipHtmlTitle = Selector(text=shipHtml[j]).xpath('//td[1]/text()').extract()
            if(shipHtmlTitle[0].find('MMSI :')>-1):
                item['mmsi'] = Selector(text=shipHtml[j]).xpath('//td[2]/text()').extract()
        item['imo'] = response.meta['P_imo']
        item['ship_name']  = response.meta['ShipName']
        yield item
The start_requests method should return an iterable of scrapy.Request objects. Simply calling it inside a response callback such as parse_page_imo only creates a temporary generator that is then discarded. You should at least return or yield its values, e.g.:

for req in self.start_requests():
    yield req

Edit: likewise, in your response callback login, the returned value should be Request objects (not a list).
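The difference can be demonstrated without Scrapy at all: calling a generator function merely creates a generator object, whose requests are lost unless they are re-yielded. A minimal stand-in sketch (the string markers are placeholders for real Request objects):

```python
def start_requests():
    # Stand-in for the spider's start_requests(); yields "request" markers.
    yield "login-request"

def parse_wrong(response):
    # BUG: the generator is created and immediately discarded --
    # the engine never sees the login request.
    start_requests()
    yield "item"

def parse_fixed(response):
    # Re-yield every request so the engine actually schedules it.
    for req in start_requests():
        yield req
    yield "item"

print(list(parse_wrong(None)))   # ['item']                  -- login lost
print(list(parse_fixed(None)))   # ['login-request', 'item']
```

This is exactly why the bare self.start_requests() calls inside parse_page_num and the other callbacks above never trigger a re-login.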

Finally I solved my problem. I wrote a downloader middleware to handle it: when a login error occurs, I pause the spider, queue all pending requests, log in again, and then resume the spider. Everything seems to work fine.
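A rough sketch of such a downloader middleware, assuming the login-failure messages are detected in the response body; ReloginMiddleware and relogin_request are hypothetical names, and the real Scrapy wiring (from_crawler, settings, base classes) is omitted:

```python
# Sketch of a re-login downloader middleware, as described above.
# Assumption (not from the original post): a `relogin_request` callable
# builds a fresh login request; Request/Response here are duck-typed
# stand-ins for the Scrapy objects.

LOGIN_FAILURES = (
    b"Your session has expired",
    b"your session has been cancelled",
    b"password are unknown in Equasis",
)

class ReloginMiddleware(object):
    def __init__(self, relogin_request):
        # relogin_request: callable returning a fresh login request
        self.relogin_request = relogin_request

    def process_response(self, request, response, spider):
        # Inspect every downloaded response for a login-failure message.
        if any(msg in response.body for msg in LOGIN_FAILURES):
            # Re-queue the failed page behind a fresh login request so it
            # can be retried once the new session is established.
            login = self.relogin_request()
            login.meta["resume"] = request
            return login
        # Normal case: hand the response on to the spider untouched.
        return response
```

In Scrapy, returning a Request from process_response makes the engine schedule that request instead of passing the response to the spider, which is what allows the failed page to be retried after the fresh login completes.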

I tried replacing the start_requests() call with yield [Request(self.start_urls[0], meta={'cookiejar': 1}, callback=self.login, dont_filter=True)], but that did not help either, and I don't think it should be the direct solution to my problem. @sacuba 1. You are doing it wrong: my example yields a Request object, while you tried to yield a list, and Scrapy does not handle lists coming from response callbacks. 2. You only posted part of your code (e.g. what exactly is the website? where is parse_page_imo called?), so people here cannot know what you consider the direct solution to be. 3. At least this is one problem; problems get solved one at a time. I see your point now: my mistake was returning a list instead of an iterable of Request objects. Sorry for the error. I have also added the whole code to my question (without changing the faulty return, since the original code may help you understand my problem). Looking forward to your reply, thanks!