Python scraping: 302 errors and proxy problems


I have been trying to scrape information about the articles from the journal site, using the code below:

# Imports assumed by the snippet (PropertiesItem lives wherever the project defines its items)
import datetime
from urllib.parse import urljoin

import scrapy
from scrapy import Request

from ..items import PropertiesItem


class BasicSpider(scrapy.Spider):
    name = 'ILAR'

    def start_requests(self):
        start_urls = ['https://academic.oup.com/ilarjournal/issue-archive']

        for url in start_urls:
            yield scrapy.Request(url=url, callback=self.parse)

    def parse_item(self, response):
        item = PropertiesItem()

        item['authors'] = response.xpath("//*[contains(@class,'linked-name')]/text()").extract()
        self.log("authors %s" % item['authors'])

        articleTags = response.xpath("//*[@id='ContentTab']/div[1]/div/div//p/text()").extract()
        article = ''.join(articleTags)
        #self.log('ARTICLE TEXT IS: ' + article)

        textFileTitle = response.xpath('//*[@id="ContentColumn"]/div[2]/div[1]/div/div/h1/text()').extract()
        fileTitle = ''.join(textFileTitle)
        pureFileTitle = fileTitle.replace('\n', '').replace('  ', '').replace('\r', '')
        self.log("TEXT TITLE: " + pureFileTitle)
        item['title'] = pureFileTitle
        self.log("title %s" % item['title'])

        articleFile = str('D:/some path/' + pureFileTitle[:-2] + '.txt')

        with open(articleFile, 'wb') as newArticle:
            newArticle.write(article.encode('utf-8'))

        item['url'] = response.url
        item['project'] = self.settings.get('BOT_NAME')
        item['spider'] = self.name
        item['date'] = datetime.datetime.now()

        return item

    def parse(self, response):
        # Get the year and issue URLs and yield Requests
        year_selector = response.xpath('//*[contains(@class,"IssueYear")]//@href')

        for url in year_selector.extract():
            if not year_selector.select('//*[contains(@class,"society-logo-block")]'):
                yield Request(urljoin(response.url, url), dont_filter=True)
            else:
                yield Request(urljoin(response.url, url))

        issue_selector = response.xpath('//*[contains(@id,"item_Resource")]//@href')

        for url in issue_selector.extract():
            if not issue_selector.select('//*[contains(@class,"society-logo-block")]'):
                yield Request(urljoin(response.url, url), dont_filter=True)
            else:
                yield Request(urljoin(response.url, url))

        # Get the article URLs and yield Requests
        article_selector = response.xpath('//*[contains(@class,"viewArticleLink")]//@href')

        for url in article_selector.extract():
            if not article_selector.select('//*[contains(@class,"society-logo-block")]'):
                yield Request(urljoin(response.url, url), dont_filter=True)
            else:
                yield Request(urljoin(response.url, url), callback=self.parse_item)
The proxy settings look like this:

RETRY_TIMES = 10
RETRY_HTTP_CODES = [500, 503, 504, 400, 403, 404, 408, 302]
DOWNLOADER_MIDDLEWARES = {
    'scrapy.downloadermiddlewares.retry.RetryMiddleware': 90,
    'scrapy_proxies.RandomProxy': 100,
    'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware': 110,
}
PROXY_LIST = 'C:/some path/proxies.csv'
PROXY_MODE = 0
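
For reference, scrapy_proxies reads PROXY_LIST as a plain text file with one proxy URL per line, and PROXY_MODE = 0 picks a random proxy from that list for each request. The "Proxy user pass not found" message in the log below is emitted when an entry carries no embedded credentials. A sketch of what proxies.csv might contain (hosts, ports and credentials are placeholders, not real values):

# proxies.csv -- one proxy per line
http://111.222.111.222:8080
http://someuser:somepass@111.222.111.223:3128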
However, when I try to run the code, it fetches all the URLs but does not seem to produce any items. The shell keeps printing these errors:

2018-08-29 16:53:38 [scrapy.proxies] DEBUG: Using proxy, 8 proxies left

2018-08-29 16:53:38 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <https://academic.oup.com/ilarjournal/article/53/1/E99/656113> from <https://academic.oup.com/ilarjournal/article-abstract/53/1/E99/656113>

2018-08-29 16:53:38 [scrapy.proxies] DEBUG: Proxy user pass not found


Another thing that may be important: I have tried running the spider without the proxies, and it still returns 302 errors for all the articles. Any ideas about what is wrong, or pointers to threads that already have a good solution, would be greatly appreciated.

The 30x codes are normal redirects, and you should let them happen.
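
Concretely, 302 is listed in RETRY_HTTP_CODES above, so RetryMiddleware treats every redirect as a failure to retry instead of letting RedirectMiddleware follow it to the full article page. One way to act on this advice is a minimal settings tweak (everything else unchanged):

# Let RedirectMiddleware handle 302s instead of retrying them
RETRY_HTTP_CODES = [500, 503, 504, 400, 403, 404, 408]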

It also looks like your parse_item method returns the item instead of yielding it; try replacing return item with yield item.
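
Applied to the spider above, only the end of parse_item changes:

    item['url'] = response.url
    item['project'] = self.settings.get('BOT_NAME')
    item['spider'] = self.name
    item['date'] = datetime.datetime.now()

    yield item  # yield instead of return, as suggested above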