502 error when scraping LinkedIn with Scrapy and Splash

Tags: scrapy, scrapy-splash

I am trying to scrape the LinkedIn company page for Netflix using Scrapy with Splash. It works perfectly well when I use scrapy shell, but when I run the script it gives a 502 error.

The error:

2017-01-06 16:06:45 [scrapy.core.engine] INFO: Spider opened
2017-01-06 16:06:45 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-01-06 16:06:52 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www.linkedin.com/company/netflix via http://localhost:8050/render.html> (failed 1 times): 502 Bad Gateway
2017-01-06 16:06:55 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www.linkedin.com/company/netflix via http://localhost:8050/render.html> (failed 2 times): 502 Bad Gateway
2017-01-06 16:07:05 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET https://www.linkedin.com/company/netflix via http://localhost:8050/render.html> (failed 3 times): 502 Bad Gateway
2017-01-06 16:07:05 [scrapy.core.engine] DEBUG: Crawled (502) <GET https://www.linkedin.com/company/netflix via http://localhost:8050/render.html> (referer: None)
2017-01-06 16:07:05 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <502 https://www.linkedin.com/company/netflix>: HTTP status code is not handled or not allowed
2017-01-06 16:07:05 [scrapy.core.engine] INFO: Closing spider (finished)
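As an aside, the last `httperror` line means Scrapy discarded the 502 response before it ever reached the spider's callback. If you want to inspect the body Splash returned (it often contains a useful error message), you can whitelist the status code while debugging. A minimal sketch, as a `settings.py` fragment; this only lets you see the failed response, it does not fix the underlying error:

```python
# settings.py -- pass 502 responses through to the spider callback
# instead of dropping them in HttpErrorMiddleware (debugging aid only)
HTTPERROR_ALLOWED_CODES = [502]
```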
The spider's code:

import scrapy
from scrapy_splash import SplashRequest
from linkedin.items import LinkedinItem


class LinkedinScrapy(scrapy.Spider):
    name = 'linkedin_spider'  # spider name
    allowed_domains = ['linkedin.com']
    start_urls = ['https://www.linkedin.com/company/netflix']

    def start_requests(self):
        for url in self.start_urls:
            yield SplashRequest(url, self.parse,
                                endpoint='render.html', args={'wait': 0.5})

    def parse(self, response):
        item = LinkedinItem()
        item['name'] = response.xpath('//*[@id="stream-promo-top-bar"]/div[2]/div[1]/div[1]/div/h1/span/text()').extract_first()
        item['followers'] = response.xpath('//*[@id = "biz-follow-mod"]/div/div/div/p/text()').extract_first().split()[0]
        item['description'] = response.xpath('//*[@id="stream-about-section"]/div[2]/div[1]/div/p/text()').extract_first()
        yield item

This is probably LinkedIn denying access because your request uses Scrapy's default user-agent string:

"User-Agent": "Scrapy/1.3.0 (+http://scrapy.org)"

You should change the user agent in your spider to something else, provided LinkedIn's T&Cs say it is OK to scrape them, see

...oh wait, they don't. Also, if they are blocking him from scraping, you would think they don't allow it, and that someone actually took the time to block his scraping attempts.

@zerohero The data is publicly available, so the terms and conditions are irrelevant. Please educate yourself on the legality of web scraping before leaving comments like these.

I think you are the one who needs the education. The data may be public, but using automated scraping utilities violates their terms, because it causes unnecessary use of server resources while they cannot generate revenue from it. I have been through this issue, and the legal questions surrounding it, several times.

@zerohero Please read: . Even if it were theft, robbery, or whatever else you might want to call it, that is completely irrelevant when it comes to an educational website called stackexchange.
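If you do go the custom-user-agent route, a minimal sketch (assuming a standard Scrapy project with a `settings.py`) is to override the default `USER_AGENT` setting:

```python
# settings.py -- replace Scrapy's default user agent with a browser-like one.
# The string below is only an example; substitute any current browser UA.
USER_AGENT = (
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 '
    '(KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'
)
```

Note that Splash issues its own request to the target site, so depending on your scrapy-splash configuration you may also need to forward the header per request, e.g. by passing `headers={'User-Agent': ...}` to `SplashRequest`.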
"User-Agent": "Scrapy/1.3.0 (+http://scrapy.org)"