Python 3.x Scrapy, Scrapinghub and Google Cloud Storage: KeyError 'gs' when running spider on Scrapinghub

Tags: python-3.x, scrapy, google-cloud-platform, google-cloud-storage, scrapinghub

I am working on a Scrapy project with Python 3, and the spider is deployed to Scrapinghub. I am also using Google Cloud Storage to store the scraped files, as described in the official documentation.

The spider runs perfectly when I run it locally, and it deploys to Scrapinghub without any errors. I am using scrapy:1.4-py3 as the Scrapinghub stack. When running the spider there, I get the following error:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/twisted/internet/defer.py", line 1386, in _inlineCallbacks
    result = g.send(result)
  File "/usr/local/lib/python3.6/site-packages/scrapy/crawler.py", line 77, in crawl
    self.engine = self._create_engine()
  File "/usr/local/lib/python3.6/site-packages/scrapy/crawler.py", line 102, in _create_engine
    return ExecutionEngine(self, lambda _: self.stop())
  File "/usr/local/lib/python3.6/site-packages/scrapy/core/engine.py", line 70, in __init__
    self.scraper = Scraper(crawler)
  File "/usr/local/lib/python3.6/site-packages/scrapy/core/scraper.py", line 71, in __init__
    self.itemproc = itemproc_cls.from_crawler(crawler)
  File "/usr/local/lib/python3.6/site-packages/scrapy/middleware.py", line 58, in from_crawler
    return cls.from_settings(crawler.settings, crawler)
  File "/usr/local/lib/python3.6/site-packages/scrapy/middleware.py", line 36, in from_settings
    mw = mwcls.from_crawler(crawler)
  File "/usr/local/lib/python3.6/site-packages/scrapy/pipelines/media.py", line 68, in from_crawler
    pipe = cls.from_settings(crawler.settings)
  File "/usr/local/lib/python3.6/site-packages/scrapy/pipelines/images.py", line 95, in from_settings
    return cls(store_uri, settings=settings)
  File "/usr/local/lib/python3.6/site-packages/scrapy/pipelines/images.py", line 52, in __init__
    download_func=download_func)
  File "/usr/local/lib/python3.6/site-packages/scrapy/pipelines/files.py", line 234, in __init__
    self.store = self._get_store(store_uri)
  File "/usr/local/lib/python3.6/site-packages/scrapy/pipelines/files.py", line 269, in _get_store
    store_cls = self.STORE_SCHEMES[scheme]
KeyError: 'gs'
PS: 'gs' comes from the path used to store the files, like

'IMAGES_STORE':'gs://<bucket-name>/'
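For context, the relevant part of my settings looks roughly like this (the bucket name and project ID below are placeholders; `GCS_PROJECT_ID` is the setting Scrapy's GCS store reads, and it only exists from Scrapy 1.5 onwards):

```python
# settings.py -- minimal sketch; bucket and project names are placeholders
ITEM_PIPELINES = {
    'scrapy.pipelines.images.ImagesPipeline': 1,
}

# A 'gs://' URI tells the pipeline to use the Google Cloud Storage backend
IMAGES_STORE = 'gs://example-bucket/'

# Required by Scrapy's GCS file store (Scrapy >= 1.5)
GCS_PROJECT_ID = 'example-project-id'
```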

I have researched this error but have not found any solution. Any help would be greatly appreciated.

Google Cloud Storage support is a new feature in Scrapy 1.5, so you need to use the `scrapy:1.5-py3` stack in Scrapy Cloud.
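In practice, that means pinning the stack in your project's `scrapinghub.yml`, something like this (the project ID below is a placeholder):

```yaml
# scrapinghub.yml -- the project ID is a placeholder
projects:
  default: 123456
stacks:
  default: scrapy:1.5-py3
```

After updating the file, redeploy with `shub deploy` so the spider runs on the new stack.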
