
Python: How to use peewee with Scrapinghub


I want to use peewee to save data to a remote machine. When I run the crawler, I get the following error:

  File "/usr/local/lib/python2.7/site-packages/scrapy/commands/crawl.py", line 57, in run
    self.crawler_process.crawl(spname, **opts.spargs)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 163, in crawl
    return self._crawl(crawler, *args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 167, in _crawl
    d = crawler.crawl(*args, **kwargs)
  File "/usr/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1445, in unwindGenerator
    return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
  File "/usr/local/lib/python2.7/site-packages/twisted/internet/defer.py", line 1299, in _inlineCallbacks
    result = g.send(result)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 90, in crawl
    six.reraise(*exc_info)
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 72, in crawl
    self.engine = self._create_engine()
  File "/usr/local/lib/python2.7/site-packages/scrapy/crawler.py", line 97, in _create_engine
    return ExecutionEngine(self, lambda _: self.stop())
  File "/usr/local/lib/python2.7/site-packages/scrapy/core/engine.py", line 70, in __init__
    self.scraper = Scraper(crawler)
  File "/usr/local/lib/python2.7/site-packages/scrapy/core/scraper.py", line 71, in __init__
    self.itemproc = itemproc_cls.from_crawler(crawler)
  File "/usr/local/lib/python2.7/site-packages/scrapy/middleware.py", line 58, in from_crawler
    return cls.from_settings(crawler.settings, crawler)
  File "/usr/local/lib/python2.7/site-packages/scrapy/middleware.py", line 34, in from_settings
    mwcls = load_object(clspath)
  File "/usr/local/lib/python2.7/site-packages/scrapy/utils/misc.py", line 44, in load_object
    mod = import_module(module)
  File "/usr/local/lib/python2.7/importlib/__init__.py", line 37, in import_module
    __import__(name)
  File "/app/__main__.egg/annuaire_agence_bio/pipelines.py", line 8, in <module>

exceptions.ImportError: No module named peewee

Any suggestions are welcome.

You can't install modules of your choice on Scrapinghub... as far as I know, you can only use MySQLdb.

Create a file named scrapinghub.yml in your project's main folder, with the following contents:

projects:
  default: 111149
requirements:
  file: requirements.txt
where 111149 is my project ID on Scrapinghub.

Create another file in the same directory, named requirements.txt.

Then list the modules you need in that file, with the version numbers you are using, like this:

MySQL-python==1.2.5

PS: I'm using the MySQLdb module, so that's what I put there.
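For the asker's ImportError, the same approach should make peewee available: list it in requirements.txt alongside any other dependencies. A hypothetical example (the pinned versions are assumptions; pin whatever versions you actually develop against, keeping in mind this project runs on Python 2.7):

```
peewee==2.10.2
MySQL-python==1.2.5
```

After updating the file, redeploy the project (for example with the shub deploy command from Scrapinghub's shub CLI) so the declared dependencies are installed into the project's environment.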

I'm not sure I understand the first sentence. You can in fact install any module of your choice on Scrapinghub.