Scrapy KeyError on Linux server but not on Windows


My Scrapy project runs fine on my local Windows machine. When I try to run it on my AWS Linux server, however, I get this traceback:

Traceback (most recent call last):
  File "run<spider_name>.py", line 12, in <module>
    spider_name).split())
  File "/usr/lib/python2.7/site-packages/scrapy/cmdline.py", line 142, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/usr/lib/python2.7/site-packages/scrapy/cmdline.py", line 88, in _run_print_help
    func(*a, **kw)
  File "/usr/lib/python2.7/site-packages/scrapy/cmdline.py", line 149, in _run_command
    cmd.run(args, opts)
  File "/usr/lib/python2.7/site-packages/scrapy/commands/crawl.py", line 57, in run
    self.crawler_process.crawl(spname, **opts.spargs)
  File "/usr/lib/python2.7/site-packages/scrapy/crawler.py", line 162, in crawl
    crawler = self.create_crawler(crawler_or_spidercls)
  File "/usr/lib/python2.7/site-packages/scrapy/crawler.py", line 190, in create_crawler
    return self._create_crawler(crawler_or_spidercls)
  File "/usr/lib/python2.7/site-packages/scrapy/crawler.py", line 194, in _create_crawler
    spidercls = self.spider_loader.load(spidercls)
  File "/usr/lib/python2.7/site-packages/scrapy/spiderloader.py", line 51, in load
    raise KeyError("Spider not found: {}".format(spider_name))
KeyError: 'Spider not found: <spider_name>'
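
For context, the first frame of the traceback suggests the crawl is launched from a wrapper script through scrapy.cmdline.execute. A minimal sketch of such a runner (the file name and spider name are placeholders for the redacted <spider_name>) might look like this:

from scrapy.cmdline import execute

spider_name = "myspider"  # placeholder; the real name is redacted in the traceback
execute("scrapy crawl {}".format(spider_name).split())

This is equivalent to running "scrapy crawl myspider" from the shell inside the project directory, so the same "Spider not found" error would appear either way.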

Why does this happen, and how can I get it running on my Linux server?

The problem then suddenly resolved itself, which left me confused.

I solved the problem by updating all of the requirements with

pip install -r requirements.txt

I had added scrapy-splash to my requirements file but had forgotten to install it on the server.
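
For illustration, a spider module along these lines (all names hypothetical) shows why a missing package can surface as this KeyError: if scrapy-splash is not installed, the module-level import fails, the spider class is never registered with the spider loader, and, depending on the Scrapy version and the SPIDER_LOADER_WARN_ONLY setting, the crawl can end with KeyError: 'Spider not found' instead of the underlying ImportError.

import scrapy
from scrapy_splash import SplashRequest  # raises ImportError if scrapy-splash is missing

class ExampleSpider(scrapy.Spider):
    name = "example"  # the name that "scrapy crawl" looks up

    def start_requests(self):
        # Render pages through a Splash instance before parsing
        # (Splash settings such as SPLASH_URL are omitted in this sketch).
        yield SplashRequest("http://example.com", callback=self.parse)

    def parse(self, response):
        yield {"title": response.css("title::text").extract_first()}

Running scrapy list in the project directory is a quick way to check which spiders actually loaded on the server.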