Scrapy error in Python when using -t csv -o data.csv

Tags: python, csv, scrapy

My Scrapy bot runs on two different systems. One of them works fine, while the other does not, even though they are exact copies of each other. When I use -t csv -o data.csv, I get the following traceback:

Traceback (most recent call last):
  File "/home/scraper/.python/bin/scrapy", line 4, in <module>
    execute()
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/cmdline.py", line 143, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/cmdline.py", line 89, in _run_print_help
    func(*a, **kw)
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/cmdline.py", line 150, in _run_command
    cmd.run(args, opts)
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/commands/crawl.py", line 50, in run
    self.crawler_process.start()
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/crawler.py", line 92, in start
    if self.start_crawling():
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/crawler.py", line 124, in start_crawling
    return self._start_crawler() is not None
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/crawler.py", line 139, in _start_crawler
    crawler.configure()
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/crawler.py", line 46, in configure
    self.extensions = ExtensionManager.from_crawler(self)
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/middleware.py", line 50, in from_crawler
    return cls.from_settings(crawler.settings, crawler)
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/middleware.py", line 31, in from_settings
    mw = mwcls.from_crawler(crawler)
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/contrib/feedexport.py", line 162, in from_crawler
    o = cls(crawler.settings)
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/contrib/feedexport.py", line 144, in __init__
    if not self._storage_supported(self.urifmt):
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/contrib/feedexport.py", line 214, in _storage_supported
    self._get_storage(uri)
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/contrib/feedexport.py", line 225, in _get_storage
    return self.storages[urlparse(uri).scheme](uri)
  File "/home/scraper/.python/lib/python2.7/site-packages/scrapy/contrib/feedexport.py", line 70, in __init__
    self.path = file_uri_to_path(uri)
  File "/home/scraper/.python/lib/python2.7/site-packages/w3lib/url.py", line 141, in file_uri_to_path
    uri_path = moves.urllib.parse.urlparse(uri).path
AttributeError: 'Module_six_moves_urllib_parse' object has no attribute 'urlparse'
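The call that fails at the bottom of the traceback can be isolated outside Scrapy. On the broken machine, this one-liner should presumably raise the same AttributeError, since it performs the exact attribute access that w3lib's file_uri_to_path does (a minimal check, not part of the original post; the file URI is just a placeholder):

     python -c "from six import moves; moves.urllib.parse.urlparse('file:///data.csv')"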

Please paste your spider code

It looks like your six module is not the version that w3lib requires.
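Very old six releases lack parts of the urllib compatibility moves that w3lib relies on here, which matches the AttributeError above. Comparing the installed versions on the two machines should confirm the mismatch (a quick check, not part of the original answer):

     pip freeze | grep -iE '^(six|w3lib)'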

Try:

     pip install -U w3lib six
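Once both packages are upgraded, the call that failed in the traceback should resolve again; a quick sanity check (same placeholder URI as above):

     python -c "from six import moves; print(moves.urllib.parse.urlparse('file:///data.csv').scheme)"

This should print file rather than raise the AttributeError.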