
Python exceptions.TypeError: cannot convert dictionary update sequence element #1 to a sequence?

I am using the open-source Scrapy project to crawl video comments from Tencent, but I ran into the error below and don't know how to fix it.

2015-10-22 18:33:58 [scrapy] INFO: Scrapy 1.0.1 started (bot: qqtvurl)
2015-10-22 18:33:58 [scrapy] INFO: Optional features available: ssl, http11, boto
2015-10-22 18:33:58 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'qqtvurl.spiders', 'SPIDER_MODULES': ['qqtvurl.spiders'], 'SCHEDULER': 'scrapy_redis.scheduler.Scheduler', 'BOT_NAME': 'qqtvurl'}
2015-10-22 18:33:58 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2015-10-22 18:33:58 [qqtvspider] DEBUG: Reading URLs from redis list 'qqtvspider:star_urls'
Unhandled error in Deferred:
2015-10-22 18:33:58 [twisted] CRITICAL: Unhandled error in Deferred:


Traceback (most recent call last):
  File "D:\anzhuang\Anaconda\lib\site-packages\scrapy\cmdline.py", line 150, in _run_command
    cmd.run(args, opts)
  File "D:\anzhuang\Anaconda\lib\site-packages\scrapy\commands\crawl.py", line 57, in run
    self.crawler_process.crawl(spname, **opts.spargs)
  File "D:\anzhuang\Anaconda\lib\site-packages\scrapy\crawler.py", line 153, in crawl
    d = crawler.crawl(*args, **kwargs)
  File "D:\anzhuang\Anaconda\lib\site-packages\twisted\internet\defer.py", line 1274, in unwindGenerator
    return _inlineCallbacks(None, gen, Deferred())
--- <exception caught here> ---
  File "D:\anzhuang\Anaconda\lib\site-packages\twisted\internet\defer.py", line 1128, in _inlineCallbacks
    result = g.send(result)
  File "D:\anzhuang\Anaconda\lib\site-packages\scrapy\crawler.py", line 71, in crawl
    self.engine = self._create_engine()
  File "D:\anzhuang\Anaconda\lib\site-packages\scrapy\crawler.py", line 83, in _create_engine
    return ExecutionEngine(self, lambda _: self.stop())
  File "D:\anzhuang\Anaconda\lib\site-packages\scrapy\core\engine.py", line 66, in __init__
    self.downloader = downloader_cls(crawler)
  File "D:\anzhuang\Anaconda\lib\site-packages\scrapy\core\downloader\__init__.py", line 65, in __init__
    self.handlers = DownloadHandlers(crawler)
  File "D:\anzhuang\Anaconda\lib\site-packages\scrapy\core\downloader\handlers\__init__.py", line 17, in __init__
    handlers.update(crawler.settings.get('DOWNLOAD_HANDLERS', {}))
exceptions.TypeError: cannot convert dictionary update sequence element #1 to a sequence
2015-10-22 18:33:58 [twisted] CRITICAL:
The error above appears whenever I run the project. Many thanks for any help.

This happens because you are feeding sequence elements (a set) into a dictionary update: the value you supplied is a set, not a dict.

You should write:

DOWNLOAD_HANDLERS = {'S3': None,}
or something along those lines.
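For illustration, the whole difference is one character in the literal (the variable names below are just for demonstration):

# A colon makes a dict literal; a comma makes a set literal.
as_dict = {'S3': None}  # dict -- what DOWNLOAD_HANDLERS expects
as_set = {'S3', None}   # set -- the form that triggers the TypeError
print(type(as_dict), type(as_set))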


You can read more about how to set the value of DOWNLOAD_HANDLERS, with examples, in the Scrapy documentation.
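As a sketch only (the commented-out handler path below is illustrative; check the documentation of your Scrapy version for the exact class paths), a valid DOWNLOAD_HANDLERS maps lowercase URI schemes either to a handler class path or to None to disable the built-in handler:

# settings.py
DOWNLOAD_HANDLERS = {
    's3': None,  # disable the built-in S3 download handler
    # 'ftp': 'scrapy.core.downloader.handlers.ftp.FTPDownloadHandler',
}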

{'S3', None,} is a set, while the code expects DOWNLOAD_HANDLERS to be a dict or a sequence of (key, value) tuples.

Replace {'S3', None,} with {'S3': None} and you should no longer get this error:

DOWNLOAD_HANDLERS = {'S3': None,}
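To see where the odd "sequence element #1" wording comes from, here is a minimal reproduction in plain Python, no Scrapy needed (the variable name is illustrative):

handlers = {}
handlers.update({'S3': None})     # OK: updating from a dict
handlers.update([('ftp', None)])  # OK: updating from (key, value) tuples
# A set is consumed element by element: 'S3' happens to unpack as a
# 2-character sequence ('S', '3'), but None is not a sequence at all,
# so the update fails on it (the element index depends on set order):
handlers.update({'S3', None})     # raises the TypeError above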