Python 2.7: forcing Scrapy to use Python 2.7

On my system I have both Python 3 and Python 2.7. Scrapy (at the version I have installed) only supports Python 2.7, but its libraries ended up linked to Python 3.4. I am trying to run the basic example that ships with the Scrapy documentation, namely:
#!/usr/bin/python2.7
import scrapy


class StackOverflowSpider(scrapy.Spider):
    name = 'stackoverflow'
    start_urls = ['http://stackoverflow.com/questions?sort=votes']

    def parse(self, response):
        for href in response.css('.question-summary h3 a::attr(href)'):
            full_url = response.urljoin(href.extract())
            yield scrapy.Request(full_url, callback=self.parse_question)

    def parse_question(self, response):
        yield {
            'title': response.css('h1 a::text').extract()[0],
            'votes': response.css('.question .vote-count-post::text').extract()[0],
            'body': response.css('.question .post-text').extract()[0],
            'tags': response.css('.question .post-tag::text').extract(),
            'link': response.url,
        }
To run this code (saved as stackoverflow_spider.py), the documentation suggests the following command:

scrapy runspider stackoverflow_spider.py -o top-stackoverflow-questions.json

The problem is that this somehow invokes Python 3, and since I am not calling a Python version explicitly, I am not sure how to force the command to use the Python 2.7 libs.

Here is the error I get:
Traceback (most recent call last):
File "/usr/local/bin/scrapy", line 9, in <module>
load_entry_point('Scrapy==1.0.5', 'console_scripts', 'scrapy')()
File "/usr/local/lib/python3.4/dist-packages/scrapy/cmdline.py", line 122, in execute
cmds = _get_commands_dict(settings, inproject)
File "/usr/local/lib/python3.4/dist-packages/scrapy/cmdline.py", line 46, in _get_commands_dict
cmds = _get_commands_from_module('scrapy.commands', inproject)
File "/usr/local/lib/python3.4/dist-packages/scrapy/cmdline.py", line 29, in _get_commands_from_module
for cmd in _iter_command_classes(module):
File "/usr/local/lib/python3.4/dist-packages/scrapy/cmdline.py", line 21, in _iter_command_classes
for obj in vars(module).itervalues():
AttributeError: 'dict' object has no attribute 'itervalues'
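The AttributeError itself points at the root cause: Scrapy 1.0.x's cmdline.py calls the Python 2 dict method itervalues(), which Python 3 removed in favor of values(). A minimal sketch of the incompatibility, runnable on Python 3:

```python
d = {'a': 1, 'b': 2}

# Python 2 dicts had itervalues(); Python 3 removed it, which is why
# Scrapy 1.0.x's _iter_command_classes fails when run under Python 3.4.
print(hasattr(d, 'itervalues'))  # False on Python 3

# The Python 3 replacement is values(), which already returns a lazy view.
print(sorted(d.values()))  # [1, 2]
```

So the traceback is not a broken install as such: it is Python-2-only code being executed by a Python 3 interpreter.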
PS:我已经在Python2.7下安装了scrapy相关软件。@alecxe谢谢,
pip安装scrapy==1.1.0rc1
是我缺少的部分!现在工作!相关。@alecxe谢谢,pip install scrapy==1.1.0rc1
是我丢失的部分!现在工作!
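For completeness, an alternative to upgrading is to pin the command to the Python 2.7 interpreter explicitly instead of relying on the shebang of /usr/local/bin/scrapy (which on this system resolves to Python 3.4). A hedged sketch, assuming Scrapy is installed for that interpreter (the filenames match the example above):

```shell
# Run Scrapy's command-line entry point under Python 2.7 explicitly;
# scrapy.cmdline is the module that /usr/local/bin/scrapy dispatches to.
python2.7 -m scrapy.cmdline runspider stackoverflow_spider.py -o top-stackoverflow-questions.json
```

That said, installing scrapy==1.1.0rc1 (the first release candidate with Python 3 support) as noted above is the cleaner fix, since it makes the Python 3.4 install work as well.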