Scrapy tutorial: can't run scrapy crawl dmoz


I'm asking a new question because I realize I wasn't clear enough in my previous one. I'm trying to follow the Scrapy tutorial, but I'm stuck at the crucial step, the "scrapy crawl dmoz" command. My code is as follows (I wrote it in the Python shell and then saved it with the .py extension):

The directory layout I'm using should be fine; see the tree below:

.
├── scrapy.cfg
└── tutorial
    ├── __init__.py
    ├── __init__.pyc
    ├── items.py
    ├── pipelines.py
    ├── settings.py
    ├── settings.pyc
    └── spiders
        ├── __init__.py
        ├── __init__.pyc
        └── dmoz_spider.py

2 directories, 10 files
Now, when I try to run "scrapy crawl dmoz", I get the following:

$ scrapy crawl dmoz

2013-08-14 12:51:40+0200 [scrapy] INFO: Scrapy 0.16.5 started (bot: tutorial)
2013-08-14 12:51:40+0200 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/2.7/bin/scrapy", line 5, in <module>
    pkg_resources.run_script('Scrapy==0.16.5', 'scrapy')
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pkg_resources.py", line 499, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pkg_resources.py", line 1235, in run_script
    execfile(script_filename, namespace, namespace)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/EGG-INFO/scripts/scrapy", line 4, in <module>
    execute()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/cmdline.py", line 131, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/cmdline.py", line 76, in _run_print_help
    func(*a, **kw)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/cmdline.py", line 138, in _run_command
    cmd.run(args, opts)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/commands/crawl.py", line 43, in run
    spider = self.crawler.spiders.create(spname, **opts.spargs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/command.py", line 33, in crawler
    self._crawler.configure()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/crawler.py", line 40, in configure
    self.spiders = spman_cls.from_crawler(self)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/spidermanager.py", line 35, in from_crawler
    sm = cls.from_settings(crawler.settings)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/spidermanager.py", line 31, in from_settings
    return cls(settings.getlist('SPIDER_MODULES'))
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/spidermanager.py", line 22, in __init__
    for module in walk_modules(name):
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/Scrapy-0.16.5-py2.7.egg/scrapy/utils/misc.py", line 65, in walk_modules
    submod = __import__(fullpath, {}, {}, [''])
  File "/Users//Documents/tutorial/tutorial/spiders/dmoz_spider.py", line 1
    ActivePython 2.7.2.5 (ActiveState Software Inc.) based on
                   ^
SyntaxError: invalid syntax
Does anyone know what is going wrong in my steps?
Thanks for your help. This is my first programming experience, so this may be a really silly question.

The indentation is incorrect. It should be:

from scrapy.spider import BaseSpider

class dmoz(BaseSpider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        filename = response.url.split("/")[-2]
        open(filename, 'wb').write(response.body)

I think you copy-pasted the code from an IDLE session; indent the class body properly.

This is not an indentation problem; the error message is clear:

File "/Users//Documents/tutorial/tutorial/spiders/dmoz_spider.py", line 1
ActivePython 2.7.2.5 (ActiveState Software Inc.) based on
               ^
SyntaxError: invalid syntax
Clearly you copy-pasted the code out of IDLE, including the IDLE startup banner text, which is not code.


Don't copy-paste. Try opening an editor and actually typing the tutorial code there; you'll learn better and won't accidentally paste junk.
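Before re-running `scrapy crawl`, you can check a spider file for syntax errors on its own with the standard-library `py_compile` module. The snippet below is a self-contained sketch: instead of touching your real `dmoz_spider.py`, it writes hypothetical temporary files to show that a file whose first line is the IDLE banner fails to compile, while a file containing only code compiles cleanly:

```python
# Sketch: a file starting with the IDLE startup banner is not valid Python,
# which is exactly the SyntaxError scrapy reported when importing the spider.
import os
import py_compile
import tempfile

banner_line = "ActivePython 2.7.2.5 (ActiveState Software Inc.) based on\n"
code_line = "name = 'dmoz'\n"

for label, source in [("with IDLE banner", banner_line), ("code only", code_line)]:
    path = os.path.join(tempfile.mkdtemp(), "spider_check.py")
    with open(path, "w") as f:
        f.write(source)
    try:
        # doraise=True turns compile problems into PyCompileError
        py_compile.compile(path, doraise=True)
        print(label, "-> compiles")
    except py_compile.PyCompileError:
        print(label, "-> SyntaxError")
```

Running `python -m py_compile tutorial/spiders/dmoz_spider.py` from the project root performs the same check directly on your file.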

Comments: "Actually I did that, but I still get the 'invalid syntax' error... what else could it be?" / "Did you indent with 4 spaces? What tool are you using to run this script? Please paste your code at ideone.com and share the link; that would help." / "It turned out to be another typo. Thank you very much for your help and the ideone.com suggestion."