Python scraper throwing an error - I can't for the life of me find the problem

Tags: python, python-2.7, scrapy, screen-scraping, scrapy-spider

I have no idea what is causing this error. It occurs on line 3 of the craig.py file, but I don't see anything wrong there.

Folder structure:

craig folder
    spiders folder
        __init__.py
        __init__.pyc
        craig.py
        craig.pyc
    __init__.py
    __init__.pyc
    pipelines.py
    settings.py
    settings.pyc
scrapy.cfg

Project name: craig
File name: craig
Spider name: craig.py

craig.py

items.py

Here is the error:


Please show where items.py is located in your project structure.

You should have something like this:

craig folder
    craig.py
    project folder
        __init__.py
        items.py
OK, I edited the post. But I still don't see the items.py file. Why was this answer accepted? What is the solution? The folder structure doesn't show where the items.py file is located. That's exactly the reason: you saved it in a completely random place :D
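The accepted answer's point can be reproduced without Scrapy at all: `from craig.items import CraigslistSampleItem` only resolves if `items.py` sits inside the `craig` package directory, next to `__init__.py`. The sketch below builds that layout in a temporary directory (a plain class stands in for the Scrapy `Item`, and the file contents are illustrative, not the asker's actual files) and shows the import succeeding:

```python
import os
import sys
import tempfile
import textwrap

# Build a minimal version of the correct layout in a temp directory:
#   <root>/craig/__init__.py
#   <root>/craig/items.py          <- must live here, inside the package
#   <root>/craig/spiders/__init__.py
root = tempfile.mkdtemp()
pkg = os.path.join(root, "craig")
os.makedirs(os.path.join(pkg, "spiders"))

open(os.path.join(pkg, "__init__.py"), "w").close()
open(os.path.join(pkg, "spiders", "__init__.py"), "w").close()
with open(os.path.join(pkg, "items.py"), "w") as f:
    f.write(textwrap.dedent("""\
        class CraigslistSampleItem(object):
            pass
    """))

# With items.py inside the craig package, the spider's import line works.
sys.path.insert(0, root)
from craig.items import CraigslistSampleItem
print(CraigslistSampleItem.__module__)  # -> craig.items
```

If `items.py` lived anywhere else (for example at the project root, outside the `craig` directory), the same import would raise exactly the `ImportError: No module named items` shown in the traceback.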
# craig.py
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from craig.items import CraigslistSampleItem


class MySpider(BaseSpider):
    name = "craig"
    allowed_domains = ["craigslist.org"]  # note: "allowed_domains", plural
    start_urls = ["http://sfbay.craigslist.org/sfc/npo/"]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        titles = hxs.select("//p")
        items = []
        for title in titles:  # was "for titles in titles", iterating an undefined name
            item = CraigslistSampleItem()
            item["title"] = title.select("a/text()").extract()
            item["link"] = title.select("a/@href").extract()
            items.append(item)
        return items
# items.py
# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# http://doc.scrapy.org/en/latest/topics/items.html

from scrapy.item import Item, Field


class CraigslistSampleItem(Item):
    title = Field()
    link = Field()
Traceback (most recent call last):
  File "C:\Python27\Scripts\scrapy-script.py", line 9, in <module>
    load_entry_point('scrapy==0.24.4', 'console_scripts', 'scrapy')()
  File "C:\Python27\lib\site-packages\scrapy-0.24.4-py2.7.egg\scrapy\cmdline.py", line 143, in execute
    _run_print_help(parser, _run_command, cmd, args, opts)
  File "C:\Python27\lib\site-packages\scrapy-0.24.4-py2.7.egg\scrapy\cmdline.py", line 89, in _run_print_help
    func(*a, **kw)
  File "C:\Python27\lib\site-packages\scrapy-0.24.4-py2.7.egg\scrapy\cmdline.py", line 150, in _run_command
    cmd.run(args, opts)
  File "C:\Python27\lib\site-packages\scrapy-0.24.4-py2.7.egg\scrapy\commands\crawl.py", line 57, in run
    crawler = self.crawler_process.create_crawler()
  File "C:\Python27\lib\site-packages\scrapy-0.24.4-py2.7.egg\scrapy\crawler.py", line 87, in create_crawler
    self.crawlers[name] = Crawler(self.settings)
  File "C:\Python27\lib\site-packages\scrapy-0.24.4-py2.7.egg\scrapy\crawler.py", line 25, in __init__
    self.spiders = spman_cls.from_crawler(self)
  File "C:\Python27\lib\site-packages\scrapy-0.24.4-py2.7.egg\scrapy\spidermanager.py", line 35, in from_crawler
    sm = cls.from_settings(crawler.settings)
  File "C:\Python27\lib\site-packages\scrapy-0.24.4-py2.7.egg\scrapy\spidermanager.py", line 31, in from_settings
    return cls(settings.getlist('SPIDER_MODULES'))
  File "C:\Python27\lib\site-packages\scrapy-0.24.4-py2.7.egg\scrapy\spidermanager.py", line 22, in __init__
    for module in walk_modules(name):
  File "C:\Python27\lib\site-packages\scrapy-0.24.4-py2.7.egg\scrapy\utils\misc.py", line 68, in walk_modules
    submod = import_module(fullpath)
  File "C:\Python27\lib\importlib\__init__.py", line 37, in import_module
    __import__(name)
  File "C:\Users\Turbo\craig\craig\spiders\craig.py", line 3, in <module>
    from craig.items import CraigslistSampleItem
ImportError: No module named items