Python "ModuleNotFoundError" when setting up scrapy as an app in Django

Tags: python, django, scrapy

When I tried to start my scrapy demo with
scrapy crawl getCommodityInfo
I got the following error:

C:\Users\柘宇\PycharmProjects\GraduationProject\spiders\bin\JDSpider>scrapy crawl getCommodityInfo
Traceback (most recent call last):
  File "D:\Anacaonda\Scripts\scrapy-script.py", line 5, in <module>
    sys.exit(scrapy.cmdline.execute())
  File "D:\Anacaonda\lib\site-packages\scrapy\cmdline.py", line 141, in execute
    cmd.crawler_process = CrawlerProcess(settings)
  File "D:\Anacaonda\lib\site-packages\scrapy\crawler.py", line 238, in __init__
    super(CrawlerProcess, self).__init__(settings)
  File "D:\Anacaonda\lib\site-packages\scrapy\crawler.py", line 129, in __init__
    self.spider_loader = _get_spider_loader(settings)
  File "D:\Anacaonda\lib\site-packages\scrapy\crawler.py", line 325, in _get_spider_loader
    return loader_cls.from_settings(settings.frozencopy())
  File "D:\Anacaonda\lib\site-packages\scrapy\spiderloader.py", line 45, in from_settings
    return cls(settings)
  File "D:\Anacaonda\lib\site-packages\scrapy\spiderloader.py", line 23, in __init__
    self._load_all_spiders()
  File "D:\Anacaonda\lib\site-packages\scrapy\spiderloader.py", line 32, in _load_all_spiders
    for module in walk_modules(name):
  File "D:\Anacaonda\lib\site-packages\scrapy\utils\misc.py", line 71, in walk_modules
    submod = import_module(fullpath)
  File "D:\Anacaonda\lib\importlib\__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 978, in _gcd_import
  File "<frozen importlib._bootstrap>", line 961, in _find_and_load
  File "<frozen importlib._bootstrap>", line 950, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 678, in exec_module
  File "<frozen importlib._bootstrap>", line 205, in _call_with_frames_removed
  File "C:\Users\柘宇\PycharmProjects\GraduationProject\spiders\bin\JDSpider\JDSpider\spiders\getCommodityInfo.py", line 12, in <module>
    from spiders.bin.JDSpider.JDSpider.items import JDCommodity
ModuleNotFoundError: No module named 'spiders'
It seems the spider cannot be found, but I don't know why. Here is my layout: GraduationProject is the Django project, and mainspider is a Django app. The bin directory stores two demo scrapy projects. The error occurred when I entered the JDSpider directory and tried to run it. Could you help me fix it?

P.S. my spider's name is:
name=“getCommodityInfo”



With PS1212's solution applied, the scrapy demo runs. However, PyCharm still shows a warning on the import. What is going on?

That is because it cannot recognize your project module.

Try this:

 from JDSpider.items import JDCommodity
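The reason the shortened import works: scrapy crawl runs with the Scrapy project root (the directory containing scrapy.cfg) on sys.path, so imports inside a spider must be written relative to that root, not relative to the outer Django tree. A minimal sketch of this resolution behavior, using a throwaway stub layout (the temporary directory and the JDCommodity stub are illustrative, not the asker's real files):

```python
import os
import sys
import tempfile

# Build a throwaway layout mirroring the inner Scrapy package:
#   <tmp>/JDSpider/__init__.py
#   <tmp>/JDSpider/items.py   -> defines a JDCommodity stub
root = tempfile.mkdtemp()
pkg = os.path.join(root, "JDSpider")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "items.py"), "w") as f:
    f.write("class JDCommodity:\n    pass\n")

# `scrapy crawl` puts the directory containing scrapy.cfg on sys.path,
# which is what makes `from JDSpider.items import ...` resolvable.
# Simulate that here by adding the root ourselves:
sys.path.insert(0, root)
from JDSpider.items import JDCommodity

print(JDCommodity.__name__)
```

The original import (from spiders.bin.JDSpider.JDSpider.items import ...) failed because spiders is a directory of the Django project, which is not on sys.path when scrapy runs from inside JDSpider. As for the separate PyCharm warning: the IDE resolves imports from its own configured source roots, so it usually goes away once the Scrapy project root directory is marked as a Sources Root in PyCharm.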

Possible duplicate — but my scrapy version is 1.3.3, so the error in the linked question cannot be the cause of my problem.

Thx, it worked. But I'm still not sure why PyCharm flags the edited code with "Unresolved reference 'JDSpider'".

Can you update your question with that error?