Image scraping with Python using Scrapy: no results


I've been trying to use Scrapy to scrape images from Imgur, but I'm running into a problem.

The spider seems to run fine, but it never actually enters the site and does its job.

I can't find where I messed up.

items.py

import scrapy


class ImgurItem(scrapy.Item):
    title = scrapy.Field()
    image_urls = scrapy.Field()
    images = scrapy.Field()
settings.py

BOT_NAME = 'imgur'

SPIDER_MODULES = ['imgur.spiders']
NEWSPIDER_MODULE = 'imgur.spiders'
ITEM_PIPELINES = {'scrapy.pipelines.images.ImagesPipeline': 1}
IMAGES_STORE = '\Users\123\Desktop\images'
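
A side note on the path, separate from the crawl problem below: backslashes in a plain Python string are treated as escape sequences, so a value like '\Users\123\Desktop\images' can silently change ('\123' becomes a single character, and '\U...' is even a syntax error on Python 3). A raw string or forward slashes avoids that. A minimal sketch of the same setting, with the folder taken from the question:

# settings.py (sketch): same images pipeline, but with an escape-safe path.
ITEM_PIPELINES = {'scrapy.pipelines.images.ImagesPipeline': 1}

# A raw string keeps the backslashes literal; forward slashes also work on Windows.
IMAGES_STORE = r'\Users\123\Desktop\images'
# IMAGES_STORE = '/Users/123/Desktop/images'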
imgur_spider.py

import scrapy

from scrapy.spiders import Rule, CrawlSpider
from scrapy.linkextractors import LinkExtractor
from imgur.items import ImgurItem

class ImgurSpider(CrawlSpider):
    name = 'imgur'
    allowed_domains = ['imgur.com']
    start_url = ['http://imgur.com']
    rules = [Rule(LinkExtractor(allow=['/gallery/.*']), 'parse_imgur')]

    def parse_imgur(self, response):
        image = ImgurItem()
        image['title'] = response.xpath("//h2[@id='image-title']/text()").extract()
        rel = response.xpath("//img/@src").extract()
        image['image_urls'] = ['http:'+rel[0]]
        return image
Log:

2015-10-16 16:36:50 [scrapy] INFO: Scrapy 1.0.3 started (bot: imgur)
2015-10-16 16:36:50 [scrapy] INFO: Optional features available: ssl, http11
2015-10-16 16:36:50 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'imgur.spiders', 'SPIDER_MODULES': ['imgur.spiders'], 'BOT_NAME': 'imgur'}
2015-10-16 16:36:50 [scrapy] INFO: Enabled extensions: CloseSpider, TelnetConsole, LogStats, CoreStats, SpiderState
2015-10-16 16:36:50 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-10-16 16:36:50 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-10-16 16:36:50 [scrapy] INFO: Enabled item pipelines: ImagesPipeline
2015-10-16 16:36:50 [scrapy] INFO: Spider opened
2015-10-16 16:36:50 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2015-10-16 16:36:50 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6023
2015-10-16 16:36:50 [scrapy] INFO: Closing spider (finished)
2015-10-16 16:36:50 [scrapy] INFO: Dumping Scrapy stats:
{'finish_reason': 'finished',
'finish_time': datetime.datetime(2015, 10, 16, 15, 36, 50, 469000),
'log_count/DEBUG': 1,
'log_count/INFO': 7,
'start_time': datetime.datetime(2015, 10, 16, 15, 36, 50, 462000)}
2015-10-16 16:36:50 [scrapy] INFO: Spider closed (finished)

Your crawler will run and then look for all links containing "/gallery/" (the extra .* isn't needed). It doesn't find any (because there are none on the page), so the crawler finishes.

Edit:
After cleaning up the code, it jumped out that
start_url = ...
should be
start_urls = ...
As it stands, Scrapy won't crawl anything because it has no starting point.

Oh, that was my fault! Forgot to change it back.. It should have '' as the front page and move on to the galleries. The problem is still there.
@user251420 I changed it and corrected your indentation in the code you provided. Before I go any further, can you check that it accurately reflects what you now have in your crawler?
Yes, apart from removing the extra * you mentioned.
@user251420 I've updated my answer; that should fix the problem.
Awesome. I wish I had your eagle eyes! Thanks.
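
Putting the answer's two fixes together, a corrected spider might look like the sketch below: it only renames start_url to start_urls and drops the redundant .* from the link pattern, with everything else carried over from the question, so treat it as an illustration rather than a verified final version.

# imgur_spider.py (sketch): the question's spider with the answer's fixes applied.
import scrapy

from scrapy.spiders import Rule, CrawlSpider
from scrapy.linkextractors import LinkExtractor
from imgur.items import ImgurItem


class ImgurSpider(CrawlSpider):
    name = 'imgur'
    allowed_domains = ['imgur.com']
    # start_urls (plural) is the attribute Scrapy actually reads;
    # with start_url the spider has no starting point and closes immediately.
    start_urls = ['http://imgur.com']
    # The allow patterns are unanchored regexes, so '/gallery/' alone is enough.
    rules = [Rule(LinkExtractor(allow=['/gallery/']), callback='parse_imgur')]

    def parse_imgur(self, response):
        image = ImgurItem()
        image['title'] = response.xpath("//h2[@id='image-title']/text()").extract()
        rel = response.xpath("//img/@src").extract()
        image['image_urls'] = ['http:' + rel[0]]
        return image

Run it with scrapy crawl imgur; once start_urls is in place, the log should show pages being crawled instead of the spider closing immediately at 0 pages.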