Web scraping: crawl data from given URLs and put it into a file using Scrapy


I am trying to deep-crawl a given website and scrape the text from all of its pages. I am using Scrapy to scrape the site.

Here is how I run the spider:

scrapy crawl stack_crawler -o items.json

The items.json file comes out empty.

Here is the spider code:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

#from tutorial.items import TutorialItem

from tutorial.items import DmozItem

class StackCrawlerSpider(CrawlSpider):
    name = 'stack_crawler'
    allowed_domains = ['http://www.dmoz.org']
    start_urls = ['http://www.dmoz.org/']

    rules = (
        Rule(LinkExtractor(allow=r'Items/'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        # Note: TutorialItem is never imported (its import above is commented
        # out), and these fields are not the ones defined in items.py; the
        # answers below point this out.
        i = TutorialItem()
        i['domain_id'] = response.xpath('//input[@id="sid"]/@value').extract()
        i['name'] = response.xpath('//div[@id="name"]').extract()
        i['description'] = response.xpath('//div[@id="description"]').extract()
        return i
Here is the log I get when I run the spider:

dummy-MacBook-Pro:spiders Dummy$ scrapy crawl stack_crawler -o items.json
2016-06-09 10:22:23 [scrapy] INFO: Scrapy 1.1.0 started (bot: tutorial)
2016-06-09 10:22:23 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'FEED_URI': 'items.json', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial', 'ROBOTSTXT_OBEY': True, 'FEED_FORMAT': 'json'}
2016-06-09 10:22:23 [scrapy] INFO: Enabled extensions:
['scrapy.extensions.feedexport.FeedExporter',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2016-06-09 10:22:23 [scrapy] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.chunked.ChunkedTransferMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2016-06-09 10:22:23 [scrapy] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2016-06-09 10:22:23 [scrapy] INFO: Enabled item pipelines:
[]
2016-06-09 10:22:23 [scrapy] INFO: Spider opened
2016-06-09 10:22:23 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2016-06-09 10:22:23 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6024
2016-06-09 10:22:24 [scrapy] DEBUG: Crawled (200) <GET http://www.dmoz.org/robots.txt> (referer: None)
2016-06-09 10:22:24 [scrapy] DEBUG: Crawled (200) <GET http://www.dmoz.org/> (referer: None)
2016-06-09 10:22:24 [scrapy] INFO: Closing spider (finished)
2016-06-09 10:22:24 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 430,
 'downloader/request_count': 2,
 'downloader/request_method_count/GET': 2,
 'downloader/response_bytes': 5694,
 'downloader/response_count': 2,
 'downloader/response_status_count/200': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2016, 6, 9, 4, 52, 24, 862900),
 'log_count/DEBUG': 3,
 'log_count/INFO': 7,
 'response_received_count': 2,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2016, 6, 9, 4, 52, 23, 483092)}
2016-06-09 10:22:24 [scrapy] INFO: Spider closed (finished)

Can anyone help me figure out what I am doing wrong at the code level that prevents the data from being scraped?

I think you are new to Scrapy, and you have made a number of mistakes in your code:

1. Scrapy has the default callbacks parse and start_requests, so you can avoid the LinkExtractor here; use the parse function and handle the start_urls response there directly.

2. You defined one item in items.py but instantiated another (TutorialItem instead of DmozItem); the field names differ, so this causes a conflict, as the short sketch after this list illustrates.

3. Use correct XPath paths when extracting the field values.
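As a minimal sketch of mistake 2 (this example is illustrative, not from the original post): a scrapy.Item only accepts fields declared on the class, so assigning an undeclared field raises a KeyError, and TutorialItem itself would be a NameError in the spider above because its import is commented out:

import scrapy

class DmozItem(scrapy.Item):
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()

item = DmozItem()
item['title'] = 'ok'          # fine: 'title' is a declared Field
try:
    item['domain_id'] = '42'  # 'domain_id' is not declared on DmozItem
except KeyError as e:
    print(e)                  # "DmozItem does not support field: domain_id"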

You have to try this.

Spider code:

import scrapy

from lxml import html
from scrapy.spiders import CrawlSpider
from tutorial.items import DmozItem

class StackCrawlerSpider(CrawlSpider):
    name = 'stack_crawler'
    # allowed_domains expects bare domain names, not full URLs
    allowed_domains = ['dmoz.org']
    start_urls = ['http://www.dmoz.org/']

    def parse(self, response):
        # Parse the page body with lxml and read the meta tags via XPath.
        doc = html.fromstring(response.body)
        i = DmozItem()
        i['title'] = doc.xpath('//meta[@property="og:title"]/@content')
        i['link'] = response.url
        i['desc'] = doc.xpath('//meta[@name="description"]/@content')
        yield i
Items code snapshot:

import scrapy

class DmozItem(scrapy.Item):
    # Every field the spider assigns must be declared here.
    title = scrapy.Field()
    link = scrapy.Field()
    desc = scrapy.Field()

This works.
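For reference, a hypothetical run looks like this (the output values are illustrative, not taken from the original post):

scrapy crawl stack_crawler -o items.json

Because doc.xpath() returns a list, title and desc are serialized as JSON arrays, so items.json contains entries along the lines of {"title": ["..."], "link": "http://www.dmoz.org/", "desc": ["..."]}.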


dmoz.org does not have any links with 'Items/' in the href, so your rule does not find any links to follow, which is why your items.json file is empty.
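A minimal sketch of a rule that would actually find links (assuming you want to follow every internal link rather than only URLs matching 'Items/'; an empty LinkExtractor matches all links on a page):

rules = (
    # No allow pattern: extract every link; allowed_domains still
    # restricts which ones get followed.
    Rule(LinkExtractor(), callback='parse_item', follow=True),
)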


From the log line '2016-06-09 10:35:46 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)' it is clear that crawling is not happening, but why? I need help.

It prints that message once per minute, so your crawl is slow, but it is happening; note the Crawled (200) lines.

Yes, your code is working, but I am not able to scrape all the text from all the available links (deep crawl). Can you help me scrape all the text? Here is a snapshot of how I crawl each link:

def parse_item(self, response):
    for sel in response.xpath('//ul/li'):
        item = DmozItem()
        item['title'] = sel.xpath('a/text()').extract()
        item['link'] = sel.xpath('a/@href').extract()
        item['desc'] = sel.xpath('text()').extract()
        yield item

Most of the links have no text, only an href, so you will have to scrape the text from the URL itself. Hi Arun, can you help me with this?
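To address the deep-crawl question in the comments, here is a minimal sketch (the spider name and the body-text XPath are assumed, not from the original thread) that follows every internal link and yields the visible text of each page:

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class TextCrawlerSpider(CrawlSpider):
    # Hypothetical spider for the deep-crawl use case discussed above.
    name = 'text_crawler'
    allowed_domains = ['dmoz.org']
    start_urls = ['http://www.dmoz.org/']

    rules = (
        # Follow every link within allowed_domains and parse each page.
        Rule(LinkExtractor(), callback='parse_page', follow=True),
    )

    def parse_page(self, response):
        # Join all text nodes in the body, dropping whitespace-only ones.
        texts = response.xpath('//body//text()').extract()
        yield {
            'url': response.url,
            'text': ' '.join(t.strip() for t in texts if t.strip()),
        }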