Web scraping Scrapy IndentationError: expected an indented block


I hope you are doing well. I need your help: I am getting this error, but I don't know why:

    File "C:\Users\Luis\Amazon\mercado\spiders\spider.py", line 14
        yield scrapy.Request("https://www.amazon.es/s/ref=sr_pg_2?rh=n%3A1951051031%2Cn%3A2424922031%2Ck%3Afebi&page=1&keywords=febi&ie=UTF8&qid=1535314254",self.parse_item)
        ^
    IndentationError: expected an indented block
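The error itself is easy to reproduce outside Scrapy: any statement that follows a `def ...:` header without being indented triggers it at compile time, before a single line of the spider runs. A minimal sketch (the snippet compiled here is a stand-in, not the asker's exact file):

```python
# A "def" header must be followed by an indented block; compiling code
# whose body starts at column 0 raises IndentationError at parse time.
broken = (
    "def start_requests(self):\n"
    "yield scrapy.Request('https://example.com', self.parse_item)\n"
)
try:
    compile(broken, "spider.py", "exec")
except IndentationError as exc:
    print(type(exc).__name__)  # IndentationError
```

Because the failure happens while Python is parsing the file, the traceback points at the line and column (the `^` marker) rather than at any runtime call.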

# -*- coding: utf-8 -*-
import scrapy
import urllib
from mercado.items import MercadoItem


class MercadoSpider(CrawlSpider):
    name = 'mercado'
    item_count = 0
    allowed_domain = ['https://www.amazon.es']
    start_urls = ['https://www.amazon.es/s/ref=sr_pg_2rh=n%3A1951051031%2Cn%3A2424922031%2Ck%3Afebi&page=1&keywords=febi&ie=UTF8&qid=1 535314254']

    def start_requests(self):
        yield scrapy.Request("https://www.amazon.es/s/ref=sr_pg_2?rh=n%3A1951051031%2Cn%3A2424922031%2Ck%3Afebi&page=1&keywords=febi&ie=UTF8&qid=1535314254",self.parse_item)

        for i in range(2,400):
            yield scrapy.Request("https://www.amazon.es/s/ref=sr_pg_2?rh=n%3A1951051031%2Cn%3A2424922031%2Ck%3Afebi&page="+str(i)+"&keywords=febi&ie=UTF8&qid=1535314254",self.parse_item)


    def parse_item(self, response):
        ml_item = MercadoItem()

        # product info
        ml_item['articulo'] = response.xpath('normalize-space(//*[@id="productTitle"])').extract()
        ml_item['precio'] = response.xpath('normalize-space(//*[@id="priceblock_ourprice"])').extract()
        self.item_count += 1
        yield ml_item
Do you know why? I have added the code above to make it easier to look at.

You have an indentation error:

# -*- coding: utf-8 -*-
import scrapy
from scrapy.spiders import CrawlSpider
from mercado.items import MercadoItem


class MercadoSpider(CrawlSpider):
    name = 'mercado'
    item_count = 0
    allowed_domains = ['amazon.es']
    start_urls = ['https://www.amazon.es/s/ref=sr_pg_2?rh=n%3A1951051031%2Cn%3A2424922031%2Ck%3Afebi&page=1&keywords=febi&ie=UTF8&qid=1535314254']

    def start_requests(self):
        yield scrapy.Request("https://www.amazon.es/s/ref=sr_pg_2?rh=n%3A1951051031%2Cn%3A2424922031%2Ck%3Afebi&page=1&keywords=febi&ie=UTF8&qid=1535314254",self.parse_item)

        for i in range(2,400):
            yield scrapy.Request("https://www.amazon.es/s/ref=sr_pg_2?rh=n%3A1951051031%2Cn%3A2424922031%2Ck%3Afebi&page="+str(i)+"&keywords=febi&ie=UTF8&qid=1535314254",self.parse_item)


    def parse_item(self, response):
        ml_item = MercadoItem()

        # product info
        ml_item['articulo'] = response.xpath('normalize-space(//*[@id="productTitle"])').extract()
        ml_item['precio'] = response.xpath('normalize-space(//*[@id="priceblock_ourprice"])').extract()
        self.item_count += 1
        yield ml_item   
Update: you now have code (not optimal) that walks the pagination, but it parses the results pages as if they were detail pages. You need to add code that parses each paginated results page and extracts the link to each item's detail page:

def start_requests(self):
    yield scrapy.Request("https://www.amazon.es/s/ref=sr_pg_2?rh=n%3A1951051031%2Cn%3A2424922031%2Ck%3Afebi&page=1&keywords=febi&ie=UTF8&qid=1535314254",self.parse_search)

    for i in range(2,400):
        yield scrapy.Request("https://www.amazon.es/s/ref=sr_pg_2?rh=n%3A1951051031%2Cn%3A2424922031%2Ck%3Afebi&page="+str(i)+"&keywords=febi&ie=UTF8&qid=1535314254",self.parse_search)

def parse_search(self, response):

    for item_link in response.xpath('//ul[@id="s-results-list-atf"]//a[contains(@class, "s-access-detail-page")]/@href').extract():
        yield scrapy.Request(item_link, self.parse_item)

def parse_item(self, response):
    ml_item = MercadoItem()

    # product info
    ml_item['articulo'] = response.xpath('normalize-space(//*[@id="productTitle"])').extract()
    ml_item['precio'] = response.xpath('normalize-space(//*[@id="priceblock_ourprice"])').extract()
    self.item_count += 1
    yield ml_item   
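As a side note, the paginated search URLs above are built by string concatenation. The same query string can be assembled with `urllib.parse.urlencode` from the standard library, which handles the percent-encoding (`%3A`, `%2C`) seen in the hard-coded URL. A minimal sketch, using the parameters taken from the URL in the spider above:

```python
from urllib.parse import urlencode

BASE = "https://www.amazon.es/s/ref=sr_pg_2"

def search_url(page):
    # Query parameters mirror the hard-coded URL in the spider above;
    # urlencode percent-encodes ":" and "," automatically.
    params = {
        "rh": "n:1951051031,n:2424922031,k:febi",
        "page": page,
        "keywords": "febi",
        "ie": "UTF8",
        "qid": "1535314254",
    }
    return BASE + "?" + urlencode(params)

urls = [search_url(i) for i in range(1, 400)]
print(urls[0])
# → https://www.amazon.es/s/ref=sr_pg_2?rh=n%3A1951051031%2Cn%3A2424922031%2Ck%3Afebi&page=1&keywords=febi&ie=UTF8&qid=1535314254
```

This keeps the page number the only thing that changes between requests and avoids typos (such as a missing `?`) creeping into the concatenated string.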

Comment thread:

- Welcome to SO. Please include the code block so the problem can be investigated properly.
- Hello! Thank you very much for your support; I have added the code to the main post.
- It looks like you missed the indentation inside def start_requests(self):. Those functions should be inside the class, but they are not.
- Thank you! Could you please explain how to do that correctly?
- I have made the edit in your question; use the code that is now in your question.
- Great, it works now! Thank you very much. My remaining problem is that I only get two results per page out of a possible 24. Do you know why?
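The fix described in the comments can be illustrated without Scrapy at all: a function defined at module level (no indentation) is not part of the class, so the spider can never reach it as `self.parse_item`. A minimal sketch with hypothetical names:

```python
class Spider:
    name = "mercado"

    def start_requests(self):       # indented: a method of the class
        return "reachable as a method"

def parse_item(self, response):     # not indented: a plain module-level
    return "invisible to the class" # function, not an attribute of Spider

s = Spider()
print(s.start_requests())           # reachable as a method
print(hasattr(s, "parse_item"))     # False
```

In Python, indentation is what places a `def` inside the class body; once the methods are indented one level under `class MercadoSpider(CrawlSpider):`, Scrapy can find and call them.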