Python scraping: scraping sequential URLs

I have a very simple question about Scrapy. I want to crawl a website whose starting URL is www.example.com/1. Then I want to go to www.example.com/2, www.example.com/3, and so on. I know this should be simple, but how do I do it?

Here is my spider, which couldn't be simpler:

import scrapy


class QuotesSpider(scrapy.Spider):
    name = "scraper"
    start_urls = [
        'http://www.example.com/1',
    ]

    def parse(self, response):
        for quote in response.css('#Ficha'):
            yield {
                'item_1': quote.css('div.ficha_med > div > h1').extract(),
            }
Now, how can I move on to the next URL?

Add a start_requests method to the class and generate the requests as needed:

import scrapy

class QuotesSpider(scrapy.Spider):

    name = "scraper"

    def start_requests(self):
        n = 100                          # set your own limit here; yields pages 1 .. n-1
        for i in range(1, n):
            yield scrapy.Request('http://www.example.com/{}'.format(i), self.parse)

    def parse(self, response):
        for quote in response.css('#Ficha'):
            yield {
                'item_1': quote.css('div.ficha_med > div > h1').extract(),
            }
Another option: you can put multiple URLs in the start_urls attribute:

class QuotesSpider(scrapy.Spider):
    name = "scraper"
    start_urls = ['http://www.example.com/{}'.format(i) for i in range(1, 100)]
                                                 # choose your limit here ^^^

    def parse(self, response):
        for quote in response.css('#Ficha'):
            yield {
                'item_1': quote.css('div.ficha_med > div > h1').extract(),
            }
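If the total number of pages isn't known up front, here is a minimal sketch of a third variant, assuming the site answers with a 404 once the pages run out (that assumption, and the selector, carry over from the question); it follows the numbers one page at a time and stops at the first 404:

import scrapy

class QuotesSpider(scrapy.Spider):
    name = "scraper"
    start_urls = ['http://www.example.com/1']
    # let 404 responses reach parse() instead of being dropped by HttpErrorMiddleware
    handle_httpstatus_list = [404]

    def parse(self, response):
        if response.status == 404:
            return  # assumed end-of-pages signal; stop the chain here
        for quote in response.css('#Ficha'):
            yield {
                'item_1': quote.css('div.ficha_med > div > h1').extract(),
            }
        # derive the next page number from the current URL: /1 -> /2 -> /3 ...
        next_n = int(response.url.rstrip('/').rsplit('/', 1)[-1]) + 1
        yield scrapy.Request('http://www.example.com/{}'.format(next_n), self.parse)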
Try this:

import scrapy

from scrapy.http import Request

class QuotesSpider(scrapy.Spider):
    name = "scraper"
    number_of_pages = 10  # number of pages you want to parse

    def start_requests(self):
        # start_urls is not needed here: overriding start_requests replaces
        # the default behaviour of requesting each entry in start_urls
        for i in range(1, self.number_of_pages + 1):  # pages 1 .. number_of_pages
            yield Request('http://www.example.com/%d' % i, callback=self.parse)

    def parse(self, response):
        for quote in response.css('#Ficha'):
            yield {
                'item_1': quote.css('div.ficha_med > div > h1').extract(),
            }
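Either spider can be run standalone with scrapy runspider scraper.py -o items.json (scraper.py being whatever file you saved it in, assumed here for illustration), or, inside a Scrapy project, with scrapy crawl scraper -o items.json.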

One loop is enough, thanks! I'm pretty good with PHP but completely lost with Python. I'll update the question with my actual code, since I can't picture how to do this in Python; I need an example… Thanks! That makes sense…