Scrapy - getting data from the next pages


I have a question: how do I download the data after moving to the next page? My spider only downloads it from the first page. Here is my code:

# -*- coding: utf-8 -*-
from scrapy import Spider
from scrapy.http import Request


class PronobelSpider(Spider):
    name = 'pronobel'
    allowed_domains = ['pronobel.pl']
    start_urls = ['http://pronobel.pl/praca-opieka-niemcy/']

    def parse(self, response):

        jobs = response.xpath('//*[@class="offer offer-immediate"]')
        for job in jobs:
            title = job.xpath('.//*[@class="offer-title"]/text()').extract_first()
            start_date = job.xpath('.//*[@class="offer-attr offer-departure"]/text()').extract_first()
            place = job.xpath('.//*[@class="offer-attr offer-localization"]/text()').extract_first()
            language = job.xpath('.//*[@class="offer-attr offer-salary"]/text()').extract()[1]

            print title
            print start_date
            print place
            print language

        next_page_url = response.xpath('//*[@class="page-nav nav-next"]/a/@href').extract_first()
        absolute_next_page_url = response.urljoin(next_page_url)
        yield Request(absolute_next_page_url)
I only get the data from the first page.

I also tried the following:

# -*- coding: utf-8 -*-
from scrapy import Spider
from scrapy.http import Request


class PronobelSpider(Spider):
    name = 'pronobel'
    allowed_domains = ['pronobel.pl']
    start_urls = ['http://pronobel.pl/praca-opieka-niemcy']

    def parse(self, response):

        jobs = response.xpath('//*[@class="offer offer-immediate"]')
        for job in jobs:
            title = job.xpath('.//*[@class="offer-title"]/text()').extract_first()
            start_date = job.xpath('.//*[@class="offer-attr offer-departure"]/text()').extract_first()
            place = job.xpath('.//*[@class="offer-attr offer-localization"]/text()').extract_first()
            language = job.xpath('.//*[@class="offer-attr offer-salary"]/text()').extract()[1]

            yield {'place' : place}

        next_page_url = response.xpath('//*[@class="page-nav nav-next"]/a/@href').extract_first()
        absolute_next_page_url = response.urljoin(next_page_url)  
        yield Request(absolute_next_page_url)
The output:

2019-03-20 17:58:28 [scrapy.core.engine] INFO: Spider opened
2019-03-20 17:58:28 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2019-03-20 17:58:28 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6025
2019-03-20 17:58:28 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET https://pronobel.pl/praca-opieka-niemcy> from <GET http://pronobel.pl/praca-opieka-niemcy>
2019-03-20 17:58:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://pronobel.pl/praca-opieka-niemcy> (referer: None)
2019-03-20 17:58:29 [scrapy.core.scraper] DEBUG: Scraped from <200 https://pronobel.pl/praca-opieka-niemcy>
{'place': u'Ratingen'}
2019-03-20 17:58:29 [scrapy.core.scraper] DEBUG: Scraped from <200 https://pronobel.pl/praca-opieka-niemcy>
{'place': u'Burg Stargard'}
2019-03-20 17:58:29 [scrapy.core.scraper] DEBUG: Scraped from <200 https://pronobel.pl/praca-opieka-niemcy>
{'place': u'Fahrenzhausen'}
2019-03-20 17:58:29 [scrapy.core.scraper] DEBUG: Scraped from <200 https://pronobel.pl/praca-opieka-niemcy>
{'place': u'Meerbusch'}
2019-03-20 17:58:29 [scrapy.core.scraper] DEBUG: Scraped from <200 https://pronobel.pl/praca-opieka-niemcy>
{'place': u'Geislingen an der Steige T\xfcrkheim/Deutschland'}
2019-03-20 17:58:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://pronobel.pl/praca-opieka-niemcy?page_nr=2> (referer: https://pronobel.pl/praca-opieka-niemcy)
2019-03-20 17:58:29 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://pronobel.pl/praca-opieka-niemcy?page_nr=3> (referer: https://pronobel.pl/praca-opieka-niemcy?page_nr=2)
2019-03-20 17:58:29 [scrapy.dupefilters] DEBUG: Filtered duplicate request: <GET https://pronobel.pl/praca-opieka-niemcy?page_nr=3> - no more duplicates will be shown (see DUPEFILTER_DEBUG to show all duplicates)
2019-03-20 17:58:29 [scrapy.core.engine] INFO: Closing spider (finished)

Your problem is not reaching the next page; it is your selectors. First, when selecting elements by class it is better to use CSS selectors: an XPath test like @class="offer offer-immediate" compares the entire attribute string. What happens is that on the other pages there are no elements whose class attribute is exactly offer offer-immediate, so your loop has nothing to iterate over.
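
To see why exact class matching is brittle, here is a minimal sketch (the HTML is invented to mirror the listing structure, not copied from pronobel.pl):

from scrapy import Selector

# Hypothetical markup: one offer carries the extra "offer-immediate"
# modifier class, the other does not.
html = """
<div class="offers-list">
  <div class="offer offer-immediate"><a class="offer-title">A</a></div>
  <div class="offer"><a class="offer-title">B</a></div>
</div>
"""
sel = Selector(text=html)

# Exact attribute comparison only matches elements whose class
# attribute is literally the string "offer offer-immediate":
print(len(sel.xpath('//*[@class="offer offer-immediate"]')))  # 1

# A CSS class selector matches any element that has the class,
# whatever other classes it carries:
print(len(sel.css('div.offer')))  # 2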

I made some changes to your code; see below:

from scrapy import Spider
from scrapy.http import Request


class PronobelSpider(Spider):
    name = 'pronobel'
    allowed_domains = ['pronobel.pl']
    start_urls = ['http://pronobel.pl/praca-opieka-niemcy/']

    def parse(self, response):
        # Select on the shared "offer" class so offers both with and
        # without the "offer-immediate" modifier are matched.
        jobs = response.css('div.offers-list div.offer')
        for job in jobs:
            title = job.css('a.offer-title::text').extract_first()
            start_date = job.css('div.offer-attr.offer-departure::text').extract_first()
            place = job.css('div.offer-attr.offer-localization::text').extract_first()
            language = job.css('div.offer-attr.offer-salary::text').extract()[1]
            # Yield the item so Scrapy actually collects it; printing
            # only writes to stdout and produces no scraped items.
            yield {'title': title,
                   'start_date': start_date,
                   'place': place,
                   'language': language,
                   'url': response.url}

        # Follow the "next page" link; by default the response comes
        # back to this same parse() callback.
        next_page_url = response.css('li.page-nav.nav-next a::attr(href)').extract_first()
        absolute_next_page_url = response.urljoin(next_page_url)
        yield Request(absolute_next_page_url)
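
To run the spider and export the items, something like scrapy runspider pronobel.py -o jobs.json works (assuming you saved the spider as pronobel.py; inside a generated Scrapy project the equivalent is scrapy crawl pronobel -o jobs.json).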

You said your spider was not getting to the next page, but as the log shows, it does crawl page_nr=2 and page_nr=3; the selector simply matched nothing there. Also, you need to yield your data rather than print it.
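
One small defensive tweak you may want (my addition, not part of the original answer): on the last page extract_first() returns None for the next link, and response.urljoin(None) simply returns the current URL, so the final request is only dropped because the dupefilter catches it; that is likely the "Filtered duplicate request" line in the log above. Following the link only when it exists makes the stop condition explicit:

next_page_url = response.css('li.page-nav.nav-next a::attr(href)').extract_first()
if next_page_url:  # only follow when a next link actually exists
    yield Request(response.urljoin(next_page_url))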