Python: Can't scrape next page content with Scrapy


I want to scrape content from the following pages as well, but the spider never goes to the next page. My code is:

import scrapy

class AggregatorSpider(scrapy.Spider):
    name = 'aggregator'
    allowed_domains = ['startech.com.bd/component/processor']
    start_urls = ['https://startech.com.bd/component/processor']

    def parse(self, response):
        processor_details = response.xpath('//*[@class="col-xs-12 col-md-4 product-layout grid"]')
        for processor in processor_details:
            name = processor.xpath('.//h4/a/text()').extract_first()
            price = processor.xpath('.//*[@class="price space-between"]/span/text()').extract_first()
            print('\n')
            print(name)
            print(price)
            print('\n')
        next_page_url = response.xpath('//*[@class="pagination"]/li/a/@href').extract_first()
        # absolute_next_page_url = response.urljoin(next_page_url)
        yield scrapy.Request(next_page_url)

I am not using urljoin because the next-page href already contains the full URL. I also tried the dont_filter=True argument on the Request, which gave me an infinite loop over the first page. The message I get in the terminal is: [scrapy.spidermiddlewares.offsite] DEBUG: Filtered offsite request to 'www.startech.com.bd': <GET https://www.startech.com.bd/component/processor?page=2>

This is because your allowed_domains variable is wrong. Use allowed_domains = ['www.startech.com.bd'] instead.

You can also modify your next-page selector so the spider does not go back to the first page:

import scrapy

class AggregatorSpider(scrapy.Spider):
    name = 'aggregator'
    # allowed_domains must contain bare domain names, not URLs with paths
    allowed_domains = ['www.startech.com.bd']
    start_urls = ['https://startech.com.bd/component/processor']

    def parse(self, response):
        processor_details = response.xpath('//*[@class="col-xs-12 col-md-4 product-layout grid"]')
        for processor in processor_details:
            name = processor.xpath('.//h4/a/text()').extract_first()
            price = processor.xpath('.//*[@class="price space-between"]/span/text()').extract_first()
            yield {'name': name, 'price': price}
        # li:last-child picks the "next" link instead of the first pagination entry
        next_page_url = response.css('.pagination li:last-child a::attr(href)').extract_first()
        if next_page_url:
            yield scrapy.Request(next_page_url)
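
As a side note, response.follow (available since Scrapy 1.4) resolves relative URLs against the current page, so the pagination step can also be written as below. This is only a minimal sketch of the same callback, not a required change:

def parse(self, response):
    for processor in response.xpath('//*[@class="col-xs-12 col-md-4 product-layout grid"]'):
        yield {
            'name': processor.xpath('.//h4/a/text()').extract_first(),
            'price': processor.xpath('.//*[@class="price space-between"]/span/text()').extract_first(),
        }
    next_page_url = response.css('.pagination li:last-child a::attr(href)').extract_first()
    if next_page_url:
        # response.follow() joins relative URLs for you and defaults to this
        # spider's parse() when no callback is passed
        yield response.follow(next_page_url, callback=self.parse)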


I changed the allowed domains and now it gets to page 2, but it does not stop there: it crawls the first page again and shows the same content twice.

That happens because of your next_page_url variable; you were taking the first link in the pagination.

What should I do in that case? Both links have the same value. Can you point me to any resources on CSS or XPath selectors? Thanks.
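
If both pagination links carry the same href, one option is to select the link by its visible text instead of its position. This is only a sketch that assumes the site labels the link "NEXT"; check the actual markup in your browser first. For learning CSS and XPath selectors, the Scrapy selectors documentation (https://docs.scrapy.org/en/latest/topics/selectors.html) covers both:

# Pick the pagination link whose text reads "NEXT" rather than relying on
# its position; the "NEXT" label is an assumption about the page markup.
next_page_url = response.xpath('//ul[@class="pagination"]//a[contains(text(), "NEXT")]/@href').extract_first()
if next_page_url:
    yield scrapy.Request(next_page_url)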