Python: How can I scrape containers with varying content from a website?

I want to scrape this website.

I have built a rough piece of code:

import scrapy
from urllib.parse import urljoin



class DhgateSpider(scrapy.Spider):
    name = 'dhgate'
    allowed_domains = ['dhgate.com']
    start_urls = ['https://www.dhgate.com/wholesale/electronics-robots/c103032.html']

    
    def parse(self, response):
        Product = response.xpath('//*[@class="pro-title"]/a/@title').extract()
        Price = response.xpath('//*[@class="price"]/span/text()').extract()
        Customer_review = response.xpath('//*[@class="reviewnum"]/span/text()').extract()
        Seller = response.xpath('//*[@class="seller"]/a/text()').extract()
        Feedback = response.xpath('//*[@class="feedback"]/span/text()').extract()

        for item in zip(Product,Price,Customer_review,Seller,Feedback):
            scraped_info = {
                'Product':item[0],
                'Price': item[1],
                'Customer_review':item[2],
                'Seller':item[3],
                'Feedback':item[4],

            }
            yield scraped_info
        next_page_url = response.xpath('//*[@class="next"]/@href').extract_first()
        if next_page_url:
            next_page_url = urljoin('https:',next_page_url)
            yield scrapy.Request(url = next_page_url, callback = self.parse)

The problem is that not every container has a customer review or a feedback entry, so the spider only scrapes products that have all of Product, Price, Customer_review, Seller and Feedback. I want to scrape every container and, where there is no customer_review, store an empty value instead. How can I do that? Thank you.
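To make the failure mode concrete, here is a minimal standalone sketch (the lists are made up for illustration) of how zip() behaves when one of the page-wide lists comes back shorter:

products = ['robot A', 'robot B', 'robot C']
prices = ['$10.99', '$12.50', '$9.00']
reviews = ['4 reviews', '7 reviews']  # one product has no review node at all

# zip() stops at the shortest list, so the third product is dropped entirely,
# and the remaining reviews can end up paired with the wrong products.
print(list(zip(products, prices, reviews)))
# [('robot A', '$10.99', '4 reviews'), ('robot B', '$12.50', '7 reviews')]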

Don't use zip(); iterate over each product container instead:

def parse(self, response):

    for product_node in response.xpath('//div[@id="proList"]/div[contains(@class, "listitem")]'):
        Product = product_node.xpath('.//*[@class="pro-title"]/a/@title').extract_first()
        Price = product_node.xpath('.//*[@class="price"]/span/text()').extract_first()
        Customer_review = product_node.xpath('.//*[@class="reviewnum"]/span/text()').extract_first()
        Seller = product_node.xpath('.//*[@class="seller"]/a/text()').extract_first()
        Feedback = product_node.xpath('.//*[@class="feedback"]/span/text()').extract_first()

        scraped_info = {
                'Product':Product,
                'Price': Price,
                'Customer_review':Customer_review,
                'Seller':Seller,
                'Feedback':Feedback,
        }
        yield scraped_info

    next_page_url = response.xpath('//*[@class="next"]/@href').extract_first()
    if next_page_url:
        next_page_url = urljoin('https:',next_page_url)
        yield scrapy.Request(url = next_page_url, callback = self.parse)
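The key difference is that extract_first() is evaluated per product container, so when a node such as the review count is missing it simply returns None and the item is still yielded with a null field instead of being dropped. On recent Scrapy/parsel versions the same selector can also be written with .get(), an equivalent alias (shown here only as an optional variant):

Price = product_node.xpath('.//*[@class="price"]/span/text()').get()  # same as extract_first()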

I think you are missing Customer_review in your zip(). Sorry, that was a typo; it doesn't solve the problem. Thanks a lot, but what if I don't want the ones that have no price? How do I do that? @UchihaAJ
if Price is not None:
=> save the item
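A minimal sketch of that check inside the answer's parse() loop (same XPaths as above; the skip condition is the only addition):

def parse(self, response):
    for product_node in response.xpath('//div[@id="proList"]/div[contains(@class, "listitem")]'):
        Price = product_node.xpath('.//*[@class="price"]/span/text()').extract_first()
        # extract_first() returns None when the price node is missing; skip those listings.
        if Price is None:
            continue
        yield {
            'Product': product_node.xpath('.//*[@class="pro-title"]/a/@title').extract_first(),
            'Price': Price,
            'Customer_review': product_node.xpath('.//*[@class="reviewnum"]/span/text()').extract_first(),
            'Seller': product_node.xpath('.//*[@class="seller"]/a/text()').extract_first(),
            'Feedback': product_node.xpath('.//*[@class="feedback"]/span/text()').extract_first(),
        }

    next_page_url = response.xpath('//*[@class="next"]/@href').extract_first()
    if next_page_url:
        yield scrapy.Request(url=urljoin('https:', next_page_url), callback=self.parse)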