
Python: scraping items to CSV with each item on its own row


I am scraping Amazon with Scrapy and trying to export each product's name and price to a CSV file. When I do, Scrapy joins the items into one list, so each row of the CSV contains the whole page's product names in a single cell (and likewise for the price column). I want each item and its respective price printed on its own row in the CSV file. Here is my spider code:

import scrapy

from ..items import AmazonItem  # item class from the project's items.py (assumed location)


class ScrapeSpider(scrapy.Spider):
    name = 'scrape'
    start_urls = ['https://www.amazon.com/s?i=aps&k=laptop&ref=nb_sb_noss_1&url=search-alias%3Daps']

    def parse(self, response):

        item = AmazonItem()
        # BUG: this joins every name (and every price) on the page into one
        # newline-separated string, so the whole page becomes a single item.
        name = '\n'.join(response.css('.a-text-normal.a-color-base.a-size-medium').css('::text').extract())
        price = '\n'.join(response.css('.a-offscreen').css('::text').extract())

        item['name'] = name
        item['price'] = price

        yield item

        for next_page in response.css('.a-pagination .a-last a'):
            yield response.follow(next_page, self.parse)

This is the command run in the terminal to perform the scrape:

scrapy crawl scrape -o data.csv

Build a list of selectors, one per product, iterate over it creating a new selector named product for each result, and extract the fields individually:


def parse(self, response):
    # One selector per product card; each iteration yields one item,
    # and each item becomes its own row in the CSV.
    items = response.css('.s-result-list .sg-col-inner')
    for product in items:
        item = AmazonItem()
        item['name'] = product.css('span.a-text-normal::text').get()
        item['price'] = product.css('.a-offscreen::text').get()
        yield item
    next_page = response.css('.a-last::attr(href)').get()
    if next_page:
        yield scrapy.Request(response.urljoin(next_page), callback=self.parse)
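To see why yielding one item per product changes the CSV shape, here is a small stdlib-only sketch (the sample names and prices are made up) contrasting the original newline-joined approach with the per-item approach:

```python
import csv
import io

# Made-up sample data standing in for the scraped fields.
names = ["Laptop A", "Laptop B", "Laptop C"]
prices = ["$499", "$899", "$1299"]

# Original spider: fields are newline-joined blobs -> one logical CSV row
# (the newlines survive inside a single quoted cell).
joined = io.StringIO()
csv.writer(joined).writerow(["\n".join(names), "\n".join(prices)])

# Fixed spider: one item per product -> one row per product.
per_item = io.StringIO()
writer = csv.writer(per_item)
for name, price in zip(names, prices):
    writer.writerow([name, price])

print(len(list(csv.reader(io.StringIO(joined.getvalue())))))    # 1
print(len(list(csv.reader(io.StringIO(per_item.getvalue())))))  # 3
```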

Comments:

- Please don't share information as images unless absolutely necessary. See: , .
- You aren't writing to a CSV file anywhere in that code; am I missing something?
- You are. Scrapy writes the CSV via the `-o data.csv` option on the command line.