Python: create a separate output file for each category in Scrapy


I am trying to scrape Yellow Pages by category, so I load the categories from a text file and feed them to start_urls. The problem I am facing is saving the output separately for each category. Here is the code I am trying to make work:

CATEGORIES = []
with open('Catergories.txt', 'r') as f:
    data = f.readlines()

    for category in data:
        CATEGORIES.append(category.strip())
The file is opened in settings.py, and the list of categories for the spider to visit is built there.

The spider:

# -*- coding: utf-8 -*-
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

from ..items import YellowItem
from scrapy.utils.project import get_project_settings

settings = get_project_settings()


class YpSpider(CrawlSpider):
    categories = settings.get('CATEGORIES')

    name = 'yp'
    allowed_domains = ['yellowpages.com']

    start_urls = ['https://www.yellowpages.com/search?search_terms={0}&geo_location_terms=New%20York'
                      '%2C '
                      '%20NY'.format(*categories)]
    rules = (

        Rule(LinkExtractor(restrict_xpaths='//a[@class="business-name"]', allow=''), callback='parse_item',
             follow=True),

        Rule(LinkExtractor(restrict_xpaths='//a[@class="next ajax-page"]', allow=''),
             follow=True),
    )

    def parse_item(self, response):
        categories = settings.get('CATEGORIES')
        print(categories)
        item = YellowItem()
        # for data in response.xpath('//section[@class="info"]'):
        item['title'] = response.xpath('//h1/text()').extract_first()
        item['phone'] = response.xpath('//p[@class="phone"]/text()').extract_first()
        item['street_address'] = response.xpath('//h2[@class="address"]/text()').extract_first()
        email = response.xpath('//a[@class="email-business"]/@href').extract_first()
        try:
            item['email'] = email.replace("mailto:", '')
        except AttributeError:
            pass
        item['website'] = response.xpath('//a[@class="primary-btn website-link"]/@href').extract_first()
        item['Description'] = response.xpath('//dd[@class="general-info"]/text()').extract_first()
        item['Hours'] = response.xpath('//div[@class="open-details"]/descendant-or-self::*/text()[not(ancestor::*['
                                       '@class="hour-category"])]').extract()
        item['Other_info'] = response.xpath(
            '//dd[@class="other-information"]/descendant-or-self::*/text()').extract()
        category_ha = response.xpath('//dd[@class="categories"]/descendant-or-self::*/text()').extract()
        item['Categories'] = " ".join(category_ha)
        item['Years_in_business'] = response.xpath('//div[@class="number"]/text()').extract_first()
        neighborhood = response.xpath('//dd[@class="neighborhoods"]/descendant-or-self::*/text()').extract()
        item['neighborhoods'] = ' '.join(neighborhood)
        item['other_links'] = response.xpath('//dd[@class="weblinks"]/descendant-or-self::*/text()').extract()

        item['category'] = '{0}'.format(*categories)

        return item
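
As an aside, note that '{0}'.format(*categories) only interpolates the first category, so start_urls holds a single URL. A sketch of building one start URL per category (same URL pattern, purely illustrative) could look like:

start_urls = [
    'https://www.yellowpages.com/search?search_terms={0}'
    '&geo_location_terms=New%20York%2C%20NY'.format(category)
    for category in settings.get('CATEGORIES')
]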

       
Here is the pipelines.py file:

from scrapy import signals
from scrapy.exporters import CsvItemExporter
from scrapy.utils.project import get_project_settings

settings = get_project_settings()


class YellowPipeline(object):
    @classmethod
    def from_crawler(cls, crawler):
        pipeline = cls()
        crawler.signals.connect(pipeline.spider_opened, signals.spider_opened)
        crawler.signals.connect(pipeline.spider_closed, signals.spider_closed)
        return pipeline

    def spider_opened(self, spider):
        self.exporters = {}
        categories = settings.get('CATEGORIES')

        file = open('{0}.csv'.format(*categories), 'w+b')

        exporter = CsvItemExporter(file, encoding='cp1252')
        exporter.fields_to_export = ['title', 'phone', 'street_address', 'website', 'email', 'Description',
                                     'Hours', 'Other_info', 'Categories', 'Years_in_business', 'neighborhoods',
                                     'other_links']
        exporter.start_exporting()
        for category in categories:
            self.exporters[category] = exporter

    def spider_closed(self, spider):

        for exporter in iter(self.exporters.items()):
            exporter.finish_exporting()

    def process_item(self, item, spider):

        self.exporters[item['category']].export_item(item)
        return item
After running the code, I get the following error:

exporter.finish_exporting()
AttributeError: 'tuple' object has no attribute 'finish_exporting'

I need a separate csv file for each category. Any help would be greatly appreciated.

I would do this in post-processing instead: export all items to a single .csv file with a category field, then split that file up afterwards (see the script at the end of this post). I don't think you are approaching the problem the right way, and you are over-complicating it. Not sure if this works, but it's worth a try :)

You could also apply this code using the spider_closed signal.
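
For instance (a minimal sketch; SplitCsvExtension and the split_by_category() helper are hypothetical names, and the extension would still need to be enabled through the EXTENSIONS setting):

from scrapy import signals


class SplitCsvExtension:
    # Runs the csv-splitting post-processing once the spider finishes

    @classmethod
    def from_crawler(cls, crawler):
        ext = cls()
        crawler.signals.connect(ext.spider_closed, signal=signals.spider_closed)
        return ext

    def spider_closed(self, spider):
        split_by_category('parent.csv')  # hypothetical helper wrapping the script below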


dict.items() returns an iterable where each item looks like tuple(key, value). To get rid of this error you need to remove the iter() and unpack those items, like: for category, exporter in self.exporters.items():

Why do you use iter() in iter(self.exporters.items())? You don't need iter() in this case.

It duplicates the data, and there is a blank line after every row. Also, I would like to use the same headers that were used in the original csv. If you are willing to help, I would really appreciate it if someone could edit this to use DictReader and write the headers; I don't have time for another question. The only problem now is the headers: I don't know how to write them, because they get repeated after every row, since multiple csv files are being opened.

Enumerate over the reader... with for idx, row in enumerate(reader): you get the index of each row and can write the headers only for index 0.

Cool, glad it worked out in the end. If you want a more permanent solution you can look through the scrapy docs, create your own crawler runner and attach this code to the spider_closed signal, but for one-off or even occasional use I would just use this simple script rather than go to the trouble of building a modified runner.
import csv

category = 0  # assumed: index of the category column in parent.csv

with open('parent.csv', 'r') as file:
    reader = csv.reader(file)
    for row in reader:
        with open('{}.csv'.format(row[category]), 'a') as f:
            writer = csv.writer(f)
            writer.writerow(row)
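
Putting the comment thread's fixes together, a version using DictReader, newline='' to stop the blank lines, and a header written only once per output file (tracked with a set here instead of the enumerate approach; the 'category' column name is an assumption) might look like:

import csv

headers_written = set()  # output files that already have a header row

with open('parent.csv', 'r', newline='') as src:
    reader = csv.DictReader(src)
    for row in reader:
        out_name = '{0}.csv'.format(row['category'])
        with open(out_name, 'a', newline='') as out:
            writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
            if out_name not in headers_written:
                writer.writeheader()  # same headers as the parent csv, once per file
                headers_written.add(out_name)
            writer.writerow(row)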