Python: How to fix Scrapy's dictionary output format for CSV/JSON


My code is below. I want to export the results to CSV, but Scrapy produces a single dictionary with 2 keys, and all the values are lumped together under each key, so the output does not look right. How can I fix this? Can it be done with a pipeline, an item loader, or something similar?

Many thanks.
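For reference, the CSV export here would be run through Scrapy's built-in feed exporter from the command line (the output filename is arbitrary; the spider name comes from the code below):

scrapy crawl gumtree_easy -o jobs.csv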

import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule
from scrapy.loader import ItemLoader
from scrapy.loader.processors import TakeFirst, MapCompose, Join
from gumtree1.items import GumtreeItems

class AdItemLoader(ItemLoader):
    jobs_in = MapCompose(unicode.strip)

class GumtreeEasySpider(CrawlSpider):
    name = 'gumtree_easy'
    allowed_domains = ['gumtree.com.au']
    start_urls = ['http://www.gumtree.com.au/s-jobs/page-2/c9302?ad=offering']

    rules = (
        Rule(LinkExtractor(restrict_xpaths='//a[@class="rs-paginator-btn next"]'), callback='parse_item', follow=True),
    )

    def parse_item(self, response):
        loader = AdItemLoader(item=GumtreeItems(), response=response)
        loader.add_xpath('jobs','//div[@id="recent-sr-title"]/following-sibling::*//*[@itemprop="name"]/text()')
        loader.add_xpath('location', '//div[@id="recent-sr-title"]/following-sibling::*//*[@class="rs-ad-location-area"]/text()')
        yield loader.load_item() 
The result is:

2016-03-16 01:51:32 [scrapy] DEBUG: Scraped from <200 http://www.gumtree.com.au/s-jobs/page-5/c9302?ad=offering>
{'jobs': [u'Technical Account Manager',
          u'Service & Maintenance Advisor',
          u'we are hiring motorbike driver delivery leaflet.Strat NOW(BE...',
          u'Casual Gardner/landscape maintenance labourer',
          u'Seeking for Experienced Builders Cleaners with white card',
          u'Babysitter / home help for approx 2 weeks',
          u'Toothing brickwork | Dapto',
          u'EXPERIENCED CHEF',
          u'ChildCare Trainee Wanted',
          u'Skilled Pipelayers & Drainer- Sydney Region',
          u'Casual staff required for Royal Easter Show',
          u'Fencing contractor',
          u'Excavator & Loader Operator',
          u'***EXPERIENCED STRAWBERRY AND RASPBERRY PICKERS WANTED***',
          u'Kitchenhand required for Indian restaurant',
          u'Taxi Driver Wanted',
          u'Full time nanny/sitter',
          u'Kitchen hand and meal packing',
          u'Depot Assistant Required',
          u'hairdresser Junior apprentice required for salon in Randwick',
          u'Insulation Installers Required',
          u'The Knox is seeking a new apprentice',
          u'Medical Receptionist Needed in Bankstown Area - Night Shifts',
          u'On Call Easy Work, Do you live in Berala, Lidcombe or Auburn...',
          u'Looking for farm jon'],
 'location': [u'Melbourne City',
              u'Eastern Suburbs',
              u'Rockdale Area',
              u'Logan Area',
              u'Greater Dandenong',
              u'Brisbane North East',
              u'Kiama Area',
              u'Byron Area',
              u'Dardanup Area',
              u'Blacktown Area',
              u'Auburn Area',
              u'Kingston Area',
              u'Inner Sydney',
              u'Northern Midlands',
              u'Inner Sydney',
              u'Hume Area',
              u'Maribyrnong Area',
              u'Perth City',
              u'Brisbane South East',
              u'Eastern Suburbs',
              u'Gold Coast South',
              u'North Canberra',
              u'Bankstown Area',
              u'Auburn Area',
              u'Gingin Area']}
Rather than that, the results should be separate items, like these:

2016-03-16 02:20:46 [scrapy] DEBUG: Crawled (200) <GET http://www.gumtree.com.au/s-jobs/page-3/c9302?ad=offering> (referer: http://www.gumtree.com.au/s-jobs/page-2/c9302?ad=offering)
2016-03-16 02:20:46 [scrapy] DEBUG: Scraped from <200 http://www.gumtree.com.au/s-jobs/page-3/c9302?ad=offering>
{'jobs': u'Live In Au pair-Urgent', 'location': u'Wanneroo Area'}
2016-03-16 02:20:46 [scrapy] DEBUG: Scraped from <200 http://www.gumtree.com.au/s-jobs/page-3/c9302?ad=offering>
{'jobs': u'live in carer', 'location': u'Fraser Coast'}
2016-03-16 02:20:46 [scrapy] DEBUG: Scraped from <200 http://www.gumtree.com.au/s-jobs/page-3/c9302?ad=offering>
{'jobs': u'Mental Health Nurse', 'location': u'Perth Region'}
2016-03-16 02:20:46 [scrapy] DEBUG: Scraped from <200 http://www.gumtree.com.au/s-jobs/page-3/c9302?ad=offering>
{'jobs': u'Experienced NBN pit and pipe installers/node and cabinet wor...',
 'location': u'Marrickville Area'}
2016-03-16 02:20:46 [scrapy] DEBUG: Scraped from <200 http://www.gumtree.com.au/s-jobs/page-3/c9302?ad=offering>
{'jobs': u'Delivery Driver / Pizza Maker Job - Dominos Pizza',
 'location': u'Hurstville Area'}


Have a parent selector for every item and extract jobs and location relative to it; the body of parse_item becomes:

rows = response.xpath('//div[@id="recent-sr-title"]/following-sibling::*')
for row in rows:
    item = GumtreeItems()
    item['jobs'] = row.xpath('.//*[@itemprop="name"]/text()').extract_first().strip()
    item['location'] = row.xpath('.//*[@class="rs-ad-location-area"]/text()').extract_first().strip()
    yield item
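A slightly more defensive variant of the same loop (a sketch, not part of the original answer) skips sibling elements that carry no ad name and tolerates a missing location, so strip() is never called on None:

def parse_item(self, response):
    # one parent selector per ad row; fields are extracted relative to it
    rows = response.xpath('//div[@id="recent-sr-title"]/following-sibling::*')
    for row in rows:
        job = row.xpath('.//*[@itemprop="name"]/text()').extract_first()
        location = row.xpath('.//*[@class="rs-ad-location-area"]/text()').extract_first()
        if job is None:
            # a sibling element with no ad name is not an ad row; skip it
            continue
        item = GumtreeItems()
        item['jobs'] = job.strip()
        item['location'] = location.strip() if location else None
        yield item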


Honestly, using the for loop is the right way to go, but alternatively you could solve it in a pipeline:

from scrapy.http import Response
from gumtree1.items import GumtreeItems, CustomItem
from scrapy.exceptions import DropItem


class CustomPipeline(object):

    def __init__(self, crawler):
        self.crawler = crawler

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def process_item(self, item, spider):
        if isinstance(item, GumtreeItems):
            # split the two parallel lists into one CustomItem per job/location
            # pair and feed each one back through the scraper (note that
            # _process_spidermw_output is a private Scrapy API)
            for i, jobs in enumerate(item['jobs']):
                self.crawler.engine.scraper._process_spidermw_output(
                    CustomItem(jobs=jobs, location=item['location'][i]), None, Response(''), spider)
            # drop the original combined item so only the split items are exported
            raise DropItem("main item dropped")
        return item
Also add the custom item:

# in gumtree1/items.py, alongside GumtreeItems
class CustomItem(scrapy.Item):
    jobs = scrapy.Field()
    location = scrapy.Field()
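For the pipeline to run, it also needs to be enabled in the project settings; the module path below is an assumption based on the gumtree1 project layout:

# in gumtree1/settings.py: register the pipeline with an order value
ITEM_PIPELINES = {
    'gumtree1.pipelines.CustomPipeline': 300,
}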

Hope this helps; again, I think you should use the loop, though.


Welcome to Stack Overflow! It is preferred if you post separate questions instead of combining your questions into one. That way it helps the people answering your question, and also others hunting for at least one of your questions. Thanks!

@Hatchet Thanks a lot for the feedback. I will edit my question.

Thanks @alecxe. Is it possible to do the separation some way other than the loop? Also, what about items that are not under the parent selector; would I need another for loop for another parent selector?

Hi @alecxe. I tried this code but it does not work. It raises AttributeError: unicode object has no attribute xpath. I guess it is related to the .extract() at the end of the line

rows = response.xpath('//div[@id="recent-sr-title"]/following-sibling::*')

but the code does not work if I remove it either. Thanks for your help.

@Ming yes, the extract() has to be removed. Taking a look now.

Thanks for the feedback. Good to know that the for loop is the best way. That is what enumerate is for: you do not have to rely on something like zip, and you can index across multiple lists that you know match up.
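For illustration, a minimal standalone sketch of that enumerate pattern (the sample data is made up):

jobs = ['Technical Account Manager', 'EXPERIENCED CHEF']
locations = ['Melbourne City', 'Kiama Area']

# enumerate yields an index into jobs, and the same index is reused for
# locations, keeping the two parallel lists paired without zip
for i, job in enumerate(jobs):
    print('%s -> %s' % (job, locations[i]))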