Python not collecting data

Tags: python, web-scraping, web-crawler, scrapy

I'm using Scrapy to collect some emails from Craigslist, and when I run it, it returns blank rows in the .csv file. I am able to extract the title, tags, and link; only the email is the problem. Here is the code:

# -*- coding: utf-8 -*-
import re
import scrapy
from scrapy.http import Request


# item class included here
class DmozItem(scrapy.Item):
    # define the fields for your item here like:
    link = scrapy.Field()
    attr = scrapy.Field()
    title = scrapy.Field()
    tag = scrapy.Field()

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["craigslist.org"]
    start_urls = [
    "http://raleigh.craigslist.org/bab/5038434567.html"
    ]

    BASE_URL = 'http://raleigh.craigslist.org/'

    def parse(self, response):
        links = response.xpath('//a[@class="hdrlnk"]/@href').extract()
        for link in links:
            absolute_url = self.BASE_URL + link
            yield scrapy.Request(absolute_url, callback=self.parse_attr)

    def parse_attr(self, response):
        match = re.search(r"(\w+)\.html", response.url)
        if match:
            item_id = match.group(1)
            url = self.BASE_URL + "reply/nos/vgm/" + item_id

            item = DmozItem()
            item["link"] = response.url
            item["title"] = "".join(response.xpath("//span[@class='postingtitletext']//text()").extract())
            item["tag"] = "".join(response.xpath("//p[@class='attrgroup']/span/b/text()").extract()[0])
            return scrapy.Request(url, meta={'item': item}, callback=self.parse_contact)

    def parse_contact(self, response):
        item = response.meta['item']
        item["attr"] = "".join(response.xpath("//div[@class='anonemail']//text()").extract())
        return item

First of all, you want your start_urls to point to the listing page: http://raleigh.craigslist.org/search/bab

Also, as far as I understand, the additional request made to get the email should go to reply/ral/bab/, not reply/nos/vgm/.

Additionally, if there is no attrgroup on the posting, you would get an error on the following line:

item["tag"] = "".join(response.xpath("//p[@class='attrgroup']/span/b/text()").extract()[0])
Replace it with:

item["tag"] = "".join(response.xpath("//p[@class='attrgroup']/span/b/text()").extract())
The complete code that works for me:

# -*- coding: utf-8 -*-
import re
import scrapy


class DmozItem(scrapy.Item):
    # define the fields for your item here like:
    link = scrapy.Field()
    attr = scrapy.Field()
    title = scrapy.Field()
    tag = scrapy.Field()


class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["raleigh.craigslist.org"]
    start_urls = [
        "http://raleigh.craigslist.org/search/bab"
    ]

    BASE_URL = 'http://raleigh.craigslist.org/'

    def parse(self, response):
        links = response.xpath('//a[@class="hdrlnk"]/@href').extract()
        for link in links:
            absolute_url = self.BASE_URL + link
            yield scrapy.Request(absolute_url, callback=self.parse_attr)

    def parse_attr(self, response):
        match = re.search(r"(\w+)\.html", response.url)
        if match:
            item_id = match.group(1)
            url = self.BASE_URL + "reply/ral/bab/" + item_id

            item = DmozItem()
            item["link"] = response.url
            item["title"] = "".join(response.xpath("//span[@class='postingtitletext']//text()").extract())
            item["tag"] = "".join(response.xpath("//p[@class='attrgroup']/span/b/text()").extract())
            return scrapy.Request(url, meta={'item': item}, callback=self.parse_contact)

    def parse_contact(self, response):
        item = response.meta['item']
        item["attr"] = "".join(response.xpath("//div[@class='anonemail']//text()").extract())
        return item
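
For reference, a typical way to run a standalone spider file like this and write the items to CSV is Scrapy's runspider command with a feed output; the file names below are just assumed examples:

# -o writes the scraped items to the given feed file; the format is inferred from the .csv extension
scrapy runspider dmoz_spider.py -o items.csv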

When starting from start_urls, the default callback is parse(), not parse_contact() (see the sketch below). Also, the URLs defined in start_urls contain no email addresses, so your XPath does not match anything. Have you read through the documentation? All of this is explained there.

Comments:

This code worked fine for me until now, but over the last two days something seems to have changed on Craigslist. Could you add working code? Thanks in advance.
@ArkanKalu You need to provide the complete code of your spider.
@alecxe Sure, here you go.
Thank you, the code works fine! How can I limit the spider to extracting 50 rows? (A sketch follows after this thread.)
@ArkanKalu You're welcome. Please don't raise new problems in the comments; if you run into trouble, consider creating a separate question. Thanks.
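
Picking up the point above about the default callback (general Scrapy behavior, not something stated in the thread): requests built from start_urls are handed to parse() unless start_requests() is overridden. A minimal sketch of such an override, placed inside the spider class, assuming the initial requests (which in the original question pointed at individual postings) should go straight to parse_attr():

    def start_requests(self):
        # By default, requests generated from start_urls use parse() as the
        # callback; overriding start_requests() routes the initial requests
        # to a different callback such as parse_attr().
        for url in self.start_urls:
            yield scrapy.Request(url, callback=self.parse_attr)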
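The follow-up question about limiting the output to 50 rows is not answered in the thread; one way to do it with stock Scrapy is the CloseSpider extension, enabled per spider through custom_settings. A minimal sketch (the cutoff is approximate, because requests already in flight may still yield a few extra items):

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    # Stop the crawl once roughly 50 items have been scraped.
    custom_settings = {
        'CLOSESPIDER_ITEMCOUNT': 50,
    }
    # ... the rest of the spider stays as above ...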