Python scraping: scraping data from nested URLs


I have a website. On that page there is one kind of bathroom faucet, and the page contains multiple products / related products. I want to fetch each product URL and scrape its data. For that I have written the following.

My items.py file looks like

from scrapy.item import Item, Field

class ScrapytestprojectItem(Item):
    producturl = Field()
    imageurl = Field()
    description = Field()
The spider code is

import scrapy
from ScrapyTestProject.items import ScrapytestprojectItem

class QuotesSpider(scrapy.Spider):
    name = "nestedurl"
    allowed_domains = ['www.grohe.com']
    start_urls = [
        'https://www.grohe.com/in/7780/bathroom/bathroom-faucets/essence/',
    ]

    def parse(self, response):
        for divs in response.css('div.viewport div.workspace div.float-box'):
            item = {'producturl': divs.css('a::attr(href)').extract(),
                    'imageurl': divs.css('a img::attr(src)').extract(),
                    'description': divs.css('a div.text::text').extract() + divs.css('a span.nowrap::text').extract()}
            next_page = response.urljoin(item['producturl'])
            yield scrapy.Request(next_page, callback=self.parse, meta={'item': item})
When I run **scrapy crawl nestedurl -o nestedurl.csv**, an empty file is created. The console output is

2017-02-15 18:03:11 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:6024
2017-02-15 18:03:13 [scrapy] DEBUG: Crawled (200) <GET https://www.grohe.com/in/7780/bathroom/bathroom-faucets/essence/> (referer: None)
2017-02-15 18:03:13 [scrapy] ERROR: Spider error processing <GET https://www.grohe.com/in/7780/bathroom/bathroom-faucets/essence/> (referer: None)
Traceback (most recent call last):
  File "/usr/lib/python2.7/dist-packages/scrapy/utils/defer.py", line 102, in iter_errback
    yield next(it)
  File "/usr/lib/python2.7/dist-packages/scrapy/spidermiddlewares/offsite.py", line 28, in process_spider_output
    for x in result:
  File "/usr/lib/python2.7/dist-packages/scrapy/spidermiddlewares/referer.py", line 22, in <genexpr>
    return (_set_referer(r) for r in result or ())
  File "/usr/lib/python2.7/dist-packages/scrapy/spidermiddlewares/urllength.py", line 37, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/usr/lib/python2.7/dist-packages/scrapy/spidermiddlewares/depth.py", line 54, in <genexpr>
    return (r for r in result or () if _filter(r))
  File "/home/pradeep/ScrapyTestProject/ScrapyTestProject/spiders/nestedurl.py", line 15, in parse
    next_page = response.urljoin(item['producturl'])
  File "/usr/lib/python2.7/dist-packages/scrapy/http/response/text.py", line 72, in urljoin
    return urljoin(get_base_url(self), url)
  File "/usr/lib/python2.7/urlparse.py", line 261, in urljoin
    urlparse(url, bscheme, allow_fragments)
  File "/usr/lib/python2.7/urlparse.py", line 143, in urlparse
    tuple = urlsplit(url, scheme, allow_fragments)
  File "/usr/lib/python2.7/urlparse.py", line 176, in urlsplit
    cached = _parse_cache.get(key, None)
TypeError: unhashable type: 'list'
2017-02-15 18:03:13 [scrapy] INFO: Closing spider (finished)
2017-02-15 18:03:13 [scrapy] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 253,
 'downloader/request_count': 1,
 'downloader/request_method_count/GET': 1,
 'downloader/response_bytes': 31063,
 'downloader/response_count': 1,
 'downloader/response_status_count/200': 1,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 2, 15, 12, 33, 13, 396542),
 'log_count/DEBUG': 3,
 'log_count/ERROR': 3,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 1,
 'scheduler/dequeued/memory': 1,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'spider_exceptions/TypeError': 1,
 'start_time': datetime.datetime(2017, 2, 15, 12, 33, 11, 568424)}
2017-02-15 18:03:13 [scrapy] INFO: Spider closed (finished)

I think the item field

divs.css('a::attr(href)').extract()

returns a list, and when that list is used in urljoin it makes urlparse crash, because a list cannot be hashed.
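A minimal sketch of that difference, using an inline HTML snippet instead of the real Grohe page (the markup below is assumed for illustration only):

from scrapy.selector import Selector

# Stand-in HTML resembling one product box; not the actual page markup.
sel = Selector(text='<div class="float-box"><a href="/in/8257/product/">Essence</a></div>')

links_list = sel.css('a::attr(href)').extract()        # ['/in/8257/product/']  -> a list
first_link = sel.css('a::attr(href)').extract_first()  # '/in/8257/product/'    -> a string (or None)

print(type(links_list), links_list)
print(type(first_link), first_link)
# response.urljoin() expects a single string, so passing links_list is what
# produces the "unhashable type: 'list'" TypeError shown in the traceback above.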

The URL is not being generated correctly.

You should enable logging and log some messages to debug your code:

import scrapy, logging
from ScrapyTestProject.items import ScrapytestprojectItem

class QuotesSpider(scrapy.Spider):
    name = "nestedurl"
    allowed_domains = ['www.grohe.com']
    start_urls = [
        'https://www.grohe.com/in/7780/bathroom/bathroom-faucets/essence/',
    ]

    def parse(self, response):
        for divs in response.css('div.viewport div.workspace div.float-box'):
            item = {'producturl': divs.css('a::attr(href)').extract(),
                    'imageurl': divs.css('a img::attr(src)').extract(),
                    'description': divs.css('a div.text::text').extract() + divs.css('a span.nowrap::text').extract()}
            next_page = response.urljoin(item['producturl'])

            logging.info(next_page)  # see what it prints in the console

            yield scrapy.Request(next_page, callback=self.parse, meta={'item': item})

In my spider code I used .extract_first() / .extract_first(''). The output is still unchanged, and it is the same result I get when I test with .extract() in the scrapy shell itself. The producturl looks like --> /in/8257/bathroom/bathroom-faucets/essence/product-details/?product=19408-G145&color=000&material=19408000. So the generated URL, like '/in/8257/bathroom/bathroom-faucets/essence/product-details/?product=19408-G145&color=000&material=19408000', should be appended to the 'www.grohe.in' URL, and then it generates the logger info [… multiple URLs are formed in the same way. No, you can manually join the URL, like

'www.grohe.in' + item['producturl']
    item = {'producturl': divs.css('a::attr(href)').extract(),  # <--- issue here
            'imageurl': divs.css('a img::attr(src)').extract(),
            'description': divs.css('a div.text::text').extract() + divs.css('a span.nowrap::text').extract()}
    next_page = response.urljoin(item['producturl'])  # <--- here item['producturl'] is a list

    item = {'producturl': divs.css('a::attr(href)').extract_first(''),
            'imageurl': divs.css('a img::attr(src)').extract_first(''),
            'description': divs.css('a div.text::text').extract() + divs.css('a span.nowrap::text').extract()}
    next_page = response.urljoin(item['producturl'])  # <--- now it is a single string
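Putting this together, a corrected spider might look like the sketch below. The separate parse_product callback, the response.meta['item'] handoff, and the commented-out product-page selector are assumptions added for illustration, not something from the original post; the selectors would need to be adapted to the real product-page markup.

import scrapy
from ScrapyTestProject.items import ScrapytestprojectItem

class QuotesSpider(scrapy.Spider):
    name = "nestedurl"
    allowed_domains = ['www.grohe.com']
    start_urls = [
        'https://www.grohe.com/in/7780/bathroom/bathroom-faucets/essence/',
    ]

    def parse(self, response):
        # Listing page: build one item per product box and follow its link.
        for divs in response.css('div.viewport div.workspace div.float-box'):
            item = ScrapytestprojectItem()
            item['producturl'] = divs.css('a::attr(href)').extract_first('')
            item['imageurl'] = divs.css('a img::attr(src)').extract_first('')
            item['description'] = (divs.css('a div.text::text').extract() +
                                   divs.css('a span.nowrap::text').extract())
            if not item['producturl']:
                continue
            next_page = response.urljoin(item['producturl'])  # single string now
            yield scrapy.Request(next_page, callback=self.parse_product,
                                 meta={'item': item})

    def parse_product(self, response):
        # Product page: take the item passed along in meta and yield it.
        item = response.meta['item']
        # Hypothetical extra field; adjust the selector to the real markup:
        # item['description'] += response.css('div.product-description::text').extract()
        yield item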