Splitting a variable in a Python 2.7 Scrapy spider

Tags: python-2.7, scrapy

Forgive me, I'm a complete programming novice.

I'm trying to extract a record id from a URL with the code below, and I'm running into trouble. It seems to work fine when I run it in the shell (no errors), but when I run it under Scrapy the framework throws an error.

Example:
If the url is http://domain.com/path/to/record_id=1599
then record_link = /path/to/record_id=1599
and so record_id should be 1599
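
For reference, the split itself is fine on a plain string, which is presumably why it works in the shell. A minimal sketch, assuming the URL from the example above:

    # Plain Python 2.7, outside Scrapy: splitting a single string works
    record_link = '/path/to/record_id=1599'
    record_id = record_link.split('=')[1]  # -> '1599'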

Any help is greatly appreciated.

Edit:

Scrapy errors out like this:

   root@web01:/home/user/spiderdir/spiderdir/spiders# sudo scrapy crawl spider
   2012-02-23 09:47:16+1100 [scrapy] INFO: Scrapy 0.13.0.2839 started (bot: spider)
   2012-02-23 09:47:16+1100 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, MemoryUsage, SpiderState
   2012-02-23 09:47:16+1100 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
   2012-02-23 09:47:16+1100 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
   2012-02-23 09:47:16+1100 [scrapy] DEBUG: Enabled item pipelines:
   2012-02-23 09:47:16+1100 [spider] INFO: Spider opened
   2012-02-23 09:47:16+1100 [spider] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
   2012-02-23 09:47:16+1100 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6031
   2012-02-23 09:47:16+1100 [scrapy] DEBUG: Web service listening on 0.0.0.0:6088
   2012-02-23 09:47:19+1100 [spider] DEBUG: Crawled (200) <GET http://www.domain.com/path/to/> (referer: None)
   2012-02-23 09:47:21+1100 [spider] DEBUG: Crawled (200) <GET http://www.domain.com/path/to/record_id=2> (referer: http://www.domain.com/path/to/)
   2012-02-23 09:47:21+1100 [spider] ERROR: Spider error processing <GET http://www.domain.com/path/to/record_id=2>
   Traceback (most recent call last):
      File "/usr/lib/python2.6/dist-packages/twisted/internet/base.py", line 778, in runUntilCurrent
        call.func(*call.args, **call.kw)
      File "/usr/lib/python2.6/dist-packages/twisted/internet/task.py", line 577, in _tick
        taskObj._oneWorkUnit()
      File "/usr/lib/python2.6/dist-packages/twisted/internet/task.py", line 458, in _oneWorkUnit
        result = self._iterator.next()
      File "/usr/lib/pymodules/python2.6/scrapy/utils/defer.py", line 57, in <genexpr>
        work = (callable(elem, *args, **named) for elem in iterable)
    --- <exception caught here> ---
      File "/usr/lib/pymodules/python2.6/scrapy/utils/defer.py", line 96, in iter_errback
        yield it.next()
      File "/usr/lib/pymodules/python2.6/scrapy/contrib/spidermiddleware/offsite.py", line 24, in process_spider_output
        for x in result:
      File "/usr/lib/pymodules/python2.6/scrapy/contrib/spidermiddleware/referer.py", line 14, in <genexpr>
        return (_set_referer(r) for r in result or ())
      File "/usr/lib/pymodules/python2.6/scrapy/contrib/spidermiddleware/urllength.py", line 32, in <genexpr>
        return (r for r in result or () if _filter(r))
      File "/usr/lib/pymodules/python2.6/scrapy/contrib/spidermiddleware/depth.py", line 56, in <genexpr>
        return (r for r in result or () if _filter(r))
      File "/usr/lib/pymodules/python2.6/scrapy/contrib/spiders/crawl.py", line 66, in _parse_response
        cb_res = callback(response, **cb_kwargs) or ()
      File "/home/nick/googledir/googledir/spiders/google_directory.py", line 36, in parse_main
        record_id = record_link.split("=")[1]
    exceptions.AttributeError: 'list' object has no attribute 'split'

This is a bit of a long shot since you didn't post the error at first, but my guess is that you have to change this line:

    record_id = record_link.strip().split('=')[1]

to

    record_id = record_link[0].strip().split('=')[1]

because HtmlXPathSelector always returns a list of the selected items.
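
To spell out why the original line fails (the href value here is an assumption based on the example URL): extract() hands back a list of strings even when the XPath matches a single href, so calling split on the list raises the AttributeError shown in the traceback.

    record_link = site.select('div[@class="description"]/h4/a/@href').extract()
    # record_link is now a list, e.g. ['/path/to/record_id=1599']
    # record_link.split('=') would raise: 'list' object has no attribute 'split'
    record_id = record_link[0].strip().split('=')[1]  # index first, then split -> '1599'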

I think what I ended up wanting is something like this:

    for site in sites:
        record_link = site.select('div[@class="description"]/h4/a/@href').extract()
        record_id = [i.split('=')[1] for i in record_link]

        item['link'] = record_link
        item['id'] = record_id
        items.append(item)
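
For context, here is a minimal sketch of how that loop might sit inside the spider. The traceback shows a CrawlSpider callback named parse_main on Scrapy 0.13, so the old HtmlXPathSelector API applies, but the spider/item names, the rule pattern, and the container XPath are assumptions:

    from scrapy.contrib.spiders import CrawlSpider, Rule
    from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
    from scrapy.selector import HtmlXPathSelector
    from scrapy.item import Item, Field

    class RecordItem(Item):
        link = Field()
        id = Field()

    class RecordSpider(CrawlSpider):
        name = 'spider'
        allowed_domains = ['www.domain.com']
        start_urls = ['http://www.domain.com/path/to/']
        # follow record_id links and hand each response to parse_main
        rules = (
            Rule(SgmlLinkExtractor(allow=r'record_id='), callback='parse_main'),
        )

        def parse_main(self, response):
            hxs = HtmlXPathSelector(response)
            sites = hxs.select('//div[@class="listing"]')  # assumed container XPath
            items = []
            for site in sites:
                item = RecordItem()
                record_link = site.select('div[@class="description"]/h4/a/@href').extract()
                # extract() returns a list of strings, so split each element
                record_id = [i.split('=')[1] for i in record_link]
                item['link'] = record_link
                item['id'] = record_id
                items.append(item)
            return items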

Comments:

You should post your error. Does the string in record_link need to be wrapped in quotes, i.e. "/path/to" instead of /path/to?

If the comment above is right, how would I add quotes to the scraped data?

The error shows you're feeding an empty string to the spider as a URL. What do you mean by adding quotes?

When I manually set the variable record_link to /path/to/adid=524352 it throws a syntax error because of the '/' in it. If I wrap it in quotes, i.e. '/path/to/adid=524352', then the line record_id = record_link.split("=")[1] correctly prints '524352'. My question is how do I add quotes to the output of record_link = site.select('div[@class="description"]/h4/a/@href').extract()?

You don't need to add quotes to that output. Maybe you could update your question with your current code, so it's clearer what you're trying to do?
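
The quoting confusion above comes down to the same list issue: printing the extracted list shows quoted strings because that is just Python's repr of a list of strings, so no quotes ever need to be added. A quick illustration, with the href value assumed from the comment above:

    record_link = site.select('div[@class="description"]/h4/a/@href').extract()
    print record_link        # ['/path/to/adid=524352']  <- quotes are only the list repr
    print record_link[0]     # /path/to/adid=524352
    record_id = record_link[0].split('=')[1]  # '524352', no quoting required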