Scrapy shell does not recognize the 'sel' object


I'm new to Python and trying to use Scrapy for a project. Scrapy 0.19 is installed on my CentOS box (Linux 2.6.32). I followed the instructions on the Scrapy documentation page, but found that the Scrapy shell cannot find the 'sel' object. My steps are as follows:

[root@localhost rpm]# scrapy shell http://doc.scrapy.org/en/latest/_static/selectors-sample1.html
2014-03-02 06:33:23+0800 [scrapy] INFO: Scrapy 0.19.0 started (bot: scrapybot)
2014-03-02 06:33:23+0800 [scrapy] DEBUG: Optional features available: ssl, http11, libxml2
2014-03-02 06:33:23+0800 [scrapy] DEBUG: Overridden settings: {'LOGSTATS_INTERVAL': 0}
2014-03-02 06:33:23+0800 [scrapy] DEBUG: Enabled extensions: TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-03-02 06:33:23+0800 [scrapy] DEBUG: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-03-02 06:33:23+0800 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-03-02 06:33:23+0800 [scrapy] DEBUG: Enabled item pipelines: 
2014-03-02 06:33:23+0800 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2014-03-02 06:33:23+0800 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2014-03-02 06:33:23+0800 [default] INFO: Spider opened
2014-03-02 06:33:24+0800 [default] DEBUG: Crawled (200) <GET http://doc.scrapy.org/en/latest/_static/selectors-sample1.html> (referer: None)
[s] Available Scrapy objects:
[s]   hxs        <HtmlXPathSelector xpath=None data=u'<html><head><base   href="http://example.c'>
[s]   item       {}
[s]   request    <GET http://doc.scrapy.org/en/latest/_static/selectors-sample1.html>
[s]   response   <200 http://doc.scrapy.org/en/latest/_static/selectors-sample1.html>
[s]   settings   <CrawlerSettings module=None>
[s]   spider     <BaseSpider 'default' at 0x3668ed0>
[s] Useful shortcuts:
[s]   shelp()           Shell help (print this help)
[s]   fetch(req_or_url) Fetch request (or URL) and update local objects
[s]   view(response)    View response in a browser

>>> sel.xpath('//title/text()')
Traceback (most recent call last):
File "<console>", line 1, in <module>
NameError: name 'sel' is not defined
>>> 

Can anyone tell me how to fix this? Thanks in advance.

The sel object was added in version 0.20. When you run the shell command, it tells you which objects are available; in your case that is hxs, which behaves similarly:

>>> hxs.select('//title/text()')

You should try reading the documentation first. The Selectors section explains quite clearly how to use them in the current version.
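The XPath idea itself is independent of the Scrapy version. As a rough illustration only (this uses the Python standard library rather than Scrapy's selectors, and a trimmed stand-in for the sample page), the same title lookup can be sketched like this:

```python
import xml.etree.ElementTree as ET

# Trimmed stand-in for selectors-sample1.html; only <title> matters here
# (assumption: the real page has more markup, omitted for brevity).
html = "<html><head><title>Example website</title></head><body></body></html>"

root = ET.fromstring(html)
# ElementTree supports a limited XPath subset: Scrapy's '//title/text()'
# corresponds roughly to finding .//title and reading its .text attribute.
title = root.find(".//title").text
print(title)
```

This only demonstrates what the XPath expression extracts; inside the Scrapy shell you would stay with hxs.select() (0.19) or sel.xpath() (0.20+).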

You would first have to define an object named sel that has an xpath attribute. You cannot ask Python to work with something that does not exist and expect it to know what you mean.
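If you have scripts that must run against both Scrapy versions, one way to sketch the version gate (the version tuples below are assumptions standing in for a parsed scrapy.__version__ string; Scrapy itself is not imported here):

```python
def selector_expression(version):
    """Return the shell expression appropriate for a given Scrapy version.

    `version` is a tuple such as (0, 19, 0) -- an assumed stand-in for
    a parsed scrapy.__version__ string.
    """
    if version >= (0, 20):
        return "sel.xpath('//title/text()')"   # `sel` exists from 0.20 on
    return "hxs.select('//title/text()')"      # 0.19 and earlier expose `hxs`

print(selector_expression((0, 19, 0)))
print(selector_expression((0, 20, 0)))
```

Or simply upgrade to Scrapy 0.20+, where the shell banner lists sel directly.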