Python 2.7: How do I use scrapy shell with a username and password on a URL (on a site that requires login)?


I want to scrape a website that requires login, and use scrapy shell in the Python Scrapy framework to check whether my XPath expressions are right or wrong, like:

    C:\Users\Ranvijay.Sachan>scrapy shell https://www.google.co.in/?gfe_rd=cr&ei=mIl8V6LovC8gegtYHYDg&gws_rd=ssl
    :0: UserWarning: You do not have a working installation of the service_identity module: 'No module named service_identity'.  Please install it from <https://pypi.python.org/pypi/service_identity> and make sure all of its dependencies are satisfied.
    Without the service_identity module and a recent enough pyOpenSSL to support it, Twisted can perform only rudimentary TLS client hostname verification.  Many valid certificate/hostname mappings may be rejected.
    2014-12-01 21:00:04-0700 [scrapy] INFO: Scrapy 0.24.2 started (bot: scrapybot)
    2014-12-01 21:00:04-0700 [scrapy] INFO: Optional features available: ssl, http11
    2014-12-01 21:00:04-0700 [scrapy] INFO: Overridden settings: {'LOGSTATS_INTERVAL': 0}
    2014-12-01 21:00:05-0700 [scrapy] INFO: Enabled extensions: TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
    2014-12-01 21:00:05-0700 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
    2014-12-01 21:00:05-0700 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
    2014-12-01 21:00:05-0700 [scrapy] INFO: Enabled item pipelines:
    2014-12-01 21:00:05-0700 [scrapy] DEBUG: Telnet console listening on 127.0.0.1:60
    2014-12-01 21:00:05-0700 [scrapy] DEBUG: Web service listening on 127.0.0.1:6081
    2014-12-01 21:00:05-0700 [default] INFO: Spider opened
    2014-12-01 21:00:06-0700 [default] DEBUG: Crawled (200) <GET https://www.google.co.in/?gfe_rd=cr> (referer: None)
    [s] Available Scrapy objects:
    [s]   crawler    <scrapy.crawler.Crawler object at 0x01B71910>
    [s]   item       {}
    [s]   request    <GET https://www.google.co.in/?gfe_rd=cr>
    [s]   response   <200 https://www.google.co.in/?gfe_rd=cr>
    [s]   settings   <scrapy.settings.Settings object at 0x023CBC90>
    [s]   spider     <Spider 'default' at 0x29402f0>
    [s] Useful shortcuts:
    [s]   shelp()           Shell help (print this help)
    [s]   fetch(req_or_url) Fetch request (or URL) and update local objects
    [s]   view(response)    View response in a browser

    >>> response.xpath("//div[@id='_eEe']/text()").extract()

    [u'Google.co.in offered in: ', u'  ', u'  ', u'  ', u'  ', u'  ', u'  ', u'  ', u'  ', u' ']
    >>>
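
Note that in the session above the URL was passed to scrapy shell without quotes. On the Windows command line, `&` is a command separator, so everything after `?gfe_rd=cr` was cut off, which is why the log shows `Crawled (200) <GET https://www.google.co.in/?gfe_rd=cr>` with the rest of the query string missing. Quoting the URL keeps it intact:

    C:\Users\Ranvijay.Sachan>scrapy shell "https://www.google.co.in/?gfe_rd=cr&ei=mIl8V6LovC8gegtYHYDg&gws_rd=ssl"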

What exactly are you trying to do? What specifically isn't working?

Please go to the URL () and check. I am scraping the logged-in username, which is only possible when you are logged in. But I want to know whether I can pass the username and password as arguments to scrapy shell, and if so, how. Thanks.

This is explained very well in .

I cannot log in and scrape this URL; please give me a solution.
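
For reference, one common approach is to log in from inside the shell itself. This is only a sketch: the login URL `https://example.com/login` and the form field names `username` and `password` are hypothetical, so read the real ones from the login page's HTML. Open scrapy shell on the login page and submit the form with `scrapy.http.FormRequest.from_response`; because CookiesMiddleware is enabled (see the log above), the session cookie from the login response is reused by later `fetch()` calls:

    C:\Users\Ranvijay.Sachan>scrapy shell "https://example.com/login"
    >>> from scrapy.http import FormRequest
    >>> # build a POST request that fills in the login form found in `response`
    >>> req = FormRequest.from_response(
    ...     response,
    ...     formdata={"username": "myuser", "password": "mypass"},
    ... )
    >>> fetch(req)   # log in; the shell's `response` is now the post-login page
    >>> fetch("https://example.com/page-behind-login")
    >>> response.xpath("//div[@id='content']/text()").extract()

After the `fetch(req)` call, check `response` (e.g. with `view(response)`) to confirm the login actually succeeded before testing XPath expressions on pages behind the login.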