Python Scrapy is not receiving my URL


I have some Scrapy code that should be able to pull phone numbers and addresses from a table on a web page:

import scrapy

class PeopleSpider(scrapy.Spider):
    name = "People"
    start_urls = [
        'http://canada411.yellowpages.ca/search/si/1/519-896-7080/',
    ]

    def parse(self, response):
        for people in response.css("div.person-search__table--row"):
            yield {
                'Name': people.css('div.person-search__table--name::text').extract_first(),
                'Phone Number': people.css('div.person-search__table--phoneNumber::text').extract_first(),
                'Street': people.css('div.person-search__table--street::text').extract_first(),
                'City': people.css('div.person-search__table--city::text').extract_first(),
                'Province': people.css('div.person-search__table--province::text').extract_first(),
                'Postal Code': people.css('div.person-search__table--postalCode::text').extract_first(),
            }
But I keep getting 0 crawled pages:

scrapy runspider get.py -o people.json
2017-02-15 20:14:26 [scrapy.utils.log] INFO: Scrapy 1.3.2 started (bot: scrapybot)
2017-02-15 20:14:26 [scrapy.utils.log] INFO: Overridden settings: {'FEED_FORMAT': 'json', 'FEED_URI': 'people.json'}
2017-02-15 20:14:26 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.feedexport.FeedExporter',
 'scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2017-02-15 20:14:26 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2017-02-15 20:14:26 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-02-15 20:14:26 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-02-15 20:14:26 [scrapy.core.engine] INFO: Spider opened
2017-02-15 20:14:26 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-02-15 20:14:26 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-02-15 20:14:27 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET http://99.227.194.212/> from <GET http://canada411.yellowpages.ca/search/si/1/519-896-7080/>
2017-02-15 20:14:27 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (302) to <GET http://99.227.194.212/login.html> from <GET http://99.227.194.212/>
2017-02-15 20:14:27 [scrapy.core.engine] DEBUG: Crawled (200) <GET http://99.227.194.212/login.html> (referer: None)
2017-02-15 20:14:27 [scrapy.core.engine] INFO: Closing spider (finished)
2017-02-15 20:14:27 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 681,
 'downloader/request_count': 3,
 'downloader/request_method_count/GET': 3,
 'downloader/response_bytes': 7931,
 'downloader/response_count': 3,
 'downloader/response_status_count/200': 1,
 'downloader/response_status_count/302': 2,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2017, 2, 16, 1, 14, 27, 273208),
 'log_count/DEBUG': 4,
 'log_count/INFO': 7,
 'response_received_count': 1,
 'scheduler/dequeued': 3,
 'scheduler/dequeued/memory': 3,
 'scheduler/enqueued': 3,
 'scheduler/enqueued/memory': 3,
 'start_time': datetime.datetime(2017, 2, 16, 1, 14, 26, 788223)}
2017-02-15 20:14:27 [scrapy.core.engine] INFO: Spider closed (finished)

Is there something wrong with my code, or is it that the URL simply cannot be crawled?

When you try to crawl the URL you receive a Redirecting (302) message, which means you are being sent away from the page you are trying to crawl. You need to add a meta value to stop the redirect; see the linked answer for a potential solution.
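For illustration, a sketch of the meta values involved (`dont_redirect` and `handle_httpstatus_list` are standard Scrapy `Request.meta` keys; the helper function name is made up). In the spider you would override `start_requests` and yield `scrapy.Request(url, meta=no_redirect_meta(), callback=self.parse)`, so that `parse` receives the 302 response itself instead of the page it redirects to:

```python
# Sketch of the Request.meta values that stop Scrapy from following redirects.
# 'dont_redirect' is honored by RedirectMiddleware; 'handle_httpstatus_list'
# tells HttpErrorMiddleware to hand the 302 response to the callback instead
# of dropping it. The helper function name is hypothetical.

def no_redirect_meta():
    """Build a Request.meta dict that keeps the 302 response in the callback."""
    return {
        'dont_redirect': True,            # RedirectMiddleware leaves the response alone
        'handle_httpstatus_list': [302],  # HttpErrorMiddleware lets parse() see it
    }
```

Note that this only stops Scrapy from following the redirect; it does not make the site serve the listing page, so you still need to deal with whatever triggers the redirect in the first place.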

You are scraping Yellowpages here, and they are not easy to scrape.

I am fairly sure they are redirecting you to a captcha page. I have scraped that site in the past.

You can try this snippet to see which page you are being redirected to:

from scrapy.utils.response import open_in_browser

def parse_details(self, response):
    open_in_browser(response)

It will open the scraped URL in your browser (provided you run the Scrapy project on Ubuntu, Windows, or macOS).