Python 2.7 Scrapy shell works, but the actual script returns a 404 error

python-2.7, scrapy, http-status-code-404, scrapy-spider

Running this in the terminal returns the correct 200 code:

scrapy shell http://www.zara.com/us

but running the spider itself from the terminal:

scrapy crawl zara-us

returns 404 (the full crawl log follows the spider code below).
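For reference, the 200 can be confirmed inside that shell session roughly like this (a sketch of the interactive check; the exact output is assumed):

# inside the session started with: scrapy shell http://www.zara.com/us
>>> response.status   # the standalone shell fetch succeeds
200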

import scrapy

from zara.items import ProductItem   # assumed location of ProductItem (not shown in the question)


class ZaraSpider(scrapy.Spider):

    name = "zara-us"
    allowed_domain = ['www.zara.com/us']
    start_urls = [
        "http://www.zara.com/us"
    ]
    handle_httpstatus_list = [404]

    # navigating main page
    def parse(self, response):

        # get 1st 2 category listing in navigation sidebar
        categories = response.xpath('//*[@id="menu"]/ul/li')
        collections = categories[0].xpath('a//text()').extract()
        yield ProductItem(collection=collections[0])
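The ProductItem class is not defined in the question; a minimal items.py that would satisfy the import above might look like this (a hypothetical sketch, with the field name taken from the spider code):

# items.py -- hypothetical; only the field the spider populates is sketched
import scrapy


class ProductItem(scrapy.Item):
    collection = scrapy.Field()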
The crawl log:

2017-01-05 18:45:24 [scrapy.utils.log] INFO: Scrapy 1.3.0 started (bot: zara)
2017-01-05 18:45:24 [scrapy.utils.log] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'zara.spiders', 'ROBOTSTXT_OBEY': True, 'DUPEFILTER_CLASS': 'scrapy.dupefilters.BaseDupeFilter', 'SPIDER_MODULES': ['zara.spiders'], 'HTTPCACHE_ENABLED': True, 'BOT_NAME': 'zara', 'USER_AGENT': 'zara (+http://www.yourdomain.com)'}
2017-01-05 18:45:24 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.logstats.LogStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.corestats.CoreStats']
2017-01-05 18:45:24 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats',
 'scrapy.downloadermiddlewares.httpcache.HttpCacheMiddleware']
2017-01-05 18:45:24 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2017-01-05 18:45:24 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2017-01-05 18:45:24 [scrapy.core.engine] INFO: Spider opened
2017-01-05 18:45:24 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-01-05 18:45:24 [scrapy.extensions.telnet] DEBUG: Telnet console listening on 127.0.0.1:6023
2017-01-05 18:45:25 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://www.zara.com/robots.txt> (referer: None) ['cached']
2017-01-05 18:45:25 [scrapy.core.engine] DEBUG: Crawled (404) <GET http://www.zara.com/us> (referer: None) ['cached']
2017-01-05 18:45:25 [scrapy.core.scraper] ERROR: Spider error processing <GET http://www.zara.com/us> (referer: None)

By default, Scrapy enables ROBOTSTXT_OBEY (sets it to True) for every new project, which means that before your spider can scrape anything, it checks the site's robots.txt file for what it is and is not allowed to scrape.

To disable this, simply remove the ROBOTSTXT_OBEY setting from your settings.py file (or set it to False).

See the Scrapy settings documentation on ROBOTSTXT_OBEY for more.
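A minimal sketch of the relevant part of the project's settings.py (BOT_NAME and the module settings are taken from the overridden-settings line in the log above; the rest of the generated file is omitted):

# settings.py
BOT_NAME = 'zara'

SPIDER_MODULES = ['zara.spiders']
NEWSPIDER_MODULE = 'zara.spiders'

# Delete the following line, or set it to False, so the spider
# no longer consults robots.txt before crawling.
ROBOTSTXT_OBEY = False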
