Scrapy CrawlSpider does not crawl

I am running into a strange problem while trying to crawl a particular site. If I crawl some of its pages with BaseSpider, the code works fine, but if I change the code to use CrawlSpider, the spider finishes without any errors and without crawling anything.

The following code works fine:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.loader import XPathItemLoader
from dirbot.items import Website
from urlparse import urlparse
from scrapy import log


class hushBabiesSpider(BaseSpider):
    name = "hushbabies"
    #download_delay = 10
    allowed_domains = ["hushbabies.com"]
    start_urls = [
        "http://www.hushbabies.com/category/toys-playgear-bath-bedtime.html",
        "http://www.hushbabies.com/category/mommy-newborn.html",
        "http://www.hushbabies.com"
    ]

    def parse(self, response):
        print response.body
        print "Inside parse Item"
        return []

The following code does not work:

from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.loader import XPathItemLoader
from dirbot.items import Website
from urlparse import urlparse
from scrapy import log


class hushBabiesSpider(CrawlSpider):
    name = "hushbabies"
    #download_delay = 10
    allowed_domains = ["hushbabies.com"]
    start_urls = [
        "http://www.hushbabies.com/category/toys-playgear-bath-bedtime.html",
        "http://www.hushbabies.com/category/mommy-newborn.html",
        "http://www.hushbabies.com"
    ]

    rules = (
        Rule(SgmlLinkExtractor(allow=()),
            'parseItem',
            follow=True,
        ),
    )

    def parseItem(self, response):
        print response.body
        print "Inside parse Item"
        return []

The output from running Scrapy looks like this:

scrapy crawl hushbabies
2012-07-23 18:50:37+0000 [scrapy] INFO: Scrapy 0.15.1-198-g831a450 started (bot: SKBot)
2012-07-23 18:50:37+0000 [scrapy] DEBUG: Enabled extensions: LogStats, TelnetConsole, WebService, CoreStats, MemoryUsage, SpiderState, CloseSpider
2012-07-23 18:50:37+0000 [scrapy] DEBUG: Enabled downloader middlewares: RobotsTxtMiddleware, HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, RedirectMiddleware, CookiesMiddleware, HttpCompressionMiddleware, ChunkedTransferMiddleware, DownloaderStats
2012-07-23 18:50:37+0000 [scrapy] DEBUG: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2012-07-23 18:50:37+0000 [scrapy] DEBUG: Enabled item pipelines: SQLStorePipeline
2012-07-23 18:50:37+0000 [hushbabies] INFO: Spider opened
2012-07-23 18:50:37+0000 [hushbabies] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2012-07-23 18:50:37+0000 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2012-07-23 18:50:37+0000 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2012-07-23 18:50:37+0000 [hushbabies] DEBUG: Crawled (200) <GET http://www.hushbabies.com/robots.txt> (referer: None)
2012-07-23 18:50:39+0000 [hushbabies] DEBUG: Crawled (200) <GET http://www.hushbabies.com> (referer: None)
2012-07-23 18:50:39+0000 [hushbabies] DEBUG: Crawled (200) <GET http://www.hushbabies.com/category/mommy-newborn.html> (referer: None)
2012-07-23 18:50:39+0000 [hushbabies] INFO: Closing spider (finished)
2012-07-23 18:50:39+0000 [hushbabies] INFO: Dumping spider stats:
        {'downloader/request_bytes': 634,
         'downloader/request_count': 3,
         'downloader/request_method_count/GET': 3,
         'downloader/response_bytes': 44395,
         'downloader/response_count': 3,
         'downloader/response_status_count/200': 3,
         'finish_reason': 'finished',
         'finish_time': datetime.datetime(2012, 7, 23, 18, 50, 39, 674965),
         'scheduler/memory_enqueued': 2,
         'start_time': datetime.datetime(2012, 7, 23, 18, 50, 37, 700711)}
2012-07-23 18:50:39+0000 [hushbabies] INFO: Spider closed (finished)
2012-07-23 18:50:39+0000 [scrapy] INFO: Dumping global stats:
        {'memusage/max': 27820032, 'memusage/startup': 27652096}

Changing the site from hushbabies.com to some other site makes the code work.

There appears to be a problem with the underlying SGML parser used by SgmlLinkExtractor on this site.

The following code returns zero links:

>>> from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
>>> fetch('http://www.hushbabies.com/')
>>> len(SgmlLinkExtractor().extract_links(response))
0

You can try Slybot's alternative link extractor, which depends on scrapely:

>>> from slybot.linkextractor import LinkExtractor
>>> from scrapely.htmlpage import HtmlPage
>>> p = HtmlPage(body=response.body_as_unicode())
>>> sum(1 for _ in LinkExtractor().links_to_follow(p))
314
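
As a rough workaround sketch of my own (not part of the answer above), you could also bypass SgmlLinkExtractor entirely and extract the links with lxml from a plain BaseSpider, then follow them manually. The spider below is hypothetical; it assumes lxml is installed and a Scrapy version from roughly the same era as the question:

from urlparse import urljoin

import lxml.html
from scrapy.http import Request
from scrapy.spider import BaseSpider


class hushBabiesLxmlSpider(BaseSpider):
    # Hypothetical workaround spider, not taken from the original post.
    name = "hushbabies_lxml"
    allowed_domains = ["hushbabies.com"]
    start_urls = ["http://www.hushbabies.com"]

    def parse(self, response):
        print "Visited", response.url
        # Parse the page with lxml instead of the SGML-based extractor.
        doc = lxml.html.fromstring(response.body)
        for href in doc.xpath('//a/@href'):
            url = urljoin(response.url, href)
            # Follow absolute http(s) links only; the scheduler's duplicate
            # filter and OffsiteMiddleware keep the crawl on hushbabies.com.
            if url.startswith('http'):
                yield Request(url, callback=self.parse)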


Sounds like a strange bug! Why doesn't SgmlLinkExtractor work for this particular site? Is there any specific reason? The Scrapy documentation contains this note: "SGMLParser based link extractors are not maintained and their use is discouraged. It is recommended to migrate to LxmlLinkExtractor if you are still using SgmlLinkExtractor." The documentation page link is:
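
For reference, a minimal sketch of the migration the docs describe, assuming a reasonably recent Scrapy where the default LinkExtractor is lxml-based (class and spider names here are illustrative, not from the original project):

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule


class HushBabiesLxmlCrawlSpider(CrawlSpider):
    # Illustrative migration example: the same rule as in the question,
    # but using the lxml-based LinkExtractor instead of SgmlLinkExtractor.
    name = "hushbabies_lxml_crawl"
    allowed_domains = ["hushbabies.com"]
    start_urls = ["http://www.hushbabies.com"]

    rules = (
        Rule(LinkExtractor(allow=()), callback="parse_item", follow=True),
    )

    def parse_item(self, response):
        self.logger.info("Inside parse_item: %s", response.url)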