Python only crawls the given page

Tags: python, web-scraping, scrapy, scrapy-spider


I've started learning Scrapy and have googled for about 4-5 hours, but I can't find anything, so can anybody help me? I'm crawling an e-commerce website, and I only want to scrape the product pages; the other pages shouldn't be scraped, just passed through to reach the next page. I gave start_urls the home page, then set allow() on the link extractor, a parse callback and follow=True, but I can't get it to follow links:

scrapy crawl loom
2014-05-14 12:33:20+0000 [scrapy] INFO: Scrapy 0.23.0 started (bot: loom)
2014-05-14 12:33:20+0000 [scrapy] INFO: Optional features available: ssl, http11
2014-05-14 12:33:20+0000 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'loom.spiders', 'SPIDER_MODULES': ['loom.spiders'], 'LOG_LEVEL': 'INFO', 'BOT_NAME': 'loom'}
2014-05-14 12:33:20+0000 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-05-14 12:33:20+0000 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-05-14 12:33:20+0000 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-05-14 12:33:20+0000 [scrapy] INFO: Enabled item pipelines:
2014-05-14 12:33:20+0000 [loom] INFO: Spider opened
2014-05-14 12:33:20+0000 [loom] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
http://2loom.com
[]
2014-05-14 12:33:20+0000 [loom] INFO: Closing spider (finished)
2014-05-14 12:33:20+0000 [loom] INFO: Dumping Scrapy stats:
        {'downloader/request_bytes': 208,
         'downloader/request_count': 1,
         'downloader/request_method_count/GET': 1,
         'downloader/response_bytes': 6329,
         'downloader/response_count': 1,
         'downloader/response_status_count/200': 1,
         'finish_reason': 'finished',
         'finish_time': datetime.datetime(2014, 5, 14, 12, 33, 20, 824120),
         'log_count/INFO': 7,
         'response_received_count': 1,
         'scheduler/dequeued': 1,
         'scheduler/dequeued/memory': 1,
         'scheduler/enqueued': 1,
         'scheduler/enqueued/memory': 1,
         'start_time': datetime.datetime(2014, 5, 14, 12, 33, 20, 657838)}
2014-05-14 12:33:20+0000 [loom] INFO: Spider closed (finished)
My spider:

from scrapy.spider import Spider
from scrapy.selector import Selector
from scrapy.http import Request

from scrapy.utils.response import get_base_url
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class LoomSpider(Spider):
    name = "loom"
    allowed_domains = ["2loom.com"]
    start_urls = [
        "http://2loom.com",
    ]

    rules = [Rule(SgmlLinkExtractor(), callback='parse', follow=True)]    

    def parse(self, response):

        print response.url

        sel = Selector(response)
        print sel.xpath('//h1[@itemprop="name"]/text()').extract()

Thanks for everything!

You need to make a couple of changes to get this working:

  • Inherit from CrawlSpider instead of Spider
  • Give the Rule a callback named something other than parse(): CrawlSpider uses the parse() method internally to implement its rules logic, so overriding it silently disables link following

Here is the spider code that follows every link:

class LoomSpider(CrawlSpider):
    name = "loom"
    allowed_domains = ["2loom.com"]
    start_urls = [
        "http://2loom.com",
    ]

    # A bare SgmlLinkExtractor() extracts every link on the page
    rules = [Rule(SgmlLinkExtractor(), callback='parse_page', follow=True)]

    def parse_page(self, response):
        print response.url
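
To scrape only the product pages, as you described, you can split the rules in two: one rule whose allow() pattern matches product URLs and points at the extraction callback, and a catch-all rule that only follows links. Rule order matters here, since CrawlSpider uses the first rule that matches a link. This is a minimal sketch; the /products/ pattern is a hypothetical placeholder, so replace it with whatever the real product URLs on 2loom.com look like:

from scrapy.selector import Selector
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

class LoomProductSpider(CrawlSpider):
    name = "loom_products"
    allowed_domains = ["2loom.com"]
    start_urls = [
        "http://2loom.com",
    ]

    rules = [
        # Hypothetical pattern: URLs containing /products/ are treated as
        # product pages and handed to the extraction callback
        Rule(SgmlLinkExtractor(allow=(r'/products/',)),
             callback='parse_product', follow=True),
        # Every other link is followed but not scraped
        Rule(SgmlLinkExtractor(), follow=True),
    ]

    def parse_product(self, response):
        # Same extraction as in the original spider
        sel = Selector(response)
        print response.url
        print sel.xpath('//h1[@itemprop="name"]/text()').extract()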