Python recursive web scraper using Scrapy not printing text from the page to the screen


I am using Python.org version 2.7, 64 bit, on Windows Vista. I am building a recursive web scraper that seems to work when extracting text from a single page only, but does not seem to work when crawling multiple pages. The code is below:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector
from scrapy.item import Item
from scrapy.spider import BaseSpider
from scrapy import log
from scrapy.cmdline import execute
from scrapy.utils.markup import remove_tags
import time


class ExampleSpider(CrawlSpider):
    name = "goal3"
    allowed_domains = ["whoscored.com"]
    start_urls = ["http://www.whoscored.com"]
    download_delay = 1
    rules = [Rule(SgmlLinkExtractor(allow=()), 
                  follow=True),
             Rule(SgmlLinkExtractor(allow=()), callback='parse_item')
    ]

    def parse_item(self,response):
        self.log('A response from %s just arrived!' % response.url)
        scripts = response.selector.xpath("normalize-space(//title)")
        for scripts in scripts:
            body = response.xpath('//p').extract()
            body2 = "".join(body)
            print remove_tags(body2).encode('utf-8')  


execute(['scrapy','crawl','goal3'])

An example of the output I am getting from this is below:

2014-07-25 19:31:32+0100 [goal3] DEBUG: Crawled (200) <GET http://www.whoscored.com/Players/133260/Show/Michael-Ngoo> (referer: http://www.whoscored.com/Players/14170/Show/Ishmael-Miller)
2014-07-25 19:31:33+0100 [goal3] DEBUG: Crawled (200) <GET http://www.whoscored.com/Teams/160/Show/England-Charlton> (referer: http://www.whoscored.com/Players/10794/Show/Rafik-Djebbour)
2014-07-25 19:31:33+0100 [goal3] DEBUG: Filtered offsite request to 'www.cafc.co.uk': <GET http://www.cafc.co.uk/page/Home>
2014-07-25 19:31:34+0100 [goal3] DEBUG: Crawled (200) <GET http://www.whoscored.com/Matches/721465/Live/England-Championship-2013-2014-Nottingham-Forest-Charlton> (referer: http://www.whoscored.com/Players/10794/Show/Rafik-Djebbour)
2014-07-25 19:31:36+0100 [goal3] DEBUG: Crawled (200) <GET http://www.whoscored.com/Teams/126/News> (referer: http://www.whoscored.com/Teams/1426/News)
2014-07-25 19:31:36+0100 [goal3] DEBUG: Filtered offsite request to 'www.fcsochaux.fr': <GET http://www.fcsochaux.fr/fr/index.php?lng=fr>
2014-07-25 19:31:37+0100 [goal3] DEBUG: Crawled (200) <GET http://www.whoscored.com/Teams/976/News> (referer: http://www.whoscored.com/Teams/1426/News)
2014-07-25 19:31:37+0100 [goal3] DEBUG: Filtered offsite request to 'www.grenoblefoot38.fr': <GET http://www.grenoblefoot38.fr/>
2014-07-25 19:31:37+0100 [goal3] DEBUG: Filtered offsite request to 'www.as.com': <GET http://www.as.com/futbol/articulo/leones-ponen-manos-obra-grenoble/20120713dasdaiftb_52/Tes>
2014-07-25 19:31:38+0100 [goal3] DEBUG: Crawled (200) <GET http://www.whoscored.com/Teams/56/News> (referer: http://www.whoscored.com/Teams/53/News)
2014-07-25 19:31:38+0100 [goal3] DEBUG: Filtered offsite request to 'www.realracingclub.es': <GET http://www.realracingclub.es/default.aspx>
2014-07-25 19:31:39+0100 [goal3] DEBUG: Crawled (200) <GET http://www.whoscored.com/Teams/125/News> (referer: http://www.whoscored.com/Teams/146/News)
2014-07-25 19:31:39+0100 [goal3] DEBUG: Filtered offsite request to 'www.asnl.net': <GET http://www.asnl.net/pages/club/entraineurs.html>
2014-07-25 19:31:40+0100 [goal3] DEBUG: Crawled (200) <GET http://www.whoscored.com/Teams/425/News> (referer: http://www.whoscored.com/Teams/24/News)
2014-07-25 19:31:40+0100 [goal3] DEBUG: Filtered offsite request to 'www.dbu.dk': <GET http://www.dbu.dk/>
2014-07-25 19:31:42+0100 [goal3] DEBUG: Crawled (200) <GET http://www.whoscored.com/Teams/282/News> (referer: http://www.whoscored.com/Teams/50/News)
2014-07-25 19:31:42+0100 [goal3] DEBUG: Filtered offsite request to 'www.fc-koeln.de': <GET http://www.fc-koeln.de/index.php?id=10>
2014-07-25 19:31:43+0100 [goal3] DEBUG: Crawled (200) <GET http://www.whoscored.com/Teams/58/News> (referer: http://www.whoscored.com/Teams/131/News)
2014-07-25 19:31:43+0100 [goal3] DEBUG: Filtered offsite request to 'www.realvalladolid.es': <GET http://www.realvalladolid.es/>
2014-07-25 19:31:44+0100 [goal3] DEBUG: Crawled (200) <GET http://www.whoscored.com/Teams/973/News> (referer: http://www.whoscored.com/Teams/145/News)
2014-07-25 19:31:44+0100 [goal3] DEBUG: Filtered offsite request to 'www.fifci.org': <GET http://www.fifci.org/>
I can understand the external links being filtered out, as they are outside the scope of the crawl. What I can't understand is why nothing other than a "DEBUG:" message and the link of the page is being returned, particularly when all of these pages have a successful HTTP 200 return code printed.

Can anyone see what might be going wrong here?


Thanks

You only need to have one rule, with follow=True and the callback on that same rule. When several rules match the same link, Scrapy applies only the first one in the order they are defined, so your follow-only rule was shadowing the rule that carries the callback and parse_item was never called:

rules = [Rule(SgmlLinkExtractor(), follow=True, callback='parse_item')]
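
Put back into the original spider, a minimal sketch might look like the following (same Scrapy 0.x-era imports as in the question; the redundant title loop is dropped, since the paragraph text is extracted once per response anyway):

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.utils.markup import remove_tags


class ExampleSpider(CrawlSpider):
    name = "goal3"
    allowed_domains = ["whoscored.com"]
    start_urls = ["http://www.whoscored.com"]
    download_delay = 1

    # One rule: follow every on-site link and hand each response to parse_item.
    rules = [Rule(SgmlLinkExtractor(), follow=True, callback='parse_item')]

    def parse_item(self, response):
        self.log('A response from %s just arrived!' % response.url)
        # Join all <p> elements and strip the markup before printing.
        body = "".join(response.xpath('//p').extract())
        print remove_tags(body).encode('utf-8')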

Hi. This seems to work, but it is only returning the footer links from the page. I'll need to look at the HTML and find out how the body text is encoded, since "//p" doesn't work in this instance. Thanks.
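
In case it helps, one generic fallback when //p misses the main text is to take every text node under body while skipping script and style contents. This is purely an illustration as a drop-in replacement for parse_item, not something verified against whoscored.com's markup (much of that site's content may be built client-side with JavaScript, in which case the raw HTML simply doesn't contain the text):

    def parse_item(self, response):
        self.log('A response from %s just arrived!' % response.url)
        # Grab all text nodes under <body>, ignoring <script> and <style> contents.
        texts = response.xpath(
            '//body//text()[not(ancestor::script) and not(ancestor::style)]').extract()
        # Collapse the fragments into one whitespace-normalised string.
        page_text = " ".join(t.strip() for t in texts if t.strip())
        print page_text.encode('utf-8')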