Python: recursively scraping data with Scrapy from every table found on a page

I'm using the 64-bit Python.org build of Python 2.7 on Windows Vista 64-bit. I have the following piece of code, which scrapes a single named table from a single web page:

from scrapy.spider import Spider
from scrapy.selector import Selector
from scrapy.utils.markup import remove_tags
from scrapy.cmdline import execute
import csv

filepath = "C:\\Python27\\Football Data\\test" + ".txt"

# Truncate the output file; the with-block closes it automatically
with open(filepath, "w") as f:
    f.write("")

class MySpider(Spider):

    name = "goal2"
    allowed_domains = ["whoscored.com"]
    start_urls = ["http://www.whoscored.com/Players/3859/Fixtures/Wayne-Rooney"]    

    def parse(self, response):
        sel = Selector(response)

        titles = sel.xpath("normalize-space(//title)")
        print 'titles:', titles.extract()[0]

        rows = sel.xpath('//table[@id="player-fixture"]//tbody//tr')

        for row in rows:

            print 'date:', "".join( row.css('.date::text').extract() ).strip()
            print 'result:', "".join( row.css('.result a::text').extract() ).strip()
            print 'team_home:', "".join( row.css('.team.home a::text').extract() ).strip()
            print 'team_away:', "".join( row.css('.team.away a::text').extract() ).strip()
            print 'info:', "".join( row.css('.info::text').extract() ).strip(), "".join( row.css('.info::attr(title)').extract() ).strip()
            print 'rating:', "".join( row.css('.rating::text').extract() ).strip()
            print 'incidents:', ", ".join( row.css('.incidents-icon::attr(title)').extract() ).strip()
            print '-'*40

            date = "".join( row.css('.date::text').extract() ).strip() + ','
            result = "".join( row.css('.result a::text').extract() ).strip() + ','
            team_home = "".join( row.css('.team.home a::text').extract() ).strip() + ','
            team_away = "".join( row.css('.team.away a::text').extract() ).strip() + ','
            info = "".join( row.css('.info::text').extract() ).strip() + ','
            rating = "".join( row.css('.rating::text').extract() ).strip() + ','
            incident = " ".join( row.css('.incidents-icon::attr(title)').extract() ).strip() + ','
I then have some code that crawls multiple pages of the same site and scrapes the text content of the articles:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import Selector
from scrapy.item import Item
from scrapy.spider import BaseSpider
from scrapy import log
from scrapy.cmdline import execute
from scrapy.utils.markup import remove_tags
import time


class ExampleSpider(CrawlSpider):
    name = "goal3"
    allowed_domains = ["whoscored.com"]
    start_urls = ["http://www.whoscored.com/Articles"]
    download_delay = 1

    rules = [Rule(SgmlLinkExtractor(allow=('/Articles',)), follow=True, callback='parse_item')]

    def parse_item(self,response):
        paragraphs = response.selector.xpath("//p").extract()
        text = "".join(remove_tags(paragraph).encode('utf-8') for paragraph in paragraphs)
        print text        


execute(['scrapy','crawl','goal3'])
But what I actually want is to grab the data from any table encountered on any page. The code sample at the top only works when the table on the scraped page is called 'player-fixture', which is not the case on every page.

Before I start trawling through the site's HTML to work out which branches of the site have tables with which names: can Scrapy grab the data from any table it encounters?
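In principle, yes: dropping the id predicate and selecting a bare //table picks up every table on a page. A minimal stdlib sketch of that idea, using xml.etree.ElementTree on a small well-formed sample rather than Scrapy and a live page (the sample markup is invented):

```python
import xml.etree.ElementTree as ET

# Small well-formed sample standing in for a scraped page (invented markup)
HTML = """<html><body>
  <table id="player-fixture">
    <tbody><tr><td>14-09-13</td><td>1 : 0</td></tr></tbody>
  </table>
  <table id="league-table">
    <tbody><tr><td>Manchester United</td><td>3</td></tr></tbody>
  </table>
</body></html>"""

def all_table_rows(html):
    """Yield (table_id, cell_texts) for every table, whatever its id."""
    root = ET.fromstring(html)
    for table in root.iter('table'):      # no id filter: every table matches
        for tr in table.iter('tr'):
            yield table.get('id'), [td.text or '' for td in tr.iter('td')]

for table_id, cells in all_table_rows(HTML):
    print(table_id, cells)
```

In a Scrapy spider the same idea would mean selecting '//table//tbody//tr' instead of the 'player-fixture' XPath, reading each table's own id per match to label the output.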


Thanks

If you expect the id to have a number of different possible values, you can use the or operator in the XPath to cover all the possible scenarios

e.g. '//table[@id="player-fixture" or @id="other-value"]//tbody//tr'
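The or predicate keeps a table when either id matches. xml.etree.ElementTree's XPath subset has no or, so this stdlib sketch expresses the same filter as a set-membership test (the ids and markup are placeholders):

```python
import xml.etree.ElementTree as ET

HTML = """<html><body>
  <table id="player-fixture"><tbody><tr><td>a</td></tr></tbody></table>
  <table id="other-value"><tbody><tr><td>b</td></tr></tbody></table>
  <table id="adverts"><tbody><tr><td>c</td></tr></tbody></table>
</body></html>"""

# Same effect as //table[@id="player-fixture" or @id="other-value"]//tbody//tr
WANTED_IDS = {'player-fixture', 'other-value'}   # placeholder ids

root = ET.fromstring(HTML)
rows = [tr for table in root.iter('table')
        if table.get('id') in WANTED_IDS
        for tr in table.iter('tr')]
print([td.text for tr in rows for td in tr.iter('td')])   # ['a', 'b']
```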

If there are too many possible values, you can try anchoring on something more static instead, such as a parent div
e.g. //div[@att="value"]/table/tbody/tr
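The same anchoring idea can be sketched with the stdlib; the stat-table class name below is made up for illustration:

```python
import xml.etree.ElementTree as ET

HTML = """<html><body>
  <div class="stat-table"><table><tbody><tr><td>kept</td></tr></tbody></table></div>
  <div class="sidebar"><table><tbody><tr><td>skipped</td></tr></tbody></table></div>
</body></html>"""

root = ET.fromstring(HTML)
# Stand-in for //div[@class="stat-table"]//table//tr ("stat-table" is invented)
anchored = root.findall(".//div[@class='stat-table']")
cells = [td.text for div in anchored for td in div.iter('td')]
print(cells)   # ['kept']
```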