Python: iterating over nodes with lxml

I have a web page that I'm currently parsing with BeautifulSoup, but it's very slow, so I decided to try lxml because I read that it's very fast.

Anyway, I'm struggling to get my code to iterate over the part I want. I don't know how to do it with lxml, and I can't find clear documentation on it.

Anyway, here's my code:

import urllib, urllib2
from lxml import etree

def wgetUrl(target):
    try:
        req = urllib2.Request(target)
        req.add_header('User-Agent', 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.0.3) Gecko/2008092417 Firefox/3.0.3')
        response = urllib2.urlopen(req)
        outtxt = response.read()
        response.close()
    except urllib2.URLError:
        # return an empty page rather than crashing on a network error
        return ''
    return outtxt

newUrl = 'http://www.tv3.ie/3player'

data = wgetUrl(newUrl)
parser = etree.HTMLParser()
tree   = etree.fromstring(data, parser)

for elem in tree.iter("div"):
    print elem.tag, elem.attrib, elem.text
This returns all the DIVs, but how do I tell it to iterate only over the div with id='slider1'?

div {'style': 'position: relative;', 'id': 'slider1'} None
This doesn't work:

for elem in tree.iter("slider1"):
I know this is probably a silly question, but I can't figure it out.
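For what it's worth, `iter()` matches tag *names*, not `id` attributes, which is why `iter("slider1")` finds nothing. A minimal sketch of filtering on the attribute instead, run against stand-in markup (not the live page) and written in Python 3 syntax:

```python
from lxml import etree

# Stand-in markup; in the question the page is fetched over the network.
html = """<html><body>
  <div style="position: relative;" id="slider1"><p>content</p></div>
  <div id="other"></div>
</body></html>"""

tree = etree.fromstring(html, etree.HTMLParser())

# iter() takes tag names, so iter("slider1") yields nothing;
# filter on the id attribute instead:
matches = [elem for elem in tree.iter("div") if elem.get("id") == "slider1"]
print(matches[0].tag, matches[0].attrib)
```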

Thanks

**EDIT**

With your help I added this code, and I now have the following output:

for elem in tree.xpath("//div[@id='slider1']//div[@id='gridshow']"):
    print elem[0].tag, elem[0].attrib, elem[0].text
    print elem[1].tag, elem[1].attrib, elem[1].text
    print elem[2].tag, elem[2].attrib, elem[2].text
    print elem[3].tag, elem[3].attrib, elem[3].text
    print elem[4].tag, elem[4].attrib, elem[4].text
Output:

a {'href': '/3player/show/392/57922/1/Tallafornia', 'title': '3player | Tallafornia, 11/01/2013. The Tallafornia crew are back, living in a beachside villa in Santa Ponsa, Majorca. As the crew settle in, the egos grow bigger than ever and cause tension'} None
h3 {} None
span {'id': 'gridcaption'} The Tallafornia crew are back, living in a beachside vill...
span {'id': 'griddate'} 11/01/2013
span {'id': 'gridduration'} 00:27:52
This is great, but I'm missing part of the markup above. Is the parser not handling the page correctly?

I'm not getting the following:

<img alt="3player | Tallafornia, 11/01/2013. The Tallafornia crew are back, living in a beachside villa in Santa Ponsa, Majorca. As the crew settle in, the egos grow bigger than ever and cause tension" src='http://content.tv3.ie/content/videos/0378/tallaforniaep2_fri11jan2013_3player_1_57922_180x102.jpg' class='shadow smallroundcorner'></img>

Any idea why it doesn't pull this in?
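As a sketch of what's going on (stand-in markup, Python 3 syntax): the `<img>` is nested inside the `<a>`, so it's a grandchild of the `div`; indexing `elem[0]`…`elem[4]` only reaches direct children, while a relative descendant search (`.//`) does find it:

```python
from lxml import etree

# Stand-in for one gridshow cell: the <img> sits inside the <a>.
html = """<div id="gridshow">
  <a href="/3player/show/392/57922/1/Tallafornia" title="3player | Tallafornia">
    <img alt="3player | Tallafornia"
         src="http://content.tv3.ie/content/videos/0378/thumb.jpg"
         class="shadow smallroundcorner"/>
  </a>
  <h3></h3>
  <span id="gridcaption">caption text</span>
</div>"""

div = etree.fromstring(html, etree.HTMLParser()).xpath("//div[@id='gridshow']")[0]

direct_children = [child.tag for child in div]  # ['a', 'h3', 'span'] -- no img
imgs = div.xpath(".//img")                      # './/' descends into the <a>
print(direct_children)
print(imgs[0].get("src"))
```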


Thanks again, very helpful post.

You can use an XPath expression, like this:

for elem in tree.xpath("//div[@id='slider1']"):
For example:

>>> import urllib2
>>> import lxml.etree
>>> url = 'http://www.tv3.ie/3player'
>>> data = urllib2.urlopen(url)
>>> parser = lxml.etree.HTMLParser()
>>> tree = lxml.etree.parse(data,parser)
>>> elem = tree.xpath("//div[@id='slider1']")
>>> elem[0].attrib
{'style': 'position: relative;', 'id': 'slider1'}
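The same lookup can be reproduced offline against a small stand-in snippet (a sketch, not the live page); note that `xpath()` always returns a list, which may be empty:

```python
from lxml import etree

html = """<html><body>
  <div style="position: relative;" id="slider1">
    <div id="gridshow"><span id="griddate">11/01/2013</span></div>
  </div>
</body></html>"""

tree = etree.fromstring(html, etree.HTMLParser())

# xpath() returns a list of matching elements
elems = tree.xpath("//div[@id='slider1']")
print(len(elems), elems[0].get("id"))

# ...and an empty list when nothing matches
missing = tree.xpath("//div[@id='no-such-id']")
print(missing)  # []
```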

You need to take a closer look at the content of the page you're processing (a good way is to use Firefox with the Firebug plugin).

The `<div id='gridshow'>` tags you're trying to get at are actually children of the `<div id='slider1'>` tag:

>>> for elem in tree.xpath("//div[@id='slider1']//div[@id='gridshow']"):
...    for elem_a in elem.xpath("./a"):
...       for elem_img in elem_a.xpath("./img"):
...          print '<A> HREF=%s'%(elem_a.attrib['href'])
...          print '<IMG> ALT="%s"'%(elem_img.attrib['alt'])
<A> HREF=/3player/show/392/58784/1/Tallafornia
<IMG> ALT="3player | Tallafornia, 01/02/2013. A fresh romance blossoms in the Tallafornia house. Marc challenges Cormac to a 'bench off' in the gym"
<A> HREF=/3player/show/46/58765/1/Coronation-Street
<IMG> ALT="3player | Coronation Street, 01/02/2013. Tyrone bumps into Kirsty in the street and tries to take Ruby from her pram"
../..
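The nested loop above can be reproduced as a self-contained sketch (stand-in markup, Python 3 print syntax); the leading `./` makes each inner `xpath()` call relative to the element it is called on:

```python
from lxml import etree

# Stand-in for the slider with two gridshow cells.
html = """<div id="slider1">
  <div id="gridshow">
    <a href="/3player/show/392/58784/1/Tallafornia"><img alt="Tallafornia"/></a>
  </div>
  <div id="gridshow">
    <a href="/3player/show/46/58765/1/Coronation-Street"><img alt="Coronation Street"/></a>
  </div>
</div>"""

tree = etree.fromstring(html, etree.HTMLParser())

pairs = []
for elem in tree.xpath("//div[@id='slider1']//div[@id='gridshow']"):
    for elem_a in elem.xpath("./a"):            # direct <a> children only
        for elem_img in elem_a.xpath("./img"):  # direct <img> children of the <a>
            pairs.append((elem_a.get("href"), elem_img.get("alt")))

for href, alt in pairs:
    print('<A> HREF=%s' % href)
    print('<IMG> ALT="%s"' % alt)
```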

Here's how I got it working for myself. I'm not sure it's the best way of doing it; comments are welcome:

import urllib2, re
from lxml import etree
from datetime import datetime

def wgetUrl(target):
    try:
        req = urllib2.Request(target)
        req.add_header('User-Agent', 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-GB; rv:1.9.0.3) Gecko/2008092417 Firefox/3.0.3')
        response = urllib2.urlopen(req)
        outtxt = response.read()
        response.close()
    except urllib2.URLError:
        # return an empty page rather than crashing on a network error
        return ''
    return outtxt

start = datetime.now()

newUrl = 'http://www.tv3.ie/3player' # homepage

data = wgetUrl(newUrl)
parser = etree.HTMLParser()
tree   = etree.fromstring(data, parser)

for elem in tree.xpath("//div[@id='slider1']//div[@id='gridshow'] | //div[@id='slider1']//div[@id='gridshow']//img[@class='shadow smallroundcorner']"):
    if elem.tag == 'img':
        img = elem.attrib.get('src')
        print 'img: ', img

    if elem.tag == 'div':
        show = elem[0].attrib.get('href')
        print 'show: ', show
        titleData = elem[0].attrib.get('title')

        match = re.search(r"3player\s+\|\s+(.+),\s+(\d\d/\d\d/\d\d\d\d)\.\s*(.*)", titleData)
        title=match.group(1)
        print 'title: ', title

        description = match.group(3)
        print 'description: ', description

        date = elem[3].text
        duration = elem[4].text
        print 'date: ', date
        print 'duration: ', duration

end = datetime.now()
print 'time took was ', (end-start)
The timing is fine, although not as big a difference from BeautifulSoup as I was expecting.
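One note on the `re.search` pattern used above: it can be checked in isolation against a title string of the shape shown earlier. The greedy `(.+)` still backtracks to the comma before the date, even though the description contains commas of its own:

```python
import re

# Title of the form "3player | <title>, <dd/mm/yyyy>. <description>"
titleData = ("3player | Tallafornia, 11/01/2013. The Tallafornia crew are back, "
             "living in a beachside villa in Santa Ponsa, Majorca.")

match = re.search(r"3player\s+\|\s+(.+),\s+(\d\d/\d\d/\d\d\d\d)\.\s*(.*)", titleData)
print(match.group(1))  # Tallafornia
print(match.group(2))  # 11/01/2013
print(match.group(3))
```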