XPath to clean up scraped HTML with BeautifulSoup


I am using scrapy to try to scrape some data that I need from Google Scholar. Take the following link as an example:

Now, I would like to scrape all the titles off this page. The process I am following is as follows:

scrapy shell "http://scholar.google.com/scholar?q=intitle%3Apython+xpath"
This gives me the scrapy shell, inside which I do:

>>> sel.xpath('//h3[@class="gs_rt"]/a').extract()

[
 u'<a href="http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.122.4438&amp;rep=rep1&amp;type=pdf"><b>Python </b>Paradigms for XML</a>', 
 u'<a href="https://svn.eecs.jacobs-university.de/svn/eecs/archive/bsc-2009/sbhushan.pdf">NCClient: A <b>Python </b>Library for NETCONF Clients</a>', 
 u'<a href="http://hal.archives-ouvertes.fr/hal-00759589/">PALSE: <b>Python </b>Analysis of Large Scale (Computer) Experiments</a>', 
 u'<a href="http://i.iinfo.cz/r2/kd/xmlprague2007.pdf#page=53"><b>Python </b>and XML</a>', 
 u'<a href="http://www.loadaveragezero.com/app/drx/Programming/Languages/Python/">drx: <b>Python </b>Programming Language [Computers: Programming: Languages: <b>Python</b>]-loadaverageZero</a>', 
 u'<a href="http://www.worldcolleges.info/sites/default/files/py10.pdf">XML and <b>Python </b>Tutorial</a>', 
 u'<a href="http://dl.acm.org/citation.cfm?id=2555791">Zato\u2014agile ESB, SOA, REST and cloud integrations in <b>Python</b></a>', 
 u'<a href="ftp://ftp.sybex.com/4021/4021index.pdf">XML Processing with Perl, <b>Python</b>, and PHP</a>', 
 u'<a href="http://books.google.com/books?hl=en&amp;lr=&amp;id=El4TAgAAQBAJ&amp;oi=fnd&amp;pg=PT8&amp;dq=python+xpath&amp;ots=RrFv0f_Y6V&amp;sig=tSXzPJXbDi6KYnuuXEDnZCI7rDA"><b>Python </b>&amp; XML</a>', 
 u'<a href="https://code.grnet.gr/projects/ncclient/repository/revisions/efed7d4cd5ac60cbb7c1c38646a6d6dfb711acc9/raw/docs/proposal.pdf">A <b>Python </b>Module for NETCONF Clients</a>'
]
>>> sel.xpath('string(//h3[@class="gs_rt"]/a)').extract()
[u'Python Paradigms for XML']
As you can see, this only selects the first title, and none of the others on the page. I don't know what I should modify my XPath to, so that I select all such elements on the page. Any help is greatly appreciated.

This is based on the answer to an earlier question: a regexp version was suggested there, but I am guessing that BeautifulSoup will be more robust.

I am a scrapy n00b and don't know how to embed this in my spider. I have tried:

from scrapy.spider import Spider
from scrapy.selector import Selector
from bs4 import BeautifulSoup

from scholarscrape.items import ScholarscrapeItem

class ScholarSpider(Spider):
    name = "scholar"
    allowed_domains = ["scholar.google.com"]
    start_urls = [
        "http://scholar.google.com/scholar?q=intitle%3Apython+xpath"
    ]

    def parse(self, response):
        sel = Selector(response)
        item = ScholarscrapeItem()        
        t = sel.xpath('//h3[@class="gs_rt"]/a').extract()
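        # note: extract() returns a list of strings here, while BeautifulSoup expects a single markup string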
        soup = BeautifulSoup(t)
        text_parts = soup.findAll(text=True)
        text = ''.join(text_parts)
        item['title'] = text
        return(item)
But this doesn't quite work. Any suggestions would be helpful.
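One minimal way the BeautifulSoup route could be wired up instead (a sketch under my assumptions, not a confirmed fix): since extract() returns a list of HTML fragments, each fragment has to be souped individually rather than passing the whole list to BeautifulSoup:

for fragment in sel.xpath('//h3[@class="gs_rt"]/a').extract():
    # each fragment is a single <a ...>...</a> string; get_text() strips the markup
    print BeautifulSoup(fragment).get_text()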


Edit 3: Following the suggestions, I have modified my spider file to:

from scrapy.spider import Spider
from scrapy.selector import Selector
from bs4 import BeautifulSoup

from scholarscrape.items import ScholarscrapeItem

class ScholarSpider(Spider):
    name = "dmoz"
    allowed_domains = ["sholar.google.com"]
    start_urls = [
        "http://scholar.google.com/scholar?q=intitle%3Anine+facts+about+top+journals+in+economics"
    ]

    def parse(self, response):
        sel = Selector(response)
        item = ScholarscrapeItem()        
        titles = sel.xpath('//h3[@class="gs_rt"]/a')

        for title in titles:
            title = item.xpath('.//text()').extract()
            print "".join(title)
However, I get the following output:

2014-02-17 15:11:12-0800 [scrapy] INFO: Scrapy 0.22.2 started (bot: scholarscrape)
2014-02-17 15:11:12-0800 [scrapy] INFO: Optional features available: ssl, http11
2014-02-17 15:11:12-0800 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'scholarscrape.spiders', 'SPIDER_MODULES': ['scholarscrape.spiders'], 'BOT_NAME': 'scholarscrape'}
2014-02-17 15:11:12-0800 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
2014-02-17 15:11:13-0800 [scrapy] INFO: Enabled downloader middlewares: HttpAuthMiddleware, DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2014-02-17 15:11:13-0800 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2014-02-17 15:11:13-0800 [scrapy] INFO: Enabled item pipelines:
2014-02-17 15:11:13-0800 [dmoz] INFO: Spider opened
2014-02-17 15:11:13-0800 [dmoz] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2014-02-17 15:11:13-0800 [scrapy] DEBUG: Telnet console listening on 0.0.0.0:6023
2014-02-17 15:11:13-0800 [scrapy] DEBUG: Web service listening on 0.0.0.0:6080
2014-02-17 15:11:13-0800 [dmoz] DEBUG: Crawled (200) <GET http://scholar.google.com/scholar?q=intitle%3Apython+xml> (referer: None)
2014-02-17 15:11:13-0800 [dmoz] ERROR: Spider error processing <GET http://scholar.google.com/scholar?q=intitle%3Apython+xml>
    Traceback (most recent call last):
      File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/base.py", line 1178, in mainLoop
        self.runUntilCurrent()
      File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/base.py", line 800, in runUntilCurrent
        call.func(*call.args, **call.kw)
      File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/defer.py", line 368, in callback
        self._startRunCallbacks(result)
      File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/defer.py", line 464, in _startRunCallbacks
        self._runCallbacks()
    --- <exception caught here> ---
      File "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python/twisted/internet/defer.py", line 551, in _runCallbacks
        current.result = callback(current.result, *args, **kw)
      File "/Users/krishnan/work/research/journals/code/scholarscrape/scholarscrape/spiders/scholar_spider.py", line 20, in parse
        title = item.xpath('.//text()').extract()
      File "/Library/Python/2.7/site-packages/scrapy/item.py", line 65, in __getattr__
        raise AttributeError(name)
    exceptions.AttributeError: xpath
2014-02-17 15:11:13-0800 [dmoz] INFO: Closing spider (finished)
2014-02-17 15:11:13-0800 [dmoz] INFO: Dumping Scrapy stats:
    {'downloader/request_bytes': 247,
     'downloader/request_count': 1,
     'downloader/request_method_count/GET': 1,
     'downloader/response_bytes': 108851,
     'downloader/response_count': 1,
     'downloader/response_status_count/200': 1,
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2014, 2, 17, 23, 11, 13, 196648),
     'log_count/DEBUG': 3,
     'log_count/ERROR': 1,
     'log_count/INFO': 7,
     'response_received_count': 1,
     'scheduler/dequeued': 1,
     'scheduler/dequeued/memory': 1,
     'scheduler/enqueued': 1,
     'scheduler/enqueued/memory': 1,
     'spider_exceptions/AttributeError': 1,
     'start_time': datetime.datetime(2014, 2, 17, 23, 11, 13, 21701)}
2014-02-17 15:11:13-0800 [dmoz] INFO: Spider closed (finished)
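The traceback points at title = item.xpath('.//text()').extract(): inside the loop, item is the ScholarscrapeItem created earlier, not the selector bound to the loop variable title, and scrapy items have no xpath method, hence the AttributeError. (The comment thread below reaches the same conclusion.) Renaming the loop variable avoids the clash; a minimal sketch:

for link in titles:
    title = link.xpath('.//text()').extract()
    print "".join(title)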



Edit 1: My first approach was to try:

>>> sel.xpath('//h3[@class="gs_rt"]/a/text()').extract()
[u'Paradigms for XML', u'NCClient: A ', u'Library for NETCONF Clients', 
 u'PALSE: ', u'Analysis of Large Scale (Computer) Experiments', u'and XML', 
 u'drx: ', u'Programming Language [Computers: Programming: Languages: ',
 u']-loadaverageZero', u'XML and ', u'Tutorial', 
 u'Zato\u2014agile ESB, SOA, REST and cloud integrations in ', 
 u'XML Processing with Perl, ', u', and PHP', u'& XML', u'A ', 
 u'Module for NETCONF Clients']
The problem with this approach is that if you look at the actual Google Scholar page, you will see that the first entry is actually "Python Paradigms for XML" and not the "Paradigms for XML" that scrapy returns. My guess for this behaviour is that "Python" is trapped inside the <b> tags, which is why text() is not doing what we want it to do.
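To illustrate (a sketch in the scrapy shell, with output abridged from the listings above): the child text() axis skips the bold fragment, while the descendant axis picks it up as a separate node:

>>> sel.xpath('//h3[@class="gs_rt"]/a/text()').extract()[0]
u'Paradigms for XML'
>>> sel.xpath('//h3[@class="gs_rt"]/a//text()').extract()[:2]
[u'Python ', u'Paradigms for XML']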

The XPath string() function only returns the string representation of the first node in the node-set passed to it.

Just extract the nodes normally and don't use string().
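If string()-style output is still wanted per result, one workaround (a sketch, assuming the installed Scrapy supports chaining xpath() on per-node selectors) is to apply string(.) to each link individually:

>>> for a in sel.xpath('//h3[@class="gs_rt"]/a'):
...     print a.xpath('string(.)').extract()[0]  # e.g. Python Paradigms for XML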


This is a very interesting and rather difficult problem. The issue you are facing is that "Python" in the title is bold, so it is treated as a node, while the rest of the title is plain text; text() therefore extracts only the text content, and not the content of the <b> node.

Here is my solution. First get all the links:


titles = sel.xpath('//h3[@class="gs_rt"]/a')

Then iterate over them and select all the text content of each node; in other words, join the text node of each link with the text of each of its child nodes:

for item in titles:
    title = item.xpath('.//text()').extract()
    print "".join(title)

This works because in the for loop you are dealing with the text content of each link's children, so you are able to join the matching elements. Inside the loop, title will be equal to, for example:

[u'Python', u'Paradigms for XML']
[u'NCClient: A', u'Python', u'Library for NETCONF Clients']
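So print "".join(title) should emit one full title per line (the extracted fragments carry their own trailing spaces, as in the listings above), e.g.:

Python Paradigms for XML
NCClient: A Python Library for NETCONF Clients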

Why are you converting it to a string? I would drop that first, or select //h3[@class="gs_rt"]/a/text()

@Wrikken, that was my first attempt! However, take the first entry as an example. When I try the approach you suggest, I only get "Paradigms for XML" and not "Python Paradigms for XML". My guess is that this is because "Python" is trapped inside the <b> tags and text() does not pick it up. Does that make sense? (question edited)

Sure, you can get the separate nodes with //h3[@class="gs_rt"]/a//text(), but I assume you want the entire contents of /a as one string?

@Wrikken, yes, that is getting closer to the solution. However, as you rightly say, I want them in the same string.

Hmm, I don't think XPath by itself is suited to that. I agree with Tomalak that it is best to find the /a elements and then get their text contents in application code. When I said "maybe better", I meant: I have no clue how to make XPath behave the way you want ;)

Hey @Tomalak, I just added a comment above which I think highlights why I cannot follow the text() approach. (question edited)

XPath selects nodes. Scrapy currently supports version 1.0, in which you cannot do much more than that. Select the nodes you want (probably the <a> elements) and process them in a second step. (With the more versatile XPath 2.0 you could do //h3[@class="gs_rt"]/a/string(), but that is not available in scrapy.)

Hey @Tomalak, thanks. Unfortunately, I am not quite sure what the right way forward is now. text() is no good, and I can extract without text(), but then I don't know what to do with all that HTML junk. Ideally, I don't want to clean this stuff up piece by piece. Any suggestions for a neater approach?

Extract the a elements and process them in Python. Getting their text values should not be too hard.

Sorry @Tomalak, I should have been clearer. I understand that this is the way forward. However, I am a Python and XPath n00b and don't know what the most straightforward way to do the clean-up is. Could you point me to tools that would do the job?

Hi @Pawelmhm, thanks! I tried this, but I get the following error:

Traceback (most recent call last):
  File "<console>", line 2, in <module>
  File "/Library/Python/2.7/site-packages/scrapy/item.py", line 65, in __getattr__
    raise AttributeError(name)
AttributeError: xpath

Any suggestions?

Don't extract the links, just use the selectors:

titles = sel.xpath('//h3[@class="gs_rt"]/a')

instead of

titles = sel.xpath('//h3[@class="gs_rt"]/a').extract()

If you mean do sel.xpath('//h3[@class="gs_rt"]/a') instead of sel.xpath('//h3[@class="gs_rt"]/a').extract(), I did. As far as I can tell, I followed your code exactly.

Hey @Pawelmhm, weird beans! Let me edit the question and put up the new spider file. Maybe I am being silly about something obvious. (edit coming now)

Hey @Pawelmhm, I found the problem: I already had something called item :P Your code works now when I print. But if I try to assign this to a scrapy item, I only get the last line. For example, writing out with f = open('test.txt', 'w') and the loop

for thing in titles:
    title = thing.xpath('.//text()').extract()
    item['title'] = ''.join(title)

only the last title ends up in the item.