
Python: parsing HTML and generating a tabular text file


Problem: I want to parse some HTML and produce a tabular text file like this:

East Counties
Babergh, http://ratings.food.gov.uk/OpenDataFiles/FHRS297en-GB.xml, 876
Basildon, http://ratings.food.gov.uk/OpenDataFiles/FHRS109en-GB.xml, 1134
...
...
Instead, only "East Counties" appears in the txt file, so the for loop fails to print the rows for each new region. My attempted code follows the HTML.

HTML code: the full source is on the linked page; this is the excerpt corresponding to the table above:

<h2>
                                    East Counties</h2>

                                        <table>
                                            <thead>
                                                <tr>
                                                    <th>
                                                        <span id="listRegions_lvFiles_0_titleLAName_0">Local authority</span>
                                                    </th>
                                                    <th>
                                                        <span id="listRegions_lvFiles_0_titleUpdate_0">Last update</span>
                                                    </th>
                                                    <th>
                                                        <span id="listRegions_lvFiles_0_titleEstablishments_0">Number of businesses</span>
                                                    </th>
                                                    <th>
                                                        <span id="listRegions_lvFiles_0_titleCulture_0">Download</span>
                                                    </th>
                                                </tr>
                                            </thead>

                                        <tr>
                                            <td>
                                                <span id="listRegions_lvFiles_0_laNameLabel_0">Babergh</span>
                                            </td>
                                            <td>
                                                <span id="listRegions_lvFiles_0_updatedLabel_0">04/05/2017 </span>
                                                at
                                                <span id="listRegions_lvFiles_0_updatedTime_0"> 12:00</span>
                                            </td>
                                            <td>
                                                <span id="listRegions_lvFiles_0_establishmentsLabel_0">876</span>
                                            </td>
                                            <td>
                                                <a id="listRegions_lvFiles_0_fileURLLabel_0" title="Babergh: English language" href="http://ratings.food.gov.uk/OpenDataFiles/FHRS297en-GB.xml">English language</a>
                                            </td>
                                        </tr>

                                        <tr>
                                            <td>
                                                <span id="listRegions_lvFiles_0_laNameLabel_1">Basildon</span>
                                            </td>
                                            <td>
                                                <span id="listRegions_lvFiles_0_updatedLabel_1">06/05/2017 </span>
                                                at
                                                <span id="listRegions_lvFiles_0_updatedTime_1"> 12:00</span>
                                            </td>
                                            <td>
                                                <span id="listRegions_lvFiles_0_establishmentsLabel_1">1,134</span>
                                            </td>
                                            <td>
                                                <a id="listRegions_lvFiles_0_fileURLLabel_1" title="Basildon: English language" href="http://ratings.food.gov.uk/OpenDataFiles/FHRS109en-GB.xml">English language</a>
                                            </td>
                                        </tr>
How can I fix this?

I'm not familiar with the Beautiful Soup library, but judging from the code inside your loop over each h2, you are iterating over all the tr elements in the document. You should iterate only over the rows of the table that belongs to the particular h2 element.

Edit: after a quick look, it seems you can use .next_sibling, since each h2 is always followed by a table, i.e. table = h2.next_sibling.next_sibling (called twice because the first sibling is a string containing whitespace). From table you can then iterate over all of its rows.

The reason you get duplicates for Wales is that there actually are duplicates in the source.
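The double .next_sibling step can be illustrated on a stripped-down fragment of the page (a minimal sketch; the markup below is simplified from the excerpt above):

```python
from bs4 import BeautifulSoup

html = """
<h2>East Counties</h2>
<table>
  <tr><td><span>Babergh</span></td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")
h2 = soup.find("h2")

# The first sibling of <h2> is the whitespace string between the tags,
# so .next_sibling has to be applied twice to reach the <table>.
print(repr(h2.next_sibling))            # '\n'
table = h2.next_sibling.next_sibling
print(table.name)                       # table

# An equivalent call that skips non-tag siblings in one step:
assert h2.find_next_sibling("table") is table
```

find_next_sibling("table") is the more robust choice, since it does not depend on exactly how much whitespace sits between the two tags.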


Did you nest the search for h2 inside the search for the table?

That broke the impasse. Thank you very much!
The attempted code:

from xml.dom import minidom
import urllib2
from bs4 import BeautifulSoup

url='http://ratings.food.gov.uk/open-data/'
f = urllib2.urlopen(url)
mainpage = f.read()
soup = BeautifulSoup(mainpage, 'html.parser')

regions=[]
with open('Regions_and_files.txt', 'w') as f:
    for h2 in soup.find_all('h2')[6:]: #Skip 6 h2 lines 
        region=h2.text.strip() #Get the text of each h2 without the white spaces
        regions.append(str(region))
        f.write(region+'\n')
        for tr in soup.find_all('tr')[1:]: # Skip headers
            tds = tr.find_all('td')
            if len(tds)==0:
                continue
            else:
                a = tr.find_all('a')
                link = str(a)[10:67]
                span = tr.find_all('span')
                places = int(str(span[3].text).replace(',', ''))
                f.write("%s,%s,%s" % \
                              (str(tds[0].text)[1:-1], link, places)+'\n')
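Applying the answer's suggestion, the inner loop can be restricted to the table that follows each h2 instead of every tr in the document. A sketch of the fix (assumes Python 3, hence urllib.request in place of urllib2; the helper name region_rows is mine, not from the original):

```python
from bs4 import BeautifulSoup

def region_rows(soup, skip=0):
    """Yield (region, authority, xml_link, businesses) tuples,
    reading only the <table> that follows each <h2>."""
    for h2 in soup.find_all('h2')[skip:]:
        region = h2.text.strip()
        table = h2.find_next_sibling('table')   # this region's table only
        if table is None:
            continue
        for tr in table.find_all('tr'):
            tds = tr.find_all('td')
            if not tds:                         # header row uses <th>, skip it
                continue
            name = tds[0].get_text(strip=True)
            link = tr.find('a')['href']         # the XML download link
            count = int(tds[2].get_text(strip=True).replace(',', ''))
            yield region, name, link, count

# Usage against the live page (Python 3):
#   from urllib.request import urlopen
#   soup = BeautifulSoup(urlopen('http://ratings.food.gov.uk/open-data/').read(),
#                        'html.parser')
#   with open('Regions_and_files.txt', 'w') as f:
#       last_region = None
#       for region, name, link, count in region_rows(soup, skip=6):
#           if region != last_region:
#               f.write(region + '\n')
#               last_region = region
#           f.write('%s,%s,%s\n' % (name, link, count))
```

Taking the href attribute directly also avoids the fragile str(a)[10:67] slicing in the original, and get_text(strip=True) replaces the [1:-1] trimming.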