Python: cleaning an HTML table with pandas


I want to read a table on a website and parse its values. To do this, I did the following:

url = 'http://www.astro.keele.ac.uk/jkt/debcat/'    
df = pd.read_html(url, header=0)
df1 = df[1]
df1.shape
(161, 11)
df1.columns
Index([u' System ', u' Period (days) ', u' V  B-V ', u' Spectral type ', u' Mass (Msun )', u' Radius (Rsun) ', u' Surface gravity (cgs) ', u' log Teff (K) ', u' log (L/Lsun) ', u' [M/H]  (dex) ', u' References and notes '], dtype='object')
Even with header=0, I still get a header table in df[0], which is why I take df[1].

However, I cannot get

df1.Period
AttributeError: 'DataFrame' object has no attribute 'Period'

Nor can I do

df1.to_csv('junk.csv')

So, how do I access the columns and clean up the table? Thanks.

The column name is parsed as
u' Period (days) '
(with the leading and trailing spaces), so access the column like this:

>>> df1[ u' Period (days) ' ]
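Those padded names are easy to miss; a quick sketch (using a small hypothetical frame, not the real page) of why the exact name matters:

```python
import pandas as pd

# Hypothetical frame whose column name keeps the stray spaces that
# pandas.read_html preserved from the HTML cells.
df1 = pd.DataFrame({' Period (days) ': [1.744, 2.753]})

# The exact, padded name works; the trimmed one raises KeyError.
periods = df1[' Period (days) '].tolist()
trimmed_found = True
try:
    df1['Period (days)']
except KeyError:
    trimmed_found = False
```

Printing `repr(df1.columns.tolist())` is the quickest way to spot this kind of whitespace.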
That said, for this kind of work you may want to use an HTML parsing library; for example,
BeautifulSoup
can do it very neatly:

>>> from bs4 import BeautifulSoup
>>> from urllib2 import urlopen

>>> url = 'http://www.astro.keele.ac.uk/jkt/debcat/'
>>> html = urlopen(url).read()
>>> soup = BeautifulSoup(html, 'html.parser')

>>> # catch the target table by its attributes
>>> table = soup.find('table', attrs={'frame':'BOX', 'rules':'ALL'})

>>> # parse the table as a list of lists; each row as a single list
>>> tbl = [[td.getText() for td in tr.findAll(['td', 'th'])] for tr in table.findAll('tr')]
At the end,
tbl
is the target table as a list of lists; that is, each row is a list of the cell values in that row; for example,
tbl[0]
is just the header:

>>> tbl[0]
[u' System ', u' Period (days) ', u' V  B-V ', u' Spectral type ', u' Mass (Msun )', u' Radius (Rsun) ', u' Surface gravity (cgs) ', u' log Teff (K) ', u' log (L/Lsun) ', u' [M/H]  (dex) ', u' References and notes ']
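From here it is a short step back to pandas: the list of lists can be fed straight into a DataFrame. A minimal sketch, with a small hypothetical `tbl` standing in for the scraped one above:

```python
import pandas as pd

# Hypothetical stand-in for the scraped list of lists: the first row
# is the header, the rest are data rows, all still strings.
tbl = [
    [' System ', ' Period (days) '],
    ['V3903 Sgr', '1.744'],
    ['EM Car', '3.414'],
]

# Use the first row as (stripped) column names and coerce the
# period column to numbers.
df = pd.DataFrame(tbl[1:], columns=[c.strip() for c in tbl[0]])
df['Period (days)'] = pd.to_numeric(df['Period (days)'])
```

This also lets you clean the cell text (strip, split, convert) while it is still plain strings, before it becomes a frame.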

I think it already comes in a fair enough shape:

>>> url = 'http://www.astro.keele.ac.uk/jkt/debcat/'
>>> df = pd.read_html(url, header=0)
>>> df1 = df[1]
>>> df1.head()
     System    Period (days)   V  B-V   Spectral type   \
0  V3903 Sgr            1.744      NaT             NaT   
1   V467 Vel            2.753      NaT             NaT   
2     EM Car            3.414      NaT             NaT   
3      Y Cyg            2.996      NaT             NaT   
4   V478 Cyg            2.881      NaT             NaT   

                Mass (Msun )               Radius (Rsun)   \
0  27.27 ± 0.55 19.01 ± 0.44  8.088 ± 0.086 6.125 ± 0.060   
1     25.3 ± 0.7 8.25 ± 0.17      9.99 ± 0.09 3.49 ± 0.03   
2  22.89 ± 0.32 21.43 ± 0.33      9.35 ± 0.17 8.34 ± 0.14   
3  17.57 ± 0.27 17.04 ± 0.26      5.93 ± 0.07 5.78 ± 0.07   
4  16.67 ± 0.45 16.31 ± 0.35  7.423 ± 0.079 7.423 ± 0.079   

        Surface gravity (cgs)                 log Teff (K)   \
0  4.058 ± 0.016 4.143 ± 0.013  4.580 ± 0.021 4.531 ± 0.021   
1  3.842 ± 0.016 4.268 ± 0.017  4.559 ± 0.031 4.402 ± 0.046   
2  3.856 ± 0.017 3.926 ± 0.016  4.531 ± 0.026 4.531 ± 0.026   
3      4.16 ± 0.10 4.18 ± 0.10  4.545 ± 0.007 4.534 ± 0.007   
4  3.919 ± 0.015 3.909 ± 0.013  4.484 ± 0.015 4.485 ± 0.015   

                 log (L/Lsun)   [M/H]  (dex)   \
0  5.087 ± 0.029 4.658 ± 0.032            NaN   
1  5.187 ± 0.126 3.649 ± 0.110            NaN   
2      5.02 ± 0.10 4.92 ± 0.10            NaN   
3                          NaN    0.00 ± 0.00   
4      4.63 ± 0.06 4.63 ± 0.06            NaN   

                               References and notes   
0                   Vaz et al. (1997A&A...327.1094V)  
1             Michalska et al. (2013MNRAS.429.1354M)  
2           Andersen & Clausen (1989A&A...213..183A)  
3       Simon, Sturm & Fiedler (1994A&A...292..507S)  
4  Popper & Hill (1991AJ....101..600P) Popper & E...  

[5 rows x 11 columns]
And you know how to look at the columns:

>>> df1.columns
Index([u' System ', u' Period (days) ', u' V  B-V ', u' Spectral type ', u' Mass (Msun )', u' Radius (Rsun) ', u' Surface gravity (cgs) ', u' log Teff (K) ', u' log (L/Lsun) ', u' [M/H]  (dex) ', u' References and notes '], dtype='object')
That
df1.Period
does not work is hardly surprising; after all, there is no column called
Period
, and pandas does not guess at random which name looks closest. If you want to fix up the column names, you can do the following:

>>> df1.columns = [x.strip() for x in df1.columns] # get rid of the leading/trailing spaces
>>> df1 = df1.rename(columns={"Period (days)": "Period"})
After this, both
df1["Period"]
(preferred) and
df1.Period
(the shortcut) will work:

>>> df1["Period"].describe()
count    161.000000
mean      32.035019
std       98.392634
min        0.452000
25%        2.293000
50%        3.895000
75%        9.945000
max      771.781000
Name: Period, dtype: float64

"Nor can I do
df1.to_csv('junk.csv')
" is not a bug report, because you don't explain why you can't, or what happens when you try. I'll assume you got an encoding error:

>>> df1.to_csv("out.csv")
Traceback (most recent call last):
[...]
 File "lib.pyx", line 845, in pandas.lib.write_csv_rows (pandas/lib.c:14261)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xb1' in position 6: ordinal not in range(128)
This can be avoided by specifying an appropriate encoding:

>>> df1.to_csv("out.csv", encoding="utf8")

Thanks for your solution, it does work. However, I was looking for something I could do on the dataframe itself, i.e. in pandas; in its current state it is a real pain to work with and manipulate the data. Thanks anyway, though.
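On the commenter's point: the "±" columns pack two measurements into one cell, which is what makes the frame painful to manipulate. One way to split such a column into numeric parts, sketched on a hypothetical sample in the "mass1 ± err1 mass2 ± err2" layout the output above shows:

```python
import pandas as pd

# Hypothetical sample cells in the "Mass (Msun)" style shown above:
# two mass ± error pairs per row, one per star of the binary.
df1 = pd.DataFrame({'Mass': ['27.27 ± 0.55 19.01 ± 0.44',
                             '22.89 ± 0.32 21.43 ± 0.33']})

# Pull the four numbers out with a regex and coerce them to floats.
pat = (r'(?P<mass1>[\d.]+)\s*±\s*(?P<err1>[\d.]+)\s+'
       r'(?P<mass2>[\d.]+)\s*±\s*(?P<err2>[\d.]+)')
masses = df1['Mass'].str.extract(pat).astype(float)
```

The named groups become column names (`mass1`, `err1`, `mass2`, `err2`), so `df1` can then be joined with `masses` and analysed numerically.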