Python: extracting each row from an HTML table


I scraped an HTML table from a web page, but it just pulls the first row's contents over and over instead of the unique values from each row. It seems the positional indexes tds[0] through tds[5] only ever refer to the first row, and I can't figure out how to tell the code to move on to each subsequent row.

import requests
from bs4 import BeautifulSoup

headers = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/62.0.3202.94 Safari/537.36'}

url = 'https://www.fdic.gov/bank/individual/failed/banklist.html'
r = requests.get(url, headers = headers)

soup = BeautifulSoup(r.text, 'html.parser')

mylist5 = []
for tr in soup.find_all('table'):   # despite the name, this iterates over tables, not rows
    tds = tr.findAll('td')          # so tds holds every cell in the whole table
    for x in tds:                   # and tds[0]-tds[5] below always index the first row's cells
        output5 = ("Bank: %s, City: %s, State: %s, Closing Date: %s, Cert #: %s, Acquiring Inst: %s \r\n" % (tds[0].text, tds[1].text, tds[2].text, tds[5].text, tds[3].text, tds[4].text))
        mylist5.append(output5)
        print(output5)

I modified your code slightly: skip the header in the first row, then iterate over the tr rows rather than over the individual tds:
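The answer's code block was not preserved on this page, so here is a sketch of the fix it describes: get the td cells per row inside a loop over the tr elements, skipping the header. It is shown against a small inline table (an assumption standing in for the live FDIC page, same column order) so it runs offline:

```python
from bs4 import BeautifulSoup

# Inline sample standing in for the live FDIC page (hypothetical data,
# same column order as the real table).
html = """
<table>
  <tr><th>Bank Name</th><th>City</th><th>ST</th><th>CERT</th>
      <th>Acquiring Institution</th><th>Closing Date</th></tr>
  <tr><td>The Enloe State Bank</td><td>Cooper</td><td>TX</td><td>10716</td>
      <td>Legend Bank, N. A.</td><td>May 31, 2019</td></tr>
  <tr><td>First NBC Bank</td><td>New Orleans</td><td>LA</td><td>58302</td>
      <td>Whitney Bank</td><td>April 28, 2017</td></tr>
</table>
"""
soup = BeautifulSoup(html, 'html.parser')

mylist5 = []
for tr in soup.find('table').find_all('tr')[1:]:  # [1:] skips the header row
    tds = tr.find_all('td')                       # the cells of *this* row only
    output5 = ("Bank: %s, City: %s, State: %s, Closing Date: %s, Cert #: %s, Acquiring Inst: %s"
               % (tds[0].text, tds[1].text, tds[2].text,
                  tds[5].text, tds[3].text, tds[4].text))
    mylist5.append(output5)
    print(output5)
```

Run against the real URL (with the requests call from the question), this produces the rows shown below.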

Prints:

Bank: The Enloe State Bank, City: Cooper, State: TX, Closing Date: May 31, 2019, Cert #: 10716, Acquiring Inst: Legend Bank, N. A. 

Bank: Washington Federal Bank for Savings, City: Chicago, State: IL, Closing Date: December 15, 2017, Cert #: 30570, Acquiring Inst: Royal Savings Bank 

Bank: The Farmers and Merchants State Bank of Argonia, City: Argonia, State: KS, Closing Date: October 13, 2017, Cert #: 17719, Acquiring Inst: Conway Bank 

Bank: Fayette County Bank, City: Saint Elmo, State: IL, Closing Date: May 26, 2017, Cert #: 1802, Acquiring Inst: United Fidelity Bank, fsb 

Bank: Guaranty Bank, (d/b/a BestBank in Georgia & Michigan) , City: Milwaukee, State: WI, Closing Date: May 5, 2017, Cert #: 30003, Acquiring Inst: First-Citizens Bank & Trust Company 

Bank: First NBC Bank, City: New Orleans, State: LA, Closing Date: April 28, 2017, Cert #: 58302, Acquiring Inst: Whitney Bank 

Bank: Proficio Bank, City: Cottonwood Heights, State: UT, Closing Date: March 3, 2017, Cert #: 35495, Acquiring Inst: Cache Valley Bank 
…etc

You can use find_all together with a list comprehension:

import requests
from bs4 import BeautifulSoup as soup
d = soup(requests.get('https://www.fdic.gov/bank/individual/failed/banklist.html').text, 'html.parser')
h, data = [i.text for i in d.find_all('th')], [[i.text for i in b.find_all('td')] for b in d.find_all('tr')[1:]]
print(h)
print(data)
Output (shortened due to SO's character limit):

['Bank Name', 'City', 'ST', 'CERT', 'Acquiring Institution', 'Closing Date', 'Updated Date']
[['The Enloe State Bank', 'Cooper', 'TX', '10716', 'Legend Bank, N. A.', 'May 31, 2019', 'June 5, 2019'], ['Washington Federal Bank for Savings', 'Chicago', 'IL', '30570', 'Royal Savings Bank', 'December 15, 2017', 'February 1, 2019'], ['The Farmers and Merchants State Bank of Argonia', 'Argonia', 'KS', '17719', 'Conway Bank', 'October 13, 2017', 'February 21, 2018'], ['Fayette County Bank', 'Saint Elmo', 'IL', '1802', 'United Fidelity Bank, fsb', 'May 26, 2017', 'January 29, 2019'], ['Guaranty Bank, (d/b/a BestBank in Georgia & Michigan) ', 'Milwaukee', 'WI', '30003', 'First-Citizens Bank & Trust Company', 'May 5, 2017', 'March 22, 2018'], ['First NBC Bank', 'New Orleans', 'LA', '58302', 'Whitney Bank', 'April 28, 2017', 'January 29, 2019'], ['Proficio Bank', 'Cottonwood Heights', 'UT', '35495', 'Cache Valley Bank', 'March 3, 2017', 'January 29, 2019'], ]
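Since h and data line up column for column, each row can also be turned into a dict keyed by the header names, which avoids the positional indexing the question struggled with. A small sketch, using one sample row copied from the output above:

```python
# h and data as produced above (one sample row shown)
h = ['Bank Name', 'City', 'ST', 'CERT', 'Acquiring Institution',
     'Closing Date', 'Updated Date']
data = [['The Enloe State Bank', 'Cooper', 'TX', '10716',
         'Legend Bank, N. A.', 'May 31, 2019', 'June 5, 2019']]

# Pair each row's cells with the header names.
records = [dict(zip(h, row)) for row in data]
print(records[0]['Bank Name'], '-', records[0]['Closing Date'])
```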

Personally, I would use pandas here:

import pandas as pd

table = pd.read_html('https://www.fdic.gov/bank/individual/failed/banklist.html')[0]
print(table)
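With the table in a DataFrame, the per-row strings the question builds can be produced by iterating the rows. A sketch, using a small hand-built frame in place of the read_html result (the column names are taken from the header shown earlier):

```python
import pandas as pd

# Sample frame standing in for pd.read_html(...)[0]; hypothetical rows,
# column names matching the FDIC table header shown above.
table = pd.DataFrame(
    [['The Enloe State Bank', 'Cooper', 'TX', 10716,
      'Legend Bank, N. A.', 'May 31, 2019'],
     ['First NBC Bank', 'New Orleans', 'LA', 58302,
      'Whitney Bank', 'April 28, 2017']],
    columns=['Bank Name', 'City', 'ST', 'CERT',
             'Acquiring Institution', 'Closing Date'])

# Build one formatted string per row, as in the question.
lines = ["Bank: %s, City: %s, State: %s, Closing Date: %s, Cert #: %s, Acquiring Inst: %s"
         % (row['Bank Name'], row['City'], row['ST'],
            row['Closing Date'], row['CERT'], row['Acquiring Institution'])
         for _, row in table.iterrows()]
for line in lines:
    print(line)
```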

Thank you, this helped a lot.