How do I extract data from two tables on a page that share the same class in Python?

Tags: python, html, selenium, selenium-webdriver, beautifulsoup

I want to fetch/select data from two different tables that have the same class.

I tried getting the data with soup.find_all, but formatting the result is getting difficult.

There are many tables with the same class. I only need the values from the tables, without the labels.

URL:

Table 1:

Rim Material: Alloy
Front Tyre Description: 215/55 R16
Front Rim Description: 16x7.0
Rear Tyre Description: 215/55 R16
Rear Rim Description: 16x7.0
// I think this is an extra close

Table 2:

Steering: Rack and Pinion
// I think this is an extra close

What I tried:

I tried getting the first table's content with XPath, but it gives me both the values and the labels:

table1 = driver.find_element_by_xpath("//*[@id='features']/div/div[5]/div[2]/div[1]/div[1]/div/div[2]/table/tbody/tr[1]/td[1]/table/tbody/tr/td[2]")

I tried splitting the data, but with no success. Providing the URL of the page in case you want to have a look.

Not a perfect solution, but if you are willing to sift through the data a little, I would suggest pandas' read_html function.

pandas' read_html extracts every HTML table in a web page and converts them into a list of pandas DataFrames.

This code seems to fetch all 82 table elements from the linked page:

import pandas as pd
import requests

url = "https://www.redbook.com.au/cars/details/2019-honda-civic-50-years-edition-auto-my19/SPOT-ITM-524208/"

#Need to add a fake header to avoid 403 forbidden error
header = {
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.75 Safari/537.36",
        "X-Requested-With": "XMLHttpRequest"
        }

resp = requests.get(url, headers=header)

table_dataframes = pd.read_html(resp.text)


for i, df in enumerate(table_dataframes):
    print(f"================Table {i}=================\n")
    print(df)
This prints out all 82 tables in the web page. The limitation is that you have to find the tables of interest manually and work on them accordingly. Tables 71 and 74 appear to be the ones you want.


This approach would need some extra intelligence to be automated.
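A minimal sketch of what that extra intelligence could look like: rather than hard-coding indices 71 and 74, scan the DataFrames for ones whose first column contains a label you know you want. The labels "Rim Material" and "Steering" come from the question; the assumption that the labels sit in the first column of each DataFrame is mine and may need adjusting against the real page.

import pandas as pd
import requests

url = "https://www.redbook.com.au/cars/details/2019-honda-civic-50-years-edition-auto-my19/SPOT-ITM-524208/"

#Same fake header as above, to avoid the 403 forbidden error
header = {
        "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.75 Safari/537.36",
        "X-Requested-With": "XMLHttpRequest"
        }

resp = requests.get(url, headers=header)
table_dataframes = pd.read_html(resp.text)

#Keep only DataFrames whose first column mentions a wanted label
#(assumption: labels are in column 0, as they were for tables 71 and 74)
wanted = "Rim Material|Steering"
for i, df in enumerate(table_dataframes):
    if df.iloc[:, 0].astype(str).str.contains(wanted).any():
        print(f"================Table {i}=================\n")
        print(df)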


Targeting these two tables is a bit tricky because they contain other tables. I used the CSS selector table:has(td:contains("Rim Material")):has(table) tr:not(:has(tr)) to target the first table, and the same selector with the string "Steering" to target the second:

from bs4 import BeautifulSoup
import requests

url = 'https://www.redbook.com.au/cars/details/2019-honda-civic-50-years-edition-auto-my19/SPOT-ITM-524208/'

headers = {'User-Agent':'Mozilla/5.0'}
soup = BeautifulSoup(requests.get(url, headers=headers).text, 'lxml')

rows = []
for tr in soup.select('table:has(td:contains("Rim Material")):has(table) tr:not(:has(tr)), table:has(td:contains("Steering")):has(table) tr:not(:has(tr))'):
    rows.append([td.get_text(strip=True) for td in tr.select('td')])

for label, text in rows:
    print('{: <30}: {}'.format(label, text))
EDIT: To get the data from multiple URLs:

from bs4 import BeautifulSoup
import requests

headers = {'User-Agent':'Mozilla/5.0'}

urls = ['https://www.redbook.com.au/cars/details/2019-honda-civic-50-years-edition-auto-my19/SPOT-ITM-524208/',
        'https://www.redbook.com.au/cars/details/2019-genesis-g80-38-ultimate-auto-my19/SPOT-ITM-520697/']

for url in urls:
    soup = BeautifulSoup(requests.get(url, headers=headers).text, 'lxml')

    rows = []
    for tr in soup.select('table:has(td:contains("Rim Material")):has(table) tr:not(:has(tr)), table:has(td:contains("Steering")):has(table) tr:not(:has(tr))'):
        rows.append([td.get_text(strip=True) for td in tr.select('td')])

    print('{: <30}: {}'.format('Title', soup.h1.text))
    print('-' * (len(soup.h1.text.strip())+32))
    for label, text in rows:
        print('{: <30}: {}'.format(label, text))

    print('*' * 80)
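If you would rather collect the rows into a single pandas DataFrame than print them, so the results from both URLs in the loop end up appended together, a sketch along these lines should work (assuming pandas is available; the column names car/label/value are made up for illustration):

from bs4 import BeautifulSoup
import pandas as pd
import requests

headers = {'User-Agent':'Mozilla/5.0'}

urls = ['https://www.redbook.com.au/cars/details/2019-honda-civic-50-years-edition-auto-my19/SPOT-ITM-524208/',
        'https://www.redbook.com.au/cars/details/2019-genesis-g80-38-ultimate-auto-my19/SPOT-ITM-520697/']

records = []
for url in urls:
    soup = BeautifulSoup(requests.get(url, headers=headers).text, 'lxml')
    title = soup.h1.text.strip()
    # Same selector as above; each matched row holds a (label, value) pair
    for tr in soup.select('table:has(td:contains("Rim Material")):has(table) tr:not(:has(tr)), table:has(td:contains("Steering")):has(table) tr:not(:has(tr))'):
        label, value = [td.get_text(strip=True) for td in tr.select('td')]
        records.append({'car': title, 'label': label, 'value': value})

df = pd.DataFrame(records)
print(df)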


You don't have to do it in one XPath. You can use XPath to get all the tables, then use an index to select a table from the list, and use another XPath to get the values from that table.

I did it with BeautifulSoup, but with XPath it should be similar.

import requests
from bs4 import BeautifulSoup as BS

url = 'https://www.redbook.com.au/cars/details/2019-honda-civic-50-years-edition-auto-my19/SPOT-ITM-524208/'

text = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'}).text

soup = BS(text, 'html.parser')

all_tables = soup.find_all('table', {'class': 'prop-list'}) # xpath('//table[@class="prop-list"]')
#print(len(all_tables))

print("\n--- Engine ---\n")
all_labels = all_tables[3].find_all('td', {'class': 'label'}) # xpath('.//td[@class="label"]')
all_values = all_tables[3].find_all('td', {'class': 'value'}) # xpath('.//td[@class="value"]')
for label, value in zip(all_labels, all_values):
    print('{}: {}'.format(label.text, value.text))

print("\n--- Fuel ---\n")
all_labels = all_tables[4].find_all('td', {'class': 'label'})
all_values = all_tables[4].find_all('td', {'class': 'value'})
for label, value in zip(all_labels, all_values):
    print('{}: {}'.format(label.text, value.text))

print("\n--- Stearing ---\n")
all_labels = all_tables[7].find_all('td', {'class': 'label'})
all_values = all_tables[7].find_all('td', {'class': 'value'})
for label, value in zip(all_labels, all_values):
    print('{}: {}'.format(label.text, value.text))

print("\n--- Wheels ---\n")
all_labels = all_tables[8].find_all('td', {'class': 'label'})
all_values = all_tables[8].find_all('td', {'class': 'value'})
for label, value in zip(all_labels, all_values):
    print('{}: {}'.format(label.text, value.text))
Result:

--- Engine ---

Engine Type: Piston
Valves/Ports per Cylinder: 4
Engine Location: Front
Compression ratio: 10.6
Engine Size (cc) (cc): 1799
Engine Code: R18Z1
Induction: Aspirated
Power: 104kW @ 6500rpm
Engine Configuration: In-line
Torque: 174Nm @ 4300rpm
Cylinders: 4
Power to Weight Ratio (W/kg): 82.6
Camshaft: OHC with VVT & Lift

--- Fuel ---

Fuel Type: Petrol - Unleaded ULP
Fuel Average Distance (km): 734
Fuel Capacity (L): 47
Fuel Maximum Distance (km): 940
RON Rating: 91
Fuel Minimum Distance (km): 540
Fuel Delivery: Multi-Point Injection
CO2 Emission Combined (g/km): 148
Method of Delivery: Electronic Sequential
CO2 Extra Urban (g/km): 117
Fuel Consumption Combined (L/100km): 6.4
CO2 Urban (g/km): 202
Fuel Consumption Extra Urban (L/100km): 5
Emission Standard: Euro 5
Fuel Consumption Urban (L/100km): 8.7

--- Steering ---

Steering: Rack and Pinion

--- Wheels ---

Rim Material: Alloy
Front Tyre Description: 215/55 R16
Front Rim Description: 16x7.0
Rear Tyre Description: 215/55 R16
Rear Rim Description: 16x7.0

I'm assuming all pages have the same tables and that they keep the same indices.
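If the indices do shift between pages, a more defensive variant (a sketch built on the same label/value classes used above; it assumes the label texts are unique across the tables) is to flatten every "prop-list" table into one dict keyed by label and then pick out only the fields you need:

import requests
from bs4 import BeautifulSoup as BS

url = 'https://www.redbook.com.au/cars/details/2019-honda-civic-50-years-edition-auto-my19/SPOT-ITM-524208/'

text = requests.get(url, headers={'User-Agent': 'Mozilla/5.0'}).text

soup = BS(text, 'html.parser')

# Build one {label: value} mapping across every prop-list table,
# so nothing depends on the tables' positions in the page
data = {}
for table in soup.find_all('table', {'class': 'prop-list'}):
    labels = table.find_all('td', {'class': 'label'})
    values = table.find_all('td', {'class': 'value'})
    for label, value in zip(labels, values):
        data[label.text.strip()] = value.text.strip()

# Pick out only the fields the question asked for
for key in ('Rim Material', 'Front Tyre Description', 'Front Rim Description',
            'Rear Tyre Description', 'Rear Rim Description', 'Steering'):
    print('{}: {}'.format(key, data.get(key)))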


Comments:

You can use XPath to get the tables as a Python list, select a table from the list with an index, table_list[0] or table_list[1], and then use XPath to get the values from that single table.

Can you explain it in more detail? Not sure how to use them.

You don't have to use all those divs in the XPath. Most of the time you can skip them with // to get the element you expect. To get only the values you have to use td[@class="value"] in the XPath. Use XPath to get all tables, or the tables with some class, then use an index to get the table you need and other XPaths to get the values from the table. It may be simpler than creating one XPath that targets the table using some unique element.

Could you make it into a DataFrame, so that if I run two URLs in a loop the results get appended?

@thoris I don't have pandas installed, but inserting the lists into a pandas DataFrame shouldn't be a problem, for sure.

Sure, will try with two pages and let you know. But only one is stored:

urls = ['https://www.redbook.com.au/cars/details/2019-honda-civic-50-years-edition-auto-my19/SPOT-ITM-524208/',
        'https://www.redbook.com.au/cars/details/2019-honda-civic-50-years-edition-auto-my19/SPOT-ITM-524208/']
headers = {'User-Agent': 'Mozilla/5.0'}
for url in urls:
    soup = BeautifulSoup(requests.get(url, headers=headers).text, 'lxml')

Try with this instead (the second URL above just repeats the first):

urls = ['https://www.redbook.com.au/cars/details/2019-honda-civic-50-years-edition-auto-my19/SPOT-ITM-524208/',
        'https://www.redbook.com.au/cars/details/2019-genesis-g80-38-ultimate-auto-my19/SPOT-ITM-520697/']