Python 3.x: Scraping a React chart in Python with Selenium


I am trying to use Selenium to extract data from a React chart on a website. I can find the element, but I cannot get at the data. The specific data I need from the chart sits in a nested series:

"data":[{"name":"December 2019",
            "....",
            "coverage":107.9}

within the element.

You don't actually need Selenium here, since the data is embedded in a script tag of the static response. Just pull it out, massage the string a little so it converts to valid JSON, read it in, and then iterate over it:

import json
import requests
from bs4 import BeautifulSoup

url = 'https://www.aholddelhaizepensioen.nl/over-ons/financiele-situatie/beleidsdekkingsgraad'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36'}
response = requests.get(url, headers=headers)
soup = BeautifulSoup(response.text, 'html.parser')

# Find the <script> tag that holds the chart data
scripts = soup.find_all('script')
for script in scripts:
    if 'coverage' in script.text:
        jsonStr = script.text
        break

# Everything after 'Section, ' is (almost) the JSON payload
jsonStr = jsonStr.split('Section, ')[-1]

# Trim trailing characters until the string parses as valid JSON
while True:
    try:
        jsonData = json.loads(jsonStr + '}')
        break
    except json.JSONDecodeError:
        jsonStr = jsonStr.rsplit('}', 1)[0]

data = jsonData['data']['data']
months = []
coverages = []

for each in data:
    months.append(each['name'])
    coverages.append(each['coverage'])
Output:

print(months)
['December 2019', 'Januari 2020', 'Februari 2020', 'Maart 2020', 'April 2020', 'Mei 2020', 'Juni 2020', 'Juli 2020', 'Augustus 2020', 'September 2020', 'Oktober 2020', 'November 2020']

print(coverages)
[107.9, 107.8, 107.2, 106.1, 105.1, 104.3, 103.7, 103.0, 102.8, 102.3, 101.9, 101.6]
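The trailing-brace trimming trick above can be factored into a small reusable helper. A minimal sketch (the `parse_embedded_json` name and the sample fragment are illustrative, not from the original page):

```python
import json

def parse_embedded_json(fragment):
    """Trim trailing text until the fragment parses as a JSON object."""
    while '}' in fragment:
        try:
            return json.loads(fragment + '}')
        except json.JSONDecodeError:
            # Drop the final '}' and everything after it, then retry
            fragment = fragment.rsplit('}', 1)[0]
    raise ValueError('no valid JSON object found in fragment')

# A chart payload followed by JavaScript noise, as in the scraped <script> tag
fragment = '{"data": [{"name": "December 2019", "coverage": 107.9}]}), someJsCall();'
parsed = parse_embedded_json(fragment)
print(parsed['data'][0]['coverage'])
```

The `'}' in fragment` guard also stops the loop from spinning forever when no parseable object exists, which the bare `while True` version above would do on garbage input.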


There are two elements with the same id; you could select the div instead. Try driver.find_element_by_xpath("//script[@id='react_5X8YGgN8H0GoMMQ4RLqjrQ']").
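The duplicate-id point can be checked with a stdlib-only sketch, using xml.etree instead of Selenium so it runs without a browser; the markup here is invented for illustration:

```python
import xml.etree.ElementTree as ET

# Two sibling elements sharing the same id, as on the scraped page
doc = ET.fromstring(
    "<body>"
    "<div id='react_5X8YGgN8H0GoMMQ4RLqjrQ'>chart container</div>"
    "<script id='react_5X8YGgN8H0GoMMQ4RLqjrQ'>var data = 1;</script>"
    "</body>"
)

# An id lookup alone is ambiguous: two elements match
matches = doc.findall(".//*[@id='react_5X8YGgN8H0GoMMQ4RLqjrQ']")

# Constraining the tag name in the XPath predicate picks exactly one
script = doc.find(".//script[@id='react_5X8YGgN8H0GoMMQ4RLqjrQ']")
print(len(matches))
print(script.text)
```

The same idea carries over to Selenium's XPath lookup in the comment above: qualify the tag name rather than relying on the id alone.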
I accepted your answer as the solution because it pointed me in the right direction, thank you. However, I have added another solution to my original question that is less extensive code-wise with the same result, so anyone who runs into a similar problem can pick whichever solution they prefer.
import re
import requests
from bs4 import BeautifulSoup
import pandas as pd

# Fetch site data
url = 'https://www.aholddelhaizepensioen.nl/over-ons/financiele-situatie/beleidsdekkingsgraad'
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/88.0.4324.104 Safari/537.36'}
r = requests.get(url, headers=headers)
soup = BeautifulSoup(r.content, 'html.parser')

# Find script
script_data = soup.find('script', attrs={'id': 'react_5X8YGgN8H0GoMMQ4RLqjrQ'})
script_to_string = str(script_data)  # cast to string for regex

# Regex
coverage_pattern = r'(?<="coverage":)\d{2,3}\.\d{1}'  # positive lookbehind: everything after "coverage": with 2 or 3 digits, a dot, and one more digit
months_pattern = r'(?<="name":")\w+\s\d{4}'  # same idea as coverage_pattern, now a word followed by four digits

# Data
coverages = re.findall(coverage_pattern, script_to_string)
months = re.findall(months_pattern, script_to_string)
frame = pd.DataFrame({'months': months, 'coverages': coverages})
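The two lookbehind patterns can be sanity-checked against a small sample string; the sample below is made up to mirror the page's payload, only the patterns come from the code above:

```python
import re

# Invented fragment shaped like the embedded chart series
sample = '{"name":"December 2019","coverage":107.9},{"name":"Januari 2020","coverage":107.8}'

coverage_pattern = r'(?<="coverage":)\d{2,3}\.\d{1}'
months_pattern = r'(?<="name":")\w+\s\d{4}'

coverages = re.findall(coverage_pattern, sample)
months = re.findall(months_pattern, sample)
print(months)
print(coverages)  # note: re.findall returns strings, not floats
```

One trade-off versus the JSON approach: the regex route yields coverage values as strings, so cast them (e.g. with float()) before doing arithmetic on them.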