Python: scraping text from a tooltip contained in an ajax-powered popup box


I know similar questions have been asked before, but none of them seem to work in this particular case. I have run into this problem on several websites, so I picked this one at random.

If you look at the first entry on the first page, you see:

It shows the beginning of the tag description, the total number of questions, and the number of questions asked today and this week. This information is easy to select:

from selenium.webdriver import Chrome
driver = Chrome()
driver.get('https://stackoverflow.com/tags')
For example, focusing on the JavaScript tag:

dat = driver.find_elements_by_xpath("//*[contains(text(), 'week')]/ancestor::div[5]/div/div[1]/span/parent::*")
for i in dat:
    print(i.text)
Output:

    javascript× 1801272
JavaScript (not to be confused with Java) is a high-level, dynamic, multi-paradigm, object-oriented, prototype-based, weakly-typed language used for both client-side and server-side scripting. Its pri…
703 asked today, 4757 this week
Now it gets more complicated (at least for me): if you hover over the JavaScript tag, you see the following popup box:

The box contains the full tag description, along with (rounded) counts of questions and watchers. If you then hover over the "1.2m watchers" element, you see the following tooltip:

This is the call url for this particular box:

https://stackoverflow.com/tags/javascript/popup?_=1556571234452
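The trailing _=1556571234452 looks like the cache-busting timestamp that jQuery appends to ajax requests, so it should be safe to omit. A small sketch of constructing these popup urls (the popup_url helper is illustrative, not part of the original code; note that tag names such as c# need percent-encoding):

```python
import time
import urllib.parse

def popup_url(tag, cache_bust=False):
    """Build the popup endpoint for a tag, percent-encoding
    characters such as the '#' in 'c#'."""
    url = 'https://stackoverflow.com/tags/{}/popup'.format(urllib.parse.quote(tag))
    if cache_bust:
        # mimic jQuery's '_=<ms timestamp>' anti-caching parameter (optional)
        url += '?_={}'.format(int(time.time() * 1000))
    return url

print(popup_url('javascript'))  # https://stackoverflow.com/tags/javascript/popup
print(popup_url('c#'))          # https://stackoverflow.com/tags/c%23/popup
```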
The target item (along with the total number of questions) is contained in this html, in a span title:

<div class="-container">
<div class="-arrow js-source-arrow"></div>
<div class="mb12">
        <span class="fc-orange-400 fw-bold mr8">
            <svg aria-hidden="true" class="svg-icon va-text-top iconFire" width="18" height="18" viewBox="0 0 18 18"><path d="M7.48.01c.87 2.4.44 3.74-.57 4.77-1.06 1.16-2.76 2.02-3.93 3.7C1.4 10.76 1.13 15.72 6.8 17c-2.38-1.28-2.9-5-.32-7.3-.66 2.24.57 3.67 2.1 3.16 1.5-.52 2.5.58 2.46 1.84-.02.86-.33 1.6-1.22 2A6.17 6.17 0 0 0 15 10.56c0-3.14-2.74-3.56-1.36-6.2-1.64.14-2.2 1.24-2.04 3.03.1 1.2-1.11 2-2.02 1.47-.73-.45-.72-1.31-.07-1.96 1.36-1.36 1.9-4.52-2.03-6.88L7.45 0l.03.01z"/></svg>
            <span title="1195903">1.2m</span> watchers
        </span>

        <span class="mr8"><span title="1801277">1.8m</span> questions</span>

        <a class="float-right fc-orange-400" href="/feeds/tag/javascript" title="Add this tag to your RSS reader"><svg aria-hidden="true" class="svg-icon iconRss" width="18" height="18" viewBox="0 0 18 18"><path d="M1 3c0-1.1.9-2 2-2h12a2 2 0 0 1 2 2v12a2 2 0 0 1-2 2H3a2 2 0 0 1-2-2V3zm14.5 12C15.5 8.1 9.9 2.5 3 2.5V5a10 10 0 0 1 10 10h2.5zm-5 0A7.5 7.5 0 0 0 3 7.5V10a5 5 0 0 1 5 5h2.5zm-5 0A2.5 2.5 0 0 0 3 12.5V15h2.5z"/></svg></a>
</div>
        <div>JavaScript (not to be confused with Java) is a high-level, dynamic, multi-paradigm, object-oriented, prototype-based, weakly-typed language used for both client-side and server-side scripting. Its primary use is in rendering and manipulating of web pages. Use this tag for questions regarding ECMAScript and its various dialects/implementations (excluding ActionScript and Google-Apps-Script). <a href="/questions/tagged/javascript">View tag</a></div></div>

To preempt possible comments, let me add: I know SO has a search API for things like this, but (i) as mentioned, I picked SO's tags page at random and I would like to solve this as generally as possible; (ii) if I understand correctly, the API cannot do this; (iii) even if it could, I would still like to learn how to achieve it with scraping techniques.

The following constructs the minimal urls needed to retrieve that information, then extracts the required items from those urls and inserts the variables, as a list row, into the final list results. That final list is converted to a dataframe at the end.

You can loop over all the pages using the construct:

https://stackoverflow.com/tags?page={}
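Note that range(1, num_pages) would skip the last page; range(1, num_pages + 1) covers them all. A small sketch of generating the page urls (num_pages is hard-coded here purely for illustration, where it would really come from the '.page-numbers' pagination links):

```python
page_url = 'https://stackoverflow.com/tags?page={}'

num_pages = 3  # illustrative value; normally scraped from '.page-numbers'
page_urls = [page_url.format(p) for p in range(1, num_pages + 1)]
print(page_urls)
```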
I'm not sure what you want regarding the weekly numbers etc., since the same period is not reported for every tag. I will update the answer if you can clarify how you want to handle this. It looks like the unit can be day, week, or month (two of those).

I think the questions-asked-in-period (week/month etc.) figures are loaded dynamically, so you don't always have both measures. I added an if statement to handle this. You could keep issuing requests, testing the len of the frequencies until it == 2, until you get that information.
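The "keep requesting until len == 2" idea could be sketched as below; fetch_frequencies and get_stats are hypothetical names, and a real get_stats would wrap a request plus a '.stats-row' parse:

```python
def fetch_frequencies(get_stats, max_tries=5):
    """Re-request a stats row until both period counts have loaded.

    get_stats is any callable returning the list of frequency strings
    scraped from one '.stats-row' (a stand-in for request + parse).
    """
    for _ in range(max_tries):
        frequencies = get_stats()
        if len(frequencies) == 2:
            return frequencies
    # give up after max_tries and pad with a placeholder
    return frequencies + ['Not loaded'] * (2 - len(frequencies))

# simulate a row whose second count only appears on the third request
responses = iter([['703 asked today'],
                  ['703 asked today'],
                  ['703 asked today', '4757 this week']])
print(fetch_frequencies(lambda: next(responses)))
# ['703 asked today', '4757 this week']
```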

import requests
from bs4 import BeautifulSoup as bs
import urllib.parse
import pandas as pd

url = 'https://stackoverflow.com/tags/{}/popup'
page_url = 'https://stackoverflow.com/tags?page={}'
results = []

with requests.Session() as s:
    r = s.get('https://stackoverflow.com/tags')
    soup = bs(r.content, 'lxml')
    num_pages = int(soup.select('.page-numbers')[-2].text)

    for page in range(1, 3): # for page in range(1, num_pages + 1): to cover every page
        frequency1 = []
        frequency2 = []
        if page > 1:
            r = s.get(page_url.format(page))
            soup = bs(r.content, 'lxml')

        tags = [(item.text, urllib.parse.quote(item.text)) for item in soup.select('.post-tag')]

        for item in soup.select('.stats-row'):
            frequencies = item.select('a')
            frequency1.append(frequencies[0].text)
            if len(frequencies) == 2:
                frequency2.append(frequencies[1].text)
            else:
                frequency2.append('Not loaded') 
        for i, tag in enumerate(tags):
            r = s.get(url.format(tag[1]))
            soup = bs(r.content, 'lxml')
            description = soup.select_one('div:not([class])').text
            stats = [item['title'] for item in soup.select('[title]')]
            total_watchers = stats[0]
            total_questions = stats[1]
            row = [tag[0], description, total_watchers, total_questions, frequency1[i], frequency2[i]]
            results.append(row)
df = pd.DataFrame(results, columns = ['Tag', 'Description', 'Total Watchers', 'Total Questions', 'Frequency1', 'Frequency2'])


Combining the original approach with Selenium, to make sure the dynamic content has loaded:

import requests
from bs4 import BeautifulSoup as bs
import urllib.parse
import pandas as pd
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC


url = 'https://stackoverflow.com/tags/{}/popup'
page_url = 'https://stackoverflow.com/tags?page={}'
results = []
d = webdriver.Chrome()

with requests.Session() as s:
    r = s.get('https://stackoverflow.com/tags')
    soup = bs(r.content, 'lxml')
    num_pages = int(soup.select('.page-numbers')[-2].text)

    for page in range(1, 3): # for page in range(1, num_pages + 1):
        if page > 1:
            d.get(page_url.format(page))
            WebDriverWait(d,10).until(EC.presence_of_all_elements_located((By.CSS_SELECTOR, '.stats-row a')))
            soup = bs(d.page_source, 'lxml')

        tags = [(item.text, urllib.parse.quote(item.text)) for item in soup.select('.post-tag')]
        how_many  = [item.text for item in soup.select('.stats-row a')]
        frequency1 = how_many[0::2]
        frequency2 = how_many[1::2]
        for i, tag in enumerate(tags):
            r = s.get(url.format(tag[1]))
            soup = bs(r.content, 'lxml')
            description = soup.select_one('div:not([class])').text
            stats = [item['title'] for item in soup.select('[title]')]
            total_watchers = stats[0]
            total_questions = stats[1]
            row = [tag[0], description, total_watchers, total_questions, frequency1[i], frequency2[i]]
            results.append(row)
df = pd.DataFrame(results, columns = ['Tag', 'Description', 'Total Watchers', 'Total Questions', 'Frequency1', 'Frequency2'])
d.quit()
print(df.head())

So you ignore the integer question counts and only keep the raw ones? Couldn't you produce a list of dictionaries, for example?

@QHarr – Yes, the rounded numbers really aren't relevant. I couldn't produce a list of dictionaries (or anything similar) myself, so that's a question for people who know them better… First, well done! This is fantastic, thank you! Second, a few points so the answer can be accepted: the time unit of the questions doesn't really matter (although I have no idea why it switches from weekly to monthly partway down). More importantly, the multi-page version has a problem: run as-is, its output is identical to the single-page version; and if you change for page in range(1, 2) to for page in range(1, 3), you get IndexError: list index out of range from row = [tag[0], description, total_watchers, total_questions, frequency1[i], frequency2[i]]. On page 2, typescript is the first problem, because frequency1 has a different length than frequency2 – it is one item short! As always – great work; I don't know how you do it (or when, if ever, you sleep)!

Of course, the logic above can be carried over to selenium, where you can make sure the content has loaded. My wait condition is based on the presence of all the elements. You may want to test that, since presence doesn't necessarily mean the content is there. I think it should be fine, because selenium's implicit wait should cover the necessary load time.