Python: How do I scrape ID tags and their content (text) from a website?

There are 17 ID tags at the top of the page:

1. Boxed warning
2. Indications
3. Dosage/Administration
4. Dosage forms
5. Contraindications
6. Warnings/Precautions
7. Adverse reactions
8. Drug interactions
9. Specific populations
10. Overdosage
11. Description
12. Clinical pharmacology
13. Nonclinical toxicology
14. Clinical studies
15. How supplied
16. Patient counseling
17. Medication guide
I want to scrape this page and build a dictionary from these tags. How can I do that? Here is what I have tried so far:

import requests
from bs4 import BeautifulSoup, NavigableString, Tag

urls = "https://www.drugs.com/pro/abacavir-lamivudine-and-zidovudine-tablets.html"
response = requests.get(urls)
soup = BeautifulSoup(response.text, 'html.parser')
data3 = soup.find_all('h2')
out = {}
y1 = []
y2 = []
for header in data3:
    x0 = header.get('id')
    y1.append(x0)
    nextNode = header
    x1 = ""  # holds the most recent loose text node
    while True:
        nextNode = nextNode.nextSibling
        if nextNode is None:
            break
        if isinstance(nextNode, NavigableString):
            x1 = nextNode.strip()
        if isinstance(nextNode, Tag):
            if nextNode.name == "h2":
                break
            x2 = nextNode.get_text(strip=True)
            x3 = x1 + " " + x2
            y2.append(x3)
print(y1, y2)

Output I'm Getting: [None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None, None] [content]

Desired Output: ['boxed warning', 'indications', 'dosage/administration', 'dosage forms', 'contraindications', 'warnings/precautions', 'adverse reactions', 'drug interactions', 'specific populations', 'overdosage', 'description', 'clinical pharmacology', 'nonclinical toxicology', 'clinical studies', 'how supplied', 'patient counseling', 'medication guide'] ['content present under boxed warning', 'content present under indications']

How can I get a dictionary or a list in which all of those Nones are replaced by the list of tags? I'm struggling with the structure of the web page. Thanks, everyone!

I'm not 100% sure what you need, but based on the comments I think this is what you want. You can easily add the output to a list or a dictionary:

import requests
from bs4 import BeautifulSoup
urls = "https://www.drugs.com/pro/abacavir-lamivudine-and-zidovudine-tablets.html"
response = requests.get(urls)
soup = BeautifulSoup(response.text, 'html.parser')
tags = soup.find('div', {'class': 'ddc-anchor-links'})

available_information = []

for tag in tags.find_all('a'):
    available_information.append(tag.text)

print(available_information)
# output
['Boxed Warning', 'Indications and Usage', 'Dosage and Administration', 'Dosage Forms and Strengths', 'Contraindications', 'Warnings and Precautions', 'Adverse Reactions/Side Effects', 'Drug Interactions', 'Use In Specific Populations', 'Overdosage', 'Description', 'Clinical Pharmacology', 'Nonclinical Toxicology', 'Clinical Studies', 'How Supplied/Storage and Handling', 'Patient Counseling Information', 'Medication Guide']


You can use the following code to get the content for each TOC entry:

anchor_tags = []
soup = BeautifulSoup(response.text, 'html.parser')
tags = soup.find('div', {'class': 'ddc-toc-content'})
for tag in tags.find_all('a'):
    anchor_tag = str(tag['href']).replace('#', '')
    anchor_tags.append(anchor_tag)

for tag in anchor_tags:
    anchor_tag = soup.find("a", {"id": tag})
    header_tag = anchor_tag.find_next_sibling('h2')
    # now you need to figure out how you want to store this information that is being extracted. 
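To get from here to the dictionary asked for in the question, one option is to walk the siblings that follow each header until the next section begins. The following is a minimal sketch, not the only way to store it; stopping at the next h2 and the dictionary name "sections" are my assumptions:

from bs4 import Tag

# Sketch: pair each section header with the text that follows it,
# stopping when the next <h2> begins a new section.
sections = {}
for tag in anchor_tags:
    anchor = soup.find("a", {"id": tag})
    if anchor is None:
        continue  # anchor ids can change if the site updates its markup
    header = anchor.find_next_sibling('h2')
    if header is None:
        continue
    parts = []
    for sibling in header.next_siblings:
        if isinstance(sibling, Tag) and sibling.name == 'h2':
            break
        if isinstance(sibling, Tag):
            parts.append(sibling.get_text(' ', strip=True))
    sections[header.get_text(strip=True)] = ' '.join(parts)

print(list(sections))  # the section header names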
Based on our chat conversation, this is how you can query multiple pages that have different structures. As you scrape more pages with different structures, you will have to adjust the search terms and known tags:

import requests
from bs4 import BeautifulSoup

def get_soup(target_url):
    response = requests.get(target_url)
    soup = BeautifulSoup(response.text, 'html.parser')
    return soup

def obtain_toc_content(soup):
    available_information = []
    anchor_tags = []
    known_tags = ['div', 'ul']
    search_terms = ['ddc-toc-content', 'ddc-anchor-links']
    for tag, search_string in zip(known_tags, search_terms):
        tag_found = bool(soup.find(tag, {'class': search_string}))
        if tag_found:
            toc = soup.find(tag, {'class': search_string})
            for toc_tag in toc.find_all('a'):
                available_information.append(toc_tag.text)
                anchor_tag = str(toc_tag['href'])
                anchor_tags.append(anchor_tag)

    return available_information, anchor_tags


urls = ['https://www.drugs.com/pro/abacavir-lamivudine-and-zidovudine-tablets.html',
        'https://www.drugs.com/ajovy.html','https://www.drugs.com/cons/a-b-otic.html']
for url in urls:
    make_soup = get_soup(url)
    results = obtain_toc_content(make_soup)
    table_of_content = results[0]
    toc_tags = results[1]
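
If you want to keep what each page returns instead of discarding it on each iteration, you could, for example, collect the results into a dictionary keyed by URL (a sketch; the name "toc_by_url" is just for illustration):

# Sketch: store each page's TOC entries and anchor hrefs, keyed by URL
toc_by_url = {}
for url in urls:
    make_soup = get_soup(url)
    table_of_content, toc_tags = obtain_toc_content(make_soup)
    toc_by_url[url] = dict(zip(table_of_content, toc_tags))

for url, toc in toc_by_url.items():
    print(url, '->', list(toc))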
       

The code below should at least get you close to where I think you want to go. I'll explain as much as I can along the way, but there's still a lot left for you to learn:

import requests
import lxml.html as lh

url = 'https://www.drugs.com/pro/abacavir-lamivudine-and-zidovudine-tablets.html#s-42231-1'
req = requests.get(url)

doc = lh.fromstring(req.text)

headers = doc.xpath('//ul[@class="ddc-anchor-links"]//li')
head_names = [] #when the code is done running, this list will contain the headers
anchors = [] #this list will contain the reference to the text elements for each header
for header in headers:
    head_names.append(header.xpath('a/text()')[0])
    anchors.append(header.xpath('a/@href')[0].replace('#',''))
for anchor in anchors:
    #now to iterate through each reference to get to the actual text:
    target = doc.xpath(f'//a[@id="{anchor}"]')[0] #this uses f-strings; you may need to read up on that
    ind = anchors.index(anchor)+1 #because of the structure of the page, this next block will help us determine when the text for one header ends, and the next one begins; you'll have to play with it to see how it actually works:
    for z in target.xpath('./following-sibling::*'):
        try:
            if z.xpath('name()') == "a" and z.xpath('./@id')[0] == anchors[ind]:
                break
        except IndexError:
            continue  # necessary because the last header doesn't have a "next" header
        else:
            print(z.xpath('.//text()'))

The text output won't be pretty, but it will contain the information you need. You'll have to play around with stripping, formatting, and so on to make it look the way you want.
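
If you would rather end up with the dictionary shape from the question instead of printed fragments, the same loop can accumulate the text per header. A minimal sketch building on the variables above (joining the text fragments with spaces is an assumption about the formatting you want):

# Sketch: collect {header name: section text} instead of printing
content_by_header = {}
for i, anchor in enumerate(anchors):
    target = doc.xpath(f'//a[@id="{anchor}"]')[0]
    chunks = []
    for z in target.xpath('./following-sibling::*'):
        try:
            # stop once we reach the anchor that marks the next section
            if z.xpath('name()') == "a" and z.xpath('./@id')[0] == anchors[i + 1]:
                break
        except IndexError:
            continue  # the last header has no "next" anchor
        else:
            chunks.append(' '.join(t.strip() for t in z.xpath('.//text()') if t.strip()))
    content_by_header[head_names[i]] = ' '.join(chunks)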

What's the code you've already tried? @BLimitless I've edited the question and added the code I tried; looking forward to hearing from you. Great, thanks. Unfortunately, looking at your code I don't know how to help you, but I now see how to edit the question, so you should get some answers. Hopefully my edits to the question go through soon, and then someone with deeper web-scraping knowledge should be able to help you. Good luck!

What is your expected output? For example, if the key is "boxed warning", what is the value? Or is it not a dictionary at all, just a list of all the tags on the page?

@Flameling Hey, you got some answers! Glad it worked out. If the answers don't meet your requirements, keep editing the question to make it clearer. This is a complicated first question for StackOverflow, so keep at it; it will be easier/faster to get help here. Good luck, and don't forget to upvote/accept an answer that works for you, so the community knows when to move on (and others with a similar problem know they can look here for an answer).

I made a change, headers = doc.xpath('//div[@class="ddc-toc-content"]'), but it raises "list index out of range" in head_names.append(header.xpath('a/text()')[0]). I edited the class name because the one you suggested didn't work. @Flameling Not sure what the problem is; it works for me. I copy-pasted exactly the code you gave, but no output is shown, and I don't understand why. Can you explain what the first for loop does? @Flameling I see what happened: the site actually changed its HTML structure since yesterday; the headers now live at //div[@class="ddc-toc-content"]//ul//li/a/text(), and so on. Maybe they've been through too much scraping activity... The code works now, but a lot of work is needed to tidy the data and strip out the unnecessary content.

Desired output: ['boxed warning', 'indications', 'dosage/administration', 'dosage forms', 'contraindications', 'warnings/precautions', 'adverse reactions', 'drug interactions', 'specific populations', 'overdosage', 'description', 'clinical pharmacology', 'nonclinical toxicology', 'clinical studies', 'how supplied', 'patient counseling', 'medication guide'] ['content present under boxed warning', 'content present under indications']. I added these items to a list. I don't see "content present under boxed warning" or "content present under indications" on the page? In the URL, if you click "Description", you are taken to the heading "Abacavir, Lamivudine and Zidovudine Tablets Description" within the page. I want the content under that heading all the way to the next heading, "Abacavir, Lamivudine and Zidovudine Tablets - Clinical Pharmacology". So do you only want the text under "Description", or do you want all the text for all the TOC elements? I need the text for all the TOC elements.