Python: how to parse a table without classes and keep the grouping

Tags: python, web-scraping, beautifulsoup

I am trying to parse the following URL for the IUPAC names, MIC values, and organism strains. Up to a point I am able to do that, although I cannot find a way to keep the results grouped. This is what I have so far:

import bs4
from bs4 import BeautifulSoup as soup 
from urllib.request import urlopen as uReq
myurl = 'http://www.trimslabs.com/mic/300.htm'
uClient = uReq(myurl)
page_html = uClient.read()
uClient.close()
page_soup = soup(page_html, "html.parser")
#grab IUPACs
tables = page_soup.findAll("table")
table = tables[0]
IUPACS = []
for i in range (1, 454, 3):
    IUPACs = tables[i].find(text = "IUPAC").findNext('td').get_text(",", strip = True)
    print(IUPACs)
for i in range (455, 661, 3):
    IUPACs_two = tables[i].find(text = "IUPAC").findNext('td').get_text(",", strip = True)
    print(IUPACs_two)
#grab organism names
organism_list = page_soup.findAll("i")
org = organism_list[1]
for org in organism_list:
    organism = org.text
    print(organism)
#get the MIC numbers
for org in organism_list:
    numbers = org.findNext('td').get_text(",", strip = True)
    print(numbers)
This prints out most of what I want, but I completely lose track of which antibiotic's (IUPAC) numbers go with which results. Realizing that each antibiotic has 3 tables, I also tried the following:

chem_tables = []
name_tables = []
org_tables = []
results_tables = []
for i in range (0, 451, 3):
    # 1.  Establish three tables per document
    chem_tables.append(tables[i])
    name_tables.append(tables[i + 1].find(text = "IUPAC").findNext('td').get_text(",", strip = True))
    org_tables.append(tables[i + 2].findAll("i"))
    results_tables.append(tables[i + 2].findAll("i").findNext('td'))
This is nice, because now chem_tables[0], org_tables[0], and name_tables[0] all refer to a single drug, but I cannot for the life of me figure out how to extract the individual organism names from org_tables without losing the information about which drug they are associated with.
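
To be explicit about what "keeping the grouping" means here, this is a hand-written mock-up of the structure I am hoping to end up with (the field names are placeholders, and the sample values are copied from the page by hand, not produced by the code above):

# Hand-written mock-up of the desired result: one record per drug, with the
# organism strains and MIC values still attached to that drug's IUPAC name.
desired_output = [
    {
        "IUPAC": "<first antibiotic's IUPAC name>",
        "Organism": ["B. pumilus ATCC 14348", "S. epidermidis ATCC 155"],
        "MIC": ["2-4", "1-2"],
    },
    # ...one dict like this per antibiotic on the page
]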


I have been wrestling with this problem for two days now. Any help would be greatly appreciated.

I would do it like this:

1) Find the IUPAC cell

2) Get its value

3) From the IUPAC cell, find the nearest table

4) Find all of that table's rows and skip the first two and the last one (useless data)

5) For each row, find all the font tags in the second cell to get the Organism values

6) Get each value from the third cell to get the MIC

7) Take each value from 5) and store it in a list

8) Split by comma and store it in a list

9) Put everything into a dictionary

Example code:

from bs4 import BeautifulSoup
import requests

# fetch the page and parse it
response = requests.get('http://www.trimslabs.com/mic/300.htm')

soup = BeautifulSoup(response.content, "html.parser")

MicDatabase = []

# 1) find every "IUPAC" label, 2) its value sits in the next td
for IUPAC in soup.find_all(text="IUPAC"):
    Value = IUPAC.find_next('td').get_text(",", strip=True)

    # 3) nearest table after the IUPAC cell, 4) skip the first two rows and the last one
    for tr in IUPAC.find_next('table').find_all("tr")[2:-1]:
        # skip the first cell: td[0] holds the organisms, td[1] the MIC values
        td = tr.find_all("td")[1:]

        # 5) organism names sit in font tags, 6) MICs come as comma-joined text
        Organism = td[0].find_all("font")
        MIC = td[1].get_text(",", strip=True)

    # 7)-9) store the organism names and the comma-split MICs under their IUPAC value
    MicDatabase.append(
        {
            "IUPAC": Value,
            "ActivityData": {"Organism": [o.get_text(" ", strip=True) for o in Organism], "MIC": MIC.split(',')}
        })
Which outputs:

[{'ActivityData': {'MIC': [u'2-4', u'1-2', u'1-2', u'1-2', u'2-4', u'2-4', u'2-4', u'1-2', u'>16', u'2-4', u'1-2', u'0.25 - 0.5', u'0.25 - 0.5'], 'Organism': [u'B. pumilus ATCC 14348', u'S. epidermidis ATCC 155', u'E. faecalis ATCC 35550', u'S. aureus ATCC 25923', u'S. aureus ATCC 9144', u'S. aureus ATCC 14154', u'S. aureus ATCC 29213', u'S. aureus ATCC 700699', u'(methicillin-resistant)', u'S. aureus NRS 119', u'(linezolid-resistant)', u'E.faecalis ATCC 14506', u'E.faecalis ATCC 700802', u'(vancomycin-resistant)', u'S.pyogenes ATCC 14289', u'S.pneumoniae ATCC 700904', u'(penicillin-resistant)']}, 'IUPAC': u'2-[(S)-3-(3-Fluoro-4-morpholin-4-yl-phenyl)-2-oxo-oxazolidin-5-yl]-acetamide'}...
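
As an optional follow-up, a small sketch of how the grouped records could be saved for later use, assuming MicDatabase has been populated by the code above; the file name mic_data.json is just an example:

import json

# Sketch only: write the grouped records to disk so they can be inspected or
# reloaded later without re-scraping the page.
with open("mic_data.json", "w") as fh:
    json.dump(MicDatabase, fh, indent=2)

# Quick sanity check: how many drugs were captured and the first IUPAC name.
print(len(MicDatabase), "records")
print(MicDatabase[0]["IUPAC"])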

Thanks for the guidance. I'm still pretty new to this, so the example code and the explicit steps are very helpful!