Converting a dictionary to a DataFrame in Python


First, my loop is not working the way I want. It adds the links to the dictionary one step at a time for the given site; I want it populated all at once. My output looks like this:

{'Banks – Assets': {'link': 'https://data.gov.au/dataset/banks-assets'}, 'Consolidated Exposures – Immediate and Ultimate Risk Basis': {}, 'Foreign Exchange Transactions and Holdings of Official Reserve Assets': {}, 'Finance Companies and General Financiers – Selected Assets and Liabilities': {}, 'Liabilities and Assets – Monthly': {}, 'Consolidated Exposures – Immediate Risk Basis – International Claims by Country': {}, 'Consolidated Exposures – Ultimate Risk Basis': {}, 'Banks – Consolidated Group off-balance Sheet Business': {}, 'Liabilities of Australian-located Operations': {}, 'Building Societies – Selected Assets and Liabilities': {}, 'Consolidated Exposures – Immediate Risk Basis – Foreign Claims by Country': {}, 'Banks – Consolidated Group Impaired Assets': {}, 'Assets and Liabilities of Australian-Located Operations': {}, 'Managed Funds': {}, 'Daily Net Foreign Exchange Transactions': {}, 'Consolidated Exposures-Immediate Risk Basis': {}, 'Public Unit Trust': {}, 'Securitisation Vehicles': {}, 'Assets of Australian-located Operations': {}, 'Banks – Consolidated Group Capital': {}}
{'Banks – Assets': {'link': 'https://data.gov.au/dataset/banks-assets'}, 'Consolidated Exposures – Immediate and Ultimate Risk Basis': {'link': 'https://data.gov.au/dataset/consolidated-exposures-immediate-and-ultimate-risk-basis'}, 'Foreign Exchange Transactions and Holdings of Official Reserve Assets': {}, 'Finance Companies and General Financiers – Selected Assets and Liabilities': {}, 'Liabilities and Assets – Monthly': {}, 'Consolidated Exposures – Immediate Risk Basis – International Claims by Country': {}, 'Consolidated Exposures – Ultimate Risk Basis': {}, 'Banks – Consolidated Group off-balance Sheet Business': {}, 'Liabilities of Australian-located Operations': {}, 'Building Societies – Selected Assets and Liabilities': {}, 'Consolidated Exposures – Immediate Risk Basis – Foreign Claims by Country': {}, 'Banks – Consolidated Group Impaired Assets': {}, 'Assets and Liabilities of Australian-Located Operations': {}, 'Managed Funds': {}, 'Daily Net Foreign Exchange Transactions': {}, 'Consolidated Exposures-Immediate Risk Basis': {}, 'Public Unit Trust': {}, 'Securitisation Vehicles': {}, 'Assets of Australian-located Operations': {}, 'Banks – Consolidated Group Capital': {}}
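The step-by-step output above comes from printing inside the loop while the dictionary is still being filled. A rough sketch of filling every entry in a single pass, using two hard-coded (title, href) pairs as stand-ins for the parsed `<h3 class="dataset-heading">` elements:

```python
# Sketch: build the title -> link dictionary in one pass.
# The 'scraped' list below is sample data standing in for the
# (element.a.get_text(), element.a["href"]) pairs from the real page.
prefix = "https://data.gov.au"

scraped = [
    ("Banks – Assets", "/dataset/banks-assets"),
    ("Consolidated Exposures – Immediate and Ultimate Risk Basis",
     "/dataset/consolidated-exposures-immediate-and-ultimate-risk-basis"),
]

# One dict comprehension fills every entry at once, so a single print
# afterwards shows the complete dictionary.
lobbying = {title: {"link": prefix + href} for title, href in scraped}
print(lobbying)
```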
Second, I want to build a DataFrame from it, like this:

Titles                                                        Links
Banks – Assets                                                https://data.gov.au/dataset/banks-assets
Consolidated Exposures – Immediate and Ultimate Risk Basis    https://data.gov.au/dataset/consolidated-exposures-immediate-and-ultimate-risk-basis
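A nested dictionary of this shape converts to a two-column frame with `pd.DataFrame.from_dict`. A minimal sketch, using two sample entries (the second link is made up for illustration):

```python
import pandas as pd

# Nested dict of the shape shown above: title -> {'link': url}.
# The 'managed-funds' URL is a hypothetical example value.
lobbying = {
    "Banks – Assets": {"link": "https://data.gov.au/dataset/banks-assets"},
    "Managed Funds": {"link": "https://data.gov.au/dataset/managed-funds"},
}

# orient='index' makes the dict keys the row index; rename_axis names
# that index, and reset_index turns it into a regular 'Titles' column.
df = (pd.DataFrame.from_dict(lobbying, orient="index")
        .rename_axis("Titles")
        .reset_index()
        .rename(columns={"link": "Links"}))
print(df)
```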
and so on... My code is:

webpage4_urls = ["https://data.gov.au/dataset?q=&sort=extras_harvest_portal+asc%2C+score+desc%2C+metadata_modified+desc&_organization_limit=0&groups=sciences&organization=departmentofagriculturefisheriesandforestry&_groups_limit=0",
                 "https://data.gov.au/dataset?q=&organization=commonwealthscientificandindustrialresearchorganisation&sort=extras_harvest_portal+asc%2C+score+desc%2C+metadata_modified+desc&_organization_limit=0&groups=sciences&_groups_limit=0",
                 "https://data.gov.au/dataset?q=&organization=bureauofmeteorology&sort=extras_harvest_portal+asc%2C+score+desc%2C+metadata_modified+desc&_organization_limit=0&groups=sciences&_groups_limit=0",
                 "https://data.gov.au/dataset?q=&sort=extras_harvest_portal+asc%2C+score+desc%2C+metadata_modified+desc&_organization_limit=0&groups=sciences&organization=tasmanianmuseumandartgallery&_groups_limit=0",
                 "https://data.gov.au/dataset?q=&organization=department-of-industry&sort=extras_harvest_portal+asc%2C+score+desc%2C+metadata_modified+desc&_organization_limit=0&groups=sciences&_groups_limit=0"]
import urllib.request
from bs4 import BeautifulSoup

prefix = "https://data.gov.au"
for url in webpage4_urls:
    page = urllib.request.urlopen(url)
    soup = BeautifulSoup(page, "html.parser")  # name the parser explicitly

    lobbying = {}
    data2 = soup.find_all('h3', class_="dataset-heading")
    for element in data2:
        lobbying[element.a.get_text()] = {}
    for element in data2:
        lobbying[element.a.get_text()]["link"] = prefix + element.a["href"]
        print(lobbying)  # printing inside the loop shows the dict growing step by step
I think you need something like this:

EDIT:
Comments on this answer:

"I want the CSV to contain all the values under the column names Titles and Links. I am using `df.to_csv('D:/output.csv', encoding='utf-8')`; is there another way?"

"Give me a moment, I have to test it. Mine is still running too and takes a long time; with this much data that is to be expected. But with `df.to_csv('D:/output.csv', encoding='utf-8')` there is still the problem that the data gets overwritten, since it always writes to the same file. You would therefore have to change the filename for each DataFrame."

"Yes, that run was aborted :(. How should I now write the DataFrames above into one file?"
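Two ways to get several DataFrames into one file without overwriting: concatenate first and write once, or append to the same CSV and write the header only for the first chunk. A sketch with two tiny sample frames (filenames and data are illustrative):

```python
import pandas as pd

# Two sample frames standing in for the per-page results.
frames = [
    pd.DataFrame({"Titles": ["A"], "Links": ["https://example.org/a"]}),
    pd.DataFrame({"Titles": ["B"], "Links": ["https://example.org/b"]}),
]

# Option 1: concatenate first, write once.
pd.concat(frames, ignore_index=True).to_csv(
    "output.csv", index=False, encoding="utf-8")

# Option 2: append to one file; mode='a' after the first write,
# and header only on the first chunk so it is not repeated.
for i, frame in enumerate(frames):
    frame.to_csv("output_appended.csv",
                 mode="w" if i == 0 else "a",
                 header=(i == 0), index=False, encoding="utf-8")
```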
for element in data2:
    lobbying[element.a.get_text()]["link"] = prefix + element.a["href"]

# build the DataFrame once, after the dictionary is complete
df = pd.DataFrame.from_dict(lobbying, orient='index').rename_axis('Titles').reset_index()
print (df)
import urllib.request
from bs4 import BeautifulSoup
import pandas as pd

dfs = []
prefix = "https://data.gov.au"
for url in webpage4_urls:
    page = urllib.request.urlopen(url)
    soup = BeautifulSoup(page, "html.parser")

    lobbying = {}
    data2 = soup.find_all('h3', class_="dataset-heading")
    for element in data2:
        lobbying[element.a.get_text()] = {"link": prefix + element.a["href"]}

    # build one DataFrame per page, after its dictionary is complete;
    # appending inside the inner loop would add partial duplicates
    df = pd.DataFrame.from_dict(lobbying, orient='index').rename_axis('Titles').reset_index()
    dfs.append(df)

df = pd.concat(dfs, ignore_index=True)
print (df)
df.to_csv('output.csv', index=False, encoding='utf-8')