Trying to export parsed data to a CSV file with Python; I can't figure out how to export multiple rows


I'm fairly new to Beautiful Soup / Python / web scraping. I've been able to scrape data from a site, but I can only export the first row to a CSV file (I want to export all of the scraped data to the file).

I'm stumped on how to get this code to export all of the scraped data into multiple individual rows:

import pandas as pd
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

r = requests.get("https://www.infoplease.com/primary-sources/government/presidential-speeches/state-union-addresses")
data = r.content  # content of the response
soup = BeautifulSoup(data, "html.parser")

for span in soup.find_all("span", {"class": "article"}):
    for link in span.select("a"):
        name_and_date = link.text.split('(')
        name = name_and_date[0].strip()
        date = name_and_date[1].replace(')', '').strip()

        base_url = "https://www.infoplease.com"
        links = link['href']
        links = urljoin(base_url, links)

    pres_data = {'Name': [name],
                 'Date': [date],
                 'Link': [links]
                 }

    df = pd.DataFrame(pres_data, columns=['Name', 'Date', 'Link'])
    df.to_csv(r'C:\Users\ThinkPad\Documents\data_file.csv', index=False, header=True)
    print(df)
Any ideas? I believe I need to loop it within the data parsing, grabbing each set and pushing it in. Am I going about this the right way?


With the way it's currently set up, it looks like you aren't adding each link as a new entry, and instead are only adding the last one. If you initialize a list and append a dictionary like the one you've set up for each iteration of the "links" for loop, you'll add every row and not just the last one:

import pandas as pd
import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin

r = requests.get("https://www.infoplease.com/primary-sources/government/presidential-speeches/state-union-addresses")
data = r.content  # content of the response
soup = BeautifulSoup(data, "html.parser")

pres_data = []  # accumulate one dict per scraped link here
for span in soup.find_all("span", {"class": "article"}):
    for link in span.select("a"):
        name_and_date = link.text.split('(')
        name = name_and_date[0].strip()
        date = name_and_date[1].replace(')', '').strip()

        base_url = "https://www.infoplease.com"
        links = link['href']
        links = urljoin(base_url, links)

        this_data = {'Name': name,
                     'Date': date,
                     'Link': links
                     }
        pres_data.append(this_data)  # append inside the loop so every row is kept

df = pd.DataFrame(pres_data, columns=['Name', 'Date', 'Link'])
df.to_csv(r'C:\Users\ThinkPad\Documents\data_file.csv', index=False, header=True)
print(df)
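The key changes: the rows accumulate in the pres_data list via append() inside the inner loop, the per-row values are plain strings rather than single-element lists, and the DataFrame is built and written once after the loop instead of being overwritten on every pass.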

You don't need Pandas here, since you aren't applying any kind of data manipulation. Generally, try to limit yourself to the built-in libraries when the task is short:

import requests
from bs4 import BeautifulSoup
import csv


def main(url):
    r = requests.get(url)
    soup = BeautifulSoup(r.text, 'lxml')
    # Build [href, name, date] per link: drop the trailing ')' and
    # split the "Name (Date)" text on " ("
    target = [([x.a['href']] + x.a.text[:-1].split(' ('))
              for x in soup.select('span.article')]
    with open('data.csv', 'w', newline='') as f:
        writer = csv.writer(f)
        writer.writerow(['Url', 'Name', 'Date'])
        writer.writerows(target)


main('https://www.infoplease.com/primary-sources/government/presidential-speeches/state-union-addresses')
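If you'd rather keep named columns while staying in the standard library, csv.DictWriter does the same job. A minimal sketch, with a placeholder row standing in for the scraped (url, name, date) triples:

import csv

# Placeholder row standing in for the scraped data (illustrative only)
rows = [
    {'Url': 'https://www.infoplease.com/example', 'Name': 'Example Address', 'Date': '2021'},
]

with open('data.csv', 'w', newline='') as f:
    writer = csv.DictWriter(f, fieldnames=['Url', 'Name', 'Date'])
    writer.writeheader()    # header row comes from fieldnames
    writer.writerows(rows)  # one CSV row per dict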
Sample output: (screenshot of the resulting data.csv in the original answer)


Does this answer your question?

That's it! Thank you so much, I was just reading up on the append operation. Really appreciate this.

Awesome, happy to help!
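For later readers, the fix boils down to accumulating values in a list instead of overwriting a single variable on each pass through the loop. A toy illustration:

# Overwriting: only the value from the final iteration survives
last = None
for n in [1, 2, 3]:
    last = n
print(last)  # 3

# Appending: every value is kept
kept = []
for n in [1, 2, 3]:
    kept.append(n)
print(kept)  # [1, 2, 3]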