How do I run a Python command that clicks every link on a page and extracts each link's title, content, and date?


Using this link: . I have a command that clicks every link on the page and pulls out all the data, but I want to turn the result into a CSV file, so I need to run three different commands to get the title, paragraph text, and date of each article on the page, so that they can become columns in an Excel sheet. I am stuck because this page has no "class" or "id" attributes. Any suggestions would be helpful.

Here is my code so far:

    import requests
    from bs4 import BeautifulSoup

    url = 'https://1997-2001.state.gov/briefings/statements/2000/2000_index.html'
    soup = BeautifulSoup(requests.get(url).content, 'html.parser')

    # [400:] keeps only the index entries from position 400 onward
    for a in soup.select('td[width="580"] img + a')[400:]:
        u = 'https://1997-2001.state.gov/briefings/statements/2000/' + a['href']
        print(u)
        s = BeautifulSoup(requests.get(u).content, 'html.parser')
        t = s.select_one('td[width="580"], td[width="600"], table[width="580"]:has(td[colspan="2"])').get_text(strip=True, separator='\n')
        print(t.split('[end of document]')[0])
        print('-' * 80)

You can use this script to save the data to a CSV:

import requests
import pandas as pd
from bs4 import BeautifulSoup


url = 'https://1997-2001.state.gov/briefings/statements/2000/2000_index.html'
soup = BeautifulSoup(requests.get(url).content, 'html.parser')

all_data = []
for a in soup.select('td[width="580"] img + a'):
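    # the <a> text holds the date; the title is the bare text node right after the link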
    date = a.text.strip(':')
    title = a.find_next_sibling(text=True).strip(': ')   
    u = 'https://1997-2001.state.gov/briefings/statements/2000/' + a['href'] 
    print(u)
    s = BeautifulSoup(requests.get(u).content, 'html.parser')
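    # the article body sits in one of a few table layouts; take whichever selector matches,
    # and keep only the text before the '[end of document]' marker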
    t = s.select_one('td[width="580"], td[width="600"], table[width="580"]:has(td[colspan="2"])').get_text(strip=True, separator='\n')
    content = t.split('[end of document]')[0]
    print(date, title, content)
    all_data.append({
        'url': u,
        'date': date,
        'title': title,
        'content': content
    })
    print('-' * 80)

df = pd.DataFrame(all_data)
df.to_csv('data.csv', index=False)
print(df)
Prints:

...

                                                   url  ...                                            content
0    https://1997-2001.state.gov/briefings/statemen...  ...  Statement by Philip T. Reeker, Deputy Spokesma...
1    https://1997-2001.state.gov/briefings/statemen...  ...  Media Note\nDecember 26, 2000\nRenewal of the ...
2    https://1997-2001.state.gov/briefings/statemen...  ...  Statement by Philip T. Reeker, Deputy Spokesma...
3    https://1997-2001.state.gov/briefings/statemen...  ...  Notice to the Press\nDecember 21, 2000\nMeetin...
4    https://1997-2001.state.gov/briefings/statemen...  ...  Statement by Philip T. Reeker, Deputy Spokesma...
..                                                 ...  ...                                                ...
761  https://1997-2001.state.gov/briefings/statemen...  ...  Press Statement by James P. Rubin, Deputy Spok...
762  https://1997-2001.state.gov/briefings/statemen...  ...  Press Statement by James P. Rubin, Spokesman\n...
763  https://1997-2001.state.gov/briefings/statemen...  ...  Notice to the Press\nJanuary 6, 2000\nAssistan...
764  https://1997-2001.state.gov/briefings/statemen...  ...  Press Statement by James P. Rubin, Spokesman\n...
765  https://1997-2001.state.gov/briefings/statemen...  ...  Press Statement by James P. Rubin, Spokesman\n...

[766 rows x 4 columns]
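
Note that index=False in to_csv() keeps pandas from writing the DataFrame's row index as an extra first column in the CSV.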
And a screenshot of the saved data.csv opened in LibreOffice:

EDIT: for 1998:

import requests
import pandas as pd
from bs4 import BeautifulSoup


url = 'https://1997-2001.state.gov/briefings/statements/1998/1998_index.html'
soup = BeautifulSoup(requests.get(url).content, 'html.parser')

all_data = []
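# the 1998 index nests some entries inside <blockquote>, hence the extra selector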
for a in soup.select('td[width="580"] img + a, blockquote img + a'):
    date = a.text.strip(':')
    title = a.find_next_sibling(text=True).strip(': ')   
    u = 'https://1997-2001.state.gov/briefings/statements/1998/' + a['href'] 
    print(u)
    s = BeautifulSoup(requests.get(u).content, 'html.parser')
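    # a few 1998 pages are empty or malformed; skip any response without a <body>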
    if not s.body:
        continue
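    # fall back to <blockquote> or the whole <body> where the usual tables are absent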
    t = s.select_one('td[width="580"], td[width="600"], table[width="580"]:has(td[colspan="2"]), blockquote, body').get_text(strip=True, separator='\n')
    content = t.split('[end of document]')[0]
    print(date, title, content)
    all_data.append({
        'url': u,
        'date': date,
        'title': title,
        'content': content
    })
    print('-' * 80)

df = pd.DataFrame(all_data)
df.to_csv('data.csv', index=False)
print(df)
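
Since the 1998 and 2000 scripts differ only in the year in the URL and the selector list, they can be folded into one function. A sketch, assuming every year's index follows the same URL pattern (not verified here for 1997, 1999, or 2001):

import requests
import pandas as pd
from bs4 import BeautifulSoup


def scrape_year(year):
    # assumes every year's index follows the same URL pattern as 1998 and 2000
    base = f'https://1997-2001.state.gov/briefings/statements/{year}/'
    soup = BeautifulSoup(requests.get(base + f'{year}_index.html').content, 'html.parser')
    rows = []
    for a in soup.select('td[width="580"] img + a, blockquote img + a'):
        s = BeautifulSoup(requests.get(base + a['href']).content, 'html.parser')
        if not s.body:
            continue
        t = s.select_one('td[width="580"], td[width="600"], '
                         'table[width="580"]:has(td[colspan="2"]), blockquote, body')
        rows.append({
            'url': base + a['href'],
            'date': a.text.strip(':'),
            'title': a.find_next_sibling(text=True).strip(': '),
            'content': t.get_text(strip=True, separator='\n').split('[end of document]')[0],
        })
    return pd.DataFrame(rows)


df = pd.concat([scrape_year(y) for y in (1998, 2000)], ignore_index=True)
df.to_csv('data.csv', index=False)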

Thank you so much, this worked! Since you seem to be an expert on this, do you know what is different about the structure of this link: ? The same script doesn't seem to run on it: it neither prints the content nor converts to CSV, and there are no errors. I also tried using the script to scrape this link: , but it won't run and there are no errors. Do you know what is different about the structure, or what needs to change in the script?
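
One likely cause worth checking first: if the CSS selectors match nothing on the other index page, soup.select() returns an empty list, the for loop never runs, and the script exits silently with no output and no error. A minimal check (the URL is a placeholder; substitute the page that fails):

import requests
from bs4 import BeautifulSoup

# substitute the index page that produces no output
url = 'https://1997-2001.state.gov/briefings/statements/1998/1998_index.html'
soup = BeautifulSoup(requests.get(url).content, 'html.parser')

# 0 here means the selector matches nothing, so the scraping loop never executes
matches = soup.select('td[width="580"] img + a, blockquote img + a')
print(len(matches))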