Python: trying to extract web links with BeautifulSoup

Tags: python, pdf, beautifulsoup, python-requests

I am trying to extract all the PDF links on this page. My code is:
import requests
from bs4 import BeautifulSoup
from pprint import pprint

base_url = 'https://usda.library.cornell.edu'
url = 'https://usda.library.cornell.edu/concern/publications/3t945q76s?locale=en#release-items'

soup = BeautifulSoup(requests.get(url).pdf, 'html.parser')
b = []
page = 1
while True:
    pdf_urls = [a["href"] for a in soup.select('#release-items a[href$=".pdf"]')]
    pprint(pdf_urls)
    b.append(pdf_urls)
    m = soup.select_one('a[rel="next"][href]')
    if m and m['href'] != '#':
        soup = BeautifulSoup(requests.get(base_url + m['href']).pdf, 'html.parser')
    else:
        break
I get the following error:

AttributeError: 'Response' object has no attribute 'pdf'

Similar code works for text files. Where am I going wrong?
The method requests.get() always returns a Response object:

print(requests.get("https://stackoverflow.com/"))

will display:

<Response [200]>

You need to use requests.get(url).content to make the soup:

soup = BeautifulSoup(requests.get(url).content, 'html.parser')
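For the same reason, either .content (bytes) or .text (str) works as input to BeautifulSoup. A minimal offline sketch, using a made-up HTML snippet in place of the real response body:

```python
from bs4 import BeautifulSoup

# Made-up HTML standing in for the real response body.
# response.content is bytes, response.text is str; BeautifulSoup accepts both.
html_bytes = b'<div id="release-items"><a href="/files/report.pdf">PDF</a><a href="/about">About</a></div>'

soup = BeautifulSoup(html_bytes, 'html.parser')
pdf_links = [a["href"] for a in soup.select('#release-items a[href$=".pdf"]')]
print(pdf_links)  # ['/files/report.pdf']
```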
Inspect the HTML body and you will see that every file link has a "file_set" class. You can use a list comprehension to grab the "href" for that class directly:

pdf_urls = [x.a["href"] for x in soup.find_all(class_ = "file_set")]

Printing it will give you all the PDF links: print(pdf_urls)
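A self-contained sketch of that approach, using made-up markup that mimics the page; the class name "file_set" is the only thing taken from the real page:

```python
from bs4 import BeautifulSoup

# Made-up markup: each file link sits inside an element with class "file_set".
html = ('<span class="file_set"><a href="/files/a/latest.pdf">a</a></span>'
        '<span class="file_set"><a href="/files/b/latest.pdf">b</a></span>')

soup = BeautifulSoup(html, 'html.parser')
pdf_urls = [x.a["href"] for x in soup.find_all(class_="file_set")]
print(pdf_urls)  # ['/files/a/latest.pdf', '/files/b/latest.pdf']
```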
A small change to your code should do it:
import requests
from bs4 import BeautifulSoup
from pprint import pprint

base_url = 'https://usda.library.cornell.edu'
url = 'https://usda.library.cornell.edu/concern/publications/3t945q76s?locale=en#release-items'

soup = BeautifulSoup(requests.get(url).text, 'html.parser')
b = []
page = 1
while True:
    pdf_urls = [a["href"] for a in soup.select('#release-items a[href$=".pdf"]')]
    pprint(pdf_urls)
    b.append(pdf_urls)
    m = soup.select_one('a[rel="next"][href]')
    if m and m['href'] != '#':
        soup = BeautifulSoup(requests.get(base_url + m['href']).text, 'html.parser')
    else:
        break
In other words, this:

soup = BeautifulSoup(requests.get(base_url + m['href']).pdf, 'html.parser')

becomes this:

soup = BeautifulSoup(requests.get(base_url + m['href']).text, 'html.parser')
Output:
['https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/sb397x16q/b8516938c/latest.pdf',
'https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/g158c396h/8910kd95z/latest.pdf',
'https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/w6634p60m/2v23wd923/latest.pdf',
'https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/q237jb60d/8910kc45j/latest.pdf',
'https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/02871d57q/tx31r242v/latest.pdf',
'https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/pz50hc74s/pz50hc752/latest.pdf',
'https://downloads.usda.library.cornell.edu/usda-esmis/files/3t945q76s/79408c82d/jw827v53v/latest.pdf',...
…and so on.
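One further hedged improvement to the pagination step: base_url + m['href'] only works when the href starts with "/". The standard library's urllib.parse.urljoin handles both relative and absolute hrefs, so it is a safer way to build the next-page URL (the paths below are made up for illustration):

```python
from urllib.parse import urljoin

base_url = 'https://usda.library.cornell.edu'

# Relative href: joined onto the base.
next_url = urljoin(base_url, '/concern/publications/3t945q76s?page=2')
print(next_url)  # https://usda.library.cornell.edu/concern/publications/3t945q76s?page=2

# Absolute href: returned unchanged.
abs_url = urljoin(base_url, 'https://downloads.usda.library.cornell.edu/latest.pdf')
print(abs_url)  # https://downloads.usda.library.cornell.edu/latest.pdf
```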