How to get a JS-redirected PDF link from a web page

I'm using requests to fetch web pages, for example like this:
import requests
from bs4 import BeautifulSoup
url = "http://www.ofsted.gov.uk/inspection-reports/find-inspection-report/provider/CARE/EY298883"
r = requests.get(url)
soup = BeautifulSoup(r.text, "html.parser")
For each of these pages, I want to get the first PDF linked in the section headed "Latest reports". How can I do that with Beautiful Soup?
The relevant part of the HTML is:
<tbody>
<tr>
<th scope="col">Latest reports</th>
<th scope="col" class="date">Inspection <br/>date</th>
<th scope="col" class="date">First<br/>publication<br/>date</th>
</tr>
<tr>
<td><a href="/provider/files/1266031/urn/106428.pdf"><span class="icon pdf">pdf</span> Early years inspection report </a></td>
<td class="date">12 Mar 2009</td>
<td class="date">4 Apr 2009</td>
</tr>
</tbody>
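Given just this fragment, the first PDF link can be picked out by filtering on the href attribute. A minimal sketch against the inline HTML above (the parser choice is an assumption):

```python
from bs4 import BeautifulSoup

html = """
<tbody>
<tr><th scope="col">Latest reports</th></tr>
<tr><td><a href="/provider/files/1266031/urn/106428.pdf">
<span class="icon pdf">pdf</span> Early years inspection report </a></td></tr>
</tbody>
"""

# Find the first anchor whose href ends in .pdf
soup = BeautifulSoup(html, "html.parser")
link = soup.find("a", href=lambda h: h and h.endswith(".pdf"))
print(link["href"])  # /provider/files/1266031/urn/106428.pdf
```

Beautiful Soup accepts a callable as an attribute filter, which avoids matching anchors that have no href at all.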
The problem is that p contains another web page rather than the PDF it should. Is there a way to get the actual PDF?
Update:
Got it working with another BeautifulSoup pass:
# p is the intermediate page fetched earlier; ofstedbase = "http://www.ofsted.gov.uk"
souppage = BeautifulSoup(p.text, "html.parser")
line = souppage.findAll('a', text=re.compile("requested"))[0]
pdf = requests.get(ofstedbase + line['href'])
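The text=re.compile(...) filter matches anchors by their link text. A self-contained illustration against a made-up intermediate page (the HTML and wording here are assumptions, not the real Ofsted markup):

```python
import re
from bs4 import BeautifulSoup

# Hypothetical intermediate page: the real one links to the PDF
# with text along the lines of "click here if the requested report..."
html = '<a href="/provider/files/1266031/urn/106428.pdf">requested report</a>'

souppage = BeautifulSoup(html, "html.parser")
line = souppage.findAll('a', text=re.compile("requested"))[0]
print(line['href'])  # /provider/files/1266031/urn/106428.pdf
```

The regex is applied with a search, so "requested" can appear anywhere in the link text.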
Thanks in advance for any better solutions.

It's not the cleanest solution, but you can iterate through the column headers until you find "Latest reports", then search that table for the first link pointing to a PDF file:
for col_header in soup.findAll('th'):
    if not col_header.contents[0] == "Latest reports": continue
    for link in col_header.parent.parent.findAll('a'):
        if 'href' in link.attrs and link['href'].endswith('pdf'): break
    else:
        print('"Latest reports" PDF not found')
        break
    print('"Latest reports" PDF points at', link['href'])
    break
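The else on the inner for above is Python's loop-else clause: it runs only when the loop finishes without hitting break. A minimal standalone illustration (the helper name is made up for the example):

```python
def first_pdf(hrefs):
    """Return the first href ending in .pdf, or None if there is none."""
    for h in hrefs:
        if h.endswith('.pdf'):
            break
    else:
        # Only reached when the loop ran to completion without break
        return None
    return h

print(first_pdf(['/a.html', '/b.pdf', '/c.pdf']))  # /b.pdf
print(first_pdf(['/a.html']))  # None
```

The same pattern lets the table-scanning code distinguish "found a PDF link" from "scanned every link and found nothing".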
You could try Selenium WebDriver (`python -m easy_install selenium`) to automatically instruct Firefox to download the file. This requires Firefox:
from selenium import webdriver
from bs4 import BeautifulSoup
profile = webdriver.FirefoxProfile()
profile.set_preference('browser.helperApps.neverAsk.saveToDisk', ('application/pdf'))
profile.set_preference("pdfjs.previousHandler.alwaysAskBeforeHandling", False)
profile.set_preference("browser.helperApps.alwaysAsk.force", False)
profile.set_preference("browser.download.manager.showWhenStarting", False)
driver = webdriver.Firefox(firefox_profile = profile)
base_url = "http://www.ofsted.gov.uk"
driver.get(base_url + "/inspection-reports/find-inspection-report/provider/CARE/EY298883")
soup = BeautifulSoup(driver.page_source, "html.parser")

for col_header in soup.findAll('th'):
    if not col_header.contents[0] == "Latest reports": continue
    for link in col_header.parent.parent.findAll('a'):
        if 'href' in link.attrs and link['href'].endswith('pdf'): break
    else:
        print('"Latest reports" PDF not found')
        break
    print('"Latest reports" PDF points at', link['href'])
    driver.get(base_url + link['href'])
    break
This solution is very powerful, since it can do anything a human user can, but it has drawbacks. For example, I tried to suppress Firefox's download prompt, but that didn't work for me. Results may vary depending on your installed add-ons and Firefox version.
Thanks very much. If I actually want to fetch the PDF, I suppose I need the full URL, but this only gives me /provider/files/1295389/urn/EY298883.PDF. Do I just prepend http://www.ofsted.gov.uk?

Ah... sadly that doesn't give me the PDF. Something more involved must be going on, since the link doesn't point directly at the PDF but at another page that then serves the PDF. I'll still look into whether there's a good way to download it. Thanks. If it turns out to be hard, I'll accept your answer and ask about it separately.

The line driver = webdriver.Firefox(firefox_profile=profile) opens a copy of Firefox for me, and then the script stops.
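On the question of prefixing the base URL: joining a relative href onto the site root is exactly what urllib.parse.urljoin does, and it is safer than plain string concatenation (shown here with the href from the comment above):

```python
from urllib.parse import urljoin

base = "http://www.ofsted.gov.uk"
href = "/provider/files/1295389/urn/EY298883.PDF"

# urljoin handles leading slashes, trailing slashes, and absolute hrefs
print(urljoin(base, href))
# http://www.ofsted.gov.uk/provider/files/1295389/urn/EY298883.PDF
```

Unlike ofstedbase + line['href'], urljoin also does the right thing if the page ever emits an absolute URL.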