Python web scraping PDF links - not returning results


I've set up some code to scrape PDFs from a local council website. I request the page I want, then get the links to the various dates, and within each date there are links to the PDFs. But it isn't returning any results.

I've been playing around with the code and can't figure it out. It runs fine in a Jupyter notebook and doesn't return any errors.

Here is my code:

import requests
from bs4 import BeautifulSoup as bs

dates = ['April 2019', 'July 2019', 'December 2018']
r = requests.get('https://www.gmcameetings.co.uk/meetings/committee/36/economy_business_growth_and_skills_overview_and_scrutiny')
soup = bs(r.content, 'lxml')

f = open(r"E:\Internship\WORK\GMCA\Getting PDFS\gmcabusinessdatelinks.txt", "w+")

for date in dates:
    if ['a'] in soup.select('a:contains("' + date + '")'):
        r2 = requests.get(date['href'])
        print("link1")
        page2 = r2.text
        soup2 = bs(page2, 'lxml')
        pdf_links = soup2.find_all('a', href=True)
        for plink in pdf_links:
            if plink['href'].find('minutes')>1:
                print("Minutes!")
                f.write(str(plink['href']) + ' ')
f.close()

It creates the text file, but it's empty. I want a text file that contains all of the PDF links. Thanks.
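
For what it's worth, the empty file follows directly from the condition: soup.select() returns a list of Tag objects, so the literal list ['a'] is never an element of it and the loop body never runs. Even if it did run, date here is a plain string from the dates list, so date['href'] would raise a TypeError. A minimal sketch that demonstrates this, assuming the page is reachable:

import requests
from bs4 import BeautifulSoup as bs

r = requests.get('https://www.gmcameetings.co.uk/meetings/committee/36/economy_business_growth_and_skills_overview_and_scrutiny')
soup = bs(r.content, 'lxml')

matches = soup.select('a:contains("April 2019")')
print(matches[:3])        # Tag objects, or an empty list
print(['a'] in matches)   # always False: a list never equals a Tag
# 'April 2019'['href']    # would raise TypeError: string indices must be integers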

If you want to grab the PDF links that contain the minutes keyword, the following should work:

import requests
from bs4 import BeautifulSoup

link = 'https://www.gmcameetings.co.uk/meetings/committee/36/economy_business_growth_and_skills_overview_and_scrutiny'

dates = ['April 2019', 'July 2019', 'December 2018']

r = requests.get(link)
soup = BeautifulSoup(r.text, 'lxml')
target_links = [[i['href'] for i in soup.select(f'a:contains("{date}")')] for date in dates]

with open("output_file.txt","w",encoding="utf-8") as f:
    for target_link in target_links:

        res = requests.get(target_link[0])
        soup_obj = BeautifulSoup(res.text,"lxml")
        pdf_links = [item.get("href") for item in soup_obj.select("#content .item-list a[href*='minutes']")]
        for pdf_file in pdf_links:
            print(pdf_file)
            f.write(pdf_file+"\n")
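
One caveat: target_links is a list of lists, so target_link[0] raises an IndexError if any of the dates has no matching anchor on the page. A flattened variant (just a sketch, reusing the names and selectors above) sidesteps the indexing; dates with no match simply contribute nothing. Note also that recent soupsieve releases prefer the spelling :-soup-contains() for this non-standard pseudo-class:

import requests
from bs4 import BeautifulSoup

link = 'https://www.gmcameetings.co.uk/meetings/committee/36/economy_business_growth_and_skills_overview_and_scrutiny'
dates = ['April 2019', 'July 2019', 'December 2018']

r = requests.get(link)
soup = BeautifulSoup(r.text, 'lxml')
# one flat list of hrefs instead of one list per date
target_links = [i['href'] for date in dates for i in soup.select(f'a:contains("{date}")')]

with open("output_file.txt", "w", encoding="utf-8") as f:
    for target_link in target_links:
        res = requests.get(target_link)
        soup_obj = BeautifulSoup(res.text, "lxml")
        for item in soup_obj.select("#content .item-list a[href*='minutes']"):
            f.write(item.get("href") + "\n")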

You could use a regex instead, with soup.find('a', text=re.compile(date)):

import requests
from bs4 import BeautifulSoup as bs
import re

dates = ['April 2019', 'July 2019', 'December 2018']
r = requests.get('https://www.gmcameetings.co.uk/meetings/committee/36/economy_business_growth_and_skills_overview_and_scrutiny')
soup = bs(r.content, 'lxml')

f = open(r"E:\gmcabusinessdatelinks.txt", "w+")

for date in dates:
    link = soup.find('a', text=re.compile(date))
    r2 = requests.get(link['href'])
    print("link1")
    page2 = r2.text
    soup2 = bs(page2, 'lxml')
    pdf_links = soup2.find_all('a', href=True)
    for plink in pdf_links:
        if plink['href'].find('minutes')>1:
            print("Minutes!")
            f.write(str(plink['href']) + ' ')
f.close()
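
Note that soup.find() returns None when no anchor matches, so link['href'] raises a TypeError for any date that is missing from the page. A defensive sketch of the same loop, using a with block so the file is closed even if a request fails, and 'minutes' in plink['href'] as the test (unlike .find('minutes') > 1, it also matches an href that starts with the word):

import requests
from bs4 import BeautifulSoup as bs
import re

dates = ['April 2019', 'July 2019', 'December 2018']
r = requests.get('https://www.gmcameetings.co.uk/meetings/committee/36/economy_business_growth_and_skills_overview_and_scrutiny')
soup = bs(r.content, 'lxml')

with open(r"E:\gmcabusinessdatelinks.txt", "w") as f:
    for date in dates:
        link = soup.find('a', text=re.compile(date))
        if link is None:  # date not on the page; skip instead of crashing
            continue
        soup2 = bs(requests.get(link['href']).text, 'lxml')
        for plink in soup2.find_all('a', href=True):
            if 'minutes' in plink['href']:
                f.write(plink['href'] + ' ')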

Then debug. For a script like this, print debugging is usually enough. Assign the first soup.select result to a variable and print it. Is it empty? If not, what about soup2? Does the page come back at all? Narrow the problem down that way and you'll find what the real issue is. Could you show us what kind of PDF links you want written to the text file?
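
A minimal version of that print-debugging pass might look like this (hypothetical variable names, same URL as the question):

import requests
from bs4 import BeautifulSoup as bs

r = requests.get('https://www.gmcameetings.co.uk/meetings/committee/36/economy_business_growth_and_skills_overview_and_scrutiny')
print(r.status_code)  # did the page come back at all?

soup = bs(r.content, 'lxml')
first = soup.select('a:contains("April 2019")')
print(first)  # empty? then the selector or the date text is the problem

if first:
    soup2 = bs(requests.get(first[0]['href']).text, 'lxml')
    print(len(soup2.find_all('a', href=True)))  # does the date page have any links?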