Python BeautifulSoup's find method returns None instead of the link

Thanks for taking a look at my question. I'm trying to get the next-page link from an old Reddit page, but somehow the find method gives me back a None object. The code:

    def crawl(self):
        curr_page_url = self.start_url
        curr_page = requests.get(curr_page_url)
        bs = BeautifulSoup(curr_page.text, 'lxml')
        # all_links = GetAllLinks(self.start_url)
        nxtlink = bs.find('a', attrs={'rel': 'nofollow next'})['href']
        print(nxtlink)
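When find() matches nothing it returns None, so the ['href'] lookup above is what actually fails, with TypeError: 'NoneType' object is not subscriptable. A minimal guard (just a sketch; the function name is illustrative, not from the original code) makes that case visible instead:

import requests
from bs4 import BeautifulSoup

def next_link(url):
    # Fetch and parse the page the same way crawl() does.
    page = requests.get(url)
    bs = BeautifulSoup(page.text, 'lxml')
    anchor = bs.find('a', attrs={'rel': 'nofollow next'})
    if anchor is None:
        # No match: the markup differs from what we expect, or the server
        # returned a different page (e.g. a bot-blocked one) altogether.
        print("next link not found; response starts with:", page.text[:120])
        return None
    return anchor['href']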

The HTML page is linked here; I'm trying to get the next-page link, which sits inside a span tag, this tag:

<span class="next-button">
    <a href="https://old.reddit.com/r/learnprogramming/?count=25&amp;after=t3_j54ae2" rel="nofollow next">next ›</a>
</span>
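Feeding just this snippet to BeautifulSoup shows that both the rel-based lookup used in crawl() and an equivalent CSS selector on the span do find the link, so the selector itself isn't the problem. A small self-contained sketch (mine, not part of the original post):

from bs4 import BeautifulSoup

snippet = '''
<span class="next-button">
    <a href="https://old.reddit.com/r/learnprogramming/?count=25&amp;after=t3_j54ae2" rel="nofollow next">next ›</a>
</span>
'''

soup = BeautifulSoup(snippet, "html.parser")
print(soup.find('a', attrs={'rel': 'nofollow next'})['href'])   # rel lookup, as in crawl()
print(soup.select_one("span.next-button a")['href'])            # equivalent CSS selector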

I think you have to add headers to the request, otherwise the server will think you are a bot, which is correct.

Try this:

import requests
from bs4 import BeautifulSoup

# Browser-like headers; without them the server may treat the request as a
# bot and return a page that doesn't contain the pagination link.
headers = {
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8",
    "Accept-Encoding": "gzip, deflate, br",
    "Accept-Language": "en-GB,en;q=0.5",
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:81.0) Gecko/20100101 Firefox/81.0",
}

response = requests.get("https://old.reddit.com/r/learnprogramming/", headers=headers).text
# Pull the href off the rel="nofollow next" pagination anchor.
next_link = BeautifulSoup(response, "html.parser").find('a', attrs={'rel': 'nofollow next'})['href']
print(next_link)

Output:

https://old.reddit.com/r/learnprogramming/?count=25&after=t3_j5ezm8
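For completeness, here is a rough sketch (mine, with an assumed page limit, not part of the accepted answer) of folding the headers back into a crawl-style loop that keeps following the next link:

import requests
from bs4 import BeautifulSoup

HEADERS = {
    "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:81.0) Gecko/20100101 Firefox/81.0",
}

def crawl(start_url, max_pages=3):
    # Follow the rel="nofollow next" link a few times, printing each next URL.
    url = start_url
    for _ in range(max_pages):
        html = requests.get(url, headers=HEADERS).text
        anchor = BeautifulSoup(html, "html.parser").find('a', attrs={'rel': 'nofollow next'})
        if anchor is None:  # no next link: last page, or the request was blocked
            break
        url = anchor['href']
        print(url)

crawl("https://old.reddit.com/r/learnprogramming/")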

Thank you so much, I had almost given up after an hour of frustration. It worked like magic :)