Python 3.x: Page navigator max-value scraper - only getting output for the last value


This is a program I wrote to extract the maximum page number from each category section in a list. I'm not getting all the values; I only get the value for the last item in the list. What do I need to change to get output for every item?

import bs4
from urllib.request import urlopen as uReq
from bs4 import BeautifulSoup as soup

#List for extended links to the base url

links = ['Link_1/','Link_2/','Link_3/']
#Function to find the biggest number in the page navigation
#section. The element just before 'Next→' holds the upper limit

def page_no():
    bs = soup(page_html, "html.parser")
    max_page = bs.find('a',{'class':'next page-numbers'}).findPrevious().text
    print(max_page)

#url loop
for url in links:
    my_urls ='http://example.com/category/{}/'.format(url)

# opening up connection,grabbing the page
uClient = uReq(my_urls)
page_html = uClient.read()
uClient.close()
page_no()
Example of the page navigator:
1 2 3 … 15 Next→


Thanks in advance.

You need to pass page_html into the function and indent the last four lines so they run inside the loop. It's also better to return the max_page value so you can use it outside the function:

def page_no(page_html): 
    bs = soup(page_html, "html.parser")
    max_page = bs.find('a',{'class':'next page-numbers'}).findPrevious().text
    return max_page

#url loop 
for url in links: 
    my_urls='http://example.com/category/{}/'.format(url) 
    # opening up connection,grabbing the page 
    uClient = uReq(my_urls) 
    page_html = uClient.read()
    uClient.close() 
    max_page = page_no(page_html)
    print(max_page)
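To see how the `findPrevious()` call picks out the highest page number, here is a self-contained sketch run against hypothetical navigation markup matching the example above (the HTML structure is assumed, not taken from the real site):

```python
from bs4 import BeautifulSoup

# Hypothetical navigation markup mimicking the pagination described above
html = """
<div class="nav-links">
  <a class="page-numbers" href="/page/1">1</a>
  <a class="page-numbers" href="/page/2">2</a>
  <a class="page-numbers" href="/page/15">15</a>
  <a class="next page-numbers" href="/page/2">Next→</a>
</div>
"""

bs = BeautifulSoup(html, "html.parser")
# The tag immediately before the "Next→" link is the highest page number
max_page = bs.find('a', {'class': 'next page-numbers'}).findPrevious().text
print(max_page)  # → 15
```

`findPrevious()` (alias of `find_previous()`) walks backwards through the parse tree from the "Next→" anchor, so the first tag it hits is the last numbered link.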

Please provide the real URL you are parsing.