Web scraping: getting "AttributeError: 'NoneType' object has no attribute 'get'" when scraping data from Flipkart?

Tags: web-scraping, beautifulsoup

I am trying to scrape mobiles data from Flipkart. Below is the code I wrote.

Here is the actual code from the image:

import requests
from bs4 import BeautifulSoup

home_page_link = "https://www.flipkart.com"
href = "/search?q=mobiles&as=on&as-show=on&otracker=AS_Query_TrendingAutoSuggest_1_0_na_na_na&otracker1=AS_Query_TrendingAutoSuggest_1_0_na_na_na&as-pos=1&as-type=TRENDING&suggestionId=mobiles&requestId=55feeb8d-8549-48a8-9325-1c0e8756151e&page=1"
url = home_page_link + href
for i in range(1, 101):
    print("page: ", i)

    page_response = requests.get(url)
    print(page_response)
    soup = BeautifulSoup(page_response.content, 'html.parser')

#   cards = soup.find_all('div', attrs={'class': '_1UoZlX'})

#   for card in cards:
#       name = card.find("div", attrs={'class': '_3wU53n'})
#       price = card.find('div', attrs={'class': '_1vC4OE'})
#       print(name.text, price.text)
    

    next_link = soup.find("a", text="Next")

    print(type(next_link))

    link = next_link.get("href")
    home_page_link = "https://www.flipkart.com"
    next_page_link = home_page_link + link
    url = next_page_link
I am getting a NoneType object at page 29:

After running the same code again:

You are trying to change pages on Flipkart to fetch all the records.

The page can be switched simply by changing the last part of the link: you only need to change the page value in the URL.

To fetch the content of every page, try this:

import requests
from bs4 import BeautifulSoup
home_page_link = "https://www.flipkart.com"
href = "/search?q=mobiles&as=on&as-show=on&otracker=AS_Query_TrendingAutoSuggest_1_0_na_na_na&otracker1=AS_Query_TrendingAutoSuggest_1_0_na_na_na&as-pos=1&as-type=TRENDING&suggestionId=mobiles&requestId=55feeb8d-8549-48a8-9325-1c0e8756151e&page="       # trailing page number removed; it is appended inside the loop
url = home_page_link + href
print(url)
for i in range(1, 101):
    print("page: ", i)
    new_url = url + str(i)       # append the page number; each iteration of `i` requests the next page
    page_response = requests.get(new_url)
    print(page_response)
    soup = BeautifulSoup(page_response.content, 'html.parser')
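
To also pull the product name and price from each page (the part that is commented out in the question), something along these lines should work. Note that '_1UoZlX', '_3wU53n' and '_1vC4OE' are Flipkart's auto-generated CSS class names taken from the original snippet; they change periodically, so treat this as a sketch rather than a drop-in solution:

import requests
from bs4 import BeautifulSoup

home_page_link = "https://www.flipkart.com"
href = "/search?q=mobiles&page="          # simplified query string; the page number is appended below
url = home_page_link + href

for i in range(1, 101):
    print("page:", i)
    page_response = requests.get(url + str(i))
    soup = BeautifulSoup(page_response.content, 'html.parser')

    # class names taken from the original question; Flipkart regenerates them, so verify in the page source
    cards = soup.find_all('div', attrs={'class': '_1UoZlX'})
    for card in cards:
        name = card.find('div', attrs={'class': '_3wU53n'})
        price = card.find('div', attrs={'class': '_1vC4OE'})
        if name is not None and price is not None:    # skip cards where either element is missing
            print(name.text, price.text)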

Great, it works, thanks @Bhargav Desai. But could you help me understand why my approach with (text="Next") did not work? That should also have fetched the next page link. What is the problem with my code?
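
soup.find() returns None whenever nothing matches, and calling .get("href") on None is exactly what raises "AttributeError: 'NoneType' object has no attribute 'get'". A likely reason the match fails on Flipkart is that the word "Next" sits inside a child tag (for example <a ...><span>Next</span></a>), so text="Next" on the <a> itself never matches; a page served without the expected markup (for example a bot-check response) would have the same effect. Below is a minimal sketch of the original "Next"-link approach with a guard, assuming the nested-span markup described above:

import requests
from bs4 import BeautifulSoup

url = "https://www.flipkart.com/search?q=mobiles&page=1"

for i in range(1, 101):
    print("page:", i)
    page_response = requests.get(url)
    soup = BeautifulSoup(page_response.content, 'html.parser')

    # find() returns None when no <a> tag's string is exactly "Next"
    next_link = soup.find("a", text="Next")
    if next_link is None:
        # the text may be wrapped in a child tag such as <a ...><span>Next</span></a>;
        # in that case look for the span and walk up to its parent <a>
        span = soup.find("span", text="Next")
        next_link = span.find_parent("a") if span is not None else None

    if next_link is None or next_link.get("href") is None:
        print("No usable 'Next' link found on page", i, "- stopping.")
        break

    url = "https://www.flipkart.com" + next_link.get("href")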