Python script only scrapes the first page instead of multiple pages


I am trying to scrape multiple pages of a website, but the program only scrapes the first page.

import requests
from bs4 import BeautifulSoup
import re
import json
import time

def make_soup(url):

    source = requests.get(url).text
    soup = BeautifulSoup(source, 'lxml')

    pattern = re.compile(r'window.__WEB_CONTEXT__={pageManifest:(\{.*\})};')
    script = soup.find("script", text=pattern)
    jsonData = pattern.search(script.text).group(1)

    pattern_number = re.compile(r'\"[0-9]{9,12}\":(\{\"data\":\{\"cachedFilters\":(.*?)\}\}),\"[0-9]{9,11}\"')
    jsonData2 = pattern_number.search(jsonData).group(1)

    dictData = json.loads(jsonData2)
    return dictData

def get_reviews(dictData):

    """ Return a list of five dicts with reviews.
    """

    all_dictionaries = []

    for data in dictData['data']['locations']:
        for reviews in data['reviewListPage']['reviews']:

            review_dict = {}

            review_dict["reviewid"] = reviews['id']
            review_dict["reviewurl"] =  reviews['absoluteUrl']
            review_dict["reviewlang"] = reviews['language']
            review_dict["reviewdate"] = reviews['createdDate']

            userProfile = reviews['userProfile']
            review_dict["author"] = userProfile['displayName']

            all_dictionaries.append(review_dict)

    return all_dictionaries

def main():

    url = 'https://www.tripadvisor.ch/Hotel_Review-g188113-d228146-Reviews-Coronado_Hotel-Zurich.html#REVIEWS'

    dictData = make_soup(url)
    review_list = get_reviews(dictData) # list with five dicts
    #print(review_list)

    page_number = 5

    while page_number <= 260: # number in the URL
        next_url = 'https://www.tripadvisor.ch/Hotel_Review-g188113-d228146-Reviews-or' + str(page_number) + '-Coronado_Hotel-Zurich.html#REVIEWS'
        dictData = make_soup(url)
        review_list2 = get_reviews(dictData)
        print(review_list2)

        page_number += 5
        time.sleep(0.5)

if __name__ == "__main__":
    main()
I don't know whether this is a good approach.
Do you have any suggestions? Thanks in advance.

You assign the new URL to next_url, but you read the page with url:

next_url = 'https://www.tripadvisor.ch/Hotel_Review-g188113-d228146-Reviews-or' + str(page_number) + '-Coronado_Hotel-Zurich.html#REVIEWS'
dictData = make_soup(url)
You have to rename the variable:

url = 'https://www.tripadvisor.ch/Hotel_Review-g188113-d228146-Reviews-or' + str(page_number) + '-Coronado_Hotel-Zurich.html#REVIEWS'
dictData = make_soup(url)
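
The effect of this mix-up can be reproduced without any network access. In the sketch below, a stub fetch function stands in for make_soup() and simply records which URL it was asked to read:

```python
# Stub standing in for make_soup(): it just records which URL it was given.
def fetch(url):
    return url

url = 'first-page'   # assigned once before the loop, never reassigned
page_number = 5
fetched = []

while page_number <= 15:
    # A new URL is built on every iteration...
    next_url = 'page-or' + str(page_number)
    # ...but the old `url` is what actually gets fetched.
    fetched.append(fetch(url))
    page_number += 5

print(fetched)  # the same first page on every iteration
```

Because `next_url` is never passed to the fetch, every iteration reads the first page again, which is exactly the symptom described in the question.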

I'm not sure I understand what you mean in that paragraph, and I'm also not sure whether I may crawl it... sorry :-D OK, let me put it this way: can I crawl multiple pages with this URL?

Have you tried it? As far as I know, using or5, or10, etc., it should read the pages. There was a similar question long ago that I may have answered using or5, or10 to read the pages. Here it is. Inside the code you can find the link to the Stack Overflow question.

@furas Thank you very much! Your code outputs exactly what I need :-) At first I also tried to crawl this site with Scrapy, but to crawl the ratings I had to switch to the JSON approach. Do you know why my code above doesn't work for multiple pages?

Even I didn't see the mistake at first - I had to use print(url) inside make_soup() to see it:
url = 'https://www.tripadvisor.ch/Hotel_Review-g188113-d228146-Reviews-or' + str(page_number) + '-Coronado_Hotel-Zurich.html#REVIEWS'
dictData = make_soup(url)
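
Putting it together, the pagination loop could build a fresh URL and pass that same variable on every iteration. The sketch below assumes the or5/or10 offset scheme, the step of 5 reviews per page, and the upper bound of 260 from the question; build_page_url is a hypothetical helper added here for clarity:

```python
BASE = 'https://www.tripadvisor.ch/Hotel_Review-g188113-d228146-Reviews'
SUFFIX = '-Coronado_Hotel-Zurich.html#REVIEWS'

def build_page_url(offset):
    """Return the review-list URL for a given review offset (or5, or10, ...)."""
    return '{}-or{}{}'.format(BASE, offset, SUFFIX)

# Offsets 5, 10, ..., 260, matching the while loop in the question.
page_urls = [build_page_url(n) for n in range(5, 261, 5)]

# In the real script each URL would then be fetched in turn, e.g.:
# for url in page_urls:
#     dictData = make_soup(url)   # the freshly built url, not the first page
#     print(get_reviews(dictData))
#     time.sleep(0.5)
```

Generating the URLs first also makes it easy to print them and verify the offsets before doing any network requests.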