Python 3.x: "More" button is clickable on page 1 but not on page 2

Tags: python-3.x, selenium, web-scraping

This is a follow-up to my question about how to click the "More" button on a web page. My earlier question is linked below; someone kindly answered it. Because I'm not very familiar with the find_element_by_class_name function, I simply added that person's revised code to my existing code, so my modified code is not efficient (my apologies).

The situation is that there are two kinds of "More" buttons. The first is in the property-description section, and the second is in the text of the reviews. If you click the "More" button in just one review, all the reviews expand so you can read the full text of each one.

The problem I'm having is that I can click the "More" button to view the reviews on page 1, but not on page 2. Below is the error message I receive; my code keeps running regardless (it doesn't stop when the error appears). Message:

no such element: Unable to locate element: {"method":"tag name","selector":"span"}

As I understand it, every review has an entry class and a corresponding span inside it. I don't understand why Python says it can't find it.

import time

from selenium import webdriver
from selenium.webdriver import ActionChains
from bs4 import BeautifulSoup

review_list=[]
review_appended_list=[]
review_list_v2=[]
review_appended_list_v2=[]
listed_reviews=[]
listed_reviews_v2=[]
listed_reviews_total=[]
listed_reviews_total_v2=[]
final_list=[]

#Incognito Mode
option = webdriver.ChromeOptions()
option.add_argument("--incognito")

#Open Chrome
driver=webdriver.Chrome(executable_path="C:/Users/chromedriver.exe",options=option)

#url I want to visit (I'm going to loop over multiple listings but for simplicity, I just added one listing url).
lists = ['https://www.tripadvisor.com/VacationRentalReview-g30196-d6386734-Hot_51st_St_Walk_to_Mueller_2BDR_Modern_sleeps_7-Austin_Texas.html']

for k in lists:

    driver.get(k)
    time.sleep(3)

    #click 'More' on description part.
    link = driver.find_element_by_link_text('More')

    try:
        ActionChains(driver).move_to_element(link).perform()  # perform() actually runs the queued action
        time.sleep(1) # time to move to link

        link.click()
        time.sleep(1) # time to update HTML
    except Exception as ex:
        print(ex)

    time.sleep(3)

    # first "More" shows text in all reviews - there is no need to search other "More"
    try:
        first_entry = driver.find_element_by_class_name('entry')
        more = first_entry.find_element_by_tag_name('span')
        #more = first_entry.find_element_by_link_text('More')
    except Exception as ex:
        print(ex)

    try:
        ActionChains(driver).move_to_element(more).perform()
        time.sleep(1) # time to move to link

        more.click()
        time.sleep(1) # time to update HTML
    except Exception as ex:
        print(ex)

    #begin parsing html and scraping data.
    html =driver.page_source
    soup=BeautifulSoup(html,"html.parser")
    listing=soup.find_all("div", class_="review-container")

    all_reviews = driver.find_elements_by_class_name('wrap')
    for review in all_reviews:

        all_entries = review.find_elements_by_class_name('partial_entry')
        if all_entries:
            review_list=[all_entries[0].text]
            review_appended_list.extend([review_list])

    for i in range(len(listing)):
        review_id=listing[i]["data-reviewid"]
        listing_v1=soup.find_all("div", class_="rating reviewItemInline")
        rating=listing_v1[i].span["class"][1]
        review_date=listing_v1[i].find("span", class_="ratingDate relativeDate")
        review_date_detail=review_date["title"]

        listed_reviews=[review_id, review_date_detail, rating[7:8]]
        listed_reviews.extend([k])
        listed_reviews_total.append(listed_reviews)

    for a,b in zip (listed_reviews_total,review_appended_list):
        final_list.append(a+b)

    #loop over from the 2nd page of the reviews for the same listing.
    for j in range(5,20,5):
        url_1='-'.join(k.split('-',3)[:3])
        url_2='-'.join(k.split('-',3)[3:4])

        middle="-or%d-" % j

        final_k=url_1+middle+url_2

        driver.get(final_k)
        time.sleep(3)

        link = driver.find_element_by_link_text('More')

        try:
            ActionChains(driver).move_to_element(link).perform()
            time.sleep(1) # time to move to link

            link.click()
            time.sleep(1) # time to update HTML
        except Exception as ex:
            print(ex)

        # first "More" shows text in all reviews - there is no need to search other "More"
        try:
            first_entry = driver.find_element_by_class_name('entry')
            more = first_entry.find_element_by_tag_name('span')
        except Exception as ex:
            print(ex)

        try:
            ActionChains(driver).move_to_element(more).perform()
            time.sleep(2) # time to move to link

            more.click()
            time.sleep(2) # time to update HTML
        except Exception as ex:
            print(ex)

        html =driver.page_source
        soup=BeautifulSoup(html,"html.parser")
        listing=soup.find_all("div", class_="review-container")


        all_reviews = driver.find_elements_by_class_name('wrap')
        for review in all_reviews:
            all_entries = review.find_elements_by_class_name('partial_entry')
            if all_entries:
                #print('--- review ---')
                #print(all_entries[0].text)
                #print('--- end ---')
                review_list_v2=[all_entries[0].text]
                #print (review_list)
                review_appended_list_v2.extend([review_list_v2])

                #print (review_appended_list)

        for i in range(len(listing)):
            review_id=listing[i]["data-reviewid"]
            #print review_id
            listing_v1=soup.find_all("div", class_="rating reviewItemInline")
            rating=listing_v1[i].span["class"][1]
            review_date=listing_v1[i].find("span", class_="ratingDate relativeDate")
            review_date_detail=review_date["title"]
            listed_reviews_v2=[review_id, review_date_detail, rating[7:8]]
            listed_reviews_v2.extend([k])


            listed_reviews_total_v2.append(listed_reviews_v2)

        for a,b in zip (listed_reviews_total_v2,review_appended_list_v2):
            final_list.append(a+b)

        print (final_list)
        if len(listing) !=5:
            break
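As a sanity check, the pagination-URL construction used in the loop above can be run in isolation (plain Python, no browser needed); the listing URL is the one from the question:

```python
# Rebuild the page-2 review URL the same way the loop above does.
k = ('https://www.tripadvisor.com/VacationRentalReview-g30196-d6386734-'
     'Hot_51st_St_Walk_to_Mueller_2BDR_Modern_sleeps_7-Austin_Texas.html')

parts = k.split('-', 3)       # split on the first three hyphens only
url_1 = '-'.join(parts[:3])   # scheme/host + geo id + listing id
url_2 = '-'.join(parts[3:4])  # the rest of the path

j = 5                         # review offset: page 2 starts at review 5
final_k = url_1 + "-or%d-" % j + url_2
print(final_k)
```

This prints the original URL with `-or5-` spliced in after the listing id, which matches TripAdvisor's review-paging scheme as used in the question.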
How can I enable clicking the "More" button on the second and remaining pages, so that I can scrape the full-text reviews?

Edit:

The error messages I get are these two lines:

Message: no such element: Unable to locate element: {"method":"tag name","selector":"span"}
Message: stale element reference: element is not attached to the page document
I think all of my code keeps running because I used try and except? Normally, Python stops running when it hits an error.
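That is exactly what happens: a broad try/except catches the exception and prints it, and execution simply continues past the failed block. A minimal illustration (plain Python, no Selenium needed; the raised RuntimeError stands in for Selenium's NoSuchElementException):

```python
steps = []

try:
    steps.append("before")
    # Stand-in for a failing Selenium lookup such as
    # first_entry.find_element_by_tag_name('span'):
    raise RuntimeError("no such element")
    steps.append("never reached")  # skipped: the raise above aborts the try body
except Exception as ex:
    print(ex)          # the error is printed, not re-raised

steps.append("after")  # execution continues past the failed block
print(steps)           # the "never reached" step is missing
```

This is why the script runs to completion even though the span lookup on page 2 fails: the error is printed, `more` keeps its old (now stale) value, and the later `more.click()` fails inside another try/except.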

Try it like this:

driver.execute_script("""
  arguments[0].click()
""", link)

Comments:

Do you see an error for the "More" button on the second page? Can you post the full stack trace? Which line of code causes it?

@Sureshmani Hi. Yes, you're right: I see an error for the reviews' "More" button on the second page! The "More" button in the property description works. I added the error message I received.

Please replace link.click() with this.
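One pattern that avoids silently skipping a review when its span hasn't rendered yet is a small retry wrapper that re-runs the lookup a few times before giving up. The helper below is plain Python (the commented-out usage line with `first_entry` is a hypothetical stand-in for the question's Selenium lookup, not a tested call):

```python
import time

def retry(fn, attempts=3, delay=0.5, exceptions=(Exception,)):
    """Call fn repeatedly until it succeeds or attempts run out.

    Unlike a bare try/except that prints and moves on, this re-raises
    the last exception, so real failures are not silently swallowed.
    """
    for i in range(attempts):
        try:
            return fn()
        except exceptions:
            if i == attempts - 1:
                raise
            time.sleep(delay)  # give the page time to render before retrying

# Hypothetical usage with the lookup from the question:
# more = retry(lambda: first_entry.find_element_by_tag_name('span'))
```

For Selenium specifically, an explicit wait (WebDriverWait with expected_conditions) serves the same purpose and is the more idiomatic choice.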