
Python selenium webdriver: I want to click the next page until the last page


I am trying to click through to the next page, but the code below clicks on a different page. I want to click from the first page to the last page. This is my code:


from selenium import webdriver
from bs4 import BeautifulSoup as bs
import time

url = 'https://curecity.in/vendor-list.php?category=Doctor&filters_location=Jaipur&filters%5Bsubareas_global%5D=&filters_speciality='

driver = webdriver.Chrome(r'C:\chromedriver.exe')
driver.get(url)
driver.maximize_window()

next_page_number = 1
next_page = True
while next_page == True:
    soup = bs(driver.page_source, 'html.parser')
    for link in soup.find_all('div', class_='col-md-9 feature-info'):
        link1 = link.find('a')
        print(link1['href'])
    try:
        # click the ">" link to move to the next page
        driver.find_element_by_link_text(">").click()
        next_page_number += 1
        time.sleep(1)
    except:
        print('No more pages')
        next_page = False

driver.close()

I found two problems on that page:

1) It loads very slowly, so I had to sleep for up to 10 seconds before reading the data and calling click() on the button.

2) The ">" button does not work the way I expected - it jumps 3 pages (even when I click it manually in the browser) - so instead I search for the button that carries the number of the next page and click that one:

 driver.find_element_by_xpath('//a[@data-page="{}"]'.format(next_page_number)).click()

Maybe you can find the next-page button with some other method. Below is the full working code, even without BeautifulSoup. It works for me - I only use sleep() and this click().
from selenium import webdriver
#from bs4 import BeautifulSoup as bs
import time

url = 'https://curecity.in/vendor-list.php?category=Doctor&filters_location=Jaipur&filters%5Bsubareas_global%5D=&filters_speciality='

driver = webdriver.Chrome(r'C:\chromedriver.exe')
#driver = webdriver.Firefox()
driver.maximize_window()

driver.get(url)
next_page_number = 1

while True:

    print('page:', next_page_number)
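    # the page loads very slowly, so wait before reading the data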
    time.sleep(10)

    #soup = bs(driver.page_source, 'html.parser')
    #for link in soup.find_all('div',class_='col-md-9 feature-info'):
    #    link1 = link.find('a')
    #    print(link1['href'])

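    # read the listing links directly with Selenium instead of BeautifulSoup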
    for link in driver.find_elements_by_xpath('//div[@class="col-md-2 feature-icon"]/a'):
        print(link.get_attribute('href'))

    try:
        next_page_number += 1
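        # the ">" button skips pages, so click the link whose data-page attribute is the next page number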
        driver.find_element_by_xpath('//a[@data-page="{}"]'.format(next_page_number)).click()
    except:
        print('No more pages')
        break # exit loop

#driver.close()
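
A possible refinement, not part of the original answer: instead of a fixed time.sleep(10), WebDriverWait can poll until the elements actually appear. The sketch below assumes the same URL, the same data-page pagination links and the same selenium 3 style driver setup as above; the 30 second timeout is arbitrary, and if the site refreshes the listing via AJAX an extra staleness check may still be needed.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

url = 'https://curecity.in/vendor-list.php?category=Doctor&filters_location=Jaipur&filters%5Bsubareas_global%5D=&filters_speciality='

driver = webdriver.Chrome(r'C:\chromedriver.exe')
driver.maximize_window()
driver.get(url)

wait = WebDriverWait(driver, 30)   # poll up to 30 seconds instead of sleeping a fixed 10 seconds
next_page_number = 1

while True:
    print('page:', next_page_number)

    # wait until the listing links are present, then print their hrefs
    links = wait.until(EC.presence_of_all_elements_located(
        (By.XPATH, '//div[@class="col-md-2 feature-icon"]/a')))
    for link in links:
        print(link.get_attribute('href'))

    next_page_number += 1
    try:
        # wait until the link for the next page number is clickable, then click it
        wait.until(EC.element_to_be_clickable(
            (By.XPATH, '//a[@data-page="{}"]'.format(next_page_number)))).click()
    except TimeoutException:
        print('No more pages')
        break

driver.close()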