
Python: unable to click an element on the page


I'm trying to go to page 2 and beyond (pagination) with Python Selenium and have spent hours on this. I'm getting the following error from chromedriver and would really appreciate any help:

is not clickable at point(). Other element would receive the click

My code so far is:

import signal
import time
import traceback

from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait


class Chezacash:
    t1 = time.time()
    driver = webdriver.Chrome(chromedriver)  # chromedriver: path to the ChromeDriver executable


    def controller(self):
        self.driver.get("https://www.chezacash.com/#/home/")
        element = WebDriverWait(self.driver, 10).until(
            EC.presence_of_element_located((By.CSS_SELECTOR, "div.panel-heading")))
        soup = BeautifulSoup(self.driver.page_source.encode('utf-8'),"html.parser")
        self.parser(soup)
        self.driver.find_element(By.XPATH, "//li[@class='paginate_button active']/following-sibling::li").click()
        time.sleep(2)
        soup = BeautifulSoup(self.driver.page_source.encode('utf-8'),"html.parser")
        self.parser(soup)


    def parser(self, soup):
        for i in soup.find("table", {"id":"DataTables_Table_1"}).tbody.contents:
            date =  i.findAll("td")[0].get_text().strip()
            time =  i.findAll("td")[1].get_text().strip()
            home =   i.findAll("td")[4].div.span.get_text().strip().encode("utf-8")
            home_odds =  i.findAll("td")[4].div.findAll("span")[1].get_text().strip()
            draw_odds =  i.findAll("td")[5].div.findAll("span")[1].get_text().strip()
            away =   i.findAll("td")[6].div.span.get_text().strip().encode("utf-8")
            away_odds =  i.findAll("td")[6].div.findAll("span")[1].get_text().strip()
            print(home)

cheza = Chezacash()
try:
    cheza.controller()
except:
    cheza.driver.service.process.send_signal(signal.SIGTERM)  # terminate the chromedriver child process
    cheza.driver.quit()
    traceback.print_exc()

What if you locate the "Next" button by its link text, scroll it into view, and then click it:

next_button = self.driver.find_element(By.LINK_TEXT, "Next")
self.driver.execute_script("arguments[0].scrollIntoView();", next_button)
next_button.click()
I would also maximize the browser window before navigating to the page:

self.driver.maximize_window()
self.driver.get("https://www.chezacash.com/#/home/")
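Putting the two suggestions together, the navigation part of controller might look roughly like this. This is only a sketch reusing the imports from the question's code; the "Next" link text is an assumption about how the site labels its pagination button:

    def controller(self):
        self.driver.maximize_window()
        self.driver.get("https://www.chezacash.com/#/home/")
        WebDriverWait(self.driver, 10).until(
            EC.presence_of_element_located((By.CSS_SELECTOR, "div.panel-heading")))
        self.parser(BeautifulSoup(self.driver.page_source, "html.parser"))

        # scroll the "Next" button into view before clicking it
        next_button = self.driver.find_element(By.LINK_TEXT, "Next")
        self.driver.execute_script("arguments[0].scrollIntoView();", next_button)
        next_button.click()

        time.sleep(2)
        self.parser(BeautifulSoup(self.driver.page_source, "html.parser"))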

A quick comment on the code: inside the for i loop, the first line should store i.findAll("td") in a variable, and date and the other variables should then index into it ([0], [1], and so on). As written, the code re-runs .findAll() over the row for every variable it assigns.
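A minimal sketch of that refactor, keeping the original column indices (kickoff is used in place of time so the variable does not shadow the time module):

    def parser(self, soup):
        for i in soup.find("table", {"id": "DataTables_Table_1"}).tbody.contents:
            tds = i.findAll("td")  # scan the row once
            date = tds[0].get_text().strip()
            kickoff = tds[1].get_text().strip()
            home = tds[4].div.span.get_text().strip().encode("utf-8")
            home_odds = tds[4].div.findAll("span")[1].get_text().strip()
            draw_odds = tds[5].div.findAll("span")[1].get_text().strip()
            away = tds[6].div.span.get_text().strip().encode("utf-8")
            away_odds = tds[6].div.findAll("span")[1].get_text().strip()
            print(home)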