
How do I get the Selenium WebDriver in Python to find elements by CSS selector on the following page?

Tags: python, css, selenium, web-scraping, webdriverwait

I'm trying to get Selenium to scrape the first paragraph of a Wikipedia article using a CSS selector.

When I run this code, it only seems to select a paragraph from the starting page, rather than from the one I'm looking for, which in this example is "cats".

Any help with this would be great.


from selenium import webdriver

# Open the Wikipedia home page in Firefox
browser = webdriver.Firefox(executable_path='D:\Import Files that I also want backed up\Jupyter Notebooks\Python Projects\Selenium\driverss\geckodriver.exe')
browser.get('https://en.wikipedia.org')

# Search for "cats" via the search box
search_elem = browser.find_element_by_css_selector('#searchInput')
search_elem.send_keys('cats')
search_elem.submit()

# Grab the first <p> element on the page and print its text
results_elem = browser.find_element_by_css_selector('p')
print(results_elem.text)


To get the text of the first paragraph from the wiki page, use WebDriverWait() and wait for the paragraph to become visible:

from selenium import webdriver
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC

browser = webdriver.Firefox(executable_path='D:\Import Files that I also want backed up\Jupyter Notebooks\Python Projects\Selenium\driverss\geckodriver.exe')
browser.get('https://en.wikipedia.org')

# Search for "cats" from the home page
search_elem = browser.find_element_by_css_selector('#searchInput')
search_elem.send_keys('cats')
search_elem.submit()

# Wait up to 10 seconds for the article's first real paragraph to become visible,
# scoping the selector to the article body instead of the whole page
results_elem = WebDriverWait(browser, 10).until(EC.visibility_of_element_located((By.CSS_SELECTOR, "div.mw-parser-output p:nth-of-type(3)")))
print(results_elem.text)

What is your expected output?

I want to print the first paragraph of the "Cats" page, but when I use the CSS selector I'm still only scraping the initial wikipedia.org page, even though I'm on the "Cats" page. Basically, I want to be able to scrape a web page after using Selenium to search for a topic. I see this behaviour when I scrape after using the search bar: even when I reach the cats page by searching for "cats", the CSS selector is still matched against the first page visited.

If your page needs more time to load, then give it some time: sleep(5) after submitting the page. Let me know how it goes?
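A minimal sketch of the sleep-based workaround suggested in the comments, assuming the same geckodriver path and the same paragraph selector as the answer above; a hard-coded time.sleep(5) is only a rough substitute for an explicit WebDriverWait:

import time
from selenium import webdriver

browser = webdriver.Firefox(executable_path='D:\Import Files that I also want backed up\Jupyter Notebooks\Python Projects\Selenium\driverss\geckodriver.exe')
browser.get('https://en.wikipedia.org')

# Search for "cats" from the home page
search_elem = browser.find_element_by_css_selector('#searchInput')
search_elem.send_keys('cats')
search_elem.submit()

# Crude wait: pause while the article page loads (5 seconds is an arbitrary guess)
time.sleep(5)

# Scope the selector to the article body so it no longer matches the home page
results_elem = browser.find_element_by_css_selector('div.mw-parser-output p:nth-of-type(3)')
print(results_elem.text)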