Python 3.x: Can't get span class text

Tags: python-3.x, selenium, selenium-webdriver, web-scraping, twitterapi-python

Please see the attached image. I want to scrape the highlighted part of the image, i.e. these attributes: 1 · Trending, #TuesdayMotivation, 7750 Tweets.

Please advise. URL to scrape = https://twitter.com/explore/tabs/trending

from selenium import webdriver

url = 'https://twitter.com/explore/tabs/trending'

# scrolling and scraping tweets
driver = webdriver.Chrome('/chromedriver')
driver.get(url)

trends = driver.find_element_by_xpath('//div[@data-testid="trends"]')
trend = trends[0]
trend.find_element_by_xpath('.//span').text
Output:

Traceback (most recent call last):
  File "/home/PycharmProject/Twitter_trending/ss.py", line 13, in <module>
    trends = driver.find_element_by_xpath('//div[@data-testid="trends"]')
  File "/usr/local/lib/python3.8/dist-packages/selenium/webdriver/remote/webdriver.py", line 394, in find_element_by_xpath
    return self.find_element(by=By.XPATH, value=xpath)
  File "/usr/local/lib/python3.8/dist-packages/selenium/webdriver/remote/webdriver.py", line 976, in find_element
    return self.execute(Command.FIND_ELEMENT, {
  File "/usr/local/lib/python3.8/dist-packages/selenium/webdriver/remote/webdriver.py", line 321, in execute
    self.error_handler.check_response(response)
  File "/usr/local/lib/python3.8/dist-packages/selenium/webdriver/remote/errorhandler.py", line 242, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"xpath","selector":"//div[@data-testid="trends"]"}
  (Session info: chrome=90.0.4430.72)

Try getting the text with JavaScript, using this code:

span = trend.find_element_by_xpath('.//span')
text = driver.execute_script("return arguments[0].innerText", span)
This approach also returns text from nested elements. If it still doesn't work, try a CSS selector instead of XPath:

trends = driver.find_elements_by_css_selector('div[data-testid="trends"]')
trend = trends[0]
span = trend.find_element_by_xpath('.//span')
text = driver.execute_script("return arguments[0].innerText", span)
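Once `execute_script` returns the cell's combined `innerText`, the three attributes from the question still need to be separated. A minimal sketch of that step, assuming the text comes back newline-separated as in the screenshot (`parse_trend` and the sample string are illustrative, not from the thread):

```python
def parse_trend(inner_text):
    """Split a trend cell's innerText into rank/label, hashtag, tweet count."""
    # Drop blank lines and surrounding whitespace from the raw innerText.
    lines = [line.strip() for line in inner_text.splitlines() if line.strip()]
    rank_label = lines[0] if len(lines) > 0 else None
    hashtag = lines[1] if len(lines) > 1 else None
    tweet_count = lines[2] if len(lines) > 2 else None
    return rank_label, hashtag, tweet_count

# Assumed shape of one trend cell's innerText, based on the screenshot.
sample = "1 \u00b7 Trending\n#TuesdayMotivation\n7750 Tweets"
print(parse_trend(sample))  # -> ('1 · Trending', '#TuesdayMotivation', '7750 Tweets')
```

Missing lines simply come back as `None`, so the parser does not crash if Twitter renders fewer rows than expected.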

Shouldn't it be driver.find_elements rather than driver.find_element?

I have tried both driver.find_elements and driver.find_element, but no luck! trends = driver.find_elements_by_xpath('//div[@data-testid="trend"]') (small typo there: trend -> trends) still gets an empty list []! After trying both with the CSS selector I get this error: AttributeError: 'list' object has no attribute 'find_element_by_xpath'. Is this a driver-related issue?

You are calling find_element_by_xpath on the list object again. Or use it like this: trends = driver.find_elements_by_css_selector('div[data-testid="trends"] span'), skip the span variable, and use trend as the second argument to execute_script.

After that it just prints "None" on the console: trends = driver.find_elements_by_css_selector('div[data-testid="trends"] span'); text = driver.execute_script("return arguments[0].innerText", trends). Checked again.

I think you are not targeting the correct element for the text. I checked the URL; it doesn't work for me either.
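For reference, the two lookups the comments keep mixing up behave differently when nothing matches: the find_element_by_* methods raise NoSuchElementException, while the find_elements_by_* methods return an empty list, which is why trends[0] can fail either way. A stub illustration of that contract (FakeDriver is a stand-in for demonstration, not the real Selenium API):

```python
# Illustrative stub of the Selenium lookup contract, not a real driver:
# find_element raises when nothing matches; find_elements returns [].
class NoSuchElementException(Exception):
    pass

class FakeDriver:
    def __init__(self, elements):
        # Map of selector -> list of matched (fake) elements.
        self._elements = elements

    def find_element_by_xpath(self, xpath):
        matches = self._elements.get(xpath, [])
        if not matches:
            raise NoSuchElementException(xpath)
        return matches[0]

    def find_elements_by_xpath(self, xpath):
        return self._elements.get(xpath, [])

driver = FakeDriver(elements={})  # nothing on the page (yet)

print(driver.find_elements_by_xpath('//div[@data-testid="trends"]'))  # -> []
try:
    driver.find_element_by_xpath('//div[@data-testid="trends"]')
except NoSuchElementException:
    print("raised NoSuchElementException")
```

So an empty list from find_elements and a NoSuchElementException from find_element point at the same root cause here: the element is not in the DOM when the lookup runs, which on a JavaScript-rendered page like this usually means the lookup fired before the content loaded.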