NoSuchElementException when scraping Google search results with Python

Tags: python, selenium, google-chrome, web-scraping

Hello everyone,

I am using the code below to scrape Google search results with Python on Spyder, and I get this error message. Please help:

File "C:\Users\mehdi\Anaconda3\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
    raise exception_class(message, screen, stacktrace)

NoSuchElementException: no such element: Unable to locate element: {"method":"css selector","selector":".st"} (Session info: chrome=86.0.4240.111)
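The `NoSuchElementException` above means the `.st` selector matched nothing on the page: Google changes its result markup over time, and the `st` class is no longer present in current layouts. One way to check a selector before calling `.text` on it is `find_elements` (plural), which returns an empty list instead of raising. A minimal sketch, assuming the Selenium 3 API used in the question (`selector_matches` is a hypothetical helper; `container` can be the driver or any WebElement):

```python
def selector_matches(container, css_selector):
    """Return True if at least one element matches css_selector.

    find_elements_by_css_selector (plural) returns a list instead of
    raising NoSuchElementException, so a missing selector can be
    probed safely before any .text or .get_attribute() call.
    """
    return len(container.find_elements_by_css_selector(css_selector)) > 0


# usage sketch (driver is the webdriver instance from the question):
# if not selector_matches(driver, '.st'):
#     print('.st matched nothing -- Google has likely changed its markup')
```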

    # -*- coding: utf-8 -*-
"""
Created on Sun Nov  1 16:15:50 2020

@author: mehdi
"""
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import pandas as pd

from selenium import webdriver
# path to chromedriver.exe 
path = 'D:/chromedriver/chromedriver.exe'
# create an instance of the Chrome webdriver
driver = webdriver.Chrome(path)
# google url
url = 'https://www.google.com'
# Code to open a specific url
driver.get(url)

# set the keyword you want to search for
keyword = 'kpi' 
# locate the search bar by its name attribute
searchBar = driver.find_element_by_name('q')
# send the keyword to the search bar, followed by the Enter key
searchBar.send_keys(keyword)
searchBar.send_keys('\n')


import time
# give the results page time to load
time.sleep(10)

# Alternative to time.sleep: wait explicitly for a dynamic element, e.g.:
# from selenium.common.exceptions import NoSuchElementException
# driver.get("http://somedomain/url_that_delays_loading")
# try:
#     element = WebDriverWait(driver, 10).until(
#         EC.presence_of_element_located((By.ID, "myDynamicElement"))
#     )
# finally:
#     driver.quit()


def scrape():
    pageInfo = []
    try:
        # wait for the search results to be fetched
        WebDriverWait(driver, 10).until(
            EC.presence_of_element_located((By.CLASS_NAME, "g"))
        )
    except Exception as e:
        print(e)
        driver.quit()
    # each search result is wrapped in an element with class "g"
    searchResults = driver.find_elements_by_class_name('g')

    for result in searchResults:
        element = result.find_element_by_css_selector('a')
        link = element.get_attribute('href')
        header = result.find_element_by_css_selector('h3').text
        text = result.find_element_by_class_name('st').text
        pageInfo.append({
            'header': header, 'link': link, 'text': text
        })
    # return after the loop, not inside it, so all results are collected
    return pageInfo



# Number of pages to scrape
numPages = 5
# All the scraped data
infoAll = []
# Scraped data from page 1
infoAll.extend(scrape())
# click "Next" numPages - 1 times to collect the remaining pages
for i in range(numPages - 1):
    nextButton = driver.find_element_by_link_text('Next')
    nextButton.click()
    infoAll.extend(scrape())
   
   
df = pd.DataFrame(infoAll)
fileName = keyword + '_' + str(numPages) + '.csv'
df.to_csv(fileName)