Python: scraping a value from a website using Selenium


I am trying to extract data from the following website:

My target is the value "6" in the octagon:

I believe I am targeting the correct xpath.

Here is my code:

import sys
import os
from selenium.webdriver.firefox.firefox_binary import FirefoxBinary
from selenium import webdriver

os.environ['MOZ_HEADLESS'] = '1'
binary = FirefoxBinary('C:/Program Files/Mozilla Firefox/firefox.exe', log_file=sys.stdout)

browser = webdriver.PhantomJS(service_args=["--load-images=no", '--disk-cache=true'])

url = 'https://www.tipranks.com/stocks/sui/stock-analysis'
xpath = '/html/body/div[1]/div/div/div/div/main/div/div/article/div[2]/div/main/div[1]/div[2]/section[1]/div[1]/div[1]/div/svg/text/tspan'
browser.get(url)

element = browser.find_element_by_xpath(xpath)

print(element)
Here is the error I get back:

Traceback (most recent call last):
  File "C:/Users/jaspa/PycharmProjects/ig-markets-api-python-library/trader/market_signal_IV_test.py", line 15, in <module>
    element = browser.find_element_by_xpath(xpath)
  File "C:\Users\jaspa\AppData\Local\Programs\Python\Python36-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 394, in find_element_by_xpath
    return self.find_element(by=By.XPATH, value=xpath)
  File "C:\Users\jaspa\AppData\Local\Programs\Python\Python36-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 978, in find_element
    'value': value})['value']
  File "C:\Users\jaspa\AppData\Local\Programs\Python\Python36-32\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 321, in execute
    self.error_handler.check_response(response)
  File "C:\Users\jaspa\AppData\Local\Programs\Python\Python36-32\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 242, in check_response
    raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.NoSuchElementException: Message: {"errorMessage":"Unable to find element with xpath '/html/body/div[1]/div/div/div/div/main/div/div/article/div[2]/div/main/div[1]/div[2]/section[1]/div[1]/div[1]/div/svg/text/tspan'","request":{"headers":{"Accept":"application/json","Accept-Encoding":"identity","Content-Length":"96","Content-Type":"application/json;charset=UTF-8","Host":"127.0.0.1:51786","User-Agent":"selenium/3.141.0 (python windows)"},"httpVersion":"1.1","method":"POST","post":"{\"using\": \"xpath\", \"value\": \"/h3/div/span\", \"sessionId\": \"d8e91c70-9139-11e9-a9c9-21561f67b079\"}","url":"/element","urlParsed":{"anchor":"","query":"","file":"element","directory":"/","path":"/element","relative":"/element","port":"","host":"","password":"","user":"","userInfo":"","authority":"","protocol":"","source":"/element","queryKey":{},"chunks":["element"]},"urlOriginal":"/session/d8e91c70-9139-11e9-a9c9-21561f67b079/element"}}
Screenshot: available via screen
I can see that the problem is caused by an incorrect xpath, but I cannot figure out why.

I should also point out that using Selenium seems to me the best way to scrape this site: I intend to extract other values as well, and to repeat these queries for different stocks across a number of pages. If anyone thinks I would be better off using BeautifulSoup, lxml, etc., I am happy to hear suggestions.

Thanks in advance!

You don't even need to declare the full path. The octagon is inside a div whose class is client-components-ValueChange-shape__Octagon, so just search for that div:

x = browser.find_elements_by_css_selector("div[class='client-components-ValueChange-shape__Octagon']") ## Declare which class
for el in x:
    print(el.text)
Output:

6
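One caveat worth adding: the find_elements_by_css_selector helper was removed in Selenium 4, where the call takes a locator strategy instead. A small sketch of the newer spelling (the octagon_texts helper is an illustrative name, not Selenium API):

```python
# Selenium 4 removed find_elements_by_css_selector; the replacement is
# browser.find_elements(By.CSS_SELECTOR, selector), where By.CSS_SELECTOR
# (from selenium.webdriver.common.by) is simply the string "css selector".
OCTAGON_CSS = "div[class='client-components-ValueChange-shape__Octagon']"

def octagon_texts(browser):
    """Return the text of each matching div; `browser` is any live WebDriver."""
    return [el.text for el in browser.find_elements("css selector", OCTAGON_CSS)]
```

With a live driver this would be called as `octagon_texts(browser)` after `browser.get(url)`.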

It looks like you have two problems:

For the xpath, I would simply use:

xpath = '//div[@class="client-components-ValueChange-shape__Octagon"]'

and then do:

print(element.text)

and it will get you the value you want. However, your code does not actually wait until the browser has finished loading the page before executing the xpath. For me, using Firefox, I only got the value about 40% of the time this way. There are many ways to handle this with Selenium; the simplest is probably to sleep for a few seconds between the browser.get and the xpath statement.
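Selenium's usual alternative to a fixed sleep is an explicit wait, e.g. WebDriverWait(browser, 10).until(EC.presence_of_element_located((By.XPATH, xpath))), which polls until the element appears or a timeout expires. The polling mechanism itself is simple; here is a dependency-free sketch of the idea (the wait_for helper is illustrative, not part of Selenium):

```python
import time

def wait_for(probe, timeout=10.0, poll=0.5):
    """Call `probe` repeatedly until it returns a truthy value or `timeout`
    seconds elapse. This is essentially what Selenium's WebDriverWait does
    internally."""
    deadline = time.monotonic() + timeout
    while True:
        result = probe()
        if result:
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError("condition not met within %.1fs" % timeout)
        time.sleep(poll)
```

With Selenium, a suitable probe would be something like `lambda: browser.find_elements_by_xpath(xpath)`, since find_elements returns an empty list (rather than raising) while the element is still missing.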


You also appear to be setting up Firefox but then using PhantomJS. I have not tried this in PhantomJS, which may not need the sleep behaviour.

You could try the css selector [class$='shape__Octagon'] to target the content. If it were me, I would do the following:

import asyncio
from pyppeteer import launch

async def get_content(url):
    browser = await launch({"headless":True})
    [page] = await browser.pages()
    await page.goto(url)
    await page.waitForSelector("[class$='shape__Octagon']")
    value = await page.querySelectorEval("[class$='shape__Octagon']","e => e.innerText")
    return value

if __name__ == "__main__":
    url = "https://www.tipranks.com/stocks/sui/stock-analysis"
    loop = asyncio.get_event_loop()
    result = loop.run_until_complete(get_content(url))
    print(result.strip())
Output:

6

Thanks Matt, this is a really useful answer. The sleep statement does seem to be necessary - is that the only way to handle it across a large number of pages? I suppose I could scrape all of the html and then extract the content from that?

You're very welcome. Yes, I've found it necessary to do something similar for many pages on the web; the more complicated the AJAX work a site is doing, the more necessary it becomes. If by "scrape all the html" you mean query the content as rendered in a proper browser, then yes, that is how I do it. I have tried BeautifulSoup and lxml extensively and settled on Selenium. The reality of the web is that there is so much complicated AJAX content going on that you are better off driving a real browser and then inspecting the fully rendered document, as you are doing here.
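On the "scrape all the html" idea: once the browser has rendered the page, browser.page_source gives you the full document, and any HTML parser can pull values out of it. In practice BeautifulSoup is the convenient choice; purely as a dependency-free sketch, the same extraction with the stdlib html.parser (fed a hard-coded fragment here in place of a live page; the ClassTextExtractor name is mine):

```python
from html.parser import HTMLParser

class ClassTextExtractor(HTMLParser):
    """Collect text from elements whose class attribute ends with `suffix`."""

    def __init__(self, suffix):
        super().__init__()
        self.suffix = suffix
        self._depth = 0   # > 0 while inside a matched element
        self.texts = []

    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class") or ""
        if self._depth:
            self._depth += 1          # nested tag inside a match
        elif cls.endswith(self.suffix):
            self._depth = 1           # entering a matched element

    def handle_endtag(self, tag):
        if self._depth:
            self._depth -= 1

    def handle_data(self, data):
        if self._depth and data.strip():
            self.texts.append(data.strip())

# Hard-coded fragment standing in for browser.page_source:
fragment = '<div class="client-components-ValueChange-shape__Octagon"><tspan>6</tspan></div>'
parser = ClassTextExtractor("shape__Octagon")
parser.feed(fragment)
print(parser.texts)  # -> ['6']
```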