Python tkinter + OpenCV webcam: very slow video stream

Tags: python, opencv, tkinter, raspberry-pi

This is my first time using tkinter and OpenCV, and I've successfully built a GUI for my project. However, I can't figure out why my video stream updates so slowly. I grab frames very quickly, but the updates on screen seem to slow down exponentially. When I first start the program I see a delay of about 30 seconds, and it eventually slows to a halt. I have three cameras connected, but only one is displayed at a time. All of the cameras display and the selection buttons work; my only problem is the display refresh rate.

This is running on a Raspberry Pi 4 with Python 3.7. I can connect to the cameras through a web browser and there appears to be no lag.

I've been searching for an answer but can't seem to find anything that helps. Can anyone offer some assistance?

Here is my program (I've removed the irrelevant code):

Note: in this copy of the program I update every 250 ms, but I've tried smaller numbers, down to around 3, and the frames still seem to lag behind. Is there a better way to do this?
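A minimal sketch of one thing to try before polling faster: ask OpenCV to keep a smaller internal buffer so that read() returns a fresher frame. Whether the property is honored depends on the capture backend, so treat it as a best-effort hint rather than a guaranteed fix; the RTSP address is just the front-camera URL from the program below.

import cv2

cap = cv2.VideoCapture("rtsp://admin:password@192.168.5.20:8554/12")

#Ask the backend to keep at most one buffered frame. Some backends
#(e.g. certain FFmpeg/GStreamer builds) ignore this property entirely.
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)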

Note 2: After digging into this further today, I realized that OpenCV must start buffering frames for each camera as soon as cv2.VideoCapture() is called for it. The read() function appears to pull the next frame from that buffer, which explains why the updates take so long and why the image I see on screen never catches up to reality. I changed my test code to connect to only one camera at a time and to call cv2.release() when a camera is not being actively viewed, which improved things considerably. I also set the update function to run every 1 ms and used grab() to fetch a frame on every cycle, but I only process and display a frame every 10th cycle, which also helped somewhat. I still have some lag that I'd love to eliminate if anyone has suggestions.
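A minimal sketch of the grab-and-decimate approach described in the note above, assuming a single capture object self.cap and a cycle counter self.cycle (both illustrative names, not part of the original program):

    def updateCams(self):
        #Pull one frame off the buffer every cycle so it cannot back up
        self.cap.grab()

        #Only decode and display every 10th frame
        self.cycle = (self.cycle + 1) % 10
        if self.cycle == 0:
            ret, frame = self.cap.retrieve()
            if ret:
                img = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
                imgPhoto = ImageTk.PhotoImage(image=img)
                self.video.imgPhoto = imgPhoto
                self.video.configure(image=imgPhoto)

        #Re-run every 1 ms, as in the note
        self.window.after(1, self.updateCams)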


My RTSP streams show zero noticeable lag when viewed in a web browser. Does anyone know how I can get the same result in tkinter? I'm not married to OpenCV.
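One common way to keep the displayed frame current, regardless of the GUI update rate, is to read the stream continuously on a background thread and keep only the newest frame; the tkinter after() callback then just shows whatever frame is latest. This is a sketch under that assumption, not code from the question; the class and method names are illustrative.

import threading
import cv2

class LatestFrameReader:
    #Continuously read frames on a background thread and keep only the newest one
    def __init__(self, address):
        self.cap = cv2.VideoCapture(address)
        self.frame = None
        self.lock = threading.Lock()
        self.running = True
        threading.Thread(target=self._reader, daemon=True).start()

    def _reader(self):
        while self.running:
            #read() blocks until the next frame arrives, so the buffer never backs up
            ret, frame = self.cap.read()
            if ret:
                with self.lock:
                    self.frame = frame

    def get_frame(self):
        #Newest frame seen so far, or None before the first frame arrives
        with self.lock:
            return self.frame

    def stop(self):
        self.running = False   #the reader loop exits after its current read()
        self.cap.release()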

Comments:

You read one frame from the camera every 250 ms, which isn't fast enough. And even if you increase the frequency, don't expect too much.

@acw1668 I've dropped it down to 1 ms and it doesn't change much. In fact, at around 1 ms the GUI starts to bog down completely. I don't need a fast frame rate, I just need the frames to be current. Right now it seems the frames are being buffered and I'm only peeling one frame off the buffer at a time, instead of getting a live frame straight from the camera.

Using your code I have no problem (although I only have one camera, not three). I can get the current frame (perhaps with a slight delay).

I created a stripped-down version of the code with just one camera and I still see a decent delay, maybe 2 seconds. I haven't studied OpenCV, so I don't fully understand what it's doing under the hood. I thought the read() function grabbed a frame from the incoming stream, but that doesn't seem to be the case.
#!/usr/bin/env python3

import time
from tkinter import *
import cv2
from PIL import Image, ImageTk


#GUI
class robotGUI:

    def __init__(self):

        self.selectedCam = "front"
        self.window = Tk()

        #Setup the window to fit the Raspberry Pi Touch Display = 800x480 and align top left
        self.window.geometry("800x480+0+0")
        self.window.overrideredirect(True)
        self.window.fullScreenState = False

        #Create Frame for Video Window
        self.videoFrame = Frame(self.window, relief=SUNKEN, bd=2)
        self.videoFrame.place(x=0, y=0, height=457, width=650)

        #Create the Video Window
        self.video = Label(self.videoFrame, bd=0, relief=FLAT, width=644, height=451)
        self.video.place(x=0, y=0)
        self.vid = VideoCapture()
        self.camUpdateFreq = 250
        self.updateCams()

        #Create the Button Frame
        self.buttonFrame = Frame(self.window, relief=FLAT)
        self.buttonFrame.place(x=651, y=0, height=457, width=149)

        #Create Buttons
        #Select Front Camera Button
        self.frontCamButton = Button(self.buttonFrame, text="Front Camera", command=lambda: self.selectCam("front"))
        self.frontCamButton.place(x=24, y=50, height=30, width=100)

        #Select Boom Camera Button
        self.boomCamButton = Button(self.buttonFrame, text="Boom Camera", command=lambda: self.selectCam("boom"))
        self.boomCamButton.place(x=24, y=130, height=30, width=100)

        #Select Rear Camera Button
        self.rearCamButton = Button(self.buttonFrame, text="Rear Camera", command=lambda: self.selectCam("rear"))
        self.rearCamButton.place(x=24, y=210, height=30, width=100)

        #Close Button
        self.exitButton = Button(self.buttonFrame, text="Close", command=self.window.destroy)
        self.exitButton.place(x=24, y=400, height=30, width=100)

        #Status bar label referenced by selectCam(); the original status-bar code was
        #trimmed from this listing, so a minimal placeholder is recreated here
        self.statusBarLeft = Label(self.window, bd=1, relief=SUNKEN, anchor=W)
        self.statusBarLeft.place(x=0, y=457, height=23, width=800)

        #Start the main loop for the gui
        self.window.mainloop()

    def selectCam(self, cam):

        if (cam.lower() == "front"):
            self.selectedCam = "front"
            self.statusBarLeft['text'] = "Front Camera Selected"
        elif (cam.lower() == "boom"):
            self.selectedCam = "boom"
            self.statusBarLeft['text'] = "Boom Camera Selected"
        elif (cam.lower() == "rear"):
            self.selectedCam = "rear"
            self.statusBarLeft['text'] = "Rear Camera Selected"

    def updateCams(self):

        #Get a frame from the selected camera
        ret, frame = self.vid.get_frame(self.selectedCam)

        if ret:
            imageCV2 = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            img = Image.fromarray(imageCV2)
            imgPhoto = ImageTk.PhotoImage(image=img)
            self.video.imgPhoto = imgPhoto
            self.video.configure(image=imgPhoto)

        self.window.after(self.camUpdateFreq, self.updateCams)
        


#Video Camera Class
class VideoCapture:

    def __init__(self):

        #Define Locals
        FrontCameraAddress = "rtsp://admin:password@192.168.5.20:8554/12"
        BoomCameraAddress = "rtsp://admin:password@192.168.5.21:8554/12"
        RearCameraAddress = "rtsp://admin:password@192.168.5.22:8554/12"

        #Open Front Video Camera Source
        self.vidFront = cv2.VideoCapture(FrontCameraAddress)
        self.vidBoom = cv2.VideoCapture(BoomCameraAddress)
        self.vidRear = cv2.VideoCapture(RearCameraAddress)

        #Verify that the Camera Streams Opened
        if not self.vidFront.isOpened():
            raise ValueError("Unable to open video source to Front Camera")

        if not self.vidBoom.isOpened():
            raise ValueError("Unable to open video source to Boom Camera")

        if not self.vidRear.isOpened():
            raise ValueError("Unable to open video source to Rear Camera")

    #Get One Frame from the Selected Camera
    def get_frame(self, camera="front"):

        #Attempt to Get Front Camera Frame
        if (camera.lower() == "front"):

            #If Stream Still Open Return a Frame
            if self.vidFront.isOpened():
                ret, frame = self.vidFront.read()
                if ret:
                    #Return a boolean success flag and the current frame (in BGR order)
                    return (ret, frame)
                else:
                    return (ret, None)
            else:
                #Stream closed, so no read was attempted and there is no ret to return
                return (False, None)

        #Attempt to Get Boom Camera Frame
        elif (camera.lower() == "boom"):

            #If Stream Still Open Return a Frame
            if self.vidBoom.isOpened():
                ret, frame = self.vidBoom.read()
                if ret:
                    #Return a boolean success flag and the current frame (in BGR order)
                    return (ret, frame)
                else:
                    return (ret, None)
            else:
                #Stream closed, so no read was attempted and there is no ret to return
                return (False, None)

        #Attempt to Get Rear Camera Frame
        elif (camera.lower() == "rear"):

            #If Stream Still Open Return a Frame
            if self.vidRear.isOpened():
                ret, frame = self.vidRear.read()
                if ret:
                    #Return a boolean success flag and the current frame (in BGR order)
                    return (ret, frame)
                else:
                    return (ret, None)
            else:
                #Stream closed, so no read was attempted and there is no ret to return
                return (False, None)

        else:
            return (False, None)

    #Release the video sources when the object is destroyed
    def __del__(self):
        if self.vidFront.isOpened():
            self.vidFront.release()
        if self.vidBoom.isOpened():
            self.vidBoom.release()
        if self.vidRear.isOpened():
            self.vidRear.release()
            

#Main Routine  - Only run if called from main program instance
if __name__ == '__main__':

    try:

        #Create GUI Object
        app = robotGUI()            

    except Exception as e:
        print("Exception: " + str(e))

    finally:
        print("Cleaning Up")