Increasing FPS from a 720p USB camera with Python on a Raspberry Pi 3

Tags: python, linux, multithreading, raspberry-pi, jpeg

I wrote some Python code to open a USB camera and grab frames from it. I use the code for HTTP streaming. For JPEG encoding I use the libjpeg-turbo library. I'm running a 64-bit OS:

product: Raspberry Pi 3 Model B Rev 1.2
serial: 00000000f9307746
width: 64 bits
capabilities: smp cp15_barrier setend swp
I ran some tests at different resolutions:

Resolution   FPS   Time for encode (s)
640 x 480     ~35       ~0.01
1280 x 720    ~17       ~0.028
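A quick sanity check on the 1280 x 720 row (my own arithmetic, not part of the original measurements): the encode time alone would allow roughly 35 fps, so encoding by itself cannot explain the drop to ~17 fps; most of the per-frame budget is spent outside the encoder.

```python
# Back-of-the-envelope check of the 1280x720 row above.
encode_time = 0.028          # seconds per frame spent in jpeg.encode()
observed_fps = 17.0          # measured frames per second

frame_budget = 1.0 / observed_fps      # total time actually spent per frame
encode_only_fps = 1.0 / encode_time    # fps ceiling if encoding were the only cost
other_time = frame_budget - encode_time  # time spent outside encoding (capture/decode)

print("frame budget:     %.3f s" % frame_budget)
print("encode-only cap:  %.1f fps" % encode_only_fps)
print("non-encode time:  %.3f s" % other_time)
```

The non-encode time per frame (~0.031 s) exceeds the encode time itself, which is consistent with the accepted answer below: the hidden JPEG decode on `.img` access dominates.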
Here is my code:

import time
import os
import re
import uvc
from turbojpeg import TurboJPEG, TJPF_GRAY, TJSAMP_GRAY
jpeg = TurboJPEG("/opt/libjpeg-turbo/lib64/libturbojpeg.so")
camera = None

import numpy as np
from threading import Thread

class ProcessJPG(Thread):

    def __init__(self, data):
        self.jpeg_data = None
        self.data = data
        super(ProcessJPG, self).__init__()

    def run(self):
        self.jpeg_data = jpeg.encode(self.data)

dev_list = uvc.device_list()
print("devices: ", dev_list)
camera = uvc.Capture(dev_list[1]['uid'])
camera.frame_size = camera.frame_sizes[2]  # set 1280 x 720
camera.frame_rate = camera.frame_rates[0]  # set 30 fps

class GetFrame(Thread):
    def __init__(self):
        self.frame = None
        super(GetFrame, self).__init__()
    def run(self):
        self.frame = camera.get_frame()

_fps = -1
count_to_fps = 0
_real_fps = 0
from time import time
_real_fps = ""
cfps_time = time()

while True:
    if camera:
        t = GetFrame()
        t.start()
        t.join()
        img = t.frame
        timestamp = img.timestamp
        img = img.img
        ret = 1
    t_start = time()
    t = ProcessJPG(img)
    t.start()
    t.join()
    jpg = t.jpeg_data
    t_end = time()
    print(t_end - t_start)
    count_to_fps += 1
    if count_to_fps >= _fps:
        t_to_fps = time() - cfps_time
        _real_fps = 1.0 / t_to_fps
        cfps_time = time()
        count_to_fps = 0
    print("FPS, ", _real_fps)
The encoding line is:
jpeg.encode(self.data)

My question is: is it possible to increase the resolution to 1280 x 720 at, say, 30 fps, or should I use a more powerful device? When I watch htop during processing, the CPU is not at 100% usage.

EDIT: camera formats:

[video4linux2,v4l2 @ 0xa705c0] Raw       :     yuyv422 :           YUYV 4:2:2 : 640x480 1280x720 960x544 800x448 640x360 424x240 352x288 320x240 800x600 176x144 160x120 1280x800
[video4linux2,v4l2 @ 0xa705c0] Compressed:       mjpeg :          Motion-JPEG : 640x480 1280x720 960x544 800x448 640x360 800x600 416x240 352x288 176x144 320x240 160x120

This is possible, and you don't need more powerful hardware.

The Capture instance will always grab compressed frames from the camera. When your code accesses the
.img
property, that property calls
jpeg2yuv
under the hood (see the pyuvc source), and then you re-encode the result with
jpeg.encode()
. Try using
frame.jpeg_buffer
after grabbing instead, and don't touch
.img

I took a quick look at pyuvc on an RPi2 and made a simplified example:

import uvc
import time

def main():
    dev_list = uvc.device_list()
    cap = uvc.Capture(dev_list[0]["uid"])
    cap.frame_mode = (1280, 720, 30)
    tlast = time.time()
    for x in range(100):
        frame = cap.get_frame_robust()
        jpeg = frame.jpeg_buffer
        print("%s (%d bytes)" % (type(jpeg), len(jpeg)))
        #img = frame.img
        tnow = time.time()
        print("%.3f" % (tnow - tlast))
        tlast = tnow
    cap = None

main()

I get about 0.033 s per frame, i.e. ~30 fps at ~8% CPU. If I uncomment the
#img = frame.img
line, it goes up to ~0.054 s per frame (~18 fps) at 99% CPU, because the decode time limits the capture rate.
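One of the comments below asks how to save a `jpeg_buffer` to a single file. Since the buffer should already be a complete JPEG bitstream (assuming the camera's MJPEG frames include Huffman tables; the comments note that some streams omit them), writing the raw bytes is enough. A minimal sketch, with a stand-in payload in place of a real `frame.jpeg_buffer`:

```python
import os
import tempfile

def save_jpeg(buf, path):
    # jpeg_buffer already holds a complete JPEG bitstream, so a raw write suffices
    with open(path, "wb") as f:
        f.write(bytes(buf))  # bytes() also accepts buffer-like objects

# Stand-in payload: SOI marker, filler, EOI marker.
# A real payload would come from frame.jpeg_buffer.
fake_jpeg = b"\xff\xd8" + b"\x00" * 16 + b"\xff\xd9"
path = os.path.join(tempfile.gettempdir(), "frame.jpg")
save_jpeg(fake_jpeg, path)
print(open(path, "rb").read(2))  # the JPEG SOI marker bytes
```

With a real `frame.jpeg_buffer`, the resulting file should open in any image viewer, provided the Huffman tables are present.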

- Can you enumerate the formats the camera can capture? You may be able to configure it to return JPEG-encoded frames.
- @jamieguinan I checked this and updated the question. There are two format groups: raw and compressed. So the question is: is it possible to open the camera in compressed mode?
- It's definitely possible at the C level: I have a Logitech C310 that can deliver MJPEG frames at 1280x720@25fps on an RPi2. But I don't know how (or whether) the Python library exposes setting the frame format. Which library does your `import uvc` refer to? Also, have you tried skipping the ProcessJPG() step and doing something like `print(dir(img))`? The code may be doing a redundant decompress -> compress.
- I'm using the Python bindings for the libuvc library, called pyuvc. Output of dir: `[…, '__new__', '__pyx_vtable__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__setstate__', '__sizeof__', '__str__', 'bandwidth_factor', 'close', 'controls', 'frame_mode', 'frame_rate', 'frame_rates', 'frame_size', 'frame_sizes', 'get_frame', 'get_frame_robust', 'name', 'stop_stream']`
- In the source code I see `UVC_FRAME_FORMAT_COMPRESSED` (full list there); wherever you decode the JPEG frames, you may want to check for that.
- So when I want to display the jpeg_buffer, do I have to convert it to an array and reformat it?
- That depends on where and how you display it. If you send frames over HTTP, you can send the jpeg_buffer data as-is; as things stand, though, MJPEG streams often lack Huffman tables. If you don't mind my asking, what will the receiving end be? More Python code?
- When I stream frame.jpeg_buffer to the browser, nothing is displayed. I tried reshaping the jpeg buffer to (720, 1280, 3), but I get `cannot reshape array of size 125576 into shape (720,1280,3)`. How would I, for example, save a `jpeg_buffer` to a single file? Or, if I stream `frame.jpeg_buffer`, how can the client display it?
- I don't think a JPEG is reshapeable. For BGR (close to RGB) you can simply access the `.bgr` property instead, without recompressing. If you're serving over HTTP, you need to add some headers and `multipart/x-mixed-replace` content; you can probably adapt from there.
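Following up on the `multipart/x-mixed-replace` hint in the last comment, here is a minimal sketch of the MJPEG framing a browser expects over HTTP. The boundary token and the helper names are my own choices, and `jpeg_bytes` stands in for a real `frame.jpeg_buffer`:

```python
BOUNDARY = b"frame"  # arbitrary boundary token; any string absent from the payload works

def mjpeg_headers():
    # Initial HTTP response headers telling the browser to keep
    # replacing the displayed image with each new multipart chunk.
    return (b"HTTP/1.1 200 OK\r\n"
            b"Content-Type: multipart/x-mixed-replace; boundary=" + BOUNDARY +
            b"\r\n\r\n")

def mjpeg_part(jpeg_bytes):
    # One multipart chunk wrapping a single JPEG frame.
    return (b"--" + BOUNDARY + b"\r\n"
            b"Content-Type: image/jpeg\r\n"
            b"Content-Length: " + str(len(jpeg_bytes)).encode("ascii") + b"\r\n\r\n"
            + jpeg_bytes + b"\r\n")

# Example with a stand-in payload (a real server would pass frame.jpeg_buffer):
part = mjpeg_part(b"\xff\xd8\xff\xd9")
print(len(part), "bytes in one multipart chunk")
```

A server would send `mjpeg_headers()` once per connection, then one `mjpeg_part(frame.jpeg_buffer)` per captured frame over the same socket; whether the browser renders it still depends on the Huffman-table caveat raised in the comments.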