
How to capture video (and audio) from a camera (or webcam) in Python


I'm looking for a solution, on either Linux or Windows, that allows me to

  • record video (+ audio) from my webcam and microphone, simultaneously
  • save it as a file.AVI (or mpg or whatever)
  • display the video on screen while recording it

Compression is not an issue in my case; I actually prefer to capture raw and compress it later.

So far I've done this with an ActiveX component in VB which took care of everything, and I'd like to move to Python (the VB solution is unstable and unreliable).

So far the code I've seen captures only video, or only single frames.

I've looked at the following so far:

  • OpenCV - couldn't find audio capture there
  • PyGame - no simultaneous audio capture (AFAIK)
  • VideoCapture - provides only single frames
  • SimpleCV - no audio
  • VLC - binding the VideoLAN program into wxPython - hopefully it can do it (still investigating this option)
  • kivy - just heard about it, haven't managed to get it working under Windows so far

The question is: is there a video and audio capture library for Python?

Or, what are the other options, if any?

I would suggest ffmpeg. There is a Python wrapper for it.
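
For illustration, here is a minimal sketch of driving ffmpeg from Python by shelling out to the CLI rather than going through a wrapper package. The device names (/dev/video0, the ALSA "default" source, the DirectShow names on Windows) are assumptions that depend on the machine; list your own devices first (e.g. ffmpeg -list_devices true -f dshow -i dummy on Windows).

import subprocess
import sys

def record_av(outfile="capture.avi", seconds=10):
    "Record webcam video + microphone audio via the ffmpeg CLI (device names are placeholders)."
    if sys.platform.startswith("linux"):
        # Video4Linux2 webcam + ALSA microphone
        grab = ["-f", "v4l2", "-i", "/dev/video0", "-f", "alsa", "-i", "default"]
    else:
        # DirectShow on Windows; substitute the names reported by -list_devices
        grab = ["-f", "dshow", "-i", "video=Integrated Camera:audio=Microphone"]
    cmd = ["ffmpeg", "-y"] + grab + [
        "-t", str(seconds),        # stop after N seconds
        "-c:v", "mjpeg", "-q:v", "3",   # intra-only video, easy to re-encode later
        "-c:a", "pcm_s16le",            # uncompressed audio
        outfile]
    subprocess.call(cmd)

if __name__ == "__main__":
    record_av()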


I've been looking for a good answer to this for a while, and I think the answer is GStreamer.

The documentation for the Python bindings is extremely thin, and most of it seems centred around the old 0.10 version of GStreamer rather than the new 1.X versions, but GStreamer is an extremely powerful cross-platform multimedia framework that can stream, mux, transcode, and more.
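
To give a flavour of the 1.X Python bindings, below is a minimal, untested sketch using PyGObject's Gst and a parse_launch pipeline that records webcam video and microphone audio into one Matroska file. The element names (v4l2src, autoaudiosrc, x264enc, vorbisenc) are assumptions and depend on the platform and installed plugins.

import time

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# One pipeline: webcam -> H.264, microphone -> Vorbis, both muxed into out.mkv
pipeline = Gst.parse_launch(
    "v4l2src ! videoconvert ! x264enc tune=zerolatency ! queue ! mux. "
    "autoaudiosrc ! audioconvert ! vorbisenc ! queue ! mux. "
    "matroskamux name=mux ! filesink location=out.mkv"
)
pipeline.set_state(Gst.State.PLAYING)

time.sleep(10)                             # record for ~10 seconds
pipeline.send_event(Gst.Event.new_eos())   # send EOS so the muxer finalises the file
bus = pipeline.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR)
pipeline.set_state(Gst.State.NULL)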

The answer is: no, there is no single library/solution in Python that does simultaneous video/audio recording. You have to implement the two separately and merge the audio and video signals in a smart way to end up with a video/audio file.

I found a solution for the problem you presented. My code addresses your three issues:

  • records video and audio simultaneously from the webcam and microphone
  • saves the final video/audio file as .AVI
  • un-commenting lines 76, 77 and 78 (the cv2.imshow lines) makes the video display on screen while recording
My solution uses pyaudio for the audio recording, opencv for the video recording, and ffmpeg for muxing the two signals. To be able to record both simultaneously, I use multithreading: one thread records the video and a second one the audio. I have uploaded my code to GitHub and included all the essential parts here.

Note: opencv is not able to control the fps at which the webcam records. It can only specify the desired final fps in the encoding of the file, but the webcam usually behaves differently depending on its specifications and the light conditions (I found). So the fps has to be controlled at the code level.

import cv2
import pyaudio
import wave
import threading
import time
import subprocess
import os

class VideoRecorder():  

    # Video class based on openCV 
    def __init__(self):

        self.open = True
        self.device_index = 0
        self.fps = 6               # fps should be the minimum constant rate at which the camera can
        self.fourcc = "MJPG"       # capture images (with no decrease in speed over time; testing is required)
        self.frameSize = (640,480) # video formats and sizes also depend and vary according to the camera used
        self.video_filename = "temp_video.avi"
        self.video_cap = cv2.VideoCapture(self.device_index)
        self.video_writer = cv2.VideoWriter_fourcc(*self.fourcc)
        self.video_out = cv2.VideoWriter(self.video_filename, self.video_writer, self.fps, self.frameSize)
        self.frame_counts = 1
        self.start_time = time.time()


    # Video starts being recorded 
    def record(self):

#       counter = 1
        timer_start = time.time()
        timer_current = 0


        while(self.open==True):
            ret, video_frame = self.video_cap.read()
            if (ret==True):

                    self.video_out.write(video_frame)
#                   print str(counter) + " " + str(self.frame_counts) + " frames written " + str(timer_current)
                    self.frame_counts += 1
#                   counter += 1
#                   timer_current = time.time() - timer_start
                    time.sleep(0.16)          # 0.16 s delay -> ~6 fps
#                   gray = cv2.cvtColor(video_frame, cv2.COLOR_BGR2GRAY)
#                   cv2.imshow('video_frame', gray)
#                   cv2.waitKey(1)
            else:
                break



    # Finishes the video recording therefore the thread too
    def stop(self):

        if self.open==True:

            self.open=False
            self.video_out.release()
            self.video_cap.release()
            cv2.destroyAllWindows()

        else: 
            pass


    # Launches the video recording function using a thread          
    def start(self):
        video_thread = threading.Thread(target=self.record)
        video_thread.start()





class AudioRecorder():


    # Audio class based on pyAudio and Wave
    def __init__(self):

        self.open = True
        self.rate = 44100
        self.frames_per_buffer = 1024
        self.channels = 2
        self.format = pyaudio.paInt16
        self.audio_filename = "temp_audio.wav"
        self.audio = pyaudio.PyAudio()
        self.stream = self.audio.open(format=self.format,
                                      channels=self.channels,
                                      rate=self.rate,
                                      input=True,
                                      frames_per_buffer = self.frames_per_buffer)
        self.audio_frames = []


    # Audio starts being recorded
    def record(self):

        self.stream.start_stream()
        while(self.open == True):
            data = self.stream.read(self.frames_per_buffer) 
            self.audio_frames.append(data)
            if self.open==False:
                break


    # Finishes the audio recording therefore the thread too    
    def stop(self):

        if self.open==True:
            self.open = False
            self.stream.stop_stream()
            self.stream.close()
            self.audio.terminate()

            waveFile = wave.open(self.audio_filename, 'wb')
            waveFile.setnchannels(self.channels)
            waveFile.setsampwidth(self.audio.get_sample_size(self.format))
            waveFile.setframerate(self.rate)
            waveFile.writeframes(b''.join(self.audio_frames))
            waveFile.close()

        pass

    # Launches the audio recording function using a thread
    def start(self):
        audio_thread = threading.Thread(target=self.record)
        audio_thread.start()





def start_AVrecording(filename):

    global video_thread
    global audio_thread

    video_thread = VideoRecorder()
    audio_thread = AudioRecorder()

    audio_thread.start()
    video_thread.start()

    return filename




def start_video_recording(filename):

    global video_thread

    video_thread = VideoRecorder()
    video_thread.start()

    return filename


def start_audio_recording(filename):

    global audio_thread

    audio_thread = AudioRecorder()
    audio_thread.start()

    return filename




def stop_AVrecording(filename):

    audio_thread.stop() 
    frame_counts = video_thread.frame_counts
    elapsed_time = time.time() - video_thread.start_time
    recorded_fps = frame_counts / elapsed_time
    print "total frames " + str(frame_counts)
    print "elapsed time " + str(elapsed_time)
    print "recorded fps " + str(recorded_fps)
    video_thread.stop() 

    # Makes sure the threads have finished
    while threading.active_count() > 1:
        time.sleep(1)


#    Merging audio and video signal

    if abs(recorded_fps - 6) >= 0.01:    # If the fps rate was higher/lower than expected, re-encode it to the expected

        print "Re-encoding"
        cmd = "ffmpeg -r " + str(recorded_fps) + " -i temp_video.avi -pix_fmt yuv420p -r 6 temp_video2.avi"
        subprocess.call(cmd, shell=True)

        print "Muxing"
        cmd = "ffmpeg -ac 2 -channel_layout stereo -i temp_audio.wav -i temp_video2.avi -pix_fmt yuv420p " + filename + ".avi"
        subprocess.call(cmd, shell=True)

    else:

        print "Normal recording\nMuxing"
        cmd = "ffmpeg -ac 2 -channel_layout stereo -i temp_audio.wav -i temp_video.avi -pix_fmt yuv420p " + filename + ".avi"
        subprocess.call(cmd, shell=True)

        print ".."




# Required and wanted processing of final files
def file_manager(filename):

    local_path = os.getcwd()

    if os.path.exists(str(local_path) + "/temp_audio.wav"):
        os.remove(str(local_path) + "/temp_audio.wav")

    if os.path.exists(str(local_path) + "/temp_video.avi"):
        os.remove(str(local_path) + "/temp_video.avi")

    if os.path.exists(str(local_path) + "/temp_video2.avi"):
        os.remove(str(local_path) + "/temp_video2.avi")

    if os.path.exists(str(local_path) + "/" + filename + ".avi"):
        os.remove(str(local_path) + "/" + filename + ".avi")

Regarding the question raised above: yes, the code should also work under Python 3. I tweaked it a little and it now works for both Python 2 and Python 3 (tested on Windows 7 with 2.7 and 3.6). You do need ffmpeg installed, or at least the ffmpeg.exe executable in the same directory, which you can get from the official ffmpeg downloads. You also need all the other libraries, cv2, numpy and pyaudio, installed like so:

pip install opencv-python numpy pyaudio
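
If it's unclear whether ffmpeg is reachable, a quick sanity check along these lines can help (a minimal sketch that just looks for ffmpeg on the PATH or an ffmpeg.exe in the working directory):

import os

def ffmpeg_available():
    "True if an ffmpeg executable is on the PATH or ffmpeg.exe sits in the current directory."
    if os.path.isfile("ffmpeg.exe"):
        return True
    for d in os.environ.get("PATH", "").split(os.pathsep):
        if os.path.isfile(os.path.join(d, "ffmpeg")) or os.path.isfile(os.path.join(d, "ffmpeg.exe")):
            return True
    return False

print("ffmpeg found" if ffmpeg_available() else "ffmpeg missing: install it or put ffmpeg.exe next to this script")
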
The code can then be run directly:

#!/usr/bin/env python
# -*- coding: utf-8 -*-
# VideoRecorder.py

from __future__ import print_function, division
import numpy as np
import cv2
import pyaudio
import wave
import threading
import time
import subprocess
import os

class VideoRecorder():  
    "Video class based on openCV"
    def __init__(self, name="temp_video.avi", fourcc="MJPG", sizex=640, sizey=480, camindex=0, fps=30):
        self.open = True
        self.device_index = camindex
        self.fps = fps                  # fps should be the minimum constant rate at which the camera can
        self.fourcc = fourcc            # capture images (with no decrease in speed over time; testing is required)
        self.frameSize = (sizex, sizey) # video formats and sizes also depend and vary according to the camera used
        self.video_filename = name
        self.video_cap = cv2.VideoCapture(self.device_index)
        self.video_writer = cv2.VideoWriter_fourcc(*self.fourcc)
        self.video_out = cv2.VideoWriter(self.video_filename, self.video_writer, self.fps, self.frameSize)
        self.frame_counts = 1
        self.start_time = time.time()

    def record(self):
        "Video starts being recorded"
        # counter = 1
        timer_start = time.time()
        timer_current = 0
        while self.open:
            ret, video_frame = self.video_cap.read()
            if ret:
                self.video_out.write(video_frame)
                # print(str(counter) + " " + str(self.frame_counts) + " frames written " + str(timer_current))
                self.frame_counts += 1
                # counter += 1
                # timer_current = time.time() - timer_start
                time.sleep(1/self.fps)
                # gray = cv2.cvtColor(video_frame, cv2.COLOR_BGR2GRAY)
                # cv2.imshow('video_frame', gray)
                # cv2.waitKey(1)
            else:
                break

    def stop(self):
        "Finishes the video recording therefore the thread too"
        if self.open:
            self.open=False
            self.video_out.release()
            self.video_cap.release()
            cv2.destroyAllWindows()

    def start(self):
        "Launches the video recording function using a thread"
        video_thread = threading.Thread(target=self.record)
        video_thread.start()

class AudioRecorder():
    "Audio class based on pyAudio and Wave"
    def __init__(self, filename="temp_audio.wav", rate=44100, fpb=1024, channels=2):
        self.open = True
        self.rate = rate
        self.frames_per_buffer = fpb
        self.channels = channels
        self.format = pyaudio.paInt16
        self.audio_filename = filename
        self.audio = pyaudio.PyAudio()
        self.stream = self.audio.open(format=self.format,
                                      channels=self.channels,
                                      rate=self.rate,
                                      input=True,
                                      frames_per_buffer = self.frames_per_buffer)
        self.audio_frames = []

    def record(self):
        "Audio starts being recorded"
        self.stream.start_stream()
        while self.open:
            data = self.stream.read(self.frames_per_buffer) 
            self.audio_frames.append(data)
            if not self.open:
                break

    def stop(self):
        "Finishes the audio recording therefore the thread too"
        if self.open:
            self.open = False
            self.stream.stop_stream()
            self.stream.close()
            self.audio.terminate()
            waveFile = wave.open(self.audio_filename, 'wb')
            waveFile.setnchannels(self.channels)
            waveFile.setsampwidth(self.audio.get_sample_size(self.format))
            waveFile.setframerate(self.rate)
            waveFile.writeframes(b''.join(self.audio_frames))
            waveFile.close()

    def start(self):
        "Launches the audio recording function using a thread"
        audio_thread = threading.Thread(target=self.record)
        audio_thread.start()

def start_AVrecording(filename="test"):
    global video_thread
    global audio_thread
    video_thread = VideoRecorder()
    audio_thread = AudioRecorder()
    audio_thread.start()
    video_thread.start()
    return filename

def start_video_recording(filename="test"):
    global video_thread
    video_thread = VideoRecorder()
    video_thread.start()
    return filename

def start_audio_recording(filename="test"):
    global audio_thread
    audio_thread = AudioRecorder()
    audio_thread.start()
    return filename

def stop_AVrecording(filename="test"):
    audio_thread.stop() 
    frame_counts = video_thread.frame_counts
    elapsed_time = time.time() - video_thread.start_time
    recorded_fps = frame_counts / elapsed_time
    print("total frames " + str(frame_counts))
    print("elapsed time " + str(elapsed_time))
    print("recorded fps " + str(recorded_fps))
    video_thread.stop() 

    # Makes sure the threads have finished
    while threading.active_count() > 1:
        time.sleep(1)

    # Merging audio and video signal
    if abs(recorded_fps - video_thread.fps) >= 0.01:    # If the actual rate differs from the target fps, re-encode to the target
        print("Re-encoding")
        cmd = "ffmpeg -y -r " + str(recorded_fps) + " -i temp_video.avi -pix_fmt yuv420p -r " + str(video_thread.fps) + " temp_video2.avi"
        subprocess.call(cmd, shell=True)
        print("Muxing")
        cmd = "ffmpeg -y -ac 2 -channel_layout stereo -i temp_audio.wav -i temp_video2.avi -pix_fmt yuv420p " + filename + ".avi"
        subprocess.call(cmd, shell=True)
    else:
        print("Normal recording\nMuxing")
        cmd = "ffmpeg -y -ac 2 -channel_layout stereo -i temp_audio.wav -i temp_video.avi -pix_fmt yuv420p " + filename + ".avi"
        subprocess.call(cmd, shell=True)
        print("..")

def file_manager(filename="test"):
    "Required and wanted processing of final files"
    local_path = os.getcwd()
    if os.path.exists(str(local_path) + "/temp_audio.wav"):
        os.remove(str(local_path) + "/temp_audio.wav")
    if os.path.exists(str(local_path) + "/temp_video.avi"):
        os.remove(str(local_path) + "/temp_video.avi")
    if os.path.exists(str(local_path) + "/temp_video2.avi"):
        os.remove(str(local_path) + "/temp_video2.avi")
    # if os.path.exists(str(local_path) + "/" + filename + ".avi"):
    #     os.remove(str(local_path) + "/" + filename + ".avi")

if __name__ == '__main__':
    start_AVrecording()
    time.sleep(5)
    stop_AVrecording()
    file_manager()

Comments on the solution above:

  • I don't see how ffmpeg could also be used to display the video while it is being recorded (e.g. VLC can be embedded into wxPython, or run as a standalone borderless window).
  • Do you know whether this works under Python 3.x (ideally 3.4)? And what about macOS support? I don't know whether the libraries used here also work with Python 3.
  • This solution covers the issues and needs of the original question very tightly. For a really good solution to video/audio recording with Python, though, look into ffmpeg; the real magic and the best solution are there.
  • On Linux, pyaudio may need the PortAudio/ALSA development packages: sudo apt-get install libasound-dev portaudio19-dev libportaudio2 libportaudiocpp0

I used JRodrigoF's script for a while on a project. However, I noticed that sometimes the threads would hang, which caused the program to crash. Another issue is that openCV does not capture video frames at a reliable rate, and ffmpeg would distort my video when re-encoding it.

I came up with a new solution that records much more reliably and at much higher quality for my application. It currently only works for Windows because it uses pywinauto and the built-in Windows Camera app. The last bit of the script does some error checking to confirm the video was recorded successfully, by checking the timestamp in the video's file name.
import pywinauto
import time
import subprocess
import os
import datetime


def win_record(duration):
    subprocess.run('start microsoft.windows.camera:', shell=True)  # open camera app

    # focus window by getting handle using title and class name
    # subprocess call opens camera and gets focus, but this provides alternate way
    # t, c = 'Camera', 'ApplicationFrameWindow'
    # handle = pywinauto.findwindows.find_windows(title=t, class_name=c)[0]
    # # get app and window
    # app = pywinauto.application.Application().connect(handle=handle)
    # window = app.window(handle=handle)
    # window.set_focus()  # set focus
    time.sleep(2)  # have to sleep

    # take control of camera window to take video
    desktop = pywinauto.Desktop(backend="uia")
    cam = desktop['Camera']
    # cam.print_control_identifiers()
    # make sure in video mode
    if cam.child_window(title="Switch to Video mode", auto_id="CaptureButton_1", control_type="Button").exists():
        cam.child_window(title="Switch to Video mode", auto_id="CaptureButton_1", control_type="Button").click()
    time.sleep(1)
    # start then stop video
    cam.child_window(title="Take Video", auto_id="CaptureButton_1", control_type="Button").click()
    time.sleep(duration+2)
    cam.child_window(title="Stop taking Video", auto_id="CaptureButton_1", control_type="Button").click()

    # retrieve vids from camera roll and sort
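    # NOTE: this path is specific to the original author's machine; point it at the local user's Camera Roll folder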
    dir = 'C:/Users/m/Pictures/Camera Roll'
    all_contents = list(os.listdir(dir))
    vids = [f for f in all_contents if "_Pro.mp4" in f]
    vids.sort()
    vid = vids[-1]  # get last vid
    # compute time difference
    vid_time = vid.replace('WIN_', '').replace('_Pro.mp4', '')
    vid_time = datetime.datetime.strptime(vid_time, '%Y%m%d_%H_%M_%S')
    now = datetime.datetime.now()
    diff = now - vid_time
    # time different greater than 2 minutes, assume something wrong & quit
    if diff.seconds > 120:
        quit()
    
    subprocess.run('Taskkill /IM WindowsCamera.exe /F', shell=True)  # close camera app
    print('Recorded successfully!')


win_record(2)