Python: synchronizing audio and video with OpenCV and PyAudio


I have both OpenCV and PyAudio working, but I'm not sure how to sync them together. I can't get a frame rate from OpenCV, and timing the call to read a frame changes from moment to moment. PyAudio, however, works by fetching samples at a fixed rate. How can I sync the two to the same rate? I assume codecs have some standard, or some way of doing this. (I've tried Google and all I get is information about lip syncing :/)

OpenCV frame rate

from __future__ import division
import time
import math
import cv2

vc = cv2.VideoCapture(0)
# get the frame
while True:

    before_read = time.time()
    rval, frame = vc.read()
    after_read  = time.time()
    if frame is not None:
        print(len(frame))
        print(math.ceil(1.0 / (after_read - before_read)))
        cv2.imshow("preview", frame)

        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    else:
        print("None...")
        cv2.waitKey(1)

# display the frame

while True:
    cv2.imshow("preview", frame)

    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
Grabbing and saving audio

from sys import byteorder
from array import array
from struct import pack

import pyaudio
import wave

THRESHOLD = 500
CHUNK_SIZE = 1024
FORMAT = pyaudio.paInt16
RATE = 44100

def is_silent(snd_data):
    "Returns 'True' if below the 'silent' threshold"
    print(max(snd_data))
    return max(snd_data) < THRESHOLD

def normalize(snd_data):
    "Average the volume out"
    MAXIMUM = 16384
    times = float(MAXIMUM)/max(abs(i) for i in snd_data)

    r = array('h')
    for i in snd_data:
        r.append(int(i*times))
    return r

def trim(snd_data):
    "Trim the blank spots at the start and end"
    def _trim(snd_data):
        snd_started = False
        r = array('h')

        for i in snd_data:
            if not snd_started and abs(i)>THRESHOLD:
                snd_started = True
                r.append(i)

            elif snd_started:
                r.append(i)
        return r

    # Trim to the left
    snd_data = _trim(snd_data)

    # Trim to the right
    snd_data.reverse()
    snd_data = _trim(snd_data)
    snd_data.reverse()
    return snd_data

def add_silence(snd_data, seconds):
    "Add silence to the start and end of 'snd_data' of length 'seconds' (float)"
    r = array('h', [0 for i in range(int(seconds*RATE))])
    r.extend(snd_data)
    r.extend([0 for i in range(int(seconds*RATE))])
    return r

def record():
    """
    Record a word or words from the microphone and 
    return the data as an array of signed shorts.

    Normalizes the audio, trims silence from the 
    start and end, and pads with 0.5 seconds of 
    blank sound to make sure VLC et al can play 
    it without getting chopped off.
    """
    p = pyaudio.PyAudio()
    stream = p.open(format=FORMAT, channels=1, rate=RATE,
        input=True, output=True,
        frames_per_buffer=CHUNK_SIZE)

    num_silent = 0
    snd_started = False

    r = array('h')

    while True:
        # little endian, signed short
        snd_data = array('h', stream.read(CHUNK_SIZE))
        if byteorder == 'big':
            snd_data.byteswap()

        print(len(snd_data))
        print(snd_data)

        r.extend(snd_data)

        silent = is_silent(snd_data)

        if silent and snd_started:
            num_silent += 1
        elif not silent and not snd_started:
            snd_started = True

        if snd_started and num_silent > 1:
            break

    sample_width = p.get_sample_size(FORMAT)
    stream.stop_stream()
    stream.close()
    p.terminate()

    r = normalize(r)
    r = trim(r)
    r = add_silence(r, 0.5)
    return sample_width, r

def record_to_file(path):
    "Records from the microphone and outputs the resulting data to 'path'"
    sample_width, data = record()
    data = pack('<' + ('h'*len(data)), *data)

    wf = wave.open(path, 'wb')
    wf.setnchannels(1)
    wf.setsampwidth(sample_width)
    wf.setframerate(RATE)
    wf.writeframes(data)
    wf.close()

if __name__ == '__main__':
    print("please speak a word into the microphone")
    record_to_file('demo.wav')
    print("done - result written to demo.wav")
data=pack(“我认为您最好使用GSreamer或ffmpeg,或者如果您使用Windows,则使用DirectShow。这些LIB可以处理音频和视频,并且应该具有某种多路复用器,以允许您正确混合视频和音频

But if you really want to do this with OpenCV, you should be able to use
VideoCapture
to get the frame rate. Have you tried it?
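For reference, modern cv2 exposes the driver-reported rate via `VideoCapture.get(cv2.CAP_PROP_FPS)`. Many webcam drivers report 0.0 there, so a sketch with a fallback guard (the helper name is mine, not from the thread):

```python
def effective_fps(reported_fps, fallback=30.0):
    """Return a usable FPS value, substituting `fallback` when the driver
    reports 0 (common for webcams, where CAP_PROP_FPS is often unpopulated)."""
    return reported_fps if reported_fps > 0 else fallback

# Typical use (requires a camera):
#   vc = cv2.VideoCapture(0)
#   fps = effective_fps(vc.get(cv2.CAP_PROP_FPS))
```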

Another approach is to estimate the fps as the number of frames divided by the duration:

import cv2.cv as cv  # legacy OpenCV 2.x API

nFrames  = cv.GetCaptureProperty(vc, cv.CV_CAP_PROP_FRAME_COUNT)
cv.SetCaptureProperty(vc, cv.CV_CAP_PROP_POS_AVI_RATIO, 1)  # seek to the end
duration = cv.GetCaptureProperty(vc, cv.CV_CAP_PROP_POS_MSEC)
fps = 1000 * nFrames / duration
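The same estimate in plain arithmetic; note that the capture position is reported in milliseconds, hence the factor of 1000, and that a frame count is only meaningful for a video file, not a live camera. The helper is a sketch, not from the answer:

```python
def estimate_fps(n_frames, duration_ms):
    # fps = frames / seconds; the capture reports its position in milliseconds
    if duration_ms <= 0:
        raise ValueError("duration_ms must be positive")
    return 1000.0 * n_frames / duration_ms
```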
I'm not sure I understand what you're trying to do here:

before_read = time.time()
rval, frame = vc.read()
after_read  = time.time()

It looks to me like after_read - before_read only measures how long it takes OpenCV to load the next frame; it doesn't measure the fps. OpenCV isn't attempting playback, it's only loading frames, and it will try to do so as fast as possible; I don't think there is a way to configure that. I think putting a waitKey(1/fps) after displaying each frame will achieve what you're looking for.
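One caveat with that suggestion: cv2.waitKey takes an integer delay in milliseconds, so 1/fps seconds must be converted first. A minimal sketch, with a hypothetical helper name:

```python
def frame_delay_ms(fps):
    # cv2.waitKey expects whole milliseconds; clamp to at least 1,
    # because waitKey(0) blocks forever waiting for a keypress
    return max(1, int(round(1000.0 / fps)))

# e.g. cv2.waitKey(frame_delay_ms(30)) after each cv2.imshow call
```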

I personally used threading for this:

import concurrent.futures
import pyaudio
import cv2

class Aud_Vid():

    def __init__(self, arg):
        self.video = cv2.VideoCapture(0)
        self.CHUNK = 1470
        self.FORMAT = pyaudio.paInt16
        self.CHANNELS = 2
        self.RATE = 44100
        self.audio = pyaudio.PyAudio()
        self.instream = self.audio.open(format=self.FORMAT, channels=self.CHANNELS,
                                        rate=self.RATE, input=True,
                                        frames_per_buffer=self.CHUNK)
        self.outstream = self.audio.open(format=self.FORMAT, channels=self.CHANNELS,
                                         rate=self.RATE, output=True,
                                         frames_per_buffer=self.CHUNK)

    def sync(self):
        # read one video frame and one audio chunk concurrently
        with concurrent.futures.ThreadPoolExecutor() as executor:
            tv = executor.submit(self.video.read)
            ta = executor.submit(self.instream.read, self.CHUNK)
            vid = tv.result()
            aud = ta.result()
            return (vid[1].tobytes(), aud)
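The CHUNK of 1470 looks arbitrary but probably isn't: at 44100 Hz, 1470 samples is exactly 1/30 of a second of audio, so each sync() call pairs one audio buffer with one frame of roughly 30 fps video. A sketch of that relationship (my helper, assuming the rate divides evenly):

```python
def chunk_for_fps(rate, fps):
    # samples per video frame, so one audio read spans one frame's duration
    return rate // fps
```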

You can have two counters, one for audio and one for video. The video counter advances by +(1/fps) whenever an image is displayed, and the audio counter by +sec each time you write sec seconds of audio to the stream. Then in the audio part of the code you can do something like:

    while audiosec - videosec >= 0.05:   # audio is ahead
        time.sleep(0.05)

and in the video part:

    while videosec - audiosec >= 0.2:    # video is ahead
        time.sleep(0.2)

You can play with the numbers.

This is how I achieved a sort of synchronization on my own video-player project using pyaudio with ffmpeg instead of cv2.
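The drift-correction rule above can be condensed into a single helper that each loop consults before its next iteration. The counter names and thresholds follow the answer; the function itself is my sketch:

```python
def sleep_needed(audio_sec, video_sec, audio_lead=0.05, video_lead=0.2):
    """Return how long the stream that is currently ahead should sleep,
    or 0.0 if the two counters are within tolerance."""
    if audio_sec - video_sec >= audio_lead:   # audio is ahead
        return audio_lead
    if video_sec - audio_sec >= video_lead:   # video is ahead
        return video_lead
    return 0.0
```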

If you have a working
pyffmpeg
installed, you could try using
ffmpeg
's video (and audio) display capabilities instead of doing the video display with OpenCV.

Although this is quite late: I didn't use GStreamer, because I had some specific goals to achieve and had run into problems with GStreamer in the past, hence the threaded approach shown above.