
Drawing a real-time pitch tracking line in Python

Tags: python, plot, pyaudio, aubio

I wrote this code to capture vocals from the microphone with pyaudio and aubio and find their fundamental frequency:

import aubio
from aubio import pitch
import queue
import music21
import pyaudio
import numpy as np
# Open stream.
# PyAudio object.
p = pyaudio.PyAudio()
stream = p.open(format=pyaudio.paFloat32,
                channels=1, rate=44100, input=True,
                input_device_index=0, frames_per_buffer=512)

q = queue.Queue()  
current_pitch = music21.pitch.Pitch()

filename = 'piano.wav'
samplerate = 44100

win_s = 512 
hop_s = 512 

tolerance = 0.8

pitch_o = pitch("default", win_s, hop_s, samplerate)
#pitch_o.set_unit("")
pitch_o.set_tolerance(tolerance)

# total number of frames read
total_frames = 0
def get_current_note():
    pitches = []
    confidences = []
    current_pitch = music21.pitch.Pitch()

    while True:
        data = stream.read(hop_s, exception_on_overflow=False)
        # np.fromstring is deprecated/removed in recent NumPy; frombuffer
        # reads the raw float32 samples without copying.
        samples = np.frombuffer(data, dtype=aubio.float_type)
        # Rename the local so it no longer shadows the imported aubio.pitch.
        pitch_hz = pitch_o(samples)[0]
        #pitch_hz = int(round(pitch_hz))
        confidence = pitch_o.get_confidence()
        #if confidence < 0.8: pitch_hz = 0.
        pitches += [pitch_hz]
        confidences += [confidence]
        current = 'NaN'
        if pitch_hz > 0:
            current_pitch.frequency = float(pitch_hz)
            current = current_pitch.nameWithOctave
            print(pitch_hz, '----', current, '----', current_pitch.microtone.cents)
        q.put({'Note': current, 'Cents': current_pitch.microtone.cents, 'hz': pitch_hz})

if __name__ == '__main__':
    get_current_note()
Now I need to draw the frequency line dynamically, animated in real time, in a visual environment such as pygame or anything else (not a static plot like matplotlib!). Something like the picture I attached to this question.
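One way to approach this is to keep the audio capture on its own thread (filling the queue `q` with the dicts the code above produces) and let the drawing loop drain that queue each frame, appending the Hz values to a fixed-length history that maps one sample to one x pixel. Below is a minimal sketch of that history-to-screen mapping; the class name `PitchTrace` and the 1000 Hz axis ceiling are my own assumptions, not anything from pyaudio, aubio, or pygame.

```python
from collections import deque

class PitchTrace:
    """Fixed-length pitch history mapped to screen coordinates.

    Holds at most `width` samples (one per x pixel) and converts them
    to (x, y) points with the newest sample at the right edge.
    """

    def __init__(self, width, height, max_hz=1000.0):
        self.width = width                   # plot width in pixels
        self.height = height                 # plot height in pixels
        self.max_hz = max_hz                 # frequency at the top edge (assumed ceiling)
        self.history = deque(maxlen=width)   # old samples fall off the left

    def push(self, hz):
        # Append one pitch reading (Hz); 0 means "no pitch detected".
        self.history.append(hz)

    def points(self):
        # 0 Hz sits on the bottom edge, max_hz on the top edge;
        # values above max_hz are clipped to the top.
        pts = []
        for i, hz in enumerate(self.history):
            x = self.width - len(self.history) + i
            y = self.height - int(min(hz, self.max_hz) / self.max_hz * self.height)
            pts.append((x, y))
        return pts
```

In the pygame main loop you would then drain the queue without blocking (`item = q.get_nowait()` inside a `try`/`except queue.Empty`), call `trace.push(item['hz'])` for each item, clear the screen, and when `len(trace.history) >= 2` draw the line with `pygame.draw.lines(screen, (0, 255, 0), False, trace.points())` followed by `pygame.display.flip()`. Because the capture thread only writes to the queue and the render thread only reads it, neither side blocks the other.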