iOS: level metering with AVAudioEngine

Tags: ios, objective-c, macos, avfoundation, avaudioengine

I just watched the WWDC video on AVAudioEngine (Session 502, AVAudioEngine in Practice), and I'm very excited to build an app on top of this technology.

What I haven't been able to figure out is how to do level monitoring of the microphone input or the mixer output.

Can anyone help? To be clear, I'm asking about monitoring the current input signal (and displaying it in the UI), not the input/output volume setting of a channel or track.

I know you can do this with AVAudioRecorder, but that is not an AVAudioNode, which is what AVAudioEngine requires.

Try installing a tap on the main mixer, then make it faster by setting the frame length; then read the samples and take an average, something like this:

Import the framework at the top:
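
A minimal sketch of this step, assuming the Accelerate framework (which provides the vDSP_* routines used below) plus AVFoundation for AVAudioEngine itself:

    #import <AVFoundation/AVFoundation.h>
    #import <Accelerate/Accelerate.h>   // vDSP_meamgv / vDSP_maxmgv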

Then install the tap and compute the level, along these lines:
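
A sketch of that tap, reconstructed from the Swift 3 translation further down the page; the self.engine property, the averagePowerForChannel0/1 properties, and the LEVEL_LOWPASS_TRIG constant are assumed to be defined on the surrounding class:

    #define LEVEL_LOWPASS_TRIG 0.3

    AVAudioMixerNode *mainMixer = [self.engine mainMixerNode];
    [mainMixer installTapOnBus:0 bufferSize:1024 format:[mainMixer outputFormatForBus:0] block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
        [buffer setFrameLength:1024];
        UInt32 inNumberFrames = buffer.frameLength;

        if (buffer.format.channelCount > 0) {
            Float32 *samples = (Float32 *)buffer.floatChannelData[0];
            Float32 avgValue = 0;
            // mean of magnitudes over this buffer
            vDSP_meamgv(samples, 1, &avgValue, inNumberFrames);
            // convert to dB and low-pass filter the displayed value
            self.averagePowerForChannel0 = (LEVEL_LOWPASS_TRIG * ((avgValue == 0) ? -100 : 20.0 * log10f(avgValue)))
                                         + ((1 - LEVEL_LOWPASS_TRIG) * self.averagePowerForChannel0);
            self.averagePowerForChannel1 = self.averagePowerForChannel0;
        }
        if (buffer.format.channelCount > 1) {
            Float32 *samples = (Float32 *)buffer.floatChannelData[1];
            Float32 avgValue = 0;
            vDSP_meamgv(samples, 1, &avgValue, inNumberFrames);
            self.averagePowerForChannel1 = (LEVEL_LOWPASS_TRIG * ((avgValue == 0) ? -100 : 20.0 * log10f(avgValue)))
                                         + ((1 - LEVEL_LOWPASS_TRIG) * self.averagePowerForChannel1);
        }
    }];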

Then, to get the target value you want:

    NSLog(@"===test===%.2f", self.averagePowerForChannel1);

To get peak values, use vDSP_maxmgv instead of vDSP_meamgv.
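
For example, with the same variables as in the tap block above:

    Float32 peakValue = 0;
    vDSP_maxmgv(samples, 1, &peakValue, inNumberFrames);   // maximum magnitude instead of mean magnitude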



LEVEL_LOWPASS_TRIG is a simple filter value between 0.0 and 1.0. If you set it to 0.0, you filter out all values and never get any data; if you set it to 1.0, you get too much noise. Basically, the higher the value, the more variation you get in the data. A value between 0.10 and 0.30 seems to work well for most applications.
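
In other words, each new reading only nudges the displayed value. For instance, with LEVEL_LOWPASS_TRIG = 0.2, a meter currently showing -40 dB, and an incoming reading of -10 dB (example numbers):

    displayed = k * incoming + (1 - k) * displayed
              = 0.2 * (-10) + 0.8 * (-40) = -34 dB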

I found another solution that is a little strange, but works perfectly well and much better than a tap. A mixer doesn't have an audio unit, but if you cast it to an AVAudioIONode you can get at its audioUnit and use iOS's built-in metering facility. Here is how:

To enable or disable metering:

- (void)setMeteringEnabled:(BOOL)enabled
{
    UInt32 on = enabled ? 1 : 0;
    AVAudioIONode *node = (AVAudioIONode *)self.engine.mainMixerNode;
    // Toggle the mixer audio unit's built-in level metering
    OSStatus err = AudioUnitSetProperty(node.audioUnit, kAudioUnitProperty_MeteringMode,
                                        kAudioUnitScope_Output, 0, &on, sizeof(on));
    NSAssert(err == noErr, @"AudioUnitSetProperty(kAudioUnitProperty_MeteringMode) failed");
}
To update the meters:

- (void)updateMeters
{
    AVAudioIONode *node = (AVAudioIONode *)self.engine.mainMixerNode;

    AudioUnitParameterValue level;
    AudioUnitGetParameter(node.audioUnit, kMultiChannelMixerParam_PostAveragePower,
                          kAudioUnitScope_Output, 0, &level);

    self.averagePowerForChannel1 = self.averagePowerForChannel0 = level;
    if (self.numberOfChannels > 1)
    {
        // The metering parameters are laid out per channel, so +1 addresses the second channel
        OSStatus err = AudioUnitGetParameter(node.audioUnit, kMultiChannelMixerParam_PostAveragePower + 1,
                                             kAudioUnitScope_Output, 0, &level);
        if (err == noErr) {
            self.averagePowerForChannel1 = level;
        }
    }
}
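
For completeness, one way to drive these two methods (a sketch; the 0.05 s interval and the meterTimer property are assumptions, not part of the original answer):

    [self setMeteringEnabled:YES];
    // Hypothetical driver: poll the mixer's meter roughly 20 times per second
    self.meterTimer = [NSTimer scheduledTimerWithTimeInterval:0.05
                                                       target:self
                                                     selector:@selector(updateMeters)
                                                     userInfo:nil
                                                      repeats:YES];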

The equivalent Swift 3 code of Farhad Malekpour's answer:

Import the framework at the top:

    import Accelerate

Declare globally:

    private var audioEngine: AVAudioEngine?
    private var averagePowerForChannel0: Float = 0
    private var averagePowerForChannel1: Float = 0
    let LEVEL_LOWPASS_TRIG: Float32 = 0.30

Use the following code where you need it:

    // the input node gives the microphone level; use mainMixerNode instead for the output level
    let inputNode = audioEngine!.inputNode
    let recordingFormat: AVAudioFormat = inputNode!.outputFormat(forBus: 0)
    inputNode!.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { [weak self] (buffer: AVAudioPCMBuffer, when: AVAudioTime) in
        guard let strongSelf = self else {
            return
        }
        strongSelf.audioMetering(buffer: buffer)
    }

The calculation:

    private func audioMetering(buffer: AVAudioPCMBuffer) {
        buffer.frameLength = 1024
        let inNumberFrames: UInt = UInt(buffer.frameLength)
        if buffer.format.channelCount > 0 {
            let samples = (buffer.floatChannelData![0])
            var avgValue: Float32 = 0
            // mean of magnitudes over the buffer
            vDSP_meamgv(samples, 1, &avgValue, inNumberFrames)
            var v: Float = -100
            if avgValue != 0 {
                v = 20.0 * log10f(avgValue)
            }
            // low-pass filter the dB value to smooth the meter
            self.averagePowerForChannel0 = (self.LEVEL_LOWPASS_TRIG * v) + ((1 - self.LEVEL_LOWPASS_TRIG) * self.averagePowerForChannel0)
            self.averagePowerForChannel1 = self.averagePowerForChannel0
        }

        if buffer.format.channelCount > 1 {
            let samples = buffer.floatChannelData![1]
            var avgValue: Float32 = 0
            vDSP_meamgv(samples, 1, &avgValue, inNumberFrames)
            var v: Float = -100
            if avgValue != 0 {
                v = 20.0 * log10f(avgValue)
            }
            self.averagePowerForChannel1 = (self.LEVEL_LOWPASS_TRIG * v) + ((1 - self.LEVEL_LOWPASS_TRIG) * self.averagePowerForChannel1)
        }
    }


Swift 5+

  1. Download the project above and copy the 'micromic.swift' class into your project.

  2. Copy and paste this code into your project:

    import AVFoundation
    
    private var mic = MicrophoneMonitor(numberOfSamples: 1)
    private var timer:Timer!
    
    override func viewDidLoad() {
        super.viewDidLoad()
        timer = Timer.scheduledTimer(timeInterval: 0.1, target: self, selector: #selector(startMonitoring), userInfo: nil, repeats: true)
        timer.fire()
    }
    
    @objc func startMonitoring() {
      print("sound level:", normalizeSoundLevel(level: mic.soundSamples.first!))
    }
    
    private func normalizeSoundLevel(level: Float) -> CGFloat {
        let level = max(0.2, CGFloat(level) + 50) / 2 // between 0.1 and 25
        return CGFloat(level * (300 / 25)) // scaled to max at 300 (our height of our bar)
    }
    

  3. Open a beer and celebrate!

Comments on these answers:

  • What value (or range of values) should LEVEL_LOWPASS_TRIG use?
  • To use vDSP_meamgv, add "import Accelerate", Apple's high-performance math framework.
  • Can you post a complete working example on GitHub?
  • @apocolipse I didn't know what to put either... LEVEL_LOWPASS_TRIG = 0.01 worked for me. This was the best option. I did the same thing in Swift, so this Objective-C syntax was a lifesaver on another app. It can be tuned for different visual representations of the volume: waveform characters, a simple volume bar, or volume-dependent transparency (a fading microphone icon, and so on).
  • Do you have a working example of this code, one that shows the whole loop, i.e. how to instantiate the AudioEngine and so on?
  • Noob question: why are there two channels if the node is set on channel 0?
  • For @omarojo: below is working code combining two of the other answers (the .m and .h files follow). To use it, just call:

        newAudioRecord = [AudioRecord new];
        newAudioRecord.levelIndicator = self.audioLevelIndicator;
        // experimental (and not great):
        [newAudioRecord recordToFile:fullFilePath_Name];
        [newAudioRecord.recordEngine stop];
        [newAudioRecord.recordEngine reset];
        [newAudioRecord.recordEngine pause];
        // to resume:
        [newAudioRecord.recordEngine startAndReturnError:NULL];

  • Is this constantly re-encoding audio to a file? That doesn't seem efficient.
  • It's the only way I found!
    NSLog(@"===test===%.2f", self.averagePowerForChannel1);
    
    - (void)setMeteringEnabled:(BOOL)enabled;
    {
        UInt32 on = (enabled)?1:0;
        AVAudioIONode *node = (AVAudioIONode*)self.engine.mainMixerNode;
        OSStatus err = AudioUnitSetProperty(node.audioUnit, kAudioUnitProperty_MeteringMode, kAudioUnitScope_Output, 0, &on, sizeof(on));
    }
    
    - (void)updateMeters;
    {
        AVAudioIONode *node = (AVAudioIONode*)self.engine.mainMixerNode;
    
        AudioUnitParameterValue level;
        AudioUnitGetParameter(node.audioUnit, kMultiChannelMixerParam_PostAveragePower, kAudioUnitScope_Output, 0, &level);
    
        self.averagePowerForChannel1 = self.averagePowerForChannel0 = level;
        if(self.numberOfChannels>1)
        {
            err = AudioUnitGetParameter(node.audioUnit, kMultiChannelMixerParam_PostAveragePower+1, kAudioUnitScope_Output, 0, &level);
        }
    }
    
    
AudioRecorder.m:

    #import "AudioRecorder.h"
    #define LEVEL_LOWPASS_TRIG .3
    @implementation AudioRecord
    
    
    -(id)init {
         self = [super init];
         if (self) {
             self.recordEngine = [[AVAudioEngine alloc] init];
         }
         return self;
    }
    
    
     /**  ----------------------  Snippet Stackoverflow.com not including Audio Level Meter    ---------------------     **/
    
    
    -(BOOL)recordToFile:(NSString*)filePath {
    
         NSURL *fileURL = [NSURL fileURLWithPath:filePath];
    
         const Float64 sampleRate = 44100;
    
         AudioStreamBasicDescription aacDesc = { 0 };
         aacDesc.mSampleRate = sampleRate;
         aacDesc.mFormatID = kAudioFormatMPEG4AAC; 
         aacDesc.mFramesPerPacket = 1024;
         aacDesc.mChannelsPerFrame = 2;
    
         ExtAudioFileRef eaf;
    
         OSStatus err = ExtAudioFileCreateWithURL((__bridge CFURLRef)fileURL, kAudioFileAAC_ADTSType, &aacDesc, NULL, kAudioFileFlags_EraseFile, &eaf);
         assert(noErr == err);
    
         AVAudioInputNode *input = self.recordEngine.inputNode;
         const AVAudioNodeBus bus = 0;
    
         AVAudioFormat *micFormat = [input inputFormatForBus:bus];
    
         err = ExtAudioFileSetProperty(eaf, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), micFormat.streamDescription);
         assert(noErr == err);
    
         [input installTapOnBus:bus bufferSize:1024 format:micFormat block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
           const AudioBufferList *abl = buffer.audioBufferList;
           OSStatus err = ExtAudioFileWrite(eaf, buffer.frameLength, abl);
           assert(noErr == err);
    
    
           /**  ----------------------  Snippet from stackoverflow.com in different context  ---------------------     **/
    
    
           UInt32 inNumberFrames = buffer.frameLength;
           if(buffer.format.channelCount>0) {
             Float32* samples = (Float32*)buffer.floatChannelData[0];
             Float32 peakValue = 0;
             // peak magnitude of this buffer (stride must be 1; maxmgv keeps the value non-negative for the log below)
             vDSP_maxmgv(samples, 1, &peakValue, inNumberFrames);
             // convert to dB and low-pass filter, as in the metering answers above
             self.averagePowerForChannel0 = (LEVEL_LOWPASS_TRIG*((peakValue==0)?
                                      -100:20.0*log10f(peakValue))) + ((1- LEVEL_LOWPASS_TRIG)*self.averagePowerForChannel0);
             self.averagePowerForChannel1 = self.averagePowerForChannel0;
           }
    
           dispatch_async(dispatch_get_main_queue(), ^{
    
             self.levelIndicator.floatValue=self.averagePowerForChannel0;
    
           });     
    
    
           /**  ---------------------- End of Snippet from stackoverflow.com in different context  ---------------------     **/
    
         }];
    
         BOOL startSuccess;
         NSError *error;
    
         startSuccess = [self.recordEngine startAndReturnError:&error]; 
         return startSuccess;
    }
    
    
    
    @end
    
    #import <Foundation/Foundation.h>
    #import <AVFoundation/AVFoundation.h>
    #import <AudioToolbox/ExtendedAudioFile.h>
    #import <CoreAudio/CoreAudio.h>
    #import <Accelerate/Accelerate.h>
    #import <AppKit/AppKit.h>
    
    @interface AudioRecord : NSObject {
    
    }
    
    @property (nonatomic) AVAudioEngine *recordEngine;
    
    
    @property float averagePowerForChannel0;
    @property float averagePowerForChannel1;
    @property float numberOfChannels;
    @property NSLevelIndicator * levelIndicator;
    
    
    -(BOOL)recordToFile:(NSString*)filePath;
    
    @end
    