iOS: Disable sound from the speaker while recording audio with AUAudioUnit
I am trying to record audio using AUAudioUnit. I successfully get the audio buffers, but while recording I can also hear the recorded sound through the speaker. How can I keep the input buffers from being passed through to the speaker?
func startRecording() {
    setupAudioSessionForRecording()
    do {
        let audioComponentDescription = AudioComponentDescription(
            componentType: kAudioUnitType_Output,
            componentSubType: kAudioUnitSubType_RemoteIO,
            componentManufacturer: kAudioUnitManufacturer_Apple,
            componentFlags: 0,
            componentFlagsMask: 0)
        auAudioUnit = try AUAudioUnit(componentDescription: audioComponentDescription)
        let audioFormat = AVAudioFormat(commonFormat: .pcmFormatInt16,
                                        sampleRate: sampleRate,
                                        interleaved: true,
                                        channelLayout: AVAudioChannelLayout(layoutTag: kAudioChannelLayoutTag_Mono)!)
        try auAudioUnit.inputBusses[0].setFormat(audioFormat)
        try auAudioUnit.outputBusses[1].setFormat(audioFormat)
    } catch {
        print(error)
    }
    auAudioUnit.isInputEnabled = true
    auAudioUnit.outputProvider = { (actionFlags, timestamp, frameCount, inputBusNumber, inputData) -> AUAudioUnitStatus in
        let err: OSStatus = self.auAudioUnit.renderBlock(actionFlags,
                                                         timestamp,
                                                         frameCount,
                                                         1,
                                                         inputData,
                                                         .none)
        if err == noErr {
            self.processMicrophoneBuffer(inputDataList: inputData,
                                         frameCount: UInt32(frameCount))
        } else {
            print(err)
        }
        return err
    }
    do {
        try auAudioUnit.allocateRenderResources()
        try auAudioUnit.startHardware()
    } catch {
        print(error)
    }
}
Solution:
The solution was found here. The idea is to call the render block inside inputHandler instead of outputProvider:
let renderBlock = auAudioUnit.renderBlock  // capture the render block before starting the hardware
auAudioUnit.inputHandler = { (actionFlags, timestamp, frameCount, inputBusNumber) in
    var bufferList = AudioBufferList(mNumberBuffers: 1,
                                     mBuffers: AudioBuffer(
                                         mNumberChannels: audioFormat!.channelCount,
                                         mDataByteSize: 0,
                                         mData: nil))
    let err: OSStatus = renderBlock(actionFlags,
                                    timestamp,
                                    frameCount,
                                    inputBusNumber,
                                    &bufferList,
                                    .none)
    if err == noErr {
        self.processMicrophoneBuffer(inputDataList: &bufferList,
                                     frameCount: UInt32(frameCount))
    } else {
        print(err)
    }
}
One way to mute the RemoteIO output is to zero out the contents (frameCount samples) of the audio buffers in the recorded input data after processing (copying) each buffer.
Thanks for the answer and the code example on github! A more suitable solution was found here.
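The zeroing approach described above can be sketched like this (a hedged sketch, not the poster's exact code): keep the original outputProvider, copy the samples first, then overwrite every buffer with silence so RemoteIO has nothing to play. `processMicrophoneBuffer` is assumed to be the poster's own processing method.

```swift
auAudioUnit.outputProvider = { (actionFlags, timestamp, frameCount, inputBusNumber, inputData) -> AUAudioUnitStatus in
    // Pull the microphone samples (bus 1 = RemoteIO input) into inputData.
    let err: OSStatus = self.auAudioUnit.renderBlock(actionFlags,
                                                     timestamp,
                                                     frameCount,
                                                     1,
                                                     inputData,
                                                     .none)
    if err == noErr {
        // Copy/process the recorded samples first...
        self.processMicrophoneBuffer(inputDataList: inputData,
                                     frameCount: UInt32(frameCount))
        // ...then zero every buffer so nothing reaches the speaker.
        let buffers = UnsafeMutableAudioBufferListPointer(inputData)
        for buffer in buffers {
            if let data = buffer.mData {
                memset(data, 0, Int(buffer.mDataByteSize))
            }
        }
    }
    return err
}
```

Since the outputProvider supplies the samples RemoteIO sends to the speaker, returning zeroed buffers plays silence while the copied data is still available for recording.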