AVAudioEngine: converting an AVAudioPCMBuffer into another AVAudioPCMBuffer


I am trying to use AVAudioConverter to convert a given AVAudioPCMBuffer (44.1 kHz, 1 ch, Float32, non-interleaved) into another AVAudioPCMBuffer (16 kHz, 1 ch, Int16, non-interleaved), and to write it out with AVAudioFile. My code uses the AudioKit library and the tap AKLazyTap to fetch a buffer at fixed intervals, based on the following source:

Here is my implementation:

lazy var downAudioFormat: AVAudioFormat = {
  let avAudioChannelLayout = AVAudioChannelLayout(layoutTag: kAudioChannelLayoutTag_Mono)!
  return AVAudioFormat(
      commonFormat: .pcmFormatInt16,
      sampleRate: 16000,
      interleaved: false,
      channelLayout: avAudioChannelLayout)
}()

//...
AKSettings.sampleRate = 44100
AKSettings.numberOfChannels = AVAudioChannelCount(1)
AKSettings.ioBufferDuration = 0.002
AKSettings.defaultToSpeaker = true

//...
let mic = AKMicrophone()
let originalAudioFormat: AVAudioFormat = mic.avAudioNode.outputFormat(forBus: 0) //44100, 1ch, float32...
let inputFrameCapacity = AVAudioFrameCount(1024)
//I don't think this is correct, the audio is getting chopped... 
//How to calculate it correctly?
let outputFrameCapacity = AVAudioFrameCount(512)

guard let inputBuffer = AVAudioPCMBuffer(
  pcmFormat: originalAudioFormat,
  frameCapacity: inputFrameCapacity) else {
  fatalError()
}

// Your timer should fire equal to or faster than your buffer duration
bufferTimer = Timer.scheduledTimer(
  withTimeInterval: AKSettings.ioBufferDuration/2,
  repeats: true) { [weak self] _ in

  guard let unwrappedSelf = self else {
    return
  }

  unwrappedSelf.lazyTap?.fillNextBuffer(inputBuffer, timeStamp: nil)

  // This is important, since we're polling for samples, sometimes
  //it's empty, and sometimes it will be double what it was the last call.
  if inputBuffer.frameLength == 0 {
    return
  }

  //In my real code the converter (like the AVAudioFile) is created only
  //once, via a function; it is inlined here to simplify the example.
  let converter = AVAudioConverter(from: originalAudioFormat, to: downAudioFormat)!
  converter.sampleRateConverterAlgorithm = AVSampleRateConverterAlgorithm_Normal
  converter.sampleRateConverterQuality = .min
  converter.bitRateStrategy = AVAudioBitRateStrategy_Constant

  guard let outputBuffer = AVAudioPCMBuffer(
      pcmFormat: converter.outputFormat,
      frameCapacity: outputFrameCapacity) else {
    print("Failed to create new buffer")
    return
  }

  let inputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in
    outStatus.pointee = AVAudioConverterInputStatus.haveData
    return inputBuffer
  }

  var error: NSError?
  let status: AVAudioConverterOutputStatus = converter.convert(
      to: outputBuffer,
      error: &error,
      withInputFrom: inputBlock)

  switch status {
  case .error:
    if let unwrappedError: NSError = error {
      print(unwrappedError)
    }
    return
  default: break
  }

  //Likewise, the AVAudioFile is created only once in my real code (a
  //function checks whether it already exists); the initialization is
  //repeated here just to simplify the example.
  do {
    outputAVAudioFile = try AVAudioFile(
      forWriting: unwrappedCacheFilePath,
      settings: downAudioFormat.settings,
      commonFormat: downAudioFormat.commonFormat,
      interleaved: false)
    try outputAVAudioFile?.write(from: outputBuffer)
  } catch {
    print(error)
  }

}
(Note that the AVAudioConverter and the AVAudioFile are being reused; the initialization shown here does not reflect the real implementation in my code, it is only simplified to make it easier to understand.)

With frameCapacity set to 512 on the outputBuffer: AVAudioPCMBuffer, the audio gets chopped. Is there any way to discover the correct frameCapacity for this buffer?
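One way to size the output buffer is from the resampling ratio: for a given number of input frames, the converter produces roughly inputFrames × (outputRate / inputRate) frames, rounded up. A minimal sketch of that arithmetic (the helper name is mine, not part of AVFoundation):

```swift
// Hypothetical helper: estimate the frame capacity the output buffer needs
// when resampling `inputFrames` frames from `inputRate` to `outputRate`.
// Rounding up ensures the converter never runs out of room.
func outputFrameCapacity(inputFrames: Int, inputRate: Double, outputRate: Double) -> Int {
    return Int((Double(inputFrames) * outputRate / inputRate).rounded(.up))
}
```

For 1024 input frames at 44.1 kHz converted down to 16 kHz this yields 372 frames, so a fixed capacity of 512 should be large enough per call; if the audio is still chopped, the cause is more likely the polling/timing than the capacity itself.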

Written with Swift 4 and AudioKit 4.1.


Many thanks!

I managed to solve this by installing a Tap on the inputNode instead, like so:

lazy var downAudioFormat: AVAudioFormat = {
  let avAudioChannelLayout = AVAudioChannelLayout(layoutTag: kAudioChannelLayoutTag_Mono)!
  return AVAudioFormat(
      commonFormat: .pcmFormatInt16,
      sampleRate: SAMPLE_RATE,
      interleaved: true,
      channelLayout: avAudioChannelLayout)
}()

private func addBufferListener(_ avAudioNode: AVAudioNode) {

  let originalAudioFormat: AVAudioFormat = avAudioNode.inputFormat(forBus: 0)
  let downSampleRate: Double = downAudioFormat.sampleRate
  let ratio: Float = Float(originalAudioFormat.sampleRate)/Float(downSampleRate)
  let converter: AVAudioConverter = buildConverter(originalAudioFormat)

  avAudioNode.installTap(
      onBus: 0,
      bufferSize: AVAudioFrameCount(downSampleRate * 2),
      format: originalAudioFormat,
      block: { (buffer: AVAudioPCMBuffer, _: AVAudioTime) -> Void in

        let capacity = UInt32(Float(buffer.frameCapacity)/ratio)

        guard let outputBuffer = AVAudioPCMBuffer(
            pcmFormat: self.downAudioFormat,
            frameCapacity: capacity) else {
          print("Failed to create new buffer")
          return
        }

        let inputBlock: AVAudioConverterInputBlock = { inNumPackets, outStatus in
          outStatus.pointee = AVAudioConverterInputStatus.haveData
          return buffer
        }

        var error: NSError?
        let status: AVAudioConverterOutputStatus = converter.convert(
            to: outputBuffer,
            error: &error,
            withInputFrom: inputBlock)

        switch status {
        case .error:
          if let unwrappedError: NSError = error {
            print("Error \(unwrappedError)")
          }
          return
        default: break
        }

        self.delegate?.flushAudioBuffer(outputBuffer)

  })

}

It would be better to include your code rather than link to it; it's short enough.
@dave234 Done, added the code.
Where does the function buildConverter come from?
AVAudioConverter(from: originalAudioFormat, to: downAudioFormat)
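Going by that last comment, buildConverter presumably just wraps AVAudioConverter(from:to:) so the converter is allocated once, outside the tap block, and reused on every callback. A hypothetical reconstruction (assuming it lives on the same object as the downAudioFormat property defined above):

```swift
import AVFoundation

// Hypothetical reconstruction of buildConverter(_:), based on the comment
// above: create the AVAudioConverter once, before installing the tap, so
// it is reused for every callback instead of re-allocated each time.
private func buildConverter(_ inputFormat: AVAudioFormat) -> AVAudioConverter {
    guard let converter = AVAudioConverter(from: inputFormat, to: downAudioFormat) else {
        fatalError("Conversion from \(inputFormat) to \(downAudioFormat) is not supported")
    }
    return converter
}
```

Creating the converter once matters because AVAudioConverter keeps internal resampler state; recreating it per callback both wastes allocations and discards that state at every buffer boundary.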