
iOS: How to perform offline audio processing with AVAudioEngine when AVAudioUnitTimePitch changes the playback rate?


I learned how to process audio into a file using AVAudioEngine, and my code works as long as I don't change the playback rate with AVAudioUnitTimePitch.

When the rate is changed, the rendered audio has the same length as the original (unchanged-rate) audio. So if the audio is slowed down (rate < 1), part of it gets trimmed, and if it is sped up (rate > 1), the last part of the rendered audio is silent.
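To illustrate the mismatch (a minimal sketch with hypothetical frame counts): playing a file through a time-pitch unit at rate r produces roughly length / r output frames, so a loop that stops once the source length has been rendered produces the wrong amount of output whenever r != 1.

```swift
// Hypothetical numbers to show the length mismatch.
let sourceFrames = 882_000          // e.g. 20 s of audio at 44.1 kHz
let rate: Float = 0.5               // slowed down to half speed

// A time-pitch unit at rate r stretches the output to about length / r frames.
let expectedOutputFrames = Int(Float(sourceFrames) / rate)

print(expectedOutputFrames)         // 1764000: twice as many frames as the source
// Rendering only sourceFrames frames would therefore trim the
// second half of the slowed-down audio.
```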

Here is the code:

// engine: AVAudioEngine
// playerNode: AVAudioPlayerNode
// audioFile: AVAudioFile

open func render(to destinationFile: AVAudioFile) throws {
    
    playerNode.scheduleFile(audioFile, at: nil)
    
    do {
        let buffCapacity: AVAudioFrameCount = 4096
        try engine.enableManualRenderingMode(.offline, format: audioFile.processingFormat, maximumFrameCount: buffCapacity)
    }
    catch {
        print("Failed to enable manual rendering mode: \(error)")
        throw error
    }
    
    do {
        try engine.start()
    }
    catch {
        print("Failed to start the engine: \(error)")
        throw error
    }
    
    playerNode.play()
    
    let outputBuff = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat,
                                      frameCapacity: engine.manualRenderingMaximumFrameCount)!
    
    while engine.manualRenderingSampleTime < audioFile.length {
        let remainingSamples = audioFile.length - engine.manualRenderingSampleTime
        let framesToRender = min(outputBuff.frameCapacity, AVAudioFrameCount(remainingSamples))
        
        do {
            let renderingStatus = try engine.renderOffline(framesToRender, to: outputBuff)
            
            switch renderingStatus {
            
            case .success:
                do {
                    try destinationFile.write(from: outputBuff)
                }
                catch {
                    print("Failed to write buffer to file: \(error)")
                    throw error
                }
                
            case .insufficientDataFromInputNode:
                break
            
            case .cannotDoInCurrentContext:
                break
            
            case .error:
                print("An error occurred during rendering.")
                throw AudioPlayer.ExportError.renderingError
            
            @unknown default:
                fatalError("engine.renderOffline() returned an unknown value.")
            }
        }
        catch {
            print("Failed to render offline manually: \(error)")
            throw error
        }
    }
    
    playerNode.stop()
    engine.stop()
    engine.disableManualRenderingMode()
}

I tried to solve this by rendering a number of samples inversely proportional to the playback rate. That only fixed the problem when the rate is greater than 1.

Since nobody has answered and many days have passed, I'll share how I solved this problem.

It turns out that rendering a number of samples inversely proportional to the playback rate does work. At first this approach didn't work because I was doing it wrong.

Here is how to get the correct number of samples to render:

let framesToRenderCount = AVAudioFramePosition(Float(audioFile.length) / rate)
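Applied to the render loop above, the fix is to compare `engine.manualRenderingSampleTime` against this adjusted frame count instead of `audioFile.length`. A sketch, assuming `rate` holds the AVAudioUnitTimePitch's current rate (and rate > 0):

```swift
import AVFoundation

// Sketch: loop bound adjusted for playback rate.
// `rate` is assumed to be the AVAudioUnitTimePitch rate in the graph.
func adjustedFrameCount(sourceLength: AVAudioFramePosition, rate: Float) -> AVAudioFramePosition {
    // Slower playback (rate < 1) yields more output frames, faster yields fewer.
    return AVAudioFramePosition(Float(sourceLength) / rate)
}

// In render(to:), the loop condition then becomes:
//
//   let framesToRenderCount = adjustedFrameCount(sourceLength: audioFile.length, rate: rate)
//   while engine.manualRenderingSampleTime < framesToRenderCount {
//       let remaining = framesToRenderCount - engine.manualRenderingSampleTime
//       ...
//   }
```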
