Swift AVAudio and offline manual rendering mode - can't write higher-frequency buffers to the output file

I am reading an input file and, in offline manual rendering mode, I want to perform amplitude modulation and write the result to an output file.

For testing I generate a pure sine wave - this works fine for frequencies below 6,000 Hz. For higher frequencies (my target is around 20,000 Hz) the signal (and therefore listening to the output file) is distorted, and the spectrum ends at 8,000 Hz - it is no longer a pure tone but shows multiple peaks between 0 and 8,000 Hz.
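For reference, a 20,000 Hz tone only survives sampling intact if the sample rate is above 40,000 Hz (the Nyquist criterion); a spectrum that ends at 8,000 Hz hints at a 16,000 Hz rate somewhere in the chain. A small sketch (the helper below is hypothetical, not from the question) of where a tone lands after aliasing:

```swift
import Foundation

// Hypothetical helper: the frequency at which a pure tone appears
// after being sampled at `sampleRate` (folding around Nyquist).
func aliasedFrequency(_ frequency: Double, sampleRate: Double) -> Double {
    let nyquist = sampleRate / 2.0
    // Fold the frequency into [0, sampleRate), then mirror around Nyquist.
    let wrapped = frequency.truncatingRemainder(dividingBy: sampleRate)
    return wrapped <= nyquist ? wrapped : sampleRate - wrapped
}

print(aliasedFrequency(20_000, sampleRate: 44_100)) // below Nyquist: stays at 20000.0
print(aliasedFrequency(20_000, sampleRate: 16_000)) // folded down to 4000.0
```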

Here is my code snippet:

    let outputFile: AVAudioFile

    do {
        let documentsURL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
        let outputURL = documentsURL.appendingPathComponent("output.caf")
        outputFile = try AVAudioFile(forWriting: outputURL, settings: sourceFile.fileFormat.settings)
    } catch {
        fatalError("Unable to open output audio file: \(error).")
    }

    var sampleTime: Float32 = 0

    while engine.manualRenderingSampleTime < sourceFile.length {
        do {
            let frameCount = sourceFile.length - engine.manualRenderingSampleTime
            let framesToRender = min(AVAudioFrameCount(frameCount), buffer.frameCapacity)
            
            let status = try engine.renderOffline(framesToRender, to: buffer)
            
            switch status {
            
            case .success:
                // The data rendered successfully. Write it to the output file.
                let sampleRate = Float(mixer.outputFormat(forBus: 0).sampleRate)
                let modulationFrequency: Float = 20000.0

                for i in 0..<Int(buffer.frameLength) {
                    let val = sinf(2.0 * .pi * modulationFrequency * sampleTime / sampleRate)
                    // TODO: perform modulation later
                    buffer.floatChannelData?.pointee[i] = val
                    sampleTime += 1.0
                }

                try outputFile.write(from: buffer)
                
            case .insufficientDataFromInputNode:
                // Applicable only when using the input node as one of the sources.
                break
                
            case .cannotDoInCurrentContext:
                // The engine couldn't render in the current render call.
                // Retry in the next iteration.
                break
                
            case .error:
                // An error occurred while rendering the audio.
                fatalError("The manual rendering failed.")
            @unknown default:
                fatalError("unknown error")
            }
        } catch {
            fatalError("The manual rendering failed: \(error).")
        }
    }
Now the quality of the output signal is better, but it is not perfect. I get higher amplitudes, but there is still always more than one frequency visible in the spectrum analyzer. Perhaps a workaround would be to apply a high-pass filter.
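Two details worth double-checking in the rendering loop above: the phase index is accumulated in a Float32, which loses integer precision after roughly 2^24 samples (a few minutes at 48 kHz) and smears the tone, and only the first channel of the (possibly stereo) buffer is written. A sketch that avoids both issues, assuming a non-interleaved Float32 buffer as in the code above:

```swift
import AVFoundation

// Sketch: fill `buffer` with a sine wave, keeping the phase index in
// an Int64 and computing the phase in Double, and writing every channel.
func fillSine(_ buffer: AVAudioPCMBuffer,
              frequency: Double,
              sampleRate: Double,
              startFrame: inout Int64) {
    guard let channels = buffer.floatChannelData else { return }
    let channelCount = Int(buffer.format.channelCount)
    for frame in 0..<Int(buffer.frameLength) {
        let phase = 2.0 * Double.pi * frequency * Double(startFrame + Int64(frame)) / sampleRate
        let sample = Float(sin(phase))
        for ch in 0..<channelCount {
            channels[ch][frame] = sample
        }
    }
    startFrame += Int64(buffer.frameLength)
}
```

This is a sketch, not a drop-in replacement for the loop above; the `fillSine` name and the `startFrame` counter are my own, standing in for the question's `sampleTime`.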


In the meantime I used a signal generator to route the processed buffer (with the sine wave) directly to the speakers - in that case the output is perfect. I think routing the signal into a file is what causes these problems.

The speed of manual rendering mode is not the issue here, since speed is somewhat irrelevant in a manual rendering context.

Here is skeleton code for manually rendering from a source file to an output file:

    // Open the input file
    let file = try! AVAudioFile(forReading: URL(fileURLWithPath: "/tmp/test.wav"))

    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()

    engine.attach(player)
    engine.connect(player, to: engine.mainMixerNode, format: nil)

    // Run the engine in manual rendering mode using chunks of 512 frames
    let renderSize: AVAudioFrameCount = 512

    // Use the file's processing format as the rendering format
    let renderFormat = AVAudioFormat(commonFormat: file.processingFormat.commonFormat, sampleRate: file.processingFormat.sampleRate, channels: file.processingFormat.channelCount, interleaved: true)!
    let renderBuffer = AVAudioPCMBuffer(pcmFormat: renderFormat, frameCapacity: renderSize)!

    try engine.enableManualRenderingMode(.offline, format: renderFormat, maximumFrameCount: renderBuffer.frameCapacity)

    try engine.start()
    player.play()

    // The render format is also the output format
    let output = try! AVAudioFile(forWriting: URL(fileURLWithPath: "/tmp/foo.wav"), settings: renderFormat.settings, commonFormat: renderFormat.commonFormat, interleaved: renderFormat.isInterleaved)

    // Read using a buffer sized at `renderSize` frames
    let readBuffer = AVAudioPCMBuffer(pcmFormat: file.processingFormat, frameCapacity: renderSize)!

    // Process the file
    while true {
        do {
            // Processing is finished if all frames have been read
            if file.framePosition == file.length {
                break
            }

            try file.read(into: readBuffer)
            player.scheduleBuffer(readBuffer, completionHandler: nil)

            let result = try engine.renderOffline(readBuffer.frameLength, to: renderBuffer)

            // Process the audio in `renderBuffer` here

            // Write the audio
            try output.write(from: renderBuffer)

            if result != .success {
                break
            }
        } catch {
            break
        }
    }

    player.stop()
    engine.stop()
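An alternative to scheduling chunk-by-chunk is to schedule the whole file once and drive the loop from the engine's manualRenderingSampleTime, as the question's first snippet does. A sketch reusing the names from the skeleton above:

```swift
// Schedule the entire source file up front.
player.scheduleFile(file, at: nil, completionHandler: nil)

// Render until all source frames have been consumed.
while engine.manualRenderingSampleTime < file.length {
    let remaining = AVAudioFrameCount(file.length - engine.manualRenderingSampleTime)
    let frames = min(remaining, renderBuffer.frameCapacity)
    let status = try engine.renderOffline(frames, to: renderBuffer)
    guard status == .success else { break }
    try output.write(from: renderBuffer)
}
```

This avoids keeping the read and render chunk sizes in lockstep; the engine pulls frames from the player as it needs them.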
And here is a snippet that demonstrates establishing the same sample rate throughout the engine:

    // Replace:
    // engine.connect(player, to: engine.mainMixerNode, format: nil)
    // with:
    let busFormat = AVAudioFormat(standardFormatWithSampleRate: file.fileFormat.sampleRate, channels: file.fileFormat.channelCount)

    engine.disconnectNodeInput(engine.outputNode, bus: 0)
    engine.connect(engine.mainMixerNode, to: engine.outputNode, format: busFormat)
    engine.connect(player, to: engine.mainMixerNode, format: busFormat)
Verify that the sample rates are the same throughout with:

    NSLog("%@", engine)
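If the logged graph is hard to read, the same check can be done programmatically. A hedged sketch, assuming `engine` is the instance configured above:

```swift
import AVFoundation

// Sketch: assert that no hidden sample-rate conversion can occur.
// Once the bus formats are set explicitly, all of these should
// report the same sample rate.
let mixerRate = engine.mainMixerNode.outputFormat(forBus: 0).sampleRate
let outputRate = engine.outputNode.outputFormat(forBus: 0).sampleRate
let renderRate = engine.manualRenderingFormat.sampleRate

assert(mixerRate == outputRate && outputRate == renderRate,
       "Sample rates differ - SRC will happen behind the scenes")
```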

Thanks for the hints and the code snippet! Unfortunately I run into the same problem. Setup: I fill the output buffer with a pure sine wave at about 20,000 Hz, and the spectrum (e.g. after exporting the output file to Audacity) shows many lines, not only at 20,000 Hz but also below it. Unfortunately I can't paste a spectrum screenshot, but the behavior is: with the horizontal axis as time, the number of peaks in the middle (representing frequencies) gradually increases over time, and the signal also gets louder over time. Please see my update above: at the higher frequency I have some problems with overlapping.

You may need to configure the AVAudioEngine to use the desired sample rate throughout by setting the bus formats explicitly, otherwise sample-rate conversion (SRC) may happen behind the scenes.

That's what I did, but it had no effect: engine.connect(player, to: engine.mainMixerNode, format: AVAudioFormat.init(standardFormatWithSampleRate: sampleRate, channels: 1))

To use the same sample rate throughout, you need to disconnect the main mixer from the output. You can verify this with NSLog("%@", engine):
// Process the audio in `renderBuffer` here
for i in 0..<Int(renderBuffer.frameLength) {
    let val = sinf(1000.0 * Float(index) * 2.0 * .pi / Float(sampleRate))
    renderBuffer.floatChannelData?.pointee[i] = val
    index += 1
}
    settings[AVFormatIDKey] = kAudioFormatAppleLossless
    settings[AVAudioFileTypeKey] = kAudioFileCAFType
    settings[AVSampleRateKey] = readBuffer.format.sampleRate
    settings[AVNumberOfChannelsKey] = 1
    settings[AVLinearPCMIsFloatKey] = (readBuffer.format.commonFormat == .pcmFormatInt32)
    settings[AVSampleRateConverterAudioQualityKey] = AVAudioQuality.max
    settings[AVLinearPCMBitDepthKey] = 32
    settings[AVEncoderAudioQualityKey] = AVAudioQuality.max
________ GraphDescription ________
AVAudioEngineGraph 0x7f8194905af0: initialized = 0, running = 0, number of nodes = 3

     ******** output chain ********

     node 0x600001db9500 {'auou' 'ahal' 'appl'}, 'U'
         inputs = 1
             (bus0, en1) <- (bus0) 0x600001d80b80, {'aumx' 'mcmx' 'appl'}, [ 2 ch,  48000 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]

     node 0x600001d80b80 {'aumx' 'mcmx' 'appl'}, 'U'
         inputs = 1
             (bus0, en1) <- (bus0) 0x600000fa0200, {'augn' 'sspl' 'appl'}, [ 2 ch,  48000 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
         outputs = 1
             (bus0, en1) -> (bus0) 0x600001db9500, {'auou' 'ahal' 'appl'}, [ 2 ch,  48000 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]

     node 0x600000fa0200 {'augn' 'sspl' 'appl'}, 'U'
         outputs = 1
             (bus0, en1) -> (bus0) 0x600001d80b80, {'aumx' 'mcmx' 'appl'}, [ 2 ch,  48000 Hz, 'lpcm' (0x00000029) 32-bit little-endian float, deinterleaved]
______________________________________