iOS Core Audio: how to get samples from an AudioBuffer with interleaved audio


I have read an audio file into an AudioBufferList using the ExtAudioFileRead function.

Here is the ASBD of the audio:

AudioStreamBasicDescription importFormat;

importFormat.mFormatID          = kAudioFormatLinearPCM;
importFormat.mFormatFlags       = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
importFormat.mBytesPerPacket    = 4;
importFormat.mFramesPerPacket   = 1;
importFormat.mBytesPerFrame     = 4;
importFormat.mChannelsPerFrame  = 2;
importFormat.mBitsPerChannel    = 16;
importFormat.mSampleRate = [[AVAudioSession sharedInstance] sampleRate];
So we get two channels of interleaved audio, 16-bit signed integers per channel.

AudioBufferList init:

UInt32 *audioData = (UInt32 *) calloc (totalFramesInFile, sizeof (UInt32));

AudioBufferList *bufferList;
bufferList = (AudioBufferList *) malloc (sizeof (AudioBufferList));

// buffers amount is 1 because audio is interleaved
bufferList->mNumberBuffers = 1;

bufferList->mBuffers[0].mNumberChannels  = 2;
bufferList->mBuffers[0].mDataByteSize    = totalFramesInFile * sizeof(UInt32);
bufferList->mBuffers[0].mData            = audioData;
And read it into the buffer:

CheckError(ExtAudioFileRead (
                             audioFileObject,
                             &numberOfPacketsToRead,
                             bufferList), "error ExtAudioFileRead");
audioFileObject is an instance of ExtAudioFileRef, set up earlier in code that I have not pasted here to save space.
What I am trying to accomplish is to modify the audio samples in the render callback:

OSStatus MyCallback (void *inRefCon,
                 AudioUnitRenderActionFlags *ioActionFlags,
                 const AudioTimeStamp *inTimeStamp,
                 UInt32 inBusNumber,
                 UInt32 inNumberFrames,
                 AudioBufferList *ioData){


    ViewController *view = (__bridge ViewController *) inRefCon;

    soundStruct *soundStruct = (soundStruct *) &view->mys;

    SInt64            frameTotalForSound        = soundStruct->frameCount;

    soundStruct->isPlaying = true;

    UInt32 *audioData   = soundStruct->audioData;

    UInt32 sampleNumber = soundStruct->sampleNumber;

    for( int i = 0; i < ioData->mNumberBuffers; i++){

        AudioBuffer buffer = ioData->mBuffers[i];
        UInt32 *frameBuffer = buffer.mData;

        for(UInt32 frame = 0; frame < inNumberFrames; frame++) {

            // here I fill the buffer with my audio data.
            // i need to get left and right channel samples 
            // from  audioData[sampleNumber], modify them
            // and write into frameBuffer 

            frameBuffer[frame] = audioData[sampleNumber];

            sampleNumber++;

            if(sampleNumber > frameTotalForSound) {
                soundStruct->isPlaying = false;
                AudioOutputUnitStop(soundStruct->outputUnit);
            }
        }
    }

    soundStruct->sampleNumber = sampleNumber;

    return noErr;

}

Is it possible to get the SInt16 left- and right-channel samples from the UInt32 audioData array?

Make both audioData and frameBuffer SInt16s:

SInt16 *audioData;
// ...
SInt16 *frameBuffer;
Your buffer size calculations should then be n * 2 * sizeof(SInt16), and you will need to change your soundStruct or add type casts.

Then you can access the interleaved samples like so:

frameBuffer[0] = modify(audioData[0]);    // left sample 1
frameBuffer[1] = modify(audioData[1]);    // right sample 1
frameBuffer[2] = modify(audioData[2]);    // left sample 2
frameBuffer[3] = modify(audioData[3]);    // right sample 2
// ...
frameBuffer[2*(n-1)] = modify(audioData[2*(n-1)]);    // left sample n
frameBuffer[2*(n-1)+1] = modify(audioData[2*(n-1)+1]); // right sample n

@Rhythmatic Fistman, thanks a lot, that helped. But I couldn't just set frameBuffer that way: the sound came out distorted. I guess that is because the AudioUnit expects both channels' data within one frame, or maybe there is another explanation.

Here is my modified code; I hope it helps someone:

audioData init:

SInt16 *audioData = (SInt16 *) malloc (sizeof(SInt16) * totalFramesInFile * 2);

AudioBufferList *bufferList;
bufferList = (AudioBufferList *) malloc (sizeof (AudioBufferList));

// buffers amount is 1 because audio is interleaved
bufferList->mNumberBuffers = 1;

bufferList->mBuffers[0].mNumberChannels  = 2;
bufferList->mBuffers[0].mDataByteSize    = totalFramesInFile * 2 * sizeof(SInt16);
bufferList->mBuffers[0].mData            = audioData;
Modified render callback:

OSStatus MyCallback (void *inRefCon,
             AudioUnitRenderActionFlags *ioActionFlags,
             const AudioTimeStamp *inTimeStamp,
             UInt32 inBusNumber,
             UInt32 inNumberFrames,
             AudioBufferList *ioData)
{
    ViewController *view = (__bridge ViewController *) inRefCon;

    soundStruct *soundStruct  = (soundStruct *) &view->mys;

    SInt64 frameTotalForSound = soundStruct->frameCount;

    soundStruct->isPlaying = true;

    SInt16 *audioData   = soundStruct->audioData;

    UInt32 sampleNumber = soundStruct->sampleNumber;

    for( int i = 0; i < ioData->mNumberBuffers; i++){
        AudioBuffer buffer = ioData->mBuffers[i];
        SInt16 *frameBuffer = (SInt16 *) buffer.mData;

        for(UInt32 frame = 0; frame < inNumberFrames * 2; frame+=2) {

            /* .. some samples modification code .. */

            // left channel
            frameBuffer[frame] = audioData[sampleNumber];
            // right channel
            frameBuffer[frame + 1] = audioData[sampleNumber + 1];

            sampleNumber +=2;

            if(sampleNumber > frameTotalForSound * 2) {
                soundStruct->isPlaying = false;
                AudioOutputUnitStop(soundStruct->outputUnit);
            }
        }
    }

    soundStruct->sampleNumber = sampleNumber;
    return noErr;
}

I'm not sure I understand your question: your program is producing interleaved UInt32-format audio data, and you want to convert it to non-interleaved SInt16 for real-time playback?

@user3078414, not exactly. I don't want to convert at all. I have an audio file with interleaved audio and SInt16 values per channel. The goal is to write it into an AudioBufferList and keep it in memory. When the RemoteIO AudioUnit requests the next portion of the sound, this render callback fires, and I want to modify each channel's samples in real time. Since there are 2 channels, I store the samples in a UInt32 array. I'm not sure that was the right call, because I don't know how to extract the per-channel samples later in the callback.

Here is my take on a similar but slightly different problem: output written to a file needs an interleaved buffer, while the AU produces non-interleaved ones. I also saw the answer to your question. Be careful with index multiplication in the innermost real-time loop; [2*(n-1)+1] helps the clarity of the demonstration, but I would rather use [n+n-1]. (:

Why not change frameBuffer to SInt16? That way you don't need to create the pairs illegally.

Done :) I modified the source code. Now there is no distortion and everything seems fine. Thanks for your help.

It is very selfless of you to post the complete solution to your own question. Similar questions come up from time to time; I'm sure this will help. – JangoFett