iOS Audio Converter packet count error


I have set up a class that converts audio from one format to another, given input and output AudioStreamBasicDescriptions. Converting linear PCM from the microphone to iLBC works: when I feed it 1024 packets from the AudioUnitRender function, it gives me back 6 packets. I then send those 226 bytes over UDP to the same app running on a different device. The problem is that when I use the same class to convert back to linear PCM to feed into the audio unit input, the AudioConverterFillComplexBuffer function doesn't produce 1024 packets, it produces 960... which means the audio unit input expects 4096 bytes (2048 x 2 for stereo) but I can only give it around 3190 bytes, so it sounds really crackly and distorted.

Surely if I give the AudioConverter 1024 packets of linear PCM, convert them to iLBC, and then back to linear PCM, I should still end up with 1024 packets?

The Audio Converter function:

-(void) doConvert {

    // Start converting
    if (converting) return;
    converting = YES;

    while (true) {

        // Get next buffer
        id bfr = [buffers getNextBuffer];
        if (!bfr) {
            converting = NO;
            return;
        }

        // Get info
        NSArray* bfrs = ([bfr isKindOfClass:[NSArray class]] ? bfr : @[bfr]);
        int bfrSize = 0;
        for (NSData* dat in bfrs) bfrSize += dat.length;

        int inputPackets = bfrSize / self.inputFormat.mBytesPerPacket;
        int outputPackets = (inputPackets * self.inputFormat.mFramesPerPacket) / self.outputFormat.mFramesPerPacket;

        // Create output buffer
        AudioBufferList* bufferList = (AudioBufferList*) malloc(sizeof(AudioBufferList) * self.outputFormat.mChannelsPerFrame);
        bufferList -> mNumberBuffers = self.outputFormat.mChannelsPerFrame;
        for (int i = 0 ; i < self.outputFormat.mChannelsPerFrame ; i++) {
            bufferList -> mBuffers[i].mNumberChannels = 1;
            bufferList -> mBuffers[i].mDataByteSize = 4*1024;
            bufferList -> mBuffers[i].mData = malloc(bufferList -> mBuffers[i].mDataByteSize);
        }

        // Create input buffer
        AudioBufferList* inputBufferList = (AudioBufferList*) malloc(sizeof(AudioBufferList) * bfrs.count);
        inputBufferList -> mNumberBuffers = bfrs.count;
        for (int i = 0 ; i < bfrs.count ; i++) {
            inputBufferList -> mBuffers[i].mNumberChannels = 1;
            inputBufferList -> mBuffers[i].mDataByteSize = [[bfrs objectAtIndex:i] length];
            inputBufferList -> mBuffers[i].mData = (void*) [[bfrs objectAtIndex:i] bytes];
        }

        // Create sound data payload
        struct SoundDataPayload payload;
        payload.data = inputBufferList;
        payload.numPackets = inputPackets;
        payload.packetDescriptions = NULL;
        payload.used = NO;

        // Convert data
        UInt32 numPackets = outputPackets;
        OSStatus err = AudioConverterFillComplexBuffer(converter, acvConverterComplexInput, &payload, &numPackets, bufferList, NULL);
        if (err)
            continue;

        // Check how to output
        if (bufferList -> mNumberBuffers > 1) {

            // Output as array
            NSMutableArray* array = [NSMutableArray arrayWithCapacity:bufferList -> mNumberBuffers];
            for (int i = 0 ; i < bufferList -> mNumberBuffers ; i++)
                [array addObject:[NSData dataWithBytes:bufferList -> mBuffers[i].mData length:bufferList -> mBuffers[i].mDataByteSize]];

            // Save
            [convertedBuffers addBuffer:array];

        } else {

            // Output as data
            NSData* newData = [NSData dataWithBytes:bufferList -> mBuffers[0].mData length:bufferList -> mBuffers[0].mDataByteSize];

            // Save
            [convertedBuffers addBuffer:newData];

        }

        // Free memory
        for (int i = 0 ; i < bufferList -> mNumberBuffers ; i++)
            free(bufferList -> mBuffers[i].mData);

        free(inputBufferList);
        free(bufferList);

        // Tell delegate
        if (self.convertHandler)
            //dispatch_async(dispatch_get_main_queue(), self.convertHandler);
            self.convertHandler();

    }

}
Formats when converting to iLBC:

// Get input format from mic
UInt32 size = sizeof(AudioStreamBasicDescription);
AudioStreamBasicDescription inputDesc;
AudioUnitGetProperty(self.ioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 1, &inputDesc, &size);

// Set output stream description
size = sizeof(AudioStreamBasicDescription);
AudioStreamBasicDescription outputDescription;
memset(&outputDescription, 0, size);
outputDescription.mFormatID         = kAudioFormatiLBC;
OSStatus err = AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &size, &outputDescription);

Formats when converting from iLBC:

// Set input stream description
size = sizeof(AudioStreamBasicDescription);
AudioStreamBasicDescription inputDescription;
memset(&inputDescription, 0, size);
inputDescription.mFormatID        = kAudioFormatiLBC;
AudioFormatGetProperty(kAudioFormatProperty_FormatInfo, 0, NULL, &size, &inputDescription);

// Set output stream description
UInt32 size = sizeof(AudioStreamBasicDescription);
AudioStreamBasicDescription outputDesc;
AudioUnitGetProperty(unit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &outputDesc, &size);

You have to use an intermediate buffer to save up enough bytes from enough incoming packets to exactly match the number requested by the audio unit input. Relying on any single UDP packet of compressed data being exactly the right size won't work.


The audio converter may buffer samples and change the packet sizes depending on the compressed format.

By the way, this code is running in its own dispatch queue... OK... how would I go about doing that? I get a bunch of NSDatas out of the converter (or NSArrays, if there are two channels); how do I build the intermediate buffer? I added a preferredOutBufferSize field to my converter class, which can now emit buffers of any requested size, and it works now, thanks...