iOS: choppy audio playback with AudioQueue


I have the following code, which opens an audio queue to play back 16-bit PCM at 44100 Hz. It has a very strange quirk: once the initial buffers are filled, it plays back very quickly, then becomes "choppy" while it waits for more bytes to arrive over the network.

So either I have somehow messed up the code that copies a subrange of data into the buffer, or I have told the audio queue to play back faster than the data streams in over the network.

Does anyone have any ideas? I've been stuck on this for a couple of days.

//
// Created by Benjamin St Pierre on 15-01-02.
// Copyright (c) 2015 Lightning Strike Solutions. All rights reserved.
//

#import <MacTypes.h>
#import "MediaPlayer.h"


@implementation MediaPlayer


@synthesize sampleQueue;


void OutputBufferCallback(void *inUserData, AudioQueueRef inAQ, AudioQueueBufferRef inBuffer) {
    //Cast userData to MediaPlayer Objective-C class instance
    MediaPlayer *mediaPlayer = (__bridge MediaPlayer *) inUserData;
    // Fill buffer.
    [mediaPlayer fillAudioBuffer:inBuffer];
    // Re-enqueue buffer.
    OSStatus err = AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL);
    if (err != noErr)
        NSLog(@"AudioQueueEnqueueBuffer() error %d", (int) err);
}

- (void)fillAudioBuffer:(AudioQueueBufferRef)inBuffer {
    if (self.currentAudioPiece == nil || self.currentAudioPiece.duration >= self.currentAudioPieceIndex) {
        //grab latest sample from sample queue
        self.currentAudioPiece = sampleQueue.dequeue;
        self.currentAudioPieceIndex = 0;
    }

    //Check for empty sample queue
    if (self.currentAudioPiece == nil) {
        NSLog(@"Empty sample queue");
        memset(inBuffer->mAudioData, 0, kBufferByteSize);
        return;
    }

    UInt32 bytesToRead = inBuffer->mAudioDataBytesCapacity;

    while (bytesToRead > 0) {
        UInt32 maxBytesFromCurrentPiece = self.currentAudioPiece.audioData.length - self.currentAudioPieceIndex;
        //Take the min of what the current piece can provide OR what is needed to be read
        UInt32 bytesToReadNow = MIN(maxBytesFromCurrentPiece, bytesToRead);

        NSData *subRange = [self.currentAudioPiece.audioData subdataWithRange:NSMakeRange(self.currentAudioPieceIndex, bytesToReadNow)];
        //Copy what you can before continuing loop
        memcpy(inBuffer->mAudioData, subRange.bytes, subRange.length);
        bytesToRead -= bytesToReadNow;

        if (bytesToReadNow == maxBytesFromCurrentPiece) {
            @synchronized (sampleQueue) {
                self.currentAudioPiece = self.sampleQueue.dequeue;
                self.currentAudioPieceIndex = 0;
            }
        } else {
            self.currentAudioPieceIndex += bytesToReadNow;
        }
    }
    inBuffer->mAudioDataByteSize = kBufferByteSize;
}

- (void)startMediaPlayer {
    AudioStreamBasicDescription streamFormat;
    streamFormat.mFormatID = kAudioFormatLinearPCM;
    streamFormat.mSampleRate = 44100.0;
    streamFormat.mChannelsPerFrame = 2;
    streamFormat.mBytesPerFrame = 4;
    streamFormat.mFramesPerPacket = 1;
    streamFormat.mBytesPerPacket = 4;
    streamFormat.mBitsPerChannel = 16;
    streamFormat.mReserved = 0;
    streamFormat.mFormatFlags = kAudioFormatFlagIsPacked | kAudioFormatFlagIsSignedInteger;

    // New output queue
    OSStatus err = AudioQueueNewOutput(&streamFormat, OutputBufferCallback, (__bridge void *) self, nil, nil, 0, &outputQueue);
    if (err != noErr) {
        NSLog(@"AudioQueueNewOutput() error: %d", (int) err);
    }

    int i;
    // Enqueue buffers
    AudioQueueBufferRef buffer;
    for (i = 0; i < kNumberBuffers; i++) {
        err = AudioQueueAllocateBuffer(outputQueue, kBufferByteSize, &buffer);
        memset(buffer->mAudioData, 0, kBufferByteSize);
        buffer->mAudioDataByteSize = kBufferByteSize;
        if (err == noErr) {
            err = AudioQueueEnqueueBuffer(outputQueue, buffer, 0, nil);
            if (err != noErr) NSLog(@"AudioQueueEnqueueBuffer() error: %d", (int) err);
        } else {
            NSLog(@"AudioQueueAllocateBuffer() error: %d", (int) err);
            return;
        }
    }

    // Start queue
    err = AudioQueueStart(outputQueue, nil);
    if (err != noErr) NSLog(@"AudioQueueStart() error: %d", (int) err);
}

@end

I'm going to take a shot in the dark here and say you're getting choppy playback because you're not advancing the write position into your buffer: every pass through the loop copies over the start of `mAudioData` instead of appending after the bytes already written. I don't know Objective-C well enough to tell you whether this syntax is exactly right, but I think you need something like the following:

UInt32 bytesFilled = 0;  // running write offset into the buffer
while (bytesToRead > 0) {
    ....
    // Append at the current offset; don't modify inBuffer->mAudioData itself,
    // since the queue owns that pointer.
    memcpy((char *)inBuffer->mAudioData + bytesFilled, subRange.bytes, subRange.length);
    bytesFilled += bytesToReadNow;  // move the write position
    bytesToRead -= bytesToReadNow;
    ...
}

You're calling Objective-C code inside the render callback, which is generally not real-time safe and can cause glitches. In particular, the lock taken via `@synchronized` in `-fillAudioBuffer:` is definitely not real-time safe. On my first attempt with audio queues I got audible artifacts for two reasons: 1) I wasn't setting `buffer->mAudioDataByteSize` correctly in the callback, and 2) I was using too few buffers; increasing from 3 to 5 buffers fixed my last remaining audio stutter.