iOS application crashes when headphones are plugged in or unplugged


I am running a SIP audio streaming app on iOS 6.1.3, on an iPad 2 and a new iPad.

I start my app on the iPad (nothing plugged in).
Audio works.
I plug in the headset.
The app crashes: malloc: error for object 0x...: pointer being freed was not allocated, or EXC_BAD_ACCESS.

Or:

I start my app on the iPad (headset plugged in).
Audio comes out of the headset.
I unplug the headset.
The app crashes: malloc: error for object 0x...: pointer being freed was not allocated, or EXC_BAD_ACCESS.

The application code uses the Audio Unit API, based on sample code (see below).

I use the kAudioSessionProperty_AudioRouteChange callback to be made aware of the change. So the OS sound manager ends up with three callbacks:
1) one that processes the recorded microphone samples
2) one that provides samples for the speaker
3) one that is notified about the audio hardware / route changes (see the sketch right after this list)
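
For reference, here is a sketch of how that third callback can tell a plug-in from an unplug via the reason dictionary iOS passes for kAudioSessionProperty_AudioRouteChange (the listener name is hypothetical; my real listener is in the full code below):

static void routeChangeListener(void *inClientData,
    AudioSessionPropertyID inID,
    UInt32 inDataSize,
    const void *inData) {

    if (inID != kAudioSessionProperty_AudioRouteChange) return;

    // For this property, inData is a CFDictionaryRef describing the change.
    CFDictionaryRef routeChangeDict = (CFDictionaryRef)inData;
    CFNumberRef reasonRef = (CFNumberRef)CFDictionaryGetValue(routeChangeDict,
        CFSTR(kAudioSession_AudioRouteChangeKey_Reason));

    SInt32 reason = 0;
    if (reasonRef) CFNumberGetValue(reasonRef, kCFNumberSInt32Type, &reason);

    if (reason == kAudioSessionRouteChangeReason_NewDeviceAvailable) {
        // e.g. the headset was just plugged in
    } else if (reason == kAudioSessionRouteChangeReason_OldDeviceUnavailable) {
        // e.g. the headset was just unplugged
    }
}

// Registered the same way as the listener in the full code below:
// AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange,
//     routeChangeListener, NULL);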

After a lot of testing, my feeling is that the tricky code is the one performing the microphone capture. After the plug-in/plug-out action, the recording callback is usually still invoked a few times before the RouteChange callback gets called; this later causes a "segmentation fault", and the RouteChange callback is never invoked. More specifically, I think the AudioUnitRender function performs a "bad memory access" without raising any exception.

My feeling is that the non-atomic recording-callback code races against the OS updating the structures related to the sound device. So the less atomic the recording callback is, the more likely the OS hardware update and the recording callback end up running concurrently.

I modified the code to make the recording callback as lean as possible, but my feeling is that the high processing load from the other threads of my app is provoking the concurrency race described above. So the malloc/free errors show up in other parts of the code as a consequence of the AudioUnitRender bad access.
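
Here is a sketch of the direction I mean by "lean" (not my production callback; gCaptureBuffer and kMaxFrames are hypothetical names): the capture buffer is allocated once, so the real-time thread never calls malloc/free, and a failed AudioUnitRender simply drops the slice instead of processing possibly invalid data.

enum { kMaxFrames = 4096 };
static void *gCaptureBuffer = NULL;  // malloc(kMaxFrames * 2) once in -init, freed in -dealloc

static OSStatus leanRecordingCallback(void *inRefCon,
    AudioUnitRenderActionFlags *ioActionFlags,
    const AudioTimeStamp *inTimeStamp,
    UInt32 inBusNumber,
    UInt32 inNumberFrames,
    AudioBufferList *ioData) {

    if (gCaptureBuffer == NULL || inNumberFrames > kMaxFrames)
        return noErr;                                  // nothing safe to do, drop this slice

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0].mNumberChannels = 1;
    bufferList.mBuffers[0].mDataByteSize = inNumberFrames * 2;   // 16-bit mono samples
    bufferList.mBuffers[0].mData = gCaptureBuffer;               // reused, never freed here

    OSStatus status = AudioUnitRender([iosAudio audioUnit], ioActionFlags,
        inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
    if (status != noErr) return status;                // drop the slice on a failed render

    [iosAudio processAudio:&bufferList];               // must itself stay non-blocking
    return noErr;
}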

I tried to reduce the recording callback latency with:

UInt32 numFrames = 256;
UInt32 dataSize = sizeof(numFrames);

AudioUnitSetProperty(audioUnit,
    kAudioUnitProperty_MaximumFramesPerSlice,
    kAudioUnitScope_Global,
    0,
    &numFrames,
    dataSize);
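
As far as I understand, kAudioUnitProperty_MaximumFramesPerSlice only caps how many frames a single render call may ask for; the interval at which the callbacks fire is driven by the session's I/O buffer duration. A small sketch for checking what the hardware actually granted, using the same (now deprecated) AudioSession API as the rest of this code:

Float32 preferredDuration = 0.020f;    // 20 ms, the value requested in -init below
AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
    sizeof(preferredDuration), &preferredDuration);

Float32 grantedDuration = 0.0f;
UInt32 durationSize = sizeof(grantedDuration);
AudioSessionGetProperty(kAudioSessionProperty_CurrentHardwareIOBufferDuration,
    &durationSize, &grantedDuration);
NSLog(@"Requested %f s per I/O cycle, hardware granted %f s",
    preferredDuration, grantedDuration);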
I also tried moving the offending code onto the main queue:

dispatch_async(dispatch_get_main_queue(), ^{
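
(A fuller illustration of that wrapping, with an assumed body, since the original snippet is truncated:)

dispatch_async(dispatch_get_main_queue(), ^{
    // Example body: run the route override outside the audio I/O thread.
    UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
    AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
        sizeof(audioRouteOverride), &audioRouteOverride);
});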
Does anyone have a suggestion or a solution for this?

In order to reproduce the error, here is my audio session code:

//
//  IosAudioController.m
//  Aruts
//
//  Created by Simon Epskamp on 10/11/10.
//  Copyright 2010 __MyCompanyName__. All rights reserved.
//

#import "IosAudioController.h"
#import <AudioToolbox/AudioToolbox.h>

#define kOutputBus 0
#define kInputBus 1

IosAudioController* iosAudio;

void checkStatus(int status) {
    if (status) {
        printf("Status not 0! %d\n", status);
        // exit(1);
    }
}

/**
 * This callback is called when new audio data from the microphone is available.
 */
static OSStatus recordingCallback(void *inRefCon, 
    AudioUnitRenderActionFlags *ioActionFlags, 
    const AudioTimeStamp *inTimeStamp, 
    UInt32 inBusNumber, 
    UInt32 inNumberFrames, 
    AudioBufferList *ioData) {

    // Because of the way our audio format (setup below) is chosen:
    // we only need 1 buffer, since it is mono
    // Samples are 16 bits = 2 bytes.
    // 1 frame includes only 1 sample

    AudioBuffer buffer;

    buffer.mNumberChannels = 1;
    buffer.mDataByteSize = inNumberFrames * 2;
    buffer.mData = malloc( inNumberFrames * 2 );

    // Put buffer in a AudioBufferList
    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 1;
    bufferList.mBuffers[0] = buffer;

    NSLog(@"Recording Callback 1 0x%x ? 0x%x",buffer.mData, 
        bufferList.mBuffers[0].mData);

    // Then:
    // Obtain recorded samples

    OSStatus status;
    status = AudioUnitRender([iosAudio audioUnit],
        ioActionFlags, 
        inTimeStamp,
        inBusNumber,
        inNumberFrames,
        &bufferList);
        checkStatus(status);

    // Now, we have the samples we just read sitting in buffers in bufferList
    // Process the new data
    [iosAudio processAudio:&bufferList];

    NSLog(@"Recording Callback 2 0x%x ? 0x%x",buffer.mData, 
        bufferList.mBuffers[0].mData);

    // release the malloc'ed data in the buffer we created earlier
    free(bufferList.mBuffers[0].mData);

    return noErr;
}

/**
 * This callback is called when the audioUnit needs new data to play through the
 * speakers. If you don't have any, just don't write anything in the buffers
 */
static OSStatus playbackCallback(void *inRefCon, 
    AudioUnitRenderActionFlags *ioActionFlags, 
    const AudioTimeStamp *inTimeStamp, 
    UInt32 inBusNumber, 
    UInt32 inNumberFrames, 
    AudioBufferList *ioData) {
        // Notes: ioData contains buffers (may be more than one!)
        // Fill them up as much as you can.
        // Remember to set the size value in each 
        // buffer to match how much data is in the buffer.

    for (int i=0; i < ioData->mNumberBuffers; i++) {
        // in practice we will only ever have 1 buffer, since audio format is mono
        AudioBuffer buffer = ioData->mBuffers[i];

        // NSLog(@"  Buffer %d has %d channels and wants %d bytes of data.", i, 
            buffer.mNumberChannels, buffer.mDataByteSize);

        // copy temporary buffer data to output buffer
        UInt32 size = MIN(buffer.mDataByteSize,
            [iosAudio tempBuffer].mDataByteSize);

        // dont copy more data then we have, or then fits
        memcpy(buffer.mData, [iosAudio tempBuffer].mData, size);
        // indicate how much data we wrote in the buffer
        buffer.mDataByteSize = size;

        // uncomment to hear random noise
        /*
         * UInt16 *frameBuffer = buffer.mData;
         * for (int j = 0; j < inNumberFrames; j++) {
         *     frameBuffer[j] = rand();
         * }
         */
    }

    return noErr;
}

@implementation IosAudioController
@synthesize audioUnit, tempBuffer;

void propListener(void *inClientData,
    AudioSessionPropertyID inID,
    UInt32 inDataSize,
    const void *inData) {

    if (inID == kAudioSessionProperty_AudioRouteChange) {

        UInt32 isAudioInputAvailable;
        UInt32 size = sizeof(isAudioInputAvailable);
        CFStringRef newRoute;
        size = sizeof(CFStringRef);

        AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &size, &newRoute);

        if (newRoute) {
            CFIndex length = CFStringGetLength(newRoute);
            CFIndex maxSize = CFStringGetMaximumSizeForEncoding(length,
                kCFStringEncodingUTF8);

            char *buffer = (char *)malloc(maxSize);
            CFStringGetCString(newRoute, buffer, maxSize,
                kCFStringEncodingUTF8);

            //CFShow(newRoute);
            printf("New route is %s\n",buffer);

            if (CFStringCompare(newRoute, CFSTR("HeadsetInOut"), NULL) == 
                kCFCompareEqualTo) // headset plugged in
            {
                printf("Headset\n");
            } else {
                printf("Another device\n");

                UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
                AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
                    sizeof (audioRouteOverride),&audioRouteOverride);
            }
            printf("New route is %s\n",buffer);
            free(buffer);
        }
        newRoute = nil;
    } 
}

/**
 * Initialize the audioUnit and allocate our own temporary buffer.
 * The temporary buffer will hold the latest data coming in from the microphone,
 * and will be copied to the output when this is requested.
 */
- (id) init {
    self = [super init];
    OSStatus status;

    // Initialize and configure the audio session
    AudioSessionInitialize(NULL, NULL, NULL, self);

    UInt32 audioCategory = kAudioSessionCategory_PlayAndRecord;
    AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, 
        sizeof(audioCategory), &audioCategory);
    AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange, 
        propListener, self);

    Float32 preferredBufferSize = .020;
    AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration, 
        sizeof(preferredBufferSize), &preferredBufferSize);

    AudioSessionSetActive(true);

    // Describe audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = 
        kAudioUnitSubType_VoiceProcessingIO/*kAudioUnitSubType_RemoteIO*/;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Get component
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // Get audio units
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);
    checkStatus(status);

    // Enable IO for recording
    UInt32 flag = 1;
    status = AudioUnitSetProperty(audioUnit,
        kAudioOutputUnitProperty_EnableIO, 
        kAudioUnitScope_Input, 
        kInputBus,
        &flag, 
        sizeof(flag));
        checkStatus(status);

    // Enable IO for playback
    flag = 1;
    status = AudioUnitSetProperty(audioUnit, 
        kAudioOutputUnitProperty_EnableIO, 
        kAudioUnitScope_Output, 
        kOutputBus,
        &flag, 
        sizeof(flag));

    checkStatus(status);

    // Describe format
    AudioStreamBasicDescription audioFormat;
    audioFormat.mSampleRate = 8000.00;
    //audioFormat.mSampleRate = 44100.00;
    audioFormat.mFormatID = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags = 
        kAudioFormatFlagsCanonical/* kAudioFormatFlagIsSignedInteger | 
        kAudioFormatFlagIsPacked*/;
    audioFormat.mFramesPerPacket = 1;
    audioFormat.mChannelsPerFrame = 1;
    audioFormat.mBitsPerChannel = 16;
    audioFormat.mBytesPerPacket = 2;
    audioFormat.mBytesPerFrame = 2;

    // Apply format
    status = AudioUnitSetProperty(audioUnit, 
        kAudioUnitProperty_StreamFormat, 
        kAudioUnitScope_Output, 
        kInputBus, 
        &audioFormat, 
        sizeof(audioFormat));

    checkStatus(status);
    status = AudioUnitSetProperty(audioUnit, 
        kAudioUnitProperty_StreamFormat, 
        kAudioUnitScope_Input, 
        kOutputBus, 
        &audioFormat, 
        sizeof(audioFormat));

    checkStatus(status);


    // Set input callback
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = recordingCallback;
    callbackStruct.inputProcRefCon = self;
    status = AudioUnitSetProperty(audioUnit,
        kAudioOutputUnitProperty_SetInputCallback, 
        kAudioUnitScope_Global, 
        kInputBus, 
        &callbackStruct, 
        sizeof(callbackStruct));

    checkStatus(status);
    // Set output callback
    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = self;
    status = AudioUnitSetProperty(audioUnit,
        kAudioUnitProperty_SetRenderCallback, 
        kAudioUnitScope_Global, 
        kOutputBus,
        &callbackStruct, 
        sizeof(callbackStruct));

    checkStatus(status);

    // Disable buffer allocation for the recorder (optional - do this if we want to 
    // pass in our own)

    flag = 0;
    status = AudioUnitSetProperty(audioUnit, 
        kAudioUnitProperty_ShouldAllocateBuffer,
        kAudioUnitScope_Output, 
        kInputBus,
        &flag, 
        sizeof(flag)); 


    flag = 0;
    status = AudioUnitSetProperty(audioUnit,
    kAudioUnitProperty_ShouldAllocateBuffer, 
        kAudioUnitScope_Output,
        kOutputBus,
        &flag,
        sizeof(flag));

    // Allocate our own buffers (1 channel, 16 bits per sample, thus 16 bits per
    // frame, thus 2 bytes per frame).
    // In practice the buffers used contain 512 frames; if this changes it is
    // handled in processAudio.
    tempBuffer.mNumberChannels = 1;
    tempBuffer.mDataByteSize = 512 * 2;
    tempBuffer.mData = malloc( 512 * 2 );

    // Initialise
    status = AudioUnitInitialize(audioUnit);
    checkStatus(status);

    return self;
}

/**
 * Start the audioUnit. This means data will be provided from
 * the microphone, and requested for feeding to the speakers, by
 * use of the provided callbacks.
 */
- (void) start {
    OSStatus status = AudioOutputUnitStart(audioUnit);
    checkStatus(status);
}

/**
 * Stop the audioUnit
 */
- (void) stop {
    OSStatus status = AudioOutputUnitStop(audioUnit);
    checkStatus(status);
}

/**
 * Change this function to decide what is done with incoming
 * audio data from the microphone.
 * Right now we copy it to our own temporary buffer.
 */
- (void) processAudio: (AudioBufferList*) bufferList {
    AudioBuffer sourceBuffer = bufferList->mBuffers[0];

    // fix tempBuffer size if it's the wrong size
    if (tempBuffer.mDataByteSize != sourceBuffer.mDataByteSize) {
        free(tempBuffer.mData);
        tempBuffer.mDataByteSize = sourceBuffer.mDataByteSize;
        tempBuffer.mData = malloc(sourceBuffer.mDataByteSize);
    }

    // copy incoming audio data to temporary buffer
    memcpy(tempBuffer.mData, bufferList->mBuffers[0].mData, 
        bufferList->mBuffers[0].mDataByteSize);
    usleep(1000000); // <- TO REPRODUCE THE ERROR, CONCURRENCY MORE LIKELY

}

/**
 * Clean up.
 */
- (void) dealloc {
    AudioUnitUninitialize(audioUnit);
    free(tempBuffer.mData);
    [super dealloc];
}

@end

The call that ultimately seems to trigger the crash is the route override inside the property listener:

AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
    sizeof (audioRouteOverride), &audioRouteOverride);

Changing audio session properties while the AudioUnit chain is running is fragile. If you stop the audio unit before handling the route change and start it again afterwards, it can finish whatever it has in the pipe and then continue with the new parameters:
void propListener(void *inClientData,
              AudioSessionPropertyID inID,
              UInt32 inDataSize,
              const void *inData) {

[iosAudio stop];
// ...

[iosAudio start];
}

If losing the buffers that are in flight at the moment of the route change is a problem, a flag can additionally make both render callbacks ignore data while the unit is stopped:

static BOOL isStopped = NO;
static OSStatus recordingCallback(void *inRefCon, //...
{
  if(isStopped) {
    NSLog(@"Stopped, ignoring");
    return noErr;
  }
  // ...
}

static OSStatus playbackCallback(void *inRefCon, //...
{
  if(isStopped) {
    NSLog(@"Stopped, ignoring");
    return noErr;
  }
  // ...
}

// ...

/**
 * Start the audioUnit. This means data will be provided from
 * the microphone, and requested for feeding to the speakers, by
 * use of the provided callbacks.
 */
- (void) start {
    OSStatus status = AudioOutputUnitStart(audioUnit);
    checkStatus(status);

    isStopped = NO;
}

/**
 * Stop the audioUnit
 */
- (void) stop {

    isStopped = YES;

    OSStatus status = AudioOutputUnitStop(audioUnit);
    checkStatus(status);
}

// ...
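
Putting the two pieces together, the route-change listener from the original code would then look roughly like this (a sketch; whether the speaker override is still wanted after the change depends on the app):

void propListener(void *inClientData,
    AudioSessionPropertyID inID,
    UInt32 inDataSize,
    const void *inData) {

    if (inID != kAudioSessionProperty_AudioRouteChange) return;

    [iosAudio stop];

    // Inspect the new route and apply any override while the unit is stopped, e.g.:
    UInt32 audioRouteOverride = kAudioSessionOverrideAudioRoute_Speaker;
    AudioSessionSetProperty(kAudioSessionProperty_OverrideAudioRoute,
        sizeof(audioRouteOverride), &audioRouteOverride);

    [iosAudio start];
}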