
EZAudio custom AudioStreamBasicDescription not working as expected

I want to create a mono, as-lightweight-as-possible audioBufferList. In the past I had 46 bytes per audio buffer, although with a relatively small buffer duration. First, if I use the AudioStreamBasicDescription below for both input and output,

AudioStreamBasicDescription audioFormat;
audioFormat.mBitsPerChannel   = 8 * sizeof(AudioUnitSampleType);
audioFormat.mBytesPerFrame    = sizeof(AudioUnitSampleType);
audioFormat.mBytesPerPacket   = sizeof(AudioUnitSampleType);
audioFormat.mChannelsPerFrame = 2;
audioFormat.mFormatFlags      = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
audioFormat.mFormatID         = kAudioFormatLinearPCM;
audioFormat.mFramesPerPacket  = 1;
audioFormat.mSampleRate       = 44100;
with TPCircularBuffer as the transporter, I then get two buffers in the bufferList, each with an mDataByteSize of 4096, which is definitely too much. So I tried to use my previous ASBD:

audioFormat.mSampleRate         = 8000.00;
audioFormat.mFormatID           = kAudioFormatLinearPCM;
audioFormat.mFormatFlags        = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
audioFormat.mFramesPerPacket    = 1;
audioFormat.mChannelsPerFrame   = 1;
audioFormat.mBitsPerChannel     = 8;
audioFormat.mBytesPerPacket     = 1;
audioFormat.mBytesPerFrame      = 1;
Now mDataByteSize is 128 and I get only one buffer, but TPCircularBuffer cannot handle this properly; I suppose that is because I want to use only one channel. So at the moment I have dropped TPCircularBuffer and tried to encode and decode the bytes to NSData, or, just for testing, to pass the AudioBufferList straight through, but even with the first AudioStreamBasicDescription the sound is far too distorted.
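If I understand the callback maths correctly, the byte count per buffer is just the frames per callback times mBytesPerFrame, with one buffer per channel for non-interleaved data. A rough sketch of that arithmetic (the 1024-frame slice at 44.1 kHz is only an assumption; the real value depends on the audio session's IO buffer duration):

#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

// Rough sanity check of the observed mDataByteSize values.
static void LogExpectedBufferSizes(void)
{
    // Stereo canonical format: 4-byte AudioUnitSampleType, non-interleaved,
    // so two buffers of frames * 4 bytes each. With an assumed 1024-frame
    // slice that is 4096 bytes per buffer.
    UInt32 stereoFrames = 1024;
    UInt32 stereoBytes  = (UInt32)(stereoFrames * sizeof(AudioUnitSampleType)); // 4096

    // 8 kHz, 8-bit mono format: 1 byte per frame, one buffer. The observed
    // 128 bytes would correspond to a 128-frame slice.
    UInt32 monoFrames = 128;
    UInt32 monoBytes  = monoFrames * 1; // 128

    NSLog(@"expected stereo buffer: %u bytes, mono buffer: %u bytes",
          (unsigned int)stereoBytes, (unsigned int)monoBytes);
}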

My current code:

-(void)initMicrophone{

    AudioStreamBasicDescription audioFormat;
    //*
     audioFormat.mBitsPerChannel   = 8 * sizeof(AudioUnitSampleType);
     audioFormat.mBytesPerFrame    = sizeof(AudioUnitSampleType);
     audioFormat.mBytesPerPacket   = sizeof(AudioUnitSampleType);
     audioFormat.mChannelsPerFrame = 2;
     audioFormat.mFormatFlags      = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
     audioFormat.mFormatID         = kAudioFormatLinearPCM;
     audioFormat.mFramesPerPacket  = 1;
     audioFormat.mSampleRate       = 44100;

     /*/
    audioFormat.mSampleRate         = 8000.00;
    audioFormat.mFormatID           = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags        = kAudioFormatFlagsCanonical | kAudioFormatFlagIsNonInterleaved;
    audioFormat.mFramesPerPacket    = 1;
    audioFormat.mChannelsPerFrame   = 1;
    audioFormat.mBitsPerChannel     = 8;
    audioFormat.mBytesPerPacket     = 1;
    audioFormat.mBytesPerFrame      = 1;

    //*/


    _microphone = [EZMicrophone microphoneWithDelegate:self withAudioStreamBasicDescription:audioFormat];

    _output = [EZOutput outputWithDataSource:self withAudioStreamBasicDescription:audioFormat];
    [EZAudio circularBuffer:&_cBuffer withSize:128];
}

-(void)startSending{
    [_microphone startFetchingAudio];
    [_output startPlayback];
}

-(void)stopSending{
    [_microphone stopFetchingAudio];
    [_output stopPlayback];
}

-(void)microphone:(EZMicrophone *)microphone
 hasAudioReceived:(float **)buffer
   withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels{
    dispatch_async(dispatch_get_main_queue(), ^{
    });
}

-(void)microphone:(EZMicrophone *)microphone
    hasBufferList:(AudioBufferList *)bufferList
   withBufferSize:(UInt32)bufferSize
withNumberOfChannels:(UInt32)numberOfChannels{
    //*
    abufferlist = bufferList;
    /*/
    audioBufferData = [NSData dataWithBytes:bufferList->mBuffers[0].mData length:bufferList->mBuffers[0].mDataByteSize];
    //*/
    dispatch_async(dispatch_get_main_queue(), ^{
    });
}
-(AudioBufferList*)output:(EZOutput *)output needsBufferListWithFrames:(UInt32)frames withBufferSize:(UInt32 *)bufferSize{
    //*
    return abufferlist;
    /*/
     //    int bSize = 128;
     //    AudioBuffer audioBuffer;
     //    audioBuffer.mNumberChannels = 1;
     //    audioBuffer.mDataByteSize = bSize;
     //    audioBuffer.mData = malloc(bSize);
     ////    [audioBufferData getBytes:audioBuffer.mData length:bSize];
     //    memcpy(audioBuffer.mData, [audioBufferData bytes], bSize);
     //
     //
     //    AudioBufferList *bufferList = [EZAudio audioBufferList];
     //    bufferList->mNumberBuffers = 1;
     //    bufferList->mBuffers[0] = audioBuffer;
     //
     //    return bufferList;
    //*/


}
I know that the value of bSize in output:needsBufferListWithFrames:withBufferSize: may have changed.
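For instance, here is a sketch of how that data source method could size its buffer from the frames argument instead of a hard-coded 128 (this assumes the 8-bit mono format above, that audioBufferData is the NSData captured in the microphone callback, and the same [EZAudio audioBufferList] helper as in my commented-out code; untested):

-(AudioBufferList*)output:(EZOutput *)output needsBufferListWithFrames:(UInt32)frames withBufferSize:(UInt32 *)bufferSize{
    // 1 byte per frame for the 8-bit mono ASBD above.
    UInt32 bytesNeeded = frames * 1;

    AudioBufferList *bufferList = [EZAudio audioBufferList];
    bufferList->mNumberBuffers = 1;
    bufferList->mBuffers[0].mNumberChannels = 1;
    bufferList->mBuffers[0].mDataByteSize = bytesNeeded;
    bufferList->mBuffers[0].mData = malloc(bytesNeeded); // would need to be freed/reused in real code

    // Copy whatever captured audio is available and zero-fill the rest.
    memset(bufferList->mBuffers[0].mData, 0, bytesNeeded);
    NSUInteger available = MIN((NSUInteger)bytesNeeded, [audioBufferData length]);
    if (available > 0) {
        memcpy(bufferList->mBuffers[0].mData, [audioBufferData bytes], available);
    }

    *bufferSize = bytesNeeded;
    return bufferList;
}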


My main goal is to create mono buffers that are as lightweight as possible, encode them to NSData and decode them for the output. Could you tell me what I am doing wrong?

I ran into the same problem, moved to AVAudioRecorder and set the parameters I needed. I kept EZAudio (EZMicrophone) for the audio visualization. Here is a link on how to achieve that:
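A minimal sketch of the kind of AVAudioRecorder setup meant here (the file name and the mono/8 kHz/16-bit values are example settings, not the exact parameters from the link):

#import <AVFoundation/AVFoundation.h>

// Record lightweight mono linear PCM to a temporary file.
NSURL *url = [NSURL fileURLWithPath:
    [NSTemporaryDirectory() stringByAppendingPathComponent:@"recording.caf"]];
NSDictionary *settings = @{ AVFormatIDKey          : @(kAudioFormatLinearPCM),
                            AVSampleRateKey        : @8000.0,
                            AVNumberOfChannelsKey  : @1,
                            AVLinearPCMBitDepthKey : @16 };
NSError *error = nil;
AVAudioRecorder *recorder = [[AVAudioRecorder alloc] initWithURL:url
                                                        settings:settings
                                                           error:&error];
if (recorder && !error) {
    [recorder prepareToRecord];
    [recorder record];
}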