iOS: How can I modify this AudioUnit code so that it has stereo output?

Tags: ios, core-audio, audiounit, audiotoolbox

I can't seem to find what I'm looking for in the documentation. This code works well, but I want stereo output:

- (void)createToneUnit
{
    // Configure the search parameters to find the default playback output unit
    // (called the kAudioUnitSubType_RemoteIO on iOS but
    // kAudioUnitSubType_DefaultOutput on Mac OS X)
    AudioComponentDescription defaultOutputDescription;
    defaultOutputDescription.componentType = kAudioUnitType_Output;
    defaultOutputDescription.componentSubType = kAudioUnitSubType_RemoteIO;
    defaultOutputDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
    defaultOutputDescription.componentFlags = 0;
    defaultOutputDescription.componentFlagsMask = 0;

    // Get the default playback output unit
    AudioComponent defaultOutput = AudioComponentFindNext(NULL, &defaultOutputDescription);
    NSAssert(defaultOutput, @"Can't find default output");

    // Create a new unit based on this that we'll use for output
    OSStatus err = AudioComponentInstanceNew(defaultOutput, &_toneUnit);
    NSAssert1(_toneUnit, @"Error creating unit: %d", err);

    // Set our tone rendering function on the unit
    AURenderCallbackStruct input;
    input.inputProc = RenderTone;
    input.inputProcRefCon = (__bridge void*)self;
    err = AudioUnitSetProperty(_toneUnit,
                               kAudioUnitProperty_SetRenderCallback,
                               kAudioUnitScope_Input,
                               0,
                               &input,
                               sizeof(input));
    NSAssert1(err == noErr, @"Error setting callback: %d", err);

    // Set the format to 32 bit, single channel, floating point, linear PCM
    const int four_bytes_per_float = 4;
    const int eight_bits_per_byte = 8;
    AudioStreamBasicDescription streamFormat;
    streamFormat.mSampleRate = kSampleRate;
    streamFormat.mFormatID = kAudioFormatLinearPCM;
    streamFormat.mFormatFlags =
    kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
    streamFormat.mBytesPerPacket = four_bytes_per_float;
    streamFormat.mFramesPerPacket = 1;
    streamFormat.mBytesPerFrame = four_bytes_per_float;
    streamFormat.mChannelsPerFrame = 1;
    streamFormat.mBitsPerChannel = four_bytes_per_float * eight_bits_per_byte;
    err = AudioUnitSetProperty (_toneUnit,
                                kAudioUnitProperty_StreamFormat,
                                kAudioUnitScope_Input,
                                0,
                                &streamFormat,
                                sizeof(AudioStreamBasicDescription));
    NSAssert1(err == noErr, @"Error setting stream format: %dd", err);
}
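
For what it's worth, stereo here seems to need only a change to the channel count: because kAudioFormatFlagIsNonInterleaved is set, mBytesPerPacket and mBytesPerFrame describe a single channel and stay at four bytes. A minimal sketch (stereoFormat is just an illustrative name):

    // Stereo variant: non-interleaved, so the per-channel sizes above are unchanged
    AudioStreamBasicDescription stereoFormat = streamFormat;
    stereoFormat.mChannelsPerFrame = 2; // one AudioBuffer per channel in the callback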
Here is the callback:

OSStatus RenderTone(void                        *inRefCon,
                    AudioUnitRenderActionFlags  *ioActionFlags,
                    const AudioTimeStamp        *inTimeStamp,
                    UInt32                       inBusNumber,
                    UInt32                       inNumberFrames,
                    AudioBufferList             *ioData)
{
    // Get the tone parameters out of the view controller
    VWWSynthesizerC *synth = (__bridge VWWSynthesizerC *)inRefCon;
    double theta = synth.theta;
    double theta_increment = 2.0 * M_PI * synth.frequency / kSampleRate;

    // This is a mono tone generator so we only need the first buffer
    const int channel = 0;
    Float32 *buffer = (Float32 *)ioData->mBuffers[channel].mData;

    // Generate the samples
    for (UInt32 frame = 0; frame < inNumberFrames; frame++)
    {
        if(synth.muted){
            buffer[frame] = 0;
        }
        else{
            switch(synth.waveType){
                case VWWWaveTypeSine:{
                    buffer[frame] = sin(theta) * synth.amplitude;
                    break;
                }
                case VWWWaveTypeSquare:{
                    buffer[frame] = square(theta) * synth.amplitude;
                    break;
                }
                case VWWWaveTypeSawtooth:{
                    buffer[frame] = sawtooth(theta) * synth.amplitude;
                    break;
                }
                case VWWWaveTypeTriangle:{
                    buffer[frame] = triangle(theta) * synth.amplitude;
                    break;
                }
                default:
                    break;

            }
        }
        theta += theta_increment;
        if (theta > 2.0 * M_PI)
        {
            theta -= 2.0 * M_PI;
        }
    }

    synth.theta = theta;

    return noErr;
}
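
With mChannelsPerFrame = 2 and the non-interleaved flag, the callback's ioData carries one AudioBuffer per channel, and both must be filled. A minimal sketch of how the generation loop could change, shown for the sine case (the same waveType switch as above would slot in where sin(theta) appears):

    // Non-interleaved stereo: mBuffers[0] is left, mBuffers[1] is right
    Float32 *left  = (Float32 *)ioData->mBuffers[0].mData;
    Float32 *right = (Float32 *)ioData->mBuffers[1].mData;
    for (UInt32 frame = 0; frame < inNumberFrames; frame++)
    {
        Float32 sample = synth.muted ? 0 : sin(theta) * synth.amplitude;
        left[frame]  = sample; // same signal on both channels;
        right[frame] = sample; // scale each side separately to pan
        theta += theta_increment;
        if (theta > 2.0 * M_PI) theta -= 2.0 * M_PI;
    }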

If there is a different or better way to render this data, I'm open to suggestions. I'm rendering sine, square, triangle, sawtooth, etc. waves.

Comments:

I made the following change, but I don't know how to correctly fill the buffers in the render callback: streamFormat.mChannelsPerFrame = 2;

By stereo, do you mean the same signal output through both RemoteIO channels, or two different signals, each through its own RemoteIO output channel?

I want one signal for the left ear and one for the right, completely separated. The idea is to use this to drive some electronics over a serial interface: if I can separate the channels into left/right, I can use one as a clock and the other as data. At least that's the idea.

Did you ever find a stereo solution?

Yes, but I haven't done much with it. I think my problem was really just left/right panning, and I decided to use mono in my project. I do have a branch with a stereo implementation, though; you can find the relevant files here: . If you grab the project and run the unit tests, I believe it's in working order. Good luck, and let me know if you make any progress.
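
For the clock/data idea raised in the comments, a hypothetical fragment for inside the same stereo render callback (clockTheta, clockIncrement, and nextDataSample are made-up names; square() is the question's own helper):

    // Hypothetical: left channel carries a square-wave clock,
    // right channel carries arbitrary data samples in [-1, 1]
    Float32 *left  = (Float32 *)ioData->mBuffers[0].mData;
    Float32 *right = (Float32 *)ioData->mBuffers[1].mData;
    for (UInt32 frame = 0; frame < inNumberFrames; frame++)
    {
        left[frame]  = square(clockTheta) * synth.amplitude; // clock
        right[frame] = nextDataSample();                     // data (made-up helper)
        clockTheta += clockIncrement;
        if (clockTheta > 2.0 * M_PI) clockTheta -= 2.0 * M_PI;
    }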