iOS: Play audio from an array only once, without looping (tags: ios, core-audio, audiounit)


When it comes to audio programming I am a complete beginner, and right now I am playing with AudioUnit. I am following a tutorial and have ported the code to work with iOS 7. The problem is that I only want it to play the generated sine wave once, rather than keep playing the tone continuously, but I don't know how to do that.

The code that generates the audio samples:

    OSStatus RenderTone(
        void *inRefCon, 
        AudioUnitRenderActionFlags *ioActionFlags, 
        const AudioTimeStamp *inTimeStamp, 
        UInt32 inBusNumber, 
        UInt32 inNumberFrames, 
        AudioBufferList *ioData)

    {
        // Fixed amplitude is good enough for our purposes
        const double amplitude = 0.25;

        // Get the tone parameters out of the view controller
        ToneGeneratorViewController *viewController =
            (ToneGeneratorViewController *)inRefCon;
        double theta = viewController->theta;
        double theta_increment =
            2.0 * M_PI * viewController->frequency / viewController->sampleRate;

        // This is a mono tone generator so we only need the first buffer
        const int channel = 0;
        Float32 *buffer = (Float32 *)ioData->mBuffers[channel].mData;

        // Generate the samples
        for (UInt32 frame = 0; frame < inNumberFrames; frame++) 
        {
            buffer[frame] = sin(theta) * amplitude;

            theta += theta_increment;
            if (theta > 2.0 * M_PI)
            {
                theta -= 2.0 * M_PI;
            }
        }

        // Store the updated theta back in the view controller
        viewController->theta = theta;

        return noErr;
    }
Thanks!

The problem is that I only want it to play the generated sine wave once

You should stop the audio unit after a certain amount of time.

For instance, you could set up an NSTimer when you call AudioOutputUnitStart, and then call AudioOutputUnitStop (or rather, whatever code actually stops your audio unit) when the timer fires. Even more simply, you could use performSelector:withObject:afterDelay: and invoke your audio-unit-stopping method.

Hope this helps.

Thanks Sergio! So there is no way to hand it an array of floats and have it stop afterwards? Is the best approach to calculate how much time will elapse and then call AudioOutputUnitStop after that time?

Well, you could modify the RenderTone callback so that it only produces the amount of packets you need (as far as I can see, it currently produces an endless sine wave) and then feed the audio unit an empty buffer: you would hear the generated sine wave once and nothing afterwards, but the audio would still be on, using CPU, and so on. I think it would be cleaner (and easier) to calculate how long the sine wave should last and then stop the sound. A nice touch would also be fading the sound out gradually rather than stopping it abruptly...

Ah.. so would I keep a variable that tracks whether RenderTone has been called before, and if it has, fill mData with zeros? Correct me if I'm wrong, but the reason it loops over and over is that I registered RenderTone as a callback and it just keeps getting called repeatedly? Thanks so much @sergio!

As you said, RenderTone is called over and over, as the audio unit requires. It is not the case that one call corresponds to one full sine wave, definitely not. In fact, a single full sine wave would be inaudible. You would have to modify RenderTone so that it generates as much of the sine wave as you like and then returns zeros. But, as I suggested, stopping the audio unit is much easier: give performSelector:…afterDelay: a try, it will be a snap...
// Configure the search parameters to find the default playback output unit
// (called the kAudioUnitSubType_RemoteIO on iOS but
// kAudioUnitSubType_DefaultOutput on Mac OS X)
AudioComponentDescription defaultOutputDescription;
defaultOutputDescription.componentType = kAudioUnitType_Output;
defaultOutputDescription.componentSubType = kAudioUnitSubType_RemoteIO;
defaultOutputDescription.componentManufacturer = kAudioUnitManufacturer_Apple;
defaultOutputDescription.componentFlags = 0;
defaultOutputDescription.componentFlagsMask = 0;

// Get the default playback output unit
AudioComponent defaultOutput = AudioComponentFindNext(NULL, &defaultOutputDescription);
NSAssert(defaultOutput, @"Can't find default output");

// Create a new unit based on this that we'll use for output
OSErr err = AudioComponentInstanceNew(defaultOutput, &toneUnit);
NSAssert1(toneUnit, @"Error creating unit: %ld", err);

// Set our tone rendering function on the unit
AURenderCallbackStruct input;
input.inputProc = RenderTone;
input.inputProcRefCon = self;
err = AudioUnitSetProperty(toneUnit, 
    kAudioUnitProperty_SetRenderCallback, 
    kAudioUnitScope_Input,
    0, 
    &input, 
    sizeof(input));
NSAssert1(err == noErr, @"Error setting callback: %ld", err);

// Set the format to 32 bit, single channel, floating point, linear PCM
const int four_bytes_per_float = 4;
const int eight_bits_per_byte = 8;
AudioStreamBasicDescription streamFormat;
streamFormat.mSampleRate = sampleRate;
streamFormat.mFormatID = kAudioFormatLinearPCM;
streamFormat.mFormatFlags =
    kAudioFormatFlagsNativeFloatPacked | kAudioFormatFlagIsNonInterleaved;
streamFormat.mBytesPerPacket = four_bytes_per_float;
streamFormat.mFramesPerPacket = 1;    
streamFormat.mBytesPerFrame = four_bytes_per_float;        
streamFormat.mChannelsPerFrame = 1;    
streamFormat.mBitsPerChannel = four_bytes_per_float * eight_bits_per_byte;
err = AudioUnitSetProperty (toneUnit,
    kAudioUnitProperty_StreamFormat,
    kAudioUnitScope_Input,
    0,
    &streamFormat,
    sizeof(AudioStreamBasicDescription));
NSAssert1(err == noErr, @"Error setting stream format: %ld", err);