iPhone iOS, AudioUnits recording to a local URL


In my iPhone app, I want to record sound produced internally by my own app, as opposed to recording external sound captured by the microphone. Another way of saying it: I want to record the sound directly off the sound card as it plays. From there, I want to save the newly recorded sound file to a specified local URL. I posted a similar question earlier. I've read a few tutorials and some code, but there are a few things I need help with. Here is my code:

Header file

OSStatus status;
#define kOutputBus 0
#define kInputBus 1

static AudioComponentInstance audioUnit;

static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp,
                                  UInt32 inBusNumber,
                                  UInt32 inNumberFrames,
                                  AudioBufferList *ioData) {

    // TODO: Use inRefCon to access our interface object to do stuff
    // Then, use inNumberFrames to figure out how much data is available, and make
    // that much space available in buffers in an AudioBufferList.

    AudioBufferList *bufferList; // <- Fill this up with buffers (you will want to malloc it, as it's a dynamic-length list)

    // Then:
    // Obtain recorded samples

    OSStatus status;

    status = AudioUnitRender([audioInterface audioUnit],
                             ioActionFlags,
                             inTimeStamp,
                             inBusNumber,
                             inNumberFrames,
                             bufferList);
    checkStatus(status);

    // Now, we have the samples we just read sitting in buffers in bufferList
    DoStuffWithTheRecordedAudio(bufferList);
    return noErr;
}

static OSStatus playbackCallback(void *inRefCon,
                                 AudioUnitRenderActionFlags *ioActionFlags,
                                 const AudioTimeStamp *inTimeStamp,
                                 UInt32 inBusNumber,
                                 UInt32 inNumberFrames,
                                 AudioBufferList *ioData) {
    // Notes: ioData contains buffers (may be more than one!)
    // Fill them up as much as you can. Remember to set the size value in each buffer to match how
    // much data is in the buffer.
    return noErr;
}


void initializeInternalAudioRecorder() {
    AudioStreamBasicDescription audioFormat; // this is currently declared as a local variable; try making it a global variable if it doesn't work
    OSStatus status;
    AudioComponentInstance audioUnit;


    // Describe audio component
    AudioComponentDescription desc;
    desc.componentType = kAudioUnitType_Output;
    desc.componentSubType = kAudioUnitSubType_RemoteIO;
    desc.componentFlags = 0;
    desc.componentFlagsMask = 0;
    desc.componentManufacturer = kAudioUnitManufacturer_Apple;

    // Get component
    AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

    // Get audio units
    status = AudioComponentInstanceNew(inputComponent, &audioUnit);
    checkStatus(status);

    // Enable IO for recording
    UInt32 flag = 1;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Input,
                                  kInputBus,
                                  &flag,
                                  sizeof(flag));
    checkStatus(status);

    // Enable IO for playback
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_EnableIO,
                                  kAudioUnitScope_Output,
                                  kOutputBus,
                                  &flag,
                                  sizeof(flag));
    checkStatus(status);

    // Describe format
    audioFormat.mSampleRate         = 44100.00;
    audioFormat.mFormatID           = kAudioFormatLinearPCM;
    audioFormat.mFormatFlags        = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    audioFormat.mFramesPerPacket    = 1;
    audioFormat.mChannelsPerFrame   = 1;
    audioFormat.mBitsPerChannel     = 16;
    audioFormat.mBytesPerPacket     = 2;
    audioFormat.mBytesPerFrame      = 2;

    // Apply format
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Output,
                                  kInputBus,
                                  &audioFormat,
                                  sizeof(audioFormat));
    checkStatus(status);
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_StreamFormat,
                                  kAudioUnitScope_Input,
                                  kOutputBus,
                                  &audioFormat,
                                  sizeof(audioFormat));
    checkStatus(status);


    // Set input callback
    AURenderCallbackStruct callbackStruct;
    callbackStruct.inputProc = recordingCallback;
    callbackStruct.inputProcRefCon = self;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioOutputUnitProperty_SetInputCallback,
                                  kAudioUnitScope_Global,
                                  kInputBus,
                                  &callbackStruct,
                                  sizeof(callbackStruct));
    checkStatus(status);

    // Set output callback
    callbackStruct.inputProc = playbackCallback;
    callbackStruct.inputProcRefCon = self;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_SetRenderCallback,
                                  kAudioUnitScope_Global,
                                  kOutputBus,
                                  &callbackStruct,
                                  sizeof(callbackStruct));
    checkStatus(status);

    // Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
    flag = 0;
    status = AudioUnitSetProperty(audioUnit,
                                  kAudioUnitProperty_ShouldAllocateBuffer,
                                  kAudioUnitScope_Output,
                                  kInputBus,
                                  &flag,
                                  sizeof(flag));

    // TODO: Allocate our own buffers if we want

    // Initialise
    status = AudioUnitInitialize(audioUnit);
    checkStatus(status);
}

-(void)startInternalRecorder {
    OSStatus status = AudioOutputUnitStart(audioUnit);
    checkStatus(status);
}

-(void)stopInternalRecorder {
    OSStatus status = AudioOutputUnitStop(audioUnit);
    checkStatus(status);
    AudioComponentInstanceDispose(audioUnit);
}

Hmm, the code you copied/pasted from seems very incomplete. I'd be careful with it. :) Also, the code you pasted doesn't seem to have kept the structure it should have.

In any case, audioFormat should be declared as a local variable, and its type is AudioStreamBasicDescription. As for the top part of your code (that is, everything above the recordingCallback function declaration): it is actually an initialization function, although the original author wasn't explicit about that, so the code needs to be wrapped up like this:

void initializeMyStuff() {
  // Describe audio component
  AudioComponentDescription desc;
  desc.componentType = kAudioUnitType_Output;

  ... lots more code ...

  // Initialise
  status = AudioUnitInitialize(audioUnit);
  checkStatus(status);
} // <-- you were missing this end bracket, which caused the compilation errors

static OSStatus recordingCallback(void *inRefCon,
                                  AudioUnitRenderActionFlags *ioActionFlags,
                                  const AudioTimeStamp *inTimeStamp, ... etc
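
As a side note, the bufferList that recordingCallback renders into still has to be allocated somewhere; here is a minimal sketch for the mono 16-bit format used above (this helper is illustrative and not part of the original answer):

#include <stdlib.h>   // for malloc

// Allocate an AudioBufferList holding one mono buffer of 16-bit samples,
// sized for inNumberFrames frames; both pointers must be freed when done.
static AudioBufferList *createRecordingBufferList(UInt32 inNumberFrames) {
    AudioBufferList *bufferList = malloc(sizeof(AudioBufferList));
    bufferList->mNumberBuffers = 1;
    bufferList->mBuffers[0].mNumberChannels = 1;
    bufferList->mBuffers[0].mDataByteSize = inNumberFrames * sizeof(SInt16);
    bufferList->mBuffers[0].mData = malloc(inNumberFrames * sizeof(SInt16));
    return bufferList;
}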

Yes, that makes sense. So how do I save the audio file created by the recording to a specified URL on disk? I can create the URL; with AVAudioRecorder I would just initialize the recorder with the URL where I want the file saved, but it doesn't look like I can do that here... Also, is it still fine to declare OSStatus status; and AudioComponentInstance audioUnit; in the header file, or do I need to declare them at the top of initializeMyStuff()?

audioUnit probably needs to be global (in which case it should be declared extern in the header file and then actually defined at the top of the .c file).

OK, I've updated the code in my question. I declared audioUnit extern in the header file, but I don't know how to define it at the top of the .c file. I'm not actually using a .c file; my whole program is Objective-C apart from this part.

In the header file you have extern AudioComponentInstance audioUnit;, and at the top of the implementation file (which I imagine doesn't have to be pure C) you have AudioComponentInstance audioUnit;. If that doesn't work, try removing the declaration from the header file and just put static AudioComponentInstance audioUnit; at the top of the implementation file.
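
In other words, the two options being suggested look like this (the file names are hypothetical):

// Option 1: one shared global
// MyRecorder.h
extern AudioComponentInstance audioUnit;
// MyRecorder.m -- the single actual definition
AudioComponentInstance audioUnit;

// Option 2: keep it private to the implementation file instead
// MyRecorder.m
static AudioComponentInstance audioUnit;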

Based on your comment below, it sounds like you want to record the audio and keep it on the device. That's natural enough given the way the API is declared, but it may be a little misleading in your question, since a URL suggests you want to save it to a server somewhere. If you just want to save the file locally, AVAudioRecorder is a much simpler way to do it.

I do only want to save the file locally, but the reason I can't use AVAudioRecorder is that it only records external sound (for example, through the microphone). I want to record internal sound directly off the sound card on the iPhone. For example, if someone is listening to something with headphones on, I want to be able to record whatever sound the app plays to the headphones, without taking the headphones off.

My mistake, I didn't read your question carefully enough.
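
For the part of the thread that never got a concrete answer, namely writing the rendered samples to a file at a local URL, here is a minimal, untested sketch using the ExtAudioFile API from AudioToolbox. The helper names (openRecordingFile, closeRecordingFile), the global writer handle, and the choice of a CAF container are assumptions, not code from the original thread:

#include <AudioToolbox/ExtendedAudioFile.h>

static ExtAudioFileRef recordingFile = NULL; // hypothetical writer handle

// Open a writer at the desired local file URL before starting the unit.
// The format passed in must match the stream format set on the IO unit.
void openRecordingFile(CFURLRef fileURL, const AudioStreamBasicDescription *format) {
    OSStatus status = ExtAudioFileCreateWithURL(fileURL,
                                                kAudioFileCAFType, // CAF can hold 16-bit PCM directly
                                                format,
                                                NULL,
                                                kAudioFileFlags_EraseFile,
                                                &recordingFile);
    checkStatus(status);

    // Describe the format of the buffers we will hand to the writer
    status = ExtAudioFileSetProperty(recordingFile,
                                     kExtAudioFileProperty_ClientDataFormat,
                                     sizeof(*format),
                                     format);
    checkStatus(status);

    // Prime the async writer once from this (non-render) thread, as the
    // ExtendedAudioFile header recommends before calling it from a
    // render callback
    ExtAudioFileWriteAsync(recordingFile, 0, NULL);
}

// In recordingCallback, after AudioUnitRender succeeds, append the samples;
// ExtAudioFileWriteAsync is designed to be callable from the render thread:
//
//     ExtAudioFileWriteAsync(recordingFile, inNumberFrames, bufferList);

// When recording stops, close the writer so the file is flushed to disk.
void closeRecordingFile(void) {
    ExtAudioFileDispose(recordingFile);
    recordingFile = NULL;
}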