Playing iPhone audio in "speaker down" mode ("iPod" microphone)


It only plays through the earpiece (receiver).

I use Remote IO for playback:

OSStatus status;

// Describe audio component
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;

// Get component
AudioComponent inputComponent = AudioComponentFindNext(NULL, &desc);

// Get audio unit
status = AudioComponentInstanceNew(inputComponent, &audioUnit);

// Enable IO for recording
UInt32 flag = 1;
status = AudioUnitSetProperty(audioUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Input,
                              kInputBus,
                              &flag,
                              sizeof(flag));

// Enable IO for playback
status = AudioUnitSetProperty(audioUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Output,
                              kOutputBus,
                              &flag,
                              sizeof(flag));

// Describe format
audioFormat.mSampleRate       = 44100;
audioFormat.mFormatID         = kAudioFormatLinearPCM;
audioFormat.mFormatFlags      = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket  = 1;
audioFormat.mChannelsPerFrame = 1;
audioFormat.mBitsPerChannel   = 16;
audioFormat.mBytesPerPacket   = 2;
audioFormat.mBytesPerFrame    = 2;

// Apply format
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Output,
                              kInputBus,
                              &audioFormat,
                              sizeof(audioFormat));
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_StreamFormat,
                              kAudioUnitScope_Input,
                              kOutputBus,
                              &audioFormat,
                              sizeof(audioFormat));

// Set output (playback) callback
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = playbackCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_SetRenderCallback,
                              kAudioUnitScope_Output,
                              kOutputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));

// Set input (recording) callback
callbackStruct.inputProc = recordingCallback;
callbackStruct.inputProcRefCon = self;
status = AudioUnitSetProperty(audioUnit,
                              kAudioOutputUnitProperty_SetInputCallback,
                              kAudioUnitScope_Input,
                              kInputBus,
                              &callbackStruct,
                              sizeof(callbackStruct));

// Disable buffer allocation for the recorder (optional - do this if we want to pass in our own)
flag = 0;
status = AudioUnitSetProperty(audioUnit,
                              kAudioUnitProperty_ShouldAllocateBuffer,
                              kAudioUnitScope_Output,
                              kInputBus,
                              &flag,
                              sizeof(flag));
/* // TODO: Allocate our own buffers if we want */

// Initialise
status = AudioUnitInitialize(audioUnit);

AudioUnitSetParameter(audioUnit, kHALOutputParam_Volume,
                      kAudioUnitScope_Input, kInputBus,
                      1, 0);

Before playing the audio file, set the AVAudioSession category to AVAudioSessionCategoryPlayback:

NSError *error = nil;
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryPlayback error:&error];
// Activate the session
[audioSession setActive:YES error:&error];
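Note that AVAudioSessionCategoryPlayback does not permit recording, while the Remote IO unit in the question also enables the input bus. A sketch of an alternative, not stated in the original answer, is to use AVAudioSessionCategoryPlayAndRecord, which keeps the microphone available but routes output to the receiver by default, and to override the route to the loudspeaker:

```objc
NSError *error = nil;
AVAudioSession *session = [AVAudioSession sharedInstance];

// PlayAndRecord keeps the microphone available for the Remote IO input bus.
// DefaultToSpeaker re-routes output from the receiver to the loudspeaker.
[session setCategory:AVAudioSessionCategoryPlayAndRecord
         withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker
               error:&error];
[session setActive:YES error:&error];
```

On iOS versions before setCategory:withOptions:error: was available, the same re-routing can be done after activation with overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker.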

I have no idea what you are asking. Consider rewording your question and providing more detail about the actual problem. The code is great, but without context or an explanation it is useless.