How can I record a conversation / phone call on iOS?

Theoretically, is it possible to record a phone call on an iPhone?

I'm accepting answers that:

  • may or may not require the phone to be jailbroken
  • may or may not pass Apple's guidelines due to the use of private APIs (I don't care; it is not for the App Store)
  • may or may not use private SDKs
I don't want answers that just bluntly say "Apple does not allow that."
I know there is no official way of doing it, and certainly not for an App Store app, and I know there are call-recording apps that place outgoing calls through their own servers.

Apple does not allow it and provides no API for it.


However, on jailbroken devices I believe it's possible. In fact, I think it has already been done. I remember seeing an app, back when my phone was jailbroken, that changed your voice and recorded the call — I recall it was a US company offering it only in the States. Unfortunately I can't remember the name...

The only solution I can think of is to use a framework — more specifically, one of its properties — to intercept when a call comes in, then record the voice of the person using the phone (and maybe a little of the voice of the party on the other line) with a recorder. This is obviously not perfect, and only works if your application is in the foreground at the time of the call, but it may be the best you can get. For more on finding out whether there is an incoming call, see here:

EDIT:

.h:

AppDelegate.m:

// Keeps the recording going even when the app is in the background,
// though the system only grants roughly 10 minutes for a background task.
- (void)applicationDidEnterBackground:(UIApplication *)application
{
    __block UIBackgroundTaskIdentifier task = 0;
    task = [application beginBackgroundTaskWithExpirationHandler:^{
        NSLog(@"Expiration handler called %f", [application backgroundTimeRemaining]);
        [application endBackgroundTask:task];
        task = UIBackgroundTaskInvalid;
    }];
}
This is my first time using these features, so I'm not sure it's completely right, but I think you get the idea. Untested, as I don't have access to the right tools at the moment. Compiled using these sources:


    • I guess some hardware could solve this: connected to the mini-jack port, earbuds and a microphone passing through a small voice recorder. The recorder could be very simple. While not in a call, it could feed the data/recording back to the phone (over the jack cable). A simple start button (like the volume controls on the earbuds) would suffice to time the recording.

      Some setups

      • Yes. It has been done by a developer named Limneos (and done quite well). You can find it on Cydia. It can record any kind of call on the iPhone 5 and up without using any servers or the like. The call is placed on the device as an audio file. The iPhone 4S is supported too, but speaker-only.

        This was the first tweak ever to manage to record both streams of audio without using any third-party servers, VOIP, or anything similar.

        The developer placed beeps on the other side of the call to alert the person being recorded, but those were removed by hackers around the net as well. To answer your question: yes, it's very much possible, and not just theoretically.

        Further reading


          • Here you go. A complete working example. The tweak should be loaded into the mediaserverd daemon. It will record every phone call in /var/mobile/Media/DCIM/result.m4a. The audio file has two channels: the left is the microphone, the right is the speaker. On the iPhone 4S the call is recorded only when the speaker is on; on the iPhone 5, 5C, and 5S calls are recorded either way. Small hiccups may occur when switching to/from the speaker, but the recording will continue.

            #import <AudioToolbox/AudioToolbox.h>
            #import <libkern/OSAtomic.h>
            
            //CoreTelephony.framework
            extern "C" CFStringRef const kCTCallStatusChangeNotification;
            extern "C" CFStringRef const kCTCallStatus;
            extern "C" id CTTelephonyCenterGetDefault();
            extern "C" void CTTelephonyCenterAddObserver(id ct, void* observer, CFNotificationCallback callBack, CFStringRef name, void *object, CFNotificationSuspensionBehavior sb);
            extern "C" int CTGetCurrentCallCount();
            enum
            {
                kCTCallStatusActive = 1,
                kCTCallStatusHeld = 2,
                kCTCallStatusOutgoing = 3,
                kCTCallStatusIncoming = 4,
                kCTCallStatusHanged = 5
            };
            
            NSString* kMicFilePath = @"/var/mobile/Media/DCIM/mic.caf";
            NSString* kSpeakerFilePath = @"/var/mobile/Media/DCIM/speaker.caf";
            NSString* kResultFilePath = @"/var/mobile/Media/DCIM/result.m4a";
            
            OSSpinLock phoneCallIsActiveLock = 0;
            OSSpinLock speakerLock = 0;
            OSSpinLock micLock = 0;
            
            ExtAudioFileRef micFile = NULL;
            ExtAudioFileRef speakerFile = NULL;
            
            BOOL phoneCallIsActive = NO;
            
            void Convert()
            {
                //File URLs
                CFURLRef micUrl = CFURLCreateWithFileSystemPath(NULL, (CFStringRef)kMicFilePath, kCFURLPOSIXPathStyle, false);
                CFURLRef speakerUrl = CFURLCreateWithFileSystemPath(NULL, (CFStringRef)kSpeakerFilePath, kCFURLPOSIXPathStyle, false);
                CFURLRef mixUrl = CFURLCreateWithFileSystemPath(NULL, (CFStringRef)kResultFilePath, kCFURLPOSIXPathStyle, false);
            
                ExtAudioFileRef micFile = NULL;
                ExtAudioFileRef speakerFile = NULL;
                ExtAudioFileRef mixFile = NULL;
            
                //Opening input files (speaker and mic)
                ExtAudioFileOpenURL(micUrl, &micFile);
                ExtAudioFileOpenURL(speakerUrl, &speakerFile);
            
                //Reading input file audio format (mono LPCM)
                AudioStreamBasicDescription inputFormat, outputFormat;
                UInt32 descSize = sizeof(inputFormat);
                ExtAudioFileGetProperty(micFile, kExtAudioFileProperty_FileDataFormat, &descSize, &inputFormat);
                int sampleSize = inputFormat.mBytesPerFrame;
            
                //Filling input stream format for output file (stereo LPCM)
                FillOutASBDForLPCM(inputFormat, inputFormat.mSampleRate, 2, inputFormat.mBitsPerChannel, inputFormat.mBitsPerChannel, true, false, false);
            
                //Filling output file audio format (AAC)
                memset(&outputFormat, 0, sizeof(outputFormat));
                outputFormat.mFormatID = kAudioFormatMPEG4AAC;
                outputFormat.mSampleRate = 8000;
                outputFormat.mFormatFlags = kMPEG4Object_AAC_Main;
                outputFormat.mChannelsPerFrame = 2;
            
                //Opening output file
                ExtAudioFileCreateWithURL(mixUrl, kAudioFileM4AType, &outputFormat, NULL, kAudioFileFlags_EraseFile, &mixFile);
                ExtAudioFileSetProperty(mixFile, kExtAudioFileProperty_ClientDataFormat, sizeof(inputFormat), &inputFormat);
            
                //Freeing URLs
                CFRelease(micUrl);
                CFRelease(speakerUrl);
                CFRelease(mixUrl);
            
                //Setting up audio buffers
                int bufferSizeInSamples = 64 * 1024;
            
                AudioBufferList micBuffer;
                micBuffer.mNumberBuffers = 1;
                micBuffer.mBuffers[0].mNumberChannels = 1;
                micBuffer.mBuffers[0].mDataByteSize = sampleSize * bufferSizeInSamples;
                micBuffer.mBuffers[0].mData = malloc(micBuffer.mBuffers[0].mDataByteSize);
            
                AudioBufferList speakerBuffer;
                speakerBuffer.mNumberBuffers = 1;
                speakerBuffer.mBuffers[0].mNumberChannels = 1;
                speakerBuffer.mBuffers[0].mDataByteSize = sampleSize * bufferSizeInSamples;
                speakerBuffer.mBuffers[0].mData = malloc(speakerBuffer.mBuffers[0].mDataByteSize);
            
                AudioBufferList mixBuffer;
                mixBuffer.mNumberBuffers = 1;
                mixBuffer.mBuffers[0].mNumberChannels = 2;
                mixBuffer.mBuffers[0].mDataByteSize = sampleSize * bufferSizeInSamples * 2;
                mixBuffer.mBuffers[0].mData = malloc(mixBuffer.mBuffers[0].mDataByteSize);
            
                //Converting
                while (true)
                {
                    //Reading data from input files
                    UInt32 framesToRead = bufferSizeInSamples;
                    ExtAudioFileRead(micFile, &framesToRead, &micBuffer);
                    ExtAudioFileRead(speakerFile, &framesToRead, &speakerBuffer);
                    if (framesToRead == 0)
                    {
                        break;
                    }
            
                    //Building interleaved stereo buffer - left channel is mic, right - speaker
                    for (int i = 0; i < framesToRead; i++)
                    {
                        memcpy((char*)mixBuffer.mBuffers[0].mData + i * sampleSize * 2, (char*)micBuffer.mBuffers[0].mData + i * sampleSize, sampleSize);
                        memcpy((char*)mixBuffer.mBuffers[0].mData + i * sampleSize * 2 + sampleSize, (char*)speakerBuffer.mBuffers[0].mData + i * sampleSize, sampleSize);
                    }
            
                    //Writing to output file - LPCM will be converted to AAC
                    ExtAudioFileWrite(mixFile, framesToRead, &mixBuffer);
                }
            
                //Closing files
                ExtAudioFileDispose(micFile);
                ExtAudioFileDispose(speakerFile);
                ExtAudioFileDispose(mixFile);
            
                //Freeing audio buffers
                free(micBuffer.mBuffers[0].mData);
                free(speakerBuffer.mBuffers[0].mData);
                free(mixBuffer.mBuffers[0].mData);
            }
            
            void Cleanup()
            {
                [[NSFileManager defaultManager] removeItemAtPath:kMicFilePath error:NULL];
                [[NSFileManager defaultManager] removeItemAtPath:kSpeakerFilePath error:NULL];
            }
            
            void CoreTelephonyNotificationCallback(CFNotificationCenterRef center, void *observer, CFStringRef name, const void *object, CFDictionaryRef userInfo)
            {
                NSDictionary* data = (NSDictionary*)userInfo;
            
                if ([(NSString*)name isEqualToString:(NSString*)kCTCallStatusChangeNotification])
                {
                    int currentCallStatus = [data[(NSString*)kCTCallStatus] integerValue];
            
                    if (currentCallStatus == kCTCallStatusActive)
                    {
                        OSSpinLockLock(&phoneCallIsActiveLock);
                        phoneCallIsActive = YES;
                        OSSpinLockUnlock(&phoneCallIsActiveLock);
                    }
                    else if (currentCallStatus == kCTCallStatusHanged)
                    {
                        if (CTGetCurrentCallCount() > 0)
                        {
                            return;
                        }
            
                        OSSpinLockLock(&phoneCallIsActiveLock);
                        phoneCallIsActive = NO;
                        OSSpinLockUnlock(&phoneCallIsActiveLock);
            
                        //Closing mic file
                        OSSpinLockLock(&micLock);
                        if (micFile != NULL)
                        {
                            ExtAudioFileDispose(micFile);
                        }
                        micFile = NULL;
                        OSSpinLockUnlock(&micLock);
            
                        //Closing speaker file
                        OSSpinLockLock(&speakerLock);
                        if (speakerFile != NULL)
                        {
                            ExtAudioFileDispose(speakerFile);
                        }
                        speakerFile = NULL;
                        OSSpinLockUnlock(&speakerLock);
            
                        Convert();
                        Cleanup();
                    }
                }
            }
            
            OSStatus(*AudioUnitProcess_orig)(AudioUnit unit, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inNumberFrames, AudioBufferList *ioData);
            OSStatus AudioUnitProcess_hook(AudioUnit unit, AudioUnitRenderActionFlags *ioActionFlags, const AudioTimeStamp *inTimeStamp, UInt32 inNumberFrames, AudioBufferList *ioData)
            {
                OSSpinLockLock(&phoneCallIsActiveLock);
                if (phoneCallIsActive == NO)
                {
                    OSSpinLockUnlock(&phoneCallIsActiveLock);
                    return AudioUnitProcess_orig(unit, ioActionFlags, inTimeStamp, inNumberFrames, ioData);
                }
                OSSpinLockUnlock(&phoneCallIsActiveLock);
            
                ExtAudioFileRef* currentFile = NULL;
                OSSpinLock* currentLock = NULL;
            
                AudioComponentDescription unitDescription = {0};
                AudioComponentGetDescription(AudioComponentInstanceGetComponent(unit), &unitDescription);
                //'agcc', 'mbdp' - iPhone 4S, iPhone 5
                //'agc2', 'vrq2' - iPhone 5C, iPhone 5S
                if (unitDescription.componentSubType == 'agcc' || unitDescription.componentSubType == 'agc2')
                {
                    currentFile = &micFile;
                    currentLock = &micLock;
                }
                else if (unitDescription.componentSubType == 'mbdp' || unitDescription.componentSubType == 'vrq2')
                {
                    currentFile = &speakerFile;
                    currentLock = &speakerLock;
                }
            
                if (currentFile != NULL)
                {
                    OSSpinLockLock(currentLock);
            
                    //Opening file
                    if (*currentFile == NULL)
                    {
                        //Obtaining input audio format
                        AudioStreamBasicDescription desc;
                        UInt32 descSize = sizeof(desc);
                        AudioUnitGetProperty(unit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &desc, &descSize);
            
                        //Opening audio file
                        CFURLRef url = CFURLCreateWithFileSystemPath(NULL, (CFStringRef)((currentFile == &micFile) ? kMicFilePath : kSpeakerFilePath), kCFURLPOSIXPathStyle, false);
                        ExtAudioFileRef audioFile = NULL;
                        OSStatus result = ExtAudioFileCreateWithURL(url, kAudioFileCAFType, &desc, NULL, kAudioFileFlags_EraseFile, &audioFile);
                        if (result != 0)
                        {
                            *currentFile = NULL;
                        }
                        else
                        {
                            *currentFile = audioFile;
            
                            //Writing audio format
                            ExtAudioFileSetProperty(*currentFile, kExtAudioFileProperty_ClientDataFormat, sizeof(desc), &desc);
                        }
                        CFRelease(url);
                    }
                    else
                    {
                        //Writing audio buffer
                        ExtAudioFileWrite(*currentFile, inNumberFrames, ioData);
                    }
            
                    OSSpinLockUnlock(currentLock);
                }
            
                return AudioUnitProcess_orig(unit, ioActionFlags, inTimeStamp, inNumberFrames, ioData);
            }
            
            __attribute__((constructor))
            static void initialize()
            {
                CTTelephonyCenterAddObserver(CTTelephonyCenterGetDefault(), NULL, CoreTelephonyNotificationCallback, NULL, NULL, CFNotificationSuspensionBehaviorHold);
            
                MSHookFunction(AudioUnitProcess, AudioUnitProcess_hook, &AudioUnitProcess_orig);
            }
            