Ios 使用音频单元录制音频,每个音频单元以X秒为单位分割文件
我已经做了几天了。我不太熟悉框架的音频单元层。有没有人能给我举一个完整的例子,告诉我如何让用户用x个间隔来记录和编写文件。例如,用户按record,每10秒,我想写入一个文件,在第11秒,它将写入下一个文件,在第21秒,它是一样的。所以当我录制25秒一个字的音频时,它会产生3个不同的文件Ios 使用音频单元录制音频,每个音频单元以X秒为单位分割文件,ios,audio,segments,remoteio,Ios,Audio,Segments,Remoteio,我已经做了几天了。我不太熟悉框架的音频单元层。有没有人能给我举一个完整的例子,告诉我如何让用户用x个间隔来记录和编写文件。例如,用户按record,每10秒,我想写入一个文件,在第11秒,它将写入下一个文件,在第21秒,它是一样的。所以当我录制25秒一个字的音频时,它会产生3个不同的文件 我用avc俘获了它,但是它在中间产生点击和POPs。我已经读过了,这是由于读写操作之间的毫秒数。我尝试过音频队列服务,但知道我正在使用的应用程序,我需要完全控制音频层;所以我决定使用音频单元 我想我越来越近了…
<>我用avc俘获了它,但是它在中间产生点击和POPs。我已经读过了,这是由于读写操作之间的毫秒数。我尝试过音频队列服务,但知道我正在使用的应用程序,我需要完全控制音频层;所以我决定使用音频单元 我想我越来越近了……还是很迷茫。我最终使用了惊人的音频引擎(TAAE)。我现在看的是AEAudioReceiver,我的回调代码如下所示。我认为逻辑上是正确的,但我不认为它的实施是正确的 手头的任务:以AAC格式记录~5秒的片段 尝试:使用AEAudioReceiver回调并将AudioBufferList存储在循环缓冲区中。跟踪在recorder类中接收到的音频秒数;一旦它通过5秒标记(它可以稍微结束,但不是6秒)。调用Obj-c方法使用AEAudioFileWriter写入文件 结果是:没有工作,录音听起来很慢,经常有很多噪音;我能听到一些录音;所以我知道有一些数据,但我好像丢失了很多数据。我甚至不知道如何调试它(我将继续尝试,但目前非常迷茫) 另一项是转换为AAC,我是否先以PCM格式写入文件,然后再转换为AAC,或者是否可以仅将音频段转换为AAC 谢谢你的帮助 -----循环缓冲区初始化----- -----AEAudioReceiver回调------ ----写入文件(MyAudioRecorder类中的方法)---- -----音频控制器音频流基本描述------
//trying to get 5 seconds audio, how do I know what the length is if I don't know the frame size yet? and is that even the right question to ask?
TPCircularBufferInit(&_buffer, 1024 * 256);
static void receiverCallback(__unsafe_unretained MyAudioRecorder *THIS,
__unsafe_unretained AEAudioController *audioController,
void *source,
const AudioTimeStamp *time,
UInt32 frames,
AudioBufferList *audio) {
//store the audio into the buffer
TPCircularBufferCopyAudioBufferList(&THIS->_buffer, audio, time, kTPCircularBufferCopyAll, NULL);
//increase the time interval to track by THIS
THIS.numberOfSecondInCurrentRecording += AEConvertFramesToSeconds(THIS.audioController, frames);
//if number of seconds passed an interval of 5 seconds, than write the last 5 seconds of the buffer to a file
if (THIS.numberOfSecondInCurrentRecording > 5 * THIS->_currentSegment + 1) {
NSLog(@"Segment %d is full, writing file", THIS->_currentSegment);
[THIS writeBufferToFile];
//segment tracking variables
THIS->_numberOfReceiverLoop = 0;
THIS.lastTimeStamp = nil;
THIS->_currentSegment += 1;
} else {
THIS->_numberOfReceiverLoop += 1;
}
// Do something with 'audio'
if (!THIS.lastTimeStamp) {
THIS.lastTimeStamp = (AudioTimeStamp *)time;
}
}
- (void)writeBufferToFileHandler {
NSString *documentsFolder = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES)
objectAtIndex:0];
NSString *filePath = [documentsFolder stringByAppendingPathComponent:[NSString stringWithFormat:@"Segment_%d.aiff", _currentSegment]];
NSError *error = nil;
//setup audio writer, should the buffer be converted to aac first or save the file than convert; and how the heck do you do that?
AEAudioFileWriter *writeFile = [[AEAudioFileWriter alloc] initWithAudioDescription:_audioController.inputAudioDescription];
[writeFile beginWritingToFileAtPath:filePath fileType:kAudioFileAIFFType error:&error];
if (error) {
NSLog(@"Error in init. the file: %@", error);
return;
}
int i = 1;
//loop to write all the AudioBufferLists that is in the Circular Buffer; retrieve the ones based off of the _lastTimeStamp; but I had it in NULL too and worked the same way.
while (1) {
//NSLog(@"Processing buffer file list for segment [%d] and buffer index [%d]", _currentSegment, i);
i += 1;
// Discard any buffers with an incompatible format, in the event of a format change
AudioBufferList *nextBuffer = TPCircularBufferNextBufferList(&_buffer, _lastTimeStamp);
Float32 *frame = (Float32*) &nextBuffer->mBuffers[0].mData;
//if buffer runs out, than we are done writing it and exit loop to close the file
if ( !nextBuffer ) {
NSLog(@"Ran out of frames, there were [%d] AudioBufferList", i - 1);
break;
}
//Adding audio using AudioFileWriter, is the length correct?
OSStatus status = AEAudioFileWriterAddAudio(writeFile, nextBuffer, sizeof(nextBuffer->mBuffers[0].mDataByteSize));
if (status) {
NSLog(@"Writing Error? %d", status);
}
//consume/clear the buffer
TPCircularBufferConsumeNextBufferList(&_buffer);
}
//close the file and hope it worked
[writeFile finishWriting];
}
//interleaved16BitStereoAudioDescription
AudioStreamBasicDescription audioDescription;
memset(&audioDescription, 0, sizeof(audioDescription));
audioDescription.mFormatID = kAudioFormatLinearPCM;
audioDescription.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked | kAudioFormatFlagsNativeEndian;
audioDescription.mChannelsPerFrame = 2;
audioDescription.mBytesPerPacket = sizeof(SInt16)*audioDescription.mChannelsPerFrame;
audioDescription.mFramesPerPacket = 1;
audioDescription.mBytesPerFrame = sizeof(SInt16)*audioDescription.mChannelsPerFrame;
audioDescription.mBitsPerChannel = 8 * sizeof(SInt16);
audioDescription.mSampleRate = 44100.0;