iOS: Can I use AVAudioEngine to read from a file, process with audio units, and write to a file, faster than real-time?
I'm working on an iOS app that uses AVAudioEngine for various things, including recording audio to a file, applying effects to that audio using audio units, and playing back the audio with the effects applied. I also use a tap to write the output to a file. When this is done, it writes to the file in real time as the audio plays back.

Is it possible to set up an AVAudioEngine graph that reads from a file, processes the sound with an audio unit, and outputs to a file, but faster than real time (i.e., as fast as the hardware can process it)? The use case for this is to output several minutes of audio with effects applied, and I certainly don't want to wait several minutes for it to process.

Edit: here's the code I use to set up the AVAudioEngine graph and play a sound file:
AVAudioEngine* engine = [[AVAudioEngine alloc] init];
AVAudioPlayerNode* player = [[AVAudioPlayerNode alloc] init];
[engine attachNode:player];

self.player = player;
self.engine = engine;

if (!self.distortionEffect) {
    self.distortionEffect = [[AVAudioUnitDistortion alloc] init];
    [self.engine attachNode:self.distortionEffect];
    [self.engine connect:self.player to:self.distortionEffect format:[self.distortionEffect outputFormatForBus:0]];
    AVAudioMixerNode* mixer = [self.engine mainMixerNode];
    [self.engine connect:self.distortionEffect to:mixer format:[mixer outputFormatForBus:0]];
}

[self.distortionEffect loadFactoryPreset:AVAudioUnitDistortionPresetDrumsBitBrush];

NSError* error;
if (![self.engine startAndReturnError:&error]) {
    NSLog(@"error: %@", error);
} else {
    NSURL* fileURL = [[NSBundle mainBundle] URLForResource:@"test2" withExtension:@"mp3"];
    AVAudioFile* file = [[AVAudioFile alloc] initForReading:fileURL error:&error];
    if (error) {
        NSLog(@"error: %@", error);
    } else {
        [self.player scheduleFile:file atTime:nil completionHandler:nil];
        [self.player play];
    }
}
The above code plays the sound from the test2.mp3 file in real time, with the AVAudioUnitDistortionPresetDrumsBitBrush distortion preset applied.

I then modified the above code to add the following lines after [self.player play]:
[self.engine stop];
[self renderAudioAndWriteToFile];
I modified the renderAudioAndWriteToFile method that Vladimir provided so that, instead of allocating a new AVAudioEngine in its first line, it uses the already configured self.engine.

However, in renderAudioAndWriteToFile it logs "Can not render audio unit", because AudioUnitRender returns a status of kAudioUnitErr_Uninitialized.

Edit 2: I should mention that I'm perfectly happy to convert the AVAudioEngine code I posted to use the C APIs if that would make things easier. However, I'd like the code to produce the same output as the AVAudioEngine code (including use of the factory preset shown above).
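Worth noting for later readers: since iOS 11, AVAudioEngine has a built-in offline manual rendering mode that covers exactly this use case, with no need to drive AudioUnitRender by hand. A minimal sketch, assuming an engine/player graph and source AVAudioFile set up as in the code above (outputURL is a placeholder for your destination file URL):

```objc
NSError *error = nil;
AVAudioFormat *format = [engine.outputNode outputFormatForBus:0];

// The engine must be stopped before switching to manual rendering mode.
[engine stop];
[engine enableManualRenderingMode:AVAudioEngineManualRenderingModeOffline
                           format:format
                maximumFrameCount:4096
                            error:&error];
[engine startAndReturnError:&error];
[player scheduleFile:sourceFile atTime:nil completionHandler:nil];
[player play];

AVAudioPCMBuffer *buffer =
    [[AVAudioPCMBuffer alloc] initWithPCMFormat:engine.manualRenderingFormat
                                  frameCapacity:engine.manualRenderingMaximumFrameCount];
AVAudioFile *outputFile =
    [[AVAudioFile alloc] initForWriting:outputURL
                               settings:sourceFile.fileFormat.settings
                                  error:&error];

// Pull frames as fast as the CPU allows and append them to the output file.
while (engine.manualRenderingSampleTime < sourceFile.length) {
    AVAudioFrameCount frames = (AVAudioFrameCount)
        MIN(buffer.frameCapacity, sourceFile.length - engine.manualRenderingSampleTime);
    AVAudioEngineManualRenderingStatus status =
        [engine renderOffline:frames toBuffer:buffer error:&error];
    if (status != AVAudioEngineManualRenderingStatusSuccess)
        break; // error, or the source ran dry
    [outputFile writeFromBuffer:buffer error:&error];
}
[engine stop];
```

On iOS 8/9 (the versions current when this question was asked), the AudioUnitRender-based approach in the answer below remains the way to do it.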
Yes, this is possible. The key is not to let the hardware clock drive rendering: after calling the player node's play method, pause the engine and pull frames manually from audioEngine.outputNode. Set up the engine with this:

- (void)configureAudioEngine {
    self.engine = [[AVAudioEngine alloc] init];
    self.playerNode = [[AVAudioPlayerNode alloc] init];
    [self.engine attachNode:self.playerNode];

    AVAudioUnitDistortion *distortionEffect = [[AVAudioUnitDistortion alloc] init];
    [self.engine attachNode:distortionEffect];
    [self.engine connect:self.playerNode to:distortionEffect format:[distortionEffect outputFormatForBus:0]];

    self.mixer = [self.engine mainMixerNode];
    [self.engine connect:distortionEffect to:self.mixer format:[self.mixer outputFormatForBus:0]];
    [distortionEffect loadFactoryPreset:AVAudioUnitDistortionPresetDrumsBitBrush];

    NSError* error;
    if (![self.engine startAndReturnError:&error])
        NSLog(@"Can't start engine: %@", error);
    else
        [self scheduleFileToPlay];
}
- (void)scheduleFileToPlay {
    NSError* error;
    NSURL *fileURL = [[NSBundle mainBundle] URLForResource:@"filename" withExtension:@"m4a"];
    self.file = [[AVAudioFile alloc] initForReading:fileURL error:&error];
    if (self.file)
        [self.playerNode scheduleFile:self.file atTime:nil completionHandler:nil];
    else
        NSLog(@"Can't read file: %@", error);
}
Then render the audio with this method:
- (void)renderAudioAndWriteToFile {
    [self.playerNode play];
    [self.engine pause];
    AVAudioOutputNode *outputNode = self.engine.outputNode;
    AudioStreamBasicDescription const *audioDescription = [outputNode outputFormatForBus:0].streamDescription;
    NSString *path = [self filePath];
    ExtAudioFileRef audioFile = [self createAndSetupExtAudioFileWithASBD:audioDescription andFilePath:path];
    if (!audioFile)
        return;
    AVURLAsset *asset = [AVURLAsset assetWithURL:self.file.url];
    NSTimeInterval duration = CMTimeGetSeconds(asset.duration);
    NSUInteger lengthInFrames = duration * audioDescription->mSampleRate;
    const NSUInteger kBufferLength = 4096;
    AudioBufferList *bufferList = AEAllocateAndInitAudioBufferList(*audioDescription, kBufferLength);
    AudioTimeStamp timeStamp;
    memset(&timeStamp, 0, sizeof(timeStamp));
    timeStamp.mFlags = kAudioTimeStampSampleTimeValid;
    OSStatus status = noErr;
    for (NSUInteger i = kBufferLength; i < lengthInFrames; i += kBufferLength) {
        status = [self renderToBufferList:bufferList writeToFile:audioFile bufferLength:kBufferLength timeStamp:&timeStamp];
        if (status != noErr)
            break;
    }
    if (status == noErr && timeStamp.mSampleTime < lengthInFrames) {
        NSUInteger restBufferLength = (NSUInteger)(lengthInFrames - timeStamp.mSampleTime);
        AudioBufferList *restBufferList = AEAllocateAndInitAudioBufferList(*audioDescription, restBufferLength);
        status = [self renderToBufferList:restBufferList writeToFile:audioFile bufferLength:restBufferLength timeStamp:&timeStamp];
        AEFreeAudioBufferList(restBufferList);
    }
    AEFreeAudioBufferList(bufferList);
    ExtAudioFileDispose(audioFile);
    if (status != noErr)
        NSLog(@"An error has occurred");
    else
        NSLog(@"Finished writing to file at path: %@", path);
}
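As an aside, the total frame count can also be taken directly from the AVAudioFile instead of multiplying the AVURLAsset duration by the sample rate, which avoids rounding error on long files. A sketch, assuming self.file is the AVAudioFile opened in scheduleFileToPlay and audioDescription is the output ASBD from the method above:

```objc
// AVAudioFile.length is the file's length in sample frames at its processing rate;
// scale it if the output format runs at a different sample rate.
AVAudioFramePosition framesAtFileRate = self.file.length;
double rateRatio = audioDescription->mSampleRate / self.file.processingFormat.sampleRate;
NSUInteger lengthInFrames = (NSUInteger)(framesAtFileRate * rateRatio);
```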
- (NSString *)filePath {
    NSArray *documentsFolders = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *fileName = [NSString stringWithFormat:@"%@.m4a", [[NSUUID UUID] UUIDString]];
    NSString *path = [documentsFolders[0] stringByAppendingPathComponent:fileName];
    return path;
}

- (ExtAudioFileRef)createAndSetupExtAudioFileWithASBD:(AudioStreamBasicDescription const *)audioDescription
                                          andFilePath:(NSString *)path {
    AudioStreamBasicDescription destinationFormat;
    memset(&destinationFormat, 0, sizeof(destinationFormat));
    destinationFormat.mChannelsPerFrame = audioDescription->mChannelsPerFrame;
    destinationFormat.mSampleRate = audioDescription->mSampleRate;
    destinationFormat.mFormatID = kAudioFormatMPEG4AAC;
    ExtAudioFileRef audioFile;
    OSStatus status = ExtAudioFileCreateWithURL(
        (__bridge CFURLRef)[NSURL fileURLWithPath:path],
        kAudioFileM4AType,
        &destinationFormat,
        NULL,
        kAudioFileFlags_EraseFile,
        &audioFile
    );
    if (status != noErr) {
        NSLog(@"Can not create ext audio file");
        return NULL;
    }
    UInt32 codecManufacturer = kAppleSoftwareAudioCodecManufacturer;
    status = ExtAudioFileSetProperty(
        audioFile, kExtAudioFileProperty_CodecManufacturer, sizeof(UInt32), &codecManufacturer
    );
    status = ExtAudioFileSetProperty(
        audioFile, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), audioDescription
    );
    status = ExtAudioFileWriteAsync(audioFile, 0, NULL);
    if (status != noErr) {
        NSLog(@"Can not setup ext audio file");
        return NULL;
    }
    return audioFile;
}

- (OSStatus)renderToBufferList:(AudioBufferList *)bufferList
                   writeToFile:(ExtAudioFileRef)audioFile
                  bufferLength:(NSUInteger)bufferLength
                     timeStamp:(AudioTimeStamp *)timeStamp {
    [self clearBufferList:bufferList];
    AudioUnit outputUnit = self.engine.outputNode.audioUnit;
    OSStatus status = AudioUnitRender(outputUnit, 0, timeStamp, 0, bufferLength, bufferList);
    if (status != noErr) {
        NSLog(@"Can not render audio unit");
        return status;
    }
    timeStamp->mSampleTime += bufferLength;
    status = ExtAudioFileWrite(audioFile, bufferLength, bufferList);
    if (status != noErr)
        NSLog(@"Can not write audio to file");
    return status;
}

- (void)clearBufferList:(AudioBufferList *)bufferList {
    for (int bufferIndex = 0; bufferIndex < bufferList->mNumberBuffers; bufferIndex++) {
        memset(bufferList->mBuffers[bufferIndex].mData, 0, bufferList->mBuffers[bufferIndex].mDataByteSize);
    }
}
The AudioBufferList helper functions used above:
AudioBufferList *AEAllocateAndInitAudioBufferList(AudioStreamBasicDescription audioFormat, int frameCount) {
    int numberOfBuffers = audioFormat.mFormatFlags & kAudioFormatFlagIsNonInterleaved ? audioFormat.mChannelsPerFrame : 1;
    int channelsPerBuffer = audioFormat.mFormatFlags & kAudioFormatFlagIsNonInterleaved ? 1 : audioFormat.mChannelsPerFrame;
    int bytesPerBuffer = audioFormat.mBytesPerFrame * frameCount;
    AudioBufferList *audio = malloc(sizeof(AudioBufferList) + (numberOfBuffers-1)*sizeof(AudioBuffer));
    if ( !audio ) {
        return NULL;
    }
    audio->mNumberBuffers = numberOfBuffers;
    for ( int i=0; i<numberOfBuffers; i++ ) {
        if ( bytesPerBuffer > 0 ) {
            audio->mBuffers[i].mData = calloc(bytesPerBuffer, 1);
            if ( !audio->mBuffers[i].mData ) {
                for ( int j=0; j<i; j++ ) free(audio->mBuffers[j].mData);
                free(audio);
                return NULL;
            }
        } else {
            audio->mBuffers[i].mData = NULL;
        }
        audio->mBuffers[i].mDataByteSize = bytesPerBuffer;
        audio->mBuffers[i].mNumberChannels = channelsPerBuffer;
    }
    return audio;
}
void AEFreeAudioBufferList(AudioBufferList *bufferList) {
    for ( int i=0; i<bufferList->mNumberBuffers; i++ ) {
        if ( bufferList->mBuffers[i].mData ) free(bufferList->mBuffers[i].mData);
    }
    free(bufferList);
}