iPhone: Adjusting the volume of tracks in an asset with AVMutableAudioMix


I'm applying an AVMutableAudioMix to an asset I've created, which typically contains 3–5 audio tracks (no video). The goal is to add several volume commands over the playback time, i.e. I want to set the volume to 0.1 at 1 second, to 0.5 at 2 seconds, then to 0.1 (or whatever) at 3 seconds. I'm currently trying to do this with AVPlayer, but will also use it later when exporting the AVSession to a file. The problem is that it only seems to care about the first volume command and ignores all later ones. If the first command sets the volume to 0.1, that will be the permanent volume of this track for the rest of the asset. This is despite the fact that it looks like you should be able to add any number of commands, since the inputParameters member of AVMutableAudioMix is really an NSArray of AVMutableAudioMixInputParameters. Has anyone figured this out?


Edit: I've partially figured this out. I'm able to add several volume changes over the course of a single track, but the timing always seems to be off and I don't know how to fix it. For example, setting the volume to 0.0 at 5 seconds, 1.0 at 10 seconds, and 0.0 at 15 seconds would make you assume the volume turns on and off sharply at those times, but the results are very unpredictable: the sound ramps gradually, and only sometimes does it work as expected (abrupt volume changes, which is what setVolume suggests). If anyone has gotten AudioMix to work, please provide an example.
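For what it's worth, the unpredictable fades described above are consistent with the engine interpolating between automation points rather than stepping; AVMutableAudioMixInputParameters also offers `setVolumeRampFromStartVolume:toEndVolume:timeRange:`, which makes any transition explicit. The plain-C sketch below (my own illustration, not AVFoundation code) shows the audible difference between stepped and linearly ramped evaluation of the same volume points:

```c
#include <stddef.h>

typedef struct { double time; float volume; } VolumePoint;

/* Stepped: the volume set at a point holds until the next point. */
static float stepped_volume(const VolumePoint *pts, int n, double t) {
    float v = 1.0f;                     /* default volume before any point */
    for (int i = 0; i < n; ++i)
        if (pts[i].time <= t) v = pts[i].volume;
    return v;
}

/* Ramped: linearly interpolate between neighboring points. */
static float ramped_volume(const VolumePoint *pts, int n, double t) {
    if (n == 0) return 1.0f;
    if (t <= pts[0].time) return pts[0].volume;
    for (int i = 0; i < n - 1; ++i) {
        if (t < pts[i + 1].time) {
            double f = (t - pts[i].time) / (pts[i + 1].time - pts[i].time);
            return pts[i].volume + (float)f * (pts[i + 1].volume - pts[i].volume);
        }
    }
    return pts[n - 1].volume;
}

/* Example (the question's points: 0.0 @ 5 s, 1.0 @ 10 s, 0.0 @ 15 s):
 *   stepped_volume(pts, 3, 7.5) -> 0.0  (still silent; a sudden jump at 10 s)
 *   ramped_volume(pts, 3, 7.5)  -> 0.5  (halfway through a gradual fade-in)
 */
```

If the mix behaves like the second function, "set volume at time T" will be heard as a fade toward T, which matches the symptoms; using explicit volume ramps removes the ambiguity.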

The code I use to change a track's volume is:

AVURLAsset *soundTrackAsset = [[AVURLAsset alloc]initWithURL:trackUrl options:nil];
AVMutableAudioMixInputParameters *audioInputParams = [AVMutableAudioMixInputParameters audioMixInputParameters];

[audioInputParams setVolume:0.5 atTime:kCMTimeZero];
[audioInputParams setTrackID:[[[soundTrackAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]  trackID]];
audioMix = [AVMutableAudioMix audioMix];
audioMix.inputParameters = [NSArray arrayWithObject:audioInputParams];
Don't forget to add the audioMix to your AVAssetExportSession:

exportSession.audioMix = audioMix;
However, I noticed that it doesn't work for all formats, so if you keep having problems with AVFoundation, you can use the function below to change the volume level of a stored file. Be warned, though: this function can be very slow.

- (void)scaleAudioFileAmplitude:(NSURL *)theURL withAmpScale:(float)ampScale {
    OSStatus err = noErr;

    ExtAudioFileRef audiofile;
    ExtAudioFileOpenURL((CFURLRef)theURL, &audiofile);
    assert(audiofile);

    // get some info about the file's format.
    AudioStreamBasicDescription fileFormat;
    UInt32 size = sizeof(fileFormat);
    err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_FileDataFormat, &size, &fileFormat);

    // we'll need to know what type of file it is later when we write 
    AudioFileID aFile;
    size = sizeof(aFile);
    err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_AudioFile, &size, &aFile);
    AudioFileTypeID fileType;
    size = sizeof(fileType);
    err = AudioFileGetProperty(aFile, kAudioFilePropertyFileFormat, &size, &fileType);


    // tell the ExtAudioFile API what format we want samples back in
    AudioStreamBasicDescription clientFormat;
    bzero(&clientFormat, sizeof(clientFormat));
    clientFormat.mChannelsPerFrame = fileFormat.mChannelsPerFrame;
    clientFormat.mBytesPerFrame = 4;
    clientFormat.mBytesPerPacket = clientFormat.mBytesPerFrame;
    clientFormat.mFramesPerPacket = 1;
    clientFormat.mBitsPerChannel = 32;
    clientFormat.mFormatID = kAudioFormatLinearPCM;
    clientFormat.mSampleRate = fileFormat.mSampleRate;
    clientFormat.mFormatFlags = kLinearPCMFormatFlagIsFloat | kAudioFormatFlagIsNonInterleaved;
    err = ExtAudioFileSetProperty(audiofile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);

    // find out how many frames we need to read
    SInt64 numFrames = 0;
    size = sizeof(numFrames);
    err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_FileLengthFrames, &size, &numFrames);

    // create the buffers for reading in data
    AudioBufferList *bufferList = malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer) * (clientFormat.mChannelsPerFrame - 1));
    bufferList->mNumberBuffers = clientFormat.mChannelsPerFrame;
    for (int ii=0; ii < bufferList->mNumberBuffers; ++ii) {
        bufferList->mBuffers[ii].mDataByteSize = sizeof(float) * numFrames;
        bufferList->mBuffers[ii].mNumberChannels = 1;
        bufferList->mBuffers[ii].mData = malloc(bufferList->mBuffers[ii].mDataByteSize);
    }

    // read in the data
    UInt32 rFrames = (UInt32)numFrames;
    err = ExtAudioFileRead(audiofile, &rFrames, bufferList);

    // close the file
    err = ExtAudioFileDispose(audiofile);

    // process the audio
    for (int ii=0; ii < bufferList->mNumberBuffers; ++ii) {
        float *fBuf = (float *)bufferList->mBuffers[ii].mData;
        for (int jj=0; jj < rFrames; ++jj) {
            *fBuf = *fBuf * ampScale;
            fBuf++;
        } 
    }

    // open the file for writing
    err = ExtAudioFileCreateWithURL((CFURLRef)theURL, fileType, &fileFormat, NULL, kAudioFileFlags_EraseFile, &audiofile);

    // tell the ExtAudioFile API what format we'll be sending samples in
    err = ExtAudioFileSetProperty(audiofile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);

    // write the data
    err = ExtAudioFileWrite(audiofile, rFrames, bufferList);

    // close the file
    ExtAudioFileDispose(audiofile);

    // destroy the buffers
    for (int ii=0; ii < bufferList->mNumberBuffers; ++ii) {
        free(bufferList->mBuffers[ii].mData);
    }
    free(bufferList);
    bufferList = NULL;

 }
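The heart of the function is the per-sample multiply; everything else is reading the file into float PCM and writing it back out. Isolated as plain C, the scaling step is just:

```c
#include <stddef.h>

/* Multiply every float sample in one (non-interleaved) channel buffer by ampScale,
 * exactly like the "process the audio" loop in the function above. */
static void scale_samples(float *buf, size_t frames, float ampScale) {
    for (size_t i = 0; i < frames; ++i)
        buf[i] *= ampScale;
}
```

Note that nothing clamps the result: with an ampScale above 1.0 the samples can exceed the [-1, 1] range and clip when converted back to the file's native format, so production code would typically clamp or normalize after scaling.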

The Extended Audio File Toolbox routine stopped working due to an API change: you now need to set an audio session category. When setting the export properties I got an error code of '?cat' (NSError prints it out in decimal).

Here is the code, working under iOS 5.1. It is also extremely slow; just eyeballing it, I'd say several times slower. And it is memory intensive, since it appears to load the whole file into memory, which produces memory warnings for a 10 MB mp3 file.
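A side note on that '?cat' error: many Core Audio OSStatus values are four-character codes packed into a 32-bit integer, which is why NSError shows a large decimal number while the documentation talks about codes like '?cat'. A small plain-C helper (my own naming) decodes one:

```c
#include <stdint.h>

/* Unpack a 32-bit status into its four-character code,
 * e.g. 0x3F636174 -> "?cat". Non-printable bytes pass through as-is. */
static void fourcc_from_status(int32_t status, char out[5]) {
    uint32_t u = (uint32_t)status;
    out[0] = (char)((u >> 24) & 0xFF);
    out[1] = (char)((u >> 16) & 0xFF);
    out[2] = (char)((u >> 8)  & 0xFF);
    out[3] = (char)( u        & 0xFF);
    out[4] = '\0';
}
```

Running the decimal code NSError printed through this helper makes it much easier to look the error up in the Core Audio headers.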

-(void) scaleAudioFileAmplitude:(NSURL *)theURL withAmpScale:(float) ampScale
{
    OSStatus err = noErr;

    ExtAudioFileRef audiofile;
    ExtAudioFileOpenURL((CFURLRef)theURL, &audiofile);
    assert(audiofile);

    // get some info about the file's format.
    AudioStreamBasicDescription fileFormat;
    UInt32 size = sizeof(fileFormat);
    err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_FileDataFormat, &size, &fileFormat);

    // we'll need to know what type of file it is later when we write 
    AudioFileID aFile;
    size = sizeof(aFile);
    err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_AudioFile, &size, &aFile);
    AudioFileTypeID fileType;
    size = sizeof(fileType);
    err = AudioFileGetProperty(aFile, kAudioFilePropertyFileFormat, &size, &fileType);


    // tell the ExtAudioFile API what format we want samples back in
    AudioStreamBasicDescription clientFormat;
    bzero(&clientFormat, sizeof(clientFormat));
    clientFormat.mChannelsPerFrame = fileFormat.mChannelsPerFrame;
    clientFormat.mBytesPerFrame = 4;
    clientFormat.mBytesPerPacket = clientFormat.mBytesPerFrame;
    clientFormat.mFramesPerPacket = 1;
    clientFormat.mBitsPerChannel = 32;
    clientFormat.mFormatID = kAudioFormatLinearPCM;
    clientFormat.mSampleRate = fileFormat.mSampleRate;
    clientFormat.mFormatFlags = kLinearPCMFormatFlagIsFloat | kAudioFormatFlagIsNonInterleaved;
    err = ExtAudioFileSetProperty(audiofile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);

    // find out how many frames we need to read
    SInt64 numFrames = 0;
    size = sizeof(numFrames);
    err = ExtAudioFileGetProperty(audiofile, kExtAudioFileProperty_FileLengthFrames, &size, &numFrames);

    // create the buffers for reading in data
    AudioBufferList *bufferList = malloc(sizeof(AudioBufferList) + sizeof(AudioBuffer) * (clientFormat.mChannelsPerFrame - 1));
    bufferList->mNumberBuffers = clientFormat.mChannelsPerFrame;
    //printf("bufferList->mNumberBuffers = %lu \n\n", bufferList->mNumberBuffers);

    for (int ii=0; ii < bufferList->mNumberBuffers; ++ii) {
        bufferList->mBuffers[ii].mDataByteSize = sizeof(float) * numFrames;
        bufferList->mBuffers[ii].mNumberChannels = 1;
        bufferList->mBuffers[ii].mData = malloc(bufferList->mBuffers[ii].mDataByteSize);
    }

    // read in the data
    UInt32 rFrames = (UInt32)numFrames;
    err = ExtAudioFileRead(audiofile, &rFrames, bufferList);

    // close the file
    err = ExtAudioFileDispose(audiofile);

    // process the audio
    for (int ii=0; ii < bufferList->mNumberBuffers; ++ii) {
        float *fBuf = (float *)bufferList->mBuffers[ii].mData;
        for (int jj=0; jj < rFrames; ++jj) {
            *fBuf = *fBuf * ampScale;
            fBuf++;
        } 
    }

    // open the file for writing
    err = ExtAudioFileCreateWithURL((CFURLRef)theURL, fileType, &fileFormat, NULL, kAudioFileFlags_EraseFile, &audiofile);

    NSError *error = NULL;

/*************************** You Need This Now ****************************/
        AVAudioSession *session = [AVAudioSession sharedInstance];
        [session setCategory:AVAudioSessionCategoryAudioProcessing error:&error];
/************************* End You Need This Now **************************/

    // tell the ExtAudioFile API what format we'll be sending samples in
    err = ExtAudioFileSetProperty(audiofile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat);

    error = [NSError errorWithDomain:NSOSStatusErrorDomain
                                         code:err
                                     userInfo:nil];
    NSLog(@"Error: %@", [error description]);

    // write the data
    err = ExtAudioFileWrite(audiofile, rFrames, bufferList);

    // close the file
    ExtAudioFileDispose(audiofile);

    // destroy the buffers
    for (int ii=0; ii < bufferList->mNumberBuffers; ++ii) {
        free(bufferList->mBuffers[ii].mData);
    }
    free(bufferList);
    bufferList = NULL;

}
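One detail worth calling out in both listings: `AudioBufferList` declares a one-element `mBuffers` array, so allocating one for N non-interleaved channels uses the `sizeof(AudioBufferList) + (N - 1) * sizeof(AudioBuffer)` idiom seen above. A plain-C mock of the layout (not the real CoreAudio headers) shows the arithmetic:

```c
#include <stddef.h>
#include <stdint.h>

/* Mock structs mirroring the CoreAudio layout, for illustration only. */
typedef struct {
    uint32_t mNumberChannels;
    uint32_t mDataByteSize;
    void    *mData;
} MockAudioBuffer;

typedef struct {
    uint32_t        mNumberBuffers;
    MockAudioBuffer mBuffers[1];   /* variable-length trailing array idiom */
} MockAudioBufferList;

/* Bytes needed for a list holding n channel buffers (n >= 1). */
static size_t buffer_list_size(uint32_t n) {
    return sizeof(MockAudioBufferList) + (n - 1) * sizeof(MockAudioBuffer);
}
```

For mono (n = 1) this is just `sizeof(AudioBufferList)`; each extra channel adds one more `AudioBuffer` of trailing storage.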
When you change the audio session category for processing, save the original category first and restore it when you're done:

AVAudioSession *session = [AVAudioSession sharedInstance];
NSString *originalSessionCategory = [session category];
[session setCategory:AVAudioSessionCategoryAudioProcessing error:&error];
...
...
// restore category
[session setCategory:originalSessionCategory error:&error];
if(error)
    NSLog(@"%@",[error localizedDescription]);
Here is an example that gives each track in a composition its own constant volume, using one AVMutableAudioMixInputParameters object per track:

self.audioMix = [AVMutableAudioMix audioMix];
AVMutableAudioMixInputParameters *audioInputParams = [AVMutableAudioMixInputParameters audioMixInputParameters];

[audioInputParams setVolume:0.1 atTime:kCMTimeZero];
audioInputParams.trackID = compositionAudioTrack2.trackID;


AVMutableAudioMixInputParameters *audioInputParams1 = [AVMutableAudioMixInputParameters audioMixInputParameters];
[audioInputParams1 setVolume:0.9 atTime:kCMTimeZero];
audioInputParams1.trackID = compositionAudioTrack1.trackID;

AVMutableAudioMixInputParameters *audioInputParams2 = [AVMutableAudioMixInputParameters audioMixInputParameters];
[audioInputParams2 setVolume:0.3 atTime:kCMTimeZero];
audioInputParams2.trackID = compositionAudioTrack.trackID;


self.audioMix.inputParameters =[NSArray arrayWithObjects:audioInputParams,audioInputParams1,audioInputParams2, nil];