iOS: Is it possible to use AudioFileWritePackets to write to a file without replacing the existing data on that file?

Tags: ios, cocoa, cocoa-touch, core-audio

My code is based on the code from chapter 6 of Learning Core Audio.

This function takes an input file and copies its packets to an output file. I'm using it to write the data from many input files into a single output file, and it works very well.

The only problem I've run into is when I add a new input file and set startPacketPosition to a region of the output file where packets have already been written: it replaces the old data with the new data.

Is there a way to write the new packets to the file without replacing the existing data? It would be like adding a sound effect to a song file without replacing any of the song data.

If this can't be done with AudioFileWritePackets, what would be the best alternative?

static void writeInputFileToOutputFile(AudioStreamBasicDescription *format, ExtAudioFileRef *inputFile, AudioFileID *outputFile, UInt32 *startPacketPosition) {
    //determine the size of the output buffer
    UInt32 outputBufferSize = 32 * 1024; //32 KB
    UInt32 sizePerPacket = format->mBytesPerPacket;
    UInt32 packetsPerBuffer = outputBufferSize / sizePerPacket;

    //allocate a buffer for receiving the data
    UInt8 *outputBuffer = (UInt8 *)malloc(sizeof(UInt8) * outputBufferSize);

    //read-convert-write
    while (1) {
        AudioBufferList convertedData; //create an audio buffer list
        convertedData.mNumberBuffers = 1; //with only one buffer
        //set the properties on the single buffer
        convertedData.mBuffers[0].mNumberChannels = format->mChannelsPerFrame;
        convertedData.mBuffers[0].mDataByteSize = outputBufferSize;
        convertedData.mBuffers[0].mData = outputBuffer;

        //get the number of frames and buffer data from the input file
        UInt32 framesPerBuffer = packetsPerBuffer;
        CheckError(ExtAudioFileRead(*inputFile, &framesPerBuffer, &convertedData), "ExtAudioFileRead");

        //if the frame count is 0, we're finished;
        //break (not return) so the buffer is freed below
        if (framesPerBuffer == 0) {
            break;
        }

        UInt32 bytes = format->mBytesPerPacket;
        CheckError(AudioFileWritePackets(*outputFile, false, framesPerBuffer*bytes, NULL, *startPacketPosition, &framesPerBuffer, convertedData.mBuffers[0].mData), "AudioFileWritePackets");

        //increase the output file packet position
        *startPacketPosition += framesPerBuffer;
    }
    free(outputBuffer);
}

I found a way to accomplish this using AVMutableComposition.

-(void)addInputAsset:(AVURLAsset *)input toOutputComposition:(AVMutableComposition *)composition atTime:(float)seconds {
    //add the input to the composition
    AVMutableCompositionTrack *compositionAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
    AVAssetTrack *clipAudioTrack = [[input tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];

    //set the start time
    int sampleRate = 44100;
    CMTime nextClipStartTime = CMTimeMakeWithSeconds(seconds, sampleRate);
    NSError *error = nil;
    [compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, input.duration) ofTrack:clipAudioTrack atTime:nextClipStartTime error:&error];
    if (error) {
        NSLog(@"insertTimeRange failed: %@", error);
    }
}
To mix two audio files with this method, call it like this:

-(void)mixAudio {
    //create a mutable composition
    AVMutableComposition* composition = [AVMutableComposition composition];

    //audio file 1
    NSURL *url1 = [NSURL fileURLWithPath:@"path/to/file1.mp3"];
    AVURLAsset* asset1 = [[AVURLAsset alloc]initWithURL:url1 options:nil];
    [self addInputAsset:asset1 toOutputComposition:composition atTime:0.0];

    //audio file 2
    NSURL *url2 = [NSURL fileURLWithPath:@"path/to/file2.aif"];
    AVURLAsset* asset2 = [[AVURLAsset alloc]initWithURL:url2 options:nil];
    [self addInputAsset:asset2 toOutputComposition:composition atTime:0.2];

    //create the export session
    AVAssetExportSession *exportSession = [AVAssetExportSession exportSessionWithAsset:composition presetName:AVAssetExportPresetAppleM4A];
    if (exportSession == nil) {
        //ERROR: abort
        return;
    }

    // configure export session  output with all our parameters
    exportSession.outputURL = [NSURL fileURLWithPath:@"path/to/output_file.m4a"]; // output path
    exportSession.outputFileType = AVFileTypeAppleM4A; // output file type

    //export the file
    [exportSession exportAsynchronouslyWithCompletionHandler:^{
        if (AVAssetExportSessionStatusCompleted == exportSession.status) {
            NSLog(@"AVAssetExportSessionStatusCompleted");
        } else if (AVAssetExportSessionStatusFailed == exportSession.status) {
            NSLog(@"AVAssetExportSessionStatusFailed");
        } else {
            NSLog(@"Export Session Status: %ld", (long)exportSession.status);
        }
    }];
}