
Reverse audio through AVAssetWriter in iOS


I'm trying to reverse audio in iOS with AVAsset and AVAssetWriter. The following code is working, but the output file is shorter than the input. For example, the input file has a duration of 1:59, but the output contains only 1:50 of audio.

- (void)reverse:(AVAsset *)asset
{
    AVAssetReader* reader = [[AVAssetReader alloc] initWithAsset:asset error:nil];

    AVAssetTrack* audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];

    NSMutableDictionary* audioReadSettings = [NSMutableDictionary dictionary];
    [audioReadSettings setValue:[NSNumber numberWithInt:kAudioFormatLinearPCM]
                         forKey:AVFormatIDKey];

    AVAssetReaderTrackOutput* readerOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:audioTrack outputSettings:audioReadSettings];
    [reader addOutput:readerOutput];
    [reader startReading];

    NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                    [NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
                                    [NSNumber numberWithFloat:44100.0], AVSampleRateKey,
                                    [NSNumber numberWithInt:2], AVNumberOfChannelsKey,
                                    [NSNumber numberWithInt:128000], AVEncoderBitRateKey,
                                    [NSData data], AVChannelLayoutKey,
                                    nil];

    AVAssetWriterInput *writerInput = [[AVAssetWriterInput alloc] initWithMediaType:AVMediaTypeAudio
                                                                     outputSettings:outputSettings];

    NSString *exportPath = [NSTemporaryDirectory() stringByAppendingPathComponent:@"out.m4a"];

    NSURL *exportURL = [NSURL fileURLWithPath:exportPath];
    NSError *writerError = nil;
    AVAssetWriter *writer = [[AVAssetWriter alloc] initWithURL:exportURL
                                                      fileType:AVFileTypeAppleM4A
                                                         error:&writerError];
    [writerInput setExpectsMediaDataInRealTime:NO];
    [writer addInput:writerInput];
    [writer startWriting];
    [writer startSessionAtSourceTime:kCMTimeZero];

    CMSampleBufferRef sample = [readerOutput copyNextSampleBuffer];
    NSMutableArray *samples = [[NSMutableArray alloc] init];

    while (sample != NULL) {

        sample = [readerOutput copyNextSampleBuffer];

        if (sample == NULL)
            continue;

        [samples addObject:(__bridge id)(sample)];
        CFRelease(sample);
    }

    NSArray* reversedSamples = [[samples reverseObjectEnumerator] allObjects];

    for (id reversedSample in reversedSamples) {
        if (writerInput.readyForMoreMediaData) {
            [writerInput appendSampleBuffer:(__bridge CMSampleBufferRef)(reversedSample)];
        }
        else {
            [NSThread sleepForTimeInterval:0.05];
        }
    }

    [writerInput markAsFinished];
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_HIGH, 0);
    dispatch_async(queue, ^{
        [writer finishWriting];
    });
}

Update:


If I write the samples directly in the first while loop (the reading loop) instead of in the for loop, everything works fine (even with the writerInput.readyForMoreMediaData check). In that case the resulting file has exactly the same duration as the original. But if I write the same samples from the reversed NSArray, the result is shorter.

Print the size of each buffer, in number of samples, as you read them in the readerOutput while loop, and again as you write them in the writerInput for loop. That way you can see all the buffer sizes and check whether they add up.

For example, are you missing or skipping a buffer whenever writerInput.readyForMoreMediaData is false? You "sleep", but then move on to the next sample in reversedSamples, so that buffer is effectively dropped from the writerInput.
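
In Swift terms, the kind of bookkeeping being suggested is just a couple of counters around the existing loops; a sketch, not code from the original answer (readerOutput stands for the asker's reader track output):

    // Sketch: record how many samples each buffer holds while reading...
    var readSampleCounts: [CMItemCount] = []
    while let buffer = readerOutput.copyNextSampleBuffer() {
        readSampleCounts.append(CMSampleBufferGetNumSamples(buffer))
    }
    // ...log the same thing in the writing loop, then compare the two totals.
    print("samples read: \(readSampleCounts.reduce(0, +))")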

Update (based on the comments): I found two problems in the code:

  • The output settings were wrong: the input file is mono (one channel), but the output settings were configured for two channels. It should be
    [NSNumber numberWithInt:1], AVNumberOfChannelsKey
    instead. Look at the info about the output and input files:
  • The second problem is that you are reversing the order of 643 buffers of 8192 audio samples each, instead of reversing the index of each audio sample. To see each buffer, I changed the debugging from looking at the size of each sample to looking at the size of each buffer, which is 8192. So line 76 is now:
    size_t sampleSize = CMSampleBufferGetNumSamples(sample);
  • The output looks like this:

    2015-03-19 22:26:28.171 audioReverse[25012:4901250] Reading [0]: 8192
    2015-03-19 22:26:28.172 audioReverse[25012:4901250] Reading [1]: 8192
    ...
    2015-03-19 22:26:28.651 audioReverse[25012:4901250] Reading [640]: 8192
    2015-03-19 22:26:28.651 audioReverse[25012:4901250] Reading [641]: 8192
    2015-03-19 22:26:28.651 audioReverse[25012:4901250] Reading [642]: 5056
    
    
    2015-03-19 22:26:28.651 audioReverse[25012:4901250] Writing [0]: 5056
    2015-03-19 22:26:28.652 audioReverse[25012:4901250] Writing [1]: 8192
    ...
    2015-03-19 22:26:29.134 audioReverse[25012:4901250] Writing [640]: 8192
    2015-03-19 22:26:29.135 audioReverse[25012:4901250] Writing [641]: 8192
    2015-03-19 22:26:29.135 audioReverse[25012:4901250] Writing [642]: 8192
    
    This shows that you are reversing the order of the 8192-sample buffers, but within each buffer the audio is still "forward". We can see this in the screenshot I took of a correct (sample-by-sample) reversal versus the buffer reversal:


    I think your current scheme could work if you also reversed every sample within each 8192-sample buffer. I personally would not recommend using NSArray enumerators for signal processing, but it can work if you operate at the sample level.

    Writing the audio samples in reverse order is not enough. The sample data itself needs to be reversed, and its timing information needs to be set up correctly.

    In Swift, we create an extension of AVAsset.
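
    The snippets that follow are fragments of that extension; a skeleton they could live in might look like this (the method name reverseAudio and its destinationURL parameter are assumptions for illustration, not part of the original answer):

    extension AVAsset {

        // Hypothetical entry point; the fragments below form the body of a method like this.
        func reverseAudio(to destinationURL: URL) {
            // 1. read decompressed (linear PCM) sample buffers
            // 2. compute new timing info relative to the end of the track
            // 3. write the buffers back in reverse order, reversing each buffer's samples
        }
    }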

    The samples must be processed as decompressed samples. To do that, create the audio reader settings using kAudioFormatLinearPCM:

    let kAudioReaderSettings = [
        AVFormatIDKey: Int(kAudioFormatLinearPCM) as AnyObject,
        AVLinearPCMBitDepthKey: 16 as AnyObject,
        AVLinearPCMIsBigEndianKey: false as AnyObject,
        AVLinearPCMIsFloatKey: false as AnyObject,
        AVLinearPCMIsNonInterleaved: false as AnyObject]
    
    Use our AVAsset extension method audioReader:

    func audioReader(outputSettings: [String : Any]?) -> (audioTrack:AVAssetTrack?, audioReader:AVAssetReader?, audioReaderOutput:AVAssetReaderTrackOutput?) {
        
        if let audioTrack = self.tracks(withMediaType: .audio).first {
            if let audioReader = try? AVAssetReader(asset: self)  {
                let audioReaderOutput = AVAssetReaderTrackOutput(track: audioTrack, outputSettings: outputSettings)
                return (audioTrack, audioReader, audioReaderOutput)
            }
        }
        
        return (nil, nil, nil)
    }
    
    let (_, audioReader, audioReaderOutput) = self.audioReader(outputSettings: kAudioReaderSettings)
    
    to create the audio reader (AVAssetReader) and audio reader output (AVAssetReaderTrackOutput) for reading the audio samples.

    We need to keep track of the audio samples as well as new timing information:

    var audioSamples:[CMSampleBuffer] = []
    var timingInfos:[CMSampleTimingInfo] = []
    
    Now start reading the samples. For each audio sample, get its timing information and produce new timing information that is relative to the end of the audio track (since we will write the samples back in reverse order).

    In other words, we adjust each sample's presentation time. For example, if the track is 120 s long and a buffer originally spans 10 s to 10.2 s, its new presentation time becomes 120 − 10.2 = 109.8 s.

    guard let audioReader = audioReader, let audioReaderOutput = audioReaderOutput else { return }
    
    // The track output must be attached to the reader before reading starts.
    if audioReader.canAdd(audioReaderOutput) { audioReader.add(audioReaderOutput) }
    
    if audioReader.startReading() {
        while audioReader.status == .reading {
            if let sampleBuffer = audioReaderOutput.copyNextSampleBuffer() {
                // process sample
            }
        }
    }
    
    So, to "process the sample", we first get its timing info (CMSampleTimingInfo) using CMSampleBufferGetSampleTimingInfoArray:

    var timingInfo = CMSampleTimingInfo()
    var timingInfoCount = CMItemCount()
    
    CMSampleBufferGetSampleTimingInfoArray(sampleBuffer, entryCount: 0, arrayToFill: &timingInfo, entriesNeededOut: &timingInfoCount)
    
    Get the presentation time and duration:

    let presentationTime = timingInfo.presentationTimeStamp
    let duration = CMSampleBufferGetDuration(sampleBuffer)
    
    Calculate the end time of the sample:

    let endTime = CMTimeAdd(presentationTime, duration)
    
    Now calculate the new presentation time relative to the end of the track:

    let newPresentationTime = CMTimeSubtract(self.duration, endTime)
    
    And use it to update the timing info:

    timingInfo.presentationTimeStamp = newPresentationTime
    
    Finally, save the audio sample buffer along with its timing info; we will need both later when creating the reversed samples:

    timingInfos.append(timingInfo)
    audioSamples.append(sampleBuffer)
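    
    Putting the pieces above together, the body of the reading loop could look like this (a sketch assembled from the fragments in this answer; it uses CMSampleBufferGetPresentationTimeStamp directly rather than the timing-info-array call):

    if audioReader.startReading() {
        while audioReader.status == .reading {
            if let sampleBuffer = audioReaderOutput.copyNextSampleBuffer() {

                let presentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
                let duration = CMSampleBufferGetDuration(sampleBuffer)
                let endTime = CMTimeAdd(presentationTime, duration)

                // new presentation time, relative to the end of the track
                let timingInfo = CMSampleTimingInfo(duration: duration,
                                                    presentationTimeStamp: CMTimeSubtract(self.duration, endTime),
                                                    decodeTimeStamp: CMTime.invalid)

                timingInfos.append(timingInfo)
                audioSamples.append(sampleBuffer)
            }
        }
    }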
    
    We need an AVAssetWriter:

    guard let assetWriter = try? AVAssetWriter(outputURL: destinationURL, fileType: AVFileType.wav) else {
        // error handling
        return
    }
    
    The file type is "wav" because the reversed samples will be written as uncompressed audio in the linear PCM format, as follows.

    For the assetWriter we specify audio compression settings and a "source format hint", which can be obtained from one of the uncompressed sample buffers:

    let sampleBuffer = audioSamples[0]
    let sourceFormat = CMSampleBufferGetFormatDescription(sampleBuffer)
    
    let audioCompressionSettings = [AVFormatIDKey: kAudioFormatLinearPCM] as [String : Any]
    
    Now we can create the AVAssetWriterInput, add it to the writer and start writing:

    let assetWriterInput = AVAssetWriterInput(mediaType: AVMediaType.audio, outputSettings:audioCompressionSettings, sourceFormatHint: sourceFormat)
    
    assetWriter.add(assetWriterInput)
    
    assetWriter.startWriting()
    assetWriter.startSession(atSourceTime: CMTime.zero)
    
    Now iterate over the samples in reverse order, reversing each sample itself as well.

    We provide an extension on CMSampleBuffer for this, called "reverse".

    Using requestMediaDataWhenReady we do the following:

    let nbrSamples = audioSamples.count
    var index = 0
    
    let serialQueue: DispatchQueue = DispatchQueue(label: "com.limit-point.reverse-audio-queue")
        
    assetWriterInput.requestMediaDataWhenReady(on: serialQueue) {
            
        while assetWriterInput.isReadyForMoreMediaData, index < nbrSamples {
            let sampleBuffer = audioSamples[nbrSamples - 1 - index]
                
            let timingInfo = timingInfos[index]
                
            if let reversedBuffer = sampleBuffer.reverse(timingInfo: [timingInfo]), assetWriterInput.append(reversedBuffer) == true {
                index += 1
            }
            else {
                index = nbrSamples
            }
                
            if index == nbrSamples {
                assetWriterInput.markAsFinished()
                
                finishWriting() // call assetWriter.finishWriting, check assetWriter status, etc.
            }
        }
    }
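    
    The finishWriting() call above is left open in the original answer; a minimal sketch of what it might do (names assumed) is:

    // Hypothetical helper: finish the writer and report the outcome.
    func finishWriting() {
        assetWriter.finishWriting {
            if assetWriter.status == .completed {
                print("reversed audio written to \(destinationURL)")
            } else {
                print("writing failed: \(String(describing: assetWriter.error))")
            }
        }
    }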
    
    The data that has to be reversed needs to be obtained using:

    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer
    
    The CMSampleBuffer header file describes this method as follows:

    "Creates an AudioBufferList containing the data from the CMSampleBuffer, and a CMBlockBuffer which references (and manages the lifetime of) the data in that AudioBufferList."

    Call it as follows, where "self" refers to the CMSampleBuffer we are reversing, since this is an extension:

    var blockBuffer: CMBlockBuffer? = nil
    let audioBufferList: UnsafeMutableAudioBufferListPointer = AudioBufferList.allocate(maximumBuffers: 1)
    
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
        self,
        bufferListSizeNeededOut: nil,
        bufferListOut: audioBufferList.unsafeMutablePointer,
        bufferListSize: AudioBufferList.sizeInBytes(maximumBuffers: 1),
        blockBufferAllocator: nil,
        blockBufferMemoryAllocator: nil,
        flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
        blockBufferOut: &blockBuffer
     )
    
    Now you can access the raw data via:

    guard let data: UnsafeMutableRawPointer = audioBufferList.unsafePointer.pointee.mBuffers.mData else { return nil }
    
    To reverse the data we need to access it as an array of "samples", called sampleArray here; in Swift that is done as follows:

    let samples = data.assumingMemoryBound(to: Int16.self)
            
    let sizeofInt16 = MemoryLayout<Int16>.size
    let dataSize = audioBufferList.unsafePointer.pointee.mBuffers.mDataByteSize  
    
    let dataCount = Int(dataSize) / sizeofInt16
            
    var sampleArray = Array(UnsafeBufferPointer(start: samples, count: dataCount)) as [Int16]
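    
    The reversal itself is then a single call (this is the same call that appears in the complete extension at the end of this page):

    sampleArray.reverse()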
    
    Using the reversed samples, we need to create a new CMSampleBuffer containing the reversed sample data and the new timing information we generated earlier when reading the audio samples from the source file.

    Now we replace the data in the CMBlockBuffer we obtained earlier with CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer:

    First, replace the "samples" bytes with the reversed array:

    var status:OSStatus = noErr
            
    sampleArray.withUnsafeBytes { sampleArrayPtr in
        if let baseAddress = sampleArrayPtr.baseAddress {
            let bufferPointer: UnsafePointer<Int16> = baseAddress.assumingMemoryBound(to: Int16.self)
            let rawPtr = UnsafeRawPointer(bufferPointer)
                    
            status = CMBlockBufferReplaceDataBytes(with: rawPtr, blockBuffer: blockBuffer!, offsetIntoDestination: 0, dataLength: Int(dataSize))
        } 
    }
    
    if status != noErr {
        return nil
    }
    
    Now create the new sample buffer using the reversed blockBuffer and, most notably, the new timing information that is passed as an argument to the "reverse" function we are defining:

    let formatDescription = CMSampleBufferGetFormatDescription(self)
    let numberOfSamples = CMSampleBufferGetNumSamples(self)
            
    var newBuffer:CMSampleBuffer?
            
    guard CMSampleBufferCreate(allocator: kCFAllocatorDefault, dataBuffer: blockBuffer, dataReady: true, makeDataReadyCallback: nil, refcon: nil, formatDescription: formatDescription, sampleCount: numberOfSamples, sampleTimingEntryCount: timingInfo.count, sampleTimingArray: timingInfo, sampleSizeEntryCount: 0, sampleSizeArray: nil, sampleBufferOut: &newBuffer) == noErr else {
        return self
    }
            
    return newBuffer
    
    And that's all there is to it.


    As a final note, the Core Audio and AVFoundation headers provide a lot of useful information, such as CoreAudioTypes.h, CMSampleBuffer.h and so on.

    A complete example showing how to reverse both video and audio into the same asset output, using Swift 5, with the audio pr…
    
     private func reverseVideo(inURL: URL, outURL: URL, queue: DispatchQueue, _ completionBlock: ((Bool)->Void)?) {
        Log.info("Start reverse video!")
        let asset = AVAsset.init(url: inURL)
        guard
            let reader = try? AVAssetReader.init(asset: asset),
            let videoTrack = asset.tracks(withMediaType: .video).first,
            let audioTrack = asset.tracks(withMediaType: .audio).first
    
            else {
                assert(false)
                completionBlock?(false)
                return
        }
    
        let width = videoTrack.naturalSize.width
        let height = videoTrack.naturalSize.height
    
        // Video reader
        let readerVideoSettings: [String : Any] = [ String(kCVPixelBufferPixelFormatTypeKey) : kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange,]
        let readerVideoOutput = AVAssetReaderTrackOutput.init(track: videoTrack, outputSettings: readerVideoSettings)
        reader.add(readerVideoOutput)
    
        // Audio reader
        let readerAudioSettings: [String : Any] = [
            AVFormatIDKey: kAudioFormatLinearPCM,
            AVLinearPCMBitDepthKey: 16 ,
            AVLinearPCMIsBigEndianKey: false ,
            AVLinearPCMIsFloatKey: false,]
        let readerAudioOutput = AVAssetReaderTrackOutput.init(track: audioTrack, outputSettings: readerAudioSettings)
        reader.add(readerAudioOutput)
    
        //Start reading content
        reader.startReading()
    
        //Reading video samples
        var videoBuffers = [CMSampleBuffer]()
        while let nextBuffer = readerVideoOutput.copyNextSampleBuffer() {
            videoBuffers.append(nextBuffer)
        }
    
        //Reading audio samples
        var audioBuffers = [CMSampleBuffer]()
        var timingInfos = [CMSampleTimingInfo]()
        while let nextBuffer = readerAudioOutput.copyNextSampleBuffer() {
    
            var timingInfo = CMSampleTimingInfo()
            var timingInfoCount = CMItemCount()
            CMSampleBufferGetSampleTimingInfoArray(nextBuffer, entryCount: 0, arrayToFill: &timingInfo, entriesNeededOut: &timingInfoCount)
    
            let duration = CMSampleBufferGetDuration(nextBuffer)
            let endTime = CMTimeAdd(timingInfo.presentationTimeStamp, duration)
            let newPresentationTime = CMTimeSubtract(asset.duration, endTime)  // relative to the end of the track, as described above
    
            timingInfo.presentationTimeStamp = newPresentationTime
    
            timingInfos.append(timingInfo)
            audioBuffers.append(nextBuffer)
        }
    
        //Stop reading
        let status = reader.status
        reader.cancelReading()
        guard status == .completed, let firstVideoBuffer = videoBuffers.first, let firstAudioBuffer = audioBuffers.first else {
            assert(false)
            completionBlock?(false)
            return
        }
    
        //Start video time
        let sessionStartTime = CMSampleBufferGetPresentationTimeStamp(firstVideoBuffer)
    
        //Writer for video
        let writerVideoSettings: [String:Any] = [
            AVVideoCodecKey : AVVideoCodecType.h264,
            AVVideoWidthKey : width,
            AVVideoHeightKey: height,
        ]
        let writerVideoInput: AVAssetWriterInput
        if let formatDescription = videoTrack.formatDescriptions.last {
            writerVideoInput = AVAssetWriterInput.init(mediaType: .video, outputSettings: writerVideoSettings, sourceFormatHint: (formatDescription as! CMFormatDescription))
        } else {
            writerVideoInput = AVAssetWriterInput.init(mediaType: .video, outputSettings: writerVideoSettings)
        }
        writerVideoInput.transform = videoTrack.preferredTransform
        writerVideoInput.expectsMediaDataInRealTime = false
    
        //Writer for audio
        let writerAudioSettings: [String:Any] = [
            AVFormatIDKey : kAudioFormatMPEG4AAC,
            AVSampleRateKey : 44100,
            AVNumberOfChannelsKey: 2,
            AVEncoderBitRateKey:128000,
            AVChannelLayoutKey: NSData(),
        ]
        let sourceFormat = CMSampleBufferGetFormatDescription(firstAudioBuffer)
        let writerAudioInput: AVAssetWriterInput = AVAssetWriterInput.init(mediaType: .audio, outputSettings: writerAudioSettings, sourceFormatHint: sourceFormat)
        writerAudioInput.expectsMediaDataInRealTime = true
    
        guard
            let writer = try? AVAssetWriter.init(url: outURL, fileType: .mp4),
            writer.canAdd(writerVideoInput),
            writer.canAdd(writerAudioInput)
            else {
                assert(false)
                completionBlock?(false)
                return
        }
    
        let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor.init(assetWriterInput: writerVideoInput, sourcePixelBufferAttributes: nil)
        let group = DispatchGroup.init()
    
        group.enter()
        writer.add(writerVideoInput)
        writer.add(writerAudioInput)
        writer.startWriting()
        writer.startSession(atSourceTime: sessionStartTime)
    
        var videoFinished = false
        var audioFinished = false
    
        //Write video samples in reverse order
        var currentSample = 0
        writerVideoInput.requestMediaDataWhenReady(on: queue) {
            for i in currentSample..<videoBuffers.count {
                currentSample = i
                if !writerVideoInput.isReadyForMoreMediaData {
                    return
                }
                let presentationTime = CMSampleBufferGetPresentationTimeStamp(videoBuffers[i])
                guard let imageBuffer = CMSampleBufferGetImageBuffer(videoBuffers[videoBuffers.count - i - 1]) else {
                    Log.info("VideoWriter reverseVideo: warning, could not get imageBuffer from SampleBuffer...")
                    continue
                }
                if !pixelBufferAdaptor.append(imageBuffer, withPresentationTime: presentationTime) {
                    Log.info("VideoWriter reverseVideo: warning, could not append imageBuffer...")
                }
            }
    
            // finish write video samples
            writerVideoInput.markAsFinished()
            Log.info("Video writing finished!")
            videoFinished = true
            if(audioFinished){
                group.leave()
            }
        }
        //Write audio samples in reverse order
        let totalAudioSamples = audioBuffers.count
        writerAudioInput.requestMediaDataWhenReady(on: queue) {
            for i in 0..<totalAudioSamples {  // include the last buffer as well
                if !writerAudioInput.isReadyForMoreMediaData {
                    return
                }
                let audioSample = audioBuffers[totalAudioSamples-1-i]
                let timingInfo = timingInfos[i]
                // reverse samples data using timing info
                if let reversedBuffer = audioSample.reverse(timingInfo: [timingInfo]) {
                    // append data
                    if writerAudioInput.append(reversedBuffer) == false {
                        break
                    }
                }
            }
    
            // finish
            writerAudioInput.markAsFinished()
            Log.info("Audio writing finished!")
            audioFinished = true
            if(videoFinished){
                group.leave()
            }
        }
    
        group.notify(queue: queue) {
            writer.finishWriting {
                if writer.status != .completed {
                    Log.info("VideoWriter reverse video: error - \(String(describing: writer.error))")
                    completionBlock?(false)
                } else {
                    Log.info("Ended reverse video!")
                    completionBlock?(true)
                }
            }
        }
    }
    
    extension CMSampleBuffer {
    
    func reverse(timingInfo:[CMSampleTimingInfo]) -> CMSampleBuffer? {
        var blockBuffer: CMBlockBuffer? = nil
        let audioBufferList: UnsafeMutableAudioBufferListPointer = AudioBufferList.allocate(maximumBuffers: 1)
    
        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
            self,
            bufferListSizeNeededOut: nil,
            bufferListOut: audioBufferList.unsafeMutablePointer,
            bufferListSize: AudioBufferList.sizeInBytes(maximumBuffers: 1),
            blockBufferAllocator: nil,
            blockBufferMemoryAllocator: nil,
            flags: kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment,
            blockBufferOut: &blockBuffer
         )
        
        if let data = audioBufferList.unsafePointer.pointee.mBuffers.mData {
        
            let samples = data.assumingMemoryBound(to: Int16.self)
    
            let sizeofInt16 = MemoryLayout<Int16>.size
            let dataSize = audioBufferList.unsafePointer.pointee.mBuffers.mDataByteSize
    
            let dataCount = Int(dataSize) / sizeofInt16
    
            var sampleArray = Array(UnsafeBufferPointer(start: samples, count: dataCount)) as [Int16]
            
            sampleArray.reverse()
            
            var status:OSStatus = noErr
                    
            sampleArray.withUnsafeBytes { sampleArrayPtr in
                if let baseAddress = sampleArrayPtr.baseAddress {
                    let bufferPointer: UnsafePointer<Int16> = baseAddress.assumingMemoryBound(to: Int16.self)
                    let rawPtr = UnsafeRawPointer(bufferPointer)
                            
                    status = CMBlockBufferReplaceDataBytes(with: rawPtr, blockBuffer: blockBuffer!, offsetIntoDestination: 0, dataLength: Int(dataSize))
                }
            }
    
            if status != noErr {
                return nil
            }
            
            let formatDescription = CMSampleBufferGetFormatDescription(self)
            let numberOfSamples = CMSampleBufferGetNumSamples(self)
    
            var newBuffer:CMSampleBuffer?
            
            guard CMSampleBufferCreate(allocator: kCFAllocatorDefault, dataBuffer: blockBuffer, dataReady: true, makeDataReadyCallback: nil, refcon: nil, formatDescription: formatDescription, sampleCount: numberOfSamples, sampleTimingEntryCount: timingInfo.count, sampleTimingArray: timingInfo, sampleSizeEntryCount: 0, sampleSizeArray: nil, sampleBufferOut: &newBuffer) == noErr else {
                return self
            }
    
            return newBuffer
        }
        return nil
    }
    }
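    
    For completeness, a usage sketch (the file paths and queue label below are placeholders, and reverseVideo is assumed to be called from within the same type that declares it):

    // Hypothetical invocation of the function above.
    let inURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("input.mp4")
    let outURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("reversed.mp4")
    let queue = DispatchQueue(label: "com.example.reverse-video-queue")

    reverseVideo(inURL: inURL, outURL: outURL, queue: queue) { success in
        print("reverse finished: \(success)")
    }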