Swift: AVAssetWriter to multiple files


I have an AVCaptureSession consisting of an AVCaptureScreenInput and an AVCaptureDeviceInput. Both are connected as data-output delegates, and I use an AVAssetWriter to write a single MP4 file.
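For context, the setup described above might look roughly like this (a sketch with assumed names and no error handling, not the question's actual code):

import Cocoa
import AVFoundation

// Sketch (assumed details): screen + microphone inputs feed data outputs whose
// CMSampleBuffers a shared delegate appends to a single AVAssetWriter.
func makeCaptureSession(delegate: AVCaptureVideoDataOutputSampleBufferDelegate & AVCaptureAudioDataOutputSampleBufferDelegate,
                        queue: DispatchQueue) throws -> AVCaptureSession {
    let session = AVCaptureSession()
    session.addInput(AVCaptureScreenInput(displayID: CGMainDisplayID()))
    session.addInput(try AVCaptureDeviceInput(device: AVCaptureDevice.default(for: .audio)!))

    let videoOutput = AVCaptureVideoDataOutput()
    videoOutput.setSampleBufferDelegate(delegate, queue: queue)
    session.addOutput(videoOutput)

    let audioOutput = AVCaptureAudioDataOutput()
    audioOutput.setSampleBufferDelegate(delegate, queue: queue)
    session.addOutput(audioOutput)

    // In the delegate callbacks, each buffer is appended to the matching
    // AVAssetWriterInput of an AVAssetWriter configured for .mp4.
    return session
}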

Writing to a single MP4 file works fine. When I instead switch between multiple AVAssetWriters so that a consecutive file is saved every 5 seconds, and then stitch all the files together with FFmpeg, there is a slight audio drop-out at each join.

An example of the joined video (note the short audio drop every 5 seconds):

After a lot of investigation, I determined that this is probably because the audio and video buffers are split at different points / do not start on the same timestamp.

I now believe my algorithm should work, but I don't know how to split the audio CMSampleBuffer. I found CMSampleBufferCopySampleBufferForRange, which seems useful, but I'm not sure how to split based on a time (I need buffers containing all the samples before and after that point).


If you're using AVCaptureScreenInput, then you're not on iOS, right? So I was going to write about splitting sample buffers, but then I remembered that on OS X, AVCaptureFileOutput.startRecording (as opposed to AVAssetWriter) has this tantalizing comment:

"On Mac OS X, if this method is called within the captureOutput:didOutputSampleBuffer:fromConnection: delegate method, the first samples written to the new file are guaranteed to be those contained in the sample buffer passed to that method."

Not dropping samples sounds promising, so if you can live with mov instead of mp4 files, you should be able to get gapless audio by using AVCaptureMovieFileOutput, implementing fileOutputShouldProvideSampleAccurateRecordingStart and calling startRecording from didOutputSampleBuffer, like this:

import Cocoa
import AVFoundation

@NSApplicationMain
class AppDelegate: NSObject, NSApplicationDelegate {

    @IBOutlet weak var window: NSWindow!

    let session = AVCaptureSession()
    let movieFileOutput = AVCaptureMovieFileOutput()

    var movieChunkNumber = 0
    var chunkDuration = kCMTimeZero // TODO: synchronize access? probably fine.

    // Finish any current chunk and immediately start recording the next
    // numbered file; on macOS this switches files without dropping samples.
    func startRecordingChunkFile() {
        let filename = String(format: "capture-%.2i.mov", movieChunkNumber)
        let url = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!.appendingPathComponent(filename)
        movieFileOutput.startRecording(to: url, recordingDelegate: self)

        movieChunkNumber += 1
    }

    func applicationDidFinishLaunching(_ aNotification: Notification) {
        let displayInput = AVCaptureScreenInput(displayID: CGMainDisplayID())

        let micInput = try! AVCaptureDeviceInput(device: AVCaptureDevice.default(for: .audio)!)

        session.addInput(displayInput)
        session.addInput(micInput)

        movieFileOutput.delegate = self

        session.addOutput(movieFileOutput)

        session.startRunning()

        self.startRecordingChunkFile()
    }
}

extension AppDelegate: AVCaptureFileOutputRecordingDelegate {
    func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL, from connections: [AVCaptureConnection], error: Error?) {
        // NSLog("error \(error)")
    }
}

extension AppDelegate: AVCaptureFileOutputDelegate {
    // Opt in to sample-accurate recording starts so the guarantee quoted
    // above applies and no samples are lost at chunk boundaries.
    func fileOutputShouldProvideSampleAccurateRecordingStart(_ output: AVCaptureFileOutput) -> Bool {
        return true
    }

    func fileOutput(_ output: AVCaptureFileOutput, didOutputSampleBuffer sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
        // Accumulate the duration of the audio seen so far; every 5 seconds,
        // start the next chunk from inside this callback so the new file
        // begins exactly on this sample buffer.
        if let formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer) {
            if CMFormatDescriptionGetMediaType(formatDescription) == kCMMediaType_Audio {
                let duration = CMSampleBufferGetDuration(sampleBuffer)
                chunkDuration = CMTimeAdd(chunkDuration, duration)

                if CMTimeGetSeconds(chunkDuration) >= 5 {
                    startRecordingChunkFile()
                    chunkDuration = kCMTimeZero
                }
            }
        }
    }
}
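Once the numbered chunks exist, stitching them back together with FFmpeg (which the question already does) can be done losslessly with the concat demuxer and stream copy. A minimal sketch, assuming the capture-%.2i.mov naming used above and the files in the working directory:

# Build a concat list in playback order
for f in capture-*.mov; do echo "file '$f'" >> list.txt; done

# Stream-copy the chunks into one file without re-encoding
ffmpeg -f concat -safe 0 -i list.txt -c copy joined.mov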

Thanks for pointing me in another direction! I'm still down a rabbit hole, but I'll give this a try within the week and mark it correct if it works!

You're welcome! Your original question is valid, but the answer is tricky: the CMSampleBuffer calls are very verbose, you want to split on sample boundaries but you only seem to have CMTimes, and by the time AVCaptureSession hands them to you their timebase is definitely not the audio sample rate. And then OS X gives you this get-out-of-jail-free card.

Yes, absolutely. The other problem is that CMSampleBufferCopySampleBufferForRange wouldn't work on its own, because a CMSampleBuffer only carries the presentation timestamp of its first sample, which means I could only assume that every sample after it is evenly spaced (according to the docs). So I'd probably have had to pull the data out of the underlying buffer manually and build two CMSampleBuffers :-)
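For illustration, if that even-spacing assumption holds, the split discussed here might look roughly like the sketch below. This is an untested illustration, not code from the thread: split(sampleBuffer:at:) is a hypothetical helper that converts the split time into a sample index via the audio sample rate, then uses CMSampleBufferCopySampleBufferForRange.

import CoreMedia

// Hypothetical sketch (not from the thread): split an audio CMSampleBuffer at
// `time`, assuming samples are evenly spaced after the buffer's first
// presentation timestamp. Uses the pre-10.15 unlabeled CoreMedia call style,
// matching the answer's code above.
func split(sampleBuffer: CMSampleBuffer, at time: CMTime) -> (CMSampleBuffer?, CMSampleBuffer?) {
    guard let format = CMSampleBufferGetFormatDescription(sampleBuffer),
        let asbd = CMAudioFormatDescriptionGetStreamBasicDescription(format)?.pointee else {
        return (nil, nil)
    }

    // Convert the split time into a sample index via the audio sample rate.
    let start = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    let elapsed = CMTimeGetSeconds(CMTimeSubtract(time, start))
    let splitIndex = Int(elapsed * asbd.mSampleRate)
    let total = CMSampleBufferGetNumSamples(sampleBuffer)
    guard splitIndex > 0 && splitIndex < total else { return (nil, nil) }

    // Copy the samples before and after the split point; OSStatus checks
    // omitted for brevity.
    var head: CMSampleBuffer?
    var tail: CMSampleBuffer?
    _ = CMSampleBufferCopySampleBufferForRange(kCFAllocatorDefault, sampleBuffer,
        CFRangeMake(0, splitIndex), &head)
    _ = CMSampleBufferCopySampleBufferForRange(kCFAllocatorDefault, sampleBuffer,
        CFRangeMake(splitIndex, total - splitIndex), &tail)
    return (head, tail)
}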