macOS: How to keep two AVCaptureMovieFileOutputs in sync

Tags: macos, avfoundation, avcapturesession, avcapturemoviefileoutput, avcaptureoutput

I have two camera inputs going into an OSX app, and I am trying to save them using AVCaptureMovieFileOutput. It doesn't take long before the videos drift out of sync. After a one-minute test they can be off by 1 to 5 seconds; after an hour-long test they were off by 20 seconds. I feel there must be some simple way to keep the two outputs in sync. We have tried using the same device for both sessions and outputs, but we hit the same problem. We also tried dropping the fps down to 15, with no luck.

Setting up the outputs:

func assignDeviceToPreview(captureSession: AVCaptureSession, device: AVCaptureDevice, previewView: NSView, index: Int){

    captureSession.stopRunning()

    captureSession.beginConfiguration()

    //clearing out old inputs
    for input in captureSession.inputs {
        let i = input as! AVCaptureInput
        captureSession.removeInput(i)
    }

    let output = self.outputs[index]
    output.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]

    //removing old outputs
    for o in captureSession.outputs{
        if let oc = o as? AVCaptureStillImageOutput{
            captureSession.removeOutput(oc)
            print("removed image out")
        }
    }

    //Adding input
    do {

        try captureSession.addInput(AVCaptureDeviceInput(device:device))

        let camViewLayer = previewView.layer!
        camViewLayer.backgroundColor = CGColorGetConstantColor(kCGColorBlack)

        let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
        previewLayer.frame = camViewLayer.bounds
        previewLayer.autoresizingMask = [.LayerWidthSizable, .LayerHeightSizable]

        camViewLayer.addSublayer(previewLayer)

        let overlayPreview = overlayPreviews[index]
        overlayPreview.frame.origin = CGPoint.zero

        previewView.addSubview(overlayPreview)

        //adding output
        captureSession.addOutput(output)

        if captureSession == session2{
            let audio = AVCaptureDevice.defaultDeviceWithMediaType(AVMediaTypeAudio)

            do {
                let input = try AVCaptureDeviceInput(device: audio)
                captureSession.addInput(input)
            }
        }

    } catch {
        print("Failed to add webcam as AV input")
    }

    captureSession.commitConfiguration()
    captureSession.startRunning()
}
Starting the recording:

func startRecording(){

    startRecordingTimer()

    let base = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)[0]
    let appFolder = "Sessions"
    let sessionFolder = "session_" + session.UUID

    let path = base+"/"+appFolder+"/"+sessionFolder

    do{
        try NSFileManager.defaultManager().createDirectoryAtPath(path, withIntermediateDirectories: true, attributes: nil)
    }catch{
        print("issue creating folder")
    }

    for fileOutput in fileOutputs{

        let fileName = "cam\(String(fileOutputs.indexOf(fileOutput)!))" + ".mov"

        let fileURL = NSURL.fileURLWithPathComponents([path, fileName])
        fileURLs.append(fileURL!)
        print(fileURL?.absoluteString)

        var captureConnection = fileOutput.connections.first as? AVCaptureConnection
        captureConnection!.videoMinFrameDuration = CMTimeMake(1, 15)
        captureConnection!.videoMaxFrameDuration = CMTimeMake(1, 15)

        if fileOutput == movieFileOutput1{
            fileOutput.setOutputSettings([AVVideoScalingModeKey: AVVideoScalingModeResize, AVVideoCodecKey: AVVideoCodecH264, AVVideoWidthKey: 1280, AVVideoHeightKey: 720], forConnection: captureConnection)
        }else{
            fileOutput.setOutputSettings([AVVideoScalingModeKey: AVVideoScalingModeResizeAspect, AVVideoCodecKey: AVVideoCodecH264, AVVideoWidthKey: 640, AVVideoHeightKey: 360], forConnection: captureConnection)
        }
        captureConnection = fileOutput.connections.first as? AVCaptureConnection
        print(fileOutput.outputSettingsForConnection(captureConnection))

        fileOutput.startRecordingToOutputFileURL(fileURL, recordingDelegate: self)

        print("start recording")
    }

}

For precise timing control, I think you need to look at the lower-level AVAssetWriter framework. It lets you control the writing and the timing of individual frames.

With AVAssetWriter.startSession(atSourceTime: CMTime) you can precisely control when recording starts for each camera.
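As a rough sketch (untested, using the same Swift 2-era API names as the question's code; `url1`, `url2`, `writer1`, `writer2`, and the writer inputs are placeholder names, not from the question), both writers can be started against one shared source time so their timelines share an origin:

```swift
import AVFoundation
import CoreMedia

// Sketch only: create one AVAssetWriter per camera, then anchor both
// sessions to the same start time so frame timestamps line up.
let writer1 = try AVAssetWriter(URL: url1, fileType: AVFileTypeQuickTimeMovie)
let writer2 = try AVAssetWriter(URL: url2, fileType: AVFileTypeQuickTimeMovie)

let settings: [String: AnyObject] = [
    AVVideoCodecKey: AVVideoCodecH264,
    AVVideoWidthKey: 1280,
    AVVideoHeightKey: 720
]
let input1 = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: settings)
let input2 = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: settings)
// Capture delivers frames in real time, so the inputs must not stall.
input1.expectsMediaDataInRealTime = true
input2.expectsMediaDataInRealTime = true
writer1.addInput(input1)
writer2.addInput(input2)

writer1.startWriting()
writer2.startWriting()

// One shared origin for both sessions. (Using the host clock here is
// an assumption; the presentation timestamp of the first delivered
// sample buffer is another common choice.)
let startTime = CMClockGetTime(CMClockGetHostTimeClock())
writer1.startSessionAtSourceTime(startTime)
writer2.startSessionAtSourceTime(startTime)
```

Because both sessions share `startTime`, a frame stamped t seconds after the origin lands at the same position in both files, regardless of when each camera actually delivered it.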

During writing, with an AVCaptureVideoDataOutputSampleBufferDelegate, you can go further and manipulate each generated CMSampleBuffer to adjust its timing information and keep the two videos in sync. See the CMSampleBuffer reference, in particular the section on modifying sample buffer timing.
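A minimal sketch of that delegate callback, again untested and in Swift 2-era style (`sessionStartTime` and `writerInput` are assumed properties on the delegate, not from the question):

```swift
import AVFoundation
import CoreMedia

// Sketch only: rewrite each buffer's presentation timestamp relative
// to the shared session start, then append it to the asset writer.
func captureOutput(captureOutput: AVCaptureOutput!,
                   didOutputSampleBuffer sampleBuffer: CMSampleBuffer!,
                   fromConnection connection: AVCaptureConnection!) {

    // Offset of this frame from the shared origin.
    let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    let adjustedPTS = CMTimeSubtract(pts, sessionStartTime)

    var timing = CMSampleTimingInfo(
        duration: CMSampleBufferGetDuration(sampleBuffer),
        presentationTimeStamp: adjustedPTS,
        decodeTimeStamp: kCMTimeInvalid)

    // Copy the buffer with the new timing; the pixel data is shared,
    // only the timing info is replaced.
    var retimed: CMSampleBuffer?
    CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault,
                                          sampleBuffer, 1, &timing, &retimed)

    if let retimed = retimed where writerInput.readyForMoreMediaData {
        writerInput.appendSampleBuffer(retimed)
    }
}
```

Doing the same retiming in both cameras' delegates, against the same `sessionStartTime`, is what keeps the two written files on a common timeline.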


That said, I have never tried this myself and can't be sure it will work, but I believe that if you go down this road you will get close to what you're trying to achieve.

Thanks. I'll look into it and get back to you.

Thank you very much. It took a lot of googling and trial and error, but I did get it to the point where I can control it manually: I write individual frames one at a time, with timing based on a real-time calculation rather than what the buffer thinks is correct. Both videos came out at exactly 1 hour 23 minutes 58 seconds each. Now on to figuring out the audio part.

Great, glad this helped!