AVAssetWriterInput and isReadyForMoreMediaData issue when creating a video from images in Swift


I am creating a video from images, and the video loses some frames because isReadyForMoreMediaData is sometimes not ready. When I debugged, I saw the cause: because of the loop, the writer needs some delay before starting the next buffer, but I don't know how to add it.

{
           for nextDicData in self.selectedPhotosArray{      
                if (videoWriterInput.isReadyForMoreMediaData) {

                    if let nextImage = nextDicData["img"] as? UIImage
                    {
                        var frameDuration = CMTimeMake(Int64(0), fps)
                        if let timeVl = nextDicData["time"] as? Float{
                               framePerSecond = Int64(timeVl * 1000)
                            print("TIME FRAME : \(timeVl)")

                        }else{
                             framePerSecond = Int64(0.1 * 1000)
                        }

                        frameDuration = CMTimeMake(framePerSecond, fps)
                        let lastFrameTime = CMTimeMake(Int64(lastTimeVl), fps)
                        let presentationTime = CMTimeAdd(lastFrameTime, frameDuration)
                        var pixelBuffer: CVPixelBuffer? = nil
                        let status: CVReturn = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferAdaptor.pixelBufferPool!, &pixelBuffer)
                        if let pixelBuffer = pixelBuffer, status == 0 {
                            let managedPixelBuffer = pixelBuffer
                            CVPixelBufferLockBaseAddress(managedPixelBuffer, CVPixelBufferLockFlags(rawValue: CVOptionFlags(0)))
                            let data = CVPixelBufferGetBaseAddress(managedPixelBuffer)
                            let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
                            let context = CGContext(data: data, width: Int(self.outputSize.width), height: Int(self.outputSize.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(managedPixelBuffer), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)
                            context!.clear(CGRect(x: 0, y: 0, width: CGFloat(self.outputSize.width), height: CGFloat(self.outputSize.height)))
                            let horizontalRatio = CGFloat(self.outputSize.width) / nextImage.size.width
                            let verticalRatio = CGFloat(self.outputSize.height) / nextImage.size.height
                            //let aspectRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
                            let aspectRatio = min(horizontalRatio, verticalRatio) // ScaleAspectFit
                            let newSize: CGSize = CGSize(width: nextImage.size.width, height: nextImage.size.height)
                            let x = newSize.width < self.outputSize.width ? (self.outputSize.width - newSize.width) / 2 : 0
                            let y = newSize.height < self.outputSize.height ? (self.outputSize.height - newSize.height) / 2 : 0

                            context?.draw(nextImage.cgImage!, in: CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height))
                            CVPixelBufferUnlockBaseAddress(managedPixelBuffer, CVPixelBufferLockFlags(rawValue: CVOptionFlags(0)))
                            appendSucceeded = pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)


                        } else {
                            print("Failed to allocate pixel buffer")
                            appendSucceeded = false
                        }
                    }

                }else{
                    //not ready
                       print("writer is not ready: \(lastTimeVl)")
                }
                if !appendSucceeded {
                    break
                }
                frameCount += 1
                lastTimeVl += framePerSecond
                print("LAST TIME : \(lastTimeVl)")


            }
Swift 5

Add a usleep after pixelBufferAdaptor.append.

The reason for adding the sleep is that, when there are multiple inputs, AVAssetWriter tries to write media data in an interleaved pattern, so the writer must be ready for the next input (an image, in your case) before you can append data. After waiting a short time, it becomes ready for the next input.


AVAssetWriterInput can help you manage this. By calling requestMediaDataWhenReady, it will notify you again when isReadyForMoreMediaData becomes true.

Here is an example from Apple's documentation (translated to Swift):

Now, don't worry when the writer is "suddenly" not ready: it will continue appending frames on the next requestMediaDataWhenReady callback.

I hope it works. Thanks, but that's not my problem.

Could you provide an explanation for this answer, and link to any supporting documentation?
 appendSucceeded = pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)
 // videoWriterInput must be paused for at least 50 milliseconds, or the buffer won't be ready to append the next frame
 usleep(useconds_t(50000))
myAVAssetWriterInput.requestMediaDataWhenReady(on: queue) {
    while myAVAssetWriterInput.isReadyForMoreMediaData {
        let nextSampleBuffer = copyNextSampleBufferToWrite()
        if let nextSampleBuffer = nextSampleBuffer { 
            // you have another frame to add
            myAVAssetWriterInput.append(nextSampleBuffer)
        } else { 
            // finished to add frames
            myAVAssetWriterInput.markAsFinished()
            break
        }
    }
}
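
Applying this pattern to the question's image loop might look like the sketch below. It reuses the names from the question (videoWriterInput, pixelBufferAdaptor, fps, selectedPhotosArray, lastTimeVl); videoWriter is the assumed AVAssetWriter instance, makePixelBuffer(from:) is a hypothetical helper wrapping the CVPixelBufferPoolCreatePixelBuffer + CGContext drawing code from the question, and the CMTimeMake(value:timescale:) spelling is the Swift 5 form of the calls above.

```swift
// A sketch, not a drop-in implementation: drive the loop from
// requestMediaDataWhenReady instead of a plain `for`, so frames are
// only appended while the writer input is actually ready.
let mediaQueue = DispatchQueue(label: "mediaInputQueue") // assumed serial queue
var photoIterator = self.selectedPhotosArray.makeIterator()

videoWriterInput.requestMediaDataWhenReady(on: mediaQueue) {
    while videoWriterInput.isReadyForMoreMediaData {
        guard let nextDicData = photoIterator.next() else {
            // No more images: finish the input and the writer.
            videoWriterInput.markAsFinished()
            videoWriter.finishWriting {
                print("finished writing")
            }
            return
        }
        guard let nextImage = nextDicData["img"] as? UIImage else { continue }

        // Same timing math as the question.
        let timeVl = nextDicData["time"] as? Float ?? 0.1
        let framePerSecond = Int64(timeVl * 1000)
        let frameDuration = CMTimeMake(value: framePerSecond, timescale: fps)
        let lastFrameTime = CMTimeMake(value: lastTimeVl, timescale: fps)
        let presentationTime = CMTimeAdd(lastFrameTime, frameDuration)

        // Hypothetical helper: allocates a pixel buffer from the pool and
        // draws `nextImage` into it, as in the question's loop body.
        if let pixelBuffer = makePixelBuffer(from: nextImage) {
            pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)
        }
        lastTimeVl += framePerSecond
    }
    // When the input stops being ready, this closure simply returns;
    // it is called again once the input can accept more data.
}
```

This way no frame is ever dropped for being appended while isReadyForMoreMediaData is false, and no fixed sleep interval has to be guessed.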