Saving an edited video to the camera roll in Swift 3


What function should I use to save an edited video in Swift 3? To capture the whole screen as an image, for example, we use the following:

UIGraphicsBeginImageContext(self.view.frame.size)
if let ctx = UIGraphicsGetCurrentContext() {
    self.view.layer.render(in: ctx)
    let renderedImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
}
But what do we use to save a video that has a drawing on it?

Thanks everyone! A GIF example is attached.


That is a fairly complex task.

I don't want this answer to get any longer than it already is, so I will make two big assumptions:

  • You already have the video file, without the drawing
  • You already have the drawing (as an image)
For video editing, iOS uses the AVFoundation framework, so you need to import it in your class. The editing function should look like this:

//Input are video (AVAsset) and image that you already have
func addOverlayTo(asset: AVAsset, overlayImage:UIImage?) {
    //this object will be our new video. It describes what will be in it
    let mixComposition = AVMutableComposition()
    //we tell our composition that there will be video track in it
    let videoTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)
    //we add our video file to that track
    try! videoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, asset.duration),
                                        of: asset.tracks(withMediaType: AVMediaTypeVideo)[0] ,
                                        at: kCMTimeZero)
    //this object tells how to display our video
    let mainCompositionInst = AVMutableVideoComposition()
    //in iOS, videos are always stored in landscape-right orientation,
    //so to orient and size everything properly we have to look at the asset's transform property
    let size = determineRenderSize(for: asset)
    //these steps are necessary only if our video has multiple layers
    if let overlayImage = overlayImage {
        //create all necessary layers
        let videoLayer = CALayer()
        videoLayer.frame = CGRect(origin: CGPoint(x: 0, y: 0), size: size)
        let parentLayer = CALayer()
        parentLayer.frame = CGRect(origin: CGPoint(x: 0, y: 0), size: size)
        parentLayer.addSublayer(videoLayer)
        let overlayLayer = CALayer()
        overlayLayer.contents = overlayImage.cgImage
        overlayLayer.frame = CGRect(origin: CGPoint(x: 0, y: 0), size: size)
        parentLayer.addSublayer(overlayLayer)
        //lay out the layers properly
        mainCompositionInst.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: parentLayer)
    }
    let mainInstruction = AVMutableVideoCompositionInstruction()
    //this object will rotate our video to the proper orientation
    let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
    layerInstruction.setTransform(videoTrack.preferredTransform, at: kCMTimeZero)
    mainInstruction.layerInstructions = [layerInstruction]
    mainCompositionInst.instructions = [mainInstruction]
    //now we have to fill in all properties of our composition instruction
    //their names are quite informative, so I won't comment much
    mainCompositionInst.renderSize = size
    mainCompositionInst.renderScale = 1.0
    //assuming a standard 30 fps; it's written as 20/600 because videos
    //from the built-in phone camera have a default time scale of 600
    mainCompositionInst.frameDuration = CMTimeMake(20, 600)
    mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration)
    //now we need to save our new video to phone memory
    //this is the object that will do it
    let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)!
    //create a path where our video will be saved
    let documentDirectory = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
    let outputPath = documentDirectory + "/your_file_name.mp4"
    //if a file already exists at this path, the export will fail
    if FileManager.default.fileExists(atPath: outputPath) {
        try! FileManager.default.removeItem(atPath: outputPath)
    }
    exporter.outputURL = URL(fileURLWithPath: outputPath)
    //again a bunch of parameters that have to be filled in; these are pretty standard though
    exporter.outputFileType = AVFileTypeQuickTimeMovie
    exporter.shouldOptimizeForNetworkUse = true
    exporter.videoComposition = mainCompositionInst
    exporter.timeRange = CMTimeRangeMake(kCMTimeZero, asset.duration)
    exporter.exportAsynchronously { () -> Void in
        if exporter.error == nil && exporter.status == .completed {
            print("SAVED!")
        }
        else {
            print(exporter.error!)
        }
    }
}
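Since the question specifically asks about the camera roll: the export above only writes the file into the app's Documents directory. One way to then copy the result into the photo library is the Photos framework. This is a minimal sketch under my own assumptions (the `saveToCameraRoll` helper name is mine, and it assumes `NSPhotoLibraryUsageDescription` is set in Info.plist):

```swift
import Photos

//hypothetical helper: copies an exported video file into the photo library
//requires NSPhotoLibraryUsageDescription in Info.plist and granted permission
func saveToCameraRoll(fileURL: URL) {
    PHPhotoLibrary.shared().performChanges({
        PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: fileURL)
    }, completionHandler: { success, error in
        if success {
            print("Saved to camera roll")
        } else {
            print("Save failed: \(String(describing: error))")
        }
    })
}
```

You could call this from the exporter's completion handler, in the branch where `exporter.status == .completed`, passing `exporter.outputURL!`.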
And the function that determines the orientation:

func determineRenderSize(for asset: AVAsset) -> CGSize {
    let videoTrack = asset.tracks(withMediaType: AVMediaTypeVideo)[0]
    let size = videoTrack.naturalSize
    let txf = videoTrack.preferredTransform
    print("transform is ", txf)
    if (size.height == txf.tx && txf.ty == 0){
        return CGSize(width: size.height, height: size.width) //portrait
    }
    else if (txf.tx == size.width && txf.ty == size.height){
        return size //landscape left
    }
    else if (txf.tx == 0 && txf.ty == size.width){
        return CGSize(width: size.height, height: size.width) //upside down
    }
    else{
        return size //landscape right
    }
}
There are a lot of different parameters here, and explaining them all would take too much space, so to learn more I recommend reading some tutorials on iOS video editing.
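For completeness, calling the function might look like this. The file and image names here are placeholders of my own, not from the original answer:

```swift
//load the source video as an AVAsset and apply the overlay
//"my_video.mp4" and "drawing" are hypothetical resource names for illustration
let videoURL = URL(fileURLWithPath: Bundle.main.path(forResource: "my_video", ofType: "mp4")!)
let asset = AVAsset(url: videoURL)
let drawing = UIImage(named: "drawing")
addOverlayTo(asset: asset, overlayImage: drawing)
```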