
iOS: Adding an overlay to a video in Swift 3


I'm learning AVFoundation and I'm running into a problem when trying to save a video with an overlay image in Swift 3. Using AVMutableComposition I'm able to add the image to the video, but the video ends up zoomed in and doesn't constrain itself to the portrait size the video was shot at. I've tried:

  • Setting the natural size through the AVAssetTrack
  • Constraining the video to a portrait size in the AVMutableVideoComposition render frame
  • Locking the new video's bounds to the recorded video's width and height

The code below works aside from the issue I need help with. The image I'm trying to add covers the entire portrait view and has borders around the edges. The app also only allows portrait orientation.

func processVideoWithWatermark(video: AVURLAsset, watermark: UIImage, completion: @escaping (Bool) -> Void) {

    let composition = AVMutableComposition()
    let asset = AVURLAsset(url: video.url, options: nil)

    let track =  asset.tracks(withMediaType: AVMediaTypeVideo)
    let videoTrack:AVAssetTrack = track[0] as AVAssetTrack
    let timerange = CMTimeRangeMake(kCMTimeZero, asset.duration)

    let compositionVideoTrack:AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID())

    do {
        try compositionVideoTrack.insertTimeRange(timerange, of: videoTrack, at: kCMTimeZero)
        compositionVideoTrack.preferredTransform = videoTrack.preferredTransform
    } catch {
        print(error)
    }

//      let compositionAudioTrack:AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID())
//      
//      for audioTrack in asset.tracks(withMediaType: AVMediaTypeAudio) {
//          do {
//              try compositionAudioTrack.insertTimeRange(audioTrack.timeRange, of: audioTrack, at: kCMTimeZero)
//          } catch {
//              print(error)
//          }
//          
//      }
//      
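    // Layer hierarchy for the overlay: the video is rendered into videolayer,
    // with the watermark image layered on top inside parentlayer.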
    let size = videoTrack.naturalSize

    let watermark = watermark.cgImage
    let watermarklayer = CALayer()
    watermarklayer.contents = watermark
    watermarklayer.frame = CGRect(x: 0, y: 0, width: screenWidth, height: screenHeight)
    watermarklayer.opacity = 1

    let videolayer = CALayer()
    videolayer.frame = CGRect(x: 0, y: 0, width: screenWidth, height: screenHeight)

    let parentlayer = CALayer()
    parentlayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    parentlayer.addSublayer(videolayer)
    parentlayer.addSublayer(watermarklayer)

    let layercomposition = AVMutableVideoComposition()
    layercomposition.frameDuration = CMTimeMake(1, 30)
    layercomposition.renderSize = CGSize(width: screenWidth, height: screenHeight)
    layercomposition.renderScale = 1.0
    layercomposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videolayer, in: parentlayer)

    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, composition.duration)

    let videotrack = composition.tracks(withMediaType: AVMediaTypeVideo)[0] as AVAssetTrack
    let layerinstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videotrack)

    layerinstruction.setTransform(videoTrack.preferredTransform, at: kCMTimeZero)

    instruction.layerInstructions = [layerinstruction]
    layercomposition.instructions = [instruction]

    // Export the composition to a temporary file, then save it to the photo library.
    let filePath = NSTemporaryDirectory() + self.fileName()
    let movieUrl = URL(fileURLWithPath: filePath)

    guard let assetExport = AVAssetExportSession(asset: composition, presetName:AVAssetExportPresetHighestQuality) else {return}
    assetExport.videoComposition = layercomposition
    assetExport.outputFileType = AVFileTypeMPEG4
    assetExport.outputURL = movieUrl

    assetExport.exportAsynchronously(completionHandler: {

        switch assetExport.status {
        case .completed:
            print("success")
            print(video.url)
            self.saveVideoToUserLibrary(fileURL: movieUrl, completion: { (success, error) in
                if success {
                    completion(true)
                } else {
                    completion(false)

                }
            })

            break
        case .cancelled:
            print("cancelled")
            break
        case .exporting:
            print("exporting")
            break
        case .failed:
            print(video.url)
            print("failed: \(assetExport.error!)")
            break
        case .unknown:
            print("unknown")
            break
        case .waiting:
            print("waiting")
            break
        }
    })

}
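
For reference, a call site for this function might look like the sketch below; videoURL and overlayImage are placeholder names, not identifiers from the code above:

    // Hypothetical call site; videoURL and overlayImage are placeholders.
    let sourceAsset = AVURLAsset(url: videoURL, options: nil)
    processVideoWithWatermark(video: sourceAsset, watermark: overlayImage) { success in
        // Runs after the export and the photo-library save have finished.
        print(success ? "watermarked video saved" : "export or save failed")
    }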

The video layer's frame is incorrect if the video layer is supposed to fill the parent layer. You need to set that size to size rather than screenSize (the screenWidth / screenHeight values used above).
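
For clarity, here is a minimal sketch of that change, reusing the variable names from the question; sizing the watermark layer and the render size from the track as well is my own assumption, not something the answer states:

    // Sketch of the fix: size the layers from the video track's natural size
    // rather than the device screen, so the video fills the parent layer without zooming.
    let size = videoTrack.naturalSize

    let videolayer = CALayer()
    videolayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)

    let watermarklayer = CALayer()
    watermarklayer.contents = watermark.cgImage
    // Assumption: the overlay is meant to cover the whole frame.
    watermarklayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    watermarklayer.opacity = 1

    let parentlayer = CALayer()
    parentlayer.frame = CGRect(x: 0, y: 0, width: size.width, height: size.height)
    parentlayer.addSublayer(videolayer)
    parentlayer.addSublayer(watermarklayer)

    // Assumption: the composition's render size likely wants the same dimensions.
    layercomposition.renderSize = size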