
Swift: overlaying an image on a video lowers the video resolution


When I overlay an image on a video, the video quality drops dramatically. If I don't set a video composition on the export session, or if I set the export preset to passthrough, the quality is fine (but then, of course, there is no overlay).

I pass in a local .mov video URL to add the overlay, save the video to the camera roll with PHPhotoLibrary, and use a few other helper functions to transform the video and set up its instructions.

It all seems straightforward, but something is killing the video quality:

func merge3(url: URL) {

    let firstAsset = AVAsset(url: url)

    // 1 - Create AVMutableComposition object. This object will hold your AVMutableCompositionTrack instances.
    let mixComposition = AVMutableComposition()

    // 2 - Create two video tracks
    guard let firstTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.video,
                                                          preferredTrackID: Int32(kCMPersistentTrackID_Invalid)) else {
      return
    }
    do {
      try firstTrack.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: firstAsset.duration),
                                     of: firstAsset.tracks(withMediaType: AVMediaType.video)[0],
                                     at: CMTime.zero)
    } catch {
      print("Failed to load first track")
      return
    }

    let s = UIScreen.main.bounds

    let imglogo = UIImage(named: "django")?.scaleImageToSize(newSize: CGSize(width: 250, height: 125))
    let imglayer = CALayer()
    imglayer.contents = imglogo?.cgImage
    imglayer.frame = CGRect(x: s.width / 2 - 125, y: s.height / 2 - 67.5, width: 250, height: 125)
    imglayer.opacity = 1.0

    let videolayer = CALayer()
    videolayer.frame = CGRect(x: 0, y: 0, width: s.width, height: s.height)

    let parentlayer = CALayer()
    parentlayer.frame = CGRect(x: 0, y: 0, width: s.width, height: s.height)
    parentlayer.addSublayer(videolayer)
    parentlayer.addSublayer(imglayer)

    // 2.1
    let mainInstruction = AVMutableVideoCompositionInstruction()
    mainInstruction.timeRange = CMTimeRangeMake(start: CMTime.zero,
                                                duration: firstAsset.duration)

    let layercomposition = AVMutableVideoComposition()
    layercomposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
    layercomposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videolayer, in: parentlayer)
    layercomposition.renderSize = CGSize(width: UIScreen.main.bounds.width, height: UIScreen.main.bounds.height)

    // instruction for watermark
    let instruction = AVMutableVideoCompositionInstruction()
    instruction.timeRange = CMTimeRangeMake(start: CMTime.zero, duration: firstAsset.duration)
    _ = mixComposition.tracks(withMediaType: AVMediaType.video)[0] as AVAssetTrack
    let layerinstruction = VideoHelper.videoCompositionInstruction1(firstTrack, asset: firstAsset)
    instruction.layerInstructions = [layerinstruction]
    layercomposition.instructions = [instruction]

    // 4 - Get path
    guard let documentDirectory = FileManager.default.urls(for: .documentDirectory,
                                                           in: .userDomainMask).first else {
      return
    }
    let dateFormatter = DateFormatter()
    dateFormatter.dateStyle = .long
    dateFormatter.timeStyle = .short
    let date = dateFormatter.string(from: Date())
    let url = documentDirectory.appendingPathComponent("mergeVideo-\(date).mov")

    // 5 - Create Exporter
    guard let exporter = AVAssetExportSession(asset: mixComposition,
                                              presetName: AVAssetExportPresetHighestQuality) else {
      return
    }
    exporter.outputURL = url
    exporter.outputFileType = AVFileType.mov
    exporter.shouldOptimizeForNetworkUse = true
    exporter.videoComposition = layercomposition

    // 6 - Perform the Export
    exporter.exportAsynchronously() {
      DispatchQueue.main.async {
        self.exportDidFinish(exporter)
      }
    }
  }
You are setting your render size with

layercomposition.renderSize = CGSize(width: UIScreen.main.bounds.width, height: UIScreen.main.bounds.height)
when it should be

layercomposition.renderSize = yourAsset.tracks(withMediaType: AVMediaType.video)[0].naturalSize
The first line sets the resolution to the screen size instead of the actual size of the original video. The second one correctly sets it to the original video's resolution.


Think of it this way: you don't want your resolution to be the size of your screen, which would be far too small. You want it to be the size of the original video, or of some standard video format.
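One wrinkle the answer doesn't spell out: for portrait footage, `naturalSize` comes back in landscape terms, with the rotation stored in the track's `preferredTransform`. A hedged sketch (not from the answer) of deriving the render size from the source track, accounting for that:

```swift
import AVFoundation
import CoreGraphics

// Hedged sketch: derive the render size from the source track rather than the
// screen. Portrait videos report a landscape naturalSize plus a rotation in
// preferredTransform, so apply the transform before reading the dimensions.
func renderSize(for asset: AVAsset) -> CGSize {
    guard let track = asset.tracks(withMediaType: .video).first else { return .zero }
    let transformed = track.naturalSize.applying(track.preferredTransform)
    return CGSize(width: abs(transformed.width), height: abs(transformed.height))
}

// Usage, with the names from the question's merge3(url:):
// layercomposition.renderSize = renderSize(for: firstAsset)
```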

Comments:

- Not entirely sure, but have you tried setting `videoComposition.renderSize = CGSizeMake(someWidth, someHeight)` for the `AVAssetExportSession`? It could also be that your .mov's own resolution is taking over; what is it? If neither of those helps, can we get the resolution of the source video and of the final video?
- @impression7vx I'll grab those resolutions, but by setting `layercomposition.renderSize` to the screen bounds, am I not already setting the video composition's render size?
- @impression7vx Ah, the resolution is being cut down: the original video is 900x1200 and the exported video is 414x736. I'll start digging into what is cutting it. If you have any ideas, I'd appreciate them.
- I know your problem! Posting a solution.
- Hahaha, for a moment that made the video layer super tiny. I assume one of my transforms is making the video small somewhere, since the bigger I render the video, the smaller it gets. It's a strange overlay.
- But it will give you the correct resolution of the initial asset. Whatever changes you made to the initial video and however they affected its size, get that ratio and multiply the sizes by it.
- You are a hero among my friends. Thank you, haha.
- Just experience with AVAssets :)
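The "get the ratio and multiply" idea from the comments could be sketched like this, assuming (as in `merge3` above) that the overlay and video layers were originally laid out against the screen bounds and that the 250x125 logo size is kept:

```swift
import UIKit
import AVFoundation

// Hedged sketch of the ratio idea from the comments: once renderSize comes
// from the asset instead of the screen, the CALayer frames that were laid out
// against the screen bounds must be scaled up by the same ratio.
func scaleLayers(for asset: AVAsset,
                 videoLayer: CALayer,
                 logoLayer: CALayer,
                 parentLayer: CALayer) {
    guard let track = asset.tracks(withMediaType: .video).first else { return }
    let render = track.naturalSize
    let scale = render.width / UIScreen.main.bounds.width  // assumes width drives the layout

    let logoSize = CGSize(width: 250 * scale, height: 125 * scale)  // 250x125 as in merge3
    logoLayer.frame = CGRect(x: (render.width - logoSize.width) / 2,
                             y: (render.height - logoSize.height) / 2,
                             width: logoSize.width,
                             height: logoSize.height)
    videoLayer.frame = CGRect(origin: .zero, size: render)
    parentLayer.frame = CGRect(origin: .zero, size: render)
}
```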
static func orientationFromTransform(_ transform: CGAffineTransform)
    -> (orientation: UIImage.Orientation, isPortrait: Bool) {
      var assetOrientation = UIImage.Orientation.up
      var isPortrait = false
      if transform.a == 0 && transform.b == 1.0 && transform.c == -1.0 && transform.d == 0 {
        assetOrientation = .right
        isPortrait = true
      } else if transform.a == 0 && transform.b == -1.0 && transform.c == 1.0 && transform.d == 0 {
        assetOrientation = .left
        isPortrait = true
      } else if transform.a == 1.0 && transform.b == 0 && transform.c == 0 && transform.d == 1.0 {
        assetOrientation = .up
      } else if transform.a == -1.0 && transform.b == 0 && transform.c == 0 && transform.d == -1.0 {
        assetOrientation = .down
      }
      return (assetOrientation, isPortrait)
  }
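A hedged usage sketch for the helper above, assuming it lives on the `VideoHelper` type referenced in the question: feed a track's `preferredTransform` in to detect portrait footage, and swap width and height for the render size when needed.

```swift
import AVFoundation
import UIKit

// Hedged sketch: use orientationFromTransform to pick a render size that
// respects the footage's orientation. A 90-degree rotation in the transform
// (b == 1, c == -1) maps to .right and isPortrait == true.
func renderSizeRespectingOrientation(of asset: AVAsset) -> CGSize {
    guard let track = asset.tracks(withMediaType: .video).first else { return .zero }
    let (_, isPortrait) = VideoHelper.orientationFromTransform(track.preferredTransform)
    let size = track.naturalSize
    return isPortrait ? CGSize(width: size.height, height: size.width) : size
}
```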