iOS: Rendering video with CIFilters in a CALayer hierarchy

Tags: ios, avfoundation, cifilter, avasset, core-video

In my iOS app's UI, I display a complex hierarchy of CALayers. One of these layers is an AVPlayerLayer that displays a video with CIFilters applied in real time, using AVVideoComposition(asset:applyingCIFiltersWithHandler:).
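For reference, here is roughly how that live-preview setup looks (videoURL and the noir filter are placeholders standing in for my actual asset and filter chain):

import AVFoundation
import CoreImage

let asset = AVAsset(url: videoURL) // videoURL is a placeholder
let composition = AVVideoComposition(asset: asset) { request in
    // Apply a CIFilter to each frame as it is displayed
    let filtered = request.sourceImage.applyingFilter("CIPhotoEffectNoir")
    request.finish(with: filtered, context: nil)
}
let playerItem = AVPlayerItem(asset: asset)
playerItem.videoComposition = composition
let playerLayer = AVPlayerLayer(player: AVPlayer(playerItem: playerItem))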

Now I want to export this layer composition to a video file. There are two tools in AVFoundation that seem helpful:

A: AVVideoCompositionCoreAnimationTool, which allows rendering a video inside a (possibly animated) CALayer hierarchy (see the sketch after this list)

B: AVVideoComposition(asset:applyingCIFiltersWithHandler:), which I also use in the UI to apply CIFilters to a video asset
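A minimal sketch of tool A, where asset is the source AVAsset and the layer names are placeholders for my real hierarchy:

let videoLayer = CALayer()
let parentLayer = CALayer()
parentLayer.frame = CGRect(x: 0, y: 0, width: 720, height: 1280) // placeholder size
videoLayer.frame = parentLayer.bounds
parentLayer.addSublayer(videoLayer)

let animationComposition = AVMutableVideoComposition(propertiesOf: asset)
// Video frames are rendered into videoLayer; parentLayer (with any
// animations) is composited over it during export
animationComposition.animationTool = AVVideoCompositionCoreAnimationTool(
    postProcessingAsVideoLayer: videoLayer, in: parentLayer)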

However, these two tools cannot be used at the same time: if I start an AVAssetExportSession that combines them, AVFoundation throws an NSInvalidArgumentException:

Expecting video composition to contain only AVCoreImageFilterVideoCompositionInstruction

I tried to work around this limitation in the following ways:

Workaround 1

1. Set up the export using AVAssetReader and AVAssetWriter.

2. Get the sample buffers from the asset reader and apply the CIFilter, saving the result in a CGImage.

3. Set the CGImage as the contents of the video layer in the layer hierarchy. The layer hierarchy now looks like one frame of the final video.

4. For every frame, get the data of a CVPixelBuffer for the asset writer using CVPixelBufferGetBaseAddress and create a CGContext from that data.

5. Render my layer into that context using CALayer.render(in:) (a condensed sketch of this loop follows the list).
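In condensed form, the loop looks roughly like this (readerOutput, adaptor, filter, ciContext, videoLayer and parentLayer are placeholder names for objects set up elsewhere; error handling and writer-readiness checks are omitted):

while let sampleBuffer = readerOutput.copyNextSampleBuffer(),
      let sourcePixels = CMSampleBufferGetImageBuffer(sampleBuffer) {
    // Step 2: apply the CIFilter and materialize the result as a CGImage
    filter.setValue(CIImage(cvPixelBuffer: sourcePixels), forKey: kCIInputImageKey)
    guard let output = filter.outputImage,
          let cgImage = ciContext.createCGImage(output, from: output.extent) else { continue }

    // Step 3: push the frame into the layer hierarchy
    videoLayer.contents = cgImage

    // Step 4: wrap a writer pixel buffer in a CGContext
    var pixelBuffer: CVPixelBuffer?
    CVPixelBufferPoolCreatePixelBuffer(nil, adaptor.pixelBufferPool!, &pixelBuffer)
    guard let buffer = pixelBuffer else { continue }
    CVPixelBufferLockBaseAddress(buffer, [])
    guard let ctx = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                              width: CVPixelBufferGetWidth(buffer),
                              height: CVPixelBufferGetHeight(buffer),
                              bitsPerComponent: 8,
                              bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                              space: CGColorSpaceCreateDeviceRGB(),
                              bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
                                  | CGBitmapInfo.byteOrder32Little.rawValue) else { continue }

    // Step 5: render the whole layer tree into the writer's buffer (CPU work)
    parentLayer.render(in: ctx)
    CVPixelBufferUnlockBaseAddress(buffer, [])

    let time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    adaptor.append(buffer, withPresentationTime: time)
}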

This setup works, but it is very slow: exporting a 5-second video can sometimes take a minute. The CoreGraphics calls look like the bottleneck here, and I suppose that is because this approach composites on the CPU.

Workaround 2

Another approach could work in two passes: first, save the source video to a file with just the filters applied, as in B; then use that video file to embed the video in the layer composition, as in A. However, since it uses two passes, I suppose this is not as efficient as it could be.
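Roughly, the two passes would look like this (sourceAsset, filteredURL, and the noir filter are placeholders):

// Pass 1: bake the CIFilters into an intermediate file (tool B)
let pass1 = AVAssetExportSession(asset: sourceAsset,
                                 presetName: AVAssetExportPresetHighestQuality)!
pass1.videoComposition = AVVideoComposition(asset: sourceAsset) { request in
    request.finish(with: request.sourceImage.applyingFilter("CIPhotoEffectNoir"), context: nil)
}
pass1.outputURL = filteredURL // placeholder
pass1.outputFileType = .mov
pass1.exportAsynchronously {
    // Pass 2: load filteredURL as a new asset, embed it in the layer
    // hierarchy with AVVideoCompositionCoreAnimationTool (tool A),
    // and run a second AVAssetExportSession to the final destination
}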

Summary


What is a good approach to export this video to a file, ideally in a single pass? How can I use CIFilters and AVVideoCompositionCoreAnimationTool at the same time? Is there a native way to set up a pipeline in AVFoundation that combines these tools?

The way to achieve this is by using a custom AVVideoCompositing. This object lets you composite (in this case, apply a CIFilter to) each video frame.

Here is an example implementation that applies the CIPhotoEffectNoir effect to the whole video:

import AVFoundation
import CoreImage

class VideoFilterCompositor: NSObject, AVVideoCompositing {

    // Request BGRA frames so Core Image can read and write them directly
    var sourcePixelBufferAttributes: [String : Any]? = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    var requiredPixelBufferAttributesForRenderContext: [String : Any] = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA]
    private var renderContext: AVVideoCompositionRenderContext?

    func renderContextChanged(_ newRenderContext: AVVideoCompositionRenderContext) {
        renderContext = newRenderContext
    }

    func cancelAllPendingVideoCompositionRequests() {
        // Nothing to cancel: each request is handled synchronously in startRequest
    }

    private let filter = CIFilter(name: "CIPhotoEffectNoir")!
    private let context = CIContext()

    func startRequest(_ asyncVideoCompositionRequest: AVAsynchronousVideoCompositionRequest) {
        // Grab the source frame of the first (and, here, only) video track
        guard let track = asyncVideoCompositionRequest.sourceTrackIDs.first?.int32Value, let frame = asyncVideoCompositionRequest.sourceFrame(byTrackID: track) else {
            asyncVideoCompositionRequest.finish(with: NSError(domain: "VideoFilterCompositor", code: 0, userInfo: nil))
            return
        }
        // Run the frame through the filter and render the result into a
        // fresh pixel buffer from the render context's pool
        filter.setValue(CIImage(cvPixelBuffer: frame), forKey: kCIInputImageKey)
        if let outputImage = filter.outputImage, let outBuffer = renderContext?.newPixelBuffer() {
            context.render(outputImage, to: outBuffer)
            asyncVideoCompositionRequest.finish(withComposedVideoFrame: outBuffer)
        } else {
            asyncVideoCompositionRequest.finish(with: NSError(domain: "VideoFilterCompositor", code: 0, userInfo: nil))
        }
    }

}
If you need different filters at different times, you can use custom objects conforming to AVVideoCompositionInstructionProtocol, which you can get from the AVAsynchronousVideoCompositionRequest.
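For example, a hypothetical FilteredInstruction type (the name is made up here, not part of AVFoundation) could carry the filter for its time range:

// Hypothetical instruction type carrying a filter for its time range
class FilteredInstruction: NSObject, AVVideoCompositionInstructionProtocol {
    let timeRange: CMTimeRange
    let enablePostProcessing = true
    let containsTweening = true
    let requiredSourceTrackIDs: [NSValue]? = nil
    let passthroughTrackID = kCMPersistentTrackID_Invalid
    let filter: CIFilter

    init(filter: CIFilter, timeRange: CMTimeRange) {
        self.filter = filter
        self.timeRange = timeRange
    }
}

// Inside startRequest(_:), pick the filter from the current instruction:
// guard let instruction = asyncVideoCompositionRequest.videoCompositionInstruction
//     as? FilteredInstruction else { ... }
// instruction.filter.setValue(CIImage(cvPixelBuffer: frame), forKey: kCIInputImageKey)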

Next, you need to use this compositor with an AVMutableVideoComposition, like so:
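A minimal sketch of that setup; the render size, frame duration, and instructions array are placeholders to adapt to your asset:

let videoComposition = AVMutableVideoComposition()
videoComposition.customVideoCompositorClass = VideoFilterCompositor.self
videoComposition.renderSize = CGSize(width: 720, height: 1280) // placeholder, match your asset
videoComposition.frameDuration = CMTime(value: 1, timescale: 30) // placeholder, 30 fps
videoComposition.instructions = [instruction] // your instruction(s) covering the asset's duration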


With this, you should be able to export the video using a regular AVAssetExportSession, setting this video composition on it.
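For example (outputURL is a placeholder destination):

let session = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetHighestQuality)!
session.videoComposition = videoComposition
session.outputURL = outputURL // placeholder
session.outputFileType = .mp4
session.exportAsynchronously {
    // Inspect session.status / session.error when done
}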

