iOS: How do I apply a filter to every frame of video in an AVCaptureSession?
I am writing an app that needs to apply a filter to video captured with an AVCaptureSession. The filtered output is written to an output file. I am currently using CIFilter and CIImage to filter each video frame. Here is the code:
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    ...
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
    let options = [kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_420YpCbCr8BiPlanarFullRange]
    let cameraImage = CIImage(cvImageBuffer: pixelBuffer, options: options)

    let filter = CIFilter(name: "CIGaussianBlur")!
    filter.setValue(70.0, forKey: kCIInputRadiusKey)
    filter.setValue(cameraImage, forKey: kCIInputImageKey)
    let result = filter.outputImage!

    var pixBuffer: CVPixelBuffer? = nil
    let fmt = CVPixelBufferGetPixelFormatType(pixelBuffer)
    CVPixelBufferCreate(kCFAllocatorSystemDefault,
                        CVPixelBufferGetWidth(pixelBuffer),
                        CVPixelBufferGetHeight(pixelBuffer),
                        fmt,
                        CVBufferGetAttachments(pixelBuffer, .shouldPropagate),
                        &pixBuffer)
    CVBufferPropagateAttachments(pixelBuffer, pixBuffer!)

    let eaglContext = EAGLContext(api: .openGLES3)!
    eaglContext.isMultiThreaded = true
    let contextOptions = [kCIContextWorkingColorSpace: NSNull(), kCIContextOutputColorSpace: NSNull()]
    let context = CIContext(eaglContext: eaglContext, options: contextOptions)

    CVPixelBufferLockBaseAddress(pixBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    context.render(result, to: pixBuffer!)
    CVPixelBufferUnlockBaseAddress(pixBuffer!, CVPixelBufferLockFlags(rawValue: 0))

    var timeInfo = CMSampleTimingInfo(duration: sampleBuffer.duration,
                                      presentationTimeStamp: sampleBuffer.presentationTimeStamp,
                                      decodeTimeStamp: sampleBuffer.decodeTimeStamp)
    var sampleBuf: CMSampleBuffer? = nil
    CMSampleBufferCreateReadyWithImageBuffer(kCFAllocatorDefault,
                                             pixBuffer!,
                                             sampleBuffer.formatDescription!,
                                             &timeInfo,
                                             &sampleBuf)

    // write to video file
    let ret = assetWriterInput.append(sampleBuf!)
    ...
}
The ret from assetWriterInput.append is always false. What am I doing wrong? Also, the approach I'm using is inefficient: some temporary copies are created along the way. Is it possible to do the filtering in place?

I used almost the same code for the same problem, and I found that something was wrong with the pixel buffer created for rendering:
append(sampleBuffer:)
always returned false, and assetWriter.error was:
Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be
completed" UserInfo={NSUnderlyingError=0x17024ba30 {Error
Domain=NSOSStatusErrorDomain Code=-12780 "(null)"},
NSLocalizedFailureReason=An unknown error occurred (-12780),
NSLocalizedDescription=The operation could not be completed}
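One plausible cause of this -12780 failure (my assumption, not something confirmed in this thread) is that a pixel buffer created with plain CVPixelBufferCreate is not IOSurface-backed, and AVAssetWriterInput can reject such buffers. A sketch of creating a buffer with explicit IOSurface properties; `makeWritablePixelBuffer` is a hypothetical helper name:

```swift
import AVFoundation
import CoreVideo

// Hypothetical helper: create an IOSurface-backed pixel buffer matching the
// source. The key detail is passing kCVPixelBufferIOSurfacePropertiesKey —
// an empty dictionary is enough to request IOSurface backing.
func makeWritablePixelBuffer(matching source: CVPixelBuffer) -> CVPixelBuffer? {
    let attributes: [CFString: Any] = [
        kCVPixelBufferIOSurfacePropertiesKey: [:] as CFDictionary,
        kCVPixelBufferPixelFormatTypeKey: CVPixelBufferGetPixelFormatType(source)
    ]
    var buffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     CVPixelBufferGetWidth(source),
                                     CVPixelBufferGetHeight(source),
                                     CVPixelBufferGetPixelFormatType(source),
                                     attributes as CFDictionary,
                                     &buffer)
    guard status == kCVReturnSuccess, let result = buffer else { return nil }
    // Carry over colorimetry and other attachments from the source frame.
    CVBufferPropagateAttachments(source, result)
    return result
}
```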
They say it is a bug (as described above), and it has already been reported:
But unexpectedly, I found that the problem went away when rendering into the original pixel buffer. See the code below:
let sourcePixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
let sourceImage = CIImage(cvImageBuffer: sourcePixelBuffer)
let filter = CIFilter(name: "CIGaussianBlur", withInputParameters: [kCIInputImageKey: sourceImage])!
let filteredImage = filter.outputImage!
var pixelBuffer: CVPixelBuffer? = nil
let width = CVPixelBufferGetWidth(sourcePixelBuffer)
let height = CVPixelBufferGetHeight(sourcePixelBuffer)
let pixelFormat = CVPixelBufferGetPixelFormatType(sourcePixelBuffer)
let attributes = CVBufferGetAttachments(sourcePixelBuffer, .shouldPropagate)!
CVPixelBufferCreate(nil, width, height, pixelFormat, attributes, &pixelBuffer)
CVBufferPropagateAttachments(sourcePixelBuffer, pixelBuffer!)
var filteredPixelBuffer = pixelBuffer! // this never works
filteredPixelBuffer = sourcePixelBuffer // 0_0
let context = CIContext(options: [kCIContextOutputColorSpace: CGColorSpace(name: CGColorSpace.sRGB)!])
context.render(filteredImage, to: filteredPixelBuffer) // modifying original image buffer here!
let presentationTimestamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
var timing = CMSampleTimingInfo(duration: kCMTimeInvalid, presentationTimeStamp: presentationTimestamp, decodeTimeStamp: kCMTimeInvalid)
var processedSampleBuffer: CMSampleBuffer? = nil
var formatDescription: CMFormatDescription? = nil
CMVideoFormatDescriptionCreateForImageBuffer(nil, filteredPixelBuffer, &formatDescription)
CMSampleBufferCreateReadyWithImageBuffer(nil, filteredPixelBuffer, formatDescription!, &timing, &processedSampleBuffer)
print(assetInput!.append(processedSampleBuffer!))
Of course, we all know we are not supposed to modify a sample buffer, but somehow this approach produces normally processed video. The trick is dirty, and I can't say whether it holds up when there is a preview layer or some concurrent processing routine.

Is it possible for you to modify the sampleBuffer?

I'm not modifying the sampleBuffer right now. It would be great if I could, though — then I wouldn't have to create a new buffer for the filtered output. The docs say we are not allowed to edit it. Try the method mentioned above and see what error you get.
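One way to avoid allocating a fresh buffer for every frame — a sketch of an alternative, not something from this thread — is to append through an AVAssetWriterInputPixelBufferAdaptor and render into buffers recycled from its pixelBufferPool. The function and parameter names below are illustrative; the adaptor, CIContext, and filter are assumed to be set up elsewhere:

```swift
import AVFoundation
import CoreImage

// Sketch: reuse pooled pixel buffers instead of creating one per frame.
func appendFiltered(_ sampleBuffer: CMSampleBuffer,
                    adaptor: AVAssetWriterInputPixelBufferAdaptor,
                    ciContext: CIContext,
                    filter: CIFilter) {
    guard let source = CMSampleBufferGetImageBuffer(sampleBuffer),
          let pool = adaptor.pixelBufferPool else { return }

    filter.setValue(CIImage(cvPixelBuffer: source), forKey: kCIInputImageKey)
    guard let output = filter.outputImage else { return }

    // Borrow a destination buffer from the adaptor's pool; the pool recycles
    // it once the writer has consumed the frame.
    var pooled: CVPixelBuffer?
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pooled)
    guard let destination = pooled else { return }

    ciContext.render(output, to: destination)

    // Appending a raw pixel buffer with a timestamp replaces the manual
    // CMSampleBuffer re-wrapping entirely.
    let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
    if adaptor.assetWriterInput.isReadyForMoreMediaData {
        adaptor.append(destination, withPresentationTime: pts)
    }
}
```

Note that pixelBufferPool is nil until the adaptor has been created with sourcePixelBufferAttributes (including kCVPixelBufferPixelFormatTypeKey, width, and height) and the writer session has started.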