What is the best way to record a video with augmented reality on iOS?

What is the best way to record a video with augmented reality? (adding text, images and logos to the frames from the iPhone/iPad camera)

Previously I tried to figure out how to convert a CMSampleBuffer to a CIImage and then convert it back to a CMSampleBuffer. I got almost everything working, up to appending the result to an AVAssetWriterInput.

But this solution is not good anyway: it eats a lot of CPU while converting the CIImage to a CVPixelBuffer (ciContext.render(ciImage, to: aBuffer)).

So I want to stop here and find some other way to record a video with augmented reality (e.g. dynamically adding (drawing) text into the frames while encoding the video into an mp4 file).

Here is what I tried and don't want to use anymore:
// convert the original CMSampleBuffer to CIImage,
// combine multiple `CIImage`s into one (adding augmented reality -
// text or some additional images)
let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
let ciimage: CIImage = CIImage(cvPixelBuffer: pixelBuffer)

var outputImage: CIImage?
let images: [CIImage] = [ciimage, ciimageSec!] // add all the CIImages you'd like to combine
for image in images {
    outputImage = outputImage == nil ? image : image.composited(over: outputImage!)
}

// allocate this class variable once
if pixelBufferNew == nil {
    CVPixelBufferCreate(kCFAllocatorSystemDefault,
                        CVPixelBufferGetWidth(pixelBuffer),
                        CVPixelBufferGetHeight(pixelBuffer),
                        kCVPixelFormatType_32BGRA, nil, &pixelBufferNew)
}

// convert the CIImage to a CVPixelBuffer
let ciContext = CIContext(options: nil)
if let aBuffer = pixelBufferNew {
    ciContext.render(outputImage!, to: aBuffer) // >>> IT EATS A LOT OF CPU <<<
}

// convert the new CVPixelBuffer to a new CMSampleBuffer
var sampleTime = CMSampleTimingInfo()
sampleTime.duration = CMSampleBufferGetDuration(sampleBuffer)
sampleTime.presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
sampleTime.decodeTimeStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer)

var videoInfo: CMVideoFormatDescription? = nil
CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pixelBufferNew!, &videoInfo)

var oBuf: CMSampleBuffer?
CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixelBufferNew!, true, nil, nil, videoInfo!, &sampleTime, &oBuf)

/*
 try to append the new CMSampleBuffer to a file (.mp4) using
 AVAssetWriter & AVAssetWriterInput... (I ran into errors with it; the original buffer
 from func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer,
 from connection: AVCaptureConnection) works fine)
 */
Now I'll answer my own question.

It is better to use an Objective-C++ class (.mm), where we can use OpenCV to easily and quickly convert from a CMSampleBuffer to cv::Mat and back to a CMSampleBuffer after processing. Objective-C++ functions can be called from Swift easily.
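The answer only describes the approach in words, so here is a minimal sketch of what such an .mm helper might look like. All names (the function, the file, the overlay text) are assumptions, not from the original answer; it assumes OpenCV is linked and that the camera delivers frames as kCVPixelFormatType_32BGRA:

```objc
// DrawOverlay.mm — hypothetical Objective-C++ helper (names are assumptions)
#import <CoreMedia/CoreMedia.h>
#import <opencv2/imgproc.hpp>

// Draws an overlay directly into the frame's pixel data, so no
// CIImage -> CVPixelBuffer round trip (and no extra copy) is needed.
void drawOverlayInPlace(CMSampleBufferRef sampleBuffer) {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    if (pixelBuffer == NULL) return;

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    // Wrap the BGRA bytes in a cv::Mat without copying them.
    cv::Mat frame((int)CVPixelBufferGetHeight(pixelBuffer),
                  (int)CVPixelBufferGetWidth(pixelBuffer),
                  CV_8UC4,                      // matches kCVPixelFormatType_32BGRA
                  CVPixelBufferGetBaseAddress(pixelBuffer),
                  CVPixelBufferGetBytesPerRow(pixelBuffer));

    // Any OpenCV drawing here mutates the original sample buffer in place.
    cv::putText(frame, "AR overlay", cv::Point(40, 80),
                cv::FONT_HERSHEY_SIMPLEX, 1.5,
                cv::Scalar(255, 255, 255, 255), 3);

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
```

Exposed through a bridging header as `void drawOverlayInPlace(CMSampleBufferRef sampleBuffer);`, it can be called from Swift inside captureOutput(_:didOutput:from:) before appending the (now modified) original sampleBuffer to the AVAssetWriterInput, which avoids rebuilding a CMSampleBuffer entirely.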