iOS: How do I record a video of changing images in real time without using the camera?
I already asked a question but got no answer, so maybe I need to make it simpler. My Google searches have turned up nothing: how do I capture changing images to video in real time without using the camera?

With captureOutput I would get a CMSampleBuffer, which I could turn into a CVPixelBuffer. My AVAssetWriterInput's mediaType is set to video, but I think it expects compressed video. I'm also unclear whether the AVAssetWriterInput's expectsMediaDataInRealTime property should be set to true.

It seems like this should be simple, but everything I've tried makes my AVAssetWriter's status fail. Here is my latest attempt; it still fails:
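For context, writing uncompressed pixel buffers usually goes through an AVAssetWriterInputPixelBufferAdaptor: the writer does the compression itself, so you hand it raw frames and give the input your desired codec settings. Below is a minimal setup sketch under that assumption; outputURL, videoWidth, and videoHeight are placeholder names, not from the original code:

```swift
import AVFoundation

// Sketch: set up an AVAssetWriter that accepts raw CVPixelBuffers.
// outputURL, videoWidth and videoHeight are assumed to be defined elsewhere.
func makeWriter(outputURL: URL, videoWidth: Int, videoHeight: Int) throws
    -> (AVAssetWriter, AVAssetWriterInputPixelBufferAdaptor) {
    let writer = try AVAssetWriter(outputURL: outputURL, fileType: .mov)

    // The writer compresses for you; these settings describe the output track.
    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: videoWidth,
        AVVideoHeightKey: videoHeight
    ]
    let input = AVAssetWriterInput(mediaType: .video, outputSettings: settings)
    // true because frames arrive live rather than from a pre-existing source
    input.expectsMediaDataInRealTime = true

    // The adaptor accepts raw pixel buffers and feeds them to the input.
    let attributes: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: kCVPixelFormatType_32BGRA,
        kCVPixelBufferWidthKey as String: videoWidth,
        kCVPixelBufferHeightKey as String: videoHeight
    ]
    let adaptor = AVAssetWriterInputPixelBufferAdaptor(
        assetWriterInput: input,
        sourcePixelBufferAttributes: attributes)

    writer.add(input)
    writer.startWriting()
    writer.startSession(atSourceTime: .zero)
    return (writer, adaptor)
}
```

With this setup, each frame is appended via `adaptor.append(_:withPresentationTime:)` once `input.isReadyForMoreMediaData` is true.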
@objc func importLivePreview() {
    guard var importedImage = importedDryCIImage else { return }
    DispatchQueue.main.async {
        // apply filter to camera image
        // this is what makes the CIImage appear that it is changing
        importedImage = self.applyFilterAndReturnImage(ciImage: importedImage, orientation: UIImage.Orientation.right, currentCameraRes: currentCameraRes!)
        if self.videoIsRecording &&
            self.assetWriterPixelBufferInput?.assetWriterInput.isReadyForMoreMediaData == true {
            guard let writer: AVAssetWriter = self.assetWriter, writer.status == .writing else {
                return
            }
            guard let cv: CVPixelBuffer = self.buffer(from: importedImage) else {
                print("CVPixelBuffer could not be created.")
                return
            }
            self.MTLContext?.render(importedImage, to: cv)
            // every frame gets the same constant presentation time here
            self.currentSampleTime = CMTimeMakeWithSeconds(0.1, preferredTimescale: 1000000000)
            guard let currentSampleTime = self.currentSampleTime else {
                return
            }
            let success = self.assetWriterPixelBufferInput?.append(cv, withPresentationTime: currentSampleTime)
            if success == false {
                print("Pixel Buffer input failed")
            }
        }
        guard let MTLView = self.MTLCaptureView else {
            print("MTLCaptureView is not found or nil.")
            return
        }
        // update the MTKView with the changed CIImage so the user can see the changed image
        MTLView.image = importedImage
    }
}
I got it working. The problem was that I wasn't offsetting currentSampleTime, so every frame was appended with the same presentation time. This example doesn't compute a precise offset, but it shows that the offset needs to be added to the previous sample time:
@objc func importLivePreview() {
    guard var importedImage = importedDryCIImage else { return }
    DispatchQueue.main.async {
        // apply filter to camera image
        // this is what makes the CIImage appear that it is changing
        importedImage = self.applyFilterAndReturnImage(ciImage: importedImage, orientation: UIImage.Orientation.right, currentCameraRes: currentCameraRes!)
        if self.videoIsRecording &&
            self.assetWriterPixelBufferInput?.assetWriterInput.isReadyForMoreMediaData == true {
            guard let writer: AVAssetWriter = self.assetWriter, writer.status == .writing else {
                return
            }
            guard let cv: CVPixelBuffer = self.buffer(from: importedImage) else {
                print("CVPixelBuffer could not be created.")
                return
            }
            self.MTLContext?.render(importedImage, to: cv)
            guard let currentSampleTime = self.currentSampleTime else {
                return
            }
            // offset currentSampleTime by adding to the previous sample time
            let sampleTimeOffset = CMTimeMakeWithSeconds(0.1, preferredTimescale: 1000000000)
            self.currentSampleTime = CMTimeAdd(currentSampleTime, sampleTimeOffset)
            print("currentSampleTime = \(String(describing: currentSampleTime))")
            let success = self.assetWriterPixelBufferInput?.append(cv, withPresentationTime: currentSampleTime)
            if success == false {
                print("Pixel Buffer input failed")
            }
        }
        guard let MTLView = self.MTLCaptureView else {
            print("MTLCaptureView is not found or nil.")
            return
        }
        // update the MTKView with the changed CIImage so the user can see the changed image
        MTLView.image = importedImage
    }
}
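For completeness, the writer also has to be finished off when recording stops, or the output file won't be playable. A minimal sketch using the same property names as above (stopRecording itself is a hypothetical method name):

```swift
// Sketch: finish the writer once the last frame has been appended.
func stopRecording() {
    videoIsRecording = false
    assetWriterPixelBufferInput?.assetWriterInput.markAsFinished()
    assetWriter?.finishWriting {
        // check for failure after the asynchronous finish completes
        if self.assetWriter?.status == .failed {
            print("Writing failed: \(String(describing: self.assetWriter?.error))")
        }
    }
}
```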