iOS - Scale and crop CMSampleBufferRef/CVImageBufferRef

I am using AVFoundation and getting sample buffers from an AVCaptureVideoDataOutput. I can write them directly to the videoWriter with:

- (void)writeBufferFrame:(CMSampleBufferRef)sampleBuffer {
    CMTime lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);    
    if(self.videoWriter.status != AVAssetWriterStatusWriting)
    {
        [self.videoWriter startWriting];
        [self.videoWriter startSessionAtSourceTime:lastSampleTime];
    }

    [self.videoWriterInput appendSampleBuffer:sampleBuffer];

}

What I want to do now is crop and scale the image inside the CMSampleBufferRef without converting it into a UIImage or CGImageRef, because that slows down performance.

You could consider using Core Image (iOS 5+).

For scaling, you can let AVFoundation do it for you. See my recent post. Setting the values for the AVVideoWidthKey/AVVideoHeightKey keys will scale images whose dimensions differ. Take a look at those properties. As for cropping, I am not sure you can get AVFoundation to do it for you; you may have to resort to OpenGL or Core Image. There are a couple of good links at the top of this post.
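As a sketch of the output-settings approach the answer describes: the codec, dimensions, and scaling mode below are example assumptions, not values from the original post.

```swift
import AVFoundation

// Sketch: let AVAssetWriterInput scale incoming frames by declaring the
// desired output dimensions in its settings dictionary.
let settings: [String: Any] = [
    AVVideoCodecKey: AVVideoCodecType.h264,
    AVVideoWidthKey: 640,    // frames are scaled to this width
    AVVideoHeightKey: 480,   // ... and this height
    AVVideoScalingModeKey: AVVideoScalingModeResizeAspectFill
]
let videoWriterInput = AVAssetWriterInput(mediaType: .video,
                                          outputSettings: settings)
```

With this in place, `appendSampleBuffer:` can be called with buffers of a different size and the writer resizes them itself.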

If you use vImage, you can work on the buffer data directly, without converting it to any image format.

outImg contains the cropped and scaled image data; the ratio between outWidth and cropWidth sets the scaling factor.

So setting cropX0 = 0, cropY0 = 0 and cropWidth, cropHeight to the original size means no cropping (the whole original image is used), and setting outWidth = cropWidth, outHeight = cropHeight results in no scaling. Note that inBuff.rowBytes should always be the row length of the full source buffer, not the cropped length.
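The vImage code this answer refers to is not shown above; here is a sketch of the approach following the naming in the text (cropX0, cropY0, cropWidth, cropHeight, outWidth, outHeight, outImg). srcBase and srcRowBytes are assumed to come from the locked source CVPixelBuffer, and a BGRA (4 bytes per pixel) format is assumed.

```swift
import Accelerate

// Source buffer: point directly at the first pixel of the crop rectangle.
var inBuff = vImage_Buffer(
    data: srcBase + cropY0 * srcRowBytes + cropX0 * 4,
    height: vImagePixelCount(cropHeight),
    width: vImagePixelCount(cropWidth),
    rowBytes: srcRowBytes)   // always the FULL source row length

// Destination buffer: freshly allocated at the output size.
let outImg = malloc(outWidth * outHeight * 4)!
var outBuff = vImage_Buffer(
    data: outImg,
    height: vImagePixelCount(outHeight),
    width: vImagePixelCount(outWidth),
    rowBytes: outWidth * 4)

// Scale the cropped region into outBuff; outImg now holds the result.
let err = vImageScale_ARGB8888(&inBuff, &outBuff, nil, vImage_Flags(0))
```

Cropping falls out for free here: vImage never sees the pixels outside the crop rectangle, because `data` is offset and `rowBytes` still strides over the full source rows.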

Note: I did not notice that the original question also asks about scaling. Either way, for those who simply need to crop a CMSampleBuffer, here is a solution.

The buffer is simply an array of pixels, so you can actually process it directly without using vImage. The code is written in Swift, but I think it is easy to find the Objective-C equivalent.

First, make sure your CMSampleBuffer is in BGRA format. If it is not, the preset you are using is probably YUV, which breaks the bytes-per-row value used later.

dataOutput = AVCaptureVideoDataOutput()
dataOutput.videoSettings = [
    String(kCVPixelBufferPixelFormatTypeKey): 
    NSNumber(value: kCVPixelFormatType_32BGRA)
]
Then, when you get the sample buffer:

let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!

CVPixelBufferLockBaseAddress(imageBuffer, .readOnly)

let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
let cropWidth = 640
let cropHeight = 640
let colorSpace = CGColorSpaceCreateDeviceRGB()

let context = CGContext(data: baseAddress, width: cropWidth, height: cropHeight, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue)
// now the cropped image is inside the context. 
// you can convert it back to CVPixelBuffer 
// using CVPixelBufferCreateWithBytes if you want.

CVPixelBufferUnlockBaseAddress(imageBuffer, .readOnly)

// create image
let cgImage: CGImage = context!.makeImage()!
let image = UIImage(cgImage: cgImage)
If you want to crop from a specific location, add the following code:

// calculate start position
let bytesPerPixel = 4
let startPoint = [ "x": 10, "y": 10 ]
let startAddress = baseAddress + startPoint["y"]! * bytesPerRow + startPoint["x"]! * bytesPerPixel

and change baseAddress in the CGContext() call to startAddress. Make sure not to read past the original image's width and height.
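Put together, the context creation with the crop origin might look like this (a sketch; cropWidth, cropHeight, bytesPerRow, and baseAddress are the values from the snippets above, and the origin values are just examples):

```swift
// Offset the base address to the top-left pixel of the crop rectangle,
// then create the context at that address with the cropped dimensions.
// bytesPerRow stays at the FULL buffer's row stride.
let bytesPerPixel = 4
let cropOrigin = (x: 10, y: 10)   // example values
let startAddress = baseAddress! + cropOrigin.y * bytesPerRow
                                + cropOrigin.x * bytesPerPixel
let context = CGContext(data: startAddress,
                        width: cropWidth,
                        height: cropHeight,
                        bitsPerComponent: 8,
                        bytesPerRow: bytesPerRow,
                        space: CGColorSpaceCreateDeviceRGB(),
                        bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue
                                  | CGBitmapInfo.byteOrder32Little.rawValue)
```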

Try this (Swift 3):

func resize(_ destSize: CGSize) -> CVPixelBuffer? {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(self) else { return nil }
        // Lock the image buffer
        CVPixelBufferLockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
        // Get information about the image
        let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
        let bytesPerRow = CGFloat(CVPixelBufferGetBytesPerRow(imageBuffer))
        let height = CGFloat(CVPixelBufferGetHeight(imageBuffer))
        let width = CGFloat(CVPixelBufferGetWidth(imageBuffer))
        var pixelBuffer: CVPixelBuffer?
        let options = [kCVPixelBufferCGImageCompatibilityKey: true,
                       kCVPixelBufferCGBitmapContextCompatibilityKey: true]
        // Center the crop: halve both margins, and convert the horizontal
        // offset from pixels to bytes (4 bytes per BGRA pixel)
        let topMargin = (height - destSize.height) / CGFloat(2)
        let leftMargin = (width - destSize.width) / CGFloat(2)
        let baseAddressStart = Int(bytesPerRow * topMargin + leftMargin * 4)
        let addressPoint = baseAddress!.assumingMemoryBound(to: UInt8.self)
        let status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault, Int(destSize.width), Int(destSize.height), kCVPixelFormatType_32BGRA, &addressPoint[baseAddressStart], Int(bytesPerRow), nil, nil, options as CFDictionary, &pixelBuffer)
        CVPixelBufferUnlockBaseAddress(imageBuffer, CVPixelBufferLockFlags(rawValue: 0))
        if status != kCVReturnSuccess {
            print(status)
            return nil
        }
        return pixelBuffer
    }

Then how do I convert this back to a CMSampleBuffer, or how do I write it out with self.videoWriterInput? CIContext offers methods for rasterizing a CIImage back into a CVPixelBuffer.

I can make it scale automatically, but it keeps complaining that I am out of memory, as you can see in my latest post. The cause seems to be that I keep changing the size.

Hi Sten, I searched the guide for a cropping example but could not find one. Could you give an example of how to crop the buffer directly? vImage has no crop function.

I know this question/answer is old, but how do I export this to a CMSampleBuffer now? Nils, the cropped data is in outImg, so you can use it to create a pixel buffer, for example with CVPixelBufferCreateWithBytes, and then use that to create a CMSampleBuffer. I have added an image instead of the link to the old PDF guide.

"…without converting it into a UIImage or CGImageRef, because that slows down performance." I do "crop and scale the image inside the CMSampleBufferRef without converting it into a UIImage or CGImageRef"; I only save it as a CGImageRef for further use (for example, displaying it on screen). You can do whatever you want with the cropped context.

Hi 黃昱嘉, what is the best way to contact you? I would like to ask a simple question. Thanks!

Great! I used this code with Swift 2.2 to convert 4:3 sample buffers to 16:9. Thank you very much!

How do I convert the CIImage back to a CMSampleBuffer in Swift? I am getting a distorted output video. Do you know why?
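Several of the comments above ask how to get from a CVPixelBuffer back to a CMSampleBuffer. A sketch using the Swift 4+ Core Media API; the timing handling here is an assumption, and in practice you would reuse the source buffer's presentation timestamp:

```swift
import CoreMedia

// Wrap a CVPixelBuffer in a new CMSampleBuffer so it can be appended to
// an AVAssetWriterInput.
func sampleBuffer(from pixelBuffer: CVPixelBuffer,
                  presentationTime: CMTime) -> CMSampleBuffer? {
    // Describe the pixel buffer's format
    var formatDesc: CMVideoFormatDescription?
    CMVideoFormatDescriptionCreateForImageBuffer(
        allocator: kCFAllocatorDefault,
        imageBuffer: pixelBuffer,
        formatDescriptionOut: &formatDesc)
    guard let format = formatDesc else { return nil }

    // Attach the timestamp; duration/decode time can stay invalid for
    // a simple capture-and-write pipeline
    var timing = CMSampleTimingInfo(duration: .invalid,
                                    presentationTimeStamp: presentationTime,
                                    decodeTimeStamp: .invalid)
    var result: CMSampleBuffer?
    CMSampleBufferCreateForImageBuffer(
        allocator: kCFAllocatorDefault,
        imageBuffer: pixelBuffer,
        dataReady: true,
        makeDataReadyCallback: nil,
        refcon: nil,
        formatDescription: format,
        sampleTiming: &timing,
        sampleBufferOut: &result)
    return result
}
```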