iOS - How to convert a CVPixelBuffer to a UIImage?


I'm having some trouble getting a UIImage from a CVPixelBuffer. This is what I'm trying:

CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(imageDataSampleBuffer);
CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, imageDataSampleBuffer, kCMAttachmentMode_ShouldPropagate);
CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(NSDictionary *)attachments];
if (attachments)
    CFRelease(attachments);
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);
if (width && height) { // test to make sure we have valid dimensions
    UIImage *image = [[UIImage alloc] initWithCIImage:ciImage];

    UIImageView *lv = [[UIImageView alloc] initWithFrame:self.view.frame];
    lv.contentMode = UIViewContentModeScaleAspectFill;
    self.lockedView = lv;
    [lv release];
    self.lockedView.image = image;
    [image release];
}
[ciImage release];

height and width are both correctly set to the resolution of the camera. image is created, but it seems to be black (or maybe transparent?). I can't quite figure out where the problem is. Any ideas would be appreciated.

First the obvious stuff that doesn't relate directly to your question: AVCaptureVideoPreviewLayer is the cheapest way to pipe video from either of the cameras into an independent view if the data is coming straight from the camera and you have no immediate plans to modify it. You don't have to do any pushing yourself; the preview layer is directly connected to the AVCaptureSession and updates itself.
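
For illustration, a minimal sketch of that route, assuming an already-configured AVCaptureSession named session (the session and view names here are placeholders):

AVCaptureVideoPreviewLayer *previewLayer =
    [AVCaptureVideoPreviewLayer layerWithSession:session];
// The layer pulls frames from the session by itself; no sample-buffer
// delegate code is needed just to display the feed.
previewLayer.frame = self.view.bounds;
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[self.view.layer addSublayer:previewLayer];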

I have to admit to a lack of confidence about the central question. There's a semantic difference between a CIImage and the other two types of image - a CIImage is a recipe for an image and is not necessarily backed by pixels. It can be something like "take the pixels from here, transform like this, apply this filter, transform like this, merge with this other image, apply this filter". The system doesn't know what a CIImage looks like until you choose to render it. It also doesn't inherently know the appropriate bounds in which to rasterise it.

UIImage purports merely to wrap a CIImage. It doesn't convert it to pixels. Presumably UIImageView should achieve that, but if so then I can't seem to find where you'd supply the appropriate output rectangle.

I've had success just dodging around the issue with:

CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

// Rendering through a CIContext turns the CIImage recipe into real pixels,
// with fromRect: supplying the rasterisation bounds.
CIContext *temporaryContext = [CIContext contextWithOptions:nil];
CGImageRef videoImage = [temporaryContext
                   createCGImage:ciImage
                   fromRect:CGRectMake(0, 0,
                          CVPixelBufferGetWidth(pixelBuffer),
                          CVPixelBufferGetHeight(pixelBuffer))];

UIImage *uiImage = [UIImage imageWithCGImage:videoImage];
CGImageRelease(videoImage);

createCGImage:fromRect: gives an obvious opportunity to specify the output rectangle. I'm sure there's a route through without using a CGImage as an intermediary, so please don't assume this solution is best practice.
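
For what it's worth, one possible CGImage-free route - only a sketch, not necessarily the route hinted at above - relies on the fact that a CIImage-backed UIImage can be drawn, and drawing forces rasterisation:

CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
UIImage *wrapper = [UIImage imageWithCIImage:ciImage];

// Drawing the wrapper into a bitmap context rasterises the CIImage recipe;
// the context's size acts as the output rectangle.
UIGraphicsBeginImageContextWithOptions(wrapper.size, NO, 1.0);
[wrapper drawInRect:CGRectMake(0, 0, wrapper.size.width, wrapper.size.height)];
UIImage *rasterised = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();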

Another way to get a UIImage. It performs about 10 times faster, at least in my case:

int w = (int)CVPixelBufferGetWidth(pixelBuffer);
int h = (int)CVPixelBufferGetHeight(pixelBuffer);
int r = (int)CVPixelBufferGetBytesPerRow(pixelBuffer);
int bytesPerPixel = r/w; // NB: assumes no padding at the end of each row

// The base address is only valid between lock and unlock calls.
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
unsigned char *buffer = CVPixelBufferGetBaseAddress(pixelBuffer);

UIGraphicsBeginImageContext(CGSizeMake(w, h));

CGContextRef c = UIGraphicsGetCurrentContext();

unsigned char* data = CGBitmapContextGetData(c);
if (data != NULL) {
   // Copy the pixel buffer into the bitmap context, channel by channel.
   for(int y = 0; y < h; y++) {
      for(int x = 0; x < w; x++) {
         int offset = bytesPerPixel*((w*y)+x);
         data[offset] = buffer[offset];     // R
         data[offset+1] = buffer[offset+1]; // G
         data[offset+2] = buffer[offset+2]; // B
         data[offset+3] = buffer[offset+3]; // A
      }
   }
}
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();

UIGraphicsEndImageContext();
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

Unless the image data is in some different format, or needs rotation or conversion - I'd recommend not incrementing anything... just slam the data into the context's memory area with memcpy, like so:

//not here... unsigned char *buffer = CVPixelBufferGetBaseAddress(pixelBuffer);

UIGraphicsBeginImageContext(CGSizeMake(w, h));

CGContextRef c = UIGraphicsGetCurrentContext();

void *ctxData = CGBitmapContextGetData(c);

// MUST READ-WRITE LOCK THE PIXEL BUFFER!!!!
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *pxData = CVPixelBufferGetBaseAddress(pixelBuffer);
// NB: a single block copy assumes bytesPerRow == 4 * w (no row padding);
// see the padding-safe variant below.
memcpy(ctxData, pxData, 4 * w * h);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

... and so on...
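
One caveat, echoed in the comments at the end of this page: CVPixelBuffers frequently pad each row, so CVPixelBufferGetBytesPerRow can be larger than 4 * w, and a single 4 * w * h copy would then produce a skewed image. A sketch of a padding-safe variant, assuming the same 32-bit pixel format and the w, h, c and ctxData variables from above:

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *src = CVPixelBufferGetBaseAddress(pixelBuffer);
size_t srcRowBytes = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t dstRowBytes = CGBitmapContextGetBytesPerRow(c);

// Copy row by row so any padding at the end of each source row is skipped.
for (int y = 0; y < h; y++) {
    memcpy((uint8_t *)ctxData + y * dstRowBytes, src + y * srcRowBytes, 4 * w);
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);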

The previous methods led me to a leak of CG raster data. This method of conversion did not leak for me:

@autoreleasepool {

    CGImageRef cgImage = NULL;
    OSStatus res = CreateCGImageFromCVPixelBuffer(pixelBuffer, &cgImage);
    if (res == noErr) {
        UIImage *image = [UIImage imageWithCGImage:cgImage scale:1.0 orientation:UIImageOrientationUp];
        // ... use image here, while the autorelease pool keeps it alive ...
    }
    CGImageRelease(cgImage);
}


    static OSStatus CreateCGImageFromCVPixelBuffer(CVPixelBufferRef pixelBuffer, CGImageRef *imageOut)
    {
        OSStatus err = noErr;
        OSType sourcePixelFormat;
        size_t width, height, sourceRowBytes;
        void *sourceBaseAddr = NULL;
        CGBitmapInfo bitmapInfo;
        CGColorSpaceRef colorspace = NULL;
        CGDataProviderRef provider = NULL;
        CGImageRef image = NULL;

        sourcePixelFormat = CVPixelBufferGetPixelFormatType( pixelBuffer );
        if ( kCVPixelFormatType_32ARGB == sourcePixelFormat )
            bitmapInfo = kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipFirst;
        else if ( kCVPixelFormatType_32BGRA == sourcePixelFormat )
            bitmapInfo = kCGBitmapByteOrder32Little | kCGImageAlphaNoneSkipFirst;
        else
            return -95014; // only uncompressed pixel formats

        sourceRowBytes = CVPixelBufferGetBytesPerRow( pixelBuffer );
        width = CVPixelBufferGetWidth( pixelBuffer );
        height = CVPixelBufferGetHeight( pixelBuffer );

        CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
        sourceBaseAddr = CVPixelBufferGetBaseAddress( pixelBuffer );

        colorspace = CGColorSpaceCreateDeviceRGB();

        // The data provider retains the (locked) pixel buffer; the
        // ReleaseCVPixelBuffer callback below unlocks and releases it
        // once the CGImage no longer needs the bytes.
        CVPixelBufferRetain( pixelBuffer );
        provider = CGDataProviderCreateWithData( (void *)pixelBuffer, sourceBaseAddr, sourceRowBytes * height, ReleaseCVPixelBuffer);
        image = CGImageCreate(width, height, 8, 32, sourceRowBytes, colorspace, bitmapInfo, provider, NULL, true, kCGRenderingIntentDefault);

        if ( err && image ) {
            CGImageRelease( image );
            image = NULL;
        }
        if ( provider ) CGDataProviderRelease( provider );
        if ( colorspace ) CGColorSpaceRelease( colorspace );
        *imageOut = image;
        return err;
    }

    static void ReleaseCVPixelBuffer(void *pixel, const void *data, size_t size)
    {
        CVPixelBufferRef pixelBuffer = (CVPixelBufferRef)pixel;
        CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
        CVPixelBufferRelease( pixelBuffer );
    }
Try this one in Swift.

Swift 4.2:

import VideoToolbox

extension UIImage {
    public convenience init?(pixelBuffer: CVPixelBuffer) {
        var cgImage: CGImage?
        VTCreateCGImageFromCVPixelBuffer(pixelBuffer, nil, &cgImage)

        guard let cgImage = cgImage else {
            return nil
        }

        self.init(cgImage: cgImage)
    }
}
Swift 5:

import VideoToolbox

extension UIImage {
    public convenience init?(pixelBuffer: CVPixelBuffer) {
        var cgImage: CGImage?
        VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &cgImage)

        guard let cgImage = cgImage else {
            return nil
        } 

        self.init(cgImage: cgImage)
    }
}
Note: this only works for RGB pixel buffers, not for grayscale.
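
A hypothetical fallback for one-channel buffers (e.g. kCVPixelFormatType_OneComponent8), sketched in Objective-C to match the earlier answers: wrap the plane in a grayscale bitmap context by hand.

CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
void *base = CVPixelBufferGetBaseAddress(pixelBuffer);
size_t rowBytes = CVPixelBufferGetBytesPerRow(pixelBuffer);
size_t width = CVPixelBufferGetWidth(pixelBuffer);
size_t height = CVPixelBufferGetHeight(pixelBuffer);

// 8 bits per component, one component, no alpha: a plain grayscale bitmap.
CGColorSpaceRef gray = CGColorSpaceCreateDeviceGray();
CGContextRef ctx = CGBitmapContextCreate(base, width, height, 8, rowBytes,
                                         gray, (CGBitmapInfo)kCGImageAlphaNone);
CGImageRef cgImage = CGBitmapContextCreateImage(ctx);
UIImage *image = [UIImage imageWithCGImage:cgImage];

CGImageRelease(cgImage);
CGContextRelease(ctx);
CGColorSpaceRelease(gray);
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);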


The modern solution would be

let image = UIImage(ciImage: CIImage(cvPixelBuffer: YOUR_BUFFER))

As discussed earlier, though, the result wraps a CIImage recipe rather than rendered pixels: it displays fine in a UIImageView, but other uses may require rendering it explicitly through a CIContext first.

- Do you definitely want a CIImage in between, e.g. because you're going to insert some intermediate CIFilters, or could you just use CGBitmapContextCreate -> UIImage?
- For now I just want to display it in a view and see what I'm dealing with. Next I want to play with the pixels. Thanks, I'll give it a try. The reason the previewLayer is no use to me is that I need the higher resolution. And the reason I use a CIImage rather than a JPEG representation is to check whether JPEG compression adds significant artifacts; in fact, if the artifacts are minimal I may well go with JPEG.
- You should use incrementing pointers, that will get you a tiny speed boost. You also need to call CVPixelBufferLockBaseAddress before calling CVPixelBufferGetBaseAddress, and CVPixelBufferUnlockBaseAddress after the data has been copied. Do a single block copy of the data.
- Faster than what?
- Could you please write a Swift version? Is this even possible in Swift?
- @JonathanCichon compared with the CGImageCreate path, roughly 50% more fps on old devices.
- Thanks! But be careful, because rows in a CVPixelBuffer often have padding bytes at the end, i.e. CVPixelBufferGetBytesPerRow may be larger than you expect, and the copied image will then come out completely skewed.
- You should import VideoToolbox.
- I had to change this line in Swift 5: VTCreateCGImageFromCVPixelBuffer(pixelBuffer, options: nil, imageOut: &cgImage)
- This method is rather slow. Is there another way? Could you convert only a selected (x, y, width, height) region of the pixel buffer to an image? I think that would save a lot of resources if it's used frequently in an app.
- This doesn't seem to work for some CVPixelBuffers. VTCreateCGImageFromCVPixelBuffer is more reliable.
- @JohnScalo could you elaborate on that? I've used this approach and it seems to work; I'm wondering under what conditions it fails.
- Can you convert part of the buffer into a UIImage?
- Swift version?
- Sure, let me convert it to Swift for you @PavanK