iPhone: converting a kCVPixelFormatType_420YpCbCr8BiPlanarFullRange frame to a UIImage

Tags: iphone, objective-c, ios, avfoundation

I have an application that captures live video in the kCVPixelFormatType_420YpCbCr8BiPlanarFullRange format in order to process the Y channel. From Apple's documentation:

kCVPixelFormatType_420YpCbCr8BiPlanarFullRange
Bi-Planar Component Y'CbCr 8-bit 4:2:0, full-range (luma = [0,255], chroma = [1,255]). baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrBiPlanar struct.

I want to present some of these frames in a UIViewController. Is there any API that converts to the kCVPixelFormatType_32BGRA format? Can you give some hints on how to adapt this method provided by Apple?

// Create a UIImage from sample buffer data
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer  {
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
                                                 bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer,0);

    // Free up the context and color space
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return (image);
}

Thanks, everyone!

I'm not aware of any accessible built-in way to convert a bi-planar Y/CbCr image to RGB on iOS. However, you should be able to perform the conversion yourself in software, e.g.

uint8_t clamp(int16_t input)
{
    // clamp negative numbers to 0; assumes signed shifts
    // (a valid assumption on iOS)
    input &= ~(input >> 16);

    // clamp numbers greater than 255 to 255; the accumulation
    // of the mask looks odd but is an attempt to avoid
    // pipeline stalls
    uint8_t saturationMask = input >> 8;
    saturationMask |= saturationMask << 4;
    saturationMask |= saturationMask << 2;
    saturationMask |= saturationMask << 1;
    input |= saturationMask;

    return input & 0xff;
}

...

CVPixelBufferLockBaseAddress(imageBuffer, 0);

size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);

uint8_t *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
CVPlanarPixelBufferInfo_YCbCrBiPlanar *bufferInfo = (CVPlanarPixelBufferInfo_YCbCrBiPlanar *)baseAddress;

NSUInteger yOffset = EndianU32_BtoN(bufferInfo->componentInfoY.offset);
NSUInteger yPitch = EndianU32_BtoN(bufferInfo->componentInfoY.rowBytes);

NSUInteger cbCrOffset = EndianU32_BtoN(bufferInfo->componentInfoCbCr.offset);
NSUInteger cbCrPitch = EndianU32_BtoN(bufferInfo->componentInfoCbCr.rowBytes);

uint8_t *rgbBuffer = malloc(width * height * 3);
uint8_t *yBuffer = baseAddress + yOffset;
uint8_t *cbCrBuffer = baseAddress + cbCrOffset;

for(int y = 0; y < height; y++)
{
    uint8_t *rgbBufferLine = &rgbBuffer[y * width * 3];
    uint8_t *yBufferLine = &yBuffer[y * yPitch];
    uint8_t *cbCrBufferLine = &cbCrBuffer[(y >> 1) * cbCrPitch];

    for(int x = 0; x < width; x++)
    {
        // from ITU-R BT.601, rounded to integers
        int16_t y = yBufferLine[x] - 16;
        int16_t cb = cbCrBufferLine[x & ~1] - 128;
        int16_t cr = cbCrBufferLine[x | 1] - 128;

        uint8_t *rgbOutput = &rgbBufferLine[x*3];

        rgbOutput[0] = clamp((298 * y + 409 * cr + 128) >> 8);
        rgbOutput[1] = clamp((298 * y - 100 * cb - 208 * cr + 128) >> 8);
        rgbOutput[2] = clamp((298 * y + 516 * cb + 128) >> 8);
    }

}

CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

Just written directly into this box and untested, but I think I have the cb/cr extraction correct. You'd then use CGBitmapContextCreate with rgbBuffer to create a CGImage, and from that a UIImage.

Most of the implementations I've found (including the previous answer here) won't work if you change the videoOrientation in the AVCaptureConnection (for some reason I don't fully understand, the CVPlanarPixelBufferInfo_YCbCrBiPlanar struct will be empty in that case), so I wrote one that does (most of the code was based on this answer). My implementation also adds an empty alpha channel to the RGB buffer and creates the CGBitmapContext using the kCGImageAlphaNoneSkipLast flag (there's no alpha data, but iOS seems to require 4 bytes per pixel). Here it is:

#define clamp(a) (a > 255 ? 255 : (a < 0 ? 0 : a))

[The rest of this code listing did not survive extraction.]
These other answers with bit-shifting and magic variables are crazy. Here is an alternative approach using the Accelerate framework in Swift 5. It takes a frame from a buffer in the pixel format kCVPixelFormatType_420YpCbCr8BiPlanarFullRange (Bi-Planar Component Y'CbCr 8-bit 4:2:0), converts it to ARGB8888, and produces a UIImage from it. But you could probably modify it to handle any input/output format:

import Accelerate
import CoreGraphics
import CoreMedia
import Foundation
import QuartzCore
import UIKit

func createImage(from sampleBuffer: CMSampleBuffer) -> UIImage? {
    guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return nil
    }

    // Pixel format is Bi-Planar Component Y'CbCr 8-bit 4:2:0, full-range
    // (luma = [0,255], chroma = [1,255]).
    // baseAddr points to a big-endian CVPlanarPixelBufferInfo_YCbCrBiPlanar struct.
    guard CVPixelBufferGetPixelFormatType(imageBuffer) == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange else {
        return nil
    }

    guard CVPixelBufferLockBaseAddress(imageBuffer, .readOnly) == kCVReturnSuccess else {
        return nil
    }
    defer {
        // Be sure to unlock the base address before returning
        CVPixelBufferUnlockBaseAddress(imageBuffer, .readOnly)
    }

    // The 1st plane is luminance, the 2nd plane is chrominance
    guard CVPixelBufferGetPlaneCount(imageBuffer) == 2 else {
        return nil
    }

    // 1st plane
    guard let lumaBaseAddress = CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0) else {
        return nil
    }
    let lumaWidth = CVPixelBufferGetWidthOfPlane(imageBuffer, 0)
    let lumaHeight = CVPixelBufferGetHeightOfPlane(imageBuffer, 0)