iPhone: converting a UIImage to a CMSampleBufferRef


I'm using AVFoundation for video recording, and I have to crop the video to 320x280. I'm getting the CMSampleBufferRef and converting it to a UIImage with the following code:

CGImageRef _cgImage = [self imageFromSampleBuffer:sampleBuffer];
UIImage *_uiImage = [UIImage imageWithCGImage:_cgImage];
CGImageRelease(_cgImage);
_uiImage = [_uiImage resizedImageWithSize:CGSizeMake(320, 280)]; // resizedImageWithSize: is a custom UIImage category method, not UIKit API

CMSampleBufferRef croppedBuffer = /* NEED HELP WITH THIS */

[_videoInput appendSampleBuffer:sampleBuffer]; 
// _videoInput is an AVAssetWriterInput
The imageFromSampleBuffer: method looks like this:

- (CGImageRef) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer // Create a CGImageRef from sample buffer data
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer,0);        // Lock the image buffer

    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);   // Get information about the image
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();

    CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef newImage = CGBitmapContextCreateImage(newContext);
    CGContextRelease(newContext);

    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer,0);
    /* CVBufferRelease(imageBuffer); */  // do not call this!

    return newImage;
}
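A side note on the method above: it assumes the capture output is delivering 32-bit BGRA frames; with a YUV-planar format, the bitmap context creation will fail. A minimal sketch of the matching configuration, assuming videoOutput is your AVCaptureVideoDataOutput:

AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// Ask the capture pipeline for 32-bit BGRA frames so they match the CGBitmapContext above
videoOutput.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]
                                                        forKey:(NSString *)kCVPixelBufferPixelFormatTypeKey];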
Now I have to convert the resized image back into a CMSampleBufferRef so it can be written to the AVAssetWriterInput.

How do I convert a UIImage to a CMSampleBufferRef?


Thanks, everyone.

While you could create your own Core Media sample buffers from scratch, it's probably easier to use an AVAssetWriterInputPixelBufferAdaptor.

You describe the source pixel buffer format in an inputSettings dictionary and pass that to the adaptor's initializer:

// NB: pixelFormat is your source format (e.g. kCVPixelFormatType_32BGRA), and
// 'image' here is the answerer's own wrapper object exposing uncompressedSize and rect
NSMutableDictionary* inputSettingsDict = [NSMutableDictionary dictionary];
[inputSettingsDict setObject:[NSNumber numberWithInt:pixelFormat] forKey:(NSString*)kCVPixelBufferPixelFormatTypeKey];
[inputSettingsDict setObject:[NSNumber numberWithUnsignedInteger:(NSUInteger)(image.uncompressedSize/image.rect.size.height)] forKey:(NSString*)kCVPixelBufferBytesPerRowAlignmentKey];
[inputSettingsDict setObject:[NSNumber numberWithDouble:image.rect.size.width] forKey:(NSString*)kCVPixelBufferWidthKey];
[inputSettingsDict setObject:[NSNumber numberWithDouble:image.rect.size.height] forKey:(NSString*)kCVPixelBufferHeightKey];
[inputSettingsDict setObject:[NSNumber numberWithBool:YES] forKey:(NSString*)kCVPixelBufferCGImageCompatibilityKey];
[inputSettingsDict setObject:[NSNumber numberWithBool:YES] forKey:(NSString*)kCVPixelBufferCGBitmapContextCompatibilityKey];
AVAssetWriterInputPixelBufferAdaptor* pixelBufferAdapter = [[AVAssetWriterInputPixelBufferAdaptor alloc] initWithAssetWriterInput:assetWriterInput sourcePixelBufferAttributes:inputSettingsDict];
Then you can append CVPixelBuffers to the adaptor:

[pixelBufferAdapter appendPixelBuffer:completePixelBuffer withPresentationTime:pixelBufferTime];
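For live capture you would normally reuse buffers from the adaptor's pixel buffer pool rather than allocating a fresh one per frame, and carry over the source frame's timestamp. A minimal sketch, assuming the writer session has already started (the pool is NULL before that) and sampleBuffer is the captured frame from the question:

CVPixelBufferRef pixelBuffer = NULL;
CVReturn result = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferAdapter.pixelBufferPool, &pixelBuffer);
if (result == kCVReturnSuccess && assetWriterInput.readyForMoreMediaData)
{
    // ... render the resized frame into pixelBuffer here ...
    CMTime presentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    [pixelBufferAdapter appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
}
if (pixelBuffer != NULL)
{
    CVPixelBufferRelease(pixelBuffer);
}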
The pixel buffer adaptor accepts CVPixelBuffers, so you have to convert your UIImage to a pixel buffer first: pass the UIImage's CGImage property into a helper like newPixelBufferFromCGImage (one such UIImage-to-CVPixelBufferRef conversion is shown further down).
Here is a function I use in my GPUImage framework to resize an incoming CMSampleBufferRef and place the scaled result in a CMSampleBufferRef that you provide:

void GPUImageCreateResizedSampleBuffer(CVPixelBufferRef cameraFrame, CGSize finalSize, CMSampleBufferRef *sampleBuffer)
{
    // CVPixelBufferCreateWithPlanarBytes for YUV input

    CGSize originalSize = CGSizeMake(CVPixelBufferGetWidth(cameraFrame), CVPixelBufferGetHeight(cameraFrame));

    CVPixelBufferLockBaseAddress(cameraFrame, 0);
    GLubyte *sourceImageBytes =  CVPixelBufferGetBaseAddress(cameraFrame);
    CGDataProviderRef dataProvider = CGDataProviderCreateWithData(NULL, sourceImageBytes, CVPixelBufferGetBytesPerRow(cameraFrame) * originalSize.height, NULL);
    CGColorSpaceRef genericRGBColorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef cgImageFromBytes = CGImageCreate((int)originalSize.width, (int)originalSize.height, 8, 32, CVPixelBufferGetBytesPerRow(cameraFrame), genericRGBColorspace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst, dataProvider, NULL, NO, kCGRenderingIntentDefault);

    GLubyte *imageData = (GLubyte *) calloc(1, (int)finalSize.width * (int)finalSize.height * 4);

    CGContextRef imageContext = CGBitmapContextCreate(imageData, (int)finalSize.width, (int)finalSize.height, 8, (int)finalSize.width * 4, genericRGBColorspace,  kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextDrawImage(imageContext, CGRectMake(0.0, 0.0, finalSize.width, finalSize.height), cgImageFromBytes);
    CGImageRelease(cgImageFromBytes);
    CGContextRelease(imageContext);
    CGColorSpaceRelease(genericRGBColorspace);
    CGDataProviderRelease(dataProvider);
    CVPixelBufferUnlockBaseAddress(cameraFrame, 0);   // balance the lock taken above

    CVPixelBufferRef pixel_buffer = NULL;
    CVPixelBufferCreateWithBytes(kCFAllocatorDefault, finalSize.width, finalSize.height, kCVPixelFormatType_32BGRA, imageData, finalSize.width * 4, stillImageDataReleaseCallback, NULL, NULL, &pixel_buffer);
    CMVideoFormatDescriptionRef videoInfo = NULL;
    CMVideoFormatDescriptionCreateForImageBuffer(NULL, pixel_buffer, &videoInfo);

    // NB: timing here is hard-coded at 30 fps; real code would carry over the
    // source frame's presentation timestamp instead
    CMTime frameTime = CMTimeMake(1, 30);
    CMSampleTimingInfo timing = {frameTime, frameTime, kCMTimeInvalid};

    CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pixel_buffer, YES, NULL, NULL, videoInfo, &timing, sampleBuffer);
    CFRelease(videoInfo);
    CVPixelBufferRelease(pixel_buffer);
}
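The function references stillImageDataReleaseCallback, which GPUImage defines elsewhere; its job is simply to free the malloc'd pixel bytes once the CVPixelBuffer is finished with them. A minimal version looks like this:

void stillImageDataReleaseCallback(void *releaseRefCon, const void *baseAddress)
{
    free((void *)baseAddress);   // release the bytes backing the pixel buffer
}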
Not all of these steps are required just to create a CMSampleBufferRef, but as weichsel pointed out, all you actually need for encoding video is the CVPixelBufferRef.
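For example, called from your capture delegate, usage might look like this (hypothetical; _videoInput is the AVAssetWriterInput from the question):

CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
CMSampleBufferRef resizedBuffer = NULL;
GPUImageCreateResizedSampleBuffer(cameraFrame, CGSizeMake(320.0, 280.0), &resizedBuffer);
if (resizedBuffer != NULL)
{
    [_videoInput appendSampleBuffer:resizedBuffer];
    CFRelease(resizedBuffer);   // the create function hands back a +1 reference
}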


However, if what you really want to do here is crop video and record it, round-tripping through UIImage will be a very slow way to do it. Instead, may I suggest using something like my GPUImage framework: capture video with a GPUImageVideoCamera input (or GPUImageMovie, if you're cropping a previously recorded movie), feed it into a GPUImageCropFilter, and send the result to a GPUImageMovieWriter. That way the video never touches Core Graphics, and hardware acceleration is used as much as possible. It will be far faster than the approach you describe above.
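A rough sketch of that pipeline, assuming movieURL points at a writable output file (the preset and crop region here are only illustrative; GPUImageCropFilter takes a region in normalized 0.0-1.0 coordinates):

GPUImageVideoCamera *videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
GPUImageCropFilter *cropFilter = [[GPUImageCropFilter alloc] initWithCropRegion:CGRectMake(0.0, 0.0, 1.0, 0.875)];
GPUImageMovieWriter *movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL size:CGSizeMake(320.0, 280.0)];

// camera -> crop -> movie writer, all processed on the GPU
[videoCamera addTarget:cropFilter];
[cropFilter addTarget:movieWriter];

[videoCamera startCameraCapture];
[movieWriter startRecording];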

Converting a UIImage (img) to a CVPixelBufferRef:

- (CVPixelBufferRef)CVPixelBufferRefFromUiImage:(UIImage *)img {

    CGSize size = img.size;
    CGImageRef image = [img CGImage];

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options, &pxbuffer);

    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8, 4*size.width, rgbColorSpace, kCGImageAlphaPremultipliedFirst);
    NSParameterAssert(context);

    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);

    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}
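Hypothetical usage with the adaptor from the first answer; note that the method returns a pixel buffer with a +1 retain count, so the caller is responsible for releasing it (resizedImage and presentationTime stand in for your scaled UIImage and the frame's timestamp):

CVPixelBufferRef pixelBuffer = [self CVPixelBufferRefFromUiImage:resizedImage];
if (pixelBuffer != NULL)
{
    [pixelBufferAdapter appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
    CVPixelBufferRelease(pixelBuffer);
}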
A corrected version of that buffer and context setup:

CVPixelBufferRef pxbuffer = NULL;
CGImageRef image = [img CGImage];

// Initialize the CVPixelBuffer
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image), CGImageGetHeight(image), kCVPixelFormatType_32ARGB, NULL, &pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

// Lock before asking for the base address, then query the real geometry from
// the CGImageRef and CVPixelBufferRef instead of hard-coding it
CVPixelBufferLockBaseAddress(pxbuffer, 0);
CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pxbuffer), CGImageGetWidth(image), CGImageGetHeight(image), CGImageGetBitsPerComponent(image), CVPixelBufferGetBytesPerRow(pxbuffer), CGColorSpaceCreateDeviceRGB(), kCGImageAlphaNoneSkipFirst);
// ... draw the image, release the context and color space, and unlock as in the full method above
Make sure you take BitsPerComponent and BytesPerRow from the CGImageRef and the CVPixelBufferRef respectively: CGImageGetBitsPerComponent(image) and CVPixelBufferGetBytesPerRow(pxbuffer).


In many places I've seen people hard-code constants for these values, and if they're wrong you end up with a distorted image.

Comments:

Please add some explanation to your answer. Unfortunately, the documentation for this AVAssetWriterInputPixelBufferAdaptor is virtually nonexistent and poorly written, as with 99% of what Apple writes.

While this works for frames read from an array, it doesn't work for a live video stream. So how do we add text or draw lines using just a CVPixelBuffer?