iOS AVAssetWriter - pixel buffers for overlaying images


I can successfully create a movie from a single still image. However, I am also given an array of smaller images that I need to overlay on top of the background image. I tried repeating the frame-appending process with the asset writer, but I ran into errors because you cannot write again to a frame that has already been written.

So I assume you have to fully composite the entire pixel buffer for each frame before writing it. But how would you do that?

Here is the code I use that works for rendering one background image:

    CGSize renderSize = CGSizeMake(320, 568);
    NSUInteger fps = 30;

    self.assetWriter = [[AVAssetWriter alloc] initWithURL:
                                  [NSURL fileURLWithPath:videoOutputPath] fileType:AVFileTypeQuickTimeMovie
                                                              error:&error];
    NSParameterAssert(self.assetWriter);

    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:renderSize.width], AVVideoWidthKey,
                                   [NSNumber numberWithInt:renderSize.height], AVVideoHeightKey,
                                   nil];

    AVAssetWriterInput* videoWriterInput = [AVAssetWriterInput
                                            assetWriterInputWithMediaType:AVMediaTypeVideo
                                            outputSettings:videoSettings];


    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
                                                     assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
                                                     sourcePixelBufferAttributes:nil];

    NSParameterAssert(videoWriterInput);
    NSParameterAssert([self.assetWriter canAddInput:videoWriterInput]);
    // NO for offline encoding; YES is only for live capture sources
    videoWriterInput.expectsMediaDataInRealTime = NO;
    [self.assetWriter addInput:videoWriterInput];

    //Start a session:
    [self.assetWriter startWriting];
    [self.assetWriter startSessionAtSourceTime:kCMTimeZero];

    CVPixelBufferRef buffer = NULL;

    NSInteger totalFrames = 90; //3 seconds

    //process the bg image
    int frameCount = 0;

    UIImage* resizedImage = [UIImage resizeImage:self.bgImage size:renderSize];
    buffer = [self pixelBufferFromCGImage:[resizedImage CGImage]];

    BOOL append_ok = YES;
    int j = 0;
    while (append_ok && j < totalFrames) {
        if (adaptor.assetWriterInput.readyForMoreMediaData)  {

            CMTime frameTime = CMTimeMake(frameCount,(int32_t) fps);
            append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
            if(!append_ok){
                NSError *error = self.assetWriter.error;
                if(error!=nil) {
                    NSLog(@"Unresolved error %@,%@.", error, [error userInfo]);
                }
            }
        }
        else {
            printf("adaptor not ready %d, %d\n", frameCount, j);
            [NSThread sleepForTimeInterval:0.1];
        }
        j++;
        frameCount++;
    }
    if (!append_ok) {
        printf("error appending image %d, attempts %d\n", frameCount, j);
    }


    //Finish the session:
    CVPixelBufferRelease(buffer); // balance the create in pixelBufferFromCGImage:
    [videoWriterInput markAsFinished];
    [self.assetWriter finishWritingWithCompletionHandler:^() {
        self.assetWriter = nil;
    }];

- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image {

    CGSize size = CGSizeMake(320,568);

    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;

    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          size.width,
                                          size.height,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef) options,
                                          &pxbuffer);
    if (status != kCVReturnSuccess || pxbuffer == NULL) {
        NSLog(@"Failed to create pixel buffer");
        return NULL;
    }

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    // Use the buffer's actual bytes-per-row: CoreVideo may pad rows beyond 4 * width
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
                                                 size.height, 8,
                                                 CVPixelBufferGetBytesPerRow(pxbuffer),
                                                 rgbColorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);

    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                           CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}

Again, the question is how to create the pixel buffer for a background image plus an array of N small images that will be layered on top of the bg image. The step after that will be overlaying a small video.

You can add the pixel information from your list of images on top of the pixel buffer. This sample code shows how to add BGRA data over an ARGB pixel buffer:

// Try to create a pixel buffer with the image mat
uint8_t* videobuffer = m_imageBGRA.data;


// From image buffer (BGRA) to pixel buffer
CVPixelBufferRef pixelBuffer = NULL;
CVReturn status = CVPixelBufferCreate (NULL, m_width, m_height, kCVPixelFormatType_32ARGB, NULL, &pixelBuffer);
if ((pixelBuffer == NULL) || (status != kCVReturnSuccess))
{
    NSLog(@"Error CVPixelBufferCreate [pixelBuffer=%@][status=%d]", pixelBuffer, status);
    return;
}
else
{
    uint8_t *videobuffertmp = videobuffer;
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    GLubyte *pixelBufferData = (GLubyte *)CVPixelBufferGetBaseAddress(pixelBuffer);

    // Add data for all the pixels in the image
    for( int row=0 ; row<m_height ; ++row )
    {
        for( int col=0 ; col<m_width ; ++col )
        {
            memcpy(&pixelBufferData[0], &videobuffertmp[3], sizeof(uint8_t));       // alpha
            memcpy(&pixelBufferData[1], &videobuffertmp[2], sizeof(uint8_t));       // red
            memcpy(&pixelBufferData[2], &videobuffertmp[1], sizeof(uint8_t));       // green
            memcpy(&pixelBufferData[3], &videobuffertmp[0], sizeof(uint8_t));       // blue
            // Move the buffer pointer to the next pixel
            pixelBufferData += 4*sizeof(uint8_t);
            videobuffertmp  += 4*sizeof(uint8_t);
        }
    }


    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
Once a pixel has been added, we move forward through the buffer with:

// Move the buffer pointer to the next pixel
pixelBufferData += 4*sizeof(uint8_t);
videobuffertmp  += 4*sizeof(uint8_t);
which advances both pointers 4 bytes, i.e. one pixel.

If the overlay images are smaller, you can write them into just a sub-region of the buffer, or use the source alpha value in an `if` to decide whether each pixel should overwrite the destination. For example:

// Add data for all the pixels in the image
for( int row=0 ; row<m_height ; ++row )
{
    for( int col=0 ; col<m_width ; ++col )
    {
        if( videobuffertmp[3] > 10 ) // check alpha channel
        {
            memcpy(&pixelBufferData[0], &videobuffertmp[3], sizeof(uint8_t));       // alpha
            memcpy(&pixelBufferData[1], &videobuffertmp[2], sizeof(uint8_t));       // red
            memcpy(&pixelBufferData[2], &videobuffertmp[1], sizeof(uint8_t));       // green
            memcpy(&pixelBufferData[3], &videobuffertmp[0], sizeof(uint8_t));       // blue
        }
        // Move the buffer pointer to the next pixel
        pixelBufferData += 4*sizeof(uint8_t);
        videobuffertmp  += 4*sizeof(uint8_t);
    }
}