AVAssetWriterInput for making a video from UIImages on iPhone

I am trying the following two approaches to append a UIImage's pixel buffer to an AVAssetWriterInput. Everything looks fine, except that there is no data in the video file. What am I doing wrong?

1) Adaptor class 2) Making
I found that for some reason I needed to append the buffer more than once. The timing in this example, taken from a test app I made, may not be right, but since it works it should give you a good idea.

+ (void)writeImageAsMovie:(UIImage*)image toPath:(NSString*)path size:(CGSize)size duration:(int)duration 
{
    NSError *error = nil;
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
                                  [NSURL fileURLWithPath:path] fileType:AVFileTypeQuickTimeMovie
                                                              error:&error];
    NSParameterAssert(videoWriter);

    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:size.width], AVVideoWidthKey,
                                   [NSNumber numberWithInt:size.height], AVVideoHeightKey,
                                   nil];
    AVAssetWriterInput* writerInput = [[AVAssetWriterInput
                                        assetWriterInputWithMediaType:AVMediaTypeVideo
                                        outputSettings:videoSettings] retain];

    AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
                                                     assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                     sourcePixelBufferAttributes:nil];
    NSParameterAssert(writerInput);
    NSParameterAssert([videoWriter canAddInput:writerInput]);
    [videoWriter addInput:writerInput];

    //Start a session:
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];

    //Write samples:
    CVPixelBufferRef buffer = [Utils pixelBufferFromCGImage:image.CGImage size:size];
    [adaptor appendPixelBuffer:buffer withPresentationTime:kCMTimeZero];
    [adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMake(duration-1, 2)];

    //Finish the session:
    [writerInput markAsFinished];
    [videoWriter endSessionAtSourceTime:CMTimeMake(duration, 2)];
    [videoWriter finishWriting];
}
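For completeness, a minimal sketch of a call site for this helper, assuming it lives on the same Utils class used in the code above; the image asset name and output size here are hypothetical:

// Hypothetical call site: render one image as a short QuickTime movie.
UIImage *image = [UIImage imageNamed:@"photo"];   // hypothetical image asset
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"out.mov"];
[Utils writeImageAsMovie:image toPath:path size:CGSizeMake(640, 480) duration:10];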
This method is not required, but it is used here as an example of a pixel buffer source:

+ (CVPixelBufferRef) pixelBufferFromCGImage:(CGImageRef)image size:(CGSize)size
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width,
                          size.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef) options, 
                          &pxbuffer);
    status = status; // Silences the unused-variable warning when assertions are compiled out.
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);

    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
                                                 size.height, 8, 4*size.width, rgbColorSpace, 
                                                 kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);

    //CGContextTranslateCTM(context, 0, CGImageGetHeight(image));
    //CGContextScaleCTM(context, 1.0, -1.0);//Flip vertically to account for different origin

    CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), 
                                         CGImageGetHeight(image)), image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);

    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

    return pxbuffer;
}
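Note that the buffer comes back with a +1 retain count (it was made with CVPixelBufferCreate), so under the Create rule the caller owns it and must release it. A minimal sketch of a matching call and release, which is exactly the leak the next answer fixes:

// The caller owns the returned buffer and must release it after appending.
CVPixelBufferRef buffer = [Utils pixelBufferFromCGImage:image.CGImage size:size];
[adaptor appendPixelBuffer:buffer withPresentationTime:kCMTimeZero];
CVPixelBufferRelease(buffer);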

Wait! Although the answer given by @Peter DeWeese is the direction to follow, the code has two huge problems: first, you need to wait while the system is ready to append new media; second, you have a huge memory leak, because the buffer needs to be released after it has been appended to the video writer.

This is true in your case, but even more so in the general case, where you want to loop over multiple frames, as below:

NSInteger i = 0;

for (; i < n; i++) {

    image = allImages[i];

    CVPixelBufferRef buffer = [self pixelBufferFromCGImage:image.CGImage cropFrame:frame];

    // Wait until the writer input is ready for more media data.
    while (adaptor.assetWriterInput.readyForMoreMediaData == FALSE) {
        NSDate *maxDate = [NSDate dateWithTimeIntervalSinceNow:0.1];
        [[NSRunLoop currentRunLoop] runUntilDate:maxDate];
    }
    NSLog(@"Panorama: appending frame %ld out of %ld", (long)i, (long)n);

    // Append the frame at presentation time i/freq seconds.
    [adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMake(i, freq)];

    // Release the buffer to avoid the leak.
    CVPixelBufferRelease(buffer);
}
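Once the loop is done, the writer still has to be shut down cleanly. A minimal sketch, assuming the videoWriter and writerInput from the first answer are in scope; note that finishWritingWithCompletionHandler: superseded the synchronous finishWriting in iOS 6:

// Close the session after all frames have been appended.
[writerInput markAsFinished];
[videoWriter finishWritingWithCompletionHandler:^{
    // Writing completes asynchronously; inspect the status and error here.
    if (videoWriter.status != AVAssetWriterStatusCompleted) {
        NSLog(@"Writing failed: %@", videoWriter.error);
    }
}];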
I had a bit of a problem with this code. It was giving me a distorted image.

Changing:

CGContextRef context = CGBitmapContextCreate(pxdata,
                                             size.width,
                                             size.height,
                                             8,
                                             4 * size.width,
                                             rgbColorSpace,
                                             kCGImageAlphaNoneSkipFirst);
to:

CGContextRef context = CGBitmapContextCreate(pxdata,
                                             size.width,
                                             size.height,
                                             8,
                                             CVPixelBufferGetBytesPerRow(pxbuffer),
                                             rgbColorSpace,
                                             kCGImageAlphaNoneSkipFirst);

fixed it. CVPixelBufferCreate may pad each row for alignment, so the buffer's real bytes-per-row has to be queried rather than assumed to be 4 * size.width. Hope that helps.

- Works great. Were both commands wrong? Anyway, do you know how to add audio to the video? I am using AVAudioPlayer.
- You can make an AVMutableComposition and insert the audio and video into the composition. You can use an AVMutableAudioMix if you want fades and such. The composition and audio mix can then be added to an AVPlayerItem for your player (see the sketch below).
- I also found that I had to append the buffer twice; at least, I don't know yet how to make it work without that hack. Does anyone know why it is necessary?
- Will it work without implementing the method above? And what should be passed for the duration parameter?
- @iApple: take the duration as an int, like int duration = 10; :)
- Thanks, thanks, thanks!! I had been having a headache over this for hours. My code ran fine on 3x-scale devices but not on others, and this was the fix.
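To expand on the composition suggestion in the comments above, here is a minimal sketch of merging an audio track into a video with AVMutableComposition; videoURL and audioURL are hypothetical inputs supplied by the caller, and this is an illustration rather than code from the answers:

// Sketch: merge an audio track into a video using AVMutableComposition.
// videoURL and audioURL are hypothetical NSURLs supplied by the caller.
AVURLAsset *videoAsset = [AVURLAsset URLAssetWithURL:videoURL options:nil];
AVURLAsset *audioAsset = [AVURLAsset URLAssetWithURL:audioURL options:nil];

AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoTrack =
    [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                             preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *audioTrack =
    [composition addMutableTrackWithMediaType:AVMediaTypeAudio
                             preferredTrackID:kCMPersistentTrackID_Invalid];

NSError *error = nil;
CMTimeRange range = CMTimeRangeMake(kCMTimeZero, videoAsset.duration);
[videoTrack insertTimeRange:range
                    ofTrack:[[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0]
                     atTime:kCMTimeZero error:&error];
[audioTrack insertTimeRange:range
                    ofTrack:[[audioAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]
                     atTime:kCMTimeZero error:&error];

// The composition can then back an AVPlayerItem (with an optional
// AVMutableAudioMix for fades) or be exported with AVAssetExportSession.
AVPlayerItem *item = [AVPlayerItem playerItemWithAsset:composition];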