Create a 16:9 video instead of 4:3 - AVAssetWriter - iPhone


I am using the code below to make a video from a static 16:9 image using AVAssetWriter. The problem is that, for some reason, the video that is produced is in a 4:3 format.

Can anyone suggest how I can amend the code so that it produces a 16:9 video, or alternatively, how I can convert the 4:3 video to 16:9?

Many thanks.

- (void) createVideoFromStillImage 
{
//Set the size according to the device type (iPhone or iPad).
CGSize size = CGSizeMake(screenWidth, screenHeight);

NSString *betaCompressionDirectory = [NSHomeDirectory() stringByAppendingPathComponent:@"Documents/IntroVideo.mov"];

NSError *error = nil;

unlink([betaCompressionDirectory UTF8String]);

//----initialize compression engine
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:betaCompressionDirectory]
                                                       fileType:AVFileTypeQuickTimeMovie
                                                          error:&error];
NSParameterAssert(videoWriter);
if(error)
    NSLog(@"error = %@", [error localizedDescription]);

NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys: AVVideoCodecH264,AVVideoCodecKey,
                               [NSNumber numberWithInt:size.height], AVVideoWidthKey,
                               [NSNumber numberWithInt:size.width], AVVideoHeightKey, nil];
AVAssetWriterInput *writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];

NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
                                                       [NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, nil];

AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                                                                 sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);

if ([videoWriter canAddInput:writerInput])
    NSLog(@"I can add this input");
else
    NSLog(@"i can't add this input");

[videoWriter addInput:writerInput];

[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];

//CGImageRef theImage = [finishedMergedImage CGImage];
CGImageRef theImage = [introImage CGImage];

//dispatch_queue_t    dispatchQueue = dispatch_queue_create("mediaInputQueue", NULL);
int __block         frame = 0;

//Calculate how much progress % one frame completion represents. Maximum of 75%.
float currentProgress = 0.0;
float progress = (80.0 / kDurationOfIntroOutro);    
//NSLog(@"Progress is %f", progress);

for (int i=0; i<=kDurationOfIntroOutro; i++) {

    //Update our progress view for every frame that is generated.
    [self updateProgressView:currentProgress];
    currentProgress +=progress;

    //NSLog(@"CurrentProgress is %f", currentProgress);

    frame++;
    [NSThread sleepForTimeInterval:0.05]; //Delay to allow buffer to be ready.
    CVPixelBufferRef buffer = (CVPixelBufferRef)[self pixelBufferFromCGImage:theImage size:size];
    if (buffer) {
        if (adaptor.assetWriterInput.readyForMoreMediaData) {
            if (![adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMake(frame, 20)])
                NSLog(@"FAIL");
            else
                NSLog(@"Success:%d", frame);
            CFRelease(buffer);
        }
    }
}

[writerInput markAsFinished];
[videoWriter finishWriting];
[videoWriter release];

//NSLog(@"outside for loop");
//Grab the URL for the video so we can use it later.
NSURL *url = [self applicationDocumentsDirectory:kIntroVideoFileName];
[assetURLArray setObject:url forKey:kIntroVideo];

}

- (CVPixelBufferRef )pixelBufferFromCGImage:(CGImageRef)image size:(CGSize)size
{
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                         [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey, 
                         [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey, nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef) options, &pxbuffer);
// CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, adaptor.pixelBufferPool, &pxbuffer);

NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL); 

CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);

CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8, 4*size.width, rgbColorSpace, kCGImageAlphaPremultipliedFirst);
NSParameterAssert(context);

CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);
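//Note: the draw above uses the image's own dimensions for the target rect, so
//the image is rendered at its native size rather than scaled to fit the buffer.
//If the image dimensions differ from `size`, the result is cropped or partly
//undrawn instead of being rescaled.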

CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);

CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

return pxbuffer;
}
To close this out, I'll restate what I said above: the videoSettings dictionary you're using should take the target dimensions of your video, but you're passing in the dimensions of your view. Unless that's what you want to record, you'll need to change the values passed in for AVVideoWidthKey and AVVideoHeightKey to be the correct output size.

Given that iOS device screens have aspect ratios close to 4:3, this is probably what's producing that aspect ratio in the recorded video.
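For illustration, here is a minimal sketch of that fix, assuming a 1280x720 target purely as an example (as mentioned in the comments below); substitute whatever output dimensions you actually want:

//Use the video's target dimensions, not the screen's.
//1280x720 is just an example 16:9 size.
CGSize videoSize = CGSizeMake(1280.0, 720.0);

NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                               AVVideoCodecH264, AVVideoCodecKey,
                               [NSNumber numberWithInt:videoSize.width], AVVideoWidthKey,
                               [NSNumber numberWithInt:videoSize.height], AVVideoHeightKey, nil];

The same videoSize would then be passed to pixelBufferFromCGImage:size: so that the pixel buffers match the writer's output settings.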

Comments:

"Should your videoSettings be using the screen width and height? Don't you want that to be the width and height of your video? For an iOS device, the screen width and height are roughly a 4:3 aspect ratio."

"Brad - thanks - so I should specify something like 1280x720?"

"I'd use whatever output video size you want. At the least, that's what I have in my code here."

"Brilliant - it works - good catch. Thanks - I owe you (again!). I'll accept it if you post it as an answer."