iOS: cropping an AVAsset video with AVFoundation
I am recording some video with AVCaptureMovieFileOutput. I display the preview layer using AVLayerVideoGravityResizeAspectFill, which zooms in slightly. My problem is that the final video is larger and contains extra image area that did not fit on the screen during preview.

Here are the preview and the resulting video.

Is there a way to specify a CGRect to cut from the video using AVAssetExportSession?
Edit ----

When I apply a CGAffineTransformScale to the AVAssetTrack it zooms into the video, and setting the AVMutableVideoComposition renderSize to view.bounds crops off the ends. Great, there is just one problem left: the width of the video does not stretch to the correct width, it just gets filled with black.
Edit 2 ----

The suggested question/answer is incomplete.

Some of my code:

In my -(void)captureOutput:(AVCaptureFileOutput *)captureOutput didFinishRecordingToOutputFileAtURL:(NSURL *)outputFileURL fromConnections:(NSArray *)connections error:(NSError *)error method I use this to crop and resize the video:
- (void)flipAndSave:(NSURL *)videoURL withCompletionBlock:(void (^)(NSURL *returnURL))completionBlock
{
    AVURLAsset *firstAsset = [AVURLAsset assetWithURL:videoURL];

    // 1 - Create AVMutableComposition object. This object will hold your AVMutableCompositionTrack instances.
    AVMutableComposition *mixComposition = [[AVMutableComposition alloc] init];

    // 2 - Video track
    AVMutableCompositionTrack *firstTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo
                                                                        preferredTrackID:kCMPersistentTrackID_Invalid];
    [firstTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstAsset.duration)
                        ofTrack:[[firstAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0]
                         atTime:kCMTimeZero
                          error:nil];

    // 2.1 - Create AVMutableVideoCompositionInstruction
    AVMutableVideoCompositionInstruction *mainInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    mainInstruction.timeRange = CMTimeRangeMake(CMTimeMakeWithSeconds(0, 600), firstAsset.duration);

    // 2.2 - Create an AVMutableVideoCompositionLayerInstruction for the first track
    AVMutableVideoCompositionLayerInstruction *firstlayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:firstTrack];
    AVAssetTrack *firstAssetTrack = [[firstAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
    UIImageOrientation firstAssetOrientation_ = UIImageOrientationUp;
    BOOL isFirstAssetPortrait_ = NO;
    CGAffineTransform firstTransform = firstAssetTrack.preferredTransform;
    if (firstTransform.a == 0 && firstTransform.b == 1.0 && firstTransform.c == -1.0 && firstTransform.d == 0) {
        firstAssetOrientation_ = UIImageOrientationRight;
        isFirstAssetPortrait_ = YES;
    }
    if (firstTransform.a == 0 && firstTransform.b == -1.0 && firstTransform.c == 1.0 && firstTransform.d == 0) {
        firstAssetOrientation_ = UIImageOrientationLeft;
        isFirstAssetPortrait_ = YES;
    }
    if (firstTransform.a == 1.0 && firstTransform.b == 0 && firstTransform.c == 0 && firstTransform.d == 1.0) {
        firstAssetOrientation_ = UIImageOrientationUp;
    }
    if (firstTransform.a == -1.0 && firstTransform.b == 0 && firstTransform.c == 0 && firstTransform.d == -1.0) {
        firstAssetOrientation_ = UIImageOrientationDown;
    }
    // [firstlayerInstruction setTransform:firstAssetTrack.preferredTransform atTime:kCMTimeZero];
    // [firstlayerInstruction setCropRectangle:self.view.bounds atTime:kCMTimeZero];
    CGFloat scale = [self getScaleFromAsset:firstAssetTrack];
    firstTransform = CGAffineTransformScale(firstTransform, scale, scale);
    [firstlayerInstruction setTransform:firstTransform atTime:kCMTimeZero];

    // 2.4 - Add instructions
    mainInstruction.layerInstructions = [NSArray arrayWithObjects:firstlayerInstruction, nil];
    AVMutableVideoComposition *mainCompositionInst = [AVMutableVideoComposition videoComposition];
    mainCompositionInst.instructions = [NSArray arrayWithObject:mainInstruction];
    mainCompositionInst.frameDuration = CMTimeMake(1, 30);
    // CGSize videoSize = firstAssetTrack.naturalSize;
    CGSize videoSize = self.view.bounds.size;
    BOOL isPortrait_ = [self isVideoPortrait:firstAsset];
    if (isPortrait_) {
        videoSize = CGSizeMake(videoSize.height, videoSize.width);
    }
    NSLog(@"%@", NSStringFromCGSize(videoSize));
    mainCompositionInst.renderSize = videoSize;

    // 3 - Audio track
    AVMutableCompositionTrack *AudioTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeAudio
                                                                        preferredTrackID:kCMPersistentTrackID_Invalid];
    [AudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstAsset.duration)
                        ofTrack:[[firstAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0]
                         atTime:kCMTimeZero
                          error:nil];

    // 4 - Get path
    NSString *outputPath = [[NSString alloc] initWithFormat:@"%@%@", NSTemporaryDirectory(), @"cutoutput.mov"];
    NSURL *outputURL = [[NSURL alloc] initFileURLWithPath:outputPath];
    NSFileManager *manager = [[NSFileManager alloc] init];
    if ([manager fileExistsAtPath:outputPath]) {
        [manager removeItemAtPath:outputPath error:nil];
    }

    // 5 - Create exporter
    AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mixComposition
                                                                      presetName:AVAssetExportPresetHighestQuality];
    exporter.outputURL = outputURL;
    exporter.outputFileType = AVFileTypeQuickTimeMovie;
    exporter.shouldOptimizeForNetworkUse = YES;
    exporter.videoComposition = mainCompositionInst;
    [exporter exportAsynchronouslyWithCompletionHandler:^{
        switch ([exporter status]) {
            case AVAssetExportSessionStatusFailed:
                NSLog(@"Export failed: %@ : %@", [[exporter error] localizedDescription], [exporter error]);
                completionBlock(nil);
                break;
            case AVAssetExportSessionStatusCancelled:
                NSLog(@"Export canceled");
                completionBlock(nil);
                break;
            default: {
                NSURL *outputURL = exporter.outputURL;
                dispatch_async(dispatch_get_main_queue(), ^{
                    completionBlock(outputURL);
                });
                break;
            }
        }
    }];
}
Here is my interpretation of your question: you are capturing video on a device with a 4:3 screen ratio, so your AVCaptureVideoPreviewLayer is 4:3, but the video input device captures in 16:9, so the resulting video is "larger" than what is seen in the preview.

If you are simply looking to crop out the extra pixels not caught by the preview, then check this out. That post shows how to crop a video into a square, but only a few modifications are needed to crop to 4:3 instead. I have tested this; here are the changes I made:

Once you have the AVAssetTrack for the video, you need to calculate a new height.
// we convert the captured height i.e. 1080 to a 4:3 screen ratio and get the new height
CGFloat newHeight = clipVideoTrack.naturalSize.height/3*4;
Then modify these two lines using newHeight:
videoComposition.renderSize = CGSizeMake(clipVideoTrack.naturalSize.height, newHeight);
CGAffineTransform t1 = CGAffineTransformMakeTranslation(clipVideoTrack.naturalSize.height, -(clipVideoTrack.naturalSize.width - newHeight)/2 );
So what we are doing here is setting the renderSize to a 4:3 ratio; the exact dimensions are based on the input device. We then use a CGAffineTransform to translate the video's position so that what we saw in the AVCaptureVideoPreviewLayer is what gets rendered to our file.
Edit: If you want to put it all together and crop the video to your device's screen ratio (3:2, 4:3, 16:9) while taking the video orientation into account, we need to add a few things.

First, here is the revised sample code with a few key alterations:
// output file
NSString *docFolder = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) lastObject];
NSString *outputPath = [docFolder stringByAppendingPathComponent:@"output2.mov"];
if ([[NSFileManager defaultManager] fileExistsAtPath:outputPath])
    [[NSFileManager defaultManager] removeItemAtPath:outputPath error:nil];

// input file
AVAsset *asset = [AVAsset assetWithURL:outputFileURL];
AVMutableComposition *composition = [AVMutableComposition composition];
[composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];

// input clip
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];

// crop clip to screen ratio
UIInterfaceOrientation orientation = [self orientationForTrack:asset];
BOOL isPortrait = (orientation == UIInterfaceOrientationPortrait || orientation == UIInterfaceOrientationPortraitUpsideDown) ? YES : NO;
CGFloat complimentSize = [self getComplimentSize:videoTrack.naturalSize.height];
CGSize videoSize;
if (isPortrait) {
    videoSize = CGSizeMake(videoTrack.naturalSize.height, complimentSize);
} else {
    videoSize = CGSizeMake(complimentSize, videoTrack.naturalSize.height);
}
AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.renderSize = videoSize;
videoComposition.frameDuration = CMTimeMake(1, 30);
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(60, 30));

// rotate and position video
AVMutableVideoCompositionLayerInstruction *transformer = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
CGFloat tx = (videoTrack.naturalSize.width - complimentSize) / 2;
if (orientation == UIInterfaceOrientationPortrait || orientation == UIInterfaceOrientationLandscapeRight) {
    // invert translation
    tx *= -1;
}
// t1: rotate and position video since it may have been cropped to screen ratio
CGAffineTransform t1 = CGAffineTransformTranslate(videoTrack.preferredTransform, tx, 0);
// t2/t3: mirror video horizontally
CGAffineTransform t2 = CGAffineTransformTranslate(t1, isPortrait ? 0 : videoTrack.naturalSize.width, isPortrait ? videoTrack.naturalSize.height : 0);
CGAffineTransform t3 = CGAffineTransformScale(t2, isPortrait ? 1 : -1, isPortrait ? -1 : 1);
[transformer setTransform:t3 atTime:kCMTimeZero];
instruction.layerInstructions = [NSArray arrayWithObject:transformer];
videoComposition.instructions = [NSArray arrayWithObject:instruction];

// export ('exporter' has no local declaration here; it is presumably an instance variable)
exporter = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPresetHighestQuality];
exporter.videoComposition = videoComposition;
exporter.outputURL = [NSURL fileURLWithPath:outputPath];
exporter.outputFileType = AVFileTypeQuickTimeMovie;
[exporter exportAsynchronouslyWithCompletionHandler:^(void) {
    NSLog(@"Exporting done!");

    // added export to library for testing
    ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
    if ([library videoAtPathIsCompatibleWithSavedPhotosAlbum:[NSURL fileURLWithPath:outputPath]]) {
        [library writeVideoAtPathToSavedPhotosAlbum:[NSURL fileURLWithPath:outputPath]
                                    completionBlock:^(NSURL *assetURL, NSError *error) {
            NSLog(@"Saved to album");
            if (error) {
                NSLog(@"Error saving to album: %@", error);
            }
        }];
    }
}];
What we have added here is a call that gets the video's new render size based on cropping its dimensions to the screen ratio. Once we shrink the size, we need to translate the position to recenter the video, so we grab its orientation to move it in the proper direction. This solves the off-center issue seen with UIInterfaceOrientationLandscapeLeft. Finally, CGAffineTransform t2 and t3 mirror the video horizontally.
Here are the two new methods that make this happen:
- (CGFloat)getComplimentSize:(CGFloat)size {
    CGRect screenRect = [[UIScreen mainScreen] bounds];
    CGFloat ratio = screenRect.size.height / screenRect.size.width;
    // we have to adjust the ratio for 16:9 screens
    if (ratio == 1.775) ratio = 1.77777777777778;
    return size * ratio;
}
- (UIInterfaceOrientation)orientationForTrack:(AVAsset *)asset {
    UIInterfaceOrientation orientation = UIInterfaceOrientationPortrait;
    NSArray *tracks = [asset tracksWithMediaType:AVMediaTypeVideo];
    if ([tracks count] > 0) {
        AVAssetTrack *videoTrack = [tracks objectAtIndex:0];
        CGAffineTransform t = videoTrack.preferredTransform;
        // Portrait
        if (t.a == 0 && t.b == 1.0 && t.c == -1.0 && t.d == 0) {
            orientation = UIInterfaceOrientationPortrait;
        }
        // PortraitUpsideDown
        if (t.a == 0 && t.b == -1.0 && t.c == 1.0 && t.d == 0) {
            orientation = UIInterfaceOrientationPortraitUpsideDown;
        }
        // LandscapeRight
        if (t.a == 1.0 && t.b == 0 && t.c == 0 && t.d == 1.0) {
            orientation = UIInterfaceOrientationLandscapeRight;
        }
        // LandscapeLeft
        if (t.a == -1.0 && t.b == 0 && t.c == 0 && t.d == -1.0) {
            orientation = UIInterfaceOrientationLandscapeLeft;
        }
    }
    return orientation;
}
These are both pretty straightforward. The only thing to note is that in the getComplimentSize: method we have to manually adjust for the 16:9 ratio, since the iPhone 5+ resolutions are mathematically shy of true 16:9.

AVCaptureVideoDataOutput is a concrete subclass of AVCaptureOutput that you use to process uncompressed frames from the captured video, or to access compressed frames.
An instance of AVCaptureVideoDataOutput produces video frames you can process using other media APIs. You can access the frames with the captureOutput:didOutputSampleBuffer:fromConnection: delegate method.

Configuring a session

You use a preset on the session to specify the image quality and resolution you want. A preset is a constant that identifies one of a number of possible configurations; in some cases the actual configuration is device-specific. These preset values represent the actual values for a variety of devices; see "" and "Capturing Still Images".

If you want to set a size-specific configuration, you should check whether it is supported before setting it:
if ([session canSetSessionPreset:AVCaptureSessionPreset1280x720]) {
    session.sessionPreset = AVCaptureSessionPreset1280x720;
}
else {
    // Handle the failure.
}
Possible duplicate of code I have already looked at. Even the comments on that answer say it is incomplete.

Well, sorry. You may want to show actual code…

Added the code. I wonder if I should use AVCaptureVideoDataOutput instead and crop the frames as they come in. The default iOS camera app only records what is on screen, so it must be possible. I feel like I am missing something simple. Good use of the bounty.

I was about to suggest exactly that.

Wow, thank you. This gets me very close to what I need. There are two problems with your code. 1: This works great on the iPad, but on the iPhone 5 the screen is not 4:3, so the video comes out smaller. Can we use the screen size instead of the ratio? 2: When recording in landscape, the video gets rotated, meaning it always shows up in landscape.

@Darren I have extended my original answer to cover alternate screen ratios and orientation. Don't forget to update this line when you add it: [transformer setTransform:t2 atTime:kCMTimeZero]

Unfortunately, I am not familiar enough with transforms to offer a quick fix either. I tried a few things but could not solve it. My guess is that when we apply the CGAffineTransformTranslate and rotate the video, the subsequent position and scale changes are affected by the new anchor point/matrix. Try fiddling with it, or ask a separate question about it. I am sure someone with a better grasp of transforms can help; my guess is a bit shaky.

Updated my answer. Squashed the off-center issue you were seeing with UIInterfaceOrientationLandscapeLeft, and added a couple of transforms to mirror the video horizontally. That should do it :)
,并在帧进入时进行裁剪。默认的iOS摄像头应用程序只记录屏幕上的内容,所以它必须是可能的。我觉得我错过了一些简单的事情。善用赏金。我正打算提出这个建议。哇,谢谢你。这让我非常接近我所需要的。你的代码有两个问题。1,这在iPad上的效果非常好,但在iPhone5上,屏幕不是4/3,所以视频更小。我们能用屏幕大小代替比例吗?2、在横向录制时,视频会旋转,这意味着它总是横向显示。@Darren我扩展了我的原始答案,以包括交替的屏幕比例和方向。添加时请不要忘记更新此内容:[transformer setTransform:t2 atTime:kCMTimeZero]代码>不幸的是,我对转换也不够熟悉,无法提供快速解决方案。我尝试了一些方法,但没有解决问题。我的猜测是,当我们应用CGAffineTransformTranslate并旋转视频时,以下位置和比例更改会受到新定位点/矩阵的影响。试着胡扯一下,或者问一个原创的问题。我相信有人能更好地理解转换——我的猜测有点不靠谱。更新了我的答案。压扁了您在ui界面方向和scapeleft中看到的偏离中心的问题。并添加了几个变换以水平镜像。应该这样做:)