iOS - Optimizing video frame processing

Tags: ios, optimization, mpmovieplayercontroller, video-processing

In my project, I need to copy a block of every video frame onto a single result image.

Capturing the video frames is not a big deal. It could look like this:

// duration is the movie length in s.
// frameDuration is 1/fps (e.g. at 24 fps, frameDuration = 1/24).
// player is an MPMoviePlayerController
for (NSTimeInterval i=0; i < duration; i += frameDuration) {
    UIImage * image = [player thumbnailImageAtTime:i timeOption:MPMovieTimeOptionExact];

    CGRect destinationRect = [self getDestinationRect:i];
    [self drawImage:image inRect:destinationRect fromRect:originRect];

    // UI feedback
    [self performSelectorOnMainThread:@selector(setProgressValue:) withObject:[NSNumber numberWithFloat:(float)(i / duration)] waitUntilDone:NO]; // progress fraction computed from the loop variables actually in scope
}
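The memory warnings described below usually come from how the drawing helper manages its intermediate images rather than from the capture loop itself. As a minimal sketch (assuming a long-lived bitmap context stored in an instance variable, here called `resultContext`, which is not in the original post), `drawImage:inRect:fromRect:` could release each cropped CGImage as soon as it is drawn:

```objc
// Hypothetical helper, written for manual reference counting (MRC) as in the
// surrounding code. `resultContext` is an assumed CGBitmapContextRef created
// once (with CGBitmapContextCreate) and reused for every frame.
- (void)drawImage:(UIImage *)image inRect:(CGRect)rect fromRect:(CGRect)fromRect {
    @autoreleasepool {
        // Crop the block out of the frame...
        CGImageRef block = CGImageCreateWithImageInRect(image.CGImage, fromRect);
        if (block) {
            // ...paint it into the shared result context...
            CGContextDrawImage(resultContext, rect, block);
            // ...and release it immediately so crops never accumulate.
            CGImageRelease(block);
        }
    }
}
```

Under MRC, the UIImage returned by thumbnailImageAtTime:timeOption: is autoreleased; without a pool drained once per frame, every decoded frame stays alive until the run loop drains its pool, which by itself can explain memory warnings after a few hundred frames.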
The problem appears when I try to implement the drawImage:inRect:fromRect: method.

Here is what I tried:

  • Creating a new CGImage with CGImageCreateWithImageInRect on the video frame, to extract the block
  • Calling CGContextDrawImage on an image context to draw the block
  • But once the video reaches 12-14 seconds, my iPhone 4S issues its third memory warning and crashes. I profiled the app with the Leaks instrument and it found no leaks.


    I am not very comfortable with Quartz. Is there a better, more optimized way to achieve this?

    Finally, I kept the Quartz part of my code and changed the way I retrieve the images.

    Now I use AVFoundation, which is a much faster solution:

    // Creating the tools : 1/ the video asset, 2/ the image generator, 3/ the composition, which helps to retrieve video properties.
    AVURLAsset *asset = [[[AVURLAsset alloc] initWithURL:moviePathURL
                                                 options:[NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:YES], AVURLAssetPreferPreciseDurationAndTimingKey, nil]] autorelease];
    AVAssetImageGenerator *generator = [[[AVAssetImageGenerator alloc] initWithAsset:asset] autorelease];
    generator.appliesPreferredTrackTransform = YES; // if I omit this, the frames are rotated 90° (didn't try in landscape)
    AVVideoComposition * composition = [AVVideoComposition videoCompositionWithPropertiesOfAsset:asset];
    
    // Retrieving the video properties
    NSTimeInterval duration = CMTimeGetSeconds(asset.duration);
    frameDuration = CMTimeGetSeconds(composition.frameDuration);
    CGSize renderSize = composition.renderSize;
    CGFloat totalFrames = round(duration/frameDuration);
    
    // Selecting each frame we want to extract : all of them.
    NSMutableArray * times = [NSMutableArray arrayWithCapacity:round(duration/frameDuration)];
    for (int i=0; i<totalFrames; i++) {
        NSValue *time = [NSValue valueWithCMTime:CMTimeMakeWithSeconds(i*frameDuration, composition.frameDuration.timescale)];
        [times addObject:time];
    }
    
    __block int i = 0;
    AVAssetImageGeneratorCompletionHandler handler = ^(CMTime requestedTime, CGImageRef im, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error){
        if (result == AVAssetImageGeneratorSucceeded) {
            int x = round(CMTimeGetSeconds(requestedTime)/frameDuration);
            CGRect destinationStrip = CGRectMake(x, 0, 1, renderSize.height);
            [self drawImage:im inRect:destinationStrip fromRect:originStrip inContext:context];
        }
        else
            NSLog(@"Ouch: %@", error.description);
        i++;
        [self performSelectorOnMainThread:@selector(setProgressValue:) withObject:[NSNumber numberWithFloat:i/totalFrames] waitUntilDone:NO];
        if(i == totalFrames) {
            [self performSelectorOnMainThread:@selector(performVideoDidFinish) withObject:nil waitUntilDone:NO];
        }
    };
    
    // Launching the process...
    generator.requestedTimeToleranceBefore = kCMTimeZero;
    generator.requestedTimeToleranceAfter = kCMTimeZero;
    generator.maximumSize = renderSize;
    [generator generateCGImagesAsynchronouslyForTimes:times completionHandler:handler];
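    The drawImage:inRect:fromRect:inContext: helper called from the completion handler is not shown in the answer; a sketch of what it might look like, assuming `context` is a CGBitmapContextRef sized totalFrames × renderSize.height that was created elsewhere:

```objc
// Hypothetical helper for the completion handler above. The context is an
// assumed CGBitmapContextRef holding the result image, one column per frame.
- (void)drawImage:(CGImageRef)image inRect:(CGRect)rect fromRect:(CGRect)fromRect inContext:(CGContextRef)ctx {
    CGImageRef strip = CGImageCreateWithImageInRect(image, fromRect);
    if (strip) {
        CGContextDrawImage(ctx, rect, strip);
        CGImageRelease(strip); // release the crop we created
    }
    // Do not release `image` here: the generator only guarantees the
    // CGImageRef for the duration of the handler, and it is not ours to release.
}
```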
    
    In addition to Martin's answer, I would suggest reducing the size of the images obtained through that call; that is, setting the property

    generator.maximumSize = CGSizeMake(width, height);

    to make the images as small as possible, so that they don't take up too much memory.
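    For the strip-copying use case in Martin's answer, the frames only need to be as tall as the destination image, so the cap can be derived from the render size. A sketch (the 480-point maximum height is an arbitrary example value, not from the original answers):

```objc
// Cap the generated frames' height while preserving aspect ratio.
CGFloat maxHeight = 480.0; // arbitrary example cap, tune to the output image
CGSize renderSize = composition.renderSize; // from the AVVideoComposition above
CGFloat scale = MIN((CGFloat)1.0, maxHeight / renderSize.height);
generator.maximumSize = CGSizeMake(renderSize.width * scale,
                                   renderSize.height * scale);
```

    AVAssetImageGenerator treats maximumSize as a bounding box, so the generated images keep their aspect ratio within it.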


    Comments:

    • Hi Martin, the image extraction works perfectly, but if the video lasts more than 30 seconds the app crashes with a memory warning. Do you have another approach, or any change to make here? Thanks.
    • It shouldn't crash on long videos. Check your code; maybe you will find a leak inside the completion handler block. Since the device doesn't have enough memory, you cannot keep all of the extracted images in memory.
    • @iBhavik did you find a solution for this?
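    The crash pattern discussed in these comments (long videos, memory warnings, yet no leaks reported by Instruments) is consistent with per-frame temporaries accumulating rather than a true leak. One mitigation, sketched under the same assumptions as Martin's answer (frameDuration, renderSize, originStrip, and context as defined there), is to keep the handler's footprint constant per frame:

```objc
// Sketch: drain autoreleased temporaries once per generated frame so memory
// use stays flat regardless of video length.
AVAssetImageGeneratorCompletionHandler handler =
    ^(CMTime requestedTime, CGImageRef im, CMTime actualTime,
      AVAssetImageGeneratorResult result, NSError *error) {
    @autoreleasepool {
        if (result == AVAssetImageGeneratorSucceeded) {
            // Draw the strip immediately and never retain `im` past the
            // handler: the generator reclaims it after this block returns.
            int x = round(CMTimeGetSeconds(requestedTime) / frameDuration);
            CGRect destinationStrip = CGRectMake(x, 0, 1, renderSize.height);
            [self drawImage:im inRect:destinationStrip fromRect:originStrip inContext:context];
        }
    }
};
```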