
iOS: OpenGL ES 2.0 to video on iPad/iPhone


Despite all the good information here on StackOverflow, I'm at my wits' end.

I am trying to write an OpenGL render buffer to a video on the iPad 2 (running iOS 4.3). This is exactly what I am attempting:

A) Set up an AVAssetWriterInputPixelBufferAdaptor

  • Create an AVAssetWriter that points to a video file

  • Set up an AVAssetWriterInput with the appropriate settings

  • Set up an AVAssetWriterInputPixelBufferAdaptor to add data to the video file

B) Write data to the video file using the AVAssetWriterInputPixelBufferAdaptor

  • Render OpenGL content to the screen

  • Grab the OpenGL buffer via glReadPixels

  • Create a CVPixelBufferRef from the OpenGL data

  • Append that pixel buffer to the AVAssetWriterInputPixelBufferAdaptor via appendPixelBuffer

However, I am having problems doing this. My current strategy is to set up the AVAssetWriterInputPixelBufferAdaptor when a button is pressed. Once the adaptor is valid, I set a flag to signal my EAGLView to create a pixel buffer and append it to the video file via appendPixelBuffer for a given number of frames.

Right now my code crashes as it tries to append the second pixel buffer, giving me the following error:

    -[__NSCFDictionary appendPixelBuffer:withPresentationTime:]: unrecognized selector sent to instance 0x131db0
    
Here is my AVAsset setup code (much of it is based on Rudy Aramayo's code, which works for normal images but is not set up for textures):
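The original setup listing did not survive on this page. The sketch below is purely a hedged reconstruction: every name in it (videoWriter, writerInput, adaptor, currentTime, frameLength, MOVIE_NAME) is taken from the capture code below, and the output size and frame rate are assumptions.

    // Hypothetical reconstruction, not the question's original listing. Output size
    // and frame rate are assumed; the names match the capture code below.
    NSError *error = nil;
    NSString *moviePath = [NSHomeDirectory() stringByAppendingPathComponent:MOVIE_NAME];
    videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:moviePath]
                                            fileType:AVFileTypeQuickTimeMovie
                                               error:&error];

    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                   AVVideoCodecH264, AVVideoCodecKey,
                                   [NSNumber numberWithInt:640], AVVideoWidthKey,
                                   [NSNumber numberWithInt:480], AVVideoHeightKey, nil];
    writerInput = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                      outputSettings:videoSettings] retain];
    writerInput.expectsMediaDataInRealTime = YES;

    NSDictionary *bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
                                      [NSNumber numberWithInt:kCVPixelFormatType_32BGRA],
                                      kCVPixelBufferPixelFormatTypeKey, nil];
    adaptor = [[AVAssetWriterInputPixelBufferAdaptor
                assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                           sourcePixelBufferAttributes:bufferAttributes] retain];

    [videoWriter addInput:writerInput];
    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];

    currentTime = kCMTimeZero;       // presentation time of the next appended frame
    frameLength = CMTimeMake(1, 30); // assumed 30 fps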

OK, now that my videoWriter and adaptor are set up, I tell my OpenGL renderer to create a pixel buffer for every frame:

    - (void) captureScreenVideo {
    
      if (!writerInput.readyForMoreMediaData) {
        return;
      }
    
      CGSize esize = CGSizeMake(eagl.backingWidth, eagl.backingHeight);
      NSInteger myDataLength = esize.width * esize.height * 4;
      GLuint *buffer = (GLuint *) malloc(myDataLength);
      glReadPixels(0, 0, esize.width, esize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
      CVPixelBufferRef pixel_buffer = NULL;
      CVPixelBufferCreateWithBytes (NULL, esize.width, esize.height, kCVPixelFormatType_32BGRA, buffer, 4 * esize.width, NULL, 0, NULL, &pixel_buffer);
    
      /* DON'T FREE THIS BEFORE USING pixel_buffer! */ 
      //free(buffer);
    
      if(![adaptor appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) {
          NSLog(@"FAIL");
        } else {
          NSLog(@"Success:%d", currentFrame);
          currentTime = CMTimeAdd(currentTime, frameLength);
        }
    
       free(buffer);
       CVPixelBufferRelease(pixel_buffer);
    
    
      currentFrame++;
    
      if (currentFrame > MAX_FRAMES) {
        VIDEO_WRITER_IS_READY = false;
        [writerInput markAsFinished];
        [videoWriter finishWriting];
        [videoWriter release];
    
        [self moveVideoToSavedPhotos]; 
      }
    }
    
Finally, I move the video to the camera roll:

    - (void) moveVideoToSavedPhotos {
      ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
      NSString *localVid = [NSHomeDirectory() stringByAppendingPathComponent:MOVIE_NAME];    
      NSURL* fileURL = [NSURL fileURLWithPath:localVid];
    
      [library writeVideoAtPathToSavedPhotosAlbum:fileURL
                                  completionBlock:^(NSURL *assetURL, NSError *error) {
                                    if (error) {   
                                      NSLog(@"%@: Error saving context: %@", [self class], [error localizedDescription]);
                                    }
                                  }];
      [library release];
    }
    
However, as I said, I am crashing in the call to appendPixelBuffer.


Sorry for dumping so much code, but I really don't know what I'm doing wrong. It seemed like it would be trivial to update a project that writes images to a video, but I'm unable to take the pixel buffer I create via glReadPixels and append it. It's driving me crazy! If anyone has any advice or a working code example of OpenGL --> video, that would be amazing... Thanks!

This looks like bad memory management. The fact that the error says the message was sent to __NSCFDictionary rather than to an AVAssetWriterInputPixelBufferAdaptor is highly suspicious.

Why do you need to retain the adaptor manually? That looks hacky, given that Cocoa Touch normally manages these objects for you.

Sort out the memory issue first. Beyond that, a few things need doing here:

  • Per the documentation, the recommended way to create a pixel buffer seems to be to use CVPixelBufferPoolCreatePixelBuffer on the adaptor.pixelBufferPool.

  • You can then fill that buffer by getting its address with CVPixelBufferLockBaseAddress followed by CVPixelBufferGetBaseAddress, and unlock the memory with CVPixelBufferUnlockBaseAddress before passing it to the adaptor (see the sketch after this list).

  • The pixel buffer can be passed to the input only when writerInput.readyForMoreMediaData is YES. This means "wait until it is ready": a usleep loop until it becomes YES works, but you can also use key-value observing.

  • The rest of the stuff is fine. With this much, the original code produces a playable video file.
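A minimal sketch of those steps, assuming the writerInput, adaptor, and currentTime names from the question (the polling loop is just one option; key-value observing on readyForMoreMediaData works as well):

    // Wait until the writer input can take more data (polling shown here).
    while (!writerInput.readyForMoreMediaData) {
        usleep(10000);
    }

    // Get a buffer from the adaptor's pool rather than allocating one by hand.
    CVPixelBufferRef pixelBuffer = NULL;
    CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, adaptor.pixelBufferPool, &pixelBuffer);

    // Lock, fill, unlock, then hand the buffer to the adaptor.
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *base = CVPixelBufferGetBaseAddress(pixelBuffer);
    // ... copy the frame's pixel data into `base` here ...
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:currentTime];
    CVPixelBufferRelease(pixelBuffer);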

"In case anyone stumbles across this, I finally got it to work... and I understand it a bit better now than I did. I had an error in the code above where I was freeing the data buffer filled by glReadPixels before calling appendPixelBuffer. That is, I thought it was safe to free it since I had already created the CVPixelBufferRef. I've edited the code above so the pixel buffer now has actual data! – Angus Forbes Jun 28 '11 at 5:58"

This is the real cause of your crash; I ran into this problem too. Do not free the buffer even though you have already created the CVPixelBufferRef: CVPixelBufferCreateWithBytes does not copy the bytes, so the pixel buffer still refers to them.
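If you do want to keep allocating the bytes yourself instead of using a pool, one hedged alternative is to let Core Video free them for you via a release callback, so the memory lives exactly as long as the pixel buffer does (the callback name here is illustrative):

    // Called by Core Video once the pixel buffer no longer needs the bytes.
    static void ReleaseGLPixels(void *releaseRefCon, const void *baseAddress) {
        free((void *)baseAddress);
    }

    // ...

    CVPixelBufferRef pixel_buffer = NULL;
    CVPixelBufferCreateWithBytes(NULL, esize.width, esize.height,
                                 kCVPixelFormatType_32BGRA, buffer, 4 * esize.width,
                                 ReleaseGLPixels, // frees `buffer` when the CVPixelBuffer is released
                                 NULL, NULL, &pixel_buffer);

    // ... append pixel_buffer to the adaptor as before ...

    CVPixelBufferRelease(pixel_buffer); // no explicit free(buffer) anywhere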

Based on the code above, I got something similar to this working in my open source framework, so I figured I should contribute my solution here. In my case, I was able to use a pixel buffer pool, as suggested by Srikumar, instead of manually creating a pixel buffer for each frame.

I first configure the movie to be recorded:

    NSError *error = nil;
    
    assetWriter = [[AVAssetWriter alloc] initWithURL:movieURL fileType:AVFileTypeAppleM4V error:&error];
    if (error != nil)
    {
        NSLog(@"Error: %@", error);
    }
    
    
    NSMutableDictionary * outputSettings = [[NSMutableDictionary alloc] init];
    [outputSettings setObject: AVVideoCodecH264 forKey: AVVideoCodecKey];
    [outputSettings setObject: [NSNumber numberWithInt: videoSize.width] forKey: AVVideoWidthKey];
    [outputSettings setObject: [NSNumber numberWithInt: videoSize.height] forKey: AVVideoHeightKey];
    
    
    assetWriterVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
    assetWriterVideoInput.expectsMediaDataInRealTime = YES;
    
    // You need to use BGRA for the video in order to get realtime encoding. I use a color-swizzling shader to line up glReadPixels' normal RGBA output with the movie input's BGRA.
    NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithInt:kCVPixelFormatType_32BGRA], kCVPixelBufferPixelFormatTypeKey,
                                                           [NSNumber numberWithInt:videoSize.width], kCVPixelBufferWidthKey,
                                                           [NSNumber numberWithInt:videoSize.height], kCVPixelBufferHeightKey,
                                                           nil];
    
    assetWriterPixelBufferInput = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:assetWriterVideoInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
    
    [assetWriter addInput:assetWriterVideoInput];
    
Then I use this code to grab each rendered frame using glReadPixels():
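The snippet itself did not survive on this page; what follows is a hedged reconstruction of that pool-based capture, reusing assetWriterPixelBufferInput and videoSize from the setup above, with startTime assumed to be recorded when writing begins.

    CVPixelBufferRef pixel_buffer = NULL;
    CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, [assetWriterPixelBufferInput pixelBufferPool], &pixel_buffer);
    if ((pixel_buffer == NULL) || (status != kCVReturnSuccess))
    {
        return; // bail out early rather than appending a bad buffer (see below)
    }

    CVPixelBufferLockBaseAddress(pixel_buffer, 0);
    GLubyte *pixelBufferData = (GLubyte *)CVPixelBufferGetBaseAddress(pixel_buffer);
    // Read the rendered frame straight into the pool-provided buffer.
    glReadPixels(0, 0, videoSize.width, videoSize.height, GL_RGBA, GL_UNSIGNED_BYTE, pixelBufferData);

    // Give every frame a distinct presentation time (see the note below).
    CMTime frameTime = CMTimeMakeWithSeconds([[NSDate date] timeIntervalSinceDate:startTime], 120);
    if (![assetWriterPixelBufferInput appendPixelBuffer:pixel_buffer withPresentationTime:frameTime])
    {
        NSLog(@"Problem appending pixel buffer at time: %lld", frameTime.value);
    }

    CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);
    CVPixelBufferRelease(pixel_buffer);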

One thing I noticed is that if I tried to append two pixel buffers with the same integer time value (in the timescale provided), the entire recording would fail and the input would never take another pixel buffer. Similarly, if an attempt to pull a pixel buffer from the pool fails, appending one anyway will abort the recording. Hence the early bailout in the code above.

In addition to the above, I use a color-swizzling shader to convert the RGBA rendering in my OpenGL ES scene to BGRA for fast encoding by the AVAssetWriter. With this, I am able to record 640x480 video at 30 FPS on an iPhone 4.
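The shader itself is not reproduced in this thread; a swizzle along these lines (stored here as an Objective-C string constant, with hypothetical names) is the usual shape of such a fragment shader:

    // Illustrative color-swizzling fragment shader: swaps red and blue so the
    // RGBA render output lands in the BGRA movie input without a CPU-side copy.
    NSString *const kColorSwizzlingFragmentShaderString = @""
        "varying highp vec2 textureCoordinate;\n"
        "uniform sampler2D inputImageTexture;\n"
        "void main()\n"
        "{\n"
        "    gl_FragColor = texture2D(inputImageTexture, textureCoordinate).bgra;\n"
        "}\n";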


Again, all of the code for this can be found in the repository, under the GPUImageMovieWriter class.

From the error message

    -[__NSCFDictionary appendPixelBuffer:withPresentationTime:]: unrecognized selector sent to instance 0x131db0

it looks like your pixelBufferAdaptor has been released, and its memory has since been reused by a dictionary.
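One hedged way to avoid that over-release under the manual reference counting the question uses is to keep a retained reference to the adaptor for the whole recording (the property name and attribute dictionary are illustrative):

    // The convenience constructor returns an autoreleased object, so hold on to it.
    @property (nonatomic, retain) AVAssetWriterInputPixelBufferAdaptor *adaptor;

    // ...

    self.adaptor = [AVAssetWriterInputPixelBufferAdaptor
                    assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                               sourcePixelBufferAttributes:bufferAttributes];

    // Only let go of it (self.adaptor = nil) after finishWriting has completed.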

The only code I have ever gotten to work for this is the following.


First, the callback that releases the data backing the CGDataProvider instance:

    static void releaseDataCallback (void *info, const void *data, size_t size) {
        free((void*)data);
    }
    
The CVCGImageUtil class interface and implementation files, respectively:

    @import Foundation;
    @import CoreMedia;
    @import CoreGraphics;
    @import QuartzCore;
    @import CoreImage;
    @import UIKit;
    
    @interface CVCGImageUtil : NSObject
    
    + (CGImageRef)cgImageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer context:(CIContext *)context;
    
    + (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image;
    
    + (CMSampleBufferRef)sampleBufferFromCGImage:(CGImageRef)image;
    
    @end
    
    #import "CVCGImageUtil.h"
    
    @implementation CVCGImageUtil
    
    + (CGImageRef)cgImageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer context:(CIContext *)context
    {
        // CVPixelBuffer to CoreImage
        CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
        image = [image imageByApplyingTransform:CGAffineTransformMakeRotation(M_PI)];
        CGPoint origin = [image extent].origin;
        image = [image imageByApplyingTransform:CGAffineTransformMakeTranslation(-origin.x, -origin.y)];
    
        // CoreImage to CGImage via CoreImage context
        CGImageRef cgImage = [context createCGImage:image fromRect:[image extent]];
    
        // CGImage to UIImage (OPTIONAL)
        //UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
        //return (CGImageRef)uiImage.CGImage;
    
        return cgImage;
    }
    
    + (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
    {
        CGSize frameSize = CGSizeMake(CGImageGetWidth(image),
                                      CGImageGetHeight(image));
        NSDictionary *options =
        [NSDictionary dictionaryWithObjectsAndKeys:
         [NSNumber numberWithBool:YES],
         kCVPixelBufferCGImageCompatibilityKey,
         [NSNumber numberWithBool:YES],
         kCVPixelBufferCGBitmapContextCompatibilityKey,
         nil];
        CVPixelBufferRef pxbuffer = NULL;
    
        CVReturn status =
        CVPixelBufferCreate(
                            kCFAllocatorDefault, frameSize.width, frameSize.height,
                            kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options,
                            &pxbuffer);
        NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
    
        CVPixelBufferLockBaseAddress(pxbuffer, 0);
        void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    
        CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(
                                                     pxdata, frameSize.width, frameSize.height,
                                                     8, CVPixelBufferGetBytesPerRow(pxbuffer),
                                                     rgbColorSpace,
                                                     (CGBitmapInfo)kCGBitmapByteOrder32Little |
                                                     kCGImageAlphaPremultipliedFirst);
    
        CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
                                               CGImageGetHeight(image)), image);
        CGColorSpaceRelease(rgbColorSpace);
        CGContextRelease(context);
    
        CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    
        return pxbuffer;
    }
    
    + (CMSampleBufferRef)sampleBufferFromCGImage:(CGImageRef)image
    {
        CVPixelBufferRef pixelBuffer = [CVCGImageUtil pixelBufferFromCGImage:image];
        CMSampleBufferRef newSampleBuffer = NULL;
        CMSampleTimingInfo timimgInfo = kCMTimingInfoInvalid;
        CMVideoFormatDescriptionRef videoInfo = NULL;
        CMVideoFormatDescriptionCreateForImageBuffer(
                                                     NULL, pixelBuffer, &videoInfo);
        CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault,
                                           pixelBuffer,
                                           true,
                                           NULL,
                                           NULL,
                                           videoInfo,
                                           &timimgInfo,
                                           &newSampleBuffer);
    
        return newSampleBuffer;
    }
    
    @end
    

That answers Part B of your question, to the letter. Part A follows in a separate answer...

I have never failed to read and write a video file to the iPhone with this code; in your implementation, you simply need to replace the calls made in the processFrame method, found at the end of the implementation, with calls like the following:
      // [_context presentRenderbuffer:GL_RENDERBUFFER];
    
    dispatch_async(dispatch_get_main_queue(), ^{
        @autoreleasepool {
            // To capture the output to an OpenGL render buffer...
            NSInteger myDataLength = _backingWidth * _backingHeight * 4;
            GLubyte *buffer = (GLubyte *) malloc(myDataLength);
            glPixelStorei(GL_UNPACK_ALIGNMENT, 8);
            glReadPixels(0, 0, _backingWidth, _backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
    
            // To swap the pixel buffer to a CoreGraphics context (as a CGImage)
            CGDataProviderRef provider;
            CGColorSpaceRef colorSpaceRef;
            CGImageRef imageRef;
            CVPixelBufferRef pixelBuffer;
            @try {
                provider = CGDataProviderCreateWithData(NULL, buffer, myDataLength, &releaseDataCallback);
                int bitsPerComponent = 8;
                int bitsPerPixel = 32;
                int bytesPerRow = 4 * _backingWidth;
                colorSpaceRef = CGColorSpaceCreateDeviceRGB();
                CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
                CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
                imageRef = CGImageCreate(_backingWidth, _backingHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
            } @catch (NSException *exception) {
                NSLog(@"Exception: %@", [exception reason]);
            } @finally {
                if (imageRef) {
                    // To convert the CGImage to a pixel buffer (for writing to a file using AVAssetWriter)
                    pixelBuffer = [CVCGImageUtil pixelBufferFromCGImage:imageRef];
                    // To verify the integrity of the pixel buffer (by converting it back to a CGImage and then displaying it in a layer)
                    imageLayer.contents = (__bridge id)[CVCGImageUtil cgImageFromPixelBuffer:pixelBuffer context:_ciContext];
                }
                CGDataProviderRelease(provider);
                CGColorSpaceRelease(colorSpaceRef);
                CGImageRelease(imageRef);
            }
    
        }
    });
    
The ExportVideo class referenced above (the asset reader/writer setup, ending with the processFrame method) is as follows:
    //
    //  ExportVideo.h
    //  ChromaFilterTest
    //
    //  Created by James Alan Bush on 10/30/16.
    //  Copyright © 2016 James Alan Bush. All rights reserved.
    //
    
    #import <Foundation/Foundation.h>
    #import <AVFoundation/AVFoundation.h>
    #import <CoreMedia/CoreMedia.h>
    #import "GLKitView.h"
    
    @interface ExportVideo : NSObject
    {
        AVURLAsset                           *_asset;
        AVAssetReader                        *_reader;
        AVAssetWriter                        *_writer;
        NSString                             *_outputURL;
        NSURL                                *_outURL;
        AVAssetReaderTrackOutput             *_readerAudioOutput;
        AVAssetWriterInput                   *_writerAudioInput;
        AVAssetReaderTrackOutput             *_readerVideoOutput;
        AVAssetWriterInput                   *_writerVideoInput;
        CVPixelBufferRef                      _currentBuffer;
        dispatch_queue_t                      _mainSerializationQueue;
        dispatch_queue_t                      _rwAudioSerializationQueue;
        dispatch_queue_t                      _rwVideoSerializationQueue;
        dispatch_group_t                      _dispatchGroup;
        BOOL                                  _cancelled;
        BOOL                                  _audioFinished;
        BOOL                                  _videoFinished;
        AVAssetWriterInputPixelBufferAdaptor *_pixelBufferAdaptor;
    }
    
    @property (readwrite, retain) NSURL *url;
    @property (readwrite, retain) GLKitView *renderer;
    
    - (id)initWithURL:(NSURL *)url usingRenderer:(GLKitView *)renderer;
    - (void)startProcessing;
    @end
    
    
    //
    //  ExportVideo.m
    //  ChromaFilterTest
    //
    //  Created by James Alan Bush on 10/30/16.
    //  Copyright © 2016 James Alan Bush. All rights reserved.
    //
    
    #import "ExportVideo.h"
    #import "GLKitView.h"
    
    @implementation ExportVideo
    
    @synthesize url = _url;
    
    - (id)initWithURL:(NSURL *)url usingRenderer:(GLKitView *)renderer {
        NSLog(@"ExportVideo");
        if (!(self = [super init])) {
            return nil;
        }
    
        self.url = url;
        self.renderer = renderer;
    
        NSString *serializationQueueDescription = [NSString stringWithFormat:@"%@ serialization queue", self];
        _mainSerializationQueue = dispatch_queue_create([serializationQueueDescription UTF8String], NULL);
    
        NSString *rwAudioSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw audio serialization queue", self];
        _rwAudioSerializationQueue = dispatch_queue_create([rwAudioSerializationQueueDescription UTF8String], NULL);
    
        NSString *rwVideoSerializationQueueDescription = [NSString stringWithFormat:@"%@ rw video serialization queue", self];
        _rwVideoSerializationQueue = dispatch_queue_create([rwVideoSerializationQueueDescription UTF8String], NULL);
    
        return self;
    }
    
    - (void)startProcessing {
        NSDictionary *inputOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:AVURLAssetPreferPreciseDurationAndTimingKey];
        _asset = [[AVURLAsset alloc] initWithURL:self.url options:inputOptions];
        NSLog(@"URL: %@", self.url);
        _cancelled = NO;
        [_asset loadValuesAsynchronouslyForKeys:[NSArray arrayWithObject:@"tracks"] completionHandler: ^{
            dispatch_async(_mainSerializationQueue, ^{
                if (_cancelled)
                    return;
                BOOL success = YES;
                NSError *localError = nil;
                success = ([_asset statusOfValueForKey:@"tracks" error:&localError] == AVKeyValueStatusLoaded);
                if (success)
                {
                    NSFileManager *fm = [NSFileManager defaultManager];
                    NSString *localOutputPath = [self.url path];
                    if ([fm fileExistsAtPath:localOutputPath])
                        //success = [fm removeItemAtPath:localOutputPath error:&localError];
                        success = TRUE;
                }
                if (success)
                    success = [self setupAssetReaderAndAssetWriter:&localError];
                if (success)
                    success = [self startAssetReaderAndWriter:&localError];
                if (!success)
                    [self readingAndWritingDidFinishSuccessfully:success withError:localError];
            });
        }];
    }
    
    
    - (BOOL)setupAssetReaderAndAssetWriter:(NSError **)outError
    {
        // Create and initialize the asset reader.
        _reader = [[AVAssetReader alloc] initWithAsset:_asset error:outError];
        BOOL success = (_reader != nil);
        if (success)
        {
            // If the asset reader was successfully initialized, do the same for the asset writer.
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            _outputURL = paths[0];
            NSFileManager *manager = [NSFileManager defaultManager];
            [manager createDirectoryAtPath:_outputURL withIntermediateDirectories:YES attributes:nil error:nil];
            _outputURL = [_outputURL stringByAppendingPathComponent:@"output.mov"];
            [manager removeItemAtPath:_outputURL error:nil];
            _outURL = [NSURL fileURLWithPath:_outputURL];
            _writer = [[AVAssetWriter alloc] initWithURL:_outURL fileType:AVFileTypeQuickTimeMovie error:outError];
            success = (_writer != nil);
        }
    
        if (success)
        {
            // If the reader and writer were successfully initialized, grab the audio and video asset tracks that will be used.
            AVAssetTrack *assetAudioTrack = nil, *assetVideoTrack = nil;
            NSArray *audioTracks = [_asset tracksWithMediaType:AVMediaTypeAudio];
            if ([audioTracks count] > 0)
                assetAudioTrack = [audioTracks objectAtIndex:0];
            NSArray *videoTracks = [_asset tracksWithMediaType:AVMediaTypeVideo];
            if ([videoTracks count] > 0)
                assetVideoTrack = [videoTracks objectAtIndex:0];
    
            if (assetAudioTrack)
            {
                // If there is an audio track to read, set the decompression settings to Linear PCM and create the asset reader output.
                NSDictionary *decompressionAudioSettings = @{ AVFormatIDKey : [NSNumber numberWithUnsignedInt:kAudioFormatLinearPCM] };
                _readerAudioOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetAudioTrack outputSettings:decompressionAudioSettings];
                [_reader addOutput:_readerAudioOutput];
                // Then, set the compression settings to 128kbps AAC and create the asset writer input.
                AudioChannelLayout stereoChannelLayout = {
                    .mChannelLayoutTag = kAudioChannelLayoutTag_Stereo,
                    .mChannelBitmap = 0,
                    .mNumberChannelDescriptions = 0
                };
                NSData *channelLayoutAsData = [NSData dataWithBytes:&stereoChannelLayout length:offsetof(AudioChannelLayout, mChannelDescriptions)];
                NSDictionary *compressionAudioSettings = @{
                                                           AVFormatIDKey         : [NSNumber numberWithUnsignedInt:kAudioFormatMPEG4AAC],
                                                           AVEncoderBitRateKey   : [NSNumber numberWithInteger:128000],
                                                           AVSampleRateKey       : [NSNumber numberWithInteger:44100],
                                                           AVChannelLayoutKey    : channelLayoutAsData,
                                                           AVNumberOfChannelsKey : [NSNumber numberWithUnsignedInteger:2]
                                                           };
                _writerAudioInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetAudioTrack mediaType] outputSettings:compressionAudioSettings];
                [_writer addInput:_writerAudioInput];
            }
    
            if (assetVideoTrack)
            {
                // If there is a video track to read, set the decompression settings for YUV and create the asset reader output.
                NSDictionary *decompressionVideoSettings = @{
                                                             (id)kCVPixelBufferPixelFormatTypeKey     : [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange],
                                                             (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary]
                                                             };
                _readerVideoOutput = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:assetVideoTrack outputSettings:decompressionVideoSettings];
                [_reader addOutput:_readerVideoOutput];
                CMFormatDescriptionRef formatDescription = NULL;
                // Grab the video format descriptions from the video track and grab the first one if it exists.
                NSArray *formatDescriptions = [assetVideoTrack formatDescriptions];
                if ([formatDescriptions count] > 0)
                    formatDescription = (__bridge CMFormatDescriptionRef)[formatDescriptions objectAtIndex:0];
                CGSize trackDimensions = {
                    .width = 0.0,
                    .height = 0.0,
                };
                // If the video track had a format description, grab the track dimensions from there. Otherwise, grab them direcly from the track itself.
                if (formatDescription)
                    trackDimensions = CMVideoFormatDescriptionGetPresentationDimensions(formatDescription, false, false);
                else
                    trackDimensions = [assetVideoTrack naturalSize];
                NSDictionary *compressionSettings = nil;
                // If the video track had a format description, attempt to grab the clean aperture settings and pixel aspect ratio used by the video.
                if (formatDescription)
                {
                    NSDictionary *cleanAperture = nil;
                    NSDictionary *pixelAspectRatio = nil;
                    CFDictionaryRef cleanApertureFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_CleanAperture);
                    if (cleanApertureFromCMFormatDescription)
                    {
                        cleanAperture = @{
                                          AVVideoCleanApertureWidthKey            : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureWidth),
                                          AVVideoCleanApertureHeightKey           : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHeight),
                                          AVVideoCleanApertureHorizontalOffsetKey : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureHorizontalOffset),
                                          AVVideoCleanApertureVerticalOffsetKey   : (id)CFDictionaryGetValue(cleanApertureFromCMFormatDescription, kCMFormatDescriptionKey_CleanApertureVerticalOffset)
                                          };
                    }
                    CFDictionaryRef pixelAspectRatioFromCMFormatDescription = CMFormatDescriptionGetExtension(formatDescription, kCMFormatDescriptionExtension_PixelAspectRatio);
                    if (pixelAspectRatioFromCMFormatDescription)
                    {
                        pixelAspectRatio = @{
                                             AVVideoPixelAspectRatioHorizontalSpacingKey : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioHorizontalSpacing),
                                             AVVideoPixelAspectRatioVerticalSpacingKey   : (id)CFDictionaryGetValue(pixelAspectRatioFromCMFormatDescription, kCMFormatDescriptionKey_PixelAspectRatioVerticalSpacing)
                                             };
                    }
                    // Add whichever settings we could grab from the format description to the compression settings dictionary.
                    if (cleanAperture || pixelAspectRatio)
                    {
                        NSMutableDictionary *mutableCompressionSettings = [NSMutableDictionary dictionary];
                        if (cleanAperture)
                            [mutableCompressionSettings setObject:cleanAperture forKey:AVVideoCleanApertureKey];
                        if (pixelAspectRatio)
                            [mutableCompressionSettings setObject:pixelAspectRatio forKey:AVVideoPixelAspectRatioKey];
                        compressionSettings = mutableCompressionSettings;
                    }
                }
                // Create the video settings dictionary for H.264.
                NSMutableDictionary *videoSettings = (NSMutableDictionary *) @{
                                                                               AVVideoCodecKey  : AVVideoCodecH264,
                                                                               AVVideoWidthKey  : [NSNumber numberWithDouble:trackDimensions.width],
                                                                               AVVideoHeightKey : [NSNumber numberWithDouble:trackDimensions.height]
                                                                               };
                // Put the compression settings into the video settings dictionary if we were able to grab them.
                if (compressionSettings)
                    [videoSettings setObject:compressionSettings forKey:AVVideoCompressionPropertiesKey];
                // Create the asset writer input and add it to the asset writer.
                _writerVideoInput = [AVAssetWriterInput assetWriterInputWithMediaType:[assetVideoTrack mediaType] outputSettings:videoSettings];
                NSDictionary *pixelBufferAdaptorSettings = @{
                                                             (id)kCVPixelBufferPixelFormatTypeKey     : @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange),
                                                             (id)kCVPixelBufferIOSurfacePropertiesKey : [NSDictionary dictionary],
                                                             (id)kCVPixelBufferWidthKey               : [NSNumber numberWithDouble:trackDimensions.width],
                                                             (id)kCVPixelBufferHeightKey              : [NSNumber numberWithDouble:trackDimensions.height]
                                                             };
    
                _pixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:_writerVideoInput sourcePixelBufferAttributes:pixelBufferAdaptorSettings];
    
                [_writer addInput:_writerVideoInput];
            }
        }
        return success;
    }
    
    - (BOOL)startAssetReaderAndWriter:(NSError **)outError
    {
        BOOL success = YES;
        // Attempt to start the asset reader.
        success = [_reader startReading];
        if (!success) {
            *outError = [_reader error];
            NSLog(@"Reader error");
        }
        if (success)
        {
            // If the reader started successfully, attempt to start the asset writer.
            success = [_writer startWriting];
            if (!success) {
                *outError = [_writer error];
                NSLog(@"Writer error");
            }
        }
    
        if (success)
        {
            // If the asset reader and writer both started successfully, create the dispatch group where the reencoding will take place and start a sample-writing session.
            _dispatchGroup = dispatch_group_create();
            [_writer startSessionAtSourceTime:kCMTimeZero];
            _audioFinished = NO;
            _videoFinished = NO;
    
            if (_writerAudioInput)
            {
                // If there is audio to reencode, enter the dispatch group before beginning the work.
                dispatch_group_enter(_dispatchGroup);
                // Specify the block to execute when the asset writer is ready for audio media data, and specify the queue to call it on.
                [_writerAudioInput requestMediaDataWhenReadyOnQueue:_rwAudioSerializationQueue usingBlock:^{
                    // Because the block is called asynchronously, check to see whether its task is complete.
                    if (_audioFinished)
                        return;
                    BOOL completedOrFailed = NO;
                    // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                    while ([_writerAudioInput isReadyForMoreMediaData] && !completedOrFailed)
                    {
                        // Get the next audio sample buffer, and append it to the output file.
                        CMSampleBufferRef sampleBuffer = [_readerAudioOutput copyNextSampleBuffer];
                        if (sampleBuffer != NULL)
                        {
                            BOOL success = [_writerAudioInput appendSampleBuffer:sampleBuffer];
                            CFRelease(sampleBuffer);
                            sampleBuffer = NULL;
                            completedOrFailed = !success;
                        }
                        else
                        {
                            completedOrFailed = YES;
                        }
                    }
                    if (completedOrFailed)
                    {
                        // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the audio work has finished).
                        BOOL oldFinished = _audioFinished;
                        _audioFinished = YES;
                        if (oldFinished == NO)
                        {
                            [_writerAudioInput markAsFinished];
                        }
                        dispatch_group_leave(_dispatchGroup);
                    }
                }];
            }
    
            if (_writerVideoInput)
            {
                // If we had video to reencode, enter the dispatch group before beginning the work.
                dispatch_group_enter(_dispatchGroup);
                // Specify the block to execute when the asset writer is ready for video media data, and specify the queue to call it on.
                [_writerVideoInput requestMediaDataWhenReadyOnQueue:_rwVideoSerializationQueue usingBlock:^{
                    // Because the block is called asynchronously, check to see whether its task is complete.
                    if (_videoFinished)
                        return;
                    BOOL completedOrFailed = NO;
                    // If the task isn't complete yet, make sure that the input is actually ready for more media data.
                    while ([_writerVideoInput isReadyForMoreMediaData] && !completedOrFailed)
                    {
                        // Get the next video sample buffer, and append it to the output file.
                        CMSampleBufferRef sampleBuffer = [_readerVideoOutput copyNextSampleBuffer];
    
                        CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
                        _currentBuffer = pixelBuffer;
                        [self performSelectorOnMainThread:@selector(processFrame) withObject:nil waitUntilDone:YES];
    
                        if (_currentBuffer != NULL)
                        {
                            //BOOL success = [_writerVideoInput appendSampleBuffer:sampleBuffer];
                            BOOL success = [_pixelBufferAdaptor appendPixelBuffer:_currentBuffer withPresentationTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
                            CFRelease(sampleBuffer);
                            sampleBuffer = NULL;
                            completedOrFailed = !success;
                        }
                        else
                        {
                            completedOrFailed = YES;
                        }
                    }
                    if (completedOrFailed)
                    {
                        // Mark the input as finished, but only if we haven't already done so, and then leave the dispatch group (since the video work has finished).
                        BOOL oldFinished = _videoFinished;
                        _videoFinished = YES;
                        if (oldFinished == NO)
                        {
                            [_writerVideoInput markAsFinished];
                        }
                        dispatch_group_leave(_dispatchGroup);
                    }
                }];
            }
            // Set up the notification that the dispatch group will send when the audio and video work have both finished.
            dispatch_group_notify(_dispatchGroup, _mainSerializationQueue, ^{
                BOOL finalSuccess = YES;
                NSError *finalError = nil;
                // Check to see if the work has finished due to cancellation.
                if (_cancelled)
                {
                    // If so, cancel the reader and writer.
                    [_reader cancelReading];
                    [_writer cancelWriting];
                }
                else
                {
                    // If cancellation didn't occur, first make sure that the asset reader didn't fail.
                    if ([_reader status] == AVAssetReaderStatusFailed)
                    {
                        finalSuccess = NO;
                        finalError = [_reader error];
                        NSLog(@"_reader finalError: %@", finalError);
                    }
                    // If the asset reader didn't fail, attempt to stop the asset writer and check for any errors.
                    [_writer finishWritingWithCompletionHandler:^{
                        [self readingAndWritingDidFinishSuccessfully:finalSuccess withError:[_writer error]];
                    }];
                }
                // Call the method to handle completion, and pass in the appropriate parameters to indicate whether reencoding was successful.
    
            });
        }
        // Return success here to indicate whether the asset reader and writer were started successfully.
        return success;
    }
    
    - (void)readingAndWritingDidFinishSuccessfully:(BOOL)success withError:(NSError *)error
    {
        if (!success)
        {
            // If the reencoding process failed, we need to cancel the asset reader and writer.
            [_reader cancelReading];
            [_writer cancelWriting];
            dispatch_async(dispatch_get_main_queue(), ^{
                // Handle any UI tasks here related to failure.
            });
        }
        else
        {
            // Reencoding was successful, reset booleans.
            _cancelled = NO;
            _videoFinished = NO;
            _audioFinished = NO;
            dispatch_async(dispatch_get_main_queue(), ^{
                UISaveVideoAtPathToSavedPhotosAlbum(_outputURL, nil, nil, nil);
            });
        }
        NSLog(@"readingAndWritingDidFinishSuccessfully success = %@ : Error = %@", (success == 0) ? @"NO" : @"YES", error);
    }
    
    - (void)processFrame {
    
        if (_currentBuffer) {
            if (kCVReturnSuccess == CVPixelBufferLockBaseAddress(_currentBuffer, kCVPixelBufferLock_ReadOnly))
            {
                [self.renderer processPixelBuffer:_currentBuffer];
                CVPixelBufferUnlockBaseAddress(_currentBuffer, kCVPixelBufferLock_ReadOnly);
            } else {
                NSLog(@"processFrame END");
                return;
            }
        }
    }
    
    @end
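For completeness, a hedged usage sketch based on the interface above; the movie URL and the GLKitView instance are placeholders:

    // Illustrative only: re-encode an existing movie through the GLKitView-based renderer.
    NSURL *movieURL = [[NSBundle mainBundle] URLForResource:@"input" withExtension:@"mov"];
    ExportVideo *exporter = [[ExportVideo alloc] initWithURL:movieURL usingRenderer:self.glkitView];
    [exporter startProcessing];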