Specifying the resolution and quality of captured images with AVCaptureSession in an Objective-C iPhone app


Hi, I want to set up an AVCaptureSession to take images with the iPhone camera at a specific resolution (and, if possible, at a specific quality). Here is the session-setup code:

// Create and configure a capture session and start it running
- (void)setupCaptureSession 
{
    NSError *error = nil;

    // Create the session
    self.captureSession = [[AVCaptureSession alloc] init];

    // Configure the session to produce lower resolution video frames, if your 
    // processing algorithm can cope. We'll specify medium quality for the
    // chosen device.
    self.captureSession.sessionPreset = AVCaptureSessionPresetMedium;

    // Find a suitable AVCaptureDevice. Don't rely on the order of the
    // devicesWithMediaType: array; match on the device's position instead.
    AVCaptureDevicePosition wantedPosition =
        ([UserDefaults camera] == UIImagePickerControllerCameraDeviceFront)
            ? AVCaptureDevicePositionFront
            : AVCaptureDevicePositionBack;
    AVCaptureDevice *device = nil;
    for (AVCaptureDevice *camera in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo])
    {
        if (camera.position == wantedPosition)
        {
            device = camera;
            break;
        }
    }

    // Create a device input with the device and add it to the session.
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input)
    {
        NSLog(@"PANIC: no media input: %@", error);
        return;
    }
    [self.captureSession addInput:input];

    // Create a VideoDataOutput and add it to the session
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    [self.captureSession addOutput:output];
    NSLog(@"connections: %@", output.connections);

    // Configure your output.
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    // Specify the pixel format
    output.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];


    // If you wish to cap the frame rate to a known value, such as 15 fps, set 
    // minFrameDuration.

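    // For example, capping at 15 fps (a sketch: minFrameDuration is
    // deprecated on newer SDKs, where you would instead set the device's
    // activeVideoMinFrameDuration between lockForConfiguration: and
    // unlockForConfiguration:):
    // output.minFrameDuration = CMTimeMake(1, 15);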

    // Assign session to an ivar and set up the preview.
    [self setSession:self.captureSession];
    [self.captureSession startRunning];
}
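
If a particular resolution matters, it is worth checking that the device actually supports the preset before assigning it. A minimal sketch (the 640x480 choice and the helper name are just examples, not part of the original code):

- (void)applyPreferredPreset
{
    // Prefer 640x480, but fall back to medium if this device can't do it.
    if ([self.captureSession canSetSessionPreset:AVCaptureSessionPreset640x480])
    {
        self.captureSession.sessionPreset = AVCaptureSessionPreset640x480;
    }
    else
    {
        self.captureSession.sessionPreset = AVCaptureSessionPresetMedium;
    }
}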
And the setSession method:

-(void)setSession:(AVCaptureSession *)session
{
    NSLog(@"setting session...");
    self.captureSession=session;
    NSLog(@"setting camera view");
    self.previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
    CGRect videoRect = CGRectMake(20.0, 20.0, 280.0, 255.0);
    self.previewLayer.frame = videoRect; // Position the preview layer within the view.
    [self.previewLayer setBackgroundColor:[[UIColor grayColor] CGColor]];
    [self.view.layer addSublayer:self.previewLayer];
}
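
If the preview should fill its rect without distortion, the layer's video gravity can be set as well. A small optional addition (the gravity choice here is just an example):

self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;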
And the output methods:

// Delegate routine that is called when a sample buffer was written
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer 
   fromConnection:(AVCaptureConnection *)connection
{ 
    //NSLog(@"captureOutput: didOutputSampleBufferFromConnection");

    // Create a UIImage from the sample buffer data
    self.currentImage = [self imageFromSampleBuffer:sampleBuffer];

    //< Add your code here that uses the image >
}

// Create a UIImage from sample buffer data
- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer 
{
    //NSLog(@"imageFromSampleBuffer: called");
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0); 

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer); 

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context); 
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer,0);


    // Free up the context and color space
    CGContextRelease(context); 
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return (image);
}
Everything is pretty standard. But where, and what, should I change to specify the resolution and quality of the captured images? Please help me.

See the "Capturing Still Images" section for the sizes you will get if you set one preset or another.


The parameter you should change is captureSession.sessionPreset.

Try using something like this, where cx and cy are your custom resolution:

NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                               AVVideoScalingModeResizeAspectFill, AVVideoScalingModeKey,
                               AVVideoCodecH264, AVVideoCodecKey,
                               [NSNumber numberWithInt:cx], AVVideoWidthKey,
                               [NSNumber numberWithInt:cy], AVVideoHeightKey,
                               nil];
_videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
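
Note that an AVAssetWriterInput applies when you are writing a movie file with AVAssetWriter, not when you only change the capture session's preset. A minimal sketch of how such an input would be attached (the outputURL variable is a placeholder you would supply):

NSError *error = nil;
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:outputURL
                                                 fileType:AVFileTypeQuickTimeMovie
                                                    error:&error];
if (writer && [writer canAddInput:_videoInput])
{
    [writer addInput:_videoInput];
}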

I have a UISlider in my app; the user will use it to pick a higher or lower value. But which values are higher?
NSString *const AVCaptureSessionPresetPhoto;
NSString *const AVCaptureSessionPresetHigh;
NSString *const AVCaptureSessionPresetMedium;
NSString *const AVCaptureSessionPresetLow;
NSString *const AVCaptureSessionPreset320x240;
NSString *const AVCaptureSessionPreset352x288;
NSString *const AVCaptureSessionPreset640x480;
NSString *const AVCaptureSessionPreset960x540;
NSString *const AVCaptureSessionPreset1280x720;

Obviously, the better the quality, the higher the preset. This may help.
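
One way to wire this to the UISlider mentioned above: keep the presets in an array ordered from lowest to highest quality and index it with the slider's value. A sketch under assumptions stated in the comments (the action method name is hypothetical, and the ordering is a reasonable one, not an official ranking):

// Hypothetical action method; hook it to the slider's Value Changed event.
- (void)sliderChanged:(UISlider *)slider
{
    // Presets ordered roughly from lowest to highest quality.
    NSArray *presets = @[AVCaptureSessionPresetLow,
                         AVCaptureSessionPreset352x288,
                         AVCaptureSessionPresetMedium,
                         AVCaptureSessionPreset640x480,
                         AVCaptureSessionPreset1280x720,
                         AVCaptureSessionPresetHigh,
                         AVCaptureSessionPresetPhoto];

    // Assumes the slider was configured with minimumValue = 0 and
    // maximumValue = presets.count - 1.
    NSUInteger index = (NSUInteger)lroundf(slider.value);
    NSString *preset = presets[MIN(index, presets.count - 1)];

    // Only apply presets the current device actually supports.
    if ([self.captureSession canSetSessionPreset:preset])
    {
        self.captureSession.sessionPreset = preset;
    }
}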