iOS: How to capture only what is visible in the camera preview on iPhone

Tags: ios, iphone, objective-c, avcapturesession, avcapturedevice

I have created a view controller in which 50% of the view is the camera preview and the other 50% holds buttons and other controls.
The problem I am facing is that the captured image is much larger; I only want to capture what I can see in that 50% preview.
So it looks like this:
This is what I see in the view:

This is the image I get after capturing:

Here is the code behind this:

-(void) viewDidAppear:(BOOL)animated
{
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetMedium;

    CALayer *viewLayer = self.vImagePreview.view.layer;
    NSLog(@"viewLayer = %@", viewLayer);

    AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];

    [captureVideoPreviewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];

    captureVideoPreviewLayer.frame = self.vImagePreview.view.bounds;
    [self.vImagePreview.view.layer addSublayer:captureVideoPreviewLayer];
    NSLog(@"Rect of self.view: %@",NSStringFromCGRect(self.vImagePreview.view.frame));
    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        NSLog(@"ERROR: trying to open camera: %@", error);
    }
    [session addInput:input];

    stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys: AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [stillImageOutput setOutputSettings:outputSettings];

    [session addOutput:stillImageOutput];

    [session startRunning];
}

- (void)viewDidLoad
{
    [super viewDidLoad];

    // Load camera
    vImagePreview = [[CameraViewController alloc]init];

    vImagePreview.view.frame = CGRectMake(10, 10, 300, 500);
    [self.view addSubview:vImagePreview.view];

    vImage = [[UIImageView alloc]init];
    vImage.frame = CGRectMake(10, 10, 300, 300);

    [self.view addSubview:vImage];

}
And this is what happens when I try to capture an image:

    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillImageOutput.connections)
    {
        for (AVCaptureInputPort *port in [connection inputPorts])
        {
            if ([[port mediaType] isEqual:AVMediaTypeVideo] )
            {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) { break; }
    }

    NSLog(@"about to request a capture from: %@", stillImageOutput);
    [stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
     {
         CFDictionaryRef exifAttachments = CMGetAttachment( imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
         if (exifAttachments)
         {
             // Do something with the attachments.
             NSLog(@"attachements: %@", exifAttachments);
         }
         else
             NSLog(@"no attachments");

         NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
         image = [[UIImage alloc] initWithData:imageData];

         UIAlertView *successAlert = [[UIAlertView alloc] init];
         successAlert.title = @"Review Picture";
         successAlert.message = @"";
         [successAlert addButtonWithTitle:@"Save"];
         [successAlert addButtonWithTitle:@"Retake"];

         [successAlert setDelegate:self];

         UIImageView *_imageView = [[UIImageView alloc] initWithFrame:CGRectMake(220, 10, 40, 40)];
         _imageView.image = image;
         [successAlert addSubview:_imageView];
         [successAlert show];

         UIImage *_image = [self imageByScalingAndCroppingForSize:CGSizeMake(640,480) :_imageView.image];
         NSData *idata = [NSData dataWithData:UIImagePNGRepresentation(_image)];
         encodedImage = [self encodeBase64WithData:idata];
     }];

Why am I getting the whole camera frame? How can I reduce what the camera captures so that only what is visible in the preview is captured?

You can crop the image after capturing it. Try this:

CGImageRef imageRef = CGImageCreateWithImageInRect([largeImage CGImage], cropRect);
// or use the UIImage wherever you like
myImageView.image = [UIImage imageWithCGImage:imageRef];
CGImageRelease(imageRef);
Or try this (as a method in a UIImage category):

- (UIImage *)crop:(CGRect)rect {
    if (self.scale > 1.0f) {
        rect = CGRectMake(rect.origin.x * self.scale,
                          rect.origin.y * self.scale,
                          rect.size.width * self.scale,
                          rect.size.height * self.scale);
    }

    CGImageRef imageRef = CGImageCreateWithImageInRect(self.CGImage, rect);
    UIImage *result = [UIImage imageWithCGImage:imageRef scale:self.scale orientation:self.imageOrientation];
    CGImageRelease(imageRef);
    return result;
}
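To compute the `cropRect` for the snippets above (rather than by trial and error), note that with `AVLayerVideoGravityResizeAspectFill` the preview shows a centered sub-rectangle of the image that has the preview's aspect ratio. A minimal sketch of that math in plain C (the `aspectFillCropRect` function and the small struct are mine, for illustration; you would use `CGRect` in the app):

```c
/* Illustrative helper: given the full image size and the preview size,
 * return the sub-rectangle of the image that an aspect-fill preview
 * actually shows, in image pixel coordinates. */
typedef struct { double x, y, w, h; } Rect;

static Rect aspectFillCropRect(double imageW, double imageH,
                               double previewW, double previewH) {
    /* Aspect-fill scales the image until it covers the preview,
     * so the effective scale is the larger of the two ratios. */
    double scaleX = previewW / imageW;
    double scaleY = previewH / imageH;
    double scale = scaleX > scaleY ? scaleX : scaleY;

    /* The visible region is the preview's size divided by that scale,
     * centered in the image. */
    Rect r;
    r.w = previewW / scale;
    r.h = previewH / scale;
    r.x = (imageW - r.w) / 2.0;
    r.y = (imageH - r.h) / 2.0;
    return r;
}
```

Pass the resulting rect to `CGImageCreateWithImageInRect` (after accounting for image orientation, since still images come off the sensor in landscape).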

Also refer to this link:

You could also try taking a screenshot of just the given rect directly; that would cut down the size and the rest of the work. Something like that, but how do I know the origin x and y?
For that you need to determine the frame manually by running the code and finding the rect by trial and error. My suggestion: take a screenshot of the captured image and then work out the offset between the desired image and origin.x.
That is bad advice, and it is not trial and error: the image itself has width and height properties, and so do the view and the preview view. You can calculate the actual visible frame from those properties.
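As the last comment suggests, the visible frame can be computed from the layer itself. If you can target iOS 7 or later, `AVCaptureVideoPreviewLayer` does the conversion for you: `metadataOutputRectOfInterestForRect:` maps a rect in layer coordinates to a normalized (0–1) rect in capture-output coordinates, which you can then scale to the captured image's pixel size. A sketch, assuming the `captureVideoPreviewLayer` and `image` variables from the question, and ignoring image orientation (which you may still need to handle):

```objc
// Convert the preview layer's visible bounds into normalized
// capture coordinates (0..1, relative to the full sensor output).
CGRect normalized =
    [captureVideoPreviewLayer metadataOutputRectOfInterestForRect:
                                  captureVideoPreviewLayer.bounds];

// Scale the normalized rect up to the captured image's pixel size.
CGSize pixelSize = CGSizeMake(image.size.width * image.scale,
                              image.size.height * image.scale);
CGRect cropRect = CGRectMake(normalized.origin.x * pixelSize.width,
                             normalized.origin.y * pixelSize.height,
                             normalized.size.width * pixelSize.width,
                             normalized.size.height * pixelSize.height);

// Crop exactly what the preview showed.
CGImageRef croppedRef = CGImageCreateWithImageInRect(image.CGImage, cropRect);
UIImage *visibleImage = [UIImage imageWithCGImage:croppedRef
                                            scale:image.scale
                                      orientation:image.imageOrientation];
CGImageRelease(croppedRef);
```

This avoids hard-coding any frame values: if the preview's frame or video gravity changes, the crop follows automatically.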