iOS: Crop the captured image to exactly match what is seen in the AVCaptureVideoPreviewLayer

I have a photo app that uses AVFoundation. I have set up a preview layer using an AVCaptureVideoPreviewLayer that takes up the top half of the screen. So when the user is trying to take their photo, all they can see is what the top half of the screen sees.

This works great, but when the user actually takes the photo and I try to set the photo as the layer's contents, the image is distorted. I did some research and realized that I need to crop the image.

All I want to do is crop the full captured image so that all that is left is exactly what the user could originally see in the top half of the screen.

I have been able to sort of accomplish this, but I am doing it by entering manual CGRect values, and it still does not look perfect. There has to be an easier way to do this.

I have literally gone through every post on Stack Overflow about cropping images over the past two days, and nothing has worked.

There has to be a way to programmatically crop the captured image so that the final image is exactly what was originally seen in the preview layer.

Here is my viewDidLoad implementation:

- (void)viewDidLoad
{
    [super viewDidLoad];

    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    [session setSessionPreset:AVCaptureSessionPresetPhoto];

    AVCaptureDevice *inputDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    // Use a nil out-parameter; AVFoundation fills it in on failure.
    NSError *error = nil;
    AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:inputDevice error:&error];

    if ([session canAddInput:deviceInput])
        [session addInput:deviceInput];

    CALayer *rootLayer = [[self view] layer];
    [rootLayer setMasksToBounds:YES];

    // The preview layer occupies the top half of the screen.
    _previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    [_previewLayer setFrame:CGRectMake(0, 0, rootLayer.bounds.size.width, rootLayer.bounds.size.height / 2)];
    [_previewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];

    [rootLayer insertSublayer:_previewLayer atIndex:0];

    _stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    if ([session canAddOutput:_stillImageOutput])
        [session addOutput:_stillImageOutput];

    [session startRunning];
}
And here is the code that runs when the user presses the button to capture a photo:

-(IBAction)stillImageCapture {
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in _stillImageOutput.connections){
        for (AVCaptureInputPort *port in [connection inputPorts]){
            if ([[port mediaType] isEqual:AVMediaTypeVideo]){
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) {
            break;
        }
    }

    NSLog(@"about to request a capture from: %@", _stillImageOutput);

    [_stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
        if(imageDataSampleBuffer) {
            NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageDataSampleBuffer];

            UIImage *image = [[UIImage alloc]initWithData:imageData];
            CALayer *subLayer = [CALayer layer];
            subLayer.frame = _previewLayer.frame;
            image = [self rotate:image andOrientation:image.imageOrientation];

            //Below is the crop that is sort of working for me, but as you can see I am manually entering in values and just guessing and it still does not look perfect.
            CGRect cropRect = CGRectMake(0, 650, 3000, 2000);
            CGImageRef imageRef = CGImageCreateWithImageInRect([image CGImage], cropRect);

            subLayer.contents = (id)[UIImage imageWithCGImage:imageRef].CGImage;
            subLayer.frame = _previewLayer.frame;

            [_previewLayer addSublayer:subLayer];
        }
    }];
}

Take a look at AVCaptureVideoPreviewLayer's

-(CGRect)metadataOutputRectOfInterestForRect:(CGRect)layerRect

This method lets you easily convert the visible CGRect of your layer to the actual camera output.
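For instance, with the _previewLayer set up in the question, the conversion is a one-liner (a minimal sketch; the layer must belong to a running capture session for the result to be meaningful):

// Convert the layer's visible rect (here, its full bounds) into the
// fractional (0..1) coordinate space of the camera output.
CGRect metaRect = [_previewLayer metadataOutputRectOfInterestForRect:_previewLayer.bounds];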

One caveat: the physical camera is not mounted "top side up", but rather rotated 90 degrees clockwise. (So if you hold your iPhone with the Home button on the right, the camera is actually upright.)

Keeping this in mind, you have to convert the CGRect the above method gives you in order to crop the image to exactly what is on screen.

Example:

CGRect visibleLayerFrame = self.previewView.bounds; // the actual visible area in the layer frame
CGRect metaRect = [self.previewView.layer metadataOutputRectOfInterestForRect:visibleLayerFrame];


CGSize originalSize = [originalImage size];

if (UIInterfaceOrientationIsPortrait(_snapInterfaceOrientation)) {
    // For portrait images, swap the size of the image, because
    // here the output image is actually rotated relative to what you see on screen.

    CGFloat temp = originalSize.width;
    originalSize.width = originalSize.height;
    originalSize.height = temp;
}


// metaRect is fractional, that's why we multiply here

CGRect cropRect;

cropRect.origin.x = metaRect.origin.x * originalSize.width;
cropRect.origin.y = metaRect.origin.y * originalSize.height;
cropRect.size.width = metaRect.size.width * originalSize.width;
cropRect.size.height = metaRect.size.height * originalSize.height;

cropRect = CGRectIntegral(cropRect);
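
The snippet above stops at computing cropRect; actually applying it looks like this (a sketch assuming originalImage is the captured UIImage, mirroring the Swift version below):

CGImageRef croppedCGImage = CGImageCreateWithImageInRect(originalImage.CGImage, cropRect);
UIImage *finalImage = [UIImage imageWithCGImage:croppedCGImage
                                          scale:1.0
                                    orientation:originalImage.imageOrientation];
CGImageRelease(croppedCGImage); // the created CGImage is a +1 reference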
This may be a bit confusing, but what made me really understand it is this:

Hold your device "Home button right" -> you will see that the x-axis actually runs along the "height" of your iPhone, while the y-axis runs along its "width". That's why for portrait images you have to swap the sizes ;)

@Cabus has a working solution and you should upvote his answer. However, I wrote my own version in Swift with the following:

// The image returned in initialImageData will be larger than what
//  is shown in the AVCaptureVideoPreviewLayer, so we need to crop it.
let image : UIImage = UIImage(data: initialImageData)!

let originalSize : CGSize
let visibleLayerFrame = self.previewView!.bounds // THE ACTUAL VISIBLE AREA IN THE LAYER FRAME

// Calculate the fractional size that is shown in the preview
let metaRect : CGRect = (self.videoPreviewLayer?.metadataOutputRectOfInterestForRect(visibleLayerFrame))!
if (image.imageOrientation == UIImageOrientation.Left || image.imageOrientation == UIImageOrientation.Right) {
    // For these images (which are portrait), swap the size of the
    // image, because here the output image is actually rotated
    // relative to what you see on screen.
    originalSize = CGSize(width: image.size.height, height: image.size.width)
}
else {
    originalSize = image.size
}

// metaRect is fractional, that's why we multiply here.
let cropRect : CGRect = CGRectIntegral(
        CGRect( x: metaRect.origin.x * originalSize.width,
                y: metaRect.origin.y * originalSize.height,
                width: metaRect.size.width * originalSize.width,
                height: metaRect.size.height * originalSize.height))

let finalImage : UIImage = 
    UIImage(CGImage: CGImageCreateWithImageInRect(image.CGImage, cropRect)!, 
        scale:1, 
        orientation: image.imageOrientation )
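
From here, finalImage can be assigned to a layer's contents or shown in a UIImageView, just like subLayer.contents in the question's code.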

Here is @Erik Allen's answer above, updated for Swift 3:

let originalSize: CGSize
let visibleLayerFrame = self?.photoView.bounds

// Calculate the fractional size that is shown in the preview
let metaRect = (self?.videoPreviewLayer?.metadataOutputRectOfInterest(for: visibleLayerFrame ?? CGRect.zero)) ?? CGRect.zero

if (image.imageOrientation == UIImageOrientation.left || image.imageOrientation == UIImageOrientation.right) {
    // For these images (which are portrait), swap the size of the
    // image, because here the output image is actually rotated
    // relative to what you see on screen.
    originalSize = CGSize(width: image.size.height, height: image.size.width)
} else {
    originalSize = image.size
}

let cropRect: CGRect = CGRect(x: metaRect.origin.x * originalSize.width, y: metaRect.origin.y * originalSize.height, width: metaRect.size.width * originalSize.width, height: metaRect.size.height * originalSize.height).integral

if let finalCgImage = image.cgImage?.cropping(to: cropRect) {
    let finalImage = UIImage(cgImage: finalCgImage, scale: 1.0, orientation: image.imageOrientation)

    // Use your image...
}

@user3117509 This answer deserves the points, you should accept it! Also, @Cabus, your version does not rotate the image. I use the following code:

UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
UIGraphicsBeginImageContext(croppedImage.size);
[[UIImage imageWithCGImage:croppedImage.CGImage scale:1.0 orientation:UIImageOrientationRight] drawInRect:CGRectMake(0, 0, croppedImage.size.height, croppedImage.size.width)];
UIImage *rotatedCroppedImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();