iOS: Cropping an image from AVCapture based on a rectangle view above the camera layer


I have a camera preview layer with the camera preset set to 1280x720. On top of the preview layer I added a square view with a border.

My goal is to get the cropped image from the camera.

A method that extracts the cropped data from the camera buffer:

-(CGImageRef)createImageFromBuffer:(CVImageBufferRef)buffer
                              left:(size_t)left
                               top:(size_t)top
                             width:(size_t)width
                            height:(size_t)height CF_RETURNS_RETAINED {
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(buffer);
    size_t dataWidth = CVPixelBufferGetWidth(buffer);
    size_t dataHeight = CVPixelBufferGetHeight(buffer);

    if (left + width > dataWidth ||
        top + height > dataHeight) {
        [NSException raise:NSInvalidArgumentException format:@"Crop rectangle does not fit within image data."];
    }

    // round the BGRA row (width * 4 bytes) up to a 16-byte multiple
    size_t newBytesPerRow = ((width*4+0xf)>>4)<<4;

    CVPixelBufferLockBaseAddress(buffer,0);

    int8_t *baseAddress = (int8_t *)CVPixelBufferGetBaseAddress(buffer);

    size_t size = newBytesPerRow*height;
    int8_t *bytes = (int8_t *)malloc(size * sizeof(int8_t));
    if (newBytesPerRow == bytesPerRow) {
        memcpy(bytes, baseAddress+top*bytesPerRow, size * sizeof(int8_t));
    } else {
        for (size_t y = 0; y < height; y++) {
            memcpy(bytes+y*newBytesPerRow,
                   baseAddress+left*4+(top+y)*bytesPerRow,
                   newBytesPerRow * sizeof(int8_t));
        }
    }
    CVPixelBufferUnlockBaseAddress(buffer, 0);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef newContext = CGBitmapContextCreate(bytes,
                                                    width,
                                                    height,
                                                    8,
                                                    newBytesPerRow,
                                                    colorSpace,
                                                    kCGBitmapByteOrder32Little|
                                                    kCGImageAlphaNoneSkipFirst);
    CGColorSpaceRelease(colorSpace);

    CGImageRef result = CGBitmapContextCreateImage(newContext);

    CGContextRelease(newContext);

    free(bytes);

    return result;
}
Extracting the data:

    - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

        if(self.lastDecodeTime && [self.lastDecodeTime timeIntervalSinceNow]>-DECODE_LIMIT_TIME){
            return;
        }
        if ( self.scannerDisabled)
            return;

        self.lastDecodeTime=[NSDate date];

        CVImageBufferRef videoFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
        CGFloat cameraFrameWidth = CVPixelBufferGetWidth(videoFrame);
        CGFloat cameraFrameHeight = CVPixelBufferGetHeight(videoFrame);


        CGPoint rectPoint = self.rectangleView.frame.origin;
        rectPoint = [self.previewLayer convertPoint:rectPoint fromLayer:self.view.layer];
        CGPoint cameraPoint = [self.previewLayer captureDevicePointOfInterestForPoint:rectPoint];
        // the y coordinate must scale cameraPoint.y, not cameraPoint.x
        CGPoint matrixPoint = CGPointMake(cameraPoint.x*cameraFrameWidth, cameraPoint.y*cameraFrameHeight);

        CGFloat D = self.rectangleView.frame.size.width*2.0;
        CGRect matrixRect = CGRectMake(matrixPoint.x, matrixPoint.y, D, D);


        CGImageRef videoFrameImage = [self createImageFromBuffer:videoFrame left:matrixRect.origin.x top:matrixRect.origin.y width:matrixRect.size.width height:matrixRect.size.height];

        CGImageRef rotatedImage = [self createRotatedImage:videoFrameImage degrees:self.rotationDeg];
        CGImageRelease(videoFrameImage);
...
...
...
}
For debugging, I added a small image view in the top-left corner to show the cropped result. You can see I'm on the right track, but there is some offset. My assumption: the camera buffer is 1280x720 while the iPhone screen has a different aspect ratio, so some cropping happens somewhere along the way, and that may be the offset I'm dealing with.

A screenshot is attached; you can see the cropped image is not centered.

P.S. these are the output settings:

AVCaptureVideoDataOutput *output = [AVCaptureVideoDataOutput new];

NSDictionary *rgbOutputSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCMPixelFormat_32BGRA) };
[output setVideoSettings:rgbOutputSettings];

Any ideas?

Try this to get a cropped snapshot from the whole view:

[self.view resizableSnapshotViewFromRect:requiredRectToCrop afterScreenUpdates:YES withCapInsets:UIEdgeInsetsZero];

Comments:

- Thanks, but that's not what I'm looking for.
- Just make requiredRectToCrop the rect of your square UIView.
- What is requiredRectToCrop? I couldn't find any reference to this method.
- "I have a camera preview layer with the camera preset set to 1280x720. On top of the preview layer I added a square UIView with a border." requiredRectToCrop is the CGRect of that square UIView: requiredRectToCrop = squareView.frame;

[self.view resizableSnapshotViewFromRect:requiredRectToCrop afterScreenUpdates:YES withCapInsets:UIEdgeInsetsZero];