iOS: how to crop a detected face

Tags: ios, crop, face-detection, core-image

I'm using CoreImage to detect faces, and after detection I want to crop the face out of the image. This is the snippet I use to detect faces:

// NOTE: this method receives facePicture but then reads the imageView ivar;
// both are assumed to refer to the same image view. EYE_SIZE_RATE and
// MOUTH_SIZE_RATE are assumed to be #define'd fractions of the face width.
-(void)markFaces:(UIImageView *)facePicture{

CIImage* image = [CIImage imageWithCGImage:imageView.image.CGImage];

CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];


NSArray* features = [detector featuresInImage:image];


CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
transform = CGAffineTransformTranslate(transform, 0, -imageView.bounds.size.height);


for(CIFaceFeature* faceFeature in features)
{
    // Get the face rect: Translate CoreImage coordinates to UIKit coordinates
    const CGRect faceRect = CGRectApplyAffineTransform(faceFeature.bounds, transform);


    faceView = [[UIView alloc] initWithFrame:faceRect];
    faceView.layer.borderWidth = 1;
    faceView.layer.borderColor = [[UIColor redColor] CGColor];


    UIGraphicsBeginImageContext(faceView.bounds.size);
    [faceView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    //Blur the UIImage with a CIFilter
    CIImage *imageToBlur = [CIImage imageWithCGImage:viewImage.CGImage];
    CIFilter *gaussianBlurFilter = [CIFilter filterWithName: @"CIGaussianBlur"];
    [gaussianBlurFilter setValue:imageToBlur forKey: @"inputImage"];
    [gaussianBlurFilter setValue:[NSNumber numberWithFloat: 10] forKey: @"inputRadius"];
    CIImage *resultImage = [gaussianBlurFilter valueForKey: @"outputImage"];
    UIImage *endImage = [[UIImage alloc] initWithCIImage:resultImage];

    //Place the UIImage in a UIImageView
    UIImageView *newView = [[UIImageView alloc] initWithFrame:self.view.bounds];
    newView.image = endImage;
    [self.view addSubview:newView];

    CGFloat faceWidth = faceFeature.bounds.size.width;

    [imageView addSubview:faceView];

    // LEFT EYE
    if(faceFeature.hasLeftEyePosition)
    {

        const CGPoint leftEyePos = CGPointApplyAffineTransform(faceFeature.leftEyePosition, transform);

        UIView *leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(leftEyePos.x - faceWidth*EYE_SIZE_RATE*0.5f,
                                                                       leftEyePos.y - faceWidth*EYE_SIZE_RATE*0.5f
                                                                       ,faceWidth*EYE_SIZE_RATE,
                                                                       faceWidth*EYE_SIZE_RATE)];

        NSLog(@"Left Eye X = %0.1f Y = %0.1f Width = %0.1f Height = %0.1f",leftEyePos.x - faceWidth*EYE_SIZE_RATE*0.5f,
              leftEyePos.y - faceWidth*EYE_SIZE_RATE*0.5f,faceWidth*EYE_SIZE_RATE,
              faceWidth*EYE_SIZE_RATE);

        leftEyeView.backgroundColor = [[UIColor magentaColor] colorWithAlphaComponent:0.3];
        leftEyeView.layer.cornerRadius = faceWidth*EYE_SIZE_RATE*0.5;


        [imageView addSubview:leftEyeView];
    }


    // RIGHT EYE
    if(faceFeature.hasRightEyePosition)
    {

        const CGPoint rightEyePos = CGPointApplyAffineTransform(faceFeature.rightEyePosition, transform);


        UIView *rightEye = [[UIView alloc] initWithFrame:CGRectMake(rightEyePos.x - faceWidth*EYE_SIZE_RATE*0.5,
                                                                    rightEyePos.y - faceWidth*EYE_SIZE_RATE*0.5,
                                                                    faceWidth*EYE_SIZE_RATE,
                                                                    faceWidth*EYE_SIZE_RATE)];



        NSLog(@"Right Eye X = %0.1f Y = %0.1f Width = %0.1f Height = %0.1f",rightEyePos.x - faceWidth*EYE_SIZE_RATE*0.5f,
              rightEyePos.y - faceWidth*EYE_SIZE_RATE*0.5f,faceWidth*EYE_SIZE_RATE,
              faceWidth*EYE_SIZE_RATE);

        rightEye.backgroundColor = [[UIColor blueColor] colorWithAlphaComponent:0.2];
        rightEye.layer.cornerRadius = faceWidth*EYE_SIZE_RATE*0.5;
        [imageView addSubview:rightEye];
    }


    // MOUTH
    if(faceFeature.hasMouthPosition)
    {

        const CGPoint mouthPos = CGPointApplyAffineTransform(faceFeature.mouthPosition, transform);


        UIView* mouth = [[UIView alloc] initWithFrame:CGRectMake(mouthPos.x - faceWidth*MOUTH_SIZE_RATE*0.5,
                                                                 mouthPos.y - faceWidth*MOUTH_SIZE_RATE*0.5,
                                                                 faceWidth*MOUTH_SIZE_RATE,
                                                                 faceWidth*MOUTH_SIZE_RATE)];

        NSLog(@"Mouth X = %0.1f Y = %0.1f Width = %0.1f Height = %0.1f",mouthPos.x - faceWidth*MOUTH_SIZE_RATE*0.5f,
              mouthPos.y - faceWidth*MOUTH_SIZE_RATE*0.5f,faceWidth*MOUTH_SIZE_RATE,
              faceWidth*MOUTH_SIZE_RATE);


        mouth.backgroundColor = [[UIColor greenColor] colorWithAlphaComponent:0.3];
        mouth.layer.cornerRadius = faceWidth*MOUTH_SIZE_RATE*0.5;
        [imageView addSubview:mouth];

    }
}
}

All I want is to crop the face.

You can crop the face easily with this function. It is tested and works fine:

-(void)faceWithFrame:(CGRect)frame{
    CGRect rect = frame;
    CGImageRef imageRef = CGImageCreateWithImageInRect([self.imageView.image CGImage], rect);
    UIImage *croppedImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef); // CGImageCreateWithImageInRect returns a +1 reference
    self.croppedImg.image = croppedImage;
}

You just need to pass the face frame, and the function above will give you the cropped face image.
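One thing to watch out for: CIDetector reports faceFeature.bounds in Core Image coordinates (origin at the bottom-left, in image pixels), while CGImageCreateWithImageInRect expects a top-left-origin rect in the CGImage's pixel space. A minimal sketch of the conversion, assuming the image's orientation is up (the helper name is mine, not from the code above):

    // Hypothetical helper: convert a CIFaceFeature rect (bottom-left origin,
    // image pixels) into the top-left-origin pixel rect expected by
    // CGImageCreateWithImageInRect. Assumes UIImageOrientationUp.
    - (CGRect)cgRectForFaceBounds:(CGRect)faceBounds inImage:(UIImage *)image {
        CGFloat imageHeight = CGImageGetHeight(image.CGImage);
        CGRect rect = faceBounds;
        rect.origin.y = imageHeight - rect.origin.y - rect.size.height; // flip vertically
        return rect;
    }

Pass the result of this helper as the frame argument; without the flip, the crop will come from the wrong part of the image.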

You posted some code that uses face detection to do things unrelated to what you want to do. That code does, however, include computing a rectangle for each face in the image, in image coordinates. Your next task is to modify that code so it extracts the image at that rectangle into another image. You should be able to create a context the same size as your desired output image and then use drawInRect: to render the face portion into that image context. Give it a try, and if you have problems, post your code and ask for help.
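The approach described above can be sketched roughly as follows. This is an illustration under the assumption that faceRect is already in the source image's top-left-origin point space, not a tested drop-in:

    // Render the face region of the source image into a new image context
    // of the desired output size.
    - (UIImage *)croppedFaceFromImage:(UIImage *)image inRect:(CGRect)faceRect {
        UIGraphicsBeginImageContextWithOptions(faceRect.size, NO, image.scale);
        // Shift the drawing so that faceRect's origin lands at (0, 0), then
        // draw the full image; only the face region falls inside the context.
        [image drawInRect:CGRectMake(-faceRect.origin.x,
                                     -faceRect.origin.y,
                                     image.size.width,
                                     image.size.height)];
        UIImage *face = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();
        return face;
    }

Unlike CGImageCreateWithImageInRect, this works in points rather than pixels and respects the image's orientation, which avoids the coordinate-flip issue entirely.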