iOS Facial Feature Detection with OpenCV

I want to do facial feature detection in iOS. I have already been able to detect a face using OpenCV, but now I want to detect all the "features" on that face so that I can run recognition on them later.

I found a library called flandmark, but it doesn't look like a framework I can use on iOS.

Does anyone know how I can do this?

Thanks,
Nikhil Mehta

I'd suggest keeping things as simple as possible; in this case, the native iOS functionality is all you need.

The main class is CIDetector from the Core Image framework. Here are the main calls:

// create CIDetector object with CIDetectorTypeFace type
CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                          context:nil
                                          options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh}];

// wrap the source UIImage in a CIImage and receive an array of
// CIFaceFeature objects (`image` is assumed to be the UIImage to scan)
CIImage *newImage = [CIImage imageWithCGImage:image.CGImage];
NSArray *features = [detector featuresInImage:newImage];

According to Apple's documentation, CIFaceFeature contains the following properties:

@interface CIFaceFeature : CIFeature

 @property (readonly, assign) CGRect bounds;
 @property (readonly, assign) BOOL hasLeftEyePosition;
 @property (readonly, assign) CGPoint leftEyePosition;
 @property (readonly, assign) BOOL hasRightEyePosition;
 @property (readonly, assign) CGPoint rightEyePosition;
 @property (readonly, assign) BOOL hasMouthPosition;
 @property (readonly, assign) CGPoint mouthPosition;

 @property (readonly, assign) BOOL hasTrackingID;
 @property (readonly, assign) int trackingID;
 @property (readonly, assign) BOOL hasTrackingFrameCount;
 @property (readonly, assign) int trackingFrameCount;

 @property (readonly, assign) BOOL hasFaceAngle;
 @property (readonly, assign) float faceAngle;

 @property (readonly, assign) BOOL hasSmile;
 @property (readonly, assign) BOOL leftEyeClosed;
 @property (readonly, assign) BOOL rightEyeClosed;

 @end
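
As a quick usage sketch (an illustration added here, not part of the original answer): check each has… flag before reading the corresponding value, and note that the smile and eye-blink flags are only populated when you request them through the CIDetectorSmile and CIDetectorEyeBlink option keys:

// Sketch: inspect each detected face; smile/blink detection must be
// requested explicitly via the options dictionary.
NSArray *features = [detector featuresInImage:newImage
                                      options:@{CIDetectorSmile: @YES,
                                                CIDetectorEyeBlink: @YES}];
for (CIFaceFeature *face in features) {
    NSLog(@"face bounds: %@", NSStringFromCGRect(face.bounds));
    if (face.hasMouthPosition) {
        NSLog(@"mouth at: %@", NSStringFromCGPoint(face.mouthPosition));
    }
    if (face.hasFaceAngle) {
        NSLog(@"face angle: %f", face.faceAngle);
    }
    NSLog(@"smiling: %d, left eye closed: %d, right eye closed: %d",
          face.hasSmile, face.leftEyeClosed, face.rightEyeClosed);
}
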
There is also a great GCD tutorial example that implements facial feature detection: it finds the positions of people's eyes and overlays funny eyeballs on top of them.

Finally, here is part of the code from that project:
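
The code references two constants, kFaceBoundsToEyeScaleFactor and kRetinaToEyeScaleFactor, that are defined elsewhere in the project. A minimal sketch of plausible definitions (the exact values here are assumptions, tune them to taste):

// Assumed definitions; the original project supplies its own values.
// How much smaller an eye region is than the face bounds:
static const CGFloat kFaceBoundsToEyeScaleFactor = 4.0f;
// How large the drawn pupil is relative to the eye region:
static const CGFloat kRetinaToEyeScaleFactor = 0.75f;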

- (UIImage *)faceOverlayImageFromImage:(UIImage *)image
{
    CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:@{CIDetectorAccuracy: CIDetectorAccuracyHigh}];
    // Get features from the image
    CIImage* newImage = [CIImage imageWithCGImage:image.CGImage];

    NSArray *features = [detector featuresInImage:newImage];

    UIGraphicsBeginImageContext(image.size);
    CGRect imageRect = CGRectMake(0.0f, 0.0f, image.size.width, image.size.height);

    //Draws this in the upper left coordinate system
    [image drawInRect:imageRect blendMode:kCGBlendModeNormal alpha:1.0f];

    CGContextRef context = UIGraphicsGetCurrentContext();

    for (CIFaceFeature *faceFeature in features) {
        CGRect faceRect = [faceFeature bounds];
        CGContextSaveGState(context);

        // CI and CG work in different coordinate systems, we should translate to
        // the correct one so we don't get mixed up when calculating the face position.
        CGContextTranslateCTM(context, 0.0, imageRect.size.height);
        CGContextScaleCTM(context, 1.0f, -1.0f);

        if ([faceFeature hasLeftEyePosition]) {
            CGPoint leftEyePosition = [faceFeature leftEyePosition];
            CGFloat eyeWidth = faceRect.size.width / kFaceBoundsToEyeScaleFactor;
            CGFloat eyeHeight = faceRect.size.height / kFaceBoundsToEyeScaleFactor;
            CGRect eyeRect = CGRectMake(leftEyePosition.x - eyeWidth/2.0f,
                                        leftEyePosition.y - eyeHeight/2.0f,
                                        eyeWidth,
                                        eyeHeight);
            [self drawEyeBallForFrame:eyeRect];
        }

        if ([faceFeature hasRightEyePosition]) {
            CGPoint rightEyePosition = [faceFeature rightEyePosition];
            CGFloat eyeWidth = faceRect.size.width / kFaceBoundsToEyeScaleFactor;
            CGFloat eyeHeight = faceRect.size.height / kFaceBoundsToEyeScaleFactor;
            CGRect eyeRect = CGRectMake(rightEyePosition.x - eyeWidth / 2.0f,
                                        rightEyePosition.y - eyeHeight / 2.0f,
                                        eyeWidth,
                                        eyeHeight);
            [self drawEyeBallForFrame:eyeRect];
        }

        CGContextRestoreGState(context);
    }

    UIImage *overlayImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return overlayImage;
}

- (void)drawEyeBallForFrame:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextAddEllipseInRect(context, rect);
    CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
    CGContextFillPath(context);

    CGFloat x, y, eyeSizeWidth, eyeSizeHeight;
    eyeSizeWidth = rect.size.width * kRetinaToEyeScaleFactor;
    eyeSizeHeight = rect.size.height * kRetinaToEyeScaleFactor;

    x = arc4random_uniform((uint32_t)(rect.size.width - eyeSizeWidth));
    y = arc4random_uniform((uint32_t)(rect.size.height - eyeSizeHeight));
    x += rect.origin.x;
    y += rect.origin.y;

    CGFloat eyeSize = MIN(eyeSizeWidth, eyeSizeHeight);
    CGRect eyeBallRect = CGRectMake(x, y, eyeSize, eyeSize);
    CGContextAddEllipseInRect(context, eyeBallRect);
    CGContextSetFillColorWithColor(context, [UIColor blackColor].CGColor);
    CGContextFillPath(context);
}

Hope it helps.
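
For completeness, a sketch of how faceOverlayImageFromImage: might be called (this call site is an assumption, not part of the original answer, and self.imageView is a hypothetical property). Detection is relatively slow, so in the spirit of the GCD tutorial the work is dispatched off the main thread:

// Hypothetical call site: detect on a background queue, update UI on main.
UIImage *sourceImage = self.imageView.image; // assumed UIImageView property
dispatch_async(dispatch_get_global_queue(QOS_CLASS_USER_INITIATED, 0), ^{
    UIImage *overlaid = [self faceOverlayImageFromImage:sourceImage];
    dispatch_async(dispatch_get_main_queue(), ^{
        self.imageView.image = overlaid; // assumed UIImageView property
    });
});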