iPhone iOS face detection problem

I'm trying to use CoreImage's face detection in iOS 5, but it isn't detecting anything. I'm trying to detect faces in an image that was just captured by the camera, using the following code:

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage *image = [info objectForKey:@"UIImagePickerControllerOriginalImage"];
    NSDictionary *detectorOptions = [[NSDictionary alloc] initWithObjectsAndKeys:CIDetectorAccuracyHigh, CIDetectorAccuracy, nil];     
    CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:detectorOptions];
    NSArray *features = [faceDetector featuresInImage:image.CIImage];
    NSLog(@"Features = %@", features);
    [self dismissModalViewControllerAnimated:YES];
}

This compiles and runs fine, but the features array is always empty no matter what's in the image... Any ideas?

OK, reading the documentation carefully always helps. In the UIImage docs, under the CIImage property, it says: "If the UIImage object was initialized using a CGImageRef, the value of the property is nil." Apparently UIImagePickerController does initialize the image from a CGImageRef, because that property is indeed nil. To make the code above work, you need to add:

CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];
and change this line:

NSArray *features = [faceDetector featuresInImage:ciImage];
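
Putting the fix together, a minimal sketch of the corrected delegate method might look like this (same flow as the question's code, with the CIImage built explicitly from the CGImage backing store; error handling omitted):

- (void)imagePickerController:(UIImagePickerController *)picker didFinishPickingMediaWithInfo:(NSDictionary *)info {
    UIImage *image = [info objectForKey:UIImagePickerControllerOriginalImage];

    // image.CIImage is nil for a CGImage-backed UIImage, so build the CIImage explicitly
    CIImage *ciImage = [CIImage imageWithCGImage:image.CGImage];

    NSDictionary *detectorOptions = [NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy];
    CIDetector *faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:detectorOptions];

    NSArray *features = [faceDetector featuresInImage:ciImage];
    NSLog(@"Features = %@", features);

    [self dismissModalViewControllerAnimated:YES];
}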

The other big thing I noticed is that face detection from still images doesn't really work on the low-resolution images from the front camera! It works fine every time I use the high-resolution rear camera. Perhaps the algorithm is tuned for high resolution.

Try the following. Assuming you have the photo loaded in the image variable:

NSDictionary *options = [NSDictionary dictionaryWithObject:CIDetectorAccuracyLow forKey:CIDetectorAccuracy];
CIDetector *detector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:options];

CIImage *ciImage = [CIImage imageWithCGImage:[image CGImage]];
// CIDetectorImageOrientation expects an EXIF-style orientation value (1-8)
NSNumber *orientation = [NSNumber numberWithInt:[image imageOrientation] + 1];
NSDictionary *fOptions = [NSDictionary dictionaryWithObject:orientation forKey:CIDetectorImageOrientation];
NSArray *features = [detector featuresInImage:ciImage options:fOptions];

for (CIFaceFeature *f in features) {
    NSLog(@"left eye found: %@", (f.hasLeftEyePosition ? @"YES" : @"NO"));
    NSLog(@"right eye found: %@", (f.hasRightEyePosition ? @"YES" : @"NO"));
    NSLog(@"mouth found: %@", (f.hasMouthPosition ? @"YES" : @"NO"));

    if (f.hasLeftEyePosition)
        NSLog(@"left eye position x = %f , y = %f", f.leftEyePosition.x, f.leftEyePosition.y);
    if (f.hasRightEyePosition)
        NSLog(@"right eye position x = %f , y = %f", f.rightEyePosition.x, f.rightEyePosition.y);
    if (f.hasMouthPosition)
        NSLog(@"mouth position x = %f , y = %f", f.mouthPosition.x, f.mouthPosition.y);
}

I can't reply to your @14:52 comment directly, but I've been playing with face detection for the front camera, and I was going around in circles because I couldn't get the front camera to pick up my face at all.

It turns out it's extremely sensitive to rotation. I noticed that when holding the iPad 2 in portrait (as you'd expect when using the front camera), I was getting recognition accuracy of less than 10%. On a whim, I turned it sideways and got 100% recognition with the front camera.

The simple fix, if you're using the front camera always in portrait, is to add this little snippet:

NSDictionary* imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6] forKey:CIDetectorImageOrientation];
NSArray* features = [detector featuresInImage:image options:imageOptions];

The 6 there forces the detector to operate in portrait mode. Apple's sample has a whole bunch of utility methods for figuring out which orientation you're in, if you need to determine your orientation dynamically.
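
The original answer doesn't include those utility methods, but if all you need is to turn a UIImage's imageOrientation into the EXIF value that CIDetectorImageOrientation expects, a small helper like this (hypothetical name, standard EXIF mapping) is enough:

// Hypothetical helper: maps UIImageOrientation to the EXIF orientation
// values (1-8) that CIDetectorImageOrientation expects
static int ExifOrientationFromUIImageOrientation(UIImageOrientation orientation) {
    switch (orientation) {
        case UIImageOrientationUp:            return 1;
        case UIImageOrientationDown:          return 3;
        case UIImageOrientationLeft:          return 8;
        case UIImageOrientationRight:         return 6;
        case UIImageOrientationUpMirrored:    return 2;
        case UIImageOrientationDownMirrored:  return 4;
        case UIImageOrientationLeftMirrored:  return 5;
        case UIImageOrientationRightMirrored: return 7;
    }
    return 1;
}

Pass the result wrapped in an NSNumber as the value for CIDetectorImageOrientation, just like the hard-coded 6 above.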

None of the answers above worked for me (iOS 8.4) on an iPad mini and iPad Air 2.

My observation is the same as Rob Wormald's: face detection worked when the iPad was rotated, so I rotated the image :)


Still relevant today! Brilliant.
let ciImage = CIImage(CVPixelBuffer: pixelBuffer, options: attachments)
let angle = CGFloat(-M_PI/2)
let rotatedImage = ciImage.imageByApplyingTransform(CGAffineTransformMakeRotation(angle))
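
That snippet is Swift 2 syntax and assumes pixelBuffer and attachments come from a capture callback. The equivalent rotation in Objective-C (a sketch, not part of the original answer, reusing the ciImage and detector from the earlier answers) would be:

// Rotate the CIImage by -90 degrees before running the detector, mirroring the Swift snippet above
CIImage *rotatedImage = [ciImage imageByApplyingTransform:CGAffineTransformMakeRotation(-M_PI_2)];
NSArray *features = [detector featuresInImage:rotatedImage];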