Parameters for the OpenCV eye detector on iPhone

I am trying to map 3D glasses onto a face on the iPhone, using the OpenCV eye detection shown below. However, the eye detection is not very robust: it stops working when my eyes narrow a little, when I turn my face slightly, or when I look down at the camera. Even with a frontal face it only detects the eyes in about half of the frames. I have read in many places that pre-processing the image and tuning certain parameters helps, but I cannot get the right combination. Below is the pre-processing and the parameters I am using. If anyone can suggest or share better parameters, please help. Thanks.

I get the grayscale image from the pixelBuffer and then call processFrame:

        if (format == kCVPixelFormatType_420YpCbCr8BiPlanarFullRange) {
            // For grayscale mode, the luminance channel of the YUV data is used
            CVPixelBufferLockBaseAddress(pixelBuffer, 0);
            void *baseaddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);

            cv::Mat mat(videoRect.size.height, videoRect.size.width, CV_8UC1, baseaddress, 0);

            [self processFrame:mat videoRect:videoRect videoOrientation:videoOrientation];

            CVPixelBufferUnlockBaseAddress(pixelBuffer, 0); 
        }
        else if (format == kCVPixelFormatType_32BGRA) {
            // For color mode a 4-channel cv::Mat is created from the BGRA data
            CVPixelBufferLockBaseAddress(pixelBuffer, 0);
            void *baseaddress = CVPixelBufferGetBaseAddress(pixelBuffer);

            cv::Mat mat(videoRect.size.height, videoRect.size.width, CV_8UC4, baseaddress, 0);

            [self processFrame:mat videoRect:videoRect videoOrientation:videoOrientation];

            CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);    
        }
        else {
            NSLog(@"Unsupported video format");
        }
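
The cv::Mat constructors above pass 0 for the step, so OpenCV assumes the rows are tightly packed. If the pixel buffer pads its rows, a variant of the grayscale branch that passes the plane's actual bytes-per-row might look like this sketch (not my original code):

    // Sketch: build the grayscale Mat using the plane's real stride
    // instead of assuming tightly packed rows.
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *baseaddress = CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0);
    size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(pixelBuffer, 0);

    cv::Mat gray((int)videoRect.size.height, (int)videoRect.size.width,
                 CV_8UC1, baseaddress, bytesPerRow);

    [self processFrame:gray videoRect:videoRect videoOrientation:videoOrientation];

    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);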
Initializing the classifiers:

    NSString * const kFaceCascadeFilename = @"haarcascade_frontalface_alt2";
    NSString * const kEyesCascadeFilename = @"haarcascade_eye";
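
The loading itself is not shown above; a minimal sketch, assuming the .xml cascades are bundled as resources and _faceCascade/_eyesCascade are cv::CascadeClassifier ivars, would be:

    // Sketch: load the Haar cascades from the main bundle.
    // _faceCascade and _eyesCascade are assumed to be cv::CascadeClassifier ivars.
    NSString *facePath = [[NSBundle mainBundle] pathForResource:kFaceCascadeFilename
                                                         ofType:@"xml"];
    NSString *eyesPath = [[NSBundle mainBundle] pathForResource:kEyesCascadeFilename
                                                         ofType:@"xml"];

    if (!_faceCascade.load([facePath UTF8String]) ||
        !_eyesCascade.load([eyesPath UTF8String])) {
        NSLog(@"Failed to load cascade classifiers");
    }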
processFrame: performs the detection:

- (void)processFrame:(cv::Mat &)mat videoRect:(CGRect)rect videoOrientation:(AVCaptureVideoOrientation)videOrientation
{
    // Shrink video frame to 320X240
    cv::resize(mat, mat, cv::Size(), 0.5f, 0.5f, CV_INTER_LINEAR);
    rect.size.width /= 2.0f;
    rect.size.height /= 2.0f;

    // Rotate video frame by 90deg to portrait by combining a transpose and a flip
    // Note that AVCaptureVideoDataOutput connection does NOT support hardware-accelerated
    // rotation and mirroring via videoOrientation and setVideoMirrored properties so we
    // need to do the rotation in software here.
    cv::transpose(mat, mat);
    CGFloat temp = rect.size.width;
    rect.size.width = rect.size.height;
    rect.size.height = temp;

    if (videOrientation == AVCaptureVideoOrientationLandscapeRight)
    {
        // flip around y axis for back camera
        cv::flip(mat, mat, 1);
    }
    else {
        // Front camera output needs to be mirrored to match preview layer so no flip is required here
    }

    videOrientation = AVCaptureVideoOrientationPortrait;

    // Detect faces
    std::vector<cv::Rect> faces;
    std::vector<cv::Rect> eyes;


    _faceCascade.detectMultiScale(mat, faces, 1.1, 2, 0 |CV_HAAR_SCALE_IMAGE, cv::Size(30, 30));

    // We will usually have only one face in frame
    if (faces.size() >0){
        cv::Mat faceROI = mat(faces.front());
        _eyesCascade.detectMultiScale( faceROI, eyes, 1.15, 3.0, 0 , cv::Size(30, 30));
    }

    // Dispatch updating of face markers to main queue
    dispatch_sync(dispatch_get_main_queue(), ^{
        [self displayFaces:faces eyes:eyes
             forVideoRect:rect
          videoOrientation:videOrientation];    
    });
}
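
The pre-processing suggestions I keep reading about are histogram equalization of the face ROI and restricting the eye search to the upper part of the face. A sketch of what I understand that to look like is below; the scale factor, minNeighbors and minSize values are just guesses, which is exactly the combination I cannot get right:

    // Sketch of the commonly suggested pre-processing (values are illustrative):
    if (faces.size() > 0) {
        cv::Rect face = faces.front();

        // Eyes sit in the upper part of the face, so search only the top ~60%.
        cv::Rect upperFace(face.x, face.y, face.width, (int)(face.height * 0.6f));
        cv::Mat faceROI = mat(upperFace);

        // Histogram equalization to reduce the effect of uneven lighting.
        cv::Mat equalized;
        cv::equalizeHist(faceROI, equalized);

        // Scale the minimum eye size with the face instead of a fixed 30x30.
        int minEye = face.width / 8;
        _eyesCascade.detectMultiScale(equalized, eyes,
                                      1.1,   // finer scale step
                                      2,     // more permissive minNeighbors
                                      CV_HAAR_SCALE_IMAGE,
                                      cv::Size(minEye, minEye));

        // Note: the returned eye rects are relative to upperFace, not to mat.
    }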

I have also looked into OpenCV's eye detection, and its quality seems rather low. The two approaches I am investigating are a completely new detector, based on a number of research papers on the topic, and a new classifier trained on my own data. Both are time-consuming, so OpenCV may not provide a suitable solution. Do you know how Apple's built-in CIDetector performs for eye detection and the other features? I also found this video of the faceL project, which seems to give reliable results, but I could not find any implementation of it for the iPhone.

If you only want to build an iPhone app that supports eye detection, the iPhone SDK is a good choice. OpenCV gives you an impressive set of tools for developing a custom detector that you can port to other platforms or extend, but the resources required go beyond what a typical app developer can invest; this feature alone could take months of work and a lot of mathematics.
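
If you go the iPhone SDK route, Core Image's face detector reports eye positions out of the box, so it is worth a quick test. A minimal sketch, assuming ciImage is a CIImage built from the current frame:

    // Sketch: Core Image face detection with eye positions.
    // Assumes `ciImage` is a CIImage created from the current frame.
    CIDetector *detector =
        [CIDetector detectorOfType:CIDetectorTypeFace
                           context:nil
                           options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];

    NSArray *features = [detector featuresInImage:ciImage options:nil];
    for (CIFaceFeature *face in features) {
        if (face.hasLeftEyePosition) {
            CGPoint leftEye = face.leftEyePosition;    // image coordinates
            NSLog(@"Left eye at %@", NSStringFromCGPoint(leftEye));
        }
        if (face.hasRightEyePosition) {
            CGPoint rightEye = face.rightEyePosition;
            NSLog(@"Right eye at %@", NSStringFromCGPoint(rightEye));
        }
    }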