iOS CIDetector gives wrong position on facial features

Now, I know the coordinate system is messed up. I've tried inverting the view and the imageView; nothing. I've then tried inverting the coordinates on the features and I still get the same problem. I know it detects the faces, eyes, and mouth, but when I try to place the overlay boxes from the sample code, they're out of position (to be exact, they're off to the right of the screen). I have no idea why this is happening.
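
For reference, Core Image reports feature coordinates with a bottom-left origin, while UIKit uses a top-left origin, so every rect and point coming out of the detector has to be flipped. A minimal sketch of that flip, assuming the image and the view are the same size (imageHeight and ciRect are placeholder names, not from my project):

    CGAffineTransform flip = CGAffineTransformMakeScale(1, -1);
    // the translation is applied in the pre-scale space, so use the image height
    flip = CGAffineTransformTranslate(flip, 0, -imageHeight);
    CGRect uikitRect = CGRectApplyAffineTransform(ciRect, flip);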

I'll post some code since I know some of you like specifics:

-(void)faceDetector
{
    // Load the picture for face detection
//    UIImageView* image = [[UIImageView alloc] initWithImage:mainImage];
    [self.imageView setImage:mainImage];
    [self.imageView setUserInteractionEnabled:YES];

    // Draw the face detection image
//    [self.view addSubview:self.imageView];

    // Execute the method used to markFaces in background
//    [self performSelectorInBackground:@selector(markFaces:) withObject:self.imageView];

    // flip image on y-axis to match coordinate system used by core image
//    [self.imageView setTransform:CGAffineTransformMakeScale(1, -1)];

    // flip the entire window to make everything right side up
//    [self.view setTransform:CGAffineTransformMakeScale(1, -1)];

//    [toolbar setTransform:CGAffineTransformMakeScale(1, -1)];
    [toolbar setFrame:CGRectMake(0, 0, 320, 44)];

    // Execute the method used to markFaces in background
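    // NOTE: markFaces: adds subviews, and UIKit calls are only safe on the
    // main thread; running it via performSelectorInBackground: can leave
    // the overlays missing or misplaced.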
    [self performSelectorInBackground:@selector(markFaces:) withObject:_imageView];
//    [self markFaces:self.imageView];
}

-(void)markFaces:(UIImageView *)facePicture
{
    // draw a CI image with the previously loaded face detection picture
    CIImage* image = [CIImage imageWithCGImage:facePicture.image.CGImage];

    // create a face detector - since speed is not an issue we'll use a high accuracy
    // detector
    CIDetector* detector = [CIDetector detectorOfType:CIDetectorTypeFace
                                              context:nil
                                              options:[NSDictionary dictionaryWithObject:CIDetectorAccuracyHigh forKey:CIDetectorAccuracy]];

//    CGAffineTransform transform = CGAffineTransformMakeScale(1, -1);
    CGAffineTransform transform = CGAffineTransformMakeScale(self.view.frame.size.width/mainImage.size.width, -self.view.frame.size.height/mainImage.size.height);
    transform = CGAffineTransformTranslate(transform, 0, -self.imageView.bounds.size.height);

    // create an array containing all the detected faces from the detector
    NSDictionary* imageOptions = [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:6] forKey:CIDetectorImageOrientation];
    NSArray* features = [detector featuresInImage:image options:imageOptions];
//    NSArray* features = [detector featuresInImage:image];

    NSLog(@"Marking Faces: Count: %d", [features count]);

    // we'll iterate through every detected face.  CIFaceFeature provides us
    // with the width for the entire face, and the coordinates of each eye
    // and the mouth if detected.  Also provided are BOOL's for the eye's and
    // mouth so we can check if they already exist.
    for(CIFaceFeature* faceFeature in features)
    {


        // create a UIView using the bounds of the face
//        UIView* faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];
        CGRect faceRect = CGRectApplyAffineTransform(faceFeature.bounds, transform);

        // get the width of the face
//        CGFloat faceWidth = faceFeature.bounds.size.width;
        CGFloat faceWidth = faceRect.size.width;

        // create a UIView using the bounds of the face
        UIView *faceView = [[UIView alloc] initWithFrame:faceRect];

        // add a border around the newly created UIView
        faceView.layer.borderWidth = 1;
        faceView.layer.borderColor = [[UIColor redColor] CGColor];

        // add the new view to create a box around the face
        [self.imageView addSubview:faceView];
        NSLog(@"Face -> X: %f, Y: %f, W: %f, H: %f",faceRect.origin.x, faceRect.origin.y, faceRect.size.width, faceRect.size.height);

        if(faceFeature.hasLeftEyePosition)
        {

            // create a UIView with a size based on the width of the face
            CGPoint leftEye = CGPointApplyAffineTransform(faceFeature.leftEyePosition, transform);
            UIView* leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(leftEye.x-faceWidth*0.15, leftEye.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)];
            // change the background color of the eye view
            [leftEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];
            // set the position of the leftEyeView based on the face
            [leftEyeView setCenter:leftEye];
            // round the corners
            leftEyeView.layer.cornerRadius = faceWidth*0.15;
            // add the view to the window
            [self.imageView addSubview:leftEyeView];
            NSLog(@"Has Left Eye -> X: %f, Y: %f",leftEye.x, leftEye.y);
        }

        if(faceFeature.hasRightEyePosition)
        {

            // create a UIView with a size based on the width of the face
            CGPoint rightEye = CGPointApplyAffineTransform(faceFeature.rightEyePosition, transform);
            UIView* rightEyeView = [[UIView alloc] initWithFrame:CGRectMake(rightEye.x-faceWidth*0.15, rightEye.y-faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)];
            // change the background color of the eye view
            [rightEyeView setBackgroundColor:[[UIColor yellowColor] colorWithAlphaComponent:0.3]];
            // set the position of the rightEyeView based on the face
            [rightEyeView setCenter:rightEye];
            // round the corners
            rightEyeView.layer.cornerRadius = faceWidth*0.15;
            // add the new view to the window
            [self.imageView addSubview:rightEyeView];
            NSLog(@"Has Right Eye -> X: %f, Y: %f", rightEye.x, rightEye.y);
        }

//        if(faceFeature.hasMouthPosition)
//        {
//            // create a UIView with a size based on the width of the face
//            UIView* mouth = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.mouthPosition.x-faceWidth*0.2, faceFeature.mouthPosition.y-faceWidth*0.2, faceWidth*0.4, faceWidth*0.4)];
//            // change the background color for the mouth to green
//            [mouth setBackgroundColor:[[UIColor greenColor] colorWithAlphaComponent:0.3]];
//            // set the position of the mouthView based on the face
//            [mouth setCenter:faceFeature.mouthPosition];
//            // round the corners
//            mouth.layer.cornerRadius = faceWidth*0.2;
//            // add the new view to the window
//            [self.imageView addSubview:mouth];
//        }
    }
}
I know the code segment is a bit long, but that's the main gist of it. The other relevant thing is that I have a UIImagePickerController that gives the user the option to pick an existing image or take a new one. The image is then set into the UIImageView on screen to be displayed along with the various boxes and circles, but they fail to line up :/
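
One thing that may matter with picker images: the code hard-codes an orientation of 6, which only matches photos whose imageOrientation is UIImageOrientationRight (typical of portrait camera shots). A sketch of the usual mapping from UIImageOrientation to the EXIF value that CIDetectorImageOrientation expects (this helper is my own, not from the sample code):

    // Hypothetical helper: convert UIImageOrientation to the EXIF
    // orientation value that CIDetectorImageOrientation expects.
    static NSNumber *exifOrientation(UIImageOrientation orientation) {
        switch (orientation) {
            case UIImageOrientationUp:            return @1;
            case UIImageOrientationUpMirrored:    return @2;
            case UIImageOrientationDown:          return @3;
            case UIImageOrientationDownMirrored:  return @4;
            case UIImageOrientationLeftMirrored:  return @5;
            case UIImageOrientationRight:         return @6;
            case UIImageOrientationRightMirrored: return @7;
            case UIImageOrientationLeft:          return @8;
        }
        return @1;
    }

    // e.g. instead of the hard-coded 6:
    NSDictionary* imageOptions = @{CIDetectorImageOrientation: exifOrientation(mainImage.imageOrientation)};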

Any help would be greatly appreciated. Thanks ~

UPDATE:

I've added a picture of what it does now so you guys can get an idea. I applied the new scaling, which works a little better, but it's still far off from what I want it to do.


Unless the image view is exactly the same size as the image, the transform is missing the scaling. Change it to

   CGAffineTransformMakeScale( viewWidth / imageWidth, - viewHeight / imageHeight )

where viewWidth and viewHeight are the size of the view, and imageWidth and imageHeight are the size of the image.
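
A minimal sketch of the full conversion that suggestion implies, using the names from the question (the translation step is needed too, because the negative scale alone leaves the rect below the origin):

    CGFloat sx = self.imageView.bounds.size.width  / mainImage.size.width;
    CGFloat sy = self.imageView.bounds.size.height / mainImage.size.height;
    CGAffineTransform transform = CGAffineTransformMakeScale(sx, -sy);
    // translate in image space, before the scale is applied
    transform = CGAffineTransformTranslate(transform, 0, -mainImage.size.height);
    CGRect faceRect = CGRectApplyAffineTransform(faceFeature.bounds, transform);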

Just use the code from Apple's SquareCam app. It aligns the square correctly in any orientation for both the front and back cameras. Interpolate along the faceRect for the correct eye and mouth positions. Note: you have to swap the x position with the y position from the face feature. Not sure why the swap, but this gives you the correct positions.
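
A rough sketch of that interpolation idea (my own illustration of the answer's description, not actual SquareCam source; faceRect is the already-converted on-screen rect):

    // Interpolate the feature's relative position within the CI-space face
    // bounds into the on-screen faceRect, swapping x and y as described.
    CGPoint p = faceFeature.leftEyePosition;
    CGRect  b = faceFeature.bounds;
    CGFloat relX = (p.y - CGRectGetMinY(b)) / CGRectGetHeight(b); // y -> x
    CGFloat relY = (p.x - CGRectGetMinX(b)) / CGRectGetWidth(b);  // x -> y
    CGPoint leftEyeOnScreen =
        CGPointMake(CGRectGetMinX(faceRect) + relX * CGRectGetWidth(faceRect),
                    CGRectGetMinY(faceRect) + relY * CGRectGetHeight(faceRect));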

So with the help of @Sven I figured it out:

    CGAffineTransform transform = CGAffineTransformMakeScale(self.imageView.bounds.size.width/mainImage.size.width, -self.imageView.bounds.size.height/mainImage.size.height);
    transform = CGAffineTransformRotate(transform, degreesToRadians(270));

I had to adjust the transform to scale between the image size and the size of the imageView, and then for some reason I had to rotate it, but it works perfectly now.
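
Note that degreesToRadians is not a built-in function; a common macro definition (assumed here) is:

    #define degreesToRadians(x) (M_PI * (x) / 180.0)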

Comments:

That works better; now instead of being shifted to the right (off screen), it draws the face boxes above the actual face. This confuses me, since I just took sample code that worked and only changed it to allow choosing between an old and a new image.

Again, I'm not sure whether you mean the UIImageView or a regular UIView, but it works better when comparing the original image (mainImage in my code) to the controller's view. These are my transform values after doing what you suggested: Transform -> A: 0.291667, B: 0.000000, C: 0.000000, D: -0.378125, Tx: 81.666664, Ty: 183.012512

[…] image, by the way :) Did you ever find a solution?