
ML Kit iOS face detection error

I have been trying to use ML Kit's face detection, but there is a problem: it does not work with the front camera, and it only detects faces when I use the rear camera on my phone. I printed out the orientations and everything matches for both front and rear. On my iPhone X it seems to work with both cameras, but when I tested it on an iPhone 11 and an iPhone X Max it only works with the rear camera. I am not sure what is causing this inconsistency. The code I am using is below; note that every image passed to the photoVerification function is first run through the fixedOrientation function to ensure consistency:

func photoVerification(image: UIImage?) {
    guard let imageFace = image else { return }

    // Enhanced face detection: favor accuracy over speed.
    let options = FaceDetectorOptions()
    options.performanceMode = .accurate

    // Initialize the face detector with the given options.
    let faceDetector = FaceDetector.faceDetector(options: options)

    // Initialize a VisionImage object with the given UIImage.
    let visionImage = VisionImage(image: imageFace)
    visionImage.orientation = imageFace.imageOrientation
    print("$$The Images Orientation is: ", imageFace.imageOrientation.rawValue)

    faceDetector.process(visionImage) { faces, error in
        guard error == nil, let faces = faces, !faces.isEmpty else {
            // No face detected: report the error and mark the user as unverified.
            let errorString = error?.localizedDescription ?? "NO Results Possible"
            print("Error: ", errorString)
            print("No face detected!")
            self.userVerified = false
            self.addVerifiedTag(isVerified: false)
            return
        }

        // A face has been detected: offer the verified tag to the user.
        print("Face detected!")
        self.userVerified = true
        self.addVerifiedTag(isVerified: true)
    }
}
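
For context, the flow described above (every captured photo is normalized with fixedOrientation before verification) would be driven roughly as in the sketch below; capturedPhoto is a hypothetical UIImage from the photo capture pipeline, not a name from the original code.

// Sketch of the call flow described in the question; `capturedPhoto` is a
// hypothetical UIImage obtained from the capture pipeline (assumption).
if let normalized = fixedOrientation(image: capturedPhoto) {
    photoVerification(image: normalized)
}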


func fixedOrientation(image: UIImage) -> UIImage? {
    guard image.imageOrientation != .up else {
        // Orientation is already correct.
        return image
    }
    guard let cgImage = image.cgImage else {
        // CGImage not available.
        return nil
    }
    guard let colorSpace = cgImage.colorSpace,
          let ctx = CGContext(data: nil,
                              width: Int(image.size.width),
                              height: Int(image.size.height),
                              bitsPerComponent: cgImage.bitsPerComponent,
                              bytesPerRow: 0,
                              space: colorSpace,
                              bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else {
        return nil
    }

    var transform = CGAffineTransform.identity

    switch image.imageOrientation {
    case .down, .downMirrored:
        transform = transform.translatedBy(x: image.size.width, y: image.size.height)
        transform = transform.rotated(by: CGFloat.pi)
    case .left, .leftMirrored:
        transform = transform.translatedBy(x: image.size.width, y: 0)
        transform = transform.rotated(by: CGFloat.pi / 2.0)
    case .right, .rightMirrored:
        transform = transform.translatedBy(x: 0, y: image.size.height)
        transform = transform.rotated(by: CGFloat.pi / -2.0)
    case .up, .upMirrored:
        break
    @unknown default:
        break
    }

    // Flip the image once more if needed, to undo mirroring.
    switch image.imageOrientation {
    case .upMirrored, .downMirrored:
        transform = transform.translatedBy(x: image.size.width, y: 0)
        transform = transform.scaledBy(x: -1, y: 1)
    case .leftMirrored, .rightMirrored:
        transform = transform.translatedBy(x: image.size.height, y: 0)
        transform = transform.scaledBy(x: -1, y: 1)
    case .up, .down, .left, .right:
        break
    @unknown default:
        break
    }

    ctx.concatenate(transform)

    switch image.imageOrientation {
    case .left, .leftMirrored, .right, .rightMirrored:
        ctx.draw(cgImage, in: CGRect(x: 0, y: 0, width: image.size.height, height: image.size.width))
    default:
        ctx.draw(cgImage, in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
    }

    guard let newCGImage = ctx.makeImage() else { return nil }
    return UIImage(cgImage: newCGImage, scale: 1, orientation: .up)
}
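
As an aside, if fixedOrientation only needs to bake the EXIF orientation into the pixel data, a shorter equivalent (a sketch, not from the original post) is to redraw the image with UIGraphicsImageRenderer, which applies imageOrientation automatically:

import UIKit

// Sketch: UIImage.draw(in:) already honors imageOrientation, so redrawing
// the image produces an .up-oriented copy without manual transforms.
func normalizedImage(_ image: UIImage) -> UIImage {
    guard image.imageOrientation != .up else { return image }
    let format = image.imageRendererFormat
    return UIGraphicsImageRenderer(size: image.size, format: format).image { _ in
        image.draw(in: CGRect(origin: .zero, size: image.size))
    }
}
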
The Google ML Kit SDK from your post works for both the front and rear cameras on my iPhone 11. My phone runs iOS 13.4 and I am using Xcode 11.6. You can look at the iOS Quickstart sample apps, available in both Swift and Objective-C, which demonstrate face detection and other features using photos taken with the front or rear camera, or a live video preview:

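For reference, the quickstart samples handle the front/rear difference by deriving the VisionImage orientation from the device orientation and the camera position rather than from UIImage.imageOrientation. The sketch below follows that pattern; the helper name and the capture wiring are assumptions modeled on those samples, not code from the question or answer.

import AVFoundation
import MLKitFaceDetection
import MLKitVision
import UIKit

// Sketch, modeled on the ML Kit iOS quickstart: map the device orientation
// and camera position to the UIImage.Orientation that ML Kit expects.
func imageOrientation(deviceOrientation: UIDeviceOrientation,
                      cameraPosition: AVCaptureDevice.Position) -> UIImage.Orientation {
    switch deviceOrientation {
    case .portrait:
        return cameraPosition == .front ? .leftMirrored : .right
    case .landscapeLeft:
        return cameraPosition == .front ? .downMirrored : .up
    case .portraitUpsideDown:
        return cameraPosition == .front ? .rightMirrored : .left
    case .landscapeRight:
        return cameraPosition == .front ? .upMirrored : .down
    case .faceDown, .faceUp, .unknown:
        return .up
    @unknown default:
        return .up
    }
}

// Sketch of the live-video path: wrap each CMSampleBuffer in a VisionImage,
// set its orientation with the helper above, and run the detector.
// Assumption: an AVCaptureVideoDataOutput delegate supplies the buffers.
func detectFaces(in sampleBuffer: CMSampleBuffer,
                 cameraPosition: AVCaptureDevice.Position,
                 faceDetector: FaceDetector) {
    let visionImage = VisionImage(buffer: sampleBuffer)
    visionImage.orientation = imageOrientation(
        deviceOrientation: UIDevice.current.orientation,
        cameraPosition: cameraPosition)

    faceDetector.process(visionImage) { faces, error in
        guard error == nil, let faces = faces, !faces.isEmpty else {
            print("No face detected")
            return
        }
        print("Detected \(faces.count) face(s)")
    }
}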