
Swift Vision framework: detecting whether a face is looking up or down

I am using the Vision framework to detect face orientation. VNFaceObservation has roll and yaw properties, but unfortunately no pitch. How can I compute a pitch value? I need to check whether a person is looking up or down.

Someone on another forum suggested using the distance between the pupils and the eyebrows, but I did not get good results, especially when people change their distance from the screen, because that distance varies with it. I tried dividing the difference by the face boundingBox height, but that value is also dynamic: when the user raises their head, the box around the face shrinks.

People whose faces differ a lot from one another are yet another problem. Has anyone implemented something like this with Vision? Thanks for your help.
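
If the deployment target allows it, newer Vision releases (iOS 15 / macOS 12, to my knowledge) added a pitch property to VNFaceObservation alongside roll and yaw, which may make the landmark-based approximation unnecessary. A minimal hedged sketch; the observation is assumed to come from a completed VNDetectFaceRectanglesRequest, and the sign convention for pitch should be verified empirically:

  import Vision

  func logOrientation(for observation: VNFaceObservation) {
      if #available(iOS 15.0, macOS 12.0, *) {
          // pitch is an NSNumber? in radians, like roll and yaw.
          if let pitch = observation.pitch?.doubleValue {
              print("pitch: \(pitch) rad")
          }
      }
      if let yaw = observation.yaw?.doubleValue,
         let roll = observation.roll?.doubleValue {
          print("yaw: \(yaw) rad, roll: \(roll) rad")
      }
  }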

  var origins: [CGPoint] = []

  // Origin based on the left and right pupils.
  if let point = result.landmarks?.leftPupil?.normalizedPoints.first {
    let origin = landmark(point: point, to: result.boundingBox)
    origins.append(origin)
  }
  if let point = result.landmarks?.rightPupil?.normalizedPoints.first {
    let origin = landmark(point: point, to: result.boundingBox)
    origins.append(origin)
  }

  // Calculate the average y coordinate of the pupil origins.
  let avgY = origins.map { $0.y }.reduce(0.0, +) / CGFloat(origins.count)

  // Get the eyebrow locations.
  var eyebrowOrigins: [CGPoint] = []

  if let point = result.landmarks?.leftEyebrow?.normalizedPoints.first {
    let origin = landmark(point: point, to: result.boundingBox)
    eyebrowOrigins.append(origin)
  }
  if let point = result.landmarks?.rightEyebrow?.normalizedPoints.first {
    let origin = landmark(point: point, to: result.boundingBox)
    eyebrowOrigins.append(origin)
  }

  // Calculate the average y coordinate of the eyebrows.
  // (Divide by eyebrowOrigins.count, not origins.count.)
  let eyebrowAvgY = eyebrowOrigins.map { $0.y }.reduce(0.0, +) / CGFloat(eyebrowOrigins.count)

  // Compare the pupil positions to the eyebrows.
  var focusY: CGFloat = 0
  let diff = avgY - eyebrowAvgY

  // Approximate face height inside the bounding box.
  let faceHeight = result.boundingBox.height
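
One way the snippet above might continue, sketched under the assumption (from the question itself) that dividing by the bounding-box height partially compensates for the user's distance from the screen. The baseline and tolerance values here are purely illustrative, not from the source, and whether a larger ratio means "up" or "down" depends on the coordinate convention, so it should be calibrated against a neutral pose per user:

  // Normalize the pupil-to-eyebrow gap by the face height so the ratio
  // is less sensitive to how far the user is from the screen.
  if faceHeight > 0 {
    focusY = abs(diff) / faceHeight
  }

  // Illustrative values only; capture neutralRatio while the user
  // looks straight at the screen, then compare against it.
  let neutralRatio: CGFloat = 0.2   // assumed calibration baseline
  let tolerance: CGFloat = 0.05     // assumed dead zone

  if focusY > neutralRatio + tolerance {
    print("gap larger than neutral: head likely tilted one way")
  } else if focusY < neutralRatio - tolerance {
    print("gap smaller than neutral: head likely tilted the other way")
  }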