Swift: Google ML object detection on iOS and drawing bounding boxes on the image

I am building a Google ML object detection app. The flow is that the user captures an image and sends it to a server for object detection. From the server I get a JSON response like this:

{
  "payload": [
    {
      "imageObjectDetection": {
        "boundingBox": {
          "normalizedVertices": [
            {
              "x": 0.034553755,
              "y": 0.015524037
            },
            {
              "x": 0.941527,
              "y": 0.9912563
            }
          ]
        },
        "score": 0.9997793
      },
      "displayName": "Salad"
    }
  ]
}
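
For reference, a response in this shape can be decoded with Codable; the type names below are just an illustrative sketch of my own, not part of any SDK:

struct DetectionResponse: Decodable {
    let payload: [Detection]
}

struct Detection: Decodable {
    let imageObjectDetection: ImageObjectDetection
    let displayName: String
}

struct ImageObjectDetection: Decodable {
    let boundingBox: BoundingBox
    let score: Double
}

struct BoundingBox: Decodable {
    let normalizedVertices: [Vertex]
}

struct Vertex: Decodable {
    // Optional because a coordinate may be omitted when its value is 0.
    let x: Double?
    let y: Double?
}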
From the response above, I want to draw a clickable bounding box on the image.


Please suggest a framework or library for drawing selectable bounding boxes on an image.

You don't need any library for this, because it is a very simple process. Just call the function below with the x and y values you get from the API and you are done. Of course, add your own code inside the tap handler.


// Inside your view controller, where yourImageView is the UIImageView showing the photo.
// x1/y1 and x2/y2 are the normalized (0...1) top-left and bottom-right vertices.
func drawBox(x1: CGFloat, x2: CGFloat, y1: CGFloat, y2: CGFloat) {
    //1. Find the size of the bounding box:
    let width = (x2 - x1) * yourImageView.frame.width
    let height = (y2 - y1) * yourImageView.frame.height

    //2. Add a subview to yourImageView:
    let boundingBox = UIView()
    boundingBox.backgroundColor = .clear
    boundingBox.layer.borderWidth = 1
    boundingBox.layer.borderColor = UIColor.red.cgColor
    boundingBox.translatesAutoresizingMaskIntoConstraints = false
    yourImageView.addSubview(boundingBox)

    // The offsets are normalized too, so scale them by the frame just like the size.
    NSLayoutConstraint.activate([
        boundingBox.leadingAnchor.constraint(equalTo: yourImageView.leadingAnchor, constant: x1 * yourImageView.frame.width),
        boundingBox.topAnchor.constraint(equalTo: yourImageView.topAnchor, constant: y1 * yourImageView.frame.height),
        boundingBox.widthAnchor.constraint(equalToConstant: width),
        boundingBox.heightAnchor.constraint(equalToConstant: height)
    ])

    //3. Add a tap action to this view:
    let tap = UITapGestureRecognizer(target: self, action: #selector(boxTapped))
    boundingBox.isUserInteractionEnabled = true
    boundingBox.addGestureRecognizer(tap)
}

@objc private func boxTapped() {
    // action when tapped goes here
}
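
Putting it together, a rough usage sketch (assuming the Codable types from the question, a yourImageView outlet, and the raw response Data from your networking layer) could look like this:

func handleDetectionResponse(_ data: Data) {
    guard let response = try? JSONDecoder().decode(DetectionResponse.self, from: data),
          let detection = response.payload.first else {
        return
    }

    let vertices = detection.imageObjectDetection.boundingBox.normalizedVertices
    guard let topLeft = vertices.first, let bottomRight = vertices.last else { return }

    // The vertices are already normalized (0...1); drawBox scales them by the image view's frame.
    drawBox(x1: CGFloat(topLeft.x ?? 0),
            x2: CGFloat(bottomRight.x ?? 0),
            y1: CGFloat(topLeft.y ?? 0),
            y2: CGFloat(bottomRight.y ?? 0))
}

Note that this assumes the image fills yourImageView (contentMode .scaleToFill); with .scaleAspectFit you would first need to map the normalized coordinates into the rect where the image is actually drawn.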