SwiftUI: Drawing rectangles around elements recognized by Firebase ML Kit


I am currently trying to draw the boxes of text recognized by Firebase ML Kit on top of an image. So far I have not been successful; I cannot see a single box, because they are all rendered off-screen. I used this article as a reference: and also this project:

Here is the view that is supposed to display the boxes:

struct ImageScanned: View {
    var image: UIImage
    @Binding var rectangles: [CGRect]
    @State var viewSize: CGSize = .zero

    var body: some View {
        // TODO: fix scaling
        ZStack {
            Image(uiImage: image)
                .resizable()
                .scaledToFit()
                .overlay(
                    GeometryReader { geometry in
                        ZStack {
                            ForEach(self.transformRectangles(geometry: geometry)) { rect in
                                Rectangle()
                                    .path(in: CGRect(
                                        x: rect.x,
                                        y: rect.y,
                                        width: rect.width,
                                        height: rect.height))
                                    .stroke(Color.red, lineWidth: 2.0)
                            }
                        }
                    }
                )
        }
    }

    private func transformRectangles(geometry: GeometryProxy) -> [DetectedRectangle] {
        var rectangles: [DetectedRectangle] = []

        let imageViewWidth = geometry.frame(in: .global).size.width
        let imageViewHeight = geometry.frame(in: .global).size.height
        let imageWidth = image.size.width
        let imageHeight = image.size.height

        let imageViewAspectRatio = imageViewWidth / imageViewHeight
        let imageAspectRatio = imageWidth / imageHeight
        let scale = (imageViewAspectRatio > imageAspectRatio)
            ? imageViewHeight / imageHeight : imageViewWidth / imageWidth

        let scaledImageWidth = imageWidth * scale
        let scaledImageHeight = imageHeight * scale
        let xValue = (imageViewWidth - scaledImageWidth) / CGFloat(2.0)
        let yValue = (imageViewHeight - scaledImageHeight) / CGFloat(2.0)

        var transform = CGAffineTransform.identity.translatedBy(x: xValue, y: yValue)
        transform = transform.scaledBy(x: scale, y: scale)

        for rect in self.rectangles {
            let rectangle = rect.applying(transform)
            rectangles.append(DetectedRectangle(width: rectangle.width, height: rectangle.height,
                                                x: rectangle.minX, y: rectangle.minY))
        }
        return rectangles
    }
}
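(DetectedRectangle is not shown in the question. A minimal sketch of what it presumably looks like; note that it must conform to Identifiable, otherwise the plain ForEach initializer above does not compile:)

import SwiftUI

// Assumed model for the detected boxes (not part of the question).
// ForEach without an explicit `id:` requires Identifiable conformance.
struct DetectedRectangle: Identifiable {
    let id = UUID()
    var width: CGFloat
    var height: CGFloat
    var x: CGFloat
    var y: CGFloat
}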


And here is the view that embeds it:

struct StartScanView: View {
    @State var showCaptureImageView: Bool = false
    @State var image: UIImage? = nil
    @State var rectangles: [CGRect] = []

    var body: some View {
        ZStack {
            if showCaptureImageView {
                CaptureImageView(isShown: $showCaptureImageView, image: $image)
            } else {
                VStack {
                    Button(action: {
                        self.showCaptureImageView.toggle()
                    }) {
                        Text("Start Scanning")
                    }

                    // show the view with rectangles on top of the image here
                    if self.image != nil {
                        ImageScanned(image: self.image ?? UIImage(), rectangles: $rectangles)
                    }

                    Button(action: {
                        self.processImage()
                    }) {
                        Text("Process Image")
                    }
                }
            }
        }
    }

    func processImage() {
        let scaledImageProcessor = ScaledElementProcessor()
        if image != nil {
            scaledImageProcessor.process(in: image!) { text in
                for block in text.blocks {
                    for line in block.lines {
                        for element in line.elements {
                            self.rectangles.append(element.frame)
                        }
                    }
                }
            }
        }
    }
}
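ScaledElementProcessor is also not shown in the question. A minimal sketch of what it presumably wraps, assuming the on-device text recognizer from Firebase ML Kit's text recognition tutorial:

import Firebase

// Assumed helper (not part of the question): wraps ML Kit's on-device
// text recognizer and hands the recognized VisionText to the caller.
class ScaledElementProcessor {
    let textRecognizer = Vision.vision().onDeviceTextRecognizer()

    func process(in image: UIImage, callback: @escaping (VisionText) -> Void) {
        let visionImage = VisionImage(image: image)
        textRecognizer.process(visionImage) { text, error in
            guard error == nil, let text = text else { return }
            callback(text)
        }
    }
}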

The tutorial's calculation makes the rectangles too large, while the one from the sample project makes them too small (it is similar for the height). Unfortunately, I cannot find out at which size Firebase determines the element frames. This is what it looks like:

Without scaling the width and height at all, the rectangles seem to roughly (though not exactly) match the size they should have, which leads me to the hypothesis that ML Kit's frame sizes are not computed proportionally to the image's width and height.
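For reference, this is the arithmetic the aspect-fit mapping has to perform. A minimal sketch with made-up numbers (both the image dimensions and the element frame are hypothetical):

// Made-up numbers: a 3000x4000 px image shown aspect-fit in a 300x500 pt view.
let scale = min(300.0 / 3000.0, 500.0 / 4000.0)   // 0.1, the width constrains
let xOffset = (300.0 - 3000.0 * scale) / 2.0      // 0
let yOffset = (500.0 - 4000.0 * scale) / 2.0      // 50, vertical letterboxing

// A frame reported by ML Kit in image coordinates...
let elementFrame = CGRect(x: 600, y: 800, width: 900, height: 120)

// ...maps into view coordinates by scaling first, then translating:
let viewRect = elementFrame
    .applying(CGAffineTransform(scaleX: scale, y: scale))
    .offsetBy(dx: xOffset, dy: yOffset)           // (60, 130, 90, 12)

If the boxes land off-screen or at the wrong size, one of these three numbers (scale, xOffset, yOffset) is being computed against the wrong coordinate space.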

ML Kit has a quickstart app that shows exactly what you are trying to do: recognize text and draw a rectangle around the text. Here is the Swift code:
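As a rough sketch of the general UIKit approach used there (my paraphrase, not the quickstart's exact code), drawing one transformed frame boils down to adding a shape layer on top of the image view:

import UIKit

// Sketch: stroke one recognized frame over a UIImageView. The transform is
// the same aspect-fit mapping as in transformMatrix below.
func draw(_ frame: CGRect, on imageView: UIImageView, using transform: CGAffineTransform) {
    let box = CAShapeLayer()
    box.path = UIBezierPath(rect: frame.applying(transform)).cgPath
    box.strokeColor = UIColor.red.cgColor
    box.fillColor = UIColor.clear.cgColor
    box.lineWidth = 2.0
    imageView.layer.addSublayer(box)
}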


This is how I changed the ForEach loop:

Image(uiImage: uiimage!)
    .resizable()
    .scaledToFit()
    .overlay(
        GeometryReader { (geometry: GeometryProxy) in
            ForEach(self.blocks, id: \.self) { (block: VisionTextBlock) in
                Rectangle()
                    .path(in: block.frame.applying(self.transformMatrix(geometry: geometry, image: self.uiimage!)))
                    .stroke(Color.purple, lineWidth: 2.0)
            }
        }
    )
Instead of passing x, y, width, and height separately, I now pass the return value of the transformMatrix function to the path function.

My transformMatrix function is:

private func transformMatrix(geometry: GeometryProxy, image: UIImage) -> CGAffineTransform {
    let imageViewWidth = geometry.size.width
    let imageViewHeight = geometry.size.height
    let imageWidth = image.size.width
    let imageHeight = image.size.height

    let imageViewAspectRatio = imageViewWidth / imageViewHeight
    let imageAspectRatio = imageWidth / imageHeight
    let scale = (imageViewAspectRatio > imageAspectRatio)
        ? imageViewHeight / imageHeight
        : imageViewWidth / imageWidth

    // The image view's content mode is aspect-fit, which scales the image to
    // fit the image view while keeping the aspect ratio. Multiplying the
    // image's original size by `scale` gives the displayed size.
    let scaledImageWidth = imageWidth * scale
    let scaledImageHeight = imageHeight * scale
    let xValue = (imageViewWidth - scaledImageWidth) / CGFloat(2.0)
    let yValue = (imageViewHeight - scaledImageHeight) / CGFloat(2.0)

    var transform = CGAffineTransform.identity.translatedBy(x: xValue, y: yValue)
    transform = transform.scaledBy(x: scale, y: scale)
    return transform
}
The output is:


If I try their approach, I get the same result as with my modified version: the rectangles end up way too small (roughly 1/6 of the size they should have). Also, they use UIKit instead of SwiftUI… I have just edited my question so you can see how I modified it.
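One thing worth ruling out (my suggestion, not something established in this thread): print the union of all detected frames next to the image bounds. If the union is much larger or smaller than image.size, the frames are not in the coordinate space the transform assumes (for example pixels vs. points, or a rotated image).

// Hypothetical debugging aid: compare ML Kit's frames with the image bounds.
func debugPrintFrames(_ frames: [CGRect], image: UIImage) {
    let union = frames.reduce(CGRect.null) { $0.union($1) }
    print("image size (points): \(image.size), image scale: \(image.scale)")
    print("union of element frames: \(union)")
}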