iOS: converting text observations to strings

Tags: ios, machine-learning, ocr, nslinguistictagger, apple-vision

I was looking through Apple's documentation and noticed a couple of classes that relate to text detection in UIImages:

(1) VNDetectTextRectanglesRequest

(2) VNTextObservation

It looks like they can detect characters, but I don't see a way to do anything with those characters. Once characters have been detected, how would you go about turning them into something that can be interpreted by the user?

Here's a post that gives a brief overview:


Thanks for reading.

Thanks to a GitHub user, there is an example you can test:

The problem is that the result is an array of bounding boxes for each detected character. From what I gathered from the Vision session, I think you're supposed to use CoreML to recognize the actual characters.
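
To make the shape of that result concrete, here is a minimal sketch (the helper name is mine, not from the linked sample). The request only reports rectangles, so a separate recognizer is still needed to turn them into text:

import UIKit
import Vision

// Minimal sketch: detection yields rectangles, not characters.
func detectCharacterBoxes(in image: UIImage, completion: @escaping ([VNRectangleObservation]) -> Void) {
    guard let cgImage = image.cgImage else { completion([]); return }

    let request = VNDetectTextRectanglesRequest { request, _ in
        let textObservations = (request.results as? [VNTextObservation]) ?? []
        // Each VNTextObservation is roughly a "word"; characterBoxes holds the per-character rectangles.
        completion(textObservations.flatMap { $0.characterBoxes ?? [] })
    }
    request.reportCharacterBoxes = true

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}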

Recommended WWDC 2017 talk (I haven't finished watching it either): have a look at 25:50 for a similar example called MNISTVision.


Here's another nifty app that demonstrates using Keras (TensorFlow) to train an MNIST handwriting-recognition model for use with CoreML:

SwiftOCR

I just got SwiftOCR working with small sets of text.

It uses a neural network with an MNIST model for text recognition.

TODO: VNTextObservation > SwiftOCR

Once I have one hooked up to the other, I'll post an example that uses VNTextObservation.
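
Until then, here is a rough, untested sketch of the hand-off. The cropping math and helper name are my own assumptions; SwiftOCR's recognize(_:_:) completion-handler API is used as documented in its README:

import UIKit
import Vision
import SwiftOCR

let swiftOCR = SwiftOCR()

// Sketch: crop one detected text region out of the source image and hand it to SwiftOCR.
func recognize(region: VNTextObservation, in image: UIImage, completion: @escaping (String) -> Void) {
    guard let cgImage = image.cgImage else { return }

    // Vision bounding boxes are normalized (0...1) with the origin at the bottom-left,
    // so scale to pixels and flip the y-axis before cropping.
    let width = CGFloat(cgImage.width)
    let height = CGFloat(cgImage.height)
    let box = region.boundingBox
    let rect = CGRect(x: box.minX * width,
                      y: (1 - box.maxY) * height,
                      width: box.width * width,
                      height: box.height * height)

    guard let croppedCGImage = cgImage.cropping(to: rect) else { return }
    swiftOCR.recognize(UIImage(cgImage: croppedCGImage)) { recognizedString in
        completion(recognizedString)
    }
}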

OpenCV+Tesseract OCR

I tried using OpenCV + Tesseract, but got compile errors, and then found SwiftOCR.

See also: Google Vision iOS

Note on Google Vision text recognition: the Android SDK has text detection, but there is also an iOS CocoaPod. So keep an eye on it, since text recognition should eventually be added to iOS.

// Correction: I just tried it, but only the Android version of the SDK supports text detection.

If you subscribe to releases of that repo:

Click "Subscribe to releases," and, unless a better solution turns up, you will be able to see when TextDetection gets added to the iOS part of the CocoaPod.

I've managed to get the region boxes and character boxes drawn on screen. Apple's Vision API is actually very capable. You have to convert each frame of your video into an image and feed it to the recognizer; it's much more accurate than feeding the pixel buffer straight from the camera.
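
For context, here is a minimal capture-session sketch (my own, not from the answer) that produces the sample buffers the delegate callback below consumes; the drawRegionBox/drawTextBox helpers and error handling are omitted:

import AVFoundation
import UIKit

// Minimal sketch of the camera plumbing; the Vision work happens in the delegate callback below.
final class CameraController: NSObject, AVCaptureVideoDataOutputSampleBufferDelegate {
    let session = AVCaptureSession()
    let previewLayer = AVCaptureVideoPreviewLayer()

    func start() {
        guard let camera = AVCaptureDevice.default(for: .video),
              let input = try? AVCaptureDeviceInput(device: camera),
              session.canAddInput(input) else { return }
        session.addInput(input)

        let output = AVCaptureVideoDataOutput()
        output.setSampleBufferDelegate(self, queue: DispatchQueue(label: "video-frames"))
        if session.canAddOutput(output) { session.addOutput(output) }

        previewLayer.session = session
        session.startRunning()
    }
}

The answer's snippet then becomes the captureOutput(_:didOutput:from:) implementation of that delegate: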

// AVCaptureVideoDataOutputSampleBufferDelegate callback - one Vision request per frame.
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if #available(iOS 11.0, *) {
        guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return }

        var requestOptions: [VNImageOption: Any] = [:]

        if let camData = CMGetAttachment(sampleBuffer, key: kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, attachmentModeOut: nil) {
            requestOptions = [.cameraIntrinsics: camData]
        }

        let imageRequestHandler = VNImageRequestHandler(cvPixelBuffer: pixelBuffer,
                                                        orientation: .right, // CGImagePropertyOrientation rawValue 6
                                                        options: requestOptions)

        let request = VNDetectTextRectanglesRequest(completionHandler: { (request, _) in
            guard let observations = request.results else { print("no result"); return }
            let result = observations.map { $0 as? VNTextObservation }
            DispatchQueue.main.async {
                // Remove the boxes drawn for the previous frame before drawing the new ones.
                self.previewLayer.sublayers?.removeSubrange(1...)
                for region in result {
                    guard let rg = region else { continue }
                    self.drawRegionBox(box: rg)
                    if let boxes = region?.characterBoxes {
                        for characterBox in boxes {
                            self.drawTextBox(box: characterBox)
                        }
                    }
                }
            }
        })
        request.reportCharacterBoxes = true
        try? imageRequestHandler.perform([request])
    }
}
Now I'm trying to actually recognize the text. Apple doesn't provide any built-in OCR model. I want to use CoreML for this, so I'm trying to convert a Tesseract-trained data model to CoreML.

You can find Tesseract models here: I think the next step is to write a coremltools converter that supports that type of input and outputs a .coreml file.


Alternatively, you can link against TesseractOCRiOS directly and try to feed it the region boxes and character boxes you get from the Vision API.
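
A hedged sketch of that approach, using TesseractOCRiOS' G8Tesseract class with the "eng" traineddata bundled in the app; the tuning values are only examples and this is untested:

import UIKit
import TesseractOCR

// Sketch: feed a single character crop (e.g. produced from the Vision boxes) to TesseractOCRiOS.
func recognize(characterCrop: UIImage) -> String? {
    guard let tesseract = G8Tesseract(language: "eng") else { return nil }
    tesseract.pageSegmentationMode = .singleChar  // we pass one character at a time
    tesseract.charWhitelist = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789"
    tesseract.image = characterCrop
    tesseract.recognize()
    return tesseract.recognizedText
}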

Here's how to do it:

//
//  ViewController.swift
//


import UIKit
import Vision
import CoreML

class ViewController: UIViewController {

    //HOLDS OUR INPUT
    var  inputImage:CIImage?

    //RESULT FROM OVERALL RECOGNITION
    var  recognizedWords:[String] = [String]()

    //RESULT FROM RECOGNITION
    var recognizedRegion:String = String()


    //OCR-REQUEST
    lazy var ocrRequest: VNCoreMLRequest = {
        do {
            //THIS MODEL IS TRAINED BY ME FOR FONT "Inconsolata" (Numbers 0...9 and UpperCase Characters A..Z)
            let model = try VNCoreMLModel(for:OCR().model)
            return VNCoreMLRequest(model: model, completionHandler: self.handleClassification)
        } catch {
            fatalError("cannot load model")
        }
    }()

    //OCR-HANDLER
    func handleClassification(request: VNRequest, error: Error?)
    {
        guard let observations = request.results as? [VNClassificationObservation]
            else {fatalError("unexpected result") }
        guard let best = observations.first
            else { fatalError("cant get best result")}

        self.recognizedRegion = self.recognizedRegion.appending(best.identifier)
    }

    //TEXT-DETECTION-REQUEST
    lazy var textDetectionRequest: VNDetectTextRectanglesRequest = {
        return VNDetectTextRectanglesRequest(completionHandler: self.handleDetection)
    }()

    //TEXT-DETECTION-HANDLER
    func handleDetection(request:VNRequest, error: Error?)
    {
        guard let observations = request.results as? [VNTextObservation]
            else {fatalError("unexpected result") }

       // EMPTY THE RESULTS
        self.recognizedWords = [String]()

        //NEEDED BECAUSE OF DIFFERENT SCALES
        let  transform = CGAffineTransform.identity.scaledBy(x: (self.inputImage?.extent.size.width)!, y:  (self.inputImage?.extent.size.height)!)

        //A REGION IS LIKE A "WORD"
        for region:VNTextObservation in observations
        {
            guard let boxesIn = region.characterBoxes else {
                continue
            }

            //EMPTY THE RESULT FOR REGION
            self.recognizedRegion = ""

            //A "BOX" IS THE POSITION IN THE ORIGINAL IMAGE (SCALED FROM 0... 1.0)
            for box in boxesIn
            {
                //SCALE THE BOUNDING BOX TO PIXELS
                let realBoundingBox = box.boundingBox.applying(transform)

                //TO BE SURE
                guard (inputImage?.extent.contains(realBoundingBox))!
                    else { print("invalid detected rectangle"); return}

                //SCALE THE POINTS TO PIXELS
                let topleft = box.topLeft.applying(transform)
                let topright = box.topRight.applying(transform)
                let bottomleft = box.bottomLeft.applying(transform)
                let bottomright = box.bottomRight.applying(transform)

                //LET'S CROP AND RECTIFY
                let charImage = inputImage?
                    .cropped(to: realBoundingBox)
                    .applyingFilter("CIPerspectiveCorrection", parameters: [
                        "inputTopLeft" : CIVector(cgPoint: topleft),
                        "inputTopRight" : CIVector(cgPoint: topright),
                        "inputBottomLeft" : CIVector(cgPoint: bottomleft),
                        "inputBottomRight" : CIVector(cgPoint: bottomright)
                        ])

                //PREPARE THE HANDLER
                let handler = VNImageRequestHandler(ciImage: charImage!, options: [:])

                //SOME OPTIONS (TO PLAY WITH..)
                self.ocrRequest.imageCropAndScaleOption = VNImageCropAndScaleOption.scaleFill

                //FEED THE CHAR-IMAGE TO OUR OCR-REQUEST - NO NEED TO SCALE IT - VISION WILL DO IT FOR US !!
                do {
                    try handler.perform([self.ocrRequest])
                }  catch { print("Error")}

            }

            //APPEND RECOGNIZED CHARS FOR THAT REGION
            self.recognizedWords.append(recognizedRegion)
        }

        //THATS WHAT WE WANT - PRINT WORDS TO CONSOLE
        DispatchQueue.main.async {
            self.PrintWords(words: self.recognizedWords)
        }
    }

    func PrintWords(words:[String])
    {
        // VOILA'
        print(words)

    }

    func doOCR(ciImage:CIImage)
    {
        //PREPARE THE HANDLER
        let handler = VNImageRequestHandler(ciImage: ciImage, options:[:])

        //WE NEED A BOX FOR EACH DETECTED CHARACTER
        self.textDetectionRequest.reportCharacterBoxes = true
        self.textDetectionRequest.preferBackgroundProcessing = false

        //FEED IT TO THE QUEUE FOR TEXT-DETECTION
        DispatchQueue.global(qos: .userInteractive).async {
            do {
                try  handler.perform([self.textDetectionRequest])
            } catch {
                print ("Error")
            }
        }

    }

    override func viewDidLoad() {
        super.viewDidLoad()
        // Do any additional setup after loading the view, typically from a nib.

        //LETS LOAD AN IMAGE FROM RESOURCE
        let loadedImage:UIImage = UIImage(named: "Sample1.png")! //TRY Sample2, Sample3 too

        //WE NEED A CIIMAGE - NOT NEEDED TO SCALE
        inputImage = CIImage(image:loadedImage)!

        //LET'S DO IT
        self.doOCR(ciImage: inputImage!)


    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }
}

You'll find the complete project included; the trained model is part of it.

I'm using Google's Tesseract OCR engine to convert the images into actual strings. You'll have to add it to your Xcode project using CocoaPods. Although Tesseract will perform OCR even if you simply feed it an image containing text, the way to make it perform better and faster is to use the detected text rectangles to feed it the pieces of the image that actually contain text, which is where Apple's Vision framework comes in handy.

Here's a link to the engine:

And here's a link to the current stage of my project, which already has text detection + OCR implemented:
Hope these help. Good luck!

For those still looking for a solution, I wrote a short write-up on this. It uses both the Vision API and Tesseract, and can be used to accomplish the task the question describes with one single method:

func sliceaAndOCR(image: UIImage, charWhitelist: String, charBlackList: String = "", completion: @escaping ((_: String, _: UIImage) -> Void))

This method will look for text in your image and return the string it finds, along with a slice of the original image showing where the text was found.

Firebase ML Kit does this for iOS (and Android) with its on-device text recognition, and it outperforms both Tesseract and SwiftOCR.
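
A rough sketch against the Firebase ML Kit API of that era (pod 'Firebase/MLVision' plus the on-device text model); the SDK has since been repackaged as Google ML Kit, so treat these names as approximate:

import UIKit
import Firebase  // pod 'Firebase/MLVision', 'Firebase/MLVisionTextModel'

// Assumes FirebaseApp.configure() has already been called at app start.
func recognizeText(in uiImage: UIImage, completion: @escaping (String?) -> Void) {
    let textRecognizer = Vision.vision().onDeviceTextRecognizer()
    let visionImage = VisionImage(image: uiImage)

    textRecognizer.process(visionImage) { result, error in
        guard error == nil, let result = result else { completion(nil); return }
        completion(result.text)  // full recognized text; result.blocks / lines also carry positions
    }
}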

Apple finally updated Vision to do OCR. Open a playground and dump a couple of test images in the Resources folder. In my case, I called them "demoDocument.jpg" and "demoLicensePlate.jpg".

The new class is called VNRecognizeTextRequest. Throw this in a playground and give it a spin:

import Vision

enum DemoImage: String {
    case document = "demoDocument"
    case licensePlate = "demoLicensePlate"
}

class OCRReader {
    func performOCR(on url: URL?, recognitionLevel: VNRequestTextRecognitionLevel)  {
        guard let url = url else { return }
        let requestHandler = VNImageRequestHandler(url: url, options: [:])

        let request = VNRecognizeTextRequest  { (request, error) in
            if let error = error {
                print(error)
                return
            }

            guard let observations = request.results as? [VNRecognizedTextObservation] else { return }

            for currentObservation in observations {
                let topCandidate = currentObservation.topCandidates(1)
                if let recognizedText = topCandidate.first {
                    print(recognizedText.string)
                }
            }
        }
        request.recognitionLevel = recognitionLevel

        try? requestHandler.perform([request])
    }
}

func url(for image: DemoImage) -> URL? {
    return Bundle.main.url(forResource: image.rawValue, withExtension: "jpg")
}

let ocrReader = OCRReader()
ocrReader.performOCR(on: url(for: .document), recognitionLevel: .fast)
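
As a side note, VNRecognizeTextRequest has a few more options worth experimenting with; a small sketch (the helper name and values are my own examples):

import Vision

// Hedged sketch of extra VNRecognizeTextRequest options (iOS 13+).
func makeAccurateTextRequest() -> VNRecognizeTextRequest {
    let request = VNRecognizeTextRequest()
    request.recognitionLevel = .accurate      // slower than .fast, but much better on documents
    request.usesLanguageCorrection = true     // let Vision clean up likely misreads
    request.recognitionLanguages = ["en-US"]  // constrain the language model
    return request
}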

Comments:

- There's a lot of this covered in WWDC '19. I haven't had a chance to dig into it yet, but I think you're on to something.
- You could grab the rect and OCR the sub-image.
- Google Vision OCR is in beta and is only accessible from iOS via REST; it's not included in the iOS SDK.
- Were you able to connect VNTextObservation with SwiftOCR?
- I see that MS Cognitive Services can now read text in images.
- OK, I just looked into it, and it seems iOS support is now available.
- Tried SwiftOCR; not too impressed. It had trouble with the sample image strings included in the app, so it will do even worse on text images it hasn't been trained on. The Singularity is postponed until next week! :)
- See the comment above: found the Google Vision OCR beta. Accessible via REST, not in the iOS SDK yet.
- Did you ever succeed in converting Tesseract to a Core ML model?
- Any progress on this? I'm looking into it and may end up using the Vision API to find the characters and then somehow feed them to the Tesseract iOS SDK. I'd rather use CoreML for support/speed, but I may have to work around it.
- How could I show a rectangular box in the middle of the screen so that only text inside that area is detected and boxed, and anything outside the rectangle gets no box?
- Any progress on detecting the actual characters/strings inside the rectangle boxes with CoreML? Thank you!
- Is there any built-in way to detect the actual characters/strings in iOS 12 using CoreML 2?
- Did you find anything? @ZaidPathan not yet. These classes seem to have huge potential if they can ever close this gap.