Swift Vision & Core Image perspective correction
I am using macOS Big Sur 11.2.3 and Xcode 12.4. I want to extract the outer frame of a Sudoku image that has perspective distortion. This is what I do:

1. Perform a rectangle detection request. This yields the corner points of the outer rectangle.
2. Perform perspective correction. This produces a perfectly square rectangle.
3. Now I want to crop the image to the Sudoku's outer frame.
4. Perform a second rectangle detection request on the perspective-corrected image to obtain the rectangle for the crop operation.

Surprisingly, the second rectangle detection result is an empty array. What could be the reason?

Printing the properties of the original CGImage gives:
Original image:
<CGImage 0x7f92e4415560> (IP)
<<CGColorSpace 0x6000035faf40> (kCGColorSpaceICCBased; kCGColorSpaceModelRGB; sRGB IEC61966-2.1)>
width = 2448, height = 3264, bpc = 8, bpp = 32, row bytes = 9792
kCGImageAlphaNoneSkipLast | 0 (default byte order) | kCGImagePixelFormatPacked
is mask? No, has masking color? No, has soft mask? No, has matte? No, should interpolate? Yes
2021-04-06 19:15:04.445374+0200 StackExchangeHilfe[1959:100561] Metal API Validation Enabled
Corrected image:
<CGImage 0x7f92f451f180> (DP)
<<CGColorSpace 0x6000035fae80> (kCGColorSpaceDeviceRGB)>
width = 2073, height = 2194, bpc = 8, bpp = 32, row bytes = 8320
kCGImageAlphaPremultipliedLast | 0 (default byte order) | kCGImagePixelFormatPacked
is mask? No, has masking color? No, has soft mask? No, has matte? No, should interpolate? Yes
My guess is that Vision cannot find the rectangle the second time because it is too big, almost filling the entire image. You could try not cropping so tightly when doing the perspective correction.
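One way to try this (a sketch, not from the original post): expand the detected quad slightly away from its centroid before passing it to CIPerspectiveCorrection, so the corrected image keeps a margin around the grid. The `expandQuad` helper below is hypothetical; it works on the normalized corner points of a `VNRectangleObservation`.

```swift
import CoreGraphics

/// Hypothetical helper (not from the original post): scales four normalized
/// corner points outward from their centroid by `factor`, so the quad fed to
/// CIPerspectiveCorrection keeps a margin around the detected rectangle.
func expandQuad(_ corners: [CGPoint], by factor: CGFloat = 1.05) -> [CGPoint] {
    let cx = corners.map { $0.x }.reduce(0, +) / CGFloat(corners.count)
    let cy = corners.map { $0.y }.reduce(0, +) / CGFloat(corners.count)
    return corners.map {
        CGPoint(x: cx + ($0.x - cx) * factor,
                y: cy + ($0.y - cy) * factor)
    }
}
```

Usage would be to pass `[topLeft, topRight, bottomRight, bottomLeft]` of the first observation through this helper and then through `VNImagePointForNormalizedPoint` as in the code below. If the detected rectangle already touches the image edge, the expanded coordinates should be clamped to the 0...1 range first.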
import UIKit
import Vision

class ViewController: UIViewController {

    @IBOutlet weak var origImageView: UIImageView!
    @IBOutlet weak var correctedImageView: UIImageView!

    let imageName = "sudoku"
    var origImage: UIImage!

    override func viewDidLoad() {
        super.viewDidLoad()
        origImage = UIImage(named: imageName)
        origImageView.image = origImage
        let correctedImage = performOperationsWithUIImage(origImage)
        correctedImageView.image = correctedImage
    }

    func performOperationsWithUIImage(_ image: UIImage) -> UIImage? {
        let cgImage = image.cgImage!
        print("Original image:")
        print("\(String(describing: cgImage))")

        // Create rectangle detection request.
        let rectDetectRequest = VNDetectRectanglesRequest()
        // Customize & configure the request to detect only certain rectangles.
        rectDetectRequest.maximumObservations = 8 // Vision currently supports up to 16.
        rectDetectRequest.minimumAspectRatio = 0.8 // height / width
        rectDetectRequest.quadratureTolerance = 30
        rectDetectRequest.minimumSize = 0.5
        rectDetectRequest.minimumConfidence = 0.6

        // Create a request handler.
        let imageRequestHandler = VNImageRequestHandler(cgImage: cgImage, orientation: .up, options: [:])
        // Send the request to the request handler.
        do {
            try imageRequestHandler.perform([rectDetectRequest])
        } catch let error as NSError {
            print("Failed to perform first image request: \(error)")
            return nil
        }
        guard let results = rectDetectRequest.results as? [VNRectangleObservation],
              let firstRect = results.first
        else { return nil }
        print("\nFirst rectangle request result:")
        print("\(results.count) rectangle(s) detected:")
        print("\(String(describing: results))")

        // Perform perspective correction.
        let width = cgImage.width
        let height = cgImage.height
        guard let filter = CIFilter(name: "CIPerspectiveCorrection") else { return nil }
        filter.setValue(CIImage(image: image), forKey: "inputImage")
        filter.setValue(CIVector(cgPoint: VNImagePointForNormalizedPoint(firstRect.topLeft, width, height)), forKey: "inputTopLeft")
        filter.setValue(CIVector(cgPoint: VNImagePointForNormalizedPoint(firstRect.topRight, width, height)), forKey: "inputTopRight")
        filter.setValue(CIVector(cgPoint: VNImagePointForNormalizedPoint(firstRect.bottomLeft, width, height)), forKey: "inputBottomLeft")
        filter.setValue(CIVector(cgPoint: VNImagePointForNormalizedPoint(firstRect.bottomRight, width, height)), forKey: "inputBottomRight")
        guard
            let outputCIImage = filter.outputImage,
            let outputCGImage = CIContext(options: nil).createCGImage(outputCIImage, from: outputCIImage.extent)
        else { return nil }
        print("\nCorrected image:")
        print("\(String(describing: outputCGImage))")

        // Perform another rectangle detection; reusing the same request
        // overwrites its previous results.
        let newImageRequestHandler = VNImageRequestHandler(cgImage: outputCGImage, orientation: .up, options: [:])
        // Send the request to the request handler.
        do {
            try newImageRequestHandler.perform([rectDetectRequest])
        } catch let error as NSError {
            print("Failed to perform second image request: \(error)")
            return nil
        }
        guard let newResults = rectDetectRequest.results as? [VNRectangleObservation]
        else { return nil }
        print("\nSecond rectangle request result:")
        print("\(newResults.count) rectangle(s) detected:")
        print("\(String(describing: newResults))")
        return UIImage(cgImage: outputCGImage)
    }
}
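If expanding the quad is not an option, another variant worth trying (a sketch with illustrative parameter values, not from the original post) is to use a fresh, more permissive request for the second pass instead of reusing the first one, since the corrected grid now nearly fills the frame and may touch the image edges:

```swift
import Vision

// Sketch: a separate, more permissive request for the already-corrected image.
// The parameter values below are illustrative assumptions, not tested values.
let secondPassRequest = VNDetectRectanglesRequest()
secondPassRequest.maximumObservations = 1
secondPassRequest.minimumAspectRatio = 0.8
secondPassRequest.quadratureTolerance = 10  // the corrected grid should be near-square
secondPassRequest.minimumSize = 0.8         // expect it to fill most of the frame
secondPassRequest.minimumConfidence = 0.3   // accept weaker candidates
```

This request would then be passed to `newImageRequestHandler.perform(...)` in place of the reused `rectDetectRequest`.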