Swift: resizing the image shape after cropping with a UIBezierPath


In my project I am trying to crop an image with a UIBezierPath, which I did easily with a CAShapeLayer and a mask operation. After the crop operation the output is:
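
The masking step looks roughly like this (a minimal sketch; cropPath and imageView are illustrative names for the UIBezierPath and the image view used in the actual project):

import UIKit

// A minimal sketch of the mask-based crop step described above.
func applyCropMask(to imageView: UIImageView, with cropPath: UIBezierPath) {
    let maskLayer = CAShapeLayer()
    maskLayer.path = cropPath.cgPath     // the shape layer traces the bezier path
    imageView.layer.mask = maskLayer     // everything outside the path is rendered clear
}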

Input image to crop:

Output image:

Now I am trying to stretch the output image into a full rectangle of that size. To do this, I collect the colors of all the pixels into an array, excluding the pixels with a clear color, using this function:

import UIKit

public func getRGBAs(fromImage image: UIImage, x: Int, y: Int, count: Int) -> [UIColor] {

    var result = [UIColor]()

    // First get the image into your data buffer
    guard let cgImage = image.cgImage else {
        print("Could not get cgImage")
        return []
    }

    let width = cgImage.width
    let height = cgImage.height
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let rawdata = calloc(height * width * 4, MemoryLayout<CUnsignedChar>.size)
    let bytesPerPixel = 4
    let bytesPerRow = bytesPerPixel * width
    let bitsPerComponent = 8
    let bitmapInfo: UInt32 = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Big.rawValue

    guard let context = CGContext(data: rawdata, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else {
        print("CGContext creation failed")
        free(rawdata)
        return result
    }

    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))

    // Now rawdata contains the image data in the RGBA8888 pixel format.
    var byteIndex = bytesPerRow * y + bytesPerPixel * x

    for _ in 0..<count {
        let alpha = CGFloat(rawdata!.load(fromByteOffset: byteIndex + 3, as: UInt8.self)) / 255.0

        // Skip fully transparent (clear) pixels; this also avoids dividing by zero below.
        guard alpha > 0 else {
            byteIndex += bytesPerPixel
            continue
        }

        // The context stores premultiplied alpha, so un-premultiply and normalize to 0...1.
        let red   = CGFloat(rawdata!.load(fromByteOffset: byteIndex,     as: UInt8.self)) / 255.0 / alpha
        let green = CGFloat(rawdata!.load(fromByteOffset: byteIndex + 1, as: UInt8.self)) / 255.0 / alpha
        let blue  = CGFloat(rawdata!.load(fromByteOffset: byteIndex + 2, as: UInt8.self)) / 255.0 / alpha
        byteIndex += bytesPerPixel

        result.append(UIColor(red: red, green: green, blue: blue, alpha: alpha))
    }

    free(rawdata)

    return result
}

To rebuild a UIImage from the pixel array I use this function (PixelData, rgbColorSpace and bitmapInfo are defined elsewhere in my code):

func imageFromPixels(pixels: [PixelData], width: UInt, height: UInt) -> UIImage {
    let bitsPerComponent: UInt = 8
    let bitsPerPixel: UInt = 32
    assert(pixels.count == Int(width * height))
    var data = pixels // copy to mutable []
    let providerRef = CGDataProvider(
        data: NSData(bytes: &data, length: data.count * MemoryLayout<PixelData>.size)
    )
    let cgim = CGImage(
        width: Int(width),
        height: Int(height),
        bitsPerComponent: Int(bitsPerComponent),
        bitsPerPixel: Int(bitsPerPixel),
        bytesPerRow: Int(width) * MemoryLayout<PixelData>.size,
        space: rgbColorSpace,
        bitmapInfo: bitmapInfo,
        provider: providerRef!,
        decode: nil,
        shouldInterpolate: true,
        intent: .defaultIntent
    )
    return UIImage(cgImage: cgim!)
}
But the output image is neither the actual image nor a rectangle. The final output image:


Where is the problem, and what is the solution?

Do it like this. We want the image to become a width x height rectangle:

  x0, x1, x2, x3;
  y0, y1, y2, y3;   // corner coordinates of the cropped quad in the source image
  rgba  = your source image pixels
  image = output image (width x height)

  for (y = 0; y < height; y++)
    for (x = 0; x < width; x++)
    {
       float xprime, yprime, xf, yf;
       int ix, iy;

       // Map the output pixel (x, y) to a position inside the quad.
       // Use floating-point division for x/width and y/height.
       xprime = bilerp(x0, x1, x2, x3, (float)x / width, (float)y / height);
       yprime = bilerp(y0, y1, y2, y3, (float)x / width, (float)y / height);

       xf = xprime - floor(xprime);
       yf = yprime - floor(yprime);

       // Now do bilerp again on the four neighbouring image pixels
       ix = floor(xprime);
       iy = floor(yprime);

       pix = bilerp(rgba[iy][ix], rgba[iy][ix+1], rgba[iy+1][ix], rgba[iy+1][ix+1], xf, yf);

       image[y][x] = pix;
    }
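
In case bilerp is unclear: it is ordinary bilinear interpolation between four corner values. A minimal Swift sketch of the same idea (the corner ordering here is assumed to be top-left, top-right, bottom-left, bottom-right):

import CoreGraphics

// Linear interpolation between two values.
func lerp(_ a: CGFloat, _ b: CGFloat, _ t: CGFloat) -> CGFloat {
    return a + (b - a) * t
}

// Bilinear interpolation between four corner values.
// Assumed corner order: c00 = top-left, c10 = top-right, c01 = bottom-left, c11 = bottom-right.
func bilerp(_ c00: CGFloat, _ c10: CGFloat, _ c01: CGFloat, _ c11: CGFloat,
            _ tx: CGFloat, _ ty: CGFloat) -> CGFloat {
    // Interpolate along x on the top and bottom edges, then along y between the two results.
    return lerp(lerp(c00, c10, tx), lerp(c01, c11, tx), ty)
}

// Example: the source x coordinate that output pixel (x, y) of a width-by-height
// rectangle maps to, where x0...x3 are the x coordinates of the quad's corners.
func sourceX(x0: CGFloat, x1: CGFloat, x2: CGFloat, x3: CGFloat,
             x: Int, y: Int, width: Int, height: Int) -> CGFloat {
    return bilerp(x0, x1, x2, x3, CGFloat(x) / CGFloat(width), CGFloat(y) / CGFloat(height))
}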

Wow. Honestly? The first image looks very close to part of an app I wrote. (That's a compliment! It's just the image, not what you are doing.)

You can use the Core Image filter called CIPerspectiveCorrection.

Basically, convert your UIImage/CGImage into a CIImage, convert your CGPoints into CIVectors, and then call the filter.

(1) Convert your image to a CIImage.

Use one of the following two lines:

let ciInput = CIImage(image: myUiImage)
let ciInput = CIImage(cgImage: myCgImage)
(2) Convert your CGPoints to CIVectors.

A CIImage's origin is the bottom-left corner, not the top-left. (In other words, you need to flip the Y coordinate.) Here is an example:

let uiTL = CGPoint(x: 50, y: 50)
let uiTR = CGPoint(x: 75, y: 75)
let uiBL = CGPoint(x: 100, y: 300)
let uiBR = CGPoint(x: 25, y: 200)
let topLeft = createVector(uiTL, ciInput)
let topRight = createVector(uiTR, ciInput)
let bottomLeft = createVector(uiBL, ciInput)
let bottomRight = createVector(uiBR, ciInput)

func createVector(_ point: CGPoint, _ image: CIImage) -> CIVector {
    return CIVector(x: point.x, y: image.extent.height - point.y)
}
(You may need to test this to make sure the points are mapped correctly. I typed the conversion freehand; uiTL may need to map to bottomLeft, and so on.)

(3) Call the Core Image filter:

func doPerspectiveCorrection(
    _ ciInput: CIImage,
    _ topLeft: AnyObject,
    _ topRight: AnyObject,
    _ bottomRight: AnyObject,
    _ bottomLeft: AnyObject)
    -> UIImage {

        let ctx = CIContext(options: nil)
        let filter = CIFilter(name: "CIPerspectiveCorrection")!
        filter.setValue(topLeft, forKey: "inputTopLeft")
        filter.setValue(topRight, forKey: "inputTopRight")
        filter.setValue(bottomRight, forKey: "inputBottomRight")
        filter.setValue(bottomLeft, forKey: "inputBottomLeft")
        filter.setValue(ciInput, forKey: kCIInputImageKey)
        let ciOutput = filter.outputImage!
        let cgOutput = ctx.createCGImage(ciOutput, from: ciOutput.extent)!
        return UIImage(cgImage: cgOutput)
}
This will crop and rescale the image to the four CGPoints you have.
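
Putting the three steps together, a minimal usage sketch could look like this; it calls the doPerspectiveCorrection function above, and the corner points passed in are placeholders:

import UIKit
import CoreImage

// Sketch of the full pipeline: UIImage -> CIImage, CGPoints -> CIVectors, run the filter.
func correctedImage(from uiImage: UIImage,
                    topLeft: CGPoint, topRight: CGPoint,
                    bottomRight: CGPoint, bottomLeft: CGPoint) -> UIImage? {
    guard let ciInput = CIImage(image: uiImage) else { return nil }

    // CIImage's origin is the bottom-left corner, so flip the Y coordinate.
    func vector(_ p: CGPoint) -> CIVector {
        return CIVector(x: p.x, y: ciInput.extent.height - p.y)
    }

    return doPerspectiveCorrection(ciInput,
                                   vector(topLeft),
                                   vector(topRight),
                                   vector(bottomRight),
                                   vector(bottomLeft))
}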


Here is the official link.

Thank you for your answer, but it is not clear to me because I am new to iOS. It would help me a lot if you explained your code in detail. What is bilerp(,,,), what are rgba = your image and image = output image, and what are the values of height and width?