iOS image scaling: how is Accelerate the slowest method?

I'm testing several methods for rescaling a UIImage.

I've tested all of them and measured the time each one takes to resize an image.

1) UIGraphicsBeginImageContextWithOptions & UIImage -drawInRect:

let image = UIImage(contentsOfFile: self.URL.path!)

let size = CGSizeApplyAffineTransform(image.size, CGAffineTransformMakeScale(0.5, 0.5))
let hasAlpha = false
let scale: CGFloat = 0.0 // Automatically use scale factor of main screen

UIGraphicsBeginImageContextWithOptions(size, !hasAlpha, scale)
image.drawInRect(CGRect(origin: CGPointZero, size: size))

let scaledImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
2) CGBitmapContextCreate & CGContextDrawImage

let cgImage = UIImage(contentsOfFile: self.URL.path!).CGImage

let width = CGImageGetWidth(cgImage) / 2
let height = CGImageGetHeight(cgImage) / 2
let bitsPerComponent = CGImageGetBitsPerComponent(cgImage)
let bytesPerRow = CGImageGetBytesPerRow(cgImage)
let colorSpace = CGImageGetColorSpace(cgImage)
let bitmapInfo = CGImageGetBitmapInfo(cgImage)

let context = CGBitmapContextCreate(nil, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo.rawValue)

CGContextSetInterpolationQuality(context, kCGInterpolationHigh)

CGContextDrawImage(context, CGRect(origin: CGPointZero, size: CGSize(width: CGFloat(width), height: CGFloat(height))), cgImage)

let scaledImage = CGBitmapContextCreateImage(context).flatMap { UIImage(CGImage: $0) }
3) CGImageSourceCreateThumbnailAtIndex

import ImageIO

if let imageSource = CGImageSourceCreateWithURL(self.URL, nil) {
    let options: [NSString: NSObject] = [
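        // `size` is assumed to be the source image's size, defined elsewhere in the question's code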
        kCGImageSourceThumbnailMaxPixelSize: max(size.width, size.height) / 2.0,
        kCGImageSourceCreateThumbnailFromImageAlways: true
    ]

    let scaledImage = CGImageSourceCreateThumbnailAtIndex(imageSource, 0, options).flatMap { UIImage(CGImage: $0) }
}
4) Lanczos resampling with Core Image

let image = CIImage(contentsOfURL: self.URL)

let filter = CIFilter(name: "CILanczosScaleTransform")!
filter.setValue(image, forKey: "inputImage")
filter.setValue(0.5, forKey: "inputScale")
filter.setValue(1.0, forKey: "inputAspectRatio")
let outputImage = filter.valueForKey("outputImage") as! CIImage

let context = CIContext(options: [kCIContextUseSoftwareRenderer: false])
let scaledImage = UIImage(CGImage: context.createCGImage(outputImage, fromRect: outputImage.extent))
5) vImage in Accelerate

import Accelerate

let image = UIImage(contentsOfFile: self.URL.path!)!
let cgImage = image.CGImage

// create a source buffer
var format = vImage_CGImageFormat(bitsPerComponent: 8, bitsPerPixel: 32, colorSpace: nil, 
    bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.First.rawValue), 
    version: 0, decode: nil, renderingIntent: CGColorRenderingIntent.RenderingIntentDefault)
var sourceBuffer = vImage_Buffer()
defer {
    sourceBuffer.data.dealloc(Int(sourceBuffer.height) * Int(sourceBuffer.rowBytes))
}

var error = vImageBuffer_InitWithCGImage(&sourceBuffer, &format, nil, cgImage, numericCast(kvImageNoFlags))
guard error == kvImageNoError else { return nil }

// create a destination buffer
let scale = UIScreen.mainScreen().scale
let destWidth = Int(image.size.width * 0.5 * scale)
let destHeight = Int(image.size.height * 0.5 * scale)
let bytesPerPixel = CGImageGetBitsPerPixel(image.CGImage) / 8
let destBytesPerRow = destWidth * bytesPerPixel
let destData = UnsafeMutablePointer<UInt8>.alloc(destHeight * destBytesPerRow)
defer {
    destData.dealloc(destHeight * destBytesPerRow)
}
var destBuffer = vImage_Buffer(data: destData, height: vImagePixelCount(destHeight), width: vImagePixelCount(destWidth), rowBytes: destBytesPerRow)

// scale the image
error = vImageScale_ARGB8888(&sourceBuffer, &destBuffer, nil, numericCast(kvImageHighQualityResampling))
guard error == kvImageNoError else { return nil }

// create a CGImage from vImage_Buffer
let destCGImage = vImageCreateCGImageFromBuffer(&destBuffer, &format, nil, nil, numericCast(kvImageNoFlags), &error)?.takeRetainedValue()
guard error == kvImageNoError else { return nil }

// create a UIImage
let scaledImage = destCGImage.flatMap { UIImage(CGImage: $0, scale: 0.0, orientation: image.imageOrientation) }
After hours of testing and measuring the time each method takes to rescale an image to 100x100, my conclusions are completely different from NSHipster's. First of all, vImage in Accelerate is 200 times slower than the first method, which in my opinion makes it the poor cousin of the others. The Core Image approach is slow as well. But I'm intrigued by how method #1 crushes methods #3, #4 and #5, when some of them theoretically do their processing on the GPU.

For example, method #3 took 2 seconds to resize a 1024x1024 image to 100x100. Method #1, on the other hand, took 0.01 seconds!

Am I missing something?

Something must be wrong, or Apple would not have spent the time writing Accelerate, CIImage and the rest.


NOTE: I'm measuring the time from the moment the image is already loaded into a variable until the moment a scaled version is saved to another variable. I'm not counting the time it takes to read the file.
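That description implies a harness roughly like the following; a minimal sketch, not from the original post, where scaleDown(_:) is a hypothetical wrapper around any one of the five methods (Swift 2 era, matching the code above):

import UIKit
import QuartzCore

let image = UIImage(contentsOfFile: self.URL.path!)!   // loading happens outside the timed window

let start = CACurrentMediaTime()
let scaledImage = scaleDown(image)                     // hypothetical: any one of methods 1-5
let elapsed = CACurrentMediaTime() - start
print("scaling took \(elapsed) s")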

Accelerate can be the slowest method for various reasons:

  • The code you show probably spends a lot of time extracting the data from the CGImage and creating a new image afterwards. You didn't, for example, use any of the features that would allow the CGImage to consume the vImage result directly rather than making a copy. A colorspace conversion may also have been required as part of some of those extract/create-CGImage operations. It's hard to tell from here.
  • Some of the other methods may not have done anything at all, deferring the work until later, when absolutely forced to do it. If that happens after your end time, the work simply isn't measured (see the sketch after this list).
  • Some of the other methods have the advantage of being able to use the contents of the image directly, without having to make a copy first.
  • Different resampling methods (e.g. bilinear vs. Lanczos) have different costs.
  • The GPU actually can be faster at some things, and resampling is one of the tasks it is specially optimized for. On the flip side, random data access (such as in resampling) is not a kind thing to do to a vector unit.
  • Timing methodology can affect results. Accelerate is multithreaded: if you measure wall-clock time you will get one answer; if you measure CPU time (e.g. with getrusage or a sampler) you'll get another.

  • If you really think Accelerate is off the mark here, file a bug. Before doing so, though, I would certainly check with an Instruments Time Profile that you are spending the majority of your benchmark time in vImageScale.
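To make the deferred-work and timing bullets concrete, here is a hedged sketch (not from the original post; Swift 2 era to match the question) of timing the Core Image path. Core Image only records a filter recipe when the filter is configured; the pixels are computed when createCGImage runs, so that call must sit inside the timed window or the measurement misses nearly all of the work:

import UIKit
import CoreImage
import QuartzCore

let ciImage = CIImage(contentsOfURL: self.URL)!
let filter = CIFilter(name: "CILanczosScaleTransform")!
filter.setValue(ciImage, forKey: "inputImage")
filter.setValue(0.5, forKey: "inputScale")
let context = CIContext(options: [kCIContextUseSoftwareRenderer: false])

let start = CACurrentMediaTime()
let output = filter.valueForKey("outputImage") as! CIImage           // still just a recipe, nothing rendered yet
let cgImage = context.createCGImage(output, fromRect: output.extent) // the actual rendering happens here
let elapsed = CACurrentMediaTime() - start
print("Lanczos render took \(elapsed) s")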

    Comments:

    • I'm surprised some of these methods even worked: UIImage(contentsOfFile: self.URL.absoluteString!) is incorrect. It should be UIImage(contentsOfFile: self.URL.path!) or UIImage(contentsOfURL: self.URL). Have you compared the output of all of these approaches? Is the resulting output the same, with the same dimensions in all cases?
    • Same results. The code may have contained some typos; I've fixed that.
    • Also try MPSImageScale in MetalPerformanceShaders.
    • I had a similar result, with vImage scaling being really slow. You are not reusing your own temporary buffer for the resampling operation, as the vImage Programming Guide suggests, i.e. the nil parameter in vImageScale_ARGB8888(&sourceBuffer, &destBuffer, nil, numericCast(kvImageHighQualityResampling)). In my case, however, I did use that technique and got only a very mild, insignificant improvement in performance. A sketch of that technique follows.
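Since the last comment mentions it, here is a hedged sketch of that temp-buffer reuse, assuming the sourceBuffer/destBuffer/error setup from method 5 above. Passing the kvImageGetTempBufferSize flag makes vImageScale_ARGB8888 return the size of the scratch buffer it needs instead of performing the scale, so the buffer can be allocated once and reused across many scale calls:

// Ask vImage for the scratch size it needs; no scaling is performed with this flag.
let tempSize = vImageScale_ARGB8888(&sourceBuffer, &destBuffer, nil, numericCast(kvImageGetTempBufferSize))
let tempBuffer = UnsafeMutablePointer<UInt8>.alloc(tempSize)
defer { tempBuffer.dealloc(tempSize) }

// Hand vImage the preallocated scratch space instead of the nil parameter above.
error = vImageScale_ARGB8888(&sourceBuffer, &destBuffer, tempBuffer, numericCast(kvImageHighQualityResampling))
guard error == kvImageNoError else { return nil }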