iOS: check whether a section of a UIImage is light or dark
I'm trying to overlay a chevron button that lets the user dismiss the current view. The chevron should be light on dark images and dark on light images; I've attached a screenshot of what I mean. However, calculating whether the image is light or dark has a significant performance impact. Here is how I'm doing it (operating on the `CGImage`):

    var isDark: Bool {
        guard let imageData = dataProvider?.data else { return false }
        guard let ptr = CFDataGetBytePtr(imageData) else { return false }
        let length = CFDataGetLength(imageData)
        // Dark if more than 45% of pixels fall below the luminance cutoff
        let threshold = Int(Double(width * height) * 0.45)
        var darkPixels = 0
        // Assumes 4 bytes per pixel with no row padding (e.g. RGBA8888)
        for i in stride(from: 0, to: length, by: 4) {
            let r = ptr[i]
            let g = ptr[i + 1]
            let b = ptr[i + 2]
            // Rec. 601 luma weighting
            let luminance = 0.299 * Double(r) + 0.587 * Double(g) + 0.114 * Double(b)
            if luminance < 150 {
                darkPixels += 1
                if darkPixels > threshold {
                    return true
                }
            }
        }
        return false
    }

Answer:

Once the image has been fitted into the view, there are a couple of options for finding its displayed size. Once you have that, you can work out where the chevron sits (you may need to convert its frame first) and check only that portion of the image. If performance still suffers, I'd look into using CoreImage for the calculation: there are a few ways to use CoreImage, but taking the average is the simplest.
    extension UIImage {
        var isDark: Bool {
            return cgImage?.isDark ?? false
        }
    }
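For reference, the per-pixel test used throughout this code is the Rec. 601 luma formula with a cutoff of 150. A minimal, UIKit-free sketch of the same classification (the `luminance` and `isDarkPixel` helper names are mine, not from the original):

```swift
// Rec. 601 luma: perceptual brightness of an RGB pixel (0...255 per channel).
func luminance(r: UInt8, g: UInt8, b: UInt8) -> Double {
    return 0.299 * Double(r) + 0.587 * Double(g) + 0.114 * Double(b)
}

// Same cutoff the extensions here use: below 150 counts as a "dark" pixel.
func isDarkPixel(r: UInt8, g: UInt8, b: UInt8) -> Bool {
    return luminance(r: r, g: g, b: b) < 150
}

// Pure black is dark, pure white is not; saturated blue also reads as dark
// because its luma weight (0.114) is small, giving a luma of about 29.
// isDarkPixel(r: 0,   g: 0,   b: 0)    // true
// isDarkPixel(r: 255, g: 255, b: 255)  // false
// isDarkPixel(r: 0,   g: 0,   b: 255)  // true
```

Note that green dominates the weighting, so a bright green region will read as light even though its red and blue channels are zero.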
    extension CGImage {
        var isDark: Bool {
            guard let imageData = dataProvider?.data else { return false }
            guard let ptr = CFDataGetBytePtr(imageData) else { return false }
            let length = CFDataGetLength(imageData)
            // Dark if more than 45% of pixels fall below the luminance cutoff
            let threshold = Int(Double(width * height) * 0.45)
            var darkPixels = 0
            // Assumes 4 bytes per pixel with no row padding (e.g. RGBA8888)
            for i in stride(from: 0, to: length, by: 4) {
                let r = ptr[i]
                let g = ptr[i + 1]
                let b = ptr[i + 2]
                // Rec. 601 luma weighting
                let luminance = 0.299 * Double(r) + 0.587 * Double(g) + 0.114 * Double(b)
                if luminance < 150 {
                    darkPixels += 1
                    if darkPixels > threshold {
                        return true
                    }
                }
            }
            return false
        }

        // Crops using a rect given in points, scaling it into this image's pixel space.
        func cropping(to rect: CGRect, scale: CGFloat) -> CGImage? {
            let scaledRect = CGRect(x: rect.minX * scale,
                                    y: rect.minY * scale,
                                    width: rect.width * scale,
                                    height: rect.height * scale)
            return self.cropping(to: scaledRect)
        }
    }
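The answer mentions CoreImage's average as the simplest alternative. A hedged sketch of that approach using the built-in `CIAreaAverage` filter (this is my reconstruction, not the original answerer's code; note it averages the whole image first and thresholds once, whereas `isDark` above counts dark pixels individually, so the two can disagree on high-contrast images):

```swift
import CoreImage
import UIKit

extension CGImage {
    // Reduce the image to its average color with CIAreaAverage,
    // then apply the same Rec. 601 luma cutoff used elsewhere.
    var isDarkByAverage: Bool {
        let ciImage = CIImage(cgImage: self)
        guard let filter = CIFilter(name: "CIAreaAverage",
                                    parameters: [kCIInputImageKey: ciImage,
                                                 kCIInputExtentKey: CIVector(cgRect: ciImage.extent)]),
              let output = filter.outputImage else { return false }

        // CIAreaAverage emits a 1x1 image; render it into a 4-byte RGBA buffer.
        var bitmap = [UInt8](repeating: 0, count: 4)
        let context = CIContext(options: [.workingColorSpace: NSNull()])
        context.render(output,
                       toBitmap: &bitmap,
                       rowBytes: 4,
                       bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                       format: .RGBA8,
                       colorSpace: nil)

        let luminance = 0.299 * Double(bitmap[0]) + 0.587 * Double(bitmap[1]) + 0.114 * Double(bitmap[2])
        return luminance < 150
    }
}
```

Because the filter runs on the GPU and never copies the full pixel buffer to the CPU, this tends to be much cheaper than walking every byte, especially for large images.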
    extension UIImageView {
        func hasDarkImage(at subsection: CGRect) -> Bool {
            guard let image = image, let aspectSize = aspectFillSize() else { return false }
            // Points-to-pixels factor: the image is rendered at aspectSize points,
            // so one point corresponds to image.size.width / aspectSize.width pixels.
            let scale = image.size.width / aspectSize.width
            // Visible portion of the aspect-filled image, in points: the overflow
            // is split evenly on both sides, and the visible size is the view's frame.
            let cropRect = CGRect(x: (aspectSize.width - frame.width) / 2,
                                  y: (aspectSize.height - frame.height) / 2,
                                  width: frame.width,
                                  height: frame.height)
            let croppedImage = image.cgImage?
                .cropping(to: cropRect, scale: scale)?
                .cropping(to: subsection, scale: scale)
            return croppedImage?.isDark ?? false
        }

        // Size, in points, at which the image is drawn under aspect-fill:
        // the smaller of the two scale factors' dimensions overflows the frame.
        private func aspectFillSize() -> CGSize? {
            guard let image = image else { return nil }
            var aspectFillSize = CGSize(width: frame.width, height: frame.height)
            let widthScale = frame.width / image.size.width
            let heightScale = frame.height / image.size.height
            if heightScale > widthScale {
                aspectFillSize.width = heightScale * image.size.width
            } else if widthScale > heightScale {
                aspectFillSize.height = widthScale * image.size.height
            }
            return aspectFillSize
        }
    }
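Tying it together, a sketch of how the chevron's tint could be driven from the image (`imageView` and `chevronButton` are hypothetical outlets, not from the original; the chevron's frame is converted into the image view's coordinate space before testing):

```swift
import UIKit

class PhotoViewController: UIViewController {
    @IBOutlet private var imageView: UIImageView!   // content mode: aspect fill
    @IBOutlet private var chevronButton: UIButton!  // dismisses the view

    override func viewDidLayoutSubviews() {
        super.viewDidLayoutSubviews()
        // Convert the chevron's frame into the image view's coordinates,
        // then pick a tint that contrasts with that part of the image.
        let chevronRect = imageView.convert(chevronButton.frame, from: view)
        chevronButton.tintColor = imageView.hasDarkImage(at: chevronRect) ? .white : .black
    }
}
```

Doing this in `viewDidLayoutSubviews` keeps the tint correct after rotation or resizing, though if the per-pixel scan is still too slow it could be dispatched to a background queue with the tint applied back on the main queue.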