iOS: GaussianBlur image with scaleAspectFill


I want to apply a Gaussian blur to an image, but I also want to use the image view's scaleAspectFill content mode.

I'm blurring my image with the following code:

func getImageWithBlur(image: UIImage) -> UIImage? {
    let context = CIContext(options: nil)

    guard let currentFilter = CIFilter(name: "CIGaussianBlur") else {
        return nil
    }
    let beginImage = CIImage(image: image)
    currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
    currentFilter.setValue(6.5, forKey: "inputRadius")
    guard let output = currentFilter.outputImage, let cgimg = context.createCGImage(output, from: output.extent) else {
        return nil
    }
    return UIImage(cgImage: cgimg)
}
But this doesn't work with scaleAspectFill mode.


They are both the same image, but when I blur the second image, as you can see, it gains extra space at the top and bottom. What should I do to make the blurred image fit properly?

When you apply a CIGaussianBlur filter, the resulting image is larger than the original. This is because the blur is applied past the edges.

To get an image back at the original size, you need to use the original image's extent.

Note, however, that the blur is applied both inside and outside the edges, so if you simply crop to the original extent, the edges will effectively "fade out." To avoid the faded edges entirely, you need to clip further inward.

Here's an example, using a UIImage extension to blur either with or without blurred edges:
extension UIImage {

    func blurredImageWithBlurredEdges(inputRadius: CGFloat) -> UIImage? {

        guard let currentFilter = CIFilter(name: "CIGaussianBlur") else {
            return nil
        }
        guard let beginImage = CIImage(image: self) else {
            return nil
        }
        currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
        currentFilter.setValue(inputRadius, forKey: "inputRadius")
        guard let output = currentFilter.outputImage else {
            return nil
        }

        // UIKit and UIImageView .contentMode doesn't play well with
        // CIImage only, so we need to back the return UIImage with a CGImage
        let context = CIContext()

        // cropping rect because blur changed size of image
        guard let final = context.createCGImage(output, from: beginImage.extent) else {
            return nil
        }

        return UIImage(cgImage: final)

    }

    func blurredImageWithClippedEdges(inputRadius: CGFloat) -> UIImage? {

        guard let currentFilter = CIFilter(name: "CIGaussianBlur") else {
            return nil
        }
        guard let beginImage = CIImage(image: self) else {
            return nil
        }
        currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
        currentFilter.setValue(inputRadius, forKey: "inputRadius")
        guard let output = currentFilter.outputImage else {
            return nil
        }

        // UIKit and UIImageView .contentMode doesn't play well with
        // CIImage only, so we need to back the return UIImage with a CGImage
        let context = CIContext()

        // cropping rect because blur changed size of image

        // to clear the blurred edges, use a fromRect that is
        // the original image extent insetBy (negative) 1/2 of new extent origins
        let newExtent = beginImage.extent.insetBy(dx: -output.extent.origin.x * 0.5, dy: -output.extent.origin.y * 0.5)
        guard let final = context.createCGImage(output, from: newExtent) else {
            return nil
        }
        return UIImage(cgImage: final)

    }

}
And here's an example view controller showing how to use it, with the different results:

class BlurTestViewController: UIViewController {

    let imgViewA = UIImageView()
    let imgViewB = UIImageView()
    let imgViewC = UIImageView()

    override func viewDidLoad() {
        super.viewDidLoad()

        let stackView = UIStackView()
        stackView.axis = .vertical
        stackView.alignment = .fill
        stackView.distribution = .fillEqually
        stackView.spacing = 8
        stackView.translatesAutoresizingMaskIntoConstraints = false

        view.addSubview(stackView)

        NSLayoutConstraint.activate([

            stackView.widthAnchor.constraint(equalToConstant: 200.0),
            stackView.centerXAnchor.constraint(equalTo: view.centerXAnchor),
            stackView.centerYAnchor.constraint(equalTo: view.centerYAnchor),

        ])

        [imgViewA, imgViewB, imgViewC].forEach { v in
            v.backgroundColor = .red
            v.contentMode = .scaleAspectFill
            v.clipsToBounds = true
            // square image views (1:1 ratio)
            v.heightAnchor.constraint(equalTo: v.widthAnchor, multiplier: 1.0).isActive = true
            stackView.addArrangedSubview(v)
        }

    }

    override func viewDidAppear(_ animated: Bool) {
        super.viewDidAppear(animated)

        guard let imgA = UIImage(named: "bkg640x360") else {
            fatalError("Could not load image!")
        }

        guard let imgB = imgA.blurredImageWithBlurredEdges(inputRadius: 6.5) else {
            fatalError("Could not create Blurred image with Blurred Edges")
        }

        guard let imgC = imgA.blurredImageWithClippedEdges(inputRadius: 6.5) else {
            fatalError("Could not create Blurred image with Clipped Edges")
        }

        imgViewA.image = imgA
        imgViewB.image = imgB
        imgViewC.image = imgC

    }

}
Using this original 640x360 image, with 200x200 image views:

we get this output:


It's also worth mentioning, though I'm sure you've noticed, that these functions run very slowly on the simulator, but quickly on an actual device.

I believe your issue is that the CIFilter's convolution kernel creates extra data when it applies the blur to the edges of the image. The CIContext is not a strictly bounded space; it is able to use the area around the image to fully process all output. So instead of using output.extent in createCGImage, use the size of the input image (converted to a CGRect).

To account for the blurred alpha channel along the image edges, you can use the CIImage methods unpremultiplyingAlpha().settingAlphaOne(in:) to flatten the image before returning it:

func getImageWithBlur(image: UIImage) -> UIImage? {

    let context = CIContext(options: nil)

    guard let currentFilter = CIFilter(name: "CIGaussianBlur") else { return nil }

    let beginImage = CIImage(image: image)
    currentFilter.setValue(beginImage, forKey: kCIInputImageKey)
    currentFilter.setValue(6.5, forKey: "inputRadius")

    let rect = CGRect(x: 0.0, y: 0.0, width: image.size.width, height: image.size.height)

    guard let output = currentFilter.outputImage?.unpremultiplyingAlpha().settingAlphaOne(in: rect) else { return nil }
    guard let cgimg = context.createCGImage(output, from: rect) else { return nil }

    print("image.size:    \(image.size)")
    print("output.extent: \(output.extent)")

    return UIImage(cgImage: cgimg)

}

Almost certainly this is a duplicate. The combination of aspectFill and clipsToBounds is hiding what's really happening from you.

Alternatively, don't use Core Image at all: just place a visual effect view between the image and the label. For example, when presenting white text I might use a dark UIVisualEffectView(effect: UIBlurEffect(style: .dark)). The visual effect view blurs quite heavily, though, and doesn't offer any radius control.

Wow, that's a great explanation. Thank you so much, it worked.