What is the Core Image equivalent of GPUImageDivideBlendFilter from iOS GPUImage?

I tried this:
CIFilter *dodgeFilter = [CIFilter filterWithName:@"CIColorDodgeBlendMode"];
in place of:
GPUImageDivideBlendFilter *divideBlendFilter = [[GPUImageDivideBlendFilter alloc] init];
But the effect is not the same.

Built-in filter

Have you tried using CIDivideBlendMode?
CIImage *img1 = [[CIImage alloc] initWithImage:[UIImage imageNamed:@"img1.jpg"]];
CIImage *img2 = [[CIImage alloc] initWithImage:[UIImage imageNamed:@"img2.jpg"]];
CIFilter *filterBuiltin = [CIFilter filterWithName:@"CIDivideBlendMode"
                                     keysAndValues:@"inputImage", img1,
                                                   @"inputBackgroundImage", img2, nil];
CIImage *outputImageBuiltin = [filterBuiltin outputImage];
UIImage *filteredImageBuiltin = [self imageWithCIImage:outputImageBuiltin];
Custom filter

Since iOS 8 allows it, I thought it would be interesting to try creating a custom CIFilter based on the existing GPUImageFilter. This should make it possible to convert any GPUImageFilter into its CIFilter counterpart.
Before starting, it is worth checking the Core Image Kernel Language documentation.

We will start by writing a custom kernel that closely mirrors the GPUImageDivideBlendFilter shader. The one exception is the control-flow section, which does not appear to be supported in the Core Image Kernel Language; we work around it with the *_branch1 and *_branch2 multipliers.
Creating the CIFilter is simple.

ImageDivideBlendFilter.cikernel (the custom filter kernel) file:
kernel vec4 GPUImageDivideBlendFilter(sampler image1, sampler image2)
{
    float EPSILON = 1e-4;
    vec4 base = sample(image1, samplerCoord(image1));
    vec4 overlay = sample(image2, samplerCoord(image2));

    float ra1 = overlay.a * base.a + overlay.r * (1.0 - base.a) + base.r * (1.0 - overlay.a);
    float ra2 = (base.r * overlay.a * overlay.a) / overlay.r + overlay.r * (1.0 - base.a) + base.r * (1.0 - overlay.a);
    // https://developer.apple.com/library/mac/documentation/GraphicsImaging/Reference/CIKernelLangRef/ci_gslang_ext.html#//apple_ref/doc/uid/TP40004397-CH206-TPXREF101
    // "Other flow control statements (if, for, while, do while) are supported only when the loop condition can be inferred at the time the code compiles"
    float ra_branch2 = step(EPSILON, overlay.a) * step(base.r / overlay.r, base.a / overlay.a);
    float ra_branch1 = step(ra_branch2, 0.5);
    float ra = ra1 * ra_branch1 + ra2 * ra_branch2;

    float ga1 = overlay.a * base.a + overlay.g * (1.0 - base.a) + base.g * (1.0 - overlay.a);
    float ga2 = (base.g * overlay.a * overlay.a) / overlay.g + overlay.g * (1.0 - base.a) + base.g * (1.0 - overlay.a);
    float ga_branch2 = step(EPSILON, overlay.a) * step(base.g / overlay.g, base.a / overlay.a);
    float ga_branch1 = step(ga_branch2, 0.5);
    float ga = ga1 * ga_branch1 + ga2 * ga_branch2;

    float ba1 = overlay.a * base.a + overlay.b * (1.0 - base.a) + base.b * (1.0 - overlay.a);
    float ba2 = (base.b * overlay.a * overlay.a) / overlay.b + overlay.b * (1.0 - base.a) + base.b * (1.0 - overlay.a);
    float ba_branch2 = step(EPSILON, overlay.a) * step(base.b / overlay.b, base.a / overlay.a);
    float ba_branch1 = step(ba_branch2, 0.5);
    float ba = ba1 * ba_branch1 + ba2 * ba_branch2;

    return vec4(ra, ga, ba, 1.0);
}
// ImageDivideBlendFilter.h
#import <CoreImage/CoreImage.h>

@interface ImageDivideBlendFilter : CIFilter
@end

// ImageDivideBlendFilter.m
#import "ImageDivideBlendFilter.h"

@interface ImageDivideBlendFilter ()
{
    CIImage *_image1;
    CIImage *_image2;
}
@end

@implementation ImageDivideBlendFilter

static CIColorKernel *imageDivideBlendKernel = nil;

+ (void)initialize
{
    // Load the kernel code, which is compiled at run time. We do this just once for performance.
    if (!imageDivideBlendKernel)
    {
        NSBundle *bundle = [NSBundle bundleForClass:[self class]];
        NSString *code = [NSString stringWithContentsOfFile:[bundle pathForResource:@"ImageDivideBlendFilter" ofType:@"cikernel"] encoding:NSUTF8StringEncoding error:nil];
        NSArray *kernels = [CIColorKernel kernelsWithString:code];
        imageDivideBlendKernel = [kernels firstObject];
    }
}

- (CIImage *)outputImage
{
    return [imageDivideBlendKernel applyWithExtent:_image1.extent roiCallback:nil arguments:@[_image1, _image2]];
}

+ (CIFilter *)filterWithName:(NSString *)name
{
    return [[self alloc] init];
}

@end
I have made a sample project available on GitHub that shows how to do this kind of CIFilter work in Swift.

The CIColorKernel code in the answer above does not work; in fact, any attempt to pass more than one sampler object (image) to a color kernel will fail. And while I know this is unrelated to the question, I feel it should be pointed out that the kernel code gets one thing backwards, specifically the premultiplication-related functions: unpremultiply any sampler (or color) objects when you work with the alpha channel independently of the other three channels, and recombine them with premultiply once you have the finished product. Do not do this if the two sampler (or color) objects are not altered, blended, or otherwise used in the calculation.
- (void)filterDemo
{
    CIImage *img1 = [[CIImage alloc] initWithImage:[UIImage imageNamed:@"img1.jpg"]];
    CIImage *img2 = [[CIImage alloc] initWithImage:[UIImage imageNamed:@"img2.jpg"]];
    [ImageDivideBlendFilter class]; // preload the kernel; this speeds things up if the filter is used multiple times
    CIFilter *filterCustom = [CIFilter filterWithName:@"ImageDivideBlendFilter" keysAndValues:@"image1", img2, @"image2", img1, nil];
    CIImage *outputImageCustom = [filterCustom outputImage];
    UIImage *filteredImageCustom = [self imageWithCIImage:outputImageCustom];
}
- (UIImage *)imageWithCIImage:(CIImage *)ciimage
{
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef cgimg = [context createCGImage:ciimage fromRect:[ciimage extent]];
    UIImage *newImg = [UIImage imageWithCGImage:cgimg];
    CGImageRelease(cgimg);
    return newImg;
}