iOS GPUImage: GPUImageThreeInputFilter works with still image input but not with camera input

I am trying to implement a remap filter with GPUImage. It is similar to OpenCV's remap function, which takes an input image, an x map, and a y map. So I subclassed GPUImageThreeInputFilter and wrote my own shader code. When the filter's input is a still image, I get the correct output image. The code is as follows:
GPUImageRemap *remapFilter = [[GPUImageRemap alloc] init];
[remapFilter forceProcessingAtSize:CGSizeMake(sphericalImageW, sphericalImageH)];
UIImage *inputImage = [UIImage imageNamed:@"test.jpg"];
GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:inputImage];
[stillImageSource addTarget:remapFilter atTextureLocation:0];
GPUImagePicture *stillImageSource1 = [[GPUImagePicture alloc] initWithImage:xmapImage];
[stillImageSource1 processImage];
[stillImageSource1 addTarget:remapFilter atTextureLocation:1];
GPUImagePicture *stillImageSource2 = [[GPUImagePicture alloc] initWithImage:ymapImage];
[stillImageSource2 processImage];
[stillImageSource2 addTarget:remapFilter atTextureLocation:2];
[stillImageSource processImage];
UIImage *filteredImage = [remapFilter imageFromCurrentlyProcessedOutput];
However, when the input is switched to camera input, I get a wrong output image. I did some debugging and found that the x map and y map are not loaded into the second and third textures: the pixel values of both textures are all 0.
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPresetHigh cameraPosition:AVCaptureDevicePositionFront];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
GPUImageRemap *remapFilter = [[GPUImageRemap alloc] init];
[remapFilter forceProcessingAtSize:CGSizeMake(sphericalImageW, sphericalImageH)];
[videoCamera addTarget:remapFilter atTextureLocation:0];
GPUImagePicture *stillImageSource1 = [[GPUImagePicture alloc] initWithImage:xmapImage];
[stillImageSource1 processImage];
[stillImageSource1 addTarget:remapFilter atTextureLocation:1];
GPUImagePicture *stillImageSource2 = [[GPUImagePicture alloc] initWithImage:ymapImage];
[stillImageSource2 processImage];
[stillImageSource2 addTarget:remapFilter atTextureLocation:2];
GPUImageView *camView = [[GPUImageView alloc] initWithFrame:self.view.bounds];
[remapFilter addTarget:camView];
[videoCamera startCameraCapture];
Header file:
#import <GPUImage.h>
#import <GPUImageThreeInputFilter.h>
@interface GPUImageRemap : GPUImageThreeInputFilter
{
}
@end
Implementation file:
#import "GPUImageRemap.h"
NSString *const kGPUImageRemapFragmentShaderString = SHADER_STRING
(
varying highp vec2 textureCoordinate;
varying highp vec2 textureCoordinate2;
varying highp vec2 textureCoordinate3;
uniform sampler2D inputImageTexture;
uniform sampler2D inputImageTexture2;
uniform sampler2D inputImageTexture3;
/*
 The x and y maps originally store floating-point numbers in [0, imageWidth] and [0, imageHeight].
 They are divided by imageWidth - 1 and imageHeight - 1 to lie in [0, 1],
 then converted to integers by multiplying by 1,000,000,
 and each integer is packed into the 4 bytes of an RGBA texel.
 Each unsigned RGBA byte is normalized to [0, 1] when it reaches the fragment shader,
 so the shader inverts these steps to recover the original x, y coordinates.
 */
void main()
{
highp vec4 xAry0_1 = texture2D(inputImageTexture2, textureCoordinate2);
highp vec4 xAry0_255 = floor(xAry0_1 * vec4(255.0) + vec4(0.5));
// the largest integer we will see does not exceed 2,000,000, so 3 bytes are enough to carry our integer values
highp float xint = xAry0_255.b * exp2(16.0) + xAry0_255.g * exp2(8.0) + xAry0_255.r;
highp float x = xint / 1000000.0;
highp vec4 yAry0_1 = texture2D(inputImageTexture3, textureCoordinate3);
highp vec4 yAry0_255 = floor(yAry0_1 * vec4(255.0) + vec4(0.5));
highp float yint = yAry0_255.b * exp2(16.0) + yAry0_255.g * exp2(8.0) + yAry0_255.r;
highp float y = yint / 1000000.0;
if (x<0.0 || x>1.0 || y<0.0 || y>1.0)
{
gl_FragColor = vec4(0,0,0,1);
}
else
{
highp vec2 imgTexCoord=vec2(y, x);
gl_FragColor = texture2D(inputImageTexture, imgTexCoord);
}
}
);
@implementation GPUImageRemap
- (id)init
{
if (!(self = [super initWithFragmentShaderFromString:kGPUImageRemapFragmentShaderString]))
{
return nil;
}
return self;
}
@end
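The byte-packing scheme described in the shader comments can be checked outside the shader. The Python sketch below is a hypothetical illustration (not part of the original code) of how a normalized map coordinate in [0, 1] would be scaled by 1,000,000 and split into the R, G, B bytes that the fragment shader reassembles with `b*2^16 + g*2^8 + r`:

```python
def pack_map_value(v):
    """Scale a normalized map coordinate v in [0, 1] by 1,000,000 and
    split the resulting integer into three little-endian bytes.
    Values up to 2,000,000 fit in 3 bytes, matching the shader comment."""
    n = int(round(v * 1000000.0))
    r = n & 0xFF             # low byte  -> red channel
    g = (n >> 8) & 0xFF      # mid byte  -> green channel
    b = (n >> 16) & 0xFF     # high byte -> blue channel
    return r, g, b

def unpack_map_value(r, g, b):
    """Inverse of pack_map_value: reassemble the integer the way the
    shader does (b * 2^16 + g * 2^8 + r), then rescale to [0, 1]."""
    n = b * 65536 + g * 256 + r
    return n / 1000000.0

# Round trip: every packed value decodes back to (approximately) itself.
for v in (0.0, 0.25, 0.5, 0.999999, 1.0):
    r, g, b = pack_map_value(v)
    assert abs(unpack_map_value(r, g, b) - v) < 1e-6
```

The alpha byte is unused here, which is consistent with the shader reading only the `.r`, `.g`, and `.b` components.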
I found the answer myself. GPUImagePicture cannot be declared as a local variable; otherwise it is released as soon as the function exits, which is why the textures are all 0 when uploaded to the GPU. All the GPUImagePicture variables must be global (kept alive beyond the setup method).

Could you post your subclass code sometime? I need help getting my custom shader to work. Thank you.

I actually got it working this morning without needing any subclass, thankfully! I did create a subclass of GPUImageThreeInputFilter, named GPUImageFourInputFilter, for another filter I am working on; I just added my code to it. Anyway, it is great that you found the answer.
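The lifetime bug behind this answer can be illustrated in any language with automatic memory management. The Python sketch below is a hypothetical analogy (not GPUImage code): an object referenced only by a local variable is deallocated once the method returns, while one stored on the owning object survives, just as a GPUImagePicture held in an instance variable stays alive to feed its texture:

```python
import gc
import weakref

class MapSource:
    """Stand-in for a GPUImagePicture-like texture source."""
    pass

class FilterHost:
    def setup_local(self):
        # Like declaring GPUImagePicture as a local variable:
        # nothing retains the source after this method returns.
        source = MapSource()
        return weakref.ref(source)

    def setup_ivar(self):
        # Like storing it in an instance variable:
        # the host keeps the source alive.
        self.source = MapSource()
        return weakref.ref(self.source)

host = FilterHost()
dead = host.setup_local()
gc.collect()
alive = host.setup_ivar()
assert dead() is None        # the local-only source was deallocated
assert alive() is not None   # the retained source is still there
```

The same reasoning applies under ARC in Objective-C: a filter target holds only a weak back-reference to its source, so someone must own the GPUImagePicture strongly for its texture to still exist at render time.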