Motion blur effect on a UIImage on iOS
Is there any way to get a motion blur effect on an image? I've tried GPUImage, Filtrr, and iOS Core Image, but all of those only have regular blurs, not motion blur.
I also tried UIImage DSP, but its motion blur is almost invisible. I need something much stronger.

Core Image has a motion blur filter. It's called CIMotionBlur.

As of my commit just now, I've added motion and zoom blurs to GPUImage. These are the GPUImageMotionBlurFilter and GPUImageZoomBlurFilter classes. As an example of the zoom blur:

[example image of the zoom blur]

For the motion blur, I do a 9-hit Gaussian blur over a single direction. This is implemented using the following vertex and fragment shaders:

Vertex:
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
uniform highp vec2 directionalTexelStep;
varying vec2 textureCoordinate;
varying vec2 oneStepBackTextureCoordinate;
varying vec2 twoStepsBackTextureCoordinate;
varying vec2 threeStepsBackTextureCoordinate;
varying vec2 fourStepsBackTextureCoordinate;
varying vec2 oneStepForwardTextureCoordinate;
varying vec2 twoStepsForwardTextureCoordinate;
varying vec2 threeStepsForwardTextureCoordinate;
varying vec2 fourStepsForwardTextureCoordinate;
void main()
{
    gl_Position = position;

    textureCoordinate = inputTextureCoordinate.xy;
    oneStepBackTextureCoordinate = inputTextureCoordinate.xy - directionalTexelStep;
    twoStepsBackTextureCoordinate = inputTextureCoordinate.xy - 2.0 * directionalTexelStep;
    threeStepsBackTextureCoordinate = inputTextureCoordinate.xy - 3.0 * directionalTexelStep;
    fourStepsBackTextureCoordinate = inputTextureCoordinate.xy - 4.0 * directionalTexelStep;
    oneStepForwardTextureCoordinate = inputTextureCoordinate.xy + directionalTexelStep;
    twoStepsForwardTextureCoordinate = inputTextureCoordinate.xy + 2.0 * directionalTexelStep;
    threeStepsForwardTextureCoordinate = inputTextureCoordinate.xy + 3.0 * directionalTexelStep;
    fourStepsForwardTextureCoordinate = inputTextureCoordinate.xy + 4.0 * directionalTexelStep;
}
Fragment:
precision highp float;
uniform sampler2D inputImageTexture;
varying vec2 textureCoordinate;
varying vec2 oneStepBackTextureCoordinate;
varying vec2 twoStepsBackTextureCoordinate;
varying vec2 threeStepsBackTextureCoordinate;
varying vec2 fourStepsBackTextureCoordinate;
varying vec2 oneStepForwardTextureCoordinate;
varying vec2 twoStepsForwardTextureCoordinate;
varying vec2 threeStepsForwardTextureCoordinate;
varying vec2 fourStepsForwardTextureCoordinate;
void main()
{
    lowp vec4 fragmentColor = texture2D(inputImageTexture, textureCoordinate) * 0.18;
    fragmentColor += texture2D(inputImageTexture, oneStepBackTextureCoordinate) * 0.15;
    fragmentColor += texture2D(inputImageTexture, twoStepsBackTextureCoordinate) * 0.12;
    fragmentColor += texture2D(inputImageTexture, threeStepsBackTextureCoordinate) * 0.09;
    fragmentColor += texture2D(inputImageTexture, fourStepsBackTextureCoordinate) * 0.05;
    fragmentColor += texture2D(inputImageTexture, oneStepForwardTextureCoordinate) * 0.15;
    fragmentColor += texture2D(inputImageTexture, twoStepsForwardTextureCoordinate) * 0.12;
    fragmentColor += texture2D(inputImageTexture, threeStepsForwardTextureCoordinate) * 0.09;
    fragmentColor += texture2D(inputImageTexture, fourStepsForwardTextureCoordinate) * 0.05;

    gl_FragColor = fragmentColor;
}
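The nine tap weights in the fragment shader above form a normalized approximation of a Gaussian kernel. A quick check (Python, for illustration only) that the center weight plus the four symmetric taps on each side sum to one, so the blur preserves overall brightness:

```python
# Tap weights from the fragment shader above: the center weight, then
# four symmetric taps on each side of the blur direction.
weights = [0.18] + 2 * [0.15, 0.12, 0.09, 0.05]

total = sum(weights)
print(total)  # ~1.0: the blur neither brightens nor darkens the image
```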
As an optimization, I calculate the step size between texture samples outside of the fragment shader, using the angle, the blur size, and the image dimensions. This is then passed into the vertex shader, so that I can calculate the texture sampling positions there and have them interpolated across the fragment shader. This avoids dependent texture reads on iOS devices.
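That CPU-side step calculation can be sketched as follows. This is a minimal sketch, not GPUImage's actual code: the function name and the exact mapping from angle and pixel blur size to normalized texel units are assumptions.

```python
import math

def directional_texel_step(angle_degrees, blur_size_px, width_px, height_px):
    """Assumed form of the directionalTexelStep uniform: a step of
    blur_size_px pixels along angle_degrees, converted into the 0..1
    normalized texture-coordinate space of a width_px x height_px image."""
    theta = math.radians(angle_degrees)
    return (blur_size_px * math.cos(theta) / width_px,
            blur_size_px * math.sin(theta) / height_px)

# A horizontal 2 px step on a 1920x1080 image:
step = directional_texel_step(0.0, 2.0, 1920, 1080)
```

Doing this once per frame on the CPU, rather than per fragment on the GPU, is what lets the vertex shader precompute all nine sampling coordinates.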
The zoom blur is much slower right now, because I still do these calculations in the fragment shader. There are no doubt ways I can optimize this, but I haven't tried yet. The zoom blur also uses a 9-hit Gaussian blur, but here the direction and per-sample offset distance vary as a function of the pixel's position relative to the center of the blur.
It uses the following fragment shader (and a standard passthrough vertex shader):
Note that both of these blurs are hardcoded at 9 samples for performance reasons. This means that at larger blur sizes you'll start to see artifacts from the limited number of samples. For larger blurs, you'll need to run these filters multiple times, or extend them to support more Gaussian samples. However, more samples will lead to much slower rendering times, because of the limited texture sampling bandwidth on iOS devices.

(From the comments on the CIMotionBlur suggestion: its documentation at the time said "Available in OS X v10.4 and later", so it was not available on iOS. "Ah, sorry, I didn't see that :(")
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp vec2 blurCenter;
uniform highp float blurSize;
void main()
{
    // TODO: Do a more intelligent scaling based on resolution here
    highp vec2 samplingOffset = 1.0/100.0 * (blurCenter - textureCoordinate) * blurSize;

    lowp vec4 fragmentColor = texture2D(inputImageTexture, textureCoordinate) * 0.18;
    fragmentColor += texture2D(inputImageTexture, textureCoordinate + samplingOffset) * 0.15;
    fragmentColor += texture2D(inputImageTexture, textureCoordinate + (2.0 * samplingOffset)) * 0.12;
    fragmentColor += texture2D(inputImageTexture, textureCoordinate + (3.0 * samplingOffset)) * 0.09;
    fragmentColor += texture2D(inputImageTexture, textureCoordinate + (4.0 * samplingOffset)) * 0.05;
    fragmentColor += texture2D(inputImageTexture, textureCoordinate - samplingOffset) * 0.15;
    fragmentColor += texture2D(inputImageTexture, textureCoordinate - (2.0 * samplingOffset)) * 0.12;
    fragmentColor += texture2D(inputImageTexture, textureCoordinate - (3.0 * samplingOffset)) * 0.09;
    fragmentColor += texture2D(inputImageTexture, textureCoordinate - (4.0 * samplingOffset)) * 0.05;

    gl_FragColor = fragmentColor;
}
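To see why this blur "zooms", the shader's per-pixel offset line can be mirrored on the CPU (a sketch for illustration; the 1/100 factor is the shader's own ad-hoc scale, flagged by its TODO comment):

```python
def zoom_sampling_offset(texture_coord, blur_center, blur_size):
    """Mirrors the shader line
        samplingOffset = 1.0/100.0 * (blurCenter - textureCoordinate) * blurSize;
    The offset points toward the blur center, and its length grows with the
    pixel's distance from that center, so pixels near the center barely move
    while pixels at the edges sample along long radial streaks."""
    return tuple((1.0 / 100.0) * (c - t) * blur_size
                 for c, t in zip(blur_center, texture_coord))

# A pixel at the center is not displaced; one farther out samples along
# the line toward the center:
center_offset = zoom_sampling_offset((0.5, 0.5), (0.5, 0.5), 1.0)
edge_offset = zoom_sampling_offset((0.9, 0.5), (0.5, 0.5), 1.0)
```

Because both the direction and the length of `samplingOffset` depend on `textureCoordinate`, none of this can be hoisted into the vertex shader the way the directional blur's step was, which is why the zoom blur pays for dependent per-fragment math.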