
iOS: Adding a UIImage element using the GPUImage framework


I'm using Brad Larson's GPUImage framework to add a UIImage element. I've successfully added the image, but the main problem is that the image gets stretched to the video's aspect ratio. Here is my code:

    GPUImageView *filterView = (GPUImageView *)self.view;
    videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
    videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
    transformFilter = [[GPUImageTransformFilter alloc] init];
    CGAffineTransform t = CGAffineTransformMakeScale(0.5, 0.5);
    [(GPUImageTransformFilter *)filter setAffineTransform:t];
    [videoCamera addTarget:transformFilter];

    filter = [[GPUImageOverlayBlendFilter alloc] init];
    [videoCamera addTarget:filter];
    inputImage = [UIImage imageNamed:@"eye.png"];
    sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];
    [sourcePicture forceProcessingAtSize:CGSizeMake(50, 50)];
    [sourcePicture processImage];
    [sourcePicture addTarget:filter];
    [sourcePicture addTarget:transformFilter];


    [filter addTarget:filterView];
    [videoCamera startCameraCapture];
I tried applying the transform filter before blending in the image, but it doesn't scale. I also want the image displayed in the center. How can I do this?

Thanks

You're on the right track; just a few things are out of place.

The code below loads the overlay image and applies a transform that keeps it at its actual size. By default, it will be centered over the video.

GPUImageView *filterView = (GPUImageView *)self.view;
videoCamera = [[GPUImageVideoCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;

filter = [[GPUImageOverlayBlendFilter alloc] init];
transformFilter = [[GPUImageTransformFilter alloc] init];

[videoCamera addTarget:filter];
[transformFilter addTarget:filter];

// setup overlay image
inputImage = [UIImage imageNamed:@"eye.png"];
sourcePicture = [[GPUImagePicture alloc] initWithImage:inputImage smoothlyScaleOutput:YES];

// determine the scaling needed to keep the image at its actual size
// (480 x 640 is the portrait output size of the 640x480 camera preset)
CGFloat tx = inputImage.size.width / 480.0;
CGFloat ty = inputImage.size.height / 640.0;

// apply transform to filter
CGAffineTransform t = CGAffineTransformMakeScale(tx, ty);
[(GPUImageTransformFilter *)transformFilter setAffineTransform:t];

// feed the picture into both the blend filter and the transform filter
[sourcePicture addTarget:filter];
[sourcePicture addTarget:transformFilter];
[sourcePicture processImage];

[filter addTarget:filterView];
[videoCamera startCameraCapture];