iOS: How to change the color of a single pixel of a UIImage/UIImageView


I have a UIImageView with a filter applied:

testImageView.layer.magnificationFilter = kCAFilterNearest;
so that individual pixels are visible. This UIImageView sits inside a UIScrollView, and the image itself is 1000x1000. I have used the following code to detect which pixel has been tapped:

First I set up the tap gesture recognizer:

UITapGestureRecognizer *scrollTap = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(singleTapGestureCaptured:)];
scrollTap.numberOfTapsRequired = 1;
[mainScrollView addGestureRecognizer:scrollTap];
Then I used the location of the tap to produce the coordinates of the pixel of the UIImageView that was tapped:

- (void)singleTapGestureCaptured:(UITapGestureRecognizer *)gesture
{
    CGPoint touchPoint = [gesture locationInView:testImageView];

    NSLog(@"%f is X pixel num, %f is Y pixel num ; %f is width of imageview", (touchPoint.x/testImageView.bounds.size.width)*1000, (touchPoint.y/testImageView.bounds.size.height)*1000, testImageView.bounds.size.width);

}
I want to be able to tap a pixel and have its color change. However, none of the answers in the Stack Overflow posts I've found work, or they are outdated. For skilled coders, though: could you help me decipher the old posts to make them work, or use my code above for detecting which pixel of the UIImageView was tapped to produce a simple fix yourself?

All help is appreciated.

Edit for originaluser2:

After reading originaluser2's post, the code works perfectly when I run his example GitHub project on my physical device. However, when I run the same code in my own app, the image is replaced by a blank and I get the following errors:

<Error>: Unsupported pixel description - 3 components, 16 bits-per-component, 64 bits-per-pixel
<Error>: CGBitmapContextCreateWithData: failed to create delegate.
<Error>: CGContextDrawImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
<Error>: CGBitmapContextCreateImage: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.

The code evidently works, as proven by my testing it on my phone. However, the same code produces several issues in my own project, although I suspect they are all caused by one or two simple underlying problems. How can I resolve these errors?

You could try something like this:

UIImage *originalImage = [UIImage imageNamed:@"something"];

CGSize size = originalImage.size;

UIGraphicsBeginImageContext(size);

[originalImage drawInRect:CGRectMake(0, 0, size.width, size.height)];

// myColor is an instance of UIColor
[myColor setFill];
UIRectFill(CGRectMake(someX, someY, 1, 1));

UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();

UIGraphicsEndImageContext();

You need to break this problem down into multiple steps.
  • Get the coordinates of the touched point in the image's coordinate system
  • Get the x and y position of the pixel to change
  • Create a bitmap context and replace the given pixel's components with the components of your new color

    First, to get the coordinates of the touched point in the image's coordinate system, you can use a category method I have written on UIImageView. This will return a CGAffineTransform that will map a point from view coordinates to image coordinates, depending on the content mode of the view.

    @interface UIImageView (PointConversionCatagory)
    
    @property (nonatomic, readonly) CGAffineTransform viewToImageTransform;
    @property (nonatomic, readonly) CGAffineTransform imageToViewTransform;
    
    @end
    
    @implementation UIImageView (PointConversionCatagory)
    
    -(CGAffineTransform) viewToImageTransform {
    
        UIViewContentMode contentMode = self.contentMode;
    
        // failure conditions. If any of these are met – return the identity transform
        if (!self.image || self.frame.size.width == 0 || self.frame.size.height == 0 ||
            (contentMode != UIViewContentModeScaleToFill && contentMode != UIViewContentModeScaleAspectFill && contentMode != UIViewContentModeScaleAspectFit)) {
            return CGAffineTransformIdentity;
        }
    
        // the width and height ratios
        CGFloat rWidth = self.image.size.width/self.frame.size.width;
        CGFloat rHeight = self.image.size.height/self.frame.size.height;
    
        // whether the image will be scaled according to width
        BOOL imageWiderThanView = rWidth > rHeight;
    
        if (contentMode == UIViewContentModeScaleAspectFit || contentMode == UIViewContentModeScaleAspectFill) {
    
            // The ratio to scale both the x and y axis by
            CGFloat ratio = ((imageWiderThanView && contentMode == UIViewContentModeScaleAspectFit) || (!imageWiderThanView && contentMode == UIViewContentModeScaleAspectFill)) ? rWidth:rHeight;
    
            // The x-offset of the inner rect as it gets centered
            CGFloat xOffset = (self.image.size.width-(self.frame.size.width*ratio))*0.5;
    
            // The y-offset of the inner rect as it gets centered
            CGFloat yOffset = (self.image.size.height-(self.frame.size.height*ratio))*0.5;
    
            return CGAffineTransformConcat(CGAffineTransformMakeScale(ratio, ratio), CGAffineTransformMakeTranslation(xOffset, yOffset));
        } else {
            return CGAffineTransformMakeScale(rWidth, rHeight);
        }
    }
    
    -(CGAffineTransform) imageToViewTransform {
        return CGAffineTransformInvert(self.viewToImageTransform);
    }
    
    @end
    
    Nothing too complicated here, just some extra logic for aspect fit/fill to ensure the centering of the image is taken into account. You could skip this step entirely if you know your image is being displayed on screen at a 1:1 scale.
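    The aspect-fit/fill arithmetic in viewToImageTransform can be sketched in plain C: pick the winning width/height ratio, then compute the centering offsets. (A minimal sketch of the same math; the image and view sizes below are illustrative assumptions.)

    ```c
    #include <stdio.h>

    // Mirror of the aspect-fit branch above: the view->image scale and the
    // centering offsets for a given image and view size.
    typedef struct { double ratio, xOffset, yOffset; } FitTransform;

    static FitTransform aspectFitTransform(double imgW, double imgH,
                                           double viewW, double viewH) {
        double rWidth = imgW / viewW, rHeight = imgH / viewH;
        // For aspect-fit, the larger ratio wins (the image is shrunk until it fits).
        double ratio = rWidth > rHeight ? rWidth : rHeight;
        FitTransform t;
        t.ratio = ratio;
        t.xOffset = (imgW - viewW * ratio) * 0.5;
        t.yOffset = (imgH - viewH * ratio) * 0.5;
        return t;
    }

    int main(void) {
        // A 1000x500 image in a 200x200 view: width ratio 5.0, height ratio 2.5.
        FitTransform t = aspectFitTransform(1000, 500, 200, 200);
        // The negative y-offset accounts for the letterboxed band above the image.
        printf("ratio=%.1f xOff=%.1f yOff=%.1f\n", t.ratio, t.xOffset, t.yOffset);
        return 0;
    }
    ```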

    Next, you'll want to get the x and y position of the pixel to change. This is fairly simple – you just use the viewToImageTransform category property above to get the pixel in the image's coordinate system, and then use floor to make the values integral.

    UITapGestureRecognizer *tapGesture = [[UITapGestureRecognizer alloc] initWithTarget:self action:@selector(imageViewWasTapped:)];
    tapGesture.numberOfTapsRequired = 1;
    [imageView addGestureRecognizer:tapGesture];
    
    ...
    
    -(void) imageViewWasTapped:(UIGestureRecognizer*)tapGesture {
    
        if (!imageView.image) {
            return;
        }
    
        // get the pixel position
        CGPoint pt = CGPointApplyAffineTransform([tapGesture locationInView:imageView], imageView.viewToImageTransform);
        PixelPosition pixelPos = {(NSInteger)floor(pt.x), (NSInteger)floor(pt.y)};
    
        // replace image with new image, with the pixel replaced
        imageView.image = [imageView.image imageWithPixel:pixelPos replacedByColor:[UIColor colorWithRed:0 green:1 blue:1 alpha:1.0]];
    }
    
    Finally, you'll want to use my other category method – imageWithPixel:replacedByColor: – to replace a pixel with a given color, in order to produce the new image.

    /// A simple struct to represent the position of a pixel
    struct PixelPosition {
        NSInteger x;
        NSInteger y;
    };
    
    typedef struct PixelPosition PixelPosition;
    
    @interface UIImage (UIImagePixelManipulationCatagory)
    
    @end
    
    @implementation UIImage (UIImagePixelManipulationCatagory)
    
    -(UIImage*) imageWithPixel:(PixelPosition)pixelPosition replacedByColor:(UIColor*)color {
    
        // components of replacement color – in a 255 UInt8 format (fairly standard bitmap format)
        const CGFloat* colorComponents = CGColorGetComponents(color.CGColor);
        UInt8* color255Components = calloc(sizeof(UInt8), 4);
        for (int i = 0; i < 4; i++) color255Components[i] = (UInt8)round(colorComponents[i]*255.0);
    
        // raw image reference
        CGImageRef rawImage = self.CGImage;
    
        // image attributes
        size_t width = CGImageGetWidth(rawImage);
        size_t height = CGImageGetHeight(rawImage);
        CGRect rect = {CGPointZero, {width, height}};
    
        // image format
        size_t bitsPerComponent = 8;
        size_t bytesPerRow = width*4;
    
        // the bitmap info
        CGBitmapInfo bitmapInfo = kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big;
    
        // data pointer – stores an array of the pixel components. For example (r0, b0, g0, a0, r1, g1, b1, a1 .... rn, gn, bn, an)
        UInt8* data = calloc(bytesPerRow, height);
    
        // get new RGB color space
        CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    
        // create bitmap context
        CGContextRef ctx = CGBitmapContextCreate(data, width, height, bitsPerComponent, bytesPerRow, colorSpace, bitmapInfo);
    
        // draw image into context (populating the data array while doing so)
        CGContextDrawImage(ctx, rect, rawImage);
    
        // get the index of the pixel (4 components times the x position plus the y position times the row width)
        NSInteger pixelIndex = 4*(pixelPosition.x+(pixelPosition.y*width));
    
        // set the pixel components to the color components
        data[pixelIndex] = color255Components[0]; // r
        data[pixelIndex+1] = color255Components[1]; // g
        data[pixelIndex+2] = color255Components[2]; // b
        data[pixelIndex+3] = color255Components[3]; // a
    
        // get image from context
        CGImageRef img = CGBitmapContextCreateImage(ctx);
    
        // clean up
        free(color255Components);
        CGContextRelease(ctx);
        CGColorSpaceRelease(colorSpace);
        free(data);
    
        UIImage* returnImage = [UIImage imageWithCGImage:img];
        CGImageRelease(img);
    
        return returnImage;
    }
    
    @end
    
    What this does is get the index of a given pixel (based on its x and y coordinates) – and then use that index to replace that pixel's component data with the color components of your replacement color.
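    That indexing can be sketched outside of Core Graphics in plain C: in a tightly packed RGBA8888 buffer the pixel at (x, y) starts at byte 4*(x + y*width). (A minimal sketch; the buffer dimensions below are illustrative.)

    ```c
    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>

    // Replace one pixel in a tightly packed RGBA8888 buffer, as the category
    // above does inside the bitmap context. width is in pixels; data holds
    // width * height * 4 bytes in row-major order.
    static void setPixel(uint8_t *data, size_t width, size_t x, size_t y,
                         uint8_t r, uint8_t g, uint8_t b, uint8_t a) {
        size_t i = 4 * (x + y * width); // 4 components per pixel
        data[i] = r; data[i+1] = g; data[i+2] = b; data[i+3] = a;
    }

    int main(void) {
        size_t width = 4, height = 4;
        uint8_t *data = calloc(width * height, 4); // zeroed RGBA buffer
        setPixel(data, width, 2, 1, 0, 255, 255, 255); // cyan at (2, 1)
        size_t i = 4 * (2 + 1 * width); // byte index 24
        printf("pixel (2,1) = (%u, %u, %u, %u)\n",
               data[i], data[i+1], data[i+2], data[i+3]);
        free(data);
        return 0;
    }
    ```

    Note that this assumes the context really is 8 bits per component and 4 bytes per pixel; an image backed by a different pixel format (such as the 16-bits-per-component one in the question's error log) must first be redrawn into a context of this format.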

    Finally, we get an image out of the bitmap context and perform some cleanup.

    Finished result:


    Full project:


    Going to implement this now – I'll let you know my progress. Thanks for the very detailed reply! After running your project on my phone, the code evidently works, so I've marked the question as answered. However, when I run the same code in my own project I run into several errors, which I've listed in an edit to my main post. Though you've answered this question for me, I'd be very glad if you could take a look at those as well.