iOS Core Graphics - Drawing a Grayscale Image from an Integer Array

Tags: ios, core-graphics, cgimage

I am trying to create a UIImage using Core Graphics.

What I want is to draw an image divided into 4 different grayscale areas/pixels:

... white ... gray

... gray ... black

So, using Core Graphics, I want to define an array of 4 different int8_t values that correspond to the desired image:

int8_t data[] = {
    255,   122,
    122,     0,
};
255 is white

122 is gray

0 is black

The best reference for similar code that I could find is:

That reference is for an RGB image, so I came up with the following code based on my own common sense (pardon my Objective-C - it is not my native language :) ):

But... this code gives me a completely black image:

Can anyone point me to something I can read to understand how to accomplish a task like this? The amount of material on Core Graphics seems very thin when you try to do something like this, so it's guessing time... every time :)

You're close.

A grayscale image needs two components per pixel: brightness and alpha.

So, with just a few changes (see the comments):
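As an illustration - a minimal sketch of that idea, not the answer's original snippet - assume we interleave one brightness byte with one alpha byte per pixel in a device-gray color space:

    // Sketch: in a gray color space, each pixel is 2 bytes: [brightness, alpha].
    uint8_t gray[] = { 255, 122, 122, 0 };            // the 4 desired brightness levels

    int pixelCount = 4;
    uint8_t *buf = (uint8_t *)malloc(pixelCount * 2); // 2 bytes per pixel
    for (int i = 0; i < pixelCount; i++) {
        buf[i * 2]     = gray[i];                     // brightness (luminance)
        buf[i * 2 + 1] = 255;                         // alpha: fully opaque
    }
    // buf is then handed to CGDataProviderCreateWithData / CGImageCreate,
    // exactly as in the complete code below; the provider's callback frees it.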


Edit - I think there is an issue with the memory-buffer addressing in the code above. After some testing, I was getting inconsistent results.

Please try this modified code instead:

@interface TestingViewController : UIViewController
@end
@interface TestingViewController ()
@end
@implementation TestingViewController

// CGDataProviderCreateWithData callback to free the pixel data buffer
void freePixelData(void *info, const void *data, size_t size) {
    free((void *)data);
}

- (UIImage*) getImageFromGrayScaleArray:(BOOL)allBlack {
    
    int8_t grayArray[] = {
        255, 122,
        122, 0,
    };
    
    int8_t blackArray[] = {
        0, 0,
        0, 0,
    };
    
    int width = 2;
    int height = 2;
    
    int imageSizeInPixels = width * height;
    int bytesPerPixel = 2; // 1 byte for brightness, 1 byte for alpha
    unsigned char *pixels = (unsigned char *)malloc(imageSizeInPixels * bytesPerPixel);
    memset(pixels, 255, imageSizeInPixels * bytesPerPixel); // setting alpha values to 255
    
    if (allBlack) {
        for (int i = 0; i < imageSizeInPixels; i++) {
            pixels[i * 2] = blackArray[i]; // writing array of bytes as image brightnesses
        }
    } else {
        for (int i = 0; i < imageSizeInPixels; i++) {
            pixels[i * 2] = grayArray[i]; // writing array of bytes as image brightnesses
        }
    }
    
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceGray();
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL,
                                                              pixels,
                                                              imageSizeInPixels * bytesPerPixel,
                                                              freePixelData);
    
    CGImageRef imageRef = CGImageCreate(width,
                                        height,
                                        8,
                                        8 * bytesPerPixel,
                                        width * bytesPerPixel,
                                        colorSpaceRef,
                                        kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big,
                                        provider,
                                        NULL,
                                        false,
                                        kCGRenderingIntentDefault);
    
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    
    return image;
}

- (void)viewDidLoad {
    [super viewDidLoad];
    
    self.view.backgroundColor = [UIColor systemTealColor];
    
    UIImage *img1 = [self getImageFromGrayScaleArray:NO];
    UIImage *img2 = [self getImageFromGrayScaleArray:YES];
    
    UIImageView *v1 = [UIImageView new];
    UIImageView *v2 = [UIImageView new];
    
    v1.image = img1;
    v1.backgroundColor = [UIColor systemYellowColor];
    v2.image = img2;
    v2.backgroundColor = [UIColor systemYellowColor];
    
    v1.contentMode = UIViewContentModeScaleToFill;
    v2.contentMode = UIViewContentModeScaleToFill;
    
    v1.translatesAutoresizingMaskIntoConstraints = NO;
    [self.view addSubview:v1];
    v2.translatesAutoresizingMaskIntoConstraints = NO;
    [self.view addSubview:v2];
    
    UILayoutGuide *g = [self.view safeAreaLayoutGuide];
    
    [NSLayoutConstraint activateConstraints:@[
        
        [v1.topAnchor constraintEqualToAnchor:g.topAnchor constant:40.0],
        [v1.centerXAnchor constraintEqualToAnchor:g.centerXAnchor],
        [v1.widthAnchor constraintEqualToConstant:200.0],
        [v1.heightAnchor constraintEqualToAnchor:v1.widthAnchor],
        
        [v2.topAnchor constraintEqualToAnchor:v1.bottomAnchor constant:40.0],
        [v2.centerXAnchor constraintEqualToAnchor:self.view.centerXAnchor],
        [v2.widthAnchor constraintEqualToAnchor:v1.widthAnchor],
        [v2.heightAnchor constraintEqualToAnchor:v2.widthAnchor],
        
    ]];
}

@end
The top image uses the grayArray values shown in the code, and the bottom image uses:

    int8_t blackArray[] = {
        0, 0,
        0, 0,
    };
    
Output:


Comments:

DonMag: Is your goal to create a 2x2-pixel image? With white-gray as the "top row" and gray-black as the "bottom row"?

Rankito: Yes, I want to create a 2x2-pixel UIImage. DonMag, thank you very much for your answer - I found the information very helpful. However, when I run it in the simulator, the result doesn't match what one would expect from "1 byte for brightness, 1 byte for alpha". With int8_t data[] = { 0, 255, 0, 255, 0, 255, }; I would expect a completely black image, even when I stretch it. It isn't - the simulator shows something else. I'd also love to know where the black-magic kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big comes from... two components... :)

DonMag: @Rankito - see the edit to my answer. As for kCGImageAlphaPremultipliedLast | kCGBitmapByteOrder32Big ... I pulled it from some code I wrote a few years ago, and I don't remember how I arrived at it :( -- Playing around a bit, this also seems to give the desired result: kCGImageAlphaLast | kCGBitmapByteOrderDefault

Rankito: Great edit! Thank you so much - this answer deserves to be bookmarked. Thanks for sharing! One last question :) - why is the data declared as int8_t while the pixel buffer is malloc'd as unsigned char? How do the two correspond? I tested with int8_t *pixels = (int8_t *)malloc(imageSizeInPixels * bytesPerPixel); and it seems to work fine, so I'm wondering why we use char instead.

DonMag: @Rankito - I would expect (int8_t *) to be equivalent... The (unsigned char *)malloc usage comes from the old code I dug up, and it may have had a specific purpose when I originally wrote it. After a quick search, I'd say that for this particular usage they are interchangeable, and we could also use uint8_t... If we look at the header definition we see: typedef signed char int8_t. For this case, signed vs. unsigned doesn't matter; in the end, consistency is probably the deciding factor.
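To make the two points from these comments concrete, here is a minimal sketch (an assumption-based illustration, reusing the width, height, bytesPerPixel, colorSpaceRef, and provider variables from the answer's getImageFromGrayScaleArray: method) of the alternate bitmap-info combination and of why the signed/unsigned distinction does not change the stored bytes:

    // Alternate bitmap info mentioned in the comments: non-premultiplied alpha as
    // the last component, default byte order. With one 8-bit brightness byte plus
    // one 8-bit alpha byte per pixel, this also renders the expected gray image.
    CGImageRef altImageRef = CGImageCreate(width,
                                           height,
                                           8,                     // bits per component
                                           8 * bytesPerPixel,     // bits per pixel (16)
                                           width * bytesPerPixel, // bytes per row
                                           colorSpaceRef,
                                           kCGImageAlphaLast | kCGBitmapByteOrderDefault,
                                           provider,
                                           NULL,
                                           false,
                                           kCGRenderingIntentDefault);
    // (remember to CGImageRelease(altImageRef) when done)

    // Signed vs. unsigned storage: Core Graphics only sees the raw bit pattern.
    int8_t signedByte = (int8_t)255;          // wraps to -1 on two's-complement targets (0xFF)
    unsigned char unsignedByte = (unsigned char)signedByte; // reads back as 255
    // So int8_t, uint8_t, and unsigned char are interchangeable for this pixel buffer.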