Why does glReadPixels() fail in this code on iOS 6.0?


Here is the code I use to read an image from an OpenGL ES scene:

-(UIImage *)getImage{

    GLint width;

    GLint height;

    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &width);

    glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &height);


    NSLog(@"%d %d",width,height);

    NSInteger myDataLength = width * height * 4;

    // allocate array and read pixels into it.
    GLubyte *buffer = (GLubyte *) malloc(myDataLength);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // gl renders "upside down" so swap top to bottom into new array.
    // there's gotta be a better way, but this works.
    GLubyte *buffer2 = (GLubyte *) malloc(myDataLength);
    for(int y = 0; y < height; y++)
        {
        for(int x = 0; x < width * 4; x++)
            {
            buffer2[((height - 1) - y) * width * 4 + x] = buffer[y * 4 * width + x];
            }
        }

    // make data provider with data.
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

    // prep the ingredients
    int bitsPerComponent = 8;
    int bitsPerPixel = 32;
    int bytesPerRow = 4 * width;
    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // make the cgimage
    CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    // then make the uiimage from that
    UIImage *myImage = [UIImage imageWithCGImage:imageRef];
    CGImageRelease(imageRef);
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    free(buffer);
    free(buffer2);
    return myImage;

}
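The nested loops above flip the image one byte at a time; the same vertical flip can be expressed one row at a time with `memcpy`, which is easier to verify in isolation. A minimal C sketch of that logic (the `flip_rows` name is mine, not from the original):

```c
#include <string.h>

/* Flip an RGBA pixel buffer vertically, as the loops above do.
   width and height are in pixels; each pixel is 4 bytes. */
static void flip_rows(const unsigned char *src, unsigned char *dst,
                      int width, int height)
{
    int rowBytes = width * 4;
    for (int y = 0; y < height; y++) {
        /* Source row y lands at destination row (height - 1 - y). */
        memcpy(dst + (height - 1 - y) * rowBytes,
               src + y * rowBytes,
               rowBytes);
    }
}
```

For a 1×2 image, the two 4-byte rows simply swap places, which matches what the byte-by-byte loop computes.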

This works fine on iOS 5.x and earlier, but on iOS 6.0 it now returns a black image. Why does
`glReadPixels()`
fail on iOS 6.0?

Try using this method to capture the screenshot image instead. The output image is
`Mailimage`:

- (UIImage*)screenshot
{
    // Create a graphics context with the target size
    // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
    // On iOS prior to 4, fall back to use UIGraphicsBeginImageContext
    CGSize imageSize = [[UIScreen mainScreen] bounds].size;
    if (NULL != UIGraphicsBeginImageContextWithOptions)
        UIGraphicsBeginImageContextWithOptions(imageSize, NO, 0);
    else
        UIGraphicsBeginImageContext(imageSize);

    CGContextRef context = UIGraphicsGetCurrentContext();

    // Iterate over every window from back to front
    for (UIWindow *window in [[UIApplication sharedApplication] windows])
    {
        if (![window respondsToSelector:@selector(screen)] || [window screen] == [UIScreen mainScreen])
        {
            // -renderInContext: renders in the coordinate space of the layer,
            // so we must first apply the layer's geometry to the graphics context
            CGContextSaveGState(context);
            // Center the context around the window's anchor point
            CGContextTranslateCTM(context, [window center].x, [window center].y);
            // Apply the window's transform about the anchor point
            CGContextConcatCTM(context, [window transform]);
            // Offset by the portion of the bounds left of and above the anchor point
            CGContextTranslateCTM(context,
                                  -[window bounds].size.width * [[window layer] anchorPoint].x,
                                  -[window bounds].size.height * [[window layer] anchorPoint].y);

            // Render the layer hierarchy to the current context
            [[window layer] renderInContext:context];

            // Restore the context
            CGContextRestoreGState(context);
        }
    }

    // Retrieve the screenshot image
    UIImage *Mailimage = UIGraphicsGetImageFromCurrentImageContext();

    UIGraphicsEndImageContext();

    return Mailimage;


}
(I don't know why, but this trick works.)
Hm, mine looks and works the same, except I use
`GLint viewport[4]; glGetIntegerv(GL_VIEWPORT, viewport); int width = viewport[2]; int height = viewport[3];`
— @SAKrisT that doesn't work on iOS 6.0. — It works in my project; check for OpenGL errors with `glGetError()`. — Please don't roll back my edits; they make the question more descriptive and easier to search. Also, this has nothing to do with GPUImage, so I removed that tag. — It gives me the whole screen image, not the image in the buffer.

The likely cause is that you are trying to read from the screen after the content has been presented with
`[context presentRenderbuffer:GL_RENDERBUFFER]`. Unless you set the layer to use retained backing, as is done above, there is no guarantee that the content is preserved after presentation. iOS 6.0 appears to be more aggressive about discarding content it no longer needs. Retained backing costs performance, so it is better to call
`glFinish()`,
then capture the screen, and only then present the renderbuffer. — In my code I also removed `eaglLayer.opaque = TRUE`, which was preventing the screenshot code from working, and set `kEAGLDrawablePropertyRetainedBacking = YES`; now it works. — How and where do you set `kEAGLDrawablePropertyRetainedBacking = YES`? — Hi, I have set `kEAGLDrawablePropertyRetainedBacking = YES` but it still doesn't work:
CAEAGLLayer *eaglLayer = (CAEAGLLayer *) self.layer;
eaglLayer.drawableProperties = @{
    kEAGLDrawablePropertyRetainedBacking: [NSNumber numberWithBool:YES],
    kEAGLDrawablePropertyColorFormat: kEAGLColorFormatRGBA8
};
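The advice above boils down to an ordering rule: finish rendering, capture, then present. A sketch of that ordering in Objective-C, assuming `self.context` is your `EAGLContext` and `-getImage` is the capture method from the question (method and property names here are illustrative, not from the original):

```objc
- (void)drawFrame {
    // ... issue all OpenGL ES drawing commands for this frame ...

    // Block until the GPU has finished rendering into the renderbuffer.
    glFinish();

    // Capture while the renderbuffer still holds this frame's contents.
    UIImage *snapshot = [self getImage];

    // Only now hand the buffer to the compositor. After this call its
    // contents are undefined unless retained backing is enabled.
    [self.context presentRenderbuffer:GL_RENDERBUFFER];

    // ... use snapshot ...
}
```

If you instead need to read back after presenting, you must set `kEAGLDrawablePropertyRetainedBacking` to `YES` in the layer's `drawableProperties`, as shown above, at the cost of some performance.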