iOS: fastest way to render an OpenGL texture into a CGContext
Here is a simple question: for some layer compositing, I have to render an OpenGL texture into a CGContext. What is the fastest way to do that?

Thoughts so far: obviously, calling renderInContext: won't capture the OpenGL contents, and glReadPixels is too slow.

For some "context", I am calling this method in the layer's delegate class:
- (void) drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
I have considered using CVOpenGLESTextureCache, but that requires an extra render pass, and it seems to need some complicated conversion after rendering.
Here is my current (terrible) implementation:
glBindRenderbuffer(GL_RENDERBUFFER, displayRenderbuffer);

NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte *) malloc(dataLength * sizeof(GLubyte));

// Read the renderbuffer contents back into CPU memory (the slow part)
glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

// Wrap the pixels in a CGImage
CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
                                kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                ref, NULL, true, kCGRenderingIntentDefault);

// Convert from pixels to points for drawing
CGFloat scale = self.contentScaleFactor;
NSInteger widthInPoints = width / scale;
NSInteger heightInPoints = height / scale;

CGContextSetBlendMode(context, kCGBlendModeCopy);
CGContextDrawImage(context, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
Thanks!

OK. For anyone curious, the method shown above turns out not to be the fastest way.
When a UIView is asked for its contents, it asks its layer (normally a CALayer) to draw them on its behalf. The exception: OpenGL-based views use a CAEAGLLayer (a subclass of CALayer), which goes through the same mechanism but returns nothing. No drawing happens.
So, if you call:
[someUIView.layer drawInContext:someContext];
it will work, while
[someOpenGLView.layer drawInContext:someContext];
will not.

This also becomes a problem if you ask the superview of any OpenGL-based view for its contents: it will recursively ask each of its subviews for theirs, and any subview that uses a CAEAGLLayer will hand back nothing (you will see a black rectangle).

What I set out to find, above, was an implementation of the CALayer delegate method drawLayer:inContext: that I could use in any OpenGL-based view, so that the view object itself supplies its contents (rather than the layer). The delegate method is called automatically: Apple expects it to work this way.

If performance is not a concern, you can implement a variation of a simple snapshot method in your view. It looks like this:
- (void) drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
    GLint backingWidth, backingHeight;

    glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *) malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace,
                                    kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);

    CGContextDrawImage(ctx, self.bounds, iref);

    // Clean up, otherwise every call leaks the image and pixel buffer
    CGImageRelease(iref);
    CGColorSpaceRelease(colorspace);
    CGDataProviderRelease(ref);
    free(data);
}
But this is not a performant implementation.
glReadPixels, as mentioned everywhere, is not a fast call. Starting with iOS 5, Apple exposed CVOpenGLESTextureCacheRef: essentially a shared buffer that can be used both as a CVPixelBufferRef and as an OpenGL texture. Originally it was designed as a way to get an OpenGL texture out of a video frame; now it is more often used in reverse, to get a video frame out of a texture.
So the better implementation of the idea above is to take the CVPixelBufferRef associated with the texture created by CVOpenGLESTextureCacheCreateTextureFromImage, access those pixels directly, draw them into a CGImage, cache that CGImage, and draw it into the context in the delegate method above.
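The answer below uses a render target and framebuffer without showing how they are created. A minimal one-time setup sketch might look like the following; the names renderTarget, renderTexture, and layerRenderingFramebuffer are chosen to match the render code below, the pixel format and error handling are assumptions, and this is a sketch rather than the author's actual setup:

```objectivec
// One-time setup: create a texture cache tied to the current EAGL context.
CVOpenGLESTextureCacheRef textureCache;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL,
                             [EAGLContext currentContext], NULL, &textureCache);

// Create a CVPixelBuffer backed by an IOSurface so it can be shared
// with OpenGL without copying.
CFDictionaryRef empty = CFDictionaryCreate(kCFAllocatorDefault, NULL, NULL, 0,
                                           &kCFTypeDictionaryKeyCallBacks,
                                           &kCFTypeDictionaryValueCallBacks);
CFMutableDictionaryRef attrs =
    CFDictionaryCreateMutable(kCFAllocatorDefault, 1,
                              &kCFTypeDictionaryKeyCallBacks,
                              &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, empty);

CVPixelBufferRef renderTarget;
CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                    kCVPixelFormatType_32BGRA, attrs, &renderTarget);

// Wrap the pixel buffer in an OpenGL texture via the cache.
CVOpenGLESTextureRef renderTexture;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
    renderTarget, NULL, GL_TEXTURE_2D, GL_RGBA, (GLsizei)width, (GLsizei)height,
    GL_BGRA, GL_UNSIGNED_BYTE, 0, &renderTexture);

// Attach the texture as the color attachment of the framebuffer that
// the render pass below draws into.
GLuint layerRenderingFramebuffer;
glGenFramebuffers(1, &layerRenderingFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, layerRenderingFramebuffer);
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture),
              CVOpenGLESTextureGetName(renderTexture));
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       CVOpenGLESTextureGetName(renderTexture), 0);
```

After this, anything rendered into layerRenderingFramebuffer lands directly in renderTarget's memory, which is what makes the direct pixel access below possible.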
Here is the code. On every render pass, draw the texture into the texture cache, which is linked to the CVPixelBufferRef:
- (void) renderToCGImage {
    // Setup the drawing
    [ochrContext useProcessingContext];
    glBindFramebuffer(GL_FRAMEBUFFER, layerRenderingFramebuffer);
    glViewport(0, 0, (int) self.frame.size.width, (int) self.frame.size.height);
    [ochrContext setActiveShaderProgram:layerRenderingShaderProgram];

    // Do the actual drawing
    glActiveTexture(GL_TEXTURE4);
    glBindTexture(GL_TEXTURE_2D, self.inputTexture);
    glUniform1i(layerRenderingInputTextureUniform, 4);
    glVertexAttribPointer(layerRenderingShaderPositionAttribute, 2, GL_FLOAT, 0, 0, kRenderTargetVertices);
    glVertexAttribPointer(layerRenderingShaderTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, kRenderTextureVertices);

    // Draw and finish up
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glFinish();

    // Try running this code asynchronously to improve performance
    dispatch_async(PixelBufferReadingQueue, ^{
        // Lock the base address (can't get the address without locking it)
        CVPixelBufferLockBaseAddress(renderTarget, 0);

        // Get a pointer to the pixels
        uint32_t *pixels = (uint32_t *) CVPixelBufferGetBaseAddress(renderTarget);

        // Wrap the pixel data in a data-provider object
        CGDataProviderRef pixelWrapper = CGDataProviderCreateWithData(NULL, pixels, CVPixelBufferGetDataSize(renderTarget), NULL);

        // Get a color-space ref... can't this be done only once?
        CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();

        // Release the existing CGImage
        CGImageRelease(currentCGImage);

        // Get a CGImage from the data (the CGImage is used in the drawLayer: delegate method above)
        currentCGImage = CGImageCreate(self.frame.size.width,
                                       self.frame.size.height,
                                       8,
                                       32,
                                       4 * self.frame.size.width,
                                       colorSpaceRef,
                                       kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                       pixelWrapper,
                                       NULL,
                                       NO,
                                       kCGRenderingIntentDefault);

        // Clean up
        CVPixelBufferUnlockBaseAddress(renderTarget, 0);
        CGDataProviderRelease(pixelWrapper);
        CGColorSpaceRelease(colorSpaceRef);
    });
}
Then implement the delegate method very simply:
- (void) drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
    CGContextDrawImage(ctx, self.bounds, currentCGImage);
}
I hope this helps, dear internet.

Comment: how do you manage to call this repeatedly? Is it driven by a run loop?
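The answer does not say how the per-frame cycle is driven. One common way to do it (an assumption, not stated in the answer; the method names startRendering and tick: are hypothetical) is a CADisplayLink that fires once per screen refresh:

```objectivec
// Hypothetical sketch: a CADisplayLink drives both the offscreen render
// and the layer redraw once per display refresh.
- (void)startRendering {
    CADisplayLink *link =
        [CADisplayLink displayLinkWithTarget:self selector:@selector(tick:)];
    [link addToRunLoop:[NSRunLoop mainRunLoop] forMode:NSRunLoopCommonModes];
}

- (void)tick:(CADisplayLink *)link {
    [self renderToCGImage];        // refreshes currentCGImage (method above)
    [self.layer setNeedsDisplay];  // causes drawLayer:inContext: to be called
}
```

Any other periodic trigger (a video-frame callback, a timer) would work the same way, as long as renderToCGImage runs before the layer is asked to redraw.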