iOS: how can I optimize the following Core Graphics code?

Tags: ios, performance, opengl-es, core-graphics

I want to capture a screenshot of both OpenGL ES and UIKit content in a single pass. After a lot of research, I found an approach along these lines:
- (UIImage *)makeScreenshot {
    GLint backingWidth, backingHeight;

    // Bind the color renderbuffer used to render the OpenGL ES view
    // If your application only creates a single color renderbuffer which is already bound at this point,
    // this call is redundant, but it is needed if you're dealing with multiple renderbuffers.
    // Note, replace "_colorRenderbuffer" with the actual name of the renderbuffer object defined in your class.
    // glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);

    // Get the size of the backing CAEAGLLayer
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

    // NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger x = _visibleFrame.origin.x, y = _visibleFrame.origin.y, width = _visibleFrame.size.width, height = _visibleFrame.size.height;

    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    // Create a CGImage with the pixel data
    // If your OpenGL ES content is opaque, use kCGImageAlphaNoneSkipLast to ignore the alpha channel;
    // otherwise, use kCGImageAlphaPremultipliedLast
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    // CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast, ref, NULL, true, kCGRenderingIntentDefault);
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaNoneSkipLast, ref, NULL, true, kCGRenderingIntentDefault);

    // OpenGL ES measures data in PIXELS
    // Create a graphics context with the target size measured in POINTS
    NSInteger widthInPoints, heightInPoints;
    if (NULL != UIGraphicsBeginImageContextWithOptions) {
        // On iOS 4 and later, use UIGraphicsBeginImageContextWithOptions to take the scale into consideration
        // Set the scale parameter to your OpenGL ES view's contentScaleFactor
        // so that you get a high-resolution snapshot when its value is greater than 1.0
        CGFloat scale = _baseView.contentScaleFactor;
        widthInPoints = width / scale;
        heightInPoints = height / scale;
        UIGraphicsBeginImageContextWithOptions(CGSizeMake(widthInPoints, heightInPoints), NO, scale);
    }
    else {
        // On iOS prior to 4, fall back to UIGraphicsBeginImageContext
        widthInPoints = width;
        heightInPoints = height;
        UIGraphicsBeginImageContext(CGSizeMake(widthInPoints, heightInPoints));
    }

    CGContextRef cgcontext = UIGraphicsGetCurrentContext();

    // UIKit coordinate system is upside down to GL/Quartz coordinate system
    // Flip the CGImage by rendering it to the flipped bitmap context
    // The size of the destination area is measured in POINTS
    CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
    CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

    // Retrieve the UIImage from the current context
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Clean up
    free(data);
    CFRelease(ref);
    CFRelease(colorspace);
    CGImageRelease(iref);

    // return image;

    UIImageView *GLImage = [[UIImageView alloc] initWithImage:image];

    UIGraphicsBeginImageContext(_visibleFrame.size);
    // The order of rendering into the context depends on what should be drawn first;
    // this draws the UIKit content on top of the GL image
    [GLImage.layer renderInContext:UIGraphicsGetCurrentContext()];
    CGContextTranslateCTM(UIGraphicsGetCurrentContext(), -_visibleFrame.origin.x, -_visibleFrame.origin.y);
    [_baseView.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *finalImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Do something with resulting image
    return finalImage;
}
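As the comments in the snippet note, glReadPixels returns rows bottom-up relative to UIKit's coordinate system; the code above handles that while drawing with CGContextDrawImage. The same flip can also be done directly on the CPU buffer. A minimal sketch in plain C, assuming the same tightly packed RGBA layout (width * height * 4 bytes) as the glReadPixels call above:

```c
#include <stdlib.h>
#include <string.h>

/* Flip an RGBA8 pixel buffer vertically in place.
 * Rows are assumed tightly packed (stride == width * 4),
 * matching the glReadPixels buffer in the snippet above. */
static void flip_rgba_rows(unsigned char *pixels, size_t width, size_t height)
{
    size_t stride = width * 4;
    unsigned char *tmp = (unsigned char *)malloc(stride);
    if (tmp == NULL) {
        return;
    }
    for (size_t y = 0; y < height / 2; y++) {
        unsigned char *top = pixels + y * stride;
        unsigned char *bottom = pixels + (height - 1 - y) * stride;
        /* Swap the top row with its mirrored bottom row */
        memcpy(tmp, top, stride);
        memcpy(top, bottom, stride);
        memcpy(bottom, tmp, stride);
    }
    free(tmp);
}
```

This costs one extra pass over the pixels, so it is only worth it if you want to avoid the Core Graphics draw entirely.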
The interesting part, though, is the merging. I have two

UIGraphicsBeginImageContext();
.......
.......
UIGraphicsEndImageContext();

blocks: the first produces the OpenGL ES image, the second merges it with the UIKit image. Is there a better way to do this within a single UIGraphicsBeginImageContext() … UIGraphicsEndImageContext() block, instead of creating a UIImageView and then rendering it? Something like:
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// UIKit coordinate system is upside down to GL/Quartz coordinate system
// Flip the CGImage by rendering it to the flipped bitmap context
// The size of the destination area is measured in POINTS
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);
// the merging part starts
CGContextTranslateCTM(UIGraphicsGetCurrentContext(), -_visibleFrame.origin.x, -_visibleFrame.origin.y);
[_baseView.layer renderInContext:UIGraphicsGetCurrentContext()];
// the merging part ends
// Retrieve the UIImage from the current context
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
But unfortunately, it does not merge. Can someone point out the mistake here and/or suggest the best way to do this?

UISnapshotting

With iOS 7, Apple introduced UISnapshotting, which they claim is very fast, much faster than renderInContext::
UIView *snapshot = [view snapshotViewAfterScreenUpdates:YES];
This method captures the current visual contents of the screen from the render server and uses them to build a new snapshot view. You can use the returned snapshot view as a visual stand-in for the screen's contents in your app. (…) This method is faster than trying to render the contents of the screen into a bitmap image yourself.
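If you need a UIImage rather than a live snapshot view, iOS 7 also added drawViewHierarchyInRect:afterScreenUpdates: on UIView, which goes through the same fast snapshotting path. A minimal sketch, assuming view is the view to capture:

```objc
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0.0);
// Renders the view hierarchy (including OpenGL ES content) into the current context
[view drawViewHierarchyInRect:view.bounds afterScreenUpdates:YES];
UIImage *snapshotImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```

Passing 0.0 as the scale uses the device's main screen scale, so the snapshot comes out at native resolution.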
Also, take a look at the links below; they should give you some insight and point you in the right direction.
Thanks for the information. My application still supports iOS 5.1, so I need a solution for that as well.

What do you mean by "it doesn't merge"? What are you seeing, only the base view?

@JackWu Yes, you are right, I can only see the base view.
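Given the symptom in the comments (only the base view is visible), one plausible cause is that the context's blend mode is still kCGBlendModeCopy when renderInContext: draws the UIKit layer, so the layer overwrites the GL pixels (including any transparent regions) instead of compositing over them. An untested sketch of the single-block version with the blend mode restored, reusing the variable names from the question's code:

```objc
CGContextRef cgcontext = UIGraphicsGetCurrentContext();
// Copy the GL snapshot into the context first
CGContextSetBlendMode(cgcontext, kCGBlendModeCopy);
CGContextDrawImage(cgcontext, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

// Restore normal compositing before rendering the UIKit layer on top
CGContextSetBlendMode(cgcontext, kCGBlendModeNormal);
CGContextSaveGState(cgcontext);
CGContextTranslateCTM(cgcontext, -_visibleFrame.origin.x, -_visibleFrame.origin.y);
[_baseView.layer renderInContext:cgcontext];
CGContextRestoreGState(cgcontext);

UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```

The save/restore pair keeps the translation from leaking into any later drawing in the same context.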