How to capture a specific part of the screen in iPhone iOS

I want to capture a specific part of the iPhone screen. I used UIGraphicsBeginImageContextWithOptions, but I could not capture just a portion of the screen.
Please help.

You can take a screenshot with UIGraphicsGetImageFromCurrentImageContext. I am writing the following code from memory, so it may contain mistakes; please correct them yourself:

- (UIImage *)captureView:(UIView *)yourView {
    // Size the image context to the view itself, then render its layer into it.
    CGRect rect = yourView.bounds;
    UIGraphicsBeginImageContext(rect.size);
    CGContextRef context = UIGraphicsGetCurrentContext();
    [yourView.layer renderInContext:context];
    UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return image;
}
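
A minimal usage sketch (self.viewToCapture is only a placeholder for whichever view you want to snapshot):

// Hypothetical call site: capture the view and, for example, save it as the later answers do.
UIImage *snapshot = [self captureView:self.viewToCapture];  // placeholder property
UIImageWriteToSavedPhotosAlbum(snapshot, nil, nil, nil);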

You can use this code:

// Render the whole view at scale 1.0 (UIGraphicsBeginImageContext ignores the Retina scale).
UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Crop the part of the screen you are interested in (the coordinates are just an example).
CGRect rect = CGRectMake(250, 61, 410, 255);
CGImageRef imageRef = CGImageCreateWithImageInRect([viewImage CGImage], rect);
UIImage *img = [UIImage imageWithCGImage:imageRef];

// Save the cropped image to the photo album and release the CGImage.
UIImageWriteToSavedPhotosAlbum(img, nil, nil, nil);
CGImageRelease(imageRef);
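
On a Retina device the snippet above produces a 1x image, because UIGraphicsBeginImageContext always renders at scale 1.0. Below is a hedged sketch of the same idea with the screen scale taken into account (the rect values are just the example coordinates from above); it mirrors what the Swift extension further down does:

// Render at the device's screen scale instead of 1.0.
CGFloat screenScale = [UIScreen mainScreen].scale;
UIGraphicsBeginImageContextWithOptions(self.view.bounds.size, NO, screenScale);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *retinaImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// The backing CGImage is measured in pixels, so convert the point-based
// rect to pixel coordinates before cropping.
CGRect pointRect = CGRectMake(250, 61, 410, 255);
CGRect pixelRect = CGRectApplyAffineTransform(pointRect,
                       CGAffineTransformMakeScale(screenScale, screenScale));
CGImageRef croppedRef = CGImageCreateWithImageInRect(retinaImage.CGImage, pixelRect);
UIImage *croppedImage = [UIImage imageWithCGImage:croppedRef
                                            scale:screenScale
                                      orientation:UIImageOrientationUp];
UIImageWriteToSavedPhotosAlbum(croppedImage, nil, nil, nil);
CGImageRelease(croppedRef);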

Another way to capture a sharp screenshot:

// Use the Retina scale when the device supports it.
if ([[UIScreen mainScreen] respondsToSelector:@selector(scale)]) {
    UIGraphicsBeginImageContextWithOptions(self.captureView.frame.size, NO, [UIScreen mainScreen].scale);
} else {
    UIGraphicsBeginImageContext(self.captureView.frame.size);
}

[self.captureView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
[viewImage drawInRect:CGRectMake(0.0, 0.0, 640, 960)];  // draws the snapshot back into the context at a fixed size
//NSData *m_Imgdata = UIImagePNGRepresentation(viewImage);
NSData *m_Imgdata = UIImageJPEGRepresentation(viewImage, 1.0);

// UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil);

UIGraphicsEndImageContext();

The answer given by @Superdev is correct. What I would like to add is a small trick for when you want to capture a parent view while leaving out one of its subviews (an overlay view in particular). Keep a reference to that subview as a property and use the following:

CGFloat width = CGRectGetWidth(self.view.bounds);
CGFloat height = CGRectGetHeight(self.view.bounds);

// Hide the overlay while rendering so it does not appear in the snapshot.
_overlayView.hidden = YES;

UIGraphicsBeginImageContext(self.view.bounds.size);
[self.view.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// Crop (here to the full view) and save the result.
CGRect rect = CGRectMake(0, 0, width, height);
CGImageRef imageRef = CGImageCreateWithImageInRect([viewImage CGImage], rect);
UIImage *img = [UIImage imageWithCGImage:imageRef];
UIImageWriteToSavedPhotosAlbum(img, nil, nil, nil);
CGImageRelease(imageRef);

// Show the overlay again once the snapshot has been taken.
_overlayView.hidden = NO;

It is a small trick, but hopefully someone finds it useful.

Here is a Swift extension on UIView that captures either the whole view or a specific frame within it. It takes the screen scale into account:

extension UIView {

    func imageSnapshot() -> UIImage {
        return self.imageSnapshotCroppedToFrame(nil)
    }

    func imageSnapshotCroppedToFrame(frame: CGRect?) -> UIImage {
        let scaleFactor = UIScreen.mainScreen().scale
        UIGraphicsBeginImageContextWithOptions(bounds.size, false, scaleFactor)
        self.drawViewHierarchyInRect(bounds, afterScreenUpdates: true)
        var image: UIImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        if let frame = frame {
            // UIImages are measured in points, but CGImages are measured in pixels
            let scaledRect = CGRectApplyAffineTransform(frame, CGAffineTransformMakeScale(scaleFactor, scaleFactor))

            if let imageRef = CGImageCreateWithImageInRect(image.CGImage, scaledRect) {
                image = UIImage(CGImage: imageRef)
            }
        }
        return image
    }
}

Swift 3.0 / Xcode 8 version of @jDutton's answer; it works for me:

extension UIView {
    func imageSnapshot() -> UIImage {
        return self.imageSnapshotCroppedToFrame(frame: nil)
    }

    func imageSnapshotCroppedToFrame(frame: CGRect?) -> UIImage {
        let scaleFactor = UIScreen.main.scale
        UIGraphicsBeginImageContextWithOptions(bounds.size, false, scaleFactor)
        self.drawHierarchy(in: bounds, afterScreenUpdates: true)
        var image: UIImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()

        if let frame = frame {
            // UIImages are measured in points, but CGImages are measured in pixels
            let scaledRect = frame.applying(CGAffineTransform(scaleX: scaleFactor, y: scaleFactor))

            if let imageRef = image.cgImage!.cropping(to: scaledRect) {
                image = UIImage(cgImage: imageRef)
            }
        }
        return image
    }
}

If it crashes and you get the following messages:

CGContextSaveGState: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
CGContextRestoreGState: invalid context 0x0. If you want to see the backtrace, please set CG_CONTEXT_SHOW_BACKTRACE environmental variable.
try this


Hope it saves you some time.
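
One common trigger for those "invalid context 0x0" messages is rendering before a valid bitmap context exists, for example when the view being snapshotted still has zero-sized bounds. Below is a small defensive sketch, using the same layer-rendering approach as the earlier answers and assuming self.captureView is the view being captured (the method name safeSnapshotOfCaptureView is made up for this example):

// Defensive variant: refuse to render when no valid context can be created.
- (UIImage *)safeSnapshotOfCaptureView {
    CGSize size = self.captureView.bounds.size;
    if (CGSizeEqualToSize(size, CGSizeZero)) {
        // A zero size leaves no current bitmap context, which is one common
        // source of the "invalid context 0x0" log lines above.
        return nil;
    }

    UIGraphicsBeginImageContextWithOptions(size, NO, [UIScreen mainScreen].scale);
    CGContextRef context = UIGraphicsGetCurrentContext();
    UIImage *snapshot = nil;
    if (context != NULL) {
        [self.captureView.layer renderInContext:context];
        snapshot = UIGraphicsGetImageFromCurrentImageContext();
    }
    UIGraphicsEndImageContext();
    return snapshot;
}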

Do you want to take a screenshot from your machine, or do you want to capture an area of your app in code?
@mAc obviously wants to do it in code; otherwise, why would he be using UIGraphicsBeginImageContextWithOptions?
This is helpful when you have to capture a specific area of the screen. I got the specific part from it, but in some views it hides their content. Can you tell me why that happens?
Just for future Googlers: this code does not take screen scale into account.
jDutton, I like your answer. I decided to update the code for Swift 3.0. Thanks a lot!!