iOS: AVCaptureSession with multiple previews


I have an AVCaptureSession running with an AVCaptureVideoPreviewLayer.

I can see the video, so I know it's working.

However, I'd like to have a collection view and add a preview layer to each cell, so that every cell shows a preview of the video.

If I try to pass the preview layer into a cell and add it as a sublayer, it removes the layer from the other cells, so the video only ever displays in one cell at a time.

Is there another (better) way of doing this?

You can't have multiple previews. There is only one output stream, as Apple's AVFoundation documentation says. I've tried many ways, but you just can't do it.

Instead, implement the AVCaptureSession delegate method:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection

This gives you the sample buffer output of each video frame. Using the buffer output, you can create an image with the following method:

- (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer 
{
    // Get a CMSampleBuffer's Core Video image buffer for the media data
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); 
    // Lock the base address of the pixel buffer
    CVPixelBufferLockBaseAddress(imageBuffer, 0); 

    // Get the base address of the pixel buffer
    void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer); 

    // Get the number of bytes per row for the pixel buffer
    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer); 
    // Get the pixel buffer width and height
    size_t width = CVPixelBufferGetWidth(imageBuffer); 
    size_t height = CVPixelBufferGetHeight(imageBuffer); 

    // Create a device-dependent RGB color space
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB(); 

    // Create a bitmap graphics context with the sample buffer data
    CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8, 
                                                 bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); 
    // Create a Quartz image from the pixel data in the bitmap graphics context
    CGImageRef quartzImage = CGBitmapContextCreateImage(context); 
    // Unlock the pixel buffer
    CVPixelBufferUnlockBaseAddress(imageBuffer,0);

    // Free up the context and color space
    CGContextRelease(context); 
    CGColorSpaceRelease(colorSpace);

    // Create an image object from the Quartz image
    UIImage *image = [UIImage imageWithCGImage:quartzImage scale:1.0 orientation:UIImageOrientationRight];

    // Release the Quartz image
    CGImageRelease(quartzImage);

    return (image);
}
So you can add multiple image views to your view, and add these lines inside the delegate method I mentioned earlier:

UIImage *image = [self imageFromSampleBuffer:sampleBuffer];
imageViewOne.image = image;
imageViewTwo.image = image;
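
Note that the bitmap context above assumes the camera delivers 32BGRA pixel buffers, so the video data output has to be configured for that. A minimal wiring sketch (the captureSession and queue names are illustrative, not from the answer):

AVCaptureVideoDataOutput *dataOutput = [[AVCaptureVideoDataOutput alloc] init];

// imageFromSampleBuffer: assumes BGRA pixel buffers, so request them explicitly.
dataOutput.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };

// Deliver frames on a background queue to the delegate that implements
// captureOutput:didOutputSampleBuffer:fromConnection:.
dispatch_queue_t frameQueue = dispatch_queue_create("video_frame_queue", NULL);
[dataOutput setSampleBufferDelegate:self queue:frameQueue];

if ([captureSession canAddOutput:dataOutput]) {
    [captureSession addOutput:dataOutput];
}

Also note that the delegate fires on that background queue, so the image view assignments above should be dispatched back to the main queue.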

I ran into the same problem of needing multiple live views displayed at the same time. The answer of using UIImage above was too slow for what I needed. Here are the two solutions I found:

1. CAReplicatorLayer

The first option is to use a CAReplicatorLayer to duplicate the layer automatically. As the docs say, it will automatically create "...a specified number of copies of its sublayers (the source layer), each copy potentially having geometric, temporal and color transformations applied to it."

This is super useful if there isn't a lot of interaction with the live previews besides simple geometric or color transformations (think Photo Booth). I have most often seen the CAReplicatorLayer used as a way to create a "reflection" effect.

Here is some sample code to replicate an AVCaptureVideoPreviewLayer:

Init the AVCaptureVideoPreviewLayer, then init the CAReplicatorLayer and set its properties. Note: this will replicate the live preview layer four times.

Then add the layers. Note: from my experience, you need to add the layer you want to replicate to the CAReplicatorLayer as a sublayer. These steps are sketched below.
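
A minimal sketch of those steps, assuming a running session in self.captureSession (the quarter-height frame and the vertical-tiling instanceTransform are illustrative choices, not from the answer):

// Init the AVCaptureVideoPreviewLayer.
self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.captureSession];
self.previewLayer.frame = CGRectMake(0.0, 0.0, self.view.bounds.size.width, self.view.bounds.size.height / 4.0);

// Init the CAReplicatorLayer and set its properties.
// Note: this will replicate the live preview layer FOUR times.
self.replicatorLayer = [CAReplicatorLayer layer];
self.replicatorLayer.frame = self.view.bounds;
self.replicatorLayer.instanceCount = 4;
self.replicatorLayer.instanceTransform = CATransform3DMakeTranslation(0.0, self.view.bounds.size.height / 4.0, 0.0);

// Add the layers.
// Note: the layer being replicated must be a sublayer of the CAReplicatorLayer.
[self.replicatorLayer addSublayer:self.previewLayer];
[self.view.layer addSublayer:self.replicatorLayer];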

Downsides

A downside of using a CAReplicatorLayer is that it handles all the placement of the layer replications itself. It will apply any set transformations to each instance, and everything will be contained within the replicator layer. For example, there is no way to have a replication of the AVCaptureVideoPreviewLayer in two separate cells.


2. Manually rendering the SampleBuffer

This method, albeit a tad more complex, solves the above-mentioned downside of CAReplicatorLayer. By manually rendering the live previews, you are able to render as many views as you want. Granted, performance might suffer.

Note: there might be other ways to render the SampleBuffer, but I chose OpenGL because of its performance. The code below was inspired by and adapted from another project.

Here's how I implemented it:

2.1 Contexts and session

Set up the OpenGL and Core Image contexts (a sketch follows the queue code below), and create a dispatch queue. This queue will be used for the session and the delegate:

self.captureSessionQueue = dispatch_queue_create("capture_session_queue", NULL);
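
The context setup itself isn't shown in the text, but the delegate code in section 2.3 relies on an _eaglContext and a _ciContext. A plausible setup, following the usual Core Image + GLKView pattern:

// One GL context shared by every GLKView, plus a Core Image context that
// renders into it; passing NSNull as the working color space skips an
// extra color-match step. (A sketch; the property names match the code below.)
self.eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
self.ciContext = [CIContext contextWithEAGLContext:self.eaglContext
                                           options:@{kCIContextWorkingColorSpace : [NSNull null]}];
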
Initialize the AVSession and the AVCaptureVideoDataOutput. Note: I have removed all the device capability checks to make this more readable.

Note: the following code is the "magic code". Here we create a data output and add it to the AVSession so that we can intercept the camera frames using the delegate. This was the breakthrough I needed to solve my problem:

dispatch_async(self.captureSessionQueue, ^(void) {
    // request BGRA pixel buffers so Core Image can render the frames
    NSDictionary *outputSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };

    // create and configure video data output
    AVCaptureVideoDataOutput *videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    videoDataOutput.videoSettings = outputSettings;
    [videoDataOutput setSampleBufferDelegate:self queue:self.captureSessionQueue];

    // begin configure capture session
    [self.captureSession beginConfiguration];

    // connect the video device input and video data and still image outputs
    // (videoDeviceInput creation omitted here, per the note above)
    [self.captureSession addInput:videoDeviceInput];
    [self.captureSession addOutput:videoDataOutput];

    [self.captureSession commitConfiguration];

    // then start everything
    [self.captureSession startRunning];
});

2.2 OpenGL view

We are using a GLKView to render the live previews. So if you want four live previews, you need four GLKViews.

self.livePreviewView = [[GLKView alloc] initWithFrame:self.bounds context:self.eaglContext];
self.livePreviewView.enableSetNeedsDisplay = NO;
[self addSubview:self.livePreviewView];
Because the native video image from the back camera is in UIDeviceOrientationLandscapeLeft (i.e. the home button is on the right), we need to apply a clockwise 90-degree transform so that we can draw the video preview as if we were in a landscape-oriented view; if you're using the front camera and want a mirrored preview (so the user sees themself as in a mirror), you need to apply an additional horizontal flip (by concatenating CGAffineTransformMakeScale(-1.0, 1.0) with the rotation transform).

Bind the drawable to get the framebuffer's width and height. The bounds used by CIContext when drawing to a GLKView are in pixels (not points), hence the need to read the framebuffer's width and height:

[self.livePreviewView bindDrawable];
In addition, since we will be accessing the bounds from another queue (_captureSessionQueue), we capture this piece of information up front so that we won't be accessing the GLKView's properties from another thread/queue:

_videoPreviewViewBounds = CGRectZero;
_videoPreviewViewBounds.size.width = self.livePreviewView.drawableWidth;
_videoPreviewViewBounds.size.height = self.livePreviewView.drawableHeight;

dispatch_async(dispatch_get_main_queue(), ^(void) {
    CGAffineTransform transform = CGAffineTransformMakeRotation(M_PI_2);        

    // *Horizontally flip here, if using front camera.*

    self.livePreviewView.transform = transform;
    self.livePreviewView.frame = self.bounds;
});
Note: if you are using the front camera, you can horizontally flip the live preview like this:

transform = CGAffineTransformConcat(transform, CGAffineTransformMakeScale(-1.0, 1.0));

2.3 Delegate implementation

With the contexts, session, and GLKViews set up, we can now render to our views from the AVCaptureVideoDataOutputSampleBufferDelegate method captureOutput:didOutputSampleBuffer:fromConnection:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    CMFormatDescriptionRef formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer);

    // update the video dimensions information
    self.currentVideoDimensions = CMVideoFormatDescriptionGetDimensions(formatDesc);

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *sourceImage = [CIImage imageWithCVPixelBuffer:(CVPixelBufferRef)imageBuffer options:nil];

    CGRect sourceExtent = sourceImage.extent;
    CGFloat sourceAspect = sourceExtent.size.width / sourceExtent.size.height;
You will need a reference to each GLKView and its videoPreviewViewBounds. For ease, I will assume they are all contained in a UICollectionViewCell. You will need to alter this for your own use case:

    for(CustomLivePreviewCell *cell in self.livePreviewCells) {
        CGFloat previewAspect = cell.videoPreviewViewBounds.size.width  / cell.videoPreviewViewBounds.size.height;

        // To maintain the aspect ratio of the screen size, we clip the video image
        CGRect drawRect = sourceExtent;
        if (sourceAspect > previewAspect) {
            // use full height of the video image, and center crop the width
            drawRect.origin.x += (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0;
            drawRect.size.width = drawRect.size.height * previewAspect;
        } else {
            // use full width of the video image, and center crop the height
            drawRect.origin.y += (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0;
            drawRect.size.height = drawRect.size.width / previewAspect;
        }

        [cell.livePreviewView bindDrawable];

        if (_eaglContext != [EAGLContext currentContext]) {
            [EAGLContext setCurrentContext:_eaglContext];
        }

        // clear eagl view to grey
        glClearColor(0.5, 0.5, 0.5, 1.0);
        glClear(GL_COLOR_BUFFER_BIT);

        // set the blend mode to "source over" so that CI will use that
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);

        if (sourceImage) {
            [_ciContext drawImage:sourceImage inRect:cell.videoPreviewViewBounds fromRect:drawRect];
        }

        [cell.livePreviewView display];
    }
}
This solution allows you to have as many live previews as you want, using OpenGL to render the image buffers received from the AVCaptureVideoDataOutputSampleBufferDelegate.
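
For reference, the CustomLivePreviewCell used in the loop above is never defined in the answer; a hypothetical interface that satisfies it would be:

// Hypothetical cell type exposing the two properties the delegate loop reads.
@interface CustomLivePreviewCell : UICollectionViewCell
@property (nonatomic, strong) GLKView *livePreviewView;      // one GLKView per cell
@property (nonatomic, assign) CGRect videoPreviewViewBounds; // drawable size in pixels
@end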

3. Sample code

Here is a github project I created with both solutions:

Just set the contents of the preview layer to another CALayer:

CGImageRef cgImage = (__bridge CGImageRef)self.previewLayer.contents;
self.duplicateLayer.contents = (__bridge id)cgImage;

You can do this with the contents of any Metal or OpenGL layer. On my end, there was no increase in memory usage or CPU load, either. You're duplicating nothing but a tiny pointer; that's not true with these other "solutions".

I have an example project you can download that displays 20 preview layers at the same time from a single camera feed. Each layer has a different effect applied to it.
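
The answer doesn't show how the copying is driven; one plausible approach is to refresh the duplicates once per screen refresh with a CADisplayLink (a sketch; displayLinkDidFire: and the duplicateLayers array are assumed names, not from the answer):

// Hypothetical per-frame refresh: copy the preview layer's current
// contents pointer into every duplicate layer.
- (void)displayLinkDidFire:(CADisplayLink *)link {
    CGImageRef cgImage = (__bridge CGImageRef)self.previewLayer.contents;
    for (CALayer *layer in self.duplicateLayers) {
        layer.contents = (__bridge id)cgImage;
    }
}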
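
Finally, an equivalent Swift take on the same sample-buffer approach, rendering each frame into a single UIImageView (extend it with more image views as needed):
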
    @IBOutlet var testView: UIImageView!
    private var extOrientation: UIImage.Orientation = .up
// MARK: - AVCaptureVideoDataOutputSampleBufferDelegate
extension CameraViewController: AVCaptureVideoDataOutputSampleBufferDelegate {
    func captureOutput(_ captureOutput: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {

        let imageBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
        let ciimage : CIImage = CIImage(cvPixelBuffer: imageBuffer)
        let image : UIImage = self.convert(cmage: ciimage)

        DispatchQueue.main.sync(execute: {() -> Void in
            testView.image = image
        })

    }

    // Convert CIImage to UIImage
    func convert(cmage:CIImage) -> UIImage
    {
        let context:CIContext = CIContext.init(options: nil)
        let cgImage:CGImage = context.createCGImage(cmage, from: cmage.extent)!
        let image:UIImage = UIImage.init(cgImage: cgImage, scale: 1.0, orientation: extOrientation)
        return image
    }
}