macOS AVPlayer not rendering to its AVPlayerLayer

Tags: macos, avfoundation, calayer, avplayer

I have an AVPlayerLayer (a subclass of CALayer) that I need to get into an image type I can pass to a QCRenderer (QCRenderer accepts NSImages and CIImages). I can convert the CALayer to a CGImageRef, and that to an NSImage, but the content always comes out blank.

I've narrowed it down to one of two causes:

  • I'm not creating the NSImage correctly
  • The AVPlayer is not rendering to the AVPlayerLayer

I'm not getting any errors, and I've found some documentation on converting CALayers. Also, I added the AVPlayerLayer to an NSView, and that view stays empty as well, so I think the problem is #2.

I'm using a modified version of the AVPlayerDemoPlaybackViewController from Apple's AVPlayerDemo sample. I turned it into an NSObject, since I stripped out all of its interface code.

When I create the AVPlayer, I create the AVPlayerLayer in the (void)prepareToPlayAsset:withKeys: method. (I only add the layer to an NSView to test whether it is working.)
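
The code block that originally followed appears to be missing from this copy. As a rough sketch only (mPlayer and testView are assumed names, not taken from the original post), the usual setup looks something like this:

    // Hypothetical reconstruction -- not the poster's original code.
    mPlaybackView = [AVPlayerLayer playerLayerWithPlayer:mPlayer];
    mPlaybackView.frame = testView.bounds;
    mPlaybackView.videoGravity = AVLayerVideoGravityResizeAspect;
    [testView setWantsLayer:YES]; // NSViews are not layer-backed by default
    [testView.layer addSublayer:mPlaybackView];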

Then I add a timer to the NSRunLoop to grab a frame from the AVPlayerLayer 30 times per second:

    framegrabTimer = [NSTimer timerWithTimeInterval:(1/30) target:self selector:@selector(grabFrameFromMovie) userInfo:nil repeats:YES];
    [[NSRunLoop currentRunLoop] addTimer:framegrabTimer forMode:NSDefaultRunLoopMode];
    
Here is the code I use to grab the frame and pass it to the class that handles the QCRenderer:

    -(void)grabFrameFromMovie {
        // Render the layer into an offscreen bitmap context...
        CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
        CGContextRef theContext = CGBitmapContextCreate(NULL, mPlaybackView.frame.size.width, mPlaybackView.frame.size.height, 8, 4*mPlaybackView.frame.size.width, colorSpace, kCGImageAlphaPremultipliedLast);
        [mPlaybackView renderInContext:theContext];
        // ...then wrap the bitmap in an NSImage and broadcast it.
        CGImageRef cgImage = CGBitmapContextCreateImage(theContext);
        NSImage *image = [[NSImage alloc] initWithCGImage:cgImage size:NSMakeSize(mPlaybackView.frame.size.width, mPlaybackView.frame.size.height)];
        [[NSNotificationCenter defaultCenter] postNotificationName:@"AVPlayerLoadedNewFrame" object:[image copy]];
        CGContextRelease(theContext);
        CGColorSpaceRelease(colorSpace);
        CGImageRelease(cgImage);
    }
    
I just can't figure out why this isn't working. Any help is greatly appreciated, since there isn't much AVFoundation documentation for OS X.

This works for me:

    AVAssetImageGenerator *gen = [[AVAssetImageGenerator alloc] initWithAsset:[[[self player] currentItem] asset]];
    CGImageRef capture = [gen copyCGImageAtTime:self.player.currentTime actualTime:NULL error:NULL];
    NSImage *img = [[NSImage alloc] initWithCGImage:capture size:self.playerView.frame.size];
    CGImageRelease(capture); // copyCGImageAtTime follows the Create Rule; the NSImage retains it
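
For context, a sketch of how this could slot into the question's grabFrameFromMovie timer callback in place of renderInContext: (the property and notification names are carried over from the question and this answer, not verified):

    -(void)grabFrameFromMovie {
        AVAsset *asset = [[[self player] currentItem] asset];
        AVAssetImageGenerator *gen = [[AVAssetImageGenerator alloc] initWithAsset:asset];
        CGImageRef capture = [gen copyCGImageAtTime:self.player.currentTime actualTime:NULL error:NULL];
        if (capture == NULL) return; // generation can fail, e.g. before the item is ready
        NSImage *image = [[NSImage alloc] initWithCGImage:capture size:self.playerView.frame.size];
        CGImageRelease(capture);
        [[NSNotificationCenter defaultCenter] postNotificationName:@"AVPlayerLoadedNewFrame" object:image];
    }

Creating a fresh AVAssetImageGenerator 30 times a second is not cheap, so for continuous capture the AVPlayerItemVideoOutput approach in the next answer is likely a better fit.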
    

You can add an AVPlayerItemVideoOutput to the AVPlayerItem, then call copyPixelBufferForItemTime: to get a CVPixelBufferRef containing the frame at the specified time. Here is some sample code:

    NSDictionary *pixBuffAttributes = @{
        (id)kCVPixelBufferWidthKey: @(nWidth),
        (id)kCVPixelBufferHeightKey: @(nHeight),
        (id)kCVPixelBufferCGImageCompatibilityKey: @YES,
    };
    m_output = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:pixBuffAttributes];
    
    ...
    
    m_buffer = [m_output copyPixelBufferForItemTime:time itemTimeForDisplay:NULL];

    // Lock the buffer before touching its base address, and record the frame geometry.
    CVPixelBufferLockBaseAddress(m_buffer, 0);
    auto *buffer = CVPixelBufferGetBaseAddress(m_buffer);
    frame->width = CVPixelBufferGetWidth(m_buffer);
    frame->height = CVPixelBufferGetHeight(m_buffer);
    frame->widthbytes = CVPixelBufferGetBytesPerRow(m_buffer);
    frame->bufferlen = frame->widthbytes * (uint32)CVPixelBufferGetHeight(m_buffer);

    // Wrap the pixels in a CGImage. CGDataProviderCreateWithData does not copy the
    // bytes, so m_buffer must outlive m_image.
    auto &videoInfo = m_info.video;
    CGDataProviderRef dp = CGDataProviderCreateWithData(nullptr, buffer, frame->bufferlen, nullptr);
    CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
    m_image = CGImageCreate(frame->width,
                            frame->height,
                            8,                   // bits per component
                            videoInfo.pixelbits, // bits per pixel
                            frame->widthbytes,
                            cs,
                            kCGImageAlphaNoneSkipFirst,
                            dp,
                            nullptr,
                            true,
                            kCGRenderingIntentDefault);
    CGColorSpaceRelease(cs);
    CGDataProviderRelease(dp);
    CVPixelBufferUnlockBaseAddress(m_buffer, 0); // balance the lock once done reading the pixels
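
A minimal sketch of the setup step this answer glosses over: attaching the output to the player item and polling for a new frame before copying (the player variable is an assumption; the methods are standard AVPlayerItemVideoOutput API):

    // Attach the output to the item whose frames you want:
    [player.currentItem addOutput:m_output];

    // Later, from a CVDisplayLink or timer callback:
    CMTime t = [m_output itemTimeForHostTime:CACurrentMediaTime()];
    if ([m_output hasNewPixelBufferForItemTime:t]) {
        CVPixelBufferRef buf = [m_output copyPixelBufferForItemTime:t itemTimeForDisplay:NULL];
        // ... read the pixels as above, then release the buffer:
        CVPixelBufferRelease(buf);
    }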
    
You can also take a look at Apple's official sample:


Swift 5.2 version:

I think 罗佐金's answer is correct, and I found it very useful. I tested it myself and it works well.

I just want to post an updated Swift 5.2 version, in case anyone needs it:

    func getCurrentFrame() -> CGImage? {
        guard let player = self.player, let avPlayerAsset = player.currentItem?.asset else { return nil }
        let assetImageGenerator = AVAssetImageGenerator(asset: avPlayerAsset)
        assetImageGenerator.requestedTimeToleranceAfter = .zero
        assetImageGenerator.requestedTimeToleranceBefore = .zero
        assetImageGenerator.appliesPreferredTrackTransform = true
        // try? instead of try! so a failed generation returns nil instead of crashing
        return try? assetImageGenerator.copyCGImage(at: player.currentTime(), actualTime: nil)
    }
    
Important notes:

requestedTimeToleranceAfter and requestedTimeToleranceBefore should be set to .zero because, according to the source code, "the actual time of the generated images [...] may differ from the requested time for efficiency".


appliesPreferredTrackTransform must be set to true (the default is false), otherwise you get an incorrectly rotated frame. With this property set to true, you get what you actually see in the player.

Hi, did you ever have any luck with this? How did you solve it? I'm also trying to take a screenshot of an AVPlayerLayer, but so far renderInContext: hasn't worked for me.

I'm not sure whether this is related to your problem, but timerWithTimeInterval:(1/30) doesn't do what you expect: 1/30 evaluates to 0, because both operands are integers. You should use 1/30. to get the double value 0.0333333, as shown below.
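
To make that concrete, the question's timer line with the integer division fixed:

    framegrabTimer = [NSTimer timerWithTimeInterval:(1.0/30.0) // 0.0333 s; (1/30) truncates to 0
                                             target:self
                                           selector:@selector(grabFrameFromMovie)
                                           userInfo:nil
                                            repeats:YES];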