
macOS: Passing video frames to Core Image on OS X


Hi, you amazing programmers! Over the past few weeks I have pieced this together from various helpful sources (including a lot of Stack Overflow posts), trying to build something that takes a webcam feed and detects smiles when they appear (drawing boxes around the faces and smiles once they are found doesn't seem like it should be too hard either). Please give me pointers if the code is messy, as I am still learning.

Right now I am stuck trying to pass the image to a CIImage so it can be analysed for faces (I plan to deal with smiles once the face hurdle is cleared). The build succeeds if I comment out the block after (5) - it brings up a simple AVCaptureVideoPreviewLayer in a window. I think this is what I have called the "rootLayer", so it is like the first layer of the displayed output; after I detect faces in the video frames, I will draw a rectangle following the "bounds" of any detected face on a new layer overlaid on top of this one, and I have called that layer "previewLayer"... correct?

But with the block after (5) in place, the build fails with three errors:

Undefined symbols for architecture x86_64:
  "_CMCopyDictionaryOfAttachments", referenced from:
      -[AVRecorderDocument captureOutput:didOutputSampleBuffer:fromConnection:] in AVRecorderDocument.o
  "_CMSampleBufferGetImageBuffer", referenced from:
      -[AVRecorderDocument captureOutput:didOutputSampleBuffer:fromConnection:] in AVRecorderDocument.o
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

Can anyone tell me where I am going wrong and what to do next?

Thanks for any help; I have been stuck on this for a few days and can't figure it out - all the examples I can find are for iOS and don't work on OS X.
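For what it's worth, both missing symbols are plain C functions from CoreMedia (CMSampleBufferGetImageBuffer from CMSampleBuffer.h, CMCopyDictionaryOfAttachments from CMAttachment.h), and - as it turns out in the comments below - the fix was simply linking that framework. The import alone satisfies the compiler but not the linker:

#import <CoreMedia/CoreMedia.h>  // compiles fine on its own...
// ...but the symbols live in the framework binary, so CoreMedia.framework must also be
// added under Target > Build Phases > Link Binary With Libraries in Xcode.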

- (id)init
{
    self = [super init];
    if (self) {

        // Create a capture session first (-addVideoDataOutput below needs self.session to be non-nil)
        session = [[AVCaptureSession alloc] init];

        // Move the output part to another function
        [self addVideoDataOutput];

        // Set a session preset (resolution)
        self.session.sessionPreset = AVCaptureSessionPreset640x480;

        // Select devices if any exist
        AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        if (videoDevice) {
            [self setSelectedVideoDevice:videoDevice];
        } else {
            [self setSelectedVideoDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeMuxed]];
        }
        NSError *error = nil;
        //  Add an input for whichever device was selected above
        videoDeviceInput = [AVCaptureDeviceInput deviceInputWithDevice:[self selectedVideoDevice] error:&error];
        if (videoDeviceInput && [self.session canAddInput:videoDeviceInput]) {
            [self.session addInput:self.videoDeviceInput];
        } else {
            NSLog(@"Could not create video device input: %@", error);
        }

        // Start the session (app opens slower if it is here but I think it is needed in order to send the frames for processing)
        [[self session] startRunning];


          // Initial refresh of device list
         [self refreshDevices];

    }
    return self;
}

-(void) addVideoDataOutput {
    // (1) Instantiate a new video data output object
    AVCaptureVideoDataOutput * captureOutput = [[AVCaptureVideoDataOutput alloc] init];
    // (the pixel format is set once, in step (3) below - setting videoSettings here as well was redundant)

    // discard if the data output queue is blocked (while CI processes the still image)
    captureOutput.alwaysDiscardsLateVideoFrames = YES;

    // (2) The sample buffer delegate requires a serial dispatch queue
    dispatch_queue_t captureOutputQueue;
    captureOutputQueue = dispatch_queue_create("CaptureOutputQueue", DISPATCH_QUEUE_SERIAL);
    [captureOutput setSampleBufferDelegate:self queue:captureOutputQueue];
    dispatch_release(captureOutputQueue);  // safe here: -setSampleBufferDelegate:queue: retains the queue for as long as it needs it
                                           // (and on OS X 10.8+ with ARC, dispatch objects are ARC-managed, so this call isn't needed at all)

    // (3) Define the pixel format for the video data output 
    NSString * key = (NSString*)kCVPixelBufferPixelFormatTypeKey;
    NSNumber * value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary * settings = @{key:value};
    [captureOutput setVideoSettings:settings];

    // (4) Configure the output port on the captureSession property
    if ([self.session canAddOutput:captureOutput]) {
        [self.session addOutput:captureOutput];
    }

}
// Implement the Sample Buffer Delegate Method
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {

// I *think* I have a video frame now in some sort of image format... so have to convert it into a CIImage before I can process it:

    // (5) Convert CMSampleBufferRef to CVImageBufferRef, then to a CI Image (per weichsel's answer in July '13)
    CVImageBufferRef cvFrameImage = CMSampleBufferGetImageBuffer(sampleBuffer);  // Having trouble here, prog. stops and won't recognise CMSampleBufferGetImageBuffer.
    CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
    // __bridge_transfer hands the Copy'd dictionary over to ARC, which releases it for us (a plain __bridge would leak it)
    self.ciFrameImage = [[CIImage alloc] initWithCVImageBuffer:cvFrameImage options:(__bridge_transfer NSDictionary *)attachments];
    //self.ciFrameImage = [[CIImage alloc] initWithCVImageBuffer:cvFrameImage];

    //OK so it is a CIImage. Find some way to send it to a separate CIImage function to find the faces, then smiles (a rough sketch follows this method). Then send it somewhere else to be displayed on top of the AVCaptureVideoPreviewLayer
    //TBW

}
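For the face-then-smile step flagged in the comment above, here is a minimal, untested sketch using CIDetector. The faceDetector property is hypothetical (added because creating a detector per frame is expensive), and the CIDetectorSmile option requires OS X 10.9 or later:

    if (!self.faceDetector) {  // hypothetical property, lazily created once
        self.faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace
                                               context:nil
                                               options:@{ CIDetectorAccuracy : CIDetectorAccuracyLow }];
    }
    NSArray *features = [self.faceDetector featuresInImage:self.ciFrameImage
                                                   options:@{ CIDetectorSmile : @YES }];
    for (CIFaceFeature *face in features) {
        NSLog(@"Face at %@, smiling: %d", NSStringFromRect(NSRectFromCGRect([face bounds])), face.hasSmile);
    }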


- (NSString *)windowNibName
{
    return @"AVRecorderDocument";
}


- (void)windowControllerDidLoadNib:(NSWindowController *) aController
{
    [super windowControllerDidLoadNib:aController];

    // Attach preview to session
    CALayer *rootLayer = self.previewView.layer;
    [rootLayer setMasksToBounds:YES]; //aaron added
    self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:self.session];
    [self.previewLayer setBackgroundColor:CGColorGetConstantColor(kCGColorBlack)];
    [self.previewLayer setVideoGravity:AVLayerVideoGravityResizeAspect];
    [self.previewLayer setFrame:[rootLayer bounds]];
    self.previewLayer.autoresizingMask = kCALayerWidthSizable | kCALayerHeightSizable;  // CALayer autoresizing *is* available on OS X; this keeps the preview sized to the window
    [rootLayer addSublayer:self.previewLayer];
    // ([newPreviewLayer release] in older samples is pre-ARC memory management - neither needed nor allowed under ARC)


}
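And for the rectangle-over-detected-faces plan described at the top, one possible sketch: overlayLayer is a hypothetical property, and converting CIFaceFeature bounds from Core Image coordinates to layer coordinates (they differ in origin and scale) is left out here:

    // once, after the preview layer is in place:
    self.overlayLayer = [CALayer layer];          // hypothetical property to hold the face boxes
    self.overlayLayer.frame = rootLayer.bounds;
    [rootLayer addSublayer:self.overlayLayer];

    // then, per detected face (on the main thread):
    CALayer *faceBox = [CALayer layer];
    faceBox.borderColor = CGColorGetConstantColor(kCGColorWhite);
    faceBox.borderWidth = 2.0;
    faceBox.frame = faceRectInLayerCoordinates;   // hypothetical, converted from [face bounds]
    [self.overlayLayer addSublayer:faceBox];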
(Moved here from the comments section)


Wow. I guess two days and a Stack Overflow post were all it took for me to notice that I had never added CoreMedia.framework to my project. The errors are gone, but now I have another problem: the program never reaches the block after (5)... I put a breakpoint there, but it is never hit. Any ideas? ...buuut now it does get there. No idea what I changed in the last eight minutes (I don't think I changed anything), but it now successfully reaches the "convert to a CI image" step.

Since you've described your journey in such detail, would you mind leaving a final word - did you get it working? Could you improve your question/answer with working, ordered code? I am attempting the same thing myself right now and everything else has failed. Any help would be much appreciated.

Hey Morty - sorry, you're right, I should share what I learned. Unfortunately, the last time I touched this project was over two years ago, so I can't remember the details. But the whole project is still in my GitHub repo - you can find it at