Real-time camera scanning on iOS
I'm developing an iOS application in which I need to do some real-time object scanning. In this case I need 3 or 4 frames per second. Here is the code where I create the capture session:
// Create an AVCaptureSession
AVCaptureSession *captureSession = [[AVCaptureSession alloc] init];
captureSession.sessionPreset = AVCaptureSessionPresetHigh;
// Find a suitable AVCaptureDevice
AVCaptureDevice *photoCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
// Create and add an AVCaptureDeviceInput
NSError *error = nil;
AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:photoCaptureDevice error:&error];
if (videoInput) {
    [captureSession addInput:videoInput];
}
// Create and add an AVCaptureVideoDataOutput
AVCaptureVideoDataOutput *videoOutput = [[AVCaptureVideoDataOutput alloc] init];
// we want BGRA, both CoreGraphics and OpenGL work well with 'BGRA'
NSDictionary *rgbOutputSettings = [NSDictionary dictionaryWithObject:
[NSNumber numberWithInt:kCMPixelFormat_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[videoOutput setVideoSettings:rgbOutputSettings];
// Configure your output, and start the session
dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
[videoOutput setSampleBufferDelegate:self queue:queue];
if (videoOutput) {
    [captureSession addOutput:videoOutput];
}
[captureSession startRunning];
// Setting up the preview layer for the camera
AVCaptureVideoPreviewLayer *previewLayer = [AVCaptureVideoPreviewLayer layerWithSession:captureSession];
previewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
previewLayer.frame = cameraViewCanvas.bounds;
// ADDING FINAL VIEW layer TO THE MAIN VIEW sublayer
[cameraViewCanvas.layer addSublayer:previewLayer];
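Since only 3 or 4 frames per second are needed, the frame rate can also be capped on the capture device itself instead of relying on `AVCaptureSessionPresetHigh`'s default rate and discarding frames in the delegate. A minimal sketch (assuming the iOS 7+ `activeVideoMinFrameDuration` API and that the chosen rate lies within the device's supported frame-rate ranges; error handling elided):

// Cap the capture rate so the delegate is not flooded with frames.
// activeVideoMinFrameDuration is the reciprocal of the MAXIMUM frame rate,
// activeVideoMaxFrameDuration the reciprocal of the MINIMUM frame rate.
NSError *configError = nil;
if ([photoCaptureDevice lockForConfiguration:&configError]) {
    photoCaptureDevice.activeVideoMinFrameDuration = CMTimeMake(1, 4); // at most 4 fps
    photoCaptureDevice.activeVideoMaxFrameDuration = CMTimeMake(1, 3); // at least ~3 fps
    [photoCaptureDevice unlockForConfiguration];
}

Setting `videoOutput.alwaysDiscardsLateVideoFrames = YES` (the default) additionally ensures frames that arrive while the delegate is busy are dropped rather than queued.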
And the delegate method, called on the queue:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    if (isCapturing) {
        NSLog(@"output");
        CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
        CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
        CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer options:(NSDictionary *)attachments];
        if (attachments) {
            // CMCopyDictionaryOfAttachments follows the Create/Copy rule, so release it
            CFRelease(attachments);
        }
        UIImage *newFrame = [[UIImage alloc] initWithCIImage:ciImage];
        [self showImage:newFrame];
    }
}
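Note that a `UIImage` created with `initWithCIImage:` has no backing `CGImage`, so assigning it to a `UIImageView` can silently draw nothing. One workaround (a sketch; in real code the `CIContext` should be created once and reused, since creating one per frame is expensive) is to render the `CIImage` into a `CGImage` first:

// Render the CIImage into a bitmap-backed CGImage before wrapping it in a UIImage.
CIContext *context = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [context createCGImage:ciImage fromRect:[ciImage extent]];
UIImage *renderedFrame = [[UIImage alloc] initWithCGImage:cgImage];
CGImageRelease(cgImage);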
The problem is that I can't see the image on screen. There are no errors or warnings, but the image never shows up. My question is: am I on the right track, and what needs to be fixed in my code to display the image on screen?

Late to the party, but the problem is probably that the image is not being set on the main thread (captureOutput is most likely called on the separate dispatch queue you created). Try:
Could you provide more information about your cameraViewCanvas? Where is it defined and initialized, etc.?

It's just a @property (nonatomic, retain) IBOutlet UIView *cameraViewCanvas used to display the camera's video output. It doesn't cover the whole window, so I can add other objects on top of it, e.g. a UIImageView to preview the frames I capture.

So you added it to the main view in Interface Builder?

Yes, but I don't think that's the cause of the problem, since the camera layer looks fine. As I suggested, something is wrong with the conversion from CIImage to UIImage; there are existing questions on that topic and I tried some of their suggestions, but it still doesn't work.
dispatch_async(dispatch_get_main_queue(), ^{
[self showImage:newFrame];
});
or:

[self performSelectorOnMainThread:@selector(showImage:) withObject:newFrame waitUntilDone:YES];
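Putting the threading fix and the CIImage-to-CGImage rendering together, a corrected delegate method might look like the sketch below. The `ciContext` property is an assumption (a `CIContext` created once, e.g. in viewDidLoad, and reused across frames), as is the behavior of `showImage:` (assigning the image to a `UIImageView`):

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {
    if (!isCapturing) return;
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    // Render to a CGImage so the resulting UIImage can actually be drawn by a UIImageView.
    CGImageRef cgImage = [self.ciContext createCGImage:ciImage fromRect:[ciImage extent]];
    UIImage *newFrame = [[UIImage alloc] initWithCGImage:cgImage];
    CGImageRelease(cgImage);
    // UIKit must only be touched on the main thread; this delegate runs on "myQueue".
    dispatch_async(dispatch_get_main_queue(), ^{
        [self showImage:newFrame];
    });
}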