iOS: cv::Mat doesn't match UIImageView width?


I am capturing video frames with AVFoundation, processing them with OpenCV, and displaying the result in a UIImageView on the new iPad. The OpenCV process does the following ("inImg" is the video frame):
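The snippet itself is not reproduced in the post; judging from the description below, it paints a 100-row × 10-column white bar at the top-left of a BGRA frame. A minimal sketch of that operation on the raw buffer (all names here are illustrative) could look like:

```cpp
#include <cstdint>
#include <cstring>

// Paint a barRows x barCols white bar at the top-left of a BGRA buffer.
// 'stepBytes' is the real row stride in bytes -- not necessarily width * 4.
void paintWhiteBar(uint8_t* base, size_t stepBytes, int barRows, int barCols) {
    for (int r = 0; r < barRows; ++r) {
        std::memset(base + r * stepBytes, 0xFF, (size_t)barCols * 4);
    }
}
```

With cv::Mat the same thing is usually written as `inImg(cv::Rect(0, 0, 10, 100)).setTo(cv::Scalar(255, 255, 255, 255))` — which is only correct when the Mat's step matches the buffer's real stride.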

However, instead of getting a vertical white bar (100 rows × 10 columns) in the top-left corner of the frame, I get 100 staircase-like horizontal lines running from the top-right to the bottom-left, each 10 pixels long.

After some investigation, I realized that the displayed frame seems to be 8 pixels wider than the cv::Mat. (That is, the 9th pixel of row 2 sits directly below the 1st pixel of row 1.)

The video frame itself displays correctly (no displacement between rows). The problem appears when AVCaptureSession.sessionPreset is AVCaptureSessionPresetMedium (frame rows = 480, cols = 360), but not with AVCaptureSessionPresetHigh (frame rows = 640, cols = 480).

All 360 columns are displayed across the full screen. (I tried traversing and modifying the cv::Mat pixel by pixel. Pixels 1–360 display correctly; pixels 361–368 disappear, and pixel 369 appears directly below pixel 1.)
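The staircase is consistent with a stride mismatch. Assuming the Mat uses a 360-pixel row stride while the underlying buffer really uses 368 (the numbers reported above), a little index arithmetic shows where each Mat pixel actually lands on screen:

```cpp
// Where a Mat pixel (matRow, matCol) actually appears on screen, given that
// the Mat assumes 'matStride' pixels per row but the buffer uses 'bufStride'.
int displayRow(int matRow, int matCol, int matStride, int bufStride) {
    return (matRow * matStride + matCol) / bufStride;
}
int displayCol(int matRow, int matCol, int matStride, int bufStride) {
    return (matRow * matStride + matCol) % bufStride;
}
```

The start of each Mat row lands 8 columns further left than the previous one (display columns 360, 352, 344, …), producing the top-right-to-bottom-left staircase, and the 9th pixel of Mat row 2 (`matRow = 1, matCol = 8`) lands directly below the 1st pixel of row 1, matching the observation above.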

I tried combinations of imageView.contentMode (UIViewContentModeScaleAspectFill and UIViewContentModeScaleAspectFit) and imageView.clipsToBounds (YES/NO), without success.

What could be the problem? Many thanks.

I create the AVCaptureSession using the following code:

NSArray* devices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];

if ([devices count] == 0) {
    NSLog(@"No video capture devices found");
    return NO;
}


for (AVCaptureDevice *device in devices) {
     if ([device position] == AVCaptureDevicePositionFront) {
           _captureDevice = device;
     }
}


NSError* error_exp = nil;
if ([_captureDevice lockForConfiguration:&error_exp]) {
    [_captureDevice setWhiteBalanceMode:AVCaptureWhiteBalanceModeContinuousAutoWhiteBalance];
    [_captureDevice unlockForConfiguration];
}
// Create the capture session
_captureSession = [[AVCaptureSession alloc] init];
_captureSession.sessionPreset = AVCaptureSessionPresetMedium;


// Create device input
NSError *error = nil;
AVCaptureDeviceInput *input = [[AVCaptureDeviceInput alloc] initWithDevice:_captureDevice error:&error];

// Create and configure device output
_videoOutput = [[AVCaptureVideoDataOutput alloc] init];

dispatch_queue_t queue = dispatch_queue_create("cameraQueue", NULL); 
[_videoOutput setSampleBufferDelegate:self queue:queue];
dispatch_release(queue); 

_videoOutput.alwaysDiscardsLateVideoFrames = YES; 

OSType format = kCVPixelFormatType_32BGRA;

_videoOutput.videoSettings = [NSDictionary dictionaryWithObject:[NSNumber numberWithUnsignedInt:format] forKey:(id)kCVPixelBufferPixelFormatTypeKey];


// Connect up inputs and outputs
if ([_captureSession canAddInput:input]) {
    [_captureSession addInput:input];
}

if ([_captureSession canAddOutput:_videoOutput]) {
    [_captureSession addOutput:_videoOutput];
}

AVCaptureConnection * captureConnection = [_videoOutput connectionWithMediaType:AVMediaTypeVideo];

if (captureConnection.isVideoMinFrameDurationSupported)
    captureConnection.videoMinFrameDuration = CMTimeMake(1, 60);
if (captureConnection.isVideoMaxFrameDurationSupported)
    captureConnection.videoMaxFrameDuration = CMTimeMake(1, 60);

if (captureConnection.supportsVideoMirroring)
    [captureConnection setVideoMirrored:NO];

[captureConnection setVideoOrientation:AVCaptureVideoOrientationPortraitUpsideDown];
When a frame is received, the following is called:

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
@autoreleasepool {

    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    OSType format = CVPixelBufferGetPixelFormatType(pixelBuffer);
    CGRect videoRect = CGRectMake(0.0f, 0.0f, CVPixelBufferGetWidth(pixelBuffer), CVPixelBufferGetHeight(pixelBuffer));

    AVCaptureConnection *currentConnection = [[_videoOutput connections] objectAtIndex:0];

    AVCaptureVideoOrientation videoOrientation = [currentConnection videoOrientation];
    CGImageRef quartzImage;

    // For color mode a 4-channel cv::Mat is created from the BGRA data
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    void *baseaddress = CVPixelBufferGetBaseAddress(pixelBuffer);

    cv::Mat mat(videoRect.size.height, videoRect.size.width, CV_8UC4, baseaddress, 0);

    if ([self doFrame]) { // a flag to switch processing ON/OFF
            [self processFrame:mat videoRect:videoRect videoOrientation:videoOrientation];  // "processFrame" is the opencv function shown above
    }

    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    quartzImage = [self.context createCGImage:ciImage fromRect:ciImage.extent];
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

    UIImage *image = [UIImage imageWithCGImage:quartzImage scale:1.0 orientation:UIImageOrientationUp];

    CGImageRelease(quartzImage);

    [self.imageView performSelectorOnMainThread:@selector(setImage:) withObject:image waitUntilDone:YES];
}
}

I assume you are using the constructor
Mat(int rows, int cols, int type, void* _data, size_t _step=AUTO_STEP)
with AUTO_STEP (0), which assumes the row stride is
width * bytesPerPixel.

This is generally wrong: it is very common to align rows to some larger boundary. In this case, 360 is not a multiple of 16 but 368 is, which strongly suggests rows are aligned to a 16-pixel boundary (perhaps to help algorithms that process in 16×16 blocks?).
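A plausible model of that alignment (an assumption — Apple does not document the exact rule) is that CoreVideo rounds each row up to the next 16-pixel boundary. That would also explain why the High preset is unaffected: 480 is already a multiple of 16.

```cpp
// Round a row width up to the next multiple of 16 pixels.
int alignTo16(int widthPixels) {
    return (widthPixels + 15) / 16 * 16;
}
```

alignTo16(360) gives 368, the observed stride, while alignTo16(480) gives 480 — no padding, so no staircase. In any case, never compute the stride yourself: read the real value with CVPixelBufferGetBytesPerRow.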

Try passing the actual row stride instead:
cv::Mat mat(videoRect.size.height, videoRect.size.width, CV_8UC4, baseaddress, CVPixelBufferGetBytesPerRow(pixelBuffer));

Amazing. Thank you, sir!