iOS: Converting an incoming NSStream to a view


I am successfully sending a stream of data. The delegate method below receives that stream and appends it to the NSMutableData self.data. How do I take this data and turn it into a UIView / AVCaptureVideoPreviewLayer (which should display the video)? I feel like I'm missing another conversion somewhere: AVCaptureSession > NSStream > MCSession > NSStream > ?

- (void)stream:(NSStream *)stream handleEvent:(NSStreamEvent)eventCode {
    switch (eventCode) {
        case NSStreamEventHasBytesAvailable:
        {
            if (!self.data) {
                self.data = [NSMutableData data];
            }
            uint8_t buf[1024];
            // read:maxLength: returns NSInteger; a negative value signals an error
            NSInteger len = [(NSInputStream *)stream read:buf maxLength:1024];
            if (len > 0) {
                [self.data appendBytes:(const void *)buf length:len];
            } else {
                NSLog(@"no buffer!");
            }
            // Code here to take self.data and convert the NSData to UIView/Video
            break;
        }
        default:
            break;
    }
}
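For reference, one way that conversion could look, as a minimal sketch only: it assumes the received bytes form one complete raw 32-bit BGRA frame, and that the frame's width, height, and bytesPerRow are known on the receiving side (the code above does not transmit them, so the constants here are hypothetical).

// Hypothetical frame geometry -- must match the sender's pixel buffer.
static const size_t kFrameWidth       = 480;
static const size_t kFrameHeight      = 360;
static const size_t kFrameBytesPerRow = 480 * 4;

- (UIImage *)imageFromReceivedData:(NSData *)data
{
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    // BGRA from kCVPixelFormatType_32BGRA corresponds to little-endian
    // 32-bit, alpha-first, in Core Graphics terms.
    CGContextRef context = CGBitmapContextCreate((void *)[data bytes],
                                                 kFrameWidth, kFrameHeight,
                                                 8, kFrameBytesPerRow,
                                                 colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    return image;
}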
I send the stream with the following:

-(void) captureOutput:(AVCaptureOutput*)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection*)connection
{

    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer,0);

    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
//    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    void *src_buff = CVPixelBufferGetBaseAddress(imageBuffer);

    NSData *data = [NSData dataWithBytes:src_buff length:bytesPerRow * height];

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);


    NSError *error;
    self.oStream = [self.mySession startStreamWithName:@"videoOut" toPeer:[[self.mySession connectedPeers]objectAtIndex:0] error:&error];
    self.oStream.delegate = self;
    [self.oStream scheduleInRunLoop:[NSRunLoop mainRunLoop]
                            forMode:NSDefaultRunLoopMode];
    [self.oStream open];

    [self.oStream write:[data bytes] maxLength:[data length]];

    CGSize imageSize = CVImageBufferGetEncodedSize( imageBuffer );
    // also in the 'mediaSpecific' dict of the sampleBuffer

    NSLog( @"frame captured at %.fx%.f", imageSize.width, imageSize.height );
}
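Note that the captureOutput:didOutputSampleBuffer:fromConnection: method above starts a new named stream for every captured frame. A minimal sketch of the more usual pattern, with the MCSession stream opened once (the method name setUpOutputStream is an assumption; everything else reuses the properties above):

// Open the output stream once, e.g. right after the peer connects,
// instead of once per captured frame.
- (void)setUpOutputStream
{
    NSError *error = nil;
    self.oStream = [self.mySession startStreamWithName:@"videoOut"
                                                toPeer:[[self.mySession connectedPeers] objectAtIndex:0]
                                                 error:&error];
    self.oStream.delegate = self;
    [self.oStream scheduleInRunLoop:[NSRunLoop mainRunLoop]
                            forMode:NSDefaultRunLoopMode];
    [self.oStream open];
}

The capture callback then only needs the write call:

    [self.oStream write:[data bytes] maxLength:[data length]];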

You can create a UIImageView when you handle the event, like this:

UIImageView *iv = [[UIImageView alloc] initWithImage:[UIImage imageWithData:self.data]];

Alternatively, you can allocate it just once and then only call init after that.

Each time you receive data from the socket you initialize the UIImageView, and you can add that UIImageView to a UIView to display it.
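A minimal sketch of that reuse pattern, with one UIImageView created up front and only its image swapped per received frame (imageFromReceivedData: is the hypothetical decoding helper sketched earlier):

// Created once, e.g. in viewDidLoad:
self.imageView = [[UIImageView alloc] initWithFrame:self.view.bounds];
[self.view addSubview:self.imageView];

// Then, inside NSStreamEventHasBytesAvailable, update on the main thread:
UIImage *frame = [self imageFromReceivedData:self.data];
dispatch_async(dispatch_get_main_queue(), ^{
    self.imageView.image = frame;
});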


Sorry for my poor English; I'm not sure whether I've understood you correctly. I think you need AVCamCaptureManager (from Apple's AVCam sample code). See if the following code works for you:

AVCamCaptureManager *manager = [[AVCamCaptureManager alloc] init];
[self setCaptureManager:manager];

[[self captureManager] setDelegate:self];

if ([[self captureManager] setupSession]) {
     // Create video preview layer and add it to the UI
    AVCaptureVideoPreviewLayer *newCaptureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:[[self captureManager] session]];
    UIView *view = self.videoPreviewView;//Add a view in XIB where you want to show video
    CALayer *viewLayer = [view layer];
    [viewLayer setMasksToBounds:YES];
    CGRect bounds = [view bounds];

    [newCaptureVideoPreviewLayer setFrame:bounds];

    [newCaptureVideoPreviewLayer setVideoGravity:AVLayerVideoGravityResizeAspectFill];

    [viewLayer insertSublayer:newCaptureVideoPreviewLayer below:[[viewLayer sublayers] objectAtIndex:0]];

    [self setCaptureVideoPreviewLayer:newCaptureVideoPreviewLayer];

    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        [[[self captureManager] session] startRunning];
    });
}
The manager's delegate methods:

- (void)captureManager:(AVCamCaptureManager *)captureManager didFailWithError:(NSError *)error
{

}

- (void)captureManagerRecordingBegan:(AVCamCaptureManager *)captureManager
{

}

- (void)captureManagerRecordingFinished:(AVCamCaptureManager *)captureManager outputURL:(NSURL *)url
{



}

- (void)captureManagerStillImageCaptured:(AVCamCaptureManager *)captureManager
{



}

- (void)captureManagerDeviceConfigurationChanged:(AVCamCaptureManager *)captureManager
{

}

I hope this helps.

Comments:

- You might want to see whether you can use OpenGL: take the data, convert it into a GL texture, then display it with GL (a sketch follows below). There may be higher-level APIs for this. Isn't the data in some standard format? What is the video format? A UIView? What is the link to the video?
- The video format is AVCaptureSession. Do you have to lock/unlock the pixel buffer every frame? I don't know whether that takes a lot of time. I don't know what I'm doing.
- What do you see in the code? None of captureManager's delegate methods handles video. Am I missing something? @Eric see this, if it helps..
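For the OpenGL route suggested in the first comment, a minimal sketch, reusing the hypothetical frame constants from earlier: wrap the received bytes in a CVPixelBuffer and hand it to Core Image, which can render on the GPU:

// Wrap the raw BGRA bytes (no copy) in a CVPixelBuffer.
CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreateWithBytes(kCFAllocatorDefault,
                             kFrameWidth, kFrameHeight,
                             kCVPixelFormatType_32BGRA,
                             (void *)[self.data bytes],
                             kFrameBytesPerRow,
                             NULL, NULL, NULL,
                             &pixelBuffer);

CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
// Draw ciImage with a CIContext backed by an EAGLContext (e.g. into a GLKView),
// or, more simply, display it via [UIImage imageWithCIImage:ciImage].
CVPixelBufferRelease(pixelBuffer);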