Changing the camera capture device while recording video on iPhone
I am developing an iPhone application in which the camera needs to be paused and resumed, so I am using AVFoundation instead of UIImagePickerController.

My code is:
- (void)startup:(BOOL)isFrontCamera
{
    if (_session == nil)
    {
        NSLog(@"Starting up server");
        self.isCapturing = NO;
        self.isPaused = NO;
        _currentFile = 0;
        _discont = NO;

        // create capture device with video input
        _session = [[AVCaptureSession alloc] init];
        AVCaptureDevice *cameraDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        if (isFrontCamera)
        {
            // use the front-facing camera instead of the default (back) one
            NSArray *videoDevices = [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo];
            AVCaptureDevice *captureDevice = nil;
            for (AVCaptureDevice *device in videoDevices)
            {
                if (device.position == AVCaptureDevicePositionFront)
                {
                    captureDevice = device;
                    break;
                }
            }
            cameraDevice = captureDevice;
        }
        AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:cameraDevice error:nil];
        [_session addInput:input];

        // audio input from default mic
        AVCaptureDevice *mic = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
        AVCaptureDeviceInput *micinput = [AVCaptureDeviceInput deviceInputWithDevice:mic error:nil];
        [_session addInput:micinput];

        // create an output for YUV output with self as delegate
        _captureQueue = dispatch_queue_create("uk.co.gdcl.cameraengine.capture", DISPATCH_QUEUE_SERIAL);
        AVCaptureVideoDataOutput *videoout = [[AVCaptureVideoDataOutput alloc] init];
        [videoout setSampleBufferDelegate:self queue:_captureQueue];
        NSDictionary *setcapSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                        [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange], kCVPixelBufferPixelFormatTypeKey,
                                        nil];
        videoout.videoSettings = setcapSettings;
        [_session addOutput:videoout];
        _videoConnection = [videoout connectionWithMediaType:AVMediaTypeVideo];
        [_videoConnection setVideoOrientation:AVCaptureVideoOrientationPortrait];

        // read back the dimensions the session actually chose
        NSDictionary *actual = videoout.videoSettings;
        _cy = [[actual objectForKey:@"Width"] integerValue];
        _cx = [[actual objectForKey:@"Height"] integerValue];

        AVCaptureAudioDataOutput *audioout = [[AVCaptureAudioDataOutput alloc] init];
        [audioout setSampleBufferDelegate:self queue:_captureQueue];
        [_session addOutput:audioout];
        _audioConnection = [audioout connectionWithMediaType:AVMediaTypeAudio];

        [_session startRunning];

        _preview = [AVCaptureVideoPreviewLayer layerWithSession:_session];
        _preview.videoGravity = AVLayerVideoGravityResizeAspectFill;
    }
}
The problem I am facing is when I switch to the front camera: when I call the method above to change to the front camera, the preview layer gets stuck and shows no preview. My doubt is, can we change the capture device in the middle of a capture session? Please guide me on where I went wrong, or suggest how to switch between the front and back cameras while recording.
Thanks in advance.

You cannot change the captureDevice mid-session. Only one capture session can run at a time. You can end the current session and create a new one; there will be a slight delay (probably a second or two, depending on your CPU load). I wish Apple would allow multiple sessions, or at least multiple devices per session... but they do not, yet. Have you considered recording multiple sessions and then processing the video files afterwards, merging them into one?
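If you take that route, the separate movie files can be stitched together afterwards with AVMutableComposition. A minimal sketch, assuming two already-recorded clips (the method name and URL parameters are placeholders, and error handling is omitted):

// Merge two recorded clips into a single movie (sketch only).
// fileURL1, fileURL2 and outputURL are placeholder parameters.
- (void)mergeClip:(NSURL *)fileURL1 withClip:(NSURL *)fileURL2 toURL:(NSURL *)outputURL
{
    AVAsset *firstAsset = [AVURLAsset URLAssetWithURL:fileURL1 options:nil];
    AVAsset *secondAsset = [AVURLAsset URLAssetWithURL:fileURL2 options:nil];

    AVMutableComposition *composition = [AVMutableComposition composition];
    NSError *error = nil;

    // Append the second clip after the first (video and audio tracks together).
    [composition insertTimeRange:CMTimeRangeMake(kCMTimeZero, firstAsset.duration)
                         ofAsset:firstAsset
                          atTime:kCMTimeZero
                           error:&error];
    [composition insertTimeRange:CMTimeRangeMake(kCMTimeZero, secondAsset.duration)
                         ofAsset:secondAsset
                          atTime:firstAsset.duration
                           error:&error];

    AVAssetExportSession *exporter =
        [[AVAssetExportSession alloc] initWithAsset:composition
                                         presetName:AVAssetExportPresetHighestQuality];
    exporter.outputURL = outputURL;
    exporter.outputFileType = AVFileTypeQuickTimeMovie;
    [exporter exportAsynchronouslyWithCompletionHandler:^{
        NSLog(@"Merge finished with status %ld", (long)exporter.status);
    }];
}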
Yes, you can. You just need to get a few things right:
- (void)configureVideoWithDevice:(AVCaptureDevice *)camera {
    [_session beginConfiguration];

    // swap the video input for the new camera
    [_session removeInput:_videoInputDevice];
    _videoInputDevice = nil;
    _videoInputDevice = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
    if ([_session canAddInput:_videoInputDevice]) {
        [_session addInput:_videoInputDevice];
    }

    // recreate the video data output and its connection
    [_session removeOutput:_videoDataOutput];
    _videoDataOutput = nil;
    _videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];
    [_videoDataOutput setSampleBufferDelegate:self queue:_outputQueueVideo];
    NSDictionary *setcapSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                    [NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange], kCVPixelBufferPixelFormatTypeKey,
                                    nil];
    _videoDataOutput.videoSettings = setcapSettings;
    [_session addOutput:_videoDataOutput];
    _videoConnection = [_videoDataOutput connectionWithMediaType:AVMediaTypeVideo];
    if ([_videoConnection isVideoOrientationSupported]) {
        [_videoConnection setVideoOrientation:AVCaptureVideoOrientationLandscapeRight];
    }

    [_session commitConfiguration];
}
- (void)configureAudioWithDevice:(AVCaptureDevice *)microphone {
    [_session beginConfiguration];

    // add the microphone input and recreate the audio data output
    _audioInputDevice = [AVCaptureDeviceInput deviceInputWithDevice:microphone error:nil];
    if ([_session canAddInput:_audioInputDevice]) {
        [_session addInput:_audioInputDevice];
    }

    [_session removeOutput:_audioDataOutput];
    _audioDataOutput = nil;
    _audioDataOutput = [[AVCaptureAudioDataOutput alloc] init];
    [_audioDataOutput setSampleBufferDelegate:self queue:_outputQueueAudio];
    [_session addOutput:_audioDataOutput];
    _audioConnection = [_audioDataOutput connectionWithMediaType:AVMediaTypeAudio];

    [_session commitConfiguration];
}
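As a usage sketch (the helper name is just for illustration and is not part of the original answer), switching to the front camera while the session keeps running then amounts to looking the device up and handing it to the configureVideoWithDevice: method above:

// Illustrative helper: find the camera at the requested position and
// reconfigure the running session with it.
- (void)switchToCameraAtPosition:(AVCaptureDevicePosition)position
{
    AVCaptureDevice *newCamera = nil;
    for (AVCaptureDevice *device in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
        if (device.position == position) {
            newCamera = device;
            break;
        }
    }
    if (newCamera == nil) {
        return; // e.g. no front camera on this hardware
    }
    // The input/output swap happens inside beginConfiguration/commitConfiguration
    // in configureVideoWithDevice:, so the session keeps running.
    [self configureVideoWithDevice:newCamera];
}

Calling [self switchToCameraAtPosition:AVCaptureDevicePositionFront] while recording is then the whole toggle.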
This approach works; my only problem is that the video and audio end up out of sync. Do the audio and video need to be configured at the same time?

@HighFlyingFantasy There is definitely a way, but it is a bit involved. You have to manually adjust the timing information of the audio sample buffers to match the video. Because the audio output is being recreated, its timing information starts from zero each time. You have to keep track of how long audio has been recorded and shift each audioSampleBuffer by that value before writing it to the file. Geraint Davies' sample implements this technique for pausing and resuming video; a mix of the two should work for you.

@hatebyte Were you able to solve the audio sync problem? Thanks a lot.
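The timing fix described above is essentially what Geraint Davies' sample does when pausing and resuming: keep a running CMTime offset and shift every audio sample buffer by it before writing. A minimal sketch, assuming you track that offset yourself (the offset bookkeeping is not shown), using CoreMedia's CMSampleBufferCreateCopyWithNewTiming:

// Shift a sample buffer's timestamps by `offset` so recreated audio output
// lines up with the already-running video timeline (sketch only).
// CoreMedia is already pulled in by AVFoundation.
- (CMSampleBufferRef)adjustBuffer:(CMSampleBufferRef)sample byOffset:(CMTime)offset
{
    // ask how many timing entries the buffer has, then fetch them
    CMItemCount count;
    CMSampleBufferGetSampleTimingInfoArray(sample, 0, NULL, &count);
    CMSampleTimingInfo *timingInfo = malloc(sizeof(CMSampleTimingInfo) * count);
    CMSampleBufferGetSampleTimingInfoArray(sample, count, timingInfo, &count);

    // subtract the accumulated offset from every timestamp
    for (CMItemCount i = 0; i < count; i++) {
        timingInfo[i].presentationTimeStamp = CMTimeSubtract(timingInfo[i].presentationTimeStamp, offset);
        timingInfo[i].decodeTimeStamp = CMTimeSubtract(timingInfo[i].decodeTimeStamp, offset);
    }

    CMSampleBufferRef adjusted = NULL;
    CMSampleBufferCreateCopyWithNewTiming(kCFAllocatorDefault, sample, count, timingInfo, &adjusted);
    free(timingInfo);
    return adjusted;
}

The caller owns the returned buffer and should CFRelease it after writing it to the file.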