
iOS: AVCaptureAudioDataOutputSampleBufferDelegate's captureOutput not being called


I have an app that records video, but I need it to show the user, in real time, the level of the sound being captured on the microphone. I have been able to record audio and video to MP4 successfully using AVCaptureSession. However, when I add an AVCaptureAudioDataOutput to the session and assign the AVCaptureAudioDataOutputSampleBufferDelegate, I receive no errors, and yet the captureOutput function is never called once the session starts.

Here is the code:

import UIKit
import AVFoundation
import CoreLocation


class ViewController: UIViewController,
                      AVCaptureVideoDataOutputSampleBufferDelegate,
                      AVCaptureFileOutputRecordingDelegate,
                      CLLocationManagerDelegate,
                      AVCaptureAudioDataOutputSampleBufferDelegate {

@IBOutlet weak var cameraView: UIView!   //Preview container (assumed: connected in the storyboard)

var videoFileOutput: AVCaptureMovieFileOutput!
let session = AVCaptureSession()
var outputURL: URL!
var timer: Timer!
var locationManager: CLLocationManager!
var currentMagnitudeValue: CGFloat!
var defaultMagnitudeValue: CGFloat!
var visualMagnitudeValue: CGFloat!
var soundLiveOutput: AVCaptureAudioDataOutput!


override func viewDidLoad() {
    super.viewDidLoad()
    self.setupAVCapture()
}


func setupAVCapture(){

    session.beginConfiguration()

    //Add the camera INPUT to the session
    //Avoid force-unwrapping: bail out gracefully if the camera is unavailable
    guard
        let videoDevice = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                  for: .video, position: .front),
        let videoDeviceInput = try? AVCaptureDeviceInput(device: videoDevice),
        session.canAddInput(videoDeviceInput)
        else { return }
    session.addInput(videoDeviceInput)

    //Add the microphone INPUT to the session
    //Same pattern for the microphone
    guard
        let microphoneDevice = AVCaptureDevice.default(.builtInMicrophone, for: .audio, position: .unspecified),
        let audioDeviceInput = try? AVCaptureDeviceInput(device: microphoneDevice),
        session.canAddInput(audioDeviceInput)
        else { return }
    session.addInput(audioDeviceInput)

    //Add the video file OUTPUT to the session
    videoFileOutput = AVCaptureMovieFileOutput()
    guard session.canAddOutput(videoFileOutput) else { return }
    session.addOutput(videoFileOutput)

    //Add the audio output so we can get PITCH of the sounds
    //AND assign the SampleBufferDelegate
    soundLiveOutput = AVCaptureAudioDataOutput()
    soundLiveOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "test"))
    if session.canAddOutput(soundLiveOutput) {
        session.addOutput(soundLiveOutput)
        print("Live AudioDataOutput added")
    } else {
        print("Could not add AudioDataOutput")
    }



    //Preview layer
    let previewLayer = AVCaptureVideoPreviewLayer(session: session)
    previewLayer.videoGravity = .resizeAspectFill
    let rootLayer: CALayer = self.cameraView.layer
    rootLayer.masksToBounds = true
    previewLayer.frame = rootLayer.bounds
    rootLayer.addSublayer(previewLayer)

    //Finalize the session
    session.commitConfiguration()

    //Begin the session
    session.startRunning()
}

func captureOutput(_: AVCaptureOutput, didOutput: CMSampleBuffer, from: AVCaptureConnection) {
    print("Bingo")
}

//Stub required for AVCaptureFileOutputRecordingDelegate conformance; the real body was elided in the post
func fileOutput(_ output: AVCaptureFileOutput, didFinishRecordingTo outputFileURL: URL,
                from connections: [AVCaptureConnection], error: Error?) {
}

}
Expected output:

Bingo
Bingo
Bingo
...
What I have read:

- A user who had not declared the captureOutput method correctly

- A user who had not declared the captureOutput method at all

- Apple's documentation on the delegate and its methods - the method matches the one I have declared

Other common mistakes I came across online:

- Using the declaration for an older version of Swift (I am using v4.1)
- One post suggested that after Swift 4.0, AVCaptureMetadataOutput apparently replaces AVCaptureAudioDataOutput - although I could not find this in Apple's documentation, I tried it anyway, but similarly the metadataOutput function was never called.
I am out of ideas. Am I missing something obvious?

OK, nobody got back to me, but after digging around I found the correct way to declare the captureOutput method for Swift 4, as follows:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    //Do your stuff here
}

Unfortunately, the documentation for this online is very poor. I guess you just need to get it exactly right - no error is thrown if you misname the method or its parameters, because it is an optional protocol function.
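For the original goal of showing the sound level to the user live, here is a minimal sketch of what the body of the corrected method could look like. It reads loudness (average power, not true pitch - pitch detection would require analyzing the PCM samples in the buffer) from the connection's audio channels; levelLabel is a hypothetical UILabel outlet:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    //averagePowerLevel is in decibels, where 0 dB means full scale
    let channels = connection.audioChannels
    guard !channels.isEmpty else { return }
    let averageDb = channels.map { $0.averagePowerLevel }.reduce(0, +) / Float(channels.count)

    //The delegate fires on the background queue passed to
    //setSampleBufferDelegate(_:queue:), so hop to the main thread for UI work
    DispatchQueue.main.async {
        self.levelLabel.text = String(format: "%.1f dB", averageDb)  //levelLabel is hypothetical
    }
}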

The method you were using has since been updated to the one below, and it is called by both AVCaptureAudioDataOutput and AVCaptureVideoDataOutput. Make sure you check which output the sample buffer came from before writing it to the asset writer:

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {

    //Make sure you check the output before using the sample buffer
    if output == audioDataOutput {
        //Use the sample buffer for audio
    }
}
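For reference, a short sketch of the declarations this check assumes - both outputs are held as class-level properties so the identity comparison works (the names here are illustrative):

let audioDataOutput = AVCaptureAudioDataOutput()
let videoDataOutput = AVCaptureVideoDataOutput()

func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    if output == audioDataOutput {
        //Audio sample buffer, e.g. append it to the asset writer's audio input
    } else if output == videoDataOutput {
        //Video sample buffer, e.g. append it to the asset writer's video input
    }
}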

My problem was that I had declared the AVAudioSession and AVCaptureSession as local variables, so as soon as I started the session it would just go away. Once I moved them to class-level variables, everything worked fine.
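A minimal sketch of that failure mode (the names here are illustrative): when the session is a local constant, it is deallocated as soon as the setup method returns, so the delegate is never called; holding it as a property keeps it alive:

//Broken: the session deallocates when setup() returns, so no callbacks ever arrive
func setup() {
    let session = AVCaptureSession()
    //...add inputs/outputs, set the sample buffer delegate...
    session.startRunning()
}

//Fixed: the session and output live as long as the view controller does
class CameraViewController: UIViewController {
    let session = AVCaptureSession()
    let audioOutput = AVCaptureAudioDataOutput()
    //...configure in viewDidLoad, then call session.startRunning()...
}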

While this code may answer the question, providing additional context on how and why it solves the problem would improve the answer's long-term value.