iOS Swift 3 audio won't play

Basically, I'm trying to integrate speech recognition into an app I'm building. I want a sound to play when the microphone button is pressed, and then to start recording and recognizing audio. The problem is that no sound plays when I press the button. Also, when I run the app on a physical iPhone, the volume slider in Control Center disappears. Can anyone help?

Here is my code:

import UIKit
import Speech
import AVFoundation

class VoiceViewController: UIViewController, SFSpeechRecognizerDelegate, UITextViewDelegate {

private let speechRecognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
private var speechRecognitionRequest: SFSpeechAudioBufferRecognitionRequest?
private var speechRecognitionTask: SFSpeechRecognitionTask?
private let audioEngine = AVAudioEngine()

var audioPlayer: AVAudioPlayer = AVAudioPlayer()
var url: URL?
var recording: Bool = false

let myTextView = UITextView()

func startSession() throws {

    if let recognitionTask = speechRecognitionTask {
        recognitionTask.cancel()
        self.speechRecognitionTask = nil
    }

    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(AVAudioSessionCategoryRecord)

    speechRecognitionRequest = SFSpeechAudioBufferRecognitionRequest()

    guard let inputNode = audioEngine.inputNode else { fatalError("Audio engine has no input node") }

    speechRecognitionRequest?.shouldReportPartialResults = true

    speechRecognitionTask = speechRecognizer.recognitionTask(with: speechRecognitionRequest!) { result, error in

        var finished = false

        if let result = result {
            print(result.bestTranscription.formattedString)
            finished = result.isFinal
        }

        if error != nil || finished {
            self.audioEngine.stop()
            inputNode.removeTap(onBus: 0)

            self.speechRecognitionRequest = nil
            self.speechRecognitionTask = nil
        }
    }

    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in

        self.speechRecognitionRequest?.append(buffer)
    }

    audioEngine.prepare()
    try audioEngine.start()
}

func stopTranscribing() {
    if audioEngine.isRunning {
        audioEngine.stop()
        speechRecognitionRequest?.endAudio()
    }
}

func btn_pressed() {
    print("pressed")

    if recording {
        url = URL(fileURLWithPath: Bundle.main.path(forResource: "tweet", ofType: "mp3")!)
    } else {
        url = URL(fileURLWithPath: Bundle.main.path(forResource: "gesture", ofType: "mp3")!)
    }

    do {
        audioPlayer = try AVAudioPlayer(contentsOf: url!)
    } catch let err {
        print(err)
    }
    audioPlayer.play()
    recording = !recording
    if recording {
        try! startSession()
    } else {
        stopTranscribing()
    }
}

override func viewDidLoad() {
    super.viewDidLoad()

    let button = UIButton()
    button.setTitle("push me", for: UIControlState())
    button.frame = CGRect(x: 10, y: 30, width: 80, height: 30)
    button.addTarget(self, action: #selector(btn_pressed), for: .touchUpInside)
    self.view.addSubview(button)

    myTextView.frame = CGRect(x: 60, y: 100, width: 300, height: 200)
    self.view.addSubview(myTextView)


    // Do any additional setup after loading the view.
}

override func didReceiveMemoryWarning() {
    super.didReceiveMemoryWarning()
    // Dispose of any resources that can be recreated.
}


/*
// MARK: - Navigation

// In a storyboard-based application, you will often want to do a little preparation before navigation
override func prepare(for segue: UIStoryboardSegue, sender: AnyObject?) {
    // Get the new view controller using segue.destinationViewController.
    // Pass the selected object to the new view controller.
}
*/

}
The documentation says:

AVAudioSessionCategoryRecord: The category for recording audio; this category silences playback audio.


That is the only audio session category you ever set, and you set it right as you start trying to play your AVAudioPlayer, so naturally you hear nothing. You need to think more carefully about using the audio session flexibly and correctly. If you want to play a sound and then start recording, play the sound under a playback category, and don't start recording until the AVAudioPlayerDelegate notifies you that the sound has finished.
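A minimal sketch of that pattern, building on the question's VoiceViewController and startSession() (the helper name playCueThenRecord is invented for illustration): play the cue under the playback category, and only start recording once the delegate callback fires.

extension VoiceViewController: AVAudioPlayerDelegate {

    // Hypothetical helper: play the cue sound under a playback category,
    // which (unlike the record category) does not silence output.
    func playCueThenRecord(cueURL: URL) throws {
        try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback)
        audioPlayer = try AVAudioPlayer(contentsOf: cueURL)
        audioPlayer.delegate = self
        audioPlayer.play()
    }

    // AVAudioPlayerDelegate callback: the cue has finished playing, so it
    // is now safe to switch to the record category and start the engine.
    func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, successfully flag: Bool) {
        try? startSession()
    }
}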

Use AVAudioSessionCategoryMultiRoute; it will keep playing your audio.
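If you take that route, the category line in the question's startSession() would become something like this (a sketch; MultiRoute has its own routing caveats):

    // MultiRoute keeps output routes active while the engine records,
    // so the cue sound is not silenced by the session category.
    try audioSession.setCategory(AVAudioSessionCategoryMultiRoute)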

Is it possible to use two separate audio session categories, so that I can record while the audio is playing? Isn't that what AVAudioSessionCategoryPlayAndRecord is for?

You could, but I think that's probably a bad idea in your use case. How is the computer supposed to recognize speech while you're playing an extra sound on top of it?

It only quickly plays a one-second sound.

Do as you like; I've answered the question.
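For reference, the combined category mentioned in the comments would be set like this; as noted above, the cue sound may then bleed into what the recognizer hears:

    // Allows simultaneous playback and recording on the default routes.
    try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayAndRecord)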