Swift: generating an AVAudioPCMBuffer with AVAudioRecorder


With iOS 10, Apple released a new speech recognition framework. Data can be passed to this framework either by appending AVAudioPCMBuffers or by giving it the URL of an m4a file. Currently, my speech recognition works using the latter, but that is only possible after someone has finished speaking, so it is not real time. Here is the code:

let audioSession = AVAudioSession.sharedInstance()
var audioRecorder: AVAudioRecorder!
var soundURLGlobal: URL!

func setUp() {
    let recordSettings = [AVSampleRateKey: NSNumber(value: Float(44100.0)),
                          AVFormatIDKey: NSNumber(value: Int32(kAudioFormatMPEG4AAC)),
                          AVNumberOfChannelsKey: NSNumber(value: 1),
                          AVEncoderAudioQualityKey: NSNumber(value: Int32(AVAudioQuality.medium.rawValue))]

    let fileManager = FileManager.default
    let urls = fileManager.urls(for: .documentDirectory, in: .userDomainMask)
    let documentDirectory = urls[0]
    let soundURL = documentDirectory.appendingPathComponent("sound.m4a")
    soundURLGlobal = soundURL

    do {
        try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord)
        try audioRecorder = AVAudioRecorder(url: soundURL, settings: recordSettings)
        audioRecorder.prepareToRecord()
    } catch {}
}

func start() {
    do {
        try audioSession.setActive(true)
        audioRecorder.record()
    } catch {}
}

func stop() {
    audioRecorder.stop()
    let request = SFSpeechURLRecognitionRequest(url: soundURLGlobal)
    let recognizer = SFSpeechRecognizer()
    recognizer?.recognitionTask(with: request) { (result, error) in
        guard let result = result else { return }
        if result.isFinal {
            print(result.bestTranscription.formattedString)
        }
    }
}
I am trying to convert this to be real time, but I cannot find where to get an AVAudioPCMBuffer from.

Thanks.

Good topic.

Hi B Person,

Here is a topic with a solution.

See the WWDC 2014 session 502, "AVAudioEngine in Practice": capturing the microphone is covered at around the 20-minute mark, and the tap code that creates the buffers is at about 21:50.

Here is the Swift 3 code:

@IBAction func button01Pressed(_ sender: Any) {

    let inputNode = audioEngine.inputNode
    let bus = 0
    inputNode?.installTap(onBus: bus, bufferSize: 2048, format: inputNode?.inputFormat(forBus: bus)) {
        (buffer: AVAudioPCMBuffer, time: AVAudioTime) -> Void in

            let theLength = Int(buffer.frameLength)
            print("theLength = \(theLength)")

            var samplesAsDoubles: [Double] = []
            for i in 0 ..< theLength {
                let theSample = Double(buffer.floatChannelData!.pointee[i])
                samplesAsDoubles.append(theSample)
            }

            print("samplesAsDoubles.count = \(samplesAsDoubles.count)")
    }

    audioEngine.prepare()
    try! audioEngine.start()

}
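
To connect this back to the original question of real-time recognition: instead of only inspecting the samples, the buffers delivered by the tap can be appended to an SFSpeechAudioBufferRecognitionRequest. A minimal sketch, assuming an `audioEngine` instance and that speech-recognition authorization has already been granted (the property names here are illustrative):

```swift
import AVFoundation
import Speech

let audioEngine = AVAudioEngine()
let recognizer = SFSpeechRecognizer()
let request = SFSpeechAudioBufferRecognitionRequest()

func startLiveTranscription() throws {
    let inputNode = audioEngine.inputNode
    let bus = 0

    // Every AVAudioPCMBuffer the tap produces is fed straight into the
    // request, so partial results arrive while the user is still speaking.
    inputNode.installTap(onBus: bus, bufferSize: 2048,
                         format: inputNode.inputFormat(forBus: bus)) { buffer, _ in
        request.append(buffer)
    }

    recognizer?.recognitionTask(with: request) { result, error in
        if let result = result {
            print(result.bestTranscription.formattedString)
        }
    }

    audioEngine.prepare()
    try audioEngine.start()
}
```

Call `request.endAudio()` when recording stops so the recognizer can deliver its final result.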

Hi, I think the author was asking how to do this with AVAudioRecorder, not AVAudioEngine.
func stopAudio() {
    let inputNode = audioEngine.inputNode
    let bus = 0
    inputNode?.removeTap(onBus: bus)
    self.audioEngine.stop()
}