Javascript: "400 RecognitionAudio not set." & "_InactiveRpcError" [Google Text-to-Speech API]

Tags: Javascript, Python, Google App Engine, Flask, Google Text To Speech

Here is what I want to achieve:

  • The user speaks to the web browser.
  • The web browser records the voice as a WAV file (Recorder.js) and sends it to the server (Google App Engine standard environment, Python 3.7).
  • The Python server calls the Google Cloud Speech-to-Text API to transcribe the WAV file, then sends the transcribed text back to the web browser.

Instead, I get this error message:

    2020-01-30 08:37:38 speech[20200130t173543]  "GET / HTTP/1.1" 200
    2020-01-30 08:37:38 speech[20200130t173543]  [2020-01-30 08:37:38 +0000] [8] [INFO] Starting gunicorn 20.0.4
    2020-01-30 08:37:38 speech[20200130t173543]  [2020-01-30 08:37:38 +0000] [8] [INFO] Listening at: http://0.0.0.0:8081 (8)
    2020-01-30 08:37:38 speech[20200130t173543]  [2020-01-30 08:37:38 +0000] [8] [INFO] Using worker: sync
    2020-01-30 08:37:38 speech[20200130t173543]  [2020-01-30 08:37:38 +0000] [15] [INFO] Booting worker with pid: 15
    2020-01-30 08:37:55 speech[20200130t173543]  "POST / HTTP/1.1" 500
    2020-01-30 08:37:56 speech[20200130t173543]  /tmp/file.wav exists
    2020-01-30 08:37:56 speech[20200130t173543]  [2020-01-30 08:37:56,717] ERROR in app: Exception on / [POST]
    2020-01-30 08:37:56 speech[20200130t173543]  Traceback (most recent call last):
      File "/env/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
        return callable_(*args, **kwargs)
      File "/env/lib/python3.7/site-packages/grpc/_channel.py", line 824, in __call__
        return _end_unary_response_blocking(state, call, False, None)
      File "/env/lib/python3.7/site-packages/grpc/_channel.py", line 726, in _end_unary_response_blocking
        raise _InactiveRpcError(state)
    grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
    2020-01-30 08:37:56 speech[20200130t173543]     status = StatusCode.INVALID_ARGUMENT
    2020-01-30 08:37:56 speech[20200130t173543]     details = "RecognitionAudio not set."
    2020-01-30 08:37:56 speech[20200130t173543]     debug_error_string = "{"created":"@1580373476.716586092","description":"Error received from peer ipv4:172.217.175.42:443","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"RecognitionAudio not set.","grpc_status":3}"
    2020-01-30 08:37:56 speech[20200130t173543]  >
    2020-01-30 08:37:56 speech[20200130t173543]
    2020-01-30 08:37:56 speech[20200130t173543]  The above exception was the direct cause of the following exception:
    2020-01-30 08:37:56 speech[20200130t173543]
    2020-01-30 08:37:57 speech[20200130t173543]  Traceback (most recent call last):
      File "/env/lib/python3.7/site-packages/flask/app.py", line 2446, in wsgi_app
        response = self.full_dispatch_request()
      File "/env/lib/python3.7/site-packages/flask/app.py", line 1951, in full_dispatch_request
        rv = self.handle_user_exception(e)
      File "/env/lib/python3.7/site-packages/flask/app.py", line 1820, in handle_user_exception
        reraise(exc_type, exc_value, tb)
      File "/env/lib/python3.7/site-packages/flask/_compat.py", line 39, in reraise
        raise value
      File "/env/lib/python3.7/site-packages/flask/app.py", line 1949, in full_dispatch_request
        rv = self.dispatch_request()
      File "/env/lib/python3.7/site-packages/flask/app.py", line 1935, in dispatch_request
        return self.view_functions[rule.endpoint](**req.view_args)
      File "/srv/main.py", line 38, in index
        response = client.recognize(config, audio)
      File "/env/lib/python3.7/site-packages/google/cloud/speech_v1/gapic/speech_client.py", line 256, in recognize
        request, retry=retry, timeout=timeout, metadata=metadata
      File "/env/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 143, in __call__
        return wrapped_func(*args, **kwargs)
      File "/env/lib/python3.7/site-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
        on_error=on_error,
      File "/env/lib/python3.7/site-packages/google/api_core/retry.py", line 184, in retry_target
        return target()
      File "/env/lib/python3.7/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
        return func(*args, **kwargs)
      File "/env/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
        six.raise_from(exceptions.from_grpc_error(exc), exc)
      File "<string>", line 3, in raise_from
    google.api_core.exceptions.InvalidArgument: 400 RecognitionAudio not set.
    
Here is app.yaml:

    runtime: python37
    entrypoint: gunicorn -b :$PORT main:app
    service: speech
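
One note on deployment: on App Engine standard, the client library can authenticate through the runtime service account via Application Default Credentials, so the credentials.json key file and the GOOGLE_APPLICATION_CREDENTIALS assignment in main.py below should only be needed when running locally. A minimal sketch, assuming the project's default service account has access to the Speech API:

    # Sketch: on App Engine standard no key file is required.
    from google.cloud import speech
    client = speech.SpeechClient()  # picks up Application Default Credentials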
    
Here is main.py:

    #!/usr/bin/env python
    # -*- coding: utf-8 -*-
    from flask import Flask
    from flask import request
    from flask import render_template
    from flask import send_file
    from google.cloud import speech
    from google.cloud.speech import enums
    from google.cloud.speech import types
    import os
    import io
    
    app = Flask(__name__)
    
    @app.route("/", methods=['POST', 'GET'])
    def index():
        if request.method == "POST":
            f = open('/tmp/file.wav', 'wb')
            f.write(request.data)
            f.close()
            if os.path.isfile('/tmp/file.wav'):
                print("/tmp/file.wav exists")
            os.environ["GOOGLE_APPLICATION_CREDENTIALS"]="credentials.json"
            client = speech.SpeechClient()
            # [START speech_python_migration_sync_request]
            # [START speech_python_migration_config]
            with io.open('/tmp/file.wav', 'rb') as audio_file:
                content = audio_file.read()
    
            audio = types.RecognitionAudio(content=content)
            config = types.RecognitionConfig(
                encoding=enums.RecognitionConfig.AudioEncoding.LINEAR16,
                sample_rate_hertz=16000,
                language_code='ja-JP')
            # [END speech_python_migration_config]
    
            # [START speech_python_migration_sync_response]
            response = client.recognize(config, audio)
            # [END speech_python_migration_sync_request]
            # Each result is for a consecutive portion of the audio. Iterate through
            # them to get the transcripts for the entire audio file.
            for result in response.results:
                # The first alternative is the most likely one for this portion.
                print(u'Transcript: {}'.format(result.alternatives[0].transcript))
            return print(u'Transcript: {}'.format(result.alternatives[0].transcript))    
        else:
            return render_template("index.html")
    
    if __name__ == "__main__":
        app.run()
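
Two details in this version stand out. First, the browser sends the recording as multipart FormData (see app.js below), and Flask does not expose a form-encoded body through request.data, so the WAV written to /tmp is empty. Second, return print(...) returns None rather than a valid response. A diagnostic sketch (hypothetical, for the top of the POST branch) that makes the first problem visible:

    # Diagnostic sketch: inspect what the browser actually sent.
    print("content type:", request.content_type)     # multipart/form-data; boundary=...
    print("raw body bytes:", len(request.data))      # 0 for a FormData upload
    print("form file fields:", list(request.files))  # ['audio_data']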
    
Here is requirements.txt (note: it is unpinned, and the enums/types import style used in main.py belongs to the google-cloud-speech 1.x API, which was removed in 2.0):

    Flask
    google-cloud-speech
    gunicorn
    
Here is index.html:

    <!DOCTYPE html>
    <html>
      <head>
        <meta charset="UTF-8">
        <title>Simple Recorder.js demo with record, stop and pause - addpipe.com</title>
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
      </head>
      <body>
        <h1>Simple Recorder.js demo</h1>
    
        <div id="controls">
         <button id="recordButton">Record</button>
         <button id="pauseButton" disabled>Pause</button>
         <button id="stopButton" disabled>Stop</button>
        </div>
        <div id="formats">Format: start recording to see sample rate</div>
        <p><strong>Recordings:</strong></p>
        <ol id="recordingsList"></ol>
        <!-- inserting these scripts at the end to be able to use all the elements in the DOM -->
        <script src="https://cdn.rawgit.com/mattdiamond/Recorderjs/08e7abd9/dist/recorder.js"></script>
        <script src="/static/js/app.js"></script>
      </body>
    </html>
    
    

Here is app.js:

    //webkitURL is deprecated but nevertheless
    URL = window.URL || window.webkitURL;
    
    var gumStream;                      //stream from getUserMedia()
    var rec;                            //Recorder.js object
    var input;                          //MediaStreamAudioSourceNode we'll be recording
    
    // shim for AudioContext when it's not avb. 
    var AudioContext = window.AudioContext || window.webkitAudioContext;
    var audioContext //audio context to help us record
    
    var recordButton = document.getElementById("recordButton");
    var stopButton = document.getElementById("stopButton");
    var pauseButton = document.getElementById("pauseButton");
    
    //add events to those 2 buttons
    recordButton.addEventListener("click", startRecording);
    stopButton.addEventListener("click", stopRecording);
    pauseButton.addEventListener("click", pauseRecording);
    
    function startRecording() {
        console.log("recordButton clicked");
    
        /*
            Simple constraints object, for more advanced audio features see
            https://addpipe.com/blog/audio-constraints-getusermedia/
        */
    
        var constraints = { audio: true, video:false }
    
        /*
            Disable the record button until we get a success or fail from getUserMedia() 
        */
    
        recordButton.disabled = true;
        stopButton.disabled = false;
        pauseButton.disabled = false
    
        /*
            We're using the standard promise based getUserMedia() 
            https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia
        */
    
        navigator.mediaDevices.getUserMedia(constraints).then(function(stream) {
            console.log("getUserMedia() success, stream created, initializing Recorder.js ...");
    
            /*
                create an audio context after getUserMedia is called
                sampleRate might change after getUserMedia is called, like it does on macOS when recording through AirPods
                the sampleRate defaults to the one set in your OS for your playback device
    
            */
            audioContext = new AudioContext();
    
            //update the format 
            document.getElementById("formats").innerHTML="Format: 1 channel pcm @ "+audioContext.sampleRate/1000+"kHz"
    
            /*  assign to gumStream for later use  */
            gumStream = stream;
    
            /* use the stream */
            input = audioContext.createMediaStreamSource(stream);
    
            /* 
                Create the Recorder object and configure to record mono sound (1 channel)
                Recording 2 channels  will double the file size
            */
            rec = new Recorder(input,{numChannels:1})
    
            //start the recording process
            rec.record()
    
            console.log("Recording started");
    
        }).catch(function(err) {
            //enable the record button if getUserMedia() fails
            recordButton.disabled = false;
            stopButton.disabled = true;
            pauseButton.disabled = true
        });
    }
    
    function pauseRecording(){
        console.log("pauseButton clicked rec.recording=",rec.recording );
        if (rec.recording){
            //pause
            rec.stop();
            pauseButton.innerHTML="Resume";
        }else{
            //resume
            rec.record()
            pauseButton.innerHTML="Pause";
    
        }
    }
    
    function stopRecording() {
        console.log("stopButton clicked");
    
        //disable the stop button, enable the record too allow for new recordings
        stopButton.disabled = true;
        recordButton.disabled = false;
        pauseButton.disabled = true;
    
        //reset button just in case the recording is stopped while paused
        pauseButton.innerHTML="Pause";
    
        //tell the recorder to stop the recording
        rec.stop();
    
        //stop microphone access
        gumStream.getAudioTracks()[0].stop();
    
        //create the wav blob and pass it on to createDownloadLink
        rec.exportWAV(createDownloadLink);
    }
    
    function createDownloadLink(blob) {
    
        var url = URL.createObjectURL(blob);
        var au = document.createElement('audio');
        var li = document.createElement('li');
        var link = document.createElement('a');
    
        //name of .wav file to use during upload and download (without extendion)
        var filename = new Date().toISOString();
    
        //add controls to the <audio> element
        au.controls = true;
        au.src = url;
    
        //save to disk link
        link.href = url;
        link.download = filename+".wav"; //download forces the browser to donwload the file using the  filename
        link.innerHTML = "Save to disk";
    
        //add the new audio element to li
        li.appendChild(au);
    
        //add the filename to the li
        li.appendChild(document.createTextNode(filename+".wav "))
    
        //add the save to disk link to li
        li.appendChild(link);
    
        //upload link
        var upload = document.createElement('a');
        upload.href="#";
        upload.innerHTML = "Upload";
        upload.addEventListener("click", function(event){
              var xhr=new XMLHttpRequest();
              xhr.onload=function(e) {
                  if(this.readyState === 4) {
                      console.log("Server returned: ",e.target.responseText);
                  }
              };
              var fd=new FormData();
              fd.append("audio_data",blob, filename);
              xhr.open("POST","/",true);
              xhr.send(fd);
        })
        li.appendChild(document.createTextNode (" "))//add a space in between
        li.appendChild(upload)//add the upload link to li
    
        //add the li element to the ol
        recordingsList.appendChild(li);
    }
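
One more mismatch is hiding here: rec.exportWAV() writes the WAV at audioContext.sampleRate, which is the device rate (often 44.1 or 48 kHz, as the comment in startRecording notes), while the server pins sample_rate_hertz=16000. For WAV input the rate is optional and is read from the file header, so the config can simply omit it. A sketch under the same 1.x API (types and enums as imported in main.py):

    # Sketch: for LINEAR16 WAV input the sample rate comes from the WAV
    # header, so hard-coding 16000 is unnecessary (and wrong here).
    config = types.RecognitionConfig(
        encoding=enums.RecognitionConfig.AudioEncoding.LINEAR16,
        language_code='ja-JP')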
    
    
    
The working code below fixes the root cause: the browser uploads the recording as multipart FormData under the field name audio_data, and Flask does not expose such a body through request.data, so the server was writing an empty WAV file and sending an empty RecognitionAudio message, which the API reports as "400 RecognitionAudio not set." The fix reads the upload from request.files['audio_data'] instead. On the client side, the XHR callback is switched to onreadystatechange so the server's response is shown once the request completes:

    xhr.onreadystatechange = function() {
        if (xhr.readyState == XMLHttpRequest.DONE) {
            document.write(xhr.responseText);
        }
    }

Here is the updated main.py, which also drops the hard-coded sample rate (see the note above), enables automatic punctuation, and renders the transcript through a result.html template instead of returning print(...):
    
    #!/usr/bin/env python
    # -*- coding: utf-8 -*-
    from flask import Flask
    from flask import request
    from flask import render_template
    from flask import send_file
    from google.cloud import speech
    from google.cloud.speech import enums
    from google.cloud.speech import types
    import os
    import io
    
    app = Flask(__name__)
    
    @app.route("/", methods=['POST', 'GET'])
    def index():
        if request.method == "POST":
            f = open('/tmp/file.wav', 'wb')
            f.write(request.files['audio_data'].read())
            f.close()
    
            os.environ["GOOGLE_APPLICATION_CREDENTIALS"]="credentials.json"
            client = speech.SpeechClient()
            with io.open('/tmp/file.wav', 'rb') as audio_file:
                content = audio_file.read()
    
            audio = types.RecognitionAudio(content=content)
            config = types.RecognitionConfig(
                encoding=enums.RecognitionConfig.AudioEncoding.LINEAR16,
                language_code='ja-JP',
                enable_automatic_punctuation=True)
            response = client.recognize(config, audio)
    
            resultsentence = []
            for result in response.results:
                # The first alternative is the most likely one for this portion.
                sentence = u'Transcript: {}'.format(result.alternatives[0].transcript)
                resultsentence.append(sentence)
    
            print(resultsentence)
    
            return render_template("result.html", resultsentence=resultsentence)
        else:
            return render_template("index.html")
    
    if __name__ == "__main__":
        app.run()
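
The handler can be verified without a browser by posting a WAV the same way the client does, as multipart form data under the field name audio_data. A local smoke-test sketch (the requests library and a known-good mono WAV file are assumptions, not part of the app):

    # Smoke-test sketch: mimic the browser's FormData upload.
    import requests

    with open("sample.wav", "rb") as f:
        r = requests.post("http://localhost:5000/",
                          files={"audio_data": ("sample.wav", f, "audio/wav")})
    print(r.status_code)
    print(r.text)  # rendered result.html containing the transcript

And here is the updated app.js: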
    
    //webkitURL is deprecated but nevertheless
    URL = window.URL || window.webkitURL;
    
    var gumStream;                      //stream from getUserMedia()
    var rec;                            //Recorder.js object
    var input;                          //MediaStreamAudioSourceNode we'll be recording
    
    // shim for AudioContext when it's not avb. 
    var AudioContext = window.AudioContext || window.webkitAudioContext;
    var audioContext //audio context to help us record
    
    var recordButton = document.getElementById("recordButton");
    var stopButton = document.getElementById("stopButton");
    var pauseButton = document.getElementById("pauseButton");
    
    //add events to those 2 buttons
    recordButton.addEventListener("click", startRecording);
    stopButton.addEventListener("click", stopRecording);
    pauseButton.addEventListener("click", pauseRecording);
    
    function startRecording() {
        console.log("recordButton clicked");
    
        /*
            Simple constraints object, for more advanced audio features see
            https://addpipe.com/blog/audio-constraints-getusermedia/
        */
    
        var constraints = { audio: true, video:false }
    
        /*
            Disable the record button until we get a success or fail from getUserMedia() 
        */
    
        recordButton.disabled = true;
        stopButton.disabled = false;
        pauseButton.disabled = false
    
        /*
            We're using the standard promise based getUserMedia() 
            https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia
        */
    
        navigator.mediaDevices.getUserMedia(constraints).then(function(stream) {
            console.log("getUserMedia() success, stream created, initializing Recorder.js ...");
    
            /*
                create an audio context after getUserMedia is called
                sampleRate might change after getUserMedia is called, like it does on macOS when recording through AirPods
                the sampleRate defaults to the one set in your OS for your playback device
    
            */
            audioContext = new AudioContext();
    
            //update the format 
            document.getElementById("formats").innerHTML="Format: 1 channel pcm @ "+audioContext.sampleRate/1000+"kHz"
    
            /*  assign to gumStream for later use  */
            gumStream = stream;
    
            /* use the stream */
            input = audioContext.createMediaStreamSource(stream);
    
            /* 
                Create the Recorder object and configure to record mono sound (1 channel)
                Recording 2 channels  will double the file size
            */
            rec = new Recorder(input,{numChannels:1})
    
            //start the recording process
            rec.record()
    
            console.log("Recording started");
    
        }).catch(function(err) {
            //enable the record button if getUserMedia() fails
            recordButton.disabled = false;
            stopButton.disabled = true;
            pauseButton.disabled = true
        });
    }
    
    function pauseRecording(){
        console.log("pauseButton clicked rec.recording=",rec.recording );
        if (rec.recording){
            //pause
            rec.stop();
            pauseButton.innerHTML="Resume";
        }else{
            //resume
            rec.record()
            pauseButton.innerHTML="Pause";
    
        }
    }
    
    function stopRecording() {
        console.log("stopButton clicked");
    
        //disable the stop button, enable the record too allow for new recordings
        stopButton.disabled = true;
        recordButton.disabled = false;
        pauseButton.disabled = true;
    
        //reset button just in case the recording is stopped while paused
        pauseButton.innerHTML="Pause";
    
        //tell the recorder to stop the recording
        rec.stop();
    
        //stop microphone access
        gumStream.getAudioTracks()[0].stop();
    
        //create the wav blob and pass it on to createDownloadLink
        rec.exportWAV(createDownloadLink);
    }
    
    function createDownloadLink(blob) {
    
        var url = URL.createObjectURL(blob);
        var au = document.createElement('audio');
        var li = document.createElement('li');
        var link = document.createElement('a');
    
        //name of .wav file to use during upload and download (without extendion)
        var filename = new Date().toISOString();
    
        //add controls to the <audio> element
        au.controls = true;
        au.src = url;
    
        //save to disk link
        link.href = url;
        link.download = filename+".wav"; //download forces the browser to donwload the file using the  filename
        link.innerHTML = "Save to disk";
    
        //add the new audio element to li
        li.appendChild(au);
    
        //add the filename to the li
        li.appendChild(document.createTextNode(filename+".wav "))
    
        //add the save to disk link to li
        li.appendChild(link);
    
        //upload link
        var upload = document.createElement('a');
        upload.href="#";
        upload.innerHTML = "Upload";
        upload.addEventListener("click", function(event){
              var xhr=new XMLHttpRequest();
              xhr.onreadystatechange = function() {
                if (xhr.readyState == XMLHttpRequest.DONE) {
                    document.write(xhr.responseText);
                }
            }
              var fd=new FormData();
              fd.append("audio_data",blob, filename);
              xhr.open("POST","/",true);
              xhr.send(fd);
        })
        li.appendChild(document.createTextNode (" "))//add a space in between
        li.appendChild(upload)//add the upload link to li
    
        //add the li element to the ol
        recordingsList.appendChild(li);
    }