
Microphone activity level of a WebRTC MediaStream in JavaScript


I'd like some advice on how to get the microphone activity level of an audio MediaStreamTrack JavaScript object in Chrome/Canary. The MediaStreamTrack object is an audio track of the MediaStream returned by getUserMedia, as part of the WebRTC JavaScript API.

What you're looking for is webkitAudioContext and its createMediaStreamSource method.

Here's a code sample that draws a green bar acting like a VU meter:

navigator.webkitGetUserMedia({audio:true, video:true}, function(stream){
    audioContext = new webkitAudioContext();
    analyser = audioContext.createAnalyser();
    microphone = audioContext.createMediaStreamSource(stream);
    javascriptNode = audioContext.createJavaScriptNode(2048, 1, 1);

    analyser.smoothingTimeConstant = 0.3;
    analyser.fftSize = 1024;

    microphone.connect(analyser);
    analyser.connect(javascriptNode);
    javascriptNode.connect(audioContext.destination);

    canvasContext = $("#canvas")[0].getContext("2d");

    javascriptNode.onaudioprocess = function() {
        var array =  new Uint8Array(analyser.frequencyBinCount);
        analyser.getByteFrequencyData(array);
        var values = 0;

        var length = array.length;
        for (var i = 0; i < length; i++) {
            values += array[i];
        }

        var average = values / length;
        canvasContext.clearRect(0, 0, 60, 130);
        canvasContext.fillStyle = '#00ff00';
        canvasContext.fillRect(0,130-average,25,130);
    }

}
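As a side note, the summing loop inside onaudioprocess can be factored into small pure helpers. This is only an illustrative sketch (the names averageLevel and rmsLevel are not from the answer above): averaging frequency bins works, but a VU meter conventionally shows RMS computed from time-domain samples (getByteTimeDomainData, which centers silence at 128):

```javascript
// Illustrative helpers, not part of the original answer.

// Mean of byte frequency data, as in the onaudioprocess loop above.
function averageLevel(bytes) {
  var sum = 0;
  for (var i = 0; i < bytes.length; i++) sum += bytes[i];
  return bytes.length ? sum / bytes.length : 0;
}

// RMS of byte *time-domain* data (getByteTimeDomainData centers
// silence at 128), normalized to 0..1 -- closer to a true VU level.
function rmsLevel(bytes) {
  var sum = 0;
  for (var i = 0; i < bytes.length; i++) {
    var v = (bytes[i] - 128) / 128;
    sum += v * v;
  }
  return bytes.length ? Math.sqrt(sum / bytes.length) : 0;
}
```

Either helper can be fed the Uint8Array from the analyser each audio-process tick; only the scaling of the bar height changes.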

The green bar moves up and down very nicely when the microphone picks up audio:

<script type="text/javascript">
navigator.webkitGetUserMedia({audio:true, video:true}, function(stream){
    // audioContext = new webkitAudioContext(); deprecated  OLD!!
    audioContext = new AudioContext(); // NEW!!
    analyser = audioContext.createAnalyser();
    microphone = audioContext.createMediaStreamSource(stream);
    javascriptNode = audioContext.createJavaScriptNode(2048, 1, 1);

    analyser.smoothingTimeConstant = 0.3;
    analyser.fftSize = 1024;

    microphone.connect(analyser);
    analyser.connect(javascriptNode);
    javascriptNode.connect(audioContext.destination);

    //canvasContext = $("#canvas")[0].getContext("2d");
    canvasContext = document.getElementById("test");
    canvasContext= canvasContext.getContext("2d");

    javascriptNode.onaudioprocess = function() {
        var array =  new Uint8Array(analyser.frequencyBinCount);
        analyser.getByteFrequencyData(array);
        var values = 0;

        var length = array.length;
        for (var i = 0; i < length; i++) {
            values += array[i];
        }

        var average = values / length;
        canvasContext.clearRect(0, 0, 60, 130);
        canvasContext.fillStyle = '#00ff00';
        canvasContext.fillRect(0,130-average,25,130);
    }

}  
);
</script>
<canvas id="test" style="background-color: black;"></canvas>


Update: modified the code to use the following:

navigator.mediaDevices.getUserMedia(constraints).then(
    function(stream){
        // code ... 
    }).catch(function(err) {
        // code ... 
});
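Inside that catch branch, err.name distinguishes a permission denial from missing hardware. A hedged sketch (handleError is an illustrative helper name, and the guard just keeps the snippet loadable outside a browser):

```javascript
var constraints = { audio: true, video: false };

// Map a getUserMedia rejection to a user-facing message.
// handleError is an illustrative name, not a standard API.
function handleError(err) {
  if (err.name === 'NotAllowedError') return 'microphone permission denied';
  if (err.name === 'NotFoundError') return 'no microphone found';
  return 'getUserMedia failed: ' + err.name;
}

// Browser-only part, guarded so the snippet also loads under Node.
if (typeof navigator !== 'undefined' && navigator.mediaDevices) {
  navigator.mediaDevices.getUserMedia(constraints)
    .then(function (stream) { /* wire the stream into the analyser here */ })
    .catch(function (err) { console.warn(handleError(err)); });
}
```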

One small issue here: you forgot the closing parenthesis at the end. Also, audioContext.createJavaScriptNode has been renamed to audioContext.createScriptProcessor, and webkitGetUserMedia now requires you to pass an error callback as its third argument.
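With the renamed method, the node graph from the answer can be wired up like this. A minimal sketch (setupVuMeter is an illustrative name; it returns the nodes so the caller can attach onaudioprocess):

```javascript
// Illustrative wiring helper using the renamed API:
// createScriptProcessor instead of the removed createJavaScriptNode.
function setupVuMeter(audioContext, stream) {
  var analyser = audioContext.createAnalyser();
  analyser.smoothingTimeConstant = 0.3;
  analyser.fftSize = 1024;

  var source = audioContext.createMediaStreamSource(stream);
  var processor = audioContext.createScriptProcessor(2048, 1, 1);

  // microphone -> analyser -> processor -> destination
  source.connect(analyser);
  analyser.connect(processor);
  processor.connect(audioContext.destination);

  return { analyser: analyser, processor: processor };
}
```

The caller then assigns processor.onaudioprocess exactly as in the snippets above. (createScriptProcessor is itself deprecated in favor of AudioWorklet, but it matches the structure of this answer.)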