
Javascript: amplify a MediaStreamTrack (audio) before streaming it with WebRTC


I use getAudioTracks() to get the audio from a video element. I then need to amplify it (increase its volume) before adding it with addTrack() to a canvas stream, so that both can be streamed over WebRTC.

Is there a way to do this client-side with JavaScript?

I worked out a solution. For anyone who needs the same thing:

            // supposing we have the getUserMedia stream and a canvas,
            // we want to stream the canvas content and the
            // amplified audio from the user's microphone

            var s = canvas.captureStream();

            var context = new AudioContext();

            var gainNode = context.createGain();
            gainNode.gain.value = 1;

            // compress to avoid clipping
            var compressor = context.createDynamicsCompressor();
            compressor.threshold.value = -30;
            compressor.knee.value = 40;
            compressor.ratio.value = 4;
            compressor.attack.value = 0;
            compressor.release.value = 0.25;
            // note: compressor.reduction is read-only (it reports the
            // current gain reduction in dB), so it must not be assigned

            var destination = context.createMediaStreamDestination();

            var input = context.createMediaStreamSource(stream);

            input.connect(compressor);
            compressor.connect(gainNode);
            gainNode.connect(destination);

            var audioTracks = destination.stream.getAudioTracks();

            // use a slider to alter the value of amplification dynamically
            var rangeElement = document.getElementById("amplifierSlider");
            rangeElement.addEventListener("input", function() {
                gainNode.gain.value = parseFloat(rangeElement.value);
            }, false);

            for (var i = 0; i < audioTracks.length; i++) {
                s.addTrack(audioTracks[i]);
            }

            // stream the canvas with the added audio tracks
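To actually send the combined stream over WebRTC, the tracks can be handed to an RTCPeerConnection. Here is a minimal sketch (the peer connection setup and signaling are my own assumptions, not part of the original answer); the track-adding step is factored into a small helper so it works with any object exposing addTrack:

```javascript
// Hypothetical helper (not from the original answer): add every track
// of a MediaStream to a peer connection, returning the resulting senders.
function addStreamTracks(pc, stream) {
    var senders = [];
    stream.getTracks().forEach(function(track) {
        senders.push(pc.addTrack(track, stream));
    });
    return senders;
}

// In the browser (sketch; assumes signaling is handled elsewhere):
// var pc = new RTCPeerConnection();
// addStreamTracks(pc, s); // s is the canvas stream with the audio tracks added
```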
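A side note on the slider: gain.value is a linear multiplier, so if you want to label the amplification in decibels, the conversion is dB = 20 * log10(gain). A small helper (the function name is my own, not from the answer):

```javascript
// Convert a linear gain factor to decibels: dB = 20 * log10(gain).
// A gain of 1 is 0 dB (unchanged), 2 is about +6 dB, 10 is +20 dB.
function gainToDb(gain) {
    return 20 * Math.log10(gain);
}

// e.g. show the current amplification next to the slider:
// label.textContent = gainToDb(gainNode.gain.value).toFixed(1) + " dB";
```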

This is really cool, but do you have any idea how to cancel the feedback??