
JavaScript: Create the waveform of a full track with the Web Audio API


Real-time moving waveform

I'm currently playing with the Web Audio API and made a spectrum using canvas:

function animate(){
    var a = new Uint8Array(analyser.frequencyBinCount),
        y = new Uint8Array(analyser.frequencyBinCount), b, c, d;
    analyser.getByteTimeDomainData(y);
    analyser.getByteFrequencyData(a);
    b = c = a.length;
    d = w / c;
    ctx.clearRect(0, 0, w, h);
    while (b--) {
        var bh = a[b] + 1;
        ctx.fillStyle = 'hsla(' + (b / c * 240) + ',' + (y[b] / 255 * 100 | 0) + '%,50%,1)';
        ctx.fillRect(1 * b, h - bh, 1, bh);
        ctx.fillRect(1 * b, y[b], 1, 1);
    }
    animation = webkitRequestAnimationFrame(animate);
}
Small question: is there a way to avoid writing new Uint8Array(analyser.frequencyBinCount) twice?
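
One possible way around that (just a sketch, assuming the same analyser and drawing code as above; freqData and timeData are new names): allocate both arrays once outside animate() and let the analyser refill them on every frame, since getByteFrequencyData and getByteTimeDomainData copy into the array they are given.

// Allocate the two Uint8Arrays once, outside animate().
var freqData = new Uint8Array(analyser.frequencyBinCount);
var timeData = new Uint8Array(analyser.frequencyBinCount);

function animate(){
    analyser.getByteFrequencyData(freqData);   // refills freqData in place
    analyser.getByteTimeDomainData(timeData);  // refills timeData in place
    // ... same drawing code as above, using freqData instead of a and timeData instead of y ...
    animation = webkitRequestAnimationFrame(animate);
}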

Demo

Add an MP3/MP4 file and wait. (Tested in Chrome.)

But there are many problems. I can't find proper documentation for the various audio filters.

Also, if you watch the spectrum, you'll notice there is no data after about 70% of the range. What does that mean? That maybe from 16 kHz to 20 kHz there is no sound? I would apply text to the canvas to show the various Hz. But where?

I found out that the returned data is a power of 32 in length, with a max of 2048, and the height is always 256.
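
On the empty top of the spectrum: the bins returned by getByteFrequencyData span 0 Hz up to the Nyquist frequency (sampleRate / 2, about 22 kHz for 44.1 kHz audio), and most music carries very little energy above roughly 16 kHz, which is why the last part of the range looks flat. Bin i corresponds to roughly i * sampleRate / fftSize Hz, so a hypothetical helper along these lines (drawHzLabels is not part of the original code, only a sketch) could put Hz labels on the canvas:

// Hypothetical helper: label the frequency axis of the spectrum.
// Assumes the spectrum bins are stretched across the full canvas width w.
function drawHzLabels(ctx, analyser, w, h) {
    var nyquist = analyser.context.sampleRate / 2;   // highest frequency in the data
    ctx.fillStyle = '#fff';
    for (var hz = 0; hz <= nyquist; hz += 5000) {
        var x = hz / nyquist * w;                     // scale frequency to canvas x
        ctx.fillText((hz / 1000) + ' kHz', x, h - 2);
    }
}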

But the real question is... I want to create a moving waveform like in Traktor.

I already did that with PHP a while ago: it converts the file to a low bitrate, then extracts the data and converts it to an image. I found the script somewhere... but I don't remember where... Note: it needs LAME.


The script works... but you are limited to a maximum image size of 4k pixels.

So the waveform is not nice if it only covers a few milliseconds.

What do I need to store/create a real-time waveform like the Traktor app or that PHP script does? By the way, Traktor also has a colored waveform (the PHP script does not).

EDIT

I rewrote your script so it fits my idea... and it's relatively fast.

As you can see inside the createArray function, I push the various lines into an object whose keys are the x coordinates.

I simply take the highest number.

Here is where we can play with the colors.

var ajaxB, AC, B, LC, op, x, y, ARRAY = {}, W = 1024, H = 256;
var aMax = Math.max.apply.bind(Math.max, Math);
function error(a){
    console.log(a);
};
function createDrawing(){
    console.log('drawingArray');
    var C = document.createElement('canvas');
    C.width = W;
    C.height = H;
    document.body.appendChild(C);
    var context = C.getContext('2d');
    context.save();
    context.strokeStyle = '#121';
    context.globalCompositeOperation = 'lighter';
    L2 = W * 1;
    while (L2--) {
        context.beginPath();
        context.moveTo(L2, 0);
        context.lineTo(L2 + 1, ARRAY[L2]);
        context.stroke();
    }
    context.restore();
};
function createArray(a){
    console.log('creatingArray');
    B = a;
    LC = B.getChannelData(0); // Float32Array describing the left channel
    L = LC.length;
    op = W / L;
    for (var i = 0; i < L; i++) {
        // key each sample by its x coordinate and keep only the highest value
        x = W * i / L | 0;
        y = LC[i] * H / 2;
        if (!ARRAY[x] || ARRAY[x] < y) {
            ARRAY[x] = y;
        }
    }
    createDrawing();
};

Ok, what I would do is: load the sound with an XMLHttpRequest, then decode it using the Web Audio API, and then display it "carefully" to get the colors you are searching for.

I just made a quick version, copy-pasting from various projects of mine, and it works quite well, as you can see in this picture:

The issue is that it is insanely slow. To get a (much) faster speed, you'll have to do some computations to reduce the number of lines to draw on the canvas, because at 44,100 Hz you very quickly get too many lines to draw.

// AUDIO CONTEXT
window.AudioContext = window.AudioContext || window.webkitAudioContext ;

if (!AudioContext) alert('This site cannot be run in your Browser. Try a recent Chrome or Firefox. ');

var audioContext = new AudioContext();
var currentBuffer  = null;

// CANVAS
var canvasWidth = 512,  canvasHeight = 120 ;
var newCanvas   = createCanvas (canvasWidth, canvasHeight);
var context     = null;

window.onload = appendCanvas;
function appendCanvas() { document.body.appendChild(newCanvas);
                          context = newCanvas.getContext('2d'); }

// MUSIC LOADER + DECODE
function loadMusic(url) {   
    var req = new XMLHttpRequest();
    req.open( "GET", url, true );
    req.responseType = "arraybuffer";    
    req.onreadystatechange = function (e) {
          if (req.readyState == 4) {
             if(req.status == 200)
                  audioContext.decodeAudioData(req.response, 
                    function(buffer) {
                             currentBuffer = buffer;
                             displayBuffer(buffer);
                    }, onDecodeError);
             else
                  alert('error during the load. Wrong url or cross origin issue');
          }
    } ;
    req.send();
}

function onDecodeError() {  alert('error while decoding your file.');  }

// MUSIC DISPLAY
function displayBuffer(buff /* is an AudioBuffer */) {
   var leftChannel = buff.getChannelData(0); // Float32Array describing left channel     
   var lineOpacity = canvasWidth / leftChannel.length  ;      
   context.save();
   context.fillStyle = '#222' ;
   context.fillRect(0,0,canvasWidth,canvasHeight );
   context.strokeStyle = '#121';
   context.globalCompositeOperation = 'lighter';
   context.translate(0,canvasHeight / 2);
   context.globalAlpha = 0.06 ; // lineOpacity ;
   for (var i=0; i<  leftChannel.length; i++) {
       // on which line do we get ?
       var x = Math.floor ( canvasWidth * i / leftChannel.length ) ;
       var y = leftChannel[i] * canvasHeight / 2 ;
       context.beginPath();
       context.moveTo( x  , 0 );
       context.lineTo( x+1, y );
       context.stroke();
   }
   context.restore();
   console.log('done');
}

function createCanvas ( w, h ) {
    var newCanvas = document.createElement('canvas');
    newCanvas.width  = w;     newCanvas.height = h;
    return newCanvas;
};


loadMusic('could_be_better.mp3');

Here is a second version of the display function that "resamples" the data into a few values per canvas pixel, so far fewer lines have to be drawn:

// MUSIC DISPLAY
function displayBuffer2(buff /* is an AudioBuffer */) {
   var leftChannel = buff.getChannelData(0); // Float32Array describing left channel       
   // we 'resample' with cumul, count, variance
   // Offset 0 : PositiveCumul  1: PositiveCount  2: PositiveVariance
   //        3 : NegativeCumul  4: NegativeCount  5: NegativeVariance
   // that makes 6 data per bucket
   var resampled = new Float64Array(canvasWidth * 6 );
   var i=0, j=0, buckIndex = 0;
   var min=1e3, max=-1e3;
   var thisValue=0, res=0;
   var sampleCount = leftChannel.length;
   // first pass for mean
   for (i=0; i<sampleCount; i++) {
        // in which bucket do we fall ?
        buckIndex = 0 | ( canvasWidth * i / sampleCount );
        buckIndex *= 6;
        // positive or negative ?
        thisValue = leftChannel[i];
        if (thisValue>0) {
            resampled[buckIndex    ] += thisValue;
            resampled[buckIndex + 1] +=1;               
        } else if (thisValue<0) {
            resampled[buckIndex + 3] += thisValue;
            resampled[buckIndex + 4] +=1;                           
        }
        if (thisValue<min) min=thisValue;
        if (thisValue>max) max = thisValue;
   }
   // compute mean now
   for (i=0, j=0; i<canvasWidth; i++, j+=6) {
       if (resampled[j+1] != 0) {
             resampled[j] /= resampled[j+1];
       }
       if (resampled[j+4]!= 0) {
             resampled[j+3] /= resampled[j+4];
       }
   }
   // second pass for mean variation  ( variance is too low)
   for (i=0; i<leftChannel.length; i++) {
        // in which bucket do we fall ?
        buckIndex = 0 | (canvasWidth * i / leftChannel.length );
        buckIndex *= 6;
        // positive or negative ?
        thisValue = leftChannel[i];
        if (thisValue>0) {
            resampled[buckIndex + 2] += Math.abs( resampled[buckIndex] - thisValue );               
        } else  if (thisValue<0) {
            resampled[buckIndex + 5] += Math.abs( resampled[buckIndex + 3] - thisValue );                           
        }
   }
   // compute mean variation/variance now
   for (i=0, j=0; i<canvasWidth; i++, j+=6) {
        if (resampled[j+1]) resampled[j+2] /= resampled[j+1];
        if (resampled[j+4]) resampled[j+5] /= resampled[j+4];   
   }
   context.save();
   context.fillStyle = '#000' ;
   context.fillRect(0,0,canvasWidth,canvasHeight );
   context.translate(0.5,canvasHeight / 2);   
  context.scale(1, 200);

   for (var i=0; i< canvasWidth; i++) {
        j=i*6;
       // draw from positiveAvg - variance to negativeAvg - variance 
       context.strokeStyle = '#F00';
       context.beginPath();
       context.moveTo( i  , (resampled[j] - resampled[j+2] ));
       context.lineTo( i  , (resampled[j +3] + resampled[j+5] ) );
       context.stroke();
       // draw from positiveAvg - variance to positiveAvg + variance 
       context.strokeStyle = '#FFF';
       context.beginPath();
       context.moveTo( i  , (resampled[j] - resampled[j+2] ));
       context.lineTo( i  , (resampled[j] + resampled[j+2] ) );
       context.stroke();
       // draw from negativeAvg + variance to negativeAvg - variance 
       // context.strokeStyle = '#FFF';
       context.beginPath();
       context.moveTo( i  , (resampled[j+3] + resampled[j+5] ));
       context.lineTo( i  , (resampled[j+3] - resampled[j+5] ) );
       context.stroke();
   }
   context.restore();
   console.log('done 231 iyi');
}
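
To try this version, it would be wired into the same decodeAudioData callback as above; a minimal sketch, simply replacing the call to displayBuffer:

audioContext.decodeAudioData(req.response,
    function(buffer) {
        currentBuffer = buffer;
        displayBuffer2(buffer);   // draw the bucketed waveform instead of the naive one
    }, onDecodeError);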