
Javascript: How to convert an AudioBuffer to a WAV file?


I am trying to convert an audio buffer into a WAV file that can be downloaded. I have tried two approaches. For the first one, I record all the sound with mediaRecorder and then do the following:

App.model.mediaRecorder.ondataavailable = function(evt) {
  // push each chunk (blobs) in an array
  App.model.chunks.push(evt.data);
};

App.model.mediaRecorder.onstop = function(evt) {
  // Make a blob out of our blobs, and open it.
  var blob = new Blob(App.model.chunks, { 'type' : 'audio/wav; codecs=opus' });
  createDownloadLink(blob);
};
I build an array of chunks (Blobs), then create a new Blob from those chunks. Then, in the function "createDownloadLink()", I create an audio node and a download link:

function createDownloadLink(blob) {

  var url = URL.createObjectURL(blob);
  var li = document.createElement('li');
  var au = document.createElement('audio');
  li.className = "recordedElement";
  var hf = document.createElement('a');
  li.style.textDecoration ="none";
  au.controls = true;
  au.src = url;
  hf.href = url;
  hf.download = 'myrecording' + App.model.countRecordings + ".wav";
  hf.innerHTML = hf.download;
  li.appendChild(au);
  li.appendChild(hf);
  recordingslist.appendChild(li);
}

The audio node is created and I can listen to the sound I recorded, so everything seems to work. But when I download the file, no player can read it. I suppose that is because it is not actually encoded as WAV, so players cannot understand it.
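One way to check that suspicion is to ask the browser which MIME types MediaRecorder actually supports; most browsers do not support 'audio/wav' for recording, so the Blob keeps whatever container the recorder produced (typically WebM or Ogg with Opus), no matter what 'type' label it is given. A quick sketch:

// Quick check (assumes a browser with MediaRecorder support): the 'type' passed
// to the Blob constructor only labels the data, it does not re-encode it.
['audio/wav', 'audio/webm;codecs=opus', 'audio/ogg;codecs=opus'].forEach(function(mime) {
  console.log(mime, MediaRecorder.isTypeSupported(mime));
});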

The second approach is the same as above, except for the "createDownloadLink()" function:

function createDownloadLink(blob) {

  var reader = new FileReader();
  reader.readAsArrayBuffer(blob);
  App.model.sourceBuffer = App.model.audioCtx.createBufferSource();

  reader.onloadend = function() {
    App.model.recordBuffer = reader.result;
    App.model.audioCtx.decodeAudioData(App.model.recordBuffer, function(decodedData) {
      App.model.sourceBuffer.buffer = decodedData;
    });
  };
}

Here I do get the AudioBuffer of the sound I recorded, but I could not find how to convert it into a WAV file...

You could use a variation of it.

Maybe something like this:

var wav = createWavFromBuffer(convertBlock(decodedData), 44100); 
// Then call wav.getBuffer or wav.getWavInt16Array() for the WAV-RIFF formatted data
The other functions are here:

class Wav {
    constructor(opt_params) {
        this._sampleRate = opt_params && opt_params.sampleRate ? opt_params.sampleRate : 44100;
        this._channels = opt_params && opt_params.channels ? opt_params.channels : 2;
        this._eof = true;
        this._bufferNeedle = 0;
        this._buffer = null;
    }
    setBuffer(buffer) {
        this._buffer = this.getWavInt16Array(buffer);
        this._bufferNeedle = 0;
        this._internalBuffer = '';
        this._hasOutputHeader = false;
        this._eof = false;
    }
    getBuffer(len) {
        var rt;
        if( this._bufferNeedle + len >= this._buffer.length ){
            rt = new Int16Array(this._buffer.length - this._bufferNeedle);
            this._eof = true;
        }
        else {
            rt = new Int16Array(len);
        }
        for(var i=0; i<rt.length; i++){
            rt[i] = this._buffer[i+this._bufferNeedle];
        }
        this._bufferNeedle += rt.length;
        return  rt.buffer;
    }
    eof() {
        return this._eof;
    }
    getWavInt16Array(buffer) {

        var intBuffer = new Int16Array(buffer.length + 23), tmp;

        intBuffer[0] = 0x4952; // "RI"
        intBuffer[1] = 0x4646; // "FF"

        intBuffer[2] = (2*buffer.length + 38) & 0x0000ffff; // RIFF size = file size - 8 (46-byte header + 2*N data)
        intBuffer[3] = ((2*buffer.length + 38) & 0xffff0000) >> 16; // RIFF size (high word)

        intBuffer[4] = 0x4157; // "WA"
        intBuffer[5] = 0x4556; // "VE"

        intBuffer[6] = 0x6d66; // "fm"
        intBuffer[7] = 0x2074; // "t "

        intBuffer[8] = 0x0012; // fmt chunksize: 18
        intBuffer[9] = 0x0000; //

        intBuffer[10] = 0x0001; // format tag : 1 
        intBuffer[11] = this._channels; // channels: 2

        intBuffer[12] = this._sampleRate & 0x0000ffff; // sample per sec
        intBuffer[13] = (this._sampleRate & 0xffff0000) >> 16; // sample per sec

        intBuffer[14] = (2*this._channels*this._sampleRate) & 0x0000ffff; // byte per sec
        intBuffer[15] = ((2*this._channels*this._sampleRate) & 0xffff0000) >> 16; // byte per sec

        intBuffer[16] = 2 * this._channels; // block align = channels * bytes per sample
        intBuffer[17] = 0x0010; // bit per sample
        intBuffer[18] = 0x0000; // cb size
        intBuffer[19] = 0x6164; // "da"
        intBuffer[20] = 0x6174; // "ta"
        intBuffer[21] = (2*buffer.length) & 0x0000ffff; // data size[byte]
        intBuffer[22] = ((2*buffer.length) & 0xffff0000) >> 16; // data size[byte]  

        for (var i = 0; i < buffer.length; i++) {
            tmp = buffer[i];
            if (tmp >= 1) {
                intBuffer[i+23] = (1 << 15) - 1;
            }
            else if (tmp <= -1) {
                intBuffer[i+23] = -(1 << 15);
            }
            else {
                intBuffer[i+23] = Math.round(tmp * (1 << 15));
            }
        }

        return intBuffer;
    }
}

// factory
function createWavFromBuffer(buffer, sampleRate) {
  var wav = new Wav({
      sampleRate: sampleRate,
      channels: 1
  });
  wav.setBuffer(buffer);
  return wav;
}


// ArrayBuffer -> Float32Array
var convertBlock = function(buffer) {
    var incomingData = new Uint8Array(buffer);
    var i, l = incomingData.length;
    var outputData = new Float32Array(incomingData.length);
    for (i = 0; i < l; i++) {
        outputData[i] = (incomingData[i] - 128) / 128.0;
    }
    return outputData;
}
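
To connect this back to the second approach in the question, here is a minimal sketch (downloadAsWav is a hypothetical helper, not part of the answer) of how the Wav class could be used inside the decodeAudioData callback, assuming a mono recording. Since getChannelData() already yields Float32 samples in [-1, 1], convertBlock() can be skipped in that case:

// Minimal sketch: turn a decoded AudioBuffer into a downloadable WAV Blob
// using the Wav class above. Assumes a mono recording.
function downloadAsWav(decodedData) {
  var samples = decodedData.getChannelData(0); // Float32 samples in [-1, 1]
  var wav = createWavFromBuffer(samples, decodedData.sampleRate);

  // Read the whole header + sample data back out in one go.
  var riffBytes = wav.getBuffer(samples.length + 23);
  var blob = new Blob([riffBytes], { type: 'audio/wav' });

  var a = document.createElement('a');
  a.href = URL.createObjectURL(blob);
  a.download = 'myrecording.wav';
  a.click();
}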
The code was also taken from here.

Don't use mediaRecorder for this. Instead, use an audioContext, create a MediaStreamSource from it, and pass it to a ScriptProcessorNode. From there you record all the data that passes through this processorNode and combine it with the correct WAV metadata. A library can take care of those last few steps for you: you only need to feed it the MediaStreamSource. A rough sketch of the manual route is shown below.
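
Here is a rough sketch of the capture route described in that comment (my own sketch, not the commenter's code; ScriptProcessorNode is deprecated in favour of AudioWorklet but still widely supported), collecting raw Float32 samples that could then be passed to createWavFromBuffer():

// Rough sketch of the ScriptProcessorNode route. Assumes `stream` comes from
// getUserMedia and `audioCtx` is an AudioContext.
var source = audioCtx.createMediaStreamSource(stream);
var processor = audioCtx.createScriptProcessor(4096, 1, 1); // bufferSize, input/output channels
var recorded = [];

processor.onaudioprocess = function(e) {
  // Copy each block; the underlying buffer is reused between callbacks.
  recorded.push(new Float32Array(e.inputBuffer.getChannelData(0)));
};

source.connect(processor);
processor.connect(audioCtx.destination); // some browsers only run connected nodes

function stopAndCollect() {
  source.disconnect();
  processor.disconnect();

  // Flatten the blocks into one Float32Array of samples in [-1, 1].
  var total = recorded.reduce(function(sum, b) { return sum + b.length; }, 0);
  var samples = new Float32Array(total);
  var offset = 0;
  recorded.forEach(function(b) { samples.set(b, offset); offset += b.length; });

  // These samples can now be handed to createWavFromBuffer(samples, audioCtx.sampleRate).
  return samples;
}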