Android Wear voice recorder using the ChannelApi


I'm trying to build a voice recording app for Android Wear. Right now, I'm able to capture audio on the watch, stream it to the phone, and save it to a file. However, the resulting audio file has gaps or clipped sections.

I found these questions, which are related to my problem, but they didn't help me.


Here is my code:

First, on the watch side, I create the channel using the ChannelApi and successfully send the audio captured on the watch to the smartphone.

// Here are the variable values that I used

//44100Hz is currently the only rate that is guaranteed to work on all devices
//but other rates such as 22050, 16000, and 11025 may work on some devices.

private static final int RECORDER_SAMPLE_RATE = 44100; 
private static final int RECORDER_CHANNELS = AudioFormat.CHANNEL_IN_MONO;
private static final int RECORDER_AUDIO_ENCODING = AudioFormat.ENCODING_PCM_16BIT;
int BufferElements2Rec = 1024; 
int BytesPerElement = 2; 

//start the process of recording audio
private void startRecording() {

    recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
            RECORDER_SAMPLE_RATE, RECORDER_CHANNELS,
            RECORDER_AUDIO_ENCODING, BufferElements2Rec * BytesPerElement);

    recorder.startRecording();
    isRecording = true;
    recordingThread = new Thread(new Runnable() {
        public void run() {
            writeAudioDataToPhone();
        }
    }, "AudioRecorder Thread");
    recordingThread.start();
}

private void writeAudioDataToPhone(){

    short sData[] = new short[BufferElements2Rec];
    ChannelApi.OpenChannelResult result = Wearable.ChannelApi.openChannel(googleClient, nodeId, "/mypath").await();
    channel = result.getChannel();

    Channel.GetOutputStreamResult getOutputStreamResult = channel.getOutputStream(googleClient).await();
    OutputStream outputStream = getOutputStreamResult.getOutputStream();

    while (isRecording) {
        // gets the voice output from microphone to byte format

        recorder.read(sData, 0, BufferElements2Rec);
        try {
            byte bData[] = short2byte(sData);
            outputStream.write(bData);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
    try {
        outputStream.close();
    } catch (IOException e) {
        e.printStackTrace();
    }
}
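
For reference, the short2byte helper used above is not shown; a minimal sketch of it, assuming the 16-bit samples are serialized little-endian (low byte first), looks like this:

private byte[] short2byte(short[] sData) {
    // Each 16-bit sample becomes two bytes, low byte first.
    byte[] bytes = new byte[sData.length * 2];
    for (int i = 0; i < sData.length; i++) {
        bytes[i * 2] = (byte) (sData[i] & 0xFF);
        bytes[i * 2 + 1] = (byte) ((sData[i] >> 8) & 0xFF);
    }
    return bytes;
}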
Then, on the smartphone side, I receive the audio data from the channel and write it to a PCM file.

public void onChannelOpened(Channel channel) {
    if (channel.getPath().equals("/mypath")) {
        Channel.GetInputStreamResult getInputStreamResult = channel.getInputStream(mGoogleApiClient).await();
        inputStream = getInputStreamResult.getInputStream();

        writePCMToFile(inputStream);

        MainActivity.this.runOnUiThread(new Runnable() {
            public void run() {
                Toast.makeText(MainActivity.this, "Audio file received!", Toast.LENGTH_SHORT).show();
            }
        });
    }
}

public void writePCMToFile(InputStream inputStream) {
    OutputStream outputStream = null;

    try {
        // write the inputStream to a FileOutputStream
        outputStream = new FileOutputStream(new File("/sdcard/wearRecord.pcm"));

        int read = 0;
        byte[] bytes = new byte[1024];

        while ((read = inputStream.read(bytes)) != -1) {
            outputStream.write(bytes, 0, read);
        }

        System.out.println("Done writing PCM to file!");

    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        if (inputStream != null) {
            try {
                inputStream.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
        if (outputStream != null) {
            try {
                // outputStream.flush();
                outputStream.close();
            } catch (Exception e) {
                e.printStackTrace();
            }

        }
    }
}


What am I doing wrong, or do you have any suggestions for getting a perfect, gapless audio file on the smartphone? Thanks in advance.

I noticed in your code that you are reading everything into a short[] array and then converting it to a byte[] array so the Channel API can send it. Your code also creates a new byte[] array on every iteration of the loop, which creates a lot of work for the garbage collector. In general, you want to avoid allocations inside a loop.

I would allocate one byte[] array at the top and let the AudioRecord class store the audio directly into that byte[] array (just make sure you allocate twice as many bytes as you had shorts), with code like this:

mAudioTemp = new byte[bufferSize];

int result;
while ((result = mAudioRecord.read(mAudioTemp, 0, mAudioTemp.length)) > 0) {
  try {
    mAudioStream.write(mAudioTemp, 0, result);
  } catch (IOException e) {
    Log.e(Const.TAG, "Write to audio channel failed: " + e);
  }
}
I also tested with a 1-second audio buffer, using code like this, and it worked nicely. I'm not sure what the minimum buffer size is before it starts to have problems:

int bufferSize = Math.max(
  AudioTrack.getMinBufferSize(44100, AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT),
  44100 * 2);
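
Putting the two suggestions together, here is a rough sketch of how your wearable-side method could look, keeping the googleClient, nodeId, isRecording and recorder fields from your question and folding the AudioRecord construction in; note that I used AudioRecord.getMinBufferSize here, since the buffer is fed to an AudioRecord rather than an AudioTrack:

private void writeAudioDataToPhone() {
    // One buffer, sized to at least the minimum and up to one second of
    // 16-bit mono audio at 44100 Hz.
    int bufferSize = Math.max(
            AudioRecord.getMinBufferSize(RECORDER_SAMPLE_RATE,
                    RECORDER_CHANNELS, RECORDER_AUDIO_ENCODING),
            RECORDER_SAMPLE_RATE * 2);
    byte[] audioTemp = new byte[bufferSize];

    recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
            RECORDER_SAMPLE_RATE, RECORDER_CHANNELS,
            RECORDER_AUDIO_ENCODING, bufferSize);

    ChannelApi.OpenChannelResult result =
            Wearable.ChannelApi.openChannel(googleClient, nodeId, "/mypath").await();
    channel = result.getChannel();
    OutputStream outputStream =
            channel.getOutputStream(googleClient).await().getOutputStream();

    recorder.startRecording();
    try {
        int read;
        // Read straight into the byte[] buffer and forward exactly what was read:
        // no per-iteration allocation and no short[] -> byte[] conversion.
        while (isRecording
                && (read = recorder.read(audioTemp, 0, audioTemp.length)) > 0) {
            outputStream.write(audioTemp, 0, read);
        }
    } catch (IOException e) {
        Log.e("WearRecorder", "Write to audio channel failed", e);
    } finally {
        recorder.stop();
        recorder.release();
        try {
            outputStream.close();
        } catch (IOException e) {
            Log.e("WearRecorder", "Closing channel stream failed", e);
        }
    }
}

The key point is that the same byte[] is reused for every read, and only the bytes actually returned by read() are forwarded over the channel.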

Have you tried making the buffers bigger? 1024 bytes is really small, and latency on the phone could easily make you drop some audio.

Yes, I have. I also tried ten times the value returned by AudioRecord.getMinBufferSize, but the result was the same.

Thanks for your reply! So, on the handheld side, I should use the same buffer size as on the wearable instead of 1024, right?
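
For reference, on the handheld side a bigger buffer only means changing the size of the array used in the copy loop of writePCMToFile; a minimal sketch, assuming the same one-second sizing as above:

// One second of 16-bit mono audio at 44100 Hz instead of 1024 bytes.
byte[] bytes = new byte[44100 * 2];
int read;
while ((read = inputStream.read(bytes)) != -1) {
    outputStream.write(bytes, 0, read);
}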