
Problem with the Android AudioTrack class: sound is sometimes lost

Tags: android, audio, video, android-ndk, ffmpeg

I found an open-source video player for Android that uses ffmpeg to decode video. I have a problem with the audio: sometimes it plays garbled sound while the video picture displays just fine. The basic idea of the player is that audio and video are decoded in two separate threads and then handed back to a third; the video picture is drawn on a SurfaceView, and the audio is passed as byte arrays to an AudioTrack, which plays them. But sometimes the sound drops out or plays jerkily. Can someone point me to the basic concepts I should start from? Maybe I should change the AudioTrack buffer size or add some flags. Here is the piece of code where the AudioTrack is created:

private AudioTrack prepareAudioTrack(int sampleRateInHz,
        int numberOfChannels) {

    // Map the channel count to a channel mask; if the device rejects the
    // configuration, the catch block below falls back to fewer channels.
    for (;;) {
        int channelConfig;
        if (numberOfChannels == 1) {
            channelConfig = AudioFormat.CHANNEL_OUT_MONO;
        } else if (numberOfChannels == 2) {
            channelConfig = AudioFormat.CHANNEL_OUT_STEREO;
        } else if (numberOfChannels == 3) {
            channelConfig = AudioFormat.CHANNEL_OUT_FRONT_CENTER
                    | AudioFormat.CHANNEL_OUT_FRONT_RIGHT
                    | AudioFormat.CHANNEL_OUT_FRONT_LEFT;
        } else if (numberOfChannels == 4) {
            channelConfig = AudioFormat.CHANNEL_OUT_QUAD;
        } else if (numberOfChannels == 5) {
            channelConfig = AudioFormat.CHANNEL_OUT_QUAD
                    | AudioFormat.CHANNEL_OUT_LOW_FREQUENCY;
        } else if (numberOfChannels == 6) {
            channelConfig = AudioFormat.CHANNEL_OUT_5POINT1;
        } else if (numberOfChannels == 8) {
            channelConfig = AudioFormat.CHANNEL_OUT_7POINT1;
        } else {
            channelConfig = AudioFormat.CHANNEL_OUT_STEREO;
        }
        try {
            Log.d("MyLog","Creating Audio player");
            int minBufferSize = AudioTrack.getMinBufferSize(sampleRateInHz,
                    channelConfig, AudioFormat.ENCODING_PCM_16BIT);
            AudioTrack audioTrack = new AudioTrack(
                    AudioManager.STREAM_MUSIC, sampleRateInHz,
                    channelConfig, AudioFormat.ENCODING_PCM_16BIT,
                    minBufferSize, AudioTrack.MODE_STREAM);
            return audioTrack;
        } catch (IllegalArgumentException e) {
            if (numberOfChannels > 2) {
                numberOfChannels = 2;
            } else if (numberOfChannels > 1) {
                numberOfChannels = 1;
            } else {
                throw e;
            }
        }
    }
}
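
One of the question's own guesses is a good first thing to try. AudioTrack.getMinBufferSize() returns only the smallest buffer that lets the track be created, and in MODE_STREAM a buffer that small underruns easily whenever the decoder thread falls behind, which is heard exactly as dropped or stuttering sound. A minimal sketch of that change, using the same constructor as above (the 4x multiplier is an assumption to tune, not a value from the original player):

int minBufferSize = AudioTrack.getMinBufferSize(sampleRateInHz,
        channelConfig, AudioFormat.ENCODING_PCM_16BIT);
// Give the streaming track several multiples of the minimum as headroom;
// a larger buffer trades latency for resistance to underruns.
AudioTrack audioTrack = new AudioTrack(AudioManager.STREAM_MUSIC,
        sampleRateInHz, channelConfig, AudioFormat.ENCODING_PCM_16BIT,
        4 * minBufferSize, AudioTrack.MODE_STREAM);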
And here is the piece of native code where the audio bytes are written to the AudioTrack:

int player_write_audio(struct DecoderData *decoder_data, JNIEnv *env,
        int64_t pts, uint8_t *data, int data_size, int original_data_size) {
    struct Player *player = decoder_data->player;
    int stream_no = decoder_data->stream_no;
    int err = ERROR_NO_ERROR;
    int ret;
    AVCodecContext *c = player->input_codec_ctxs[stream_no];
    AVStream *stream = player->input_streams[stream_no];
    LOGI(10, "player_write_audio Writing audio frame");

    jbyteArray samples_byte_array = (*env)->NewByteArray(env, data_size);
    if (samples_byte_array == NULL) {
        err = -ERROR_NOT_CREATED_AUDIO_SAMPLE_BYTE_ARRAY;
        goto end;
    }

    if (pts != AV_NOPTS_VALUE) {
        // The packet carries a pts: rescale it to microseconds.
        player->audio_clock = av_rescale_q(pts, stream->time_base,
                AV_TIME_BASE_Q);
        LOGI(9, "player_write_audio - read from pts");
    } else {
        // No pts: advance the audio clock by this buffer's play time,
        // derived from its size (see the standalone helper below).
        int64_t sample_time = original_data_size;
        sample_time *= 1000000ll;
        sample_time /= c->channels;
        sample_time /= c->sample_rate;
        sample_time /= av_get_bytes_per_sample(c->sample_fmt);
        player->audio_clock += sample_time;
        LOGI(9, "player_write_audio - added");
    }
    enum WaitFuncRet wait_ret = player_wait_for_frame(player,
            player->audio_clock + AUDIO_TIME_ADJUST_US, stream_no);
    if (wait_ret == WAIT_FUNC_RET_SKIP) {
        // Was "goto end", which leaked the local reference created above.
        goto free_local_ref;
    }

    LOGI(10, "player_write_audio Writing sample data");

    // Copy the decoded samples into the Java byte array.
    jbyte *jni_samples = (*env)->GetByteArrayElements(env, samples_byte_array,
            NULL);
    memcpy(jni_samples, data, data_size);
    (*env)->ReleaseByteArrayElements(env, samples_byte_array, jni_samples, 0);

    LOGI(10, "player_write_audio playing audio track");
    ret = (*env)->CallIntMethod(env, player->audio_track,
            player->audio_track_write_method, samples_byte_array, 0, data_size);
    jthrowable exc = (*env)->ExceptionOccurred(env);
    if (exc) {
        err = -ERROR_PLAYING_AUDIO;
        LOGE(3, "Could not write audio track: reason in exception");
        // Clear and release the pending exception (was a TODO).
        (*env)->ExceptionClear(env);
        (*env)->DeleteLocalRef(env, exc);
        goto free_local_ref;
    }
    if (ret < 0) {
        err = -ERROR_PLAYING_AUDIO;
        LOGE(3,
                "Could not write audio track: reason: %d look in AudioTrack.write()",
                ret);
        goto free_local_ref;
    }

free_local_ref:
    LOGI(10, "player_write_audio releasing local ref");
    (*env)->DeleteLocalRef(env, samples_byte_array);

end:
    return err;
}
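
For reference, the else branch above advances the audio clock by the play time of the buffer, computed from its size alone. The same arithmetic as a standalone helper (my Java paraphrase for readability, not code from the player):

// Play time in microseconds of a raw PCM buffer:
// bytes / (channels * bytesPerSample) gives samples per channel,
// dividing by sampleRate gives seconds, scaled here to microseconds.
static long bufferDurationUs(long dataSizeBytes, int channels,
        int sampleRate, int bytesPerSample) {
    return dataSizeBytes * 1_000_000L
            / ((long) channels * sampleRate * bytesPerSample);
}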


I would be glad of any help! Thanks a lot.

I had the same problem. The issue is the starting offset of the audio data written to the audio player. In PCM data, every 2 bytes form one audio sample via little-endian conversion. For the PCM samples to play correctly, the data written to the audio player must begin on a sample boundary. If the read buffer's starting point is not the first byte of a sample, the samples are reassembled from the wrong byte pairs and the sound comes out corrupted. In my case I was reading the samples from a file. At some point the starting offset of a read from the file landed on the second byte of a sample, and from then on all the data I read from the file was decoded incorrectly. I solved it by checking the starting offset: if it is odd, I increment it to make it even. Please excuse my poor English.
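
A minimal sketch of the fix described above, assuming 16-bit PCM read from a file; the method name and variable are illustrative, not from the answerer's code:

// For 16-bit PCM each sample occupies 2 bytes, so a read offset must be
// even; starting on an odd byte splits every sample across two reads and
// the little-endian byte pairs are reassembled incorrectly.
static long alignToSampleBoundary(long readOffset) {
    if ((readOffset & 1) != 0) {
        readOffset++; // odd offset: skip one byte to reach the next sample
    }
    return readOffset;
}

For stereo 16-bit data it can make sense to align to the 4-byte frame boundary instead, so that the left and right channels cannot end up swapped.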