Android: Google Speech-to-Text does not return results

Tags: android, kotlin, google-speech-api

I want the Google Speech-to-Text API to recognize a short phrase after I press a button, so I came up with the code below. But it never returns any results. I am confused, because the data is there (the buffer gets filled and so on), the microphone works fine and is enabled in the emulator, and the Google Cloud console shows no errors.

Here is my code. The click listener that starts the recording:

val clicker: View.OnClickListener = View.OnClickListener {
        Log.d(TAG, "Starting record thread")
        mAudioRecorder.record(LISTEN_TIME_MILLIS)
    }
    mReadButton.setOnClickListener(clicker)
And here is the broadcast receiver that handles the result and tries to send it to Google:

private val broadCastReceiver = object : BroadcastReceiver() {
    override fun onReceive(contxt: Context?, intent: Intent?) {
        if (intent!!.getBooleanExtra(RECORDING_SUCCESS, false)) {
            val byteArrayExtra = intent.getByteArrayExtra(RECORDING_AUDIO)
            val audioResultByteString: ByteString = ByteString.copyFrom(byteArrayExtra)

            if (audioResultByteString.size() > 0) {
                val audio: RecognitionAudio = RecognitionAudio.newBuilder()
                    .setContent(audioResultByteString).build()

                val resultsList = mSpeechClient.recognize(config, audio).resultsList

                if (resultsList.size > 0) {                       
                    for (result in resultsList) {
                        val resultText = result.alternativesList[0].transcript
                    }
                }
                Log.d(TAG, "- Done recognition. Result Qty: ${resultsList.size}")
            }
        }
    }
}
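
The receiver above relies on mSpeechClient and config, which the question never shows. A minimal sketch of what they might look like, assuming raw LINEAR16 PCM from AudioRecord at 16 kHz and English speech (the actual SAMPLE_RATE_HZ value and language are not given in the question):

import com.google.cloud.speech.v1.RecognitionConfig
import com.google.cloud.speech.v1.SpeechClient

// Hypothetical setup; credential handling is omitted.
val mSpeechClient: SpeechClient = SpeechClient.create()

val config: RecognitionConfig = RecognitionConfig.newBuilder()
    .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16) // raw PCM from AudioRecord
    .setSampleRateHertz(16000)                             // must match SAMPLE_RATE_HZ
    .setLanguageCode("en-US")                              // assumed language
    .build()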
And below is the AudioRecorder class function that does the recording:

fun record(listenTimeMillis: Long) {
    val byteString: ByteString = ByteString.EMPTY
    mAudioRecorder = initAudioRecorder()
    val mBuffer = ByteArray(4 * AudioRecord.getMinBufferSize(SAMPLE_RATE_HZ, CHANNEL, ENCODING))
    mAudioRecorder!!.startRecording()

    Thread {
        Process.setThreadPriority(Process.THREAD_PRIORITY_BACKGROUND)
        Thread.sleep(listenTimeMillis)

        val read = mAudioRecorder!!.read(mBuffer, 0, mBuffer.size, AudioRecord.READ_NON_BLOCKING)
        val intent = Intent(RECORDING_COMPLETED_INTENT)
        try {
            if (read > 0) {
                intent.putExtra(RECORDING_AUDIO, mBuffer)
                intent.putExtra(RECORDING_SUCCESS, true)
            }

            LocalBroadcastManager.getInstance(context).sendBroadcast(intent)
        } catch (e: Exception) {
            Log.e(TAG, e.stackTrace.toString())
        }

        releaseAudioRecorder()
    }.start()
}
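
initAudioRecorder() and releaseAudioRecorder() are not included in the question. For context, a minimal sketch of what initAudioRecorder() might look like, assuming the RECORD_AUDIO permission is already granted and hypothetical constants such as SAMPLE_RATE_HZ = 16000, CHANNEL = AudioFormat.CHANNEL_IN_MONO and ENCODING = AudioFormat.ENCODING_PCM_16BIT:

import android.media.AudioFormat
import android.media.AudioRecord
import android.media.MediaRecorder

// Hypothetical helper; the question does not show it.
private fun initAudioRecorder(): AudioRecord {
    // getMinBufferSize() returns the smallest internal buffer guaranteed to work
    val minBufferSize = AudioRecord.getMinBufferSize(SAMPLE_RATE_HZ, CHANNEL, ENCODING)
    return AudioRecord(
        MediaRecorder.AudioSource.MIC, // requires the RECORD_AUDIO runtime permission
        SAMPLE_RATE_HZ,
        CHANNEL,
        ENCODING,
        minBufferSize
    )
}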

I solved the problem. The cause was that the buffer was too small, so the recognition server was actually getting only about half a second of audio, which it obviously could not recognize.

In

 val mBuffer = ByteArray(4 * AudioRecord.getMinBufferSize(SAMPLE_RATE_HZ, CHANNEL, ENCODING))

I replaced the multiplier 4 with 200. Instead of AudioRecord.READ_NON_BLOCKING I used AudioRecord.READ_BLOCKING, and I read the buffer in a loop, increasing the offset on each iteration. Then it started working:

val startTime = System.currentTimeMillis()
var deltaTime = 0L
var offset = 0
val intent = Intent(RECORDING_COMPLETED_INTENT)
val readChunk = 512

while (deltaTime < listenTimeMillis && offset < mBuffer.size) {
    val read = mAudioRecord!!.read(mBuffer, offset, readChunk, AudioRecord.READ_BLOCKING)

    if (read < 0) {
        intent.putExtra(RECORDING_SUCCESS, false)
        break // if the read failed, end here
    }

    deltaTime = System.currentTimeMillis() - startTime // loop exit condition: listen only for the specified amount of time
    offset += readChunk
}
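
To see why the original buffer held so little audio: 16-bit mono PCM takes sampleRate * 2 bytes per second, so a small multiple of getMinBufferSize() covers only a fraction of a second. A rough, illustrative calculation (the actual SAMPLE_RATE_HZ and minimum buffer size are not given in the question; 16000 Hz and 3584 bytes are assumed here):

// Illustrative only: estimates how many seconds of 16-bit mono PCM fit in a buffer.
fun bufferDurationSeconds(bufferSizeBytes: Int, sampleRateHz: Int): Double {
    val bytesPerSecond = sampleRateHz * 2 // 2 bytes per 16-bit sample, one channel
    return bufferSizeBytes.toDouble() / bytesPerSecond
}

// With an assumed minimum buffer of 3584 bytes at 16 kHz:
//   4 * 3584 =  14336 bytes -> ~0.45 s of audio
// 200 * 3584 = 716800 bytes -> ~22 s of audio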