How to get the output tensor of a specific layer on Android?


I am trying to figure out whether it is possible to get the output from a specific layer using TensorFlow Lite in an Android environment. At the moment, I know that using 'interpreter.run()' we get the "standard" output, but that is not what I am looking for.
Thanks for any suggestion.

@Simo I will write down one way to work around this problem here. How about saving only the part of the model you want to a .tflite file? Let me explain. Instead of doing the following, which saves the whole model:

# WHOLE MODEL
import tensorflow as tf

tflite_model = tf.keras.models.load_model('face_recog.weights.best.hdf5')
converter = tf.lite.TFLiteConverter.from_keras_model(tflite_model)
tflite_save = converter.convert()
open("face_recog.tflite", "wb").write(tflite_save)
You can print the layers of the Keras model:

print([layer.name for layer in keras_model.layers])
Output:
['anchor', 'positive', 'negative', 'model', 'lambda']

print([layer.name for layer in keras_model.get_layer('model').layers])
Output:  
['input_1', 'Conv1_pad', 'Conv1', 'bn_Conv1', 'Conv1_relu', 'expanded_conv_depthwise', 'expanded_conv_depthwise_BN', 'expanded_conv_depthwise_relu', 'expanded_conv_project', 'expanded_conv_project_BN', 'block_1_expand', 'block_1_expand_BN', 'block_1_expand_relu', 'block_1_pad', 'block_1_depthwise', 'block_1_depthwise_BN', 'block_1_depthwise_relu', 'block_1_project', 'block_1_project_BN', 'block_2_expand', 'block_2_expand_BN', 'block_2_expand_relu', 'block_2_depthwise', 'block_2_depthwise_BN', 'block_2_depthwise_relu', 'block_2_project', 'block_2_project_BN', 'block_2_add', 'block_3_expand', 'block_3_expand_BN', 'block_3_expand_relu', 'block_3_pad', 'block_3_depthwise', 'block_3_depthwise_BN', 'block_3_depthwise_relu', 'block_3_project', 'block_3_project_BN', 'block_4_expand', 'block_4_expand_BN', 'block_4_expand_relu', 'block_4_depthwise', 'block_4_depthwise_BN', 'block_4_depthwise_relu', 'block_4_project', 'block_4_project_BN', 'block_4_add', 'block_5_expand', 'block_5_expand_BN', 'block_5_expand_relu', 'block_5_depthwise', 'block_5_depthwise_BN', 'block_5_depthwise_relu', 'block_5_project', 'block_5_project_BN', 'block_5_add', 'block_6_expand', 'block_6_expand_BN', 'block_6_expand_relu', 'block_6_pad', 'block_6_depthwise', 'block_6_depthwise_BN', 'block_6_depthwise_relu', 'block_6_project', 'block_6_project_BN', 'block_7_expand', 'block_7_expand_BN', 'block_7_expand_relu', 'block_7_depthwise', 'block_7_depthwise_BN', 'block_7_depthwise_relu', 'block_7_project', 'block_7_project_BN', 'block_7_add', 'block_8_expand', 'block_8_expand_BN', 'block_8_expand_relu', 'block_8_depthwise', 'block_8_depthwise_BN', 'block_8_depthwise_relu', 'block_8_project', 'block_8_project_BN', 'block_8_add', 'block_9_expand', 'block_9_expand_BN', 'block_9_expand_relu', 'block_9_depthwise', 'block_9_depthwise_BN', 'block_9_depthwise_relu', 'block_9_project', 'block_9_project_BN', 'block_9_add', 'block_10_expand', 'block_10_expand_BN', 'block_10_expand_relu', 'block_10_depthwise', 'block_10_depthwise_BN', 'block_10_depthwise_relu', 'block_10_project', 'block_10_project_BN', 'block_11_expand', 'block_11_expand_BN', 'block_11_expand_relu', 'block_11_depthwise', 'block_11_depthwise_BN', 'block_11_depthwise_relu', 'block_11_project', 'block_11_project_BN', 'block_11_add', 'block_12_expand', 'block_12_expand_BN', 'block_12_expand_relu', 'block_12_depthwise', 'block_12_depthwise_BN', 'block_12_depthwise_relu', 'block_12_project', 'block_12_project_BN', 'block_12_add', 'block_13_expand', 'block_13_expand_BN', 'block_13_expand_relu', 'block_13_pad', 'block_13_depthwise', 'block_13_depthwise_BN', 'block_13_depthwise_relu', 'block_13_project', 'block_13_project_BN', 'block_14_expand', 'block_14_expand_BN', 'block_14_expand_relu', 'block_14_depthwise', 'block_14_depthwise_BN', 'block_14_depthwise_relu', 'block_14_project', 'block_14_project_BN', 'block_14_add', 'block_15_expand', 'block_15_expand_BN', 'block_15_expand_relu', 'block_15_depthwise', 'block_15_depthwise_BN', 'block_15_depthwise_relu', 'block_15_project', 'block_15_project_BN', 'block_15_add', 'block_16_expand', 'block_16_expand_BN', 'block_16_expand_relu', 'block_16_depthwise', 'block_16_depthwise_BN', 'block_16_depthwise_relu', 'block_16_project', 'block_16_project_BN', 'Conv_1', 'Conv_1_bn', 'out_relu', 'global_average_pooling2d', 'predictions', 'dense', 'dense_1']
Then, you can take whichever layer you want from the model and save it to .tflite:

# PART OF MODEL
tflite_model = tf.keras.models.load_model('face_recog.weights.best.hdf5')
converter = tf.lite.TFLiteConverter.from_keras_model(tflite_model.get_layer('model'))
tflite_save = converter.convert()
open("face_recog.tflite", "wb").write(tflite_save)
So, using the code above, the .tflite file will have input tensor = "input_1" and output = "dense_1".

Then, on the Android side, you have to feed the input of that specific layer 'model', and you will get an output of the corresponding shape, just like when printing the output details in Python:

interpreter = tf.lite.Interpreter('face_recog.tflite')
print(interpreter.get_output_details())  # shape and dtype of the new output ('dense_1')
interpreter.get_tensor_details()         # full list of tensors, if needed
The Android part:

// Initialize interpreter
@Throws(IOException::class)
private suspend fun initializeInterpreter(app: Application) = withContext(Dispatchers.IO) {
    // Load the TF Lite model from asset folder and initialize TF Lite Interpreter without NNAPI enabled.
    val assetManager = app.assets
    val model = loadModelFile(assetManager, "face_recog_model_layer.tflite")
    val options = Interpreter.Options()
    options.setUseNNAPI(false)
    interpreter = Interpreter(model, options)
    // Reads type and shape of input and output tensors, respectively.
    val imageTensorIndex = 0
    val imageShape: IntArray =
        interpreter.getInputTensor(imageTensorIndex).shape() 
    Log.i("INPUT_TENSOR_WHOLE", Arrays.toString(imageShape))
    val imageDataType: DataType =
        interpreter.getInputTensor(imageTensorIndex).dataType()
    Log.i("INPUT_DATA_TYPE", imageDataType.toString())
    val probabilityTensorIndex = 0
    val probabilityShape: IntArray =
        interpreter.getOutputTensor(probabilityTensorIndex).shape()
    Log.i("OUTPUT_TENSOR_SHAPE", Arrays.toString(probabilityShape))
    val probabilityDataType: DataType =
        interpreter.getOutputTensor(probabilityTensorIndex).dataType()
    Log.i("OUTPUT_DATA_TYPE", probabilityDataType.toString())
    Log.i(TAG, "Initialized TFLite interpreter.")

}

@Throws(IOException::class)
private fun loadModelFile(assetManager: AssetManager, filename: String): MappedByteBuffer {
    val fileDescriptor = assetManager.openFd(filename)
    val inputStream = FileInputStream(fileDescriptor.fileDescriptor)
    val fileChannel = inputStream.channel
    val startOffset = fileDescriptor.startOffset
    val declaredLength = fileDescriptor.declaredLength
    return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength)
}
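
For completeness, here is a minimal sketch of how the truncated model could then be invoked. The 2-D float output and the helper name runIntermediateInference are my own assumptions for illustration, not part of the original answer; adapt them to the shapes actually logged above.

// Sketch (assumptions: float input matching getInputTensor(0).shape(),
// 2-D float output such as [1, 128]; check the OUTPUT_TENSOR_SHAPE log above).
private fun runIntermediateInference(input: java.nio.ByteBuffer): FloatArray {
    // Size the output container from the tensor the truncated model ends at.
    val outputShape = interpreter.getOutputTensor(0).shape()
    val output = Array(outputShape[0]) { FloatArray(outputShape[1]) }
    // run() fills `output` with the activations of that final layer.
    interpreter.run(input, output)
    return output[0]
}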

I hope this helps someone. Of course, tag me if you need anything else :)

At the moment, I don't think there is a way to do this. The intermediate outputs you are interested in may or may not still be available in memory after an inference run completes, because TFLite aggressively tries to reuse tensor memory from earlier stages. What exactly is your use case?

Could you explain better the part where you say TFLite reuses earlier tensors? I am asking because I need to get the outputs from different layers of the network in order to run some experiments in a mobile environment. I saw that the Interpreter object has a method 'getOutputTensor(int index)' which, in theory (I could not find an API explanation), gives the tensor at the specified index, but I do not really understand how it works...

Sure. To simplify the explanation, suppose the graph has four nodes executed in sequence: input -> A -> B -> C -> D -> output. While node C is executing, A's output has already been consumed by B, so its memory is no longer needed. The memory used to store A's output can therefore be reused to store C's output. Once inference has run to completion, there is no way to get A's output, because it has been overwritten by another node's output. See this comment:

So there is no way to get the output from node C?

Technically, it is possible to get the TfLiteNode object of a particular node and then inspect its outputs. See my recent answer to a similar question: . In your case, you would inspect the outputs. But again, note that the values you are reading may not be the correct ones if they are overwritten later by other layers.

Hi, I think this solution may not fit my problem, because I need to choose which layer to extract the output from. Kindly, Simo, have a look at this page; maybe you will find an answer there.

Thanks, but at the moment I am trying to figure out how to use the C++ API for TFLite, and I really do not understand it. I installed tensorflow with the pip command and Bazel following the instructions, but when I launch the bazel command it gives me errors. Could you help me?

Sorry, Simo, I have not played with Bazel yet. I hope you succeed soon!
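
To make the 'getOutputTensor(int index)' remark above concrete, the sketch below (my addition, not part of the original thread) lists every tensor the model actually declares as an output. Only these are safe to read after inference completes; any other intermediate tensor is subject to the memory reuse described in the comments.

// Sketch: enumerate the tensors the model declares as outputs.
// Tensors not in this list may already be overwritten by TFLite's memory reuse.
fun logDeclaredOutputs(interpreter: Interpreter) {
    for (i in 0 until interpreter.getOutputTensorCount()) {
        val t = interpreter.getOutputTensor(i)
        Log.i("DECLARED_OUTPUT",
            "index=$i name=${t.name()} shape=${t.shape().contentToString()} type=${t.dataType()}")
    }
}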