TFlite interpreter raises a RuntimeError when allocating tensors for a quantized model. Assertion failure involving scale_diff and output_scale

Tags: tensorflow, runtime-error, quantization, tensorflow-lite

Dear developers and NN enthusiasts, I have quantized a model (8-bit post-training quantization), and I am trying to run inference on the resulting model using the tflite Interpreter.

In some cases the interpreter runs properly and I can run inference on the quantized model as expected, with outputs close enough to the original model, so my setup appears to be correct. However, depending on the specific quantized model, I frequently encounter the following runtime error:
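For context, 8-bit post-training quantization represents each tensor with an affine scale/zero-point mapping, which is why the dequantized outputs stay close to the original float model. A minimal pure-Python sketch of that mapping (the scale and zero-point values here are illustrative, not taken from the model above):

```python
# TFLite-style affine int8 quantization:
# real_value ~= scale * (q - zero_point), with q clamped to [-128, 127].

def quantize(x, scale, zero_point):
    """Map a float to its nearest int8 code."""
    q = round(x / scale) + zero_point
    return max(-128, min(127, q))

def dequantize(q, scale, zero_point):
    """Map an int8 code back to an approximate float."""
    return scale * (q - zero_point)

# Illustrative parameters (hypothetical, not from the original post).
scale, zero_point = 0.05, -10
x = 1.23
q = quantize(x, scale, zero_point)
x_hat = dequantize(q, scale, zero_point)
# The round trip is accurate to within half a quantization step.
assert abs(x - x_hat) <= scale / 2
```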

Traceback (most recent call last):
    File ".\quantize_model.py", line 328, in <module>
        interpreter.allocate_tensors()
    File "---path removed---tf-nightly_py37\lib\site-packages\tensorflow\lite\python\interpreter.py", line 243, in allocate_tensors
        return self._interpreter.AllocateTensors()
RuntimeError: tensorflow/lite/kernels/kernel_util.cc:154 scale_diff / output_scale <= 0.02 was not true. Node number 26 (FULLY_CONNECTED) failed to prepare.

I found a workaround that involves manually modifying the quantized tflite model. This is the code that triggers the RuntimeError in question:

// TODO(ahentz): The following conditions must be guaranteed by the training pipeline.
...
const double scale_diff = std::abs(input_product_scale - bias_scale);
const double output_scale = static_cast<double>(output->params.scale);
TF_LITE_ENSURE(context, scale_diff / output_scale <= 0.02);

I have another way to overcome this problem and would like to share it with everyone:
Quantization of activations only supports Relu and Identity. It can fail if the BiasAdd before the Relu activation is missed, so we can wrap the layer with
tf.identity
as an Identity op to bypass this problem. I have tried this and it works in my case, without editing anything in the cpp file.
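A minimal sketch of that wrapping idea (the model and layer shapes here are illustrative, not the poster's actual network), assuming a Keras model where a matmul layer has no BiasAdd before its Relu:

```python
import tensorflow as tf

# Hypothetical model: a Dense layer without bias feeding a Relu,
# mimicking the pattern the answer describes. Wrapping the output in
# tf.identity makes the converter see an explicit Identity op between
# the matmul and the activation.
inputs = tf.keras.Input(shape=(34,))
x = tf.keras.layers.Dense(34, use_bias=False)(inputs)  # no BiasAdd
x = tf.identity(x)  # wrap as Identity before the activation
outputs = tf.keras.layers.ReLU()(x)
model = tf.keras.Model(inputs, outputs)
```

Whether this is needed depends on the converter version; the answer reports it avoids the failing kernel_util.cc check without patching the C++ source.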

I am also seeing this issue. Is there a GitHub issue tracking this problem?
*Node properties ->
type: FullyConnected, location: 26.
*Attributes ->
asymmetric_quantization: false, fused_activation: NONE, keep_num_dims: false, weights_format: DEFAULT.
*Inputs ->
input. name: functional_3/tf_op_layer_Reshape/Reshape;StatefulPartitionedCall/functional_3/tf_op_layer_Reshape/Reshape
type: int8[1,34]
quantization: 0 ≤ 0.007448929361999035 * (q - -128) ≤ 1.8994770050048828
location: 98
weights. name: functional_3/tf_op_layer_MatMul_54/MatMul_54;StatefulPartitionedCall/functional_3/tf_op_layer_MatMul_54/MatMul_54
type: int8[34,34]
quantization: -0.3735211491584778 ≤ 0.002941111335530877 * q ≤ 0.1489555984735489
location: 42
[weights omitted to save space]
bias. name: functional_3/tf_op_layer_AddV2_93/AddV2_3/y;StatefulPartitionedCall/functional_3/tf_op_layer_AddV2_93/AddV2_3/y
type: int32[34]
quantization: 0.0002854724007192999 * q
location: 21
[13,-24,-19,-9,4,59,-18,9,14,-15,13,6,12,5,10,-2,-14,16,11,-1,12,7,-4,16,-8,6,-17,-7,9,-15,7,-29,5,3]
*Outputs ->
output. name: functional_3/tf_op_layer_AddV2/AddV2;StatefulPartitionedCall/functional_3/tf_op_layer_AddV2/AddV2;functional_3/tf_op_layer_Reshape_99/Reshape_99/shape;StatefulPartitionedCall/functional_3/tf_op_layer_Reshape_99/Reshape_99/shape;functional_3/tf_op_layer_Reshape_1/Reshape_1;StatefulPartitionedCall/functional_3/tf_op_layer_Reshape_1/Reshape_1;functional_3/tf_op_layer_AddV2_93/AddV2_3/y;StatefulPartitionedCall/functional_3/tf_op_layer_AddV2_93/AddV2_3/y
type: int8[1,34]
quantization: -0.46506571769714355 ≤ 0.0031077787280082703 * (q - 22) ≤ 0.32741788029670715
location: 99
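Plugging the scales from the FULLY_CONNECTED node dump above into that check shows why node 26 fails to prepare: the bias scale is far from the product of the input and weight scales, so the ratio lands well above the 0.02 tolerance. A short sketch of the arithmetic:

```python
# Scales taken from the node 26 (FULLY_CONNECTED) dump above.
input_scale = 0.007448929361999035
weight_scale = 0.002941111335530877
bias_scale = 0.0002854724007192999
output_scale = 0.0031077787280082703

# kernel_util.cc expects bias_scale ~= input_scale * weight_scale.
input_product_scale = input_scale * weight_scale
scale_diff = abs(input_product_scale - bias_scale)
ratio = scale_diff / output_scale

print(ratio)  # roughly 0.085, well above the 0.02 tolerance
assert ratio > 0.02  # so TF_LITE_ENSURE fires and prepare fails
```

In other words, the converter emitted a bias whose scale does not match the input-times-weight scale that the fully-connected kernel assumes, which is exactly the condition the TODO comment says the training pipeline must guarantee.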