How to give multi-dimensional inputs to tflite via the C++ API

I am trying out the TFLite C++ API for running a model that I built. I converted the model to tflite format with the following snippet:

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_keras_model_file('model.h5')
tfmodel = converter.convert()
open("model.tflite", "wb").write(tfmodel)

I am following the steps provided, and so far my code looks like this:

// Headers needed for the snippets below (standard TFLite include paths)
#include <iostream>
#include <memory>
#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"
#include "tensorflow/lite/optional_debug_tools.h"

// Load the model
std::unique_ptr<tflite::FlatBufferModel> model = tflite::FlatBufferModel::BuildFromFile("model.tflite");

// Build the interpreter
tflite::ops::builtin::BuiltinOpResolver resolver;
std::unique_ptr<tflite::Interpreter> interpreter;

tflite::InterpreterBuilder builder(*model, resolver);
builder(&interpreter);
interpreter->AllocateTensors();

// Check interpreter state
tflite::PrintInterpreterState(interpreter.get());
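Each of these calls can fail: BuildFromFile returns a null pointer if the file cannot be read, and both the InterpreterBuilder invocation and AllocateTensors() return a TfLiteStatus. A minimal sketch of the same steps with status checks (the error messages are placeholders of my own):

if (model == nullptr) {
    std::cerr << "Failed to load model.tflite" << std::endl;
    return 1;
}
if (builder(&interpreter) != kTfLiteOk || interpreter == nullptr) {
    std::cerr << "Failed to build the interpreter" << std::endl;
    return 1;
}
if (interpreter->AllocateTensors() != kTfLiteOk) {
    std::cerr << "Failed to allocate tensors" << std::endl;
    return 1;
}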
This shows that my input layer has a shape of (1, 2050, 6). To feed the input from C++, I followed the suggested approach, and my input code looks like this:

std::vector<std::vector<double>> tensor;     // I filled this vector, (dims are 2050, 6)

int input = interpreter->inputs()[0];
float* input_data_ptr = interpreter->typed_input_tensor<float>(input);
for (int i = 0; i < 2050; ++i) {
    for (int j = 0; j < 6; j++) {
        *(input_data_ptr) = (float)tensor[i][j];
        input_data_ptr++;
    }
}
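Before copying, it can help to confirm the shape TFLite actually allocated for the input, since the copy loop above assumes exactly 2050 × 6 floats. The dimensions are reachable through interpreter->tensor(); a short sketch using the same input index as above:

TfLiteTensor* input_tensor = interpreter->tensor(input);
// For this model, dims->data should hold {1, 2050, 6}
for (int d = 0; d < input_tensor->dims->size; ++d) {
    std::cout << input_tensor->dims->data[d] << " ";
}
std::cout << std::endl;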
The last layer of the model returns a single float (a probability). I read the output with the following code:

interpreter->Invoke();
int output_idx = interpreter->outputs()[0];
float* output = interpreter->typed_output_tensor<float>(output_idx);
std::cout << "OUTPUT: " << *output << std::endl;
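Note that Invoke() also returns a TfLiteStatus, which the snippet above ignores; a guarded version would look like:

if (interpreter->Invoke() != kTfLiteOk) {
    std::cerr << "Failed to invoke the interpreter" << std::endl;
    return 1;
}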

It turned out to be incorrect API usage.

Changing typed_input_tensor to typed_tensor, and typed_output_tensor to typed_tensor, solved the issue for me.
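Concretely, applying that change to the code from the question would look roughly like this (typed_tensor takes the global tensor id returned by inputs()/outputs(), not the position within those lists):

int input = interpreter->inputs()[0];
float* input_data_ptr = interpreter->typed_tensor<float>(input);

int output_idx = interpreter->outputs()[0];
float* output = interpreter->typed_tensor<float>(output_idx);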

For anyone else with the same problem:

int input_tensor_idx = 0;
int input = interpreter->inputs()[input_tensor_idx];
float* input_data_ptr = interpreter->typed_input_tensor<float>(input_tensor_idx);

and

int input_tensor_idx = 0;
int input = interpreter->inputs()[input_tensor_idx];
float* input_data_ptr = interpreter->typed_tensor<float>(input);

are identical.

This can be verified by looking at the implementation of typed_input_tensor:

  template <class T>
  T* typed_input_tensor(int index) {
    return typed_tensor<T>(inputs()[index]);
  }
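typed_output_tensor is defined analogously over outputs() (in the same header, as far as I can tell), which is why the same substitution works on the output side:

  template <class T>
  T* typed_output_tensor(int index) {
    return typed_tensor<T>(outputs()[index]);
  }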