Feeding a value to a placeholder tensor in the TensorFlow C++ API
I retrained an Inception-v3 model using the TensorFlow Python API and, by modifying tensorflow/examples/image_retraining/retrain.py, added a dropout layer before the classification layer and saved a standalone graph in a .pb file:
def nn_layer(input_tensor, input_dim, output_dim, layer_name, activation_name='activation', act=tf.nn.softmax):
    # Adding a name scope ensures logical grouping of the layers in the graph.
    with tf.name_scope(layer_name):
        # This Variable will hold the state of the weights for the layer
        with tf.name_scope('weights'):
            weights = weight_variable([input_dim, output_dim])
            variable_summaries(weights, layer_name + '/weights')
        with tf.name_scope('dropout'):
            keep_prob = tf.placeholder(tf.float32)
            tf.scalar_summary('dropout_keep_probability', keep_prob)
            drop = tf.nn.dropout(input_tensor, keep_prob)
            variable_summaries(drop, layer_name + '/dropout')
        with tf.name_scope('biases'):
            biases = bias_variable([output_dim])
            variable_summaries(biases, layer_name + '/biases')
        preactivate = tf.matmul(drop, weights) + biases
        tf.histogram_summary(layer_name + '/pre_activations', preactivate)
        with tf.name_scope(activation_name):
            activations = act(preactivate)
            tf.histogram_summary(layer_name + '/activations', activations)
        return preactivate, activations, keep_prob
The Python code that generates predictions is:
softmax_tensor = sess.graph.get_tensor_by_name('final_layer/final_result/Softmax:0')
predictions = sess.run(softmax_tensor, { 'DecodeJpeg/contents:0':image_data, 'final_layer/dropout/Placeholder:0': 1.})
The corresponding C++ code is:
string input_layer = "Mul";
string output_layer = "final_layer/dropout/Placeholder:0";
Status run_status = session->Run({{input_layer, resized_tensor}}, {output_layer}, {}, &outputs);
The C++ code fails with the following error message:
Running model failed: Invalid argument: You must feed a value for placeholder tensor 'final_layer/dropout/Placeholder'
What should I change in the C++ code above to fix this error? In other words, how do I set the placeholder value in the C++ code the way the Python code does? I have been stuck on this for days; any help would be greatly appreciated.

Your C++ code is not the counterpart of your Python code. In Python you have:
softmax_tensor = sess.graph.get_tensor_by_name('final_layer/final_result/Softmax:0')
predictions = sess.run(softmax_tensor, { 'DecodeJpeg/contents:0':image_data, 'final_layer/dropout/Placeholder:0': 1.})
So your feed dict is {'DecodeJpeg/contents:0': image_data, 'final_layer/dropout/Placeholder:0': 1.}, which means: override the value of DecodeJpeg/contents:0 with image_data, and override the value of final_layer/dropout/Placeholder:0 with 1.
In C++, you have:
Status run_status = session->Run({{input_layer, resized_tensor}}, {output_layer}, {}, &outputs);
The first input parameter is the equivalent of your feed_dict, namely:

{{input_layer, resized_tensor}}

which means: override input_layer with resized_tensor.
The first problem is that you are trying to override the node Mul, not the node DecodeJpeg/contents:0 as described above. Moreover, the override for the placeholder is missing entirely.
Also, your C++ code is somewhat confusing, because you named output_layer what is actually the placeholder.
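When the node names are in doubt, it can help to print every node in the exported graph before calling Session::Run. A minimal sketch, assuming graph_def is the tensorflow::GraphDef you loaded from the .pb file:

```cpp
#include <iostream>

#include "tensorflow/core/framework/graph.pb.h"

// Print every node's name and op type so the exact placeholder and
// softmax node names can be confirmed before building the feed.
for (const tensorflow::NodeDef& node : graph_def.node()) {
  std::cout << node.name() << " (" << node.op() << ")" << std::endl;
}
```

The placeholder should show up with op type Placeholder under the final_layer/dropout name scope.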
TL;DR
The corresponding C++ code should be:
Status run_status = session->Run({
{"DecodeJpeg/contents", resized_tensor},
{"final_layer/dropout/Placeholder", 1f}
}, {"final_layer/final_result/Softmax"}, {}, &outputs);
This means:
override the node value of DecodeJpeg/contents with resized_tensor;
override the node value of final_layer/dropout/Placeholder with 1;
fetch the value of the node final_layer/final_result/Softmax.
Put the results into outputs.
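Note that Session::Run expects tensorflow::Tensor values in the feed, so the 1f literal above is shorthand; in practice you build a scalar tensor for the keep probability. A sketch, assuming the standard TensorFlow C++ headers and that session, resized_tensor, and outputs are declared as in your code:

```cpp
#include <string>
#include <utility>
#include <vector>

#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/public/session.h"

// Build a scalar float tensor holding the dropout keep probability.
tensorflow::Tensor keep_prob(tensorflow::DT_FLOAT, tensorflow::TensorShape());
keep_prob.scalar<float>()() = 1.0f;

// Feed both the image node and the placeholder, fetch the softmax node.
std::vector<std::pair<std::string, tensorflow::Tensor>> inputs = {
    {"DecodeJpeg/contents", resized_tensor},
    {"final_layer/dropout/Placeholder", keep_prob},
};
tensorflow::Status run_status = session->Run(
    inputs, {"final_layer/final_result/Softmax"}, {}, &outputs);
```

Feeding 1.0 for the keep probability disables dropout at inference time, which matches what the Python code does.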