TensorFlow evaluation

Tags: tensorflow, neural-network, tensorflow-estimator

Given the tensor name of a layer, is it possible to evaluate only the input of that specific layer, and more generally, is it possible to save all the intermediate results during a forward pass?


I would appreciate any help you can provide.

The question is a bit unclear, but I think this is what you are after:

Every tensor or operation you create takes an optional name argument. By giving each tensor a name, you can refer to it by that name when evaluating it and pass the desired input via feed_dict, as sketched below.
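For example, a minimal sketch (TensorFlow 1.x; the names x, hidden_in and hidden_out are hypothetical) of feeding a named tensor in the middle of the graph and evaluating only what comes after it:

import tensorflow as tf
import numpy as np

# build a tiny graph with explicitly named tensors
x = tf.placeholder(tf.float32, shape=[2, 2], name="x")
hidden_in = tf.add(x, 1.0, name="hidden_in")      # the "input" of the layer of interest
hidden_out = tf.nn.relu(hidden_in, name="hidden_out")

with tf.Session() as sess:
    # feed hidden_in directly by name and evaluate only hidden_out;
    # nothing upstream of hidden_in (here: x) needs to be fed or run
    out = sess.run("hidden_out:0", feed_dict={"hidden_in:0": np.ones((2, 2))})
    print(out)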

As for saving results, you can save the current state of the model with the tf.train.Saver class.

Below is a simple example in which one script creates and saves a model, and a second script loads that model and accesses its tensors by name.

save_model.py

import tensorflow as tf
import numpy as np

# create model
x = tf.placeholder(tf.float32, shape=[3,3], name="x")
w = tf.Variable(tf.random_normal(dtype=tf.float32, shape=[3,3], mean=0, stddev=0.5), name="w")    
xw = tf.multiply(x,w, name="xw")

# create saver
saver = tf.train.Saver()

# run and save model
x_input = np.ones((3,3))*2 # numpy array of 2s
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    xw_out = sess.run(xw, feed_dict={x: x_input})

    # save model including variables to ./tmp
    save_path = saver.save(sess, "./tmp/model.ckpt")
    print("Model saved with w at: \n {}".format(w.eval()))

>>>Model saved with w at: 
>>> [[ 0.07033788 -0.9353725   0.9999725 ]
>>> [-0.2922624  -1.143613   -1.0453095 ]
>>> [ 0.02661585  0.18821386  0.19582961]]

print(xw_out)

>>>[[ 0.14067577 -1.870745    1.999945  ]
>>>[-0.5845248  -2.287226   -2.090619  ]
>>>[ 0.05323171  0.3764277   0.39165923]]
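Note that saver.save writes several files under ./tmp: model.ckpt.meta holds the graph definition, while model.ckpt.index and model.ckpt.data-* hold the variable values (plus a small checkpoint file). These are what tf.train.import_meta_graph and saver.restore read back in load_model.py below.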
load_model.py

import tensorflow as tf
import numpy as np

# load saved model graph
saver = tf.train.import_meta_graph("./tmp/model.ckpt.meta")

x_input = np.ones((3,3))*2 # numpy array of 2s
with tf.Session() as sess:
    # Restore session from saver
    saver.restore(sess, "./tmp/model.ckpt")
    print("Model restored.")

    # Check the values of the variables
    w = sess.run(sess.graph.get_tensor_by_name("w:0"))
    xw = sess.run(sess.graph.get_tensor_by_name("xw:0"), feed_dict={"x:0": x_input})
    print("Output calculated with w loaded from ./tmp at: \n {}".format(w))

>>>INFO:tensorflow:Restoring parameters from ./tmp/model.ckpt
>>>Model restored.
>>>Output calculated with w loaded from ./tmp at: 
>>> [[ 0.07033788 -0.9353725   0.9999725 ]
>>> [-0.2922624  -1.143613   -1.0453095 ]
>>> [ 0.02661585  0.18821386  0.19582961]]

print(xw)
>>>[[ 0.14067577 -1.870745    1.999945  ]
>>>[-0.5845248  -2.287226   -2.090619  ]
>>>[ 0.05323171  0.3764277   0.39165923]]
Note: the ":0" after the operation name in get_tensor_by_name() specifies that you want the 0th tensor output of that operation.
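As a small illustration of that convention, here is a sketch (assuming the graph from load_model.py has already been imported) that fetches the same node once as an operation and once as its 0th tensor output:

graph = tf.get_default_graph()

# "xw" names the operation itself
xw_op = graph.get_operation_by_name("xw")

# "xw:0" names the first (0th) tensor produced by that operation
xw_tensor = graph.get_tensor_by_name("xw:0")

print(xw_op.outputs[0] is xw_tensor)   # True: both refer to the same tensor object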

This code can be found in a set of Jupyter notebooks. If the graph has already been built, there is also another, simpler way to do it, sketched below.
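A minimal sketch of that simpler route, assuming the tensors x, w and xw from save_model.py are still in the current graph: pass several fetches to a single sess.run call and all the intermediate results of the forward pass come back together, with each tensor evaluated only once.

x_input = np.ones((3, 3)) * 2
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # fetch both the weights and the layer output in one forward pass
    w_val, xw_val = sess.run([w, xw], feed_dict={x: x_input})
    # tensors can also be fetched by name, e.g. sess.run("xw:0", ...)
    print(w_val)
    print(xw_val)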