Python TensorFlow: restore() missing 1 required positional argument: 'save_path'

Tags: python, tensorflow, deep-learning

I'm trying to build a neural network in Python, based on the Iris dataset, that predicts the flower type from an array I feed in. My NN looks like this:

    names = ['sepal-length', 'sepal-width', 'petal-length', 'petal-width', 'species']  
    train = pd.read_csv(dataset, names=names, skiprows=1)  
    test = pd.read_csv(test_dataset, names=names, skiprows=1)
    Xtrain = train.drop("species" , axis = 1)
    Xtest = train.drop("species" , axis = 1)

    ytrain = pd.get_dummies(train.species)
    ytest = pd.get_dummies(test.species)
def create_train_model(hidden_nodes, num_iters):

    # Reset the graph
    tf.reset_default_graph()

    # Placeholders for input and output data
    X = tf.placeholder(shape=(120, 4), dtype=tf.float64, name='X')
    y = tf.placeholder(shape=(120, 3), dtype=tf.float64, name='y')

    # Variables for two group of weights between the three layers of the network
    W1 = tf.Variable(np.random.rand(4, hidden_nodes), dtype=tf.float64)
    W2 = tf.Variable(np.random.rand(hidden_nodes, 3), dtype=tf.float64)

    # Create the neural net graph
    A1 = tf.sigmoid(tf.matmul(X, W1))
    y_est = tf.sigmoid(tf.matmul(A1, W2))

    # Define a loss function
    deltas = tf.square(y_est - y)
    loss = tf.reduce_sum(deltas)

    # Define a train operation to minimize the loss
    optimizer = tf.train.GradientDescentOptimizer(0.005)
    train = optimizer.minimize(loss)

    # Initialize variables and run session
    init = tf.global_variables_initializer()
    saver = tf.train.Saver()
    sess = tf.Session()
    sess.run(init)

    # Go through num_iters iterations
    for i in range(num_iters):
        sess.run(train, feed_dict={X: Xtrain, y: ytrain})
        loss_plot[hidden_nodes].append(sess.run(loss, feed_dict={X: Xtrain.as_matrix(), y: ytrain.as_matrix()}))
        weights1 = sess.run(W1)
        weights2 = sess.run(W2)

    print("loss (hidden nodes: %d, iterations: %d): %.2f" % (hidden_nodes, num_iters, loss_plot[hidden_nodes][-1]))
    save_path = saver.save(sess, model_path , hidden_nodes)
    print("Model saved in path: %s" % save_path)
    return weights1, weights2
# Plot the loss function over iterations
num_hidden_nodes = [5, 10, 20]  
loss_plot = {5: [], 10: [], 20: []}  
weights1 = {5: None, 10: None, 20: None}  
weights2 = {5: None, 10: None, 20: None}  
num_iters = 2000

plt.figure(figsize=(12,8))  
for hidden_nodes in num_hidden_nodes:  
    weights1[hidden_nodes], weights2[hidden_nodes] = create_train_model(hidden_nodes, num_iters)
    plt.plot(range(num_iters), loss_plot[hidden_nodes], label="nn: 4-%d-3" % hidden_nodes)

plt.xlabel('Iteration', fontsize=12)  
plt.ylabel('Loss', fontsize=12)
plt.legend(fontsize=12)  
Everything works fine: the model is saved and the training runs smoothly. But when I feed in an array and restore the model, I get an error:

new_samples = np.array([[6.4, 3.2, 4.5, 1.5], [5.8, 3.1, 5.0, 1.7]], dtype=np.float32)
with tf.Session() as sess:
  saver = tf.train.Saver
  saver.restore(sess , model_path , str(hidden_nodes))
  y_est_val = sess.run(y_est, feed_dict={X: new_samples})
After this I get the error

restore() missing 1 required positional argument: 'save_path'

I have no idea what the problem could be. The error points to this line:

saver.restore(sess , model_path , hidden_nodes)

I looked at some tutorials that use the same code, and it works for them.

I'm not sure which tutorials you looked at; it would help if you posted them here. As far as I know, restore() only takes two arguments: the session and save_path. I suspect the error actually comes from

save_path = saver.save(sess, model_path, hidden_nodes)

You can't save a variable like that. You save the model, and once it is restored you can get the variables back with

w1 = graph.get_tensor_by_name("w1:0")
w2 = graph.get_tensor_by_name("w2:0")
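Note that get_tensor_by_name only finds tensors that were given those names when the graph was built; in create_train_model above, W1 and W2 are created without a name argument. A minimal sketch of the kind of naming that makes those lookups work (the names 'w1', 'w2' and 'y_est' are just examples, not something your code already defines):

# Sketch only: name the variables and the output tensor when building the graph,
# otherwise graph.get_tensor_by_name("w1:0") fails because no tensor has that name.
W1 = tf.Variable(np.random.rand(4, hidden_nodes), dtype=tf.float64, name='w1')
W2 = tf.Variable(np.random.rand(hidden_nodes, 3), dtype=tf.float64, name='w2')
A1 = tf.sigmoid(tf.matmul(X, W1))
y_est = tf.identity(tf.sigmoid(tf.matmul(A1, W2)), name='y_est')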
My suggestion is to call save() with explicit keyword arguments; that way it tells you which keyword is wrong:

save_path = saver.save(sess=sess, save_path=model_path)  # not sure what hidden_nodes was supposed to be here
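For reference, the third positional parameter of tf.train.Saver.save() is global_step, which is appended to the checkpoint file name, so the original call was effectively writing checkpoints like model.ckpt-20. A minimal sketch of the fully explicit form:

# The third positional argument of Saver.save() is global_step; it is appended
# to the checkpoint file name (e.g. model.ckpt-20).
save_path = saver.save(sess=sess, save_path=model_path, global_step=hidden_nodes)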

Here is the original snippet on saving and restoring, plus a good tutorial on how to save and restore models and how to use TensorFlow models.

Restoring the model seems to be the problem. First use import_meta_graph to create the graph, then use saver.restore to load the parameters into that graph.

There are other issues as well: when you restore the graph you need get_tensor_by_name to fetch the tensors, so name your tensors appropriately.

Here are the changes you may need to make:

# The test batch size is different from the hard-coded batch size in the original graph, so replace `120` with `None` in the X and y placeholders.
new_samples = np.array([[6.4, 3.2, 4.5, 1.5], [5.8, 3.1, 5.0, 1.7]], dtype=np.float32)

tf.reset_default_graph()
graph = tf.Graph()

with graph.as_default():

    with tf.Session() as sess:

       # Create the network, load the meta file appropriately.
       saver = tf.train.import_meta_graph('{your meta file for the hidden unit}.meta')
       # Load the parameters
       saver.restore(sess , tf.train.latest_checkpoint(model_path))
       # Get the tensors from the graph. 
       X = graph.get_tensor_by_name("X:0")

       # `y_est` is not named in your graph: change to y_est = tf.identity(tf.sigmoid(tf.matmul(A1, W2)), 'y_est')
       y_est = graph.get_tensor_by_name("y_est:0")

       y_est_val = sess.run(y_est, feed_dict={X: new_samples})
Note: you need separate checkpoints that do not overwrite each other, so:

save_path = saver.save(sess, model_dir + str(hidden_nodes) + '/', hidden_nodes)
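A rough sketch of what that per-size layout could look like at save and restore time (the 'model.ckpt' file name prefix is an assumption; model_dir, hidden_nodes, sess and saver are the names from the code above):

import os

# One checkpoint directory per hidden-layer size, so that latest_checkpoint()
# for one size never picks up the weights of another size.
ckpt_dir = model_dir + str(hidden_nodes) + '/'
os.makedirs(ckpt_dir, exist_ok=True)
save_path = saver.save(sess, ckpt_dir + 'model.ckpt', global_step=hidden_nodes)

# ...and when restoring, point latest_checkpoint at the same directory:
saver.restore(sess, tf.train.latest_checkpoint(ckpt_dir))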

I get an error in

saver.restore(sess, tf.train.latest_checkpoint("model-saved/model.ckpt.index"))

It cannot load when 'save_path' is None. That is what the files in my folder look like. When I have to load the graph I load "model-saved/model.ckpt-10.meta", and the checkpoint file lists the main model.

The path is the problem: just pass the directory that the checkpoint file is in.

But then I get the error InvalidArgumentError (see above for traceback): Assign requires shapes of both tensors to match. lhs shape= [10,3] rhs shape= [20,3]

Delete everything in the saved-model directory and run again.

I deleted all the files from the saved-model folder and restarted the script, and I get the same output. Could the problem be that I create three different models, with 10, 5 and 20 nodes?
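For what it's worth, that [10, 3] vs [20, 3] mismatch is exactly what you would see if the checkpoint found by latest_checkpoint was written by a model with a different hidden-layer size than the meta graph you imported; since all three models (5, 10 and 20 nodes) are saved under the same path, each run overwrites the previous one's checkpoint entry. A rough sketch of restoring one specific size, assuming the per-size directories suggested above and the tensor names X:0 / y_est:0:

# Hypothetical sketch: restore only the files written for one hidden size.
# Assumes checkpoints were saved under 'model-saved/<hidden_nodes>/', that X was
# defined with shape (None, 4), and that y_est was given a name, as discussed above.
import numpy as np
import tensorflow as tf

hidden_nodes = 10
ckpt_dir = 'model-saved/' + str(hidden_nodes) + '/'
new_samples = np.array([[6.4, 3.2, 4.5, 1.5], [5.8, 3.1, 5.0, 1.7]], dtype=np.float32)

tf.reset_default_graph()
with tf.Session() as sess:
    ckpt = tf.train.latest_checkpoint(ckpt_dir)         # e.g. model-saved/10/model.ckpt-10
    saver = tf.train.import_meta_graph(ckpt + '.meta')  # rebuilds the 10-node graph
    saver.restore(sess, ckpt)                           # loads the matching weights
    graph = tf.get_default_graph()
    X = graph.get_tensor_by_name("X:0")
    y_est = graph.get_tensor_by_name("y_est:0")
    print(sess.run(y_est, feed_dict={X: new_samples}))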