Python Tensorflow: why must `saver = tf.train.Saver()` be declared after the variables are declared?


Important note: I'm only running this part, i.e., the graph definition, in a notebook environment. I haven't run the actual session yet.

When I run this code:

with graph.as_default(): #took out " , tf.device('/cpu:0')"

  saver = tf.train.Saver()
  valid_examples = np.array(random.sample(range(1, valid_window), valid_size)) #put inside graph to get new words each time

  train_dataset = tf.placeholder(tf.int32, shape=[batch_size, cbow_window*2 ])
  train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
  valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
  valid_datasetSM = tf.constant(valid_examples, dtype=tf.int32)

  embeddings = tf.get_variable( 'embeddings', 
    initializer= tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))

  softmax_weights = tf.get_variable( 'softmax_weights',
    initializer= tf.truncated_normal([vocabulary_size, embedding_size],
                         stddev=1.0 / math.sqrt(embedding_size)))

  softmax_biases = tf.get_variable('softmax_biases', 
    initializer= tf.zeros([vocabulary_size]),  trainable=False )

  embed = tf.nn.embedding_lookup(embeddings, train_dataset) #train data set is

  embed_reshaped = tf.reshape( embed, [batch_size*cbow_window*2, embedding_size] )


  segments= np.arange(batch_size).repeat(cbow_window*2)

  averaged_embeds = tf.segment_mean(embed_reshaped, segments, name=None)

    #return tf.reduce_mean( tf.nn.sampled_softmax_loss(weights=softmax_weights, biases=softmax_biases, inputs=averaged_embeds,
                               #labels=train_labels, num_sampled=num_sampled, num_classes=vocabulary_size))

  loss = tf.reduce_mean(
    tf.nn.sampled_softmax_loss(weights=softmax_weights, biases=softmax_biases, inputs=averaged_embeds,
                               labels=train_labels, num_sampled=num_sampled, num_classes=vocabulary_size))

  norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keepdims=True))
  normSM = tf.sqrt(tf.reduce_sum(tf.square(softmax_weights), 1, keepdims=True))

  normalized_embeddings = embeddings / norm
  normalized_embeddingsSM = softmax_weights / normSM

  valid_embeddings = tf.nn.embedding_lookup(
    normalized_embeddings, valid_dataset)
  valid_embeddingsSM = tf.nn.embedding_lookup(
    normalized_embeddingsSM, valid_datasetSM)

  similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
  similaritySM = tf.matmul(valid_embeddingsSM, tf.transpose(normalized_embeddingsSM))
I get this error:

ValueError: No variables to save

pointing to this line:

saver = tf.train.Saver()
I searched Stack Overflow and found this answer.

So I simply moved that line to the bottom of the graph definition, like this:

with graph.as_default(): #took out " , tf.device('/cpu:0')"

  valid_examples = np.array(random.sample(range(1, valid_window), valid_size)) #put inside graph to get new words each time

  train_dataset = tf.placeholder(tf.int32, shape=[batch_size, cbow_window*2 ])
  train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
  valid_dataset = tf.constant(valid_examples, dtype=tf.int32)
  valid_datasetSM = tf.constant(valid_examples, dtype=tf.int32)

  embeddings = tf.get_variable( 'embeddings', 
    initializer= tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
  softmax_weights = tf.get_variable( 'softmax_weights',
    initializer= tf.truncated_normal([vocabulary_size, embedding_size],
                         stddev=1.0 / math.sqrt(embedding_size)))

  softmax_biases = tf.get_variable('softmax_biases', 
    initializer= tf.zeros([vocabulary_size]),  trainable=False )

  embed = tf.nn.embedding_lookup(embeddings, train_dataset) #train data set is
  embed_reshaped = tf.reshape( embed, [batch_size*cbow_window*2, embedding_size] )

  segments= np.arange(batch_size).repeat(cbow_window*2)

  averaged_embeds = tf.segment_mean(embed_reshaped, segments, name=None)

  loss = tf.reduce_mean(
    tf.nn.sampled_softmax_loss(weights=softmax_weights, biases=softmax_biases, inputs=averaged_embeds,
                               labels=train_labels, num_sampled=num_sampled, num_classes=vocabulary_size))

  norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keepdims=True))
  normSM = tf.sqrt(tf.reduce_sum(tf.square(softmax_weights), 1, keepdims=True))

  normalized_embeddings = embeddings / norm
  normalized_embeddingsSM = softmax_weights / normSM

  valid_embeddings = tf.nn.embedding_lookup(
    normalized_embeddings, valid_dataset)
  valid_embeddingsSM = tf.nn.embedding_lookup(
    normalized_embeddingsSM, valid_datasetSM)

  similarity = tf.matmul(valid_embeddings, tf.transpose(normalized_embeddings))
  similaritySM = tf.matmul(valid_embeddingsSM, tf.transpose(normalized_embeddingsSM))

  saver = tf.train.Saver()
and then there was no error.

Why is that? The graph definition only defines the graph; it doesn't run anything. Maybe it's a form of error prevention?

The `__init__` method has a `var_list` argument, described as follows:

var_list: A list of Variable/SaveableObject, or a dictionary mapping names 
to SaveableObjects. If None, defaults to the list of all saveable objects.

This suggests that when the saver is first created it builds the list of variables to save, which by default contains every variable it can find. If no variables have been created yet, the error makes sense: there is nothing to save.
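One way to see this concretely (my own check, not part of the quoted docs): the default list roughly corresponds to the graph's global-variables collection, which stays empty until a variable has actually been constructed. The answer's own quick examples follow below.

import tensorflow as tf

g = tf.Graph()
with g.as_default():
    print(tf.global_variables())   # [] -- nothing for a Saver to collect yet
    v = tf.Variable(1.0, name='v')
    print(tf.global_variables())   # [<tf.Variable 'v:0' ...>]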

A few quick examples:

import tensorflow as tf
saver = tf.train.Saver()
The snippet above throws the error, and so does the one below (a placeholder is not a saveable variable):

import tensorflow as tf
x = tf.placeholder(dtype=tf.float32)
saver = tf.train.Saver()
But this last one does not:

import tensorflow as tf
x = tf.Variable(0.0)
saver = tf.train.Saver()
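
As a side note, and purely a sketch of my own rather than part of the answer: the `var_list` argument quoted above can also be passed explicitly, in which case the saver tracks only the variables you hand it, regardless of what else the graph contains.

import tensorflow as tf

x = tf.Variable(0.0, name='x')
y = tf.Variable(1.0, name='y')
# Explicit var_list: only x will be saved and restored by this saver.
saver = tf.train.Saver(var_list=[x])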

Not necessarily. `tf.train.Saver` has a `defer_build` argument which, when set to `True`, lets you define variables after the saver has been constructed. You then need to call `build` explicitly:

saver = tf.train.Saver(defer_build=True)
# construct your graph, create variables...
...
saver.build()
graph.finalize()
# go on with training
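
For completeness, a minimal runnable version of that pattern (my own sketch; the variable `w` is purely illustrative):

import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    saver = tf.train.Saver(defer_build=True)   # no variables exist yet, and that's fine
    w = tf.Variable(tf.zeros([10]), name='w')  # variables are created afterwards
    saver.build()                              # the saver collects them only now
graph.finalize()
# go on with training as usual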

Thanks, I think this changes how I think about setting up and running graphs. I thought that, until the graph is run in a session, any declared TensorFlow variable was just a definition. Is `tf.train.Saver()` the exception?

@Santosh, that is still true. All variables are only definitions until you initialize them; initialization is usually the first step before running the graph in a session.
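
To illustrate that last point with a small sketch of my own (not from the discussion above; the variable `x` and the checkpoint path `./model.ckpt` are just placeholders): the variable only gets a value once its initializer runs inside a session, and only then is there state for the saver to write.

import tensorflow as tf

x = tf.Variable(0.0, name='x')
saver = tf.train.Saver()  # fine: a variable already exists in the graph

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())  # variables receive their values here
    saver.save(sess, './model.ckpt')             # now there is state to save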