Python: different errors when running the same tensorflow-r1.0 program twice


The first time I run the program, I get this error:

tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value noise_z/0_mnn/bias1

But when I run it again, the error becomes:

tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value noise_z/1_mnn/0_weight_

Note that the variable names are different between the two runs, which makes this annoying to debug. I would like to know why this happens and how I can fix it.

Here is the code related to the error:

import tensorflow as tf

# gaussian_sampler, mu_noise, var_noise, num_noise, dim_noise and
# embedding_size are defined elsewhere in the program.
noise_vecs = []
with tf.variable_scope('noise_z'):
    for noise_idx in range(num_noise):
        noise = gaussian_sampler(mu_noise, var_noise, 1)
        noise_vec = multi_layer_nn(noise, [dim_noise, 64, embedding_size], name=str(noise_idx)+'_')
        noise_vecs.append(noise_vec)

def fully_con_layer(input_, fan_in, fan_out, name, initializer=tf.orthogonal_initializer()):
    # One fully connected layer: weight, bias, sigmoid activation.
    w = tf.get_variable(name+'_weight_', shape=[fan_in, fan_out], initializer=initializer)
    b = tf.get_variable('bias'+name, [fan_out], initializer=tf.random_uniform_initializer())
    return tf.nn.sigmoid(tf.matmul(input_, w)+b)

def multi_layer_nn(input_, num_unit_each_layer, name, initializer=tf.orthogonal_initializer()):
    # Stack fully connected layers; each layer's variables live under
    # the variable scope name+'_'+"mnn".
    x = input_
    num_layer = len(num_unit_each_layer)-1
    for layer in range(num_layer):
        with tf.variable_scope(name+'_'+"mnn"):
            x = fully_con_layer(x, num_unit_each_layer[layer], num_unit_each_layer[layer+1], str(layer))
    return x

If you run tf.global_variables_initializer() and sess.run(init_op) before calling the function (as you say in the comments), the variables defined inside the function will not be initialized. You must run sess.run(init_op) after all variables have been defined.
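
For illustration, here is a minimal sketch of that ordering (the placeholder and variable names are made up for the example, not taken from the question): build the whole graph first, create the init op afterwards, and only then run it.

import tensorflow as tf

# Build the graph first: every tf.get_variable call has to happen
# before the initializer op is created.
x = tf.placeholder(tf.float32, [None, 4])
with tf.variable_scope('noise_z'):
    w = tf.get_variable('weight', shape=[4, 2],
                        initializer=tf.orthogonal_initializer())
    y = tf.matmul(x, w)

# Create and run the init op only after all variables exist,
# and before any op that reads those variables.
init_op = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init_op)
    print(sess.run(y, feed_dict={x: [[1., 2., 3., 4.]]}))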

Do you reset the graph before re-running it? tf.reset_default_graph()

@Tasos Thanks for your reply! But I run this program with python xxx.py, so why would I need to insert that code into the program to run it again?

I assumed you were running it in a notebook. I see that the error is about a different variable each time. What do you get if you use tf.initialize_all_variables()?

@Tasos I run it in a terminal. Actually, before calling the functions above I had already executed init_op = tf.global_variables_initializer() and sess.run(init_op).
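
For completeness, a minimal sketch of the graph reset suggested in the comments; it only matters when the graph-building code is executed repeatedly in the same interpreter (for example in a notebook), not when the script is launched fresh with python xxx.py each time.

import tensorflow as tf

# Clear any variables left over from a previous execution of this code,
# then rebuild the graph and initialize the fresh variables.
tf.reset_default_graph()

with tf.variable_scope('noise_z'):
    w = tf.get_variable('weight', shape=[3, 3],
                        initializer=tf.orthogonal_initializer())

init_op = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init_op)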