Python TF: How to solve "ValueError: Variable … weight already exists, disallowed. Did you mean to set reuse=True in VarScope?"


I built an inverse compositional CNN, but it reports the following error:

ValueError: Variable left_src_tgt_warp/ICSTN/icnv1/weight already exists, disallowed. Did you mean to set reuse=True in VarScope? Originally defined at:
I found that using tf.reset_default_graph() can solve this problem, but I don't know where I should add it (a placement sketch follows after the code below):

    for l in range(opt.warpN):
        with tf.variable_scope("ICSTN", reuse=l > 0) as sc:
            end_points_collection = sc.original_name_scope + '_end_points'
            with slim.arg_scope([slim.conv2d, slim.conv2d_transpose],
                                normalizer_fn=slim.batch_norm,
                                weights_regularizer=slim.l2_regularizer(0.05),
                                normalizer_params=batch_norm_params,
                                activation_fn=tf.nn.relu,
                                outputs_collections=end_points_collection):
                imageWarp = inverse_warp(
                    inputImage,
                    depth,
                    pM,
                    intrinsics,
                    intrinsics_inv)
                imageWarpAll.append(imageWarp)
                feat = tf.reshape(imageWarp, [batch_size, H, W, C])
                print('feat shape:', feat.get_shape())
                print('pM_ini:', pM.get_shape())
                with tf.variable_scope("icnv1"):
                    feat = conv2Layer(opt, feat, 4)
                    feat = tf.nn.relu(feat)
                with tf.variable_scope("icnv2"):
                    feat = conv2Layer(opt, feat, 8)
                    feat = tf.nn.relu(feat)
                    feat = tf.nn.max_pool(feat, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="VALID")
                feat = tf.reshape(feat, [opt.batch_size, -1])
                with tf.variable_scope("fc3"):
                    feat = linearLayer(opt, feat, 48)
                    feat = tf.nn.relu(feat)
                with tf.variable_scope("fc4"):
                    feat = linearLayer(opt, feat, 6, final=True)
                dp = tf.reshape(feat, [-1, 6])
                print('dp: ', dp.get_shape())
            dpM = pose_vec2mat(dp)
            pM = tf.matmul(dpM, pM)
    imageWarp = inverse_warp(
        inputImage,
        depth,
        pM,
        intrinsics,
        intrinsics_inv)
    imageWarpAll.append(imageWarp)
    return imageWarpAll, pM
def build_train_graph():
    with tf.name_scope("cnn1"):...
    with tf.name_scope("cnn2"):...
    with tf.name_scope("Inverse Compositional CNN"):...
def train(self, opt):
    with tf.variable_scope(tf.get_variable_scope()):
        for i in range(opt.num_gpus):
            print('gpu:', i)
            with tf.device('/gpu:%d' % i):
                self.build_train_graph(L_img_splits[i], R_img_splits[i], L_cam2pix_splits[i], L_pix2cam_splits[i],
                                       R_cam2pix_splits[i], R_pix2cam_splits[i], L_sca_splits[i], R_sca_splits[i],
                                       reuse_variables)
                self.collect_summaries(i)
                tower_losses.append(self.total_loss)
                reuse_variables = True
                grads = opt_step.compute_gradients(self.total_loss)
                tower_grads.append(grads)
        grads = average_gradients(tower_grads)
        apply_gradient_op = opt_step.apply_gradients(grads, global_step=global_step)
        incr_global_step = tf.assign(global_step, global_step + 1)
        total_loss = tf.reduce_mean(tower_losses)

        tf.summary.scalar('learning_rate', learning_rate, ['model_0'])
        tf.summary.scalar('total_loss', total_loss, ['model_0'])
        summary_op = tf.summary.merge_all('model_0')
        # self.collect_summaries()
        # SESSION
        config = tf.ConfigProto(allow_soft_placement=True)
        config.gpu_options.allow_growth = True
        sess = tf.Session(config=config)

        # SAVER
        summary_writer = tf.summary.FileWriter(
            opt.checkpoint_dir + '/s%.1d_%.3d/' % (opt.seq_length, opt.img_height) + opt.model_name, sess.graph)
        self.saver = tf.train.Saver()
        # COUNT PARAM
        total_num_parameters = 0
        for variable in tf.trainable_variables():
            total_num_parameters += np.array(variable.get_shape().as_list()).prod()
         print("number of trainable parameters: {}".format(total_num_parameters))
        # INIT
        sess.run(tf.global_variables_initializer())
        sess.run(tf.local_variables_initializer())
        coordinator = tf.train.Coordinator()
        threads = tf.train.start_queue_runners(sess=sess, coord=coordinator)
        # LOAD CHECKPOINT IF SET
        if opt.continue_train:
            print("Resume training from previous checkpoint")
            checkpoint = tf.train.latest_checkpoint(
                os.path.join(opt.checkpoint_dir, 's%.1d_%.3d' % (opt.seq_length, opt.img_height), opt.model_name))
            self.saver.restore(sess, checkpoint)
        if opt.re_train:
            sess.run(global_step.assign(0))

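For reference, here is a minimal sketch of where such a reset would go (the placement is an assumption, not something stated in the post): tf.reset_default_graph() must run before any variables are created, for example as the first step of the function that builds the graph, and it must not be called while a tf.Session over the old graph is still in use.

import tensorflow as tf

def build_and_train():
    # Must run before any tf.get_variable call: it discards the old
    # default graph, so a stale ICSTN/icnv1/weight from a previous
    # run can no longer collide with the new one.
    tf.reset_default_graph()
    with tf.variable_scope("ICSTN"):
        with tf.variable_scope("icnv1"):
            w = tf.get_variable("weight", shape=[3, 3, 16, 4])
    # ... build the rest of the graph, then create the tf.Session ...

build_and_train()
build_and_train()  # no ValueError: the first graph was discarded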
This is caused by the for loop in the first part of the code (which, incidentally, appears to be missing its enclosing function definition).

On every iteration, the loop tries to create left_src_tgt_warp/ICSTN/icnv1/weight again (and likewise for icnv2 and the other scopes). The variables need distinct names. One way to achieve that is the following:

def foo(num_layers):
    opt = tf.placeholder(tf.float32, [None, 64])
    for i in range(num_layers):
        with tf.variable_scope("icnv1_layer_{}".format(i)):
            feat = tf.layers.dense(opt, units=1, activation=tf.nn.relu)
Now we have a different name for each layer, depending on its depth: icnv1_layer_0, icnv1_layer_1, and so on.

That is, unless you want the weights to be shared (for example, if it is the same layer applied repeatedly and updated as one). In that case, simply set:

with tf.variable_scope("icnv1", reuse=tf.AUTO_REUSE):
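To make the sharing concrete, here is a minimal sketch (an illustration, not code from the original answer) showing that with reuse=tf.AUTO_REUSE a second call picks up the variable created by the first call instead of raising the ValueError:

import tensorflow as tf

def shared_layer(x):
    # AUTO_REUSE creates icnv1/weight on the first call and silently
    # reuses the same variable on every later call.
    with tf.variable_scope("icnv1", reuse=tf.AUTO_REUSE):
        w = tf.get_variable("weight", shape=[64, 4])
        return tf.matmul(x, w)

x = tf.placeholder(tf.float32, [None, 64])
y1 = shared_layer(x)
y2 = shared_layer(x)  # shares weights; no "already exists" error

print([v.name for v in tf.global_variables()])  # ['icnv1/weight:0']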

I am running my model from a Jupyter notebook, and I recently realized that this error occurred because my model's variables were kept alive in the "outer context". So when I restarted the kernel (thereby clearing all workspace variables) and ran all the cells again, the error disappeared.
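A lighter-weight alternative to restarting the kernel (a suggestion, not part of the original answer) is to build the model inside a fresh tf.Graph each time the cell runs, so leftover variables from a previous execution cannot collide:

import tensorflow as tf

# Re-running this cell builds into a brand-new graph every time,
# which has the same effect on variable state as restarting the kernel.
graph = tf.Graph()
with graph.as_default():
    with tf.variable_scope("ICSTN"):
        with tf.variable_scope("icnv1"):
            w = tf.get_variable("weight", shape=[3, 3, 16, 32])

with tf.Session(graph=graph) as sess:
    sess.run(tf.global_variables_initializer())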

Did you ever solve this? Do you happen to know what "ValueError: Trying to share variable shortcut/weights, but specified shape (1, 1, 32, 64) and found shape (1, 1, 16, 32)" suggests when reuse=tf.AUTO_REUSE is set?

@JuneWang Hi, when you try to reuse the layer, one of your dimensions changes from 16 to 32. Check the inputs of the shared layers and make sure their dimensions are the same.
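For what it's worth, a minimal sketch (my own reconstruction, using the shapes from the comment) of how that second ValueError arises: reuse=tf.AUTO_REUSE reuses the existing variable, so a later call that implies a different shape fails:

import tensorflow as tf

def shortcut_weights(in_ch, out_ch):
    with tf.variable_scope("shortcut", reuse=tf.AUTO_REUSE):
        # The shape is fixed when the variable is first created; a
        # later call with different channel counts cannot reuse it.
        return tf.get_variable("weights", shape=[1, 1, in_ch, out_ch])

w1 = shortcut_weights(16, 32)  # creates shortcut/weights, shape (1, 1, 16, 32)
w2 = shortcut_weights(32, 64)  # ValueError: Trying to share variable
                               # shortcut/weights, but specified shape
                               # (1, 1, 32, 64) and found shape (1, 1, 16, 32)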