Creating multiple weight tensors for each object in multi-object tracking (MOT) with TensorFlow


I am using TensorFlow v1.10.0 and developing a multi-object tracker based on MDNet. I need to assign a separate fully connected layer weight matrix to each detected object, so that each object gets its own embedding during online training. I used tf.map_fn to generate a higher-order weight tensor of shape (n_objects, flat_layer, hidden_units):

'''

'''
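The snippet itself did not survive in the post, but the symptom described below is characteristic of building variables through tf.map_fn: in graph mode, map_fn traces its function only once to construct a while_loop body, so a weight variable referenced inside it exists exactly once and every iteration reads the same values. A minimal sketch of this pitfall, with all names and sizes assumed rather than taken from the asker's code:

import tensorflow as tf

n_objects, flat_layer, hidden_units = 3, 512, 128  # assumed sizes

# A single kernel: map_fn cannot mint a fresh variable per iteration,
# because its function body is traced only once.
kernel = tf.get_variable("fc4/kernel", shape=(flat_layer, hidden_units),
                         initializer=tf.contrib.layers.xavier_initializer())

# Every iteration returns the same tensor, so although W4 has shape
# (n_objects, flat_layer, hidden_units), all of its slices are identical.
W4 = tf.map_fn(lambda i: kernel, tf.range(n_objects), dtype=tf.float32)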

However, during execution, when I run the session for W4 I do get one weight matrix per object, but all of the matrices hold identical values. Any help?


TIA

Here is a workaround: I was able to generate the multiple kernels outside the graph in a for loop and then feed them into the graph:

import tensorflow as tf

def build_branches(fc5, w6):
    # One row of labels per object branch: shape (n_objects, batch).
    y = tf.placeholder(tf.int64, [None, None])

    # Bias shared across branches; the per-object kernels come in via w6.
    b6 = tf.get_variable('fc6/bias', shape=2, initializer=tf.zeros_initializer())

    # tf.matmul stacks the Python list w6 into a (n_objects, 512, 2) tensor
    # and performs a batched matmul, so fc5 is expected to be rank 3:
    # (n_objects, batch, 512) -> fc6: (n_objects, batch, 2).
    fc6 = tf.add(tf.matmul(fc5, w6), b6)

    loss = tf.reduce_mean(
        tf.nn.sparse_softmax_cross_entropy_with_logits(labels=y, logits=fc6))

    # Restrict online training to the fc6 branch variables.
    train_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope="fc6")

    with tf.variable_scope("", reuse=tf.AUTO_REUSE):
        optimizer = tf.train.AdamOptimizer(learning_rate=0.001, name='adam')
        train_op = optimizer.minimize(loss, var_list=train_vars)

        # Everything that must be re-initialized for a new sequence:
        # branch weights, their optimizer slots, and Adam's beta accumulators.
        initialize_vars = train_vars
        initialize_vars += [optimizer.get_slot(var, name)
                            for name in optimizer.get_slot_names()
                            for var in train_vars]
        if isinstance(optimizer, tf.train.AdamOptimizer):
            initialize_vars += optimizer._get_beta_accumulators()

    prob = tf.nn.softmax(fc6)
    pred = tf.argmax(prob, 2)  # (n_objects, batch) predicted class ids
    correct_pred = tf.equal(pred, y)
    accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))

    return prob, train_op, accuracy, loss, pred, initialize_vars, y, fc6

# Create one independent (512, 2) kernel per detected object in a plain
# Python loop, so every tf.get_variable call yields a distinct variable.
w6 = []
for n_obj in range(pos_data.shape[0]):
    w6.append(tf.get_variable("fc6/kernel-" + str(n_obj), shape=(512, 2),
                              initializer=tf.contrib.layers.xavier_initializer()))

print("modeling fc6 branches...")
prob, train_op, accuracy, loss, pred, initialize_vars, y, fc6 = build_branches(fc5, w6)
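For completeness, here is a hedged usage sketch of the workaround, assuming a fresh graph, that fc5 can be modeled as a placeholder of shape (n_objects, batch, 512), and hypothetical sizes throughout. It relies on the implicit stacking noted above: each object's feature batch is multiplied by its own kernel.

import numpy as np
import tensorflow as tf

n_objects, feat = 3, 512  # hypothetical sizes

fc5 = tf.placeholder(tf.float32, [n_objects, None, feat])
w6 = [tf.get_variable("fc6/kernel-" + str(i), shape=(feat, 2),
                      initializer=tf.contrib.layers.xavier_initializer())
      for i in range(n_objects)]

prob, train_op, accuracy, loss, pred, initialize_vars, y, fc6 = build_branches(fc5, w6)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    feats = np.random.randn(n_objects, 8, feat).astype(np.float32)
    labels = np.random.randint(0, 2, size=(n_objects, 8))
    _, acc = sess.run([train_op, accuracy], feed_dict={fc5: feats, y: labels})
    # sess.run(w6) now returns n_objects distinct matrices, unlike the
    # map_fn attempt where every slice held the same values.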
