How do I reuse this TensorFlow layer?

I am writing a model in TensorFlow, coming from PyTorch. Some of the mechanics are quite different and I am stuck at one point. Specifically:
import tensorflow as tf

dense = tf.layers.dense
adam = tf.train.AdamOptimizer

nb_joints = 3
code_size = 8

joints_info = tf.placeholder(tf.float32, shape=[None, nb_joints], name='joints_state')
target_info = tf.placeholder(tf.float32, shape=[None, 2], name='target_pos')
next_joint_info = tf.placeholder(tf.float32, shape=[None, nb_joints], name='next_joints_state')

with tf.variable_scope('Encoder'):
    e1 = dense(joints_info, 32, activation=tf.nn.relu, name='encoding_1')
    code = dense(e1, code_size, activation=tf.nn.relu, name='code')
    d1 = dense(code, code_size, activation=tf.nn.relu, name='decoding_1')
    # the reconstruction must match the input width (nb_joints, not code_size),
    # otherwise the squared difference below cannot be formed
    recon = dense(d1, nb_joints, activation=tf.nn.relu, name='reconstructed')

with tf.variable_scope('EncoderLoss'):
    encoder_loss = tf.reduce_mean(tf.squared_difference(joints_info, recon))
    train_encoder = adam(3e-4).minimize(encoder_loss)

with tf.variable_scope('Task'):
    t1 = dense(code, 32, activation=tf.nn.relu, name='task_code')
    t1_targ = dense(target_info, 32, activation=tf.nn.relu, name='task_target')
    task_joint = tf.concat([t1, t1_targ], 1, name='States_concatenation')
    t2 = dense(task_joint, 128, activation=tf.nn.relu, name='task_joint_transformation')
    task_prediction = dense(t2, code_size, activation=None, name='task_prediction')

with tf.variable_scope('TaskLoss'):
    # here, I want to call the 'code' operation from the encoder,
    # but fed with the next_joint_info placeholder
    task_real = ...
    task_loss = tf.reduce_mean(tf.squared_difference(task_prediction, task_real))
Can someone point me in the right direction? I don't know how to proceed here. Thanks a lot.

What is the error? What is the PyTorch code? We can't really help without that.

Well, within the TaskLoss variable scope I need to reuse the Encoder module to build the real task code, but I don't know how to do that. In PyTorch I would simply take my module and write something like encoder_module(next_joints_info). Sorry if that was unclear.
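One common TF 1.x answer, sketched below under the assumption that the encoder can be wrapped in a function (this is not the asker's exact graph): open the 'Encoder' variable scope with reuse=tf.AUTO_REUSE, so the first call creates the weights and every later call, e.g. with next_joint_info, shares them, much like calling a PyTorch module twice. The example is written against tf.compat.v1 so it also runs in graph mode under TF 2.x.

```python
import tensorflow as tf

# TF 1.x-style graph code; under TF 2.x the same API lives in tf.compat.v1.
tf1 = tf.compat.v1
tf1.disable_eager_execution()

dense = tf1.layers.dense
nb_joints = 3
code_size = 8

joints_info = tf1.placeholder(tf.float32, [None, nb_joints], name='joints_state')
next_joint_info = tf1.placeholder(tf.float32, [None, nb_joints], name='next_joints_state')

def encoder_code(x):
    # AUTO_REUSE creates the Encoder variables on the first call and
    # silently reuses them on every later call, analogous to invoking
    # a PyTorch module a second time.
    with tf1.variable_scope('Encoder', reuse=tf1.AUTO_REUSE):
        e1 = dense(x, 32, activation=tf.nn.relu, name='encoding_1')
        return dense(e1, code_size, activation=tf.nn.relu, name='code')

code = encoder_code(joints_info)           # creates Encoder/encoding_1, Encoder/code
task_real = encoder_code(next_joint_info)  # reuses the exact same weights

# Both calls share one set of variables: 2 layers x (kernel + bias) = 4 tensors.
enc_vars = tf1.trainable_variables(scope='Encoder')
print(len(enc_vars))  # 4
```

If the Encoder scope has already been built without AUTO_REUSE, as in the question, the second call can instead be wrapped in `with tf.variable_scope('Encoder', reuse=True):` with the same layer names; `tf.make_template('Encoder', fn)` is another way to get a shared-weight callable.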