TensorFlow: what does "n tensors" mean in a TensorBoard graph?
I am reading the TensorFlow tutorial code mnist_deep.py and saving the graph. The output of the scope fc1 should have shape [-1, 1024], but in the TensorBoard graph the edge coming out of it is labeled "2 tensors".

What does "n tensors" mean in a TensorBoard graph?
# Fully connected layer 1 -- after 2 round of downsampling, our 28x28 image
# is down to 7x7x64 feature maps -- maps this to 1024 features.
with tf.name_scope('fc1'):
  W_fc1 = weight_variable([7 * 7 * 64, 1024])
  b_fc1 = bias_variable([1024])

  h_pool2_flat = tf.reshape(h_pool2, [-1, 7*7*64])
  h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

# Dropout - controls the complexity of the model, prevents co-adaptation of
# features.
with tf.name_scope('dropout'):
  keep_prob = tf.placeholder(tf.float32)
  h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)
It means that the output tensor of Relu is consumed twice inside the Dropout node. If you try expanding that node, you will see the input feeding into two different nodes. I think displaying "n tensors" on the graph as the number of times a tensor is consumed is a bit confusing: in the graph above, the relu op produces only one tensor, not more. I hope the TensorBoard developers consider changing this.
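The counting behaviour can be sketched with a small pure-Python toy (not TensorFlow code). The op names below are illustrative, loosely modelled on the TF 1.x dropout internals where the input is read both to build the random mask's shape and to do the scaling division; they are not taken from the actual graph:

```python
# Hypothetical mini-graph: each op name maps to the list of tensor names it
# reads as inputs. "fc1/Relu:0" is consumed by two different ops inside the
# dropout scope, which is what produces a "2 tensors" edge label.
graph_inputs = {
    "dropout/Shape": ["fc1/Relu:0"],
    "dropout/div": ["fc1/Relu:0", "dropout/keep_prob:0"],
    "dropout/mul": ["dropout/div:0", "dropout/Floor:0"],
}

def edge_label(tensor_name, ops):
    """Count how many op inputs consume `tensor_name`, TensorBoard-style."""
    n = sum(inputs.count(tensor_name) for inputs in ops.values())
    return "%d tensors" % n if n > 1 else "1 tensor"

print(edge_label("fc1/Relu:0", graph_inputs))  # -> 2 tensors
```

So the label reflects fan-out (how many op inputs the edge feeds), not how many distinct tensors the source op produces.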