Tensorflow: using the output of an intermediate layer in the loss function in TF2

As part of learning TF2, I am trying to reproduce OpenPose's training in Tensorflow 2, but to do that I need to use the outputs of the intermediate S and L layers in the loss function:

new_model.compile(optimizer='sgd',
              loss={
                  'dense': 'binary_crossentropy',
                  'conv_5': 'mse',
                  'avg_pool': 'mae'
              })
I have tried the functional API, but I can't seem to get the outputs of the S/L layers so that I can use them in the loss function as needed. I can see how this could be done with subclassing, but that adds complexity and is harder to debug; at this stage of my learning, debuggability and ease of use are a big plus.

Is there any way to implement this type of model with the functional API or a Sequential model?


Yes, both functional and Sequential Keras models support this. You can always pass a dict with layer names as keys and loss functions as values. Here is a snippet of code to demonstrate.

If you are constructing the model from scratch, simply add the layer as one of the model's outputs:

import tensorflow as tf

img = tf.keras.Input([128, 128, 3], name='image')
conv_1 = tf.keras.layers.Conv2D(16, 3, 1, name='conv_1')(img)
conv_2 = tf.keras.layers.Conv2D(16, 3, 1, name='conv_2')(conv_1)
conv_3 = tf.keras.layers.Conv2D(16, 3, 1, name='conv_3')(conv_2)
conv_4 = tf.keras.layers.Conv2D(16, 3, 1, name='conv_4')(conv_3)
conv_5 = tf.keras.layers.Conv2D(16, 3, 1, name='conv_5')(conv_4)
avg_pool = tf.keras.layers.GlobalAvgPool2D(name='avg_pool')(conv_5)
output = tf.keras.layers.Dense(1, activation='sigmoid')(avg_pool)

# Expose conv_5 as a second output alongside the final prediction
model = tf.keras.Model(inputs=[img], outputs=[output, conv_5])
print(model.outputs)
Output:

[<tf.Tensor 'dense/Sigmoid:0' shape=(None, 1) dtype=float32>,
<tf.Tensor 'conv_5/BiasAdd:0' shape=(None, 118, 118, 16) dtype=float32>]
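
For an already constructed model, you can create a new model that appends another layer's output. A minimal sketch of how the new_model used below can be built from the model above (get_layer and model.outputs are standard Keras API; the original answer's exact construction may differ):

# Minimal sketch: rebuild the model with avg_pool's output appended.
# get_layer() looks a layer up by the name given at construction time.
new_model = tf.keras.Model(
    inputs=model.inputs,
    outputs=model.outputs + [model.get_layer('avg_pool').output]
)
print(new_model.outputs)

Output: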

[<tf.Tensor 'dense/Sigmoid:0' shape=(None, 1) dtype=float32>,
<tf.Tensor 'conv_5/BiasAdd:0' shape=(None, 118, 118, 16) dtype=float32>,
<tf.Tensor 'avg_pool/Mean:0' shape=(None, 16) dtype=float32>]
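
Then compile new_model with the per-output loss dict from the question, where each key is an output layer's name:

new_model.compile(optimizer='sgd',
                  loss={
                      'dense': 'binary_crossentropy',  # final classifier output
                      'conv_5': 'mse',                 # intermediate feature map
                      'avg_pool': 'mae'                # pooled features
                  })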

Some dummy data and labels:

images = tf.random.normal([100, 128, 128, 3])
conv_5_labels = tf.random.normal([100, 118, 118, 16])  # matches conv_5's output shape
avg_pool_labels = tf.random.normal([100, 16])
class_labels = tf.random.uniform([100], 0, 2, tf.float32)

# Targets are ordered to match new_model.outputs: (dense, conv_5, avg_pool)
dataset = tf.data.Dataset.from_tensor_slices(
    (images, (class_labels, conv_5_labels, avg_pool_labels))
)
dataset = dataset.batch(4, drop_remainder=True)
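
As an aside beyond the original answer, the labels can also be passed as a dict keyed by output layer name instead of relying on tuple order; a sketch under that assumption:

# Hypothetical alternative: name-keyed targets (supported by Keras for
# named outputs) instead of position-matched tuples.
dataset_by_name = tf.data.Dataset.from_tensor_slices(
    (images, {'dense': class_labels,
              'conv_5': conv_5_labels,
              'avg_pool': avg_pool_labels})
).batch(4, drop_remainder=True)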
Training:

new_model.fit(dataset)
Output:

25/25 [==============================] - 2s 79ms/step - loss: 2.4339 - dense_loss: 0.3904 - conv_5_loss: 1.2367 - avg_pool_loss: 0.8068
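
Note that the total is simply the sum of the per-output losses: 0.3904 + 1.2367 + 0.8068 = 2.4339. If the terms need different importance, compile() also accepts a loss_weights dict with the same keys to rescale each one before summing.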


Thanks, great answer @Srihari. The model is built by me, so I can use either approach. Does the multiple-models / new_model.outputs approach carry much overhead or a performance penalty, or does it all get built into a single graph? The original approach seems simpler, but I would have to return 7 layer outputs, which seems a bit messy as a single output =[. Thanks again!