Training multiple models defined in the same class fails in TensorFlow 2.0 when using @tf.function


I am using TensorFlow 2.1 to create custom models and custom training loops. My aim is to compare the accuracy of different configurations of my neural network. Specifically, in this case I am comparing the reconstruction error of autoencoders with different latent dimensions. Hence, I train my network for one latent dimension, then compute the test error, and then repeat the procedure for another latent dimension, and so on. With this process I want to create plots like this:

Example plot: [image not shown]

To speed up training, I want to use the @tf.function decorator for the backpropagation part of my training loop. However, when I try to train several different networks, looping over the latent dimensions, I get an error. See below:

ValueError: in converted code:

    <ipython-input-19-78bafad21717>:41 grad  *
        loss_value = tf.losses.mean_squared_error(inputs, model(inputs))
    /tensorflow-2.1.0/python3.6/tensorflow_core/python/keras/engine/base_layer.py:778 __call__
        outputs = call_fn(cast_inputs, *args, **kwargs)
    <ipython-input-19-78bafad21717>:33 call  *
        x_enc = self.encoder(inp)
    /tensorflow-2.1.0/python3.6/tensorflow_core/python/keras/engine/base_layer.py:778 __call__
        outputs = call_fn(cast_inputs, *args, **kwargs)
    <ipython-input-19-78bafad21717>:9 call  *
        x = self.dense1(inp)
    /tensorflow-2.1.0/python3.6/tensorflow_core/python/keras/engine/base_layer.py:748 __call__
        self._maybe_build(inputs)
    /tensorflow-2.1.0/python3.6/tensorflow_core/python/keras/engine/base_layer.py:2116 _maybe_build
        self.build(input_shapes)
    /tensorflow-2.1.0/python3.6/tensorflow_core/python/keras/layers/core.py:1113 build
        trainable=True)
    /tensorflow-2.1.0/python3.6/tensorflow_core/python/keras/engine/base_layer.py:446 add_weight
        caching_device=caching_device)
    /tensorflow-2.1.0/python3.6/tensorflow_core/python/training/tracking/base.py:744 _add_variable_with_custom_getter
        **kwargs_for_getter)
    /tensorflow-2.1.0/python3.6/tensorflow_core/python/keras/engine/base_layer_utils.py:142 make_variable
        shape=variable_shape if variable_shape else None)
    /tensorflow-2.1.0/python3.6/tensorflow_core/python/ops/variables.py:258 __call__
        return cls._variable_v1_call(*args, **kwargs)
    /tensorflow-2.1.0/python3.6/tensorflow_core/python/ops/variables.py:219 _variable_v1_call
        shape=shape)
    /tensorflow-2.1.0/python3.6/tensorflow_core/python/ops/variables.py:65 getter
        return captured_getter(captured_previous, **kwargs)
    /tensorflow-2.1.0/python3.6/tensorflow_core/python/eager/def_function.py:502 invalid_creator_scope
        "tf.function-decorated function tried to create "

    ValueError: tf.function-decorated function tried to create variables on non-first call.

There is an error in the code snippet you provided.

I changed the units of the last dense layer from 6 to 10, so that the decoder output matches the 10-dimensional input that the reconstruction loss compares against:

# Decoder
class build_decoder(tf.keras.Model):
  def __init__(self,):
      super(build_decoder, self).__init__()

      self.dense1 = tf.keras.layers.Dense(32, activation='relu',use_bias=True)
      self.dense2 = tf.keras.layers.Dense(10, activation='relu',use_bias=True)

  def call(self, inp):
      x = self.dense1(inp)
      x = self.dense2(x)
      return x
As for your question about training multiple models:

The error message "ValueError: tf.function-decorated function tried to create variables on non-first call" means that the function decorated with @tf.function is creating new variables on a later call, which is not allowed because the function has already been converted into a graph.
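For illustration, here is a minimal sketch (my own repro, not from the original post) of how this happens when unbuilt Keras models are passed through a single traced function: the first call builds the first model's variables, and the second model then has to create its variables on a non-first call of the same tf.function.

import tensorflow as tf

@tf.function
def grad(model, inputs):
    # model(inputs) builds the layers, i.e. creates variables, on first use
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(inputs - model(inputs)))
    return loss, tape.gradient(loss, model.trainable_variables)

x = tf.random.uniform((2, 10))
m1 = tf.keras.Sequential([tf.keras.layers.Dense(10)])
m2 = tf.keras.Sequential([tf.keras.layers.Dense(10)])

grad(m1, x)  # OK: variable creation is allowed on the first call
grad(m2, x)  # ValueError: ... tried to create variables on non-first call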

I have modified your backpropagation method; I commented out your original code so you can see the difference.

#### Here is the backpropagation with @tf.function decorator ####
# @tf.function
# def grad(model, inputs):
#     with tf.GradientTape() as tape:
#         loss_value = tf.losses.mean_squared_error(inputs, model(inputs))
#     return loss_value, tape.gradient(loss_value, model.trainable_variables)

@tf.function
def MSE(y_true, y_pred):
  return tf.keras.losses.MSE(y_true, y_pred)

def backprop(inputs, model):
  with tf.GradientTape() as tape:
    loss_value = MSE(inputs, model(inputs))
  return loss_value, tape.gradient(loss_value, model.trainable_variables)

def gradient_func(model, inputs):
  return backprop(inputs, model)
The culprit in your original code is the call to model(inputs) as an input to your loss function. When you decorate a function with @tf.function, the decoration is inherited by every function called inside it, which means the loss function was being optimized as well.
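As an aside, a common workaround (an assumption on my part, not part of this answer) is to create all of a model's variables eagerly, before the traced function ever sees the model; a @tf.function-decorated gradient step then no longer needs to create variables inside the trace:

# Hypothetical variant: build the variables with one eager call first
model = Autoencoder(latent_dim=5)
_ = model(tf.zeros((1, 10)))   # all layers are built here, outside any trace
# a @tf.function-decorated training step can now be traced for this model

Note that passing several distinct model objects through one tf.function still triggers a retrace per model, which costs compilation time but no longer raises the error.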

Also, a way to train multiple models without overwriting a single variable is to put them in an array:

model_array = [0]  # dummy entry at index 0 so model_array[latent_dim] lines up with latent_dim
# Looping over the latent dimensions
for latent_dim in range(1,10):
  model_array.append(Autoencoder(latent_dim))  # Creating an instance of my Autoencoder
  optimizer = tf.keras.optimizers.Adam(learning_rate=0.00005) # Defining an optimizer
  train_loss = train(x_train, model=model_array[latent_dim], num_epochs=num_epochs, batch_size=batch_size, optimizer=optimizer) # Training the network
  test_loss.append(tf.reduce_mean(tf.losses.mean_squared_error(x_test, model_array[latent_dim](x_test))).numpy())
This puts the models in an array, which makes them easier to access and debug.

Here is the full modified code:

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

# Encoder
class build_encoder(tf.keras.Model):
  def __init__(self,latent_dim):
      super(build_encoder, self).__init__()

      self.dense1 = tf.keras.layers.Dense(32, activation='relu',use_bias=True)
      self.dense2 = tf.keras.layers.Dense(latent_dim, activation='relu',use_bias=True)

  def call(self, inp):
      x = self.dense1(inp)
      x = self.dense2(x)
      return x

# Decoder
class build_decoder(tf.keras.Model):
  def __init__(self,):
      super(build_decoder, self).__init__()

      self.dense1 = tf.keras.layers.Dense(32, activation='relu',use_bias=True)
      self.dense2 = tf.keras.layers.Dense(10, activation='relu',use_bias=True)

  def call(self, inp):
      x = self.dense1(inp)
      x = self.dense2(x)
      return x

# Full Autoencoder
class Autoencoder(tf.keras.Model):
  def __init__(self,latent_dim=5):
      super(Autoencoder, self).__init__()

      self.encoder = build_encoder(latent_dim)
      self.decoder = build_decoder()

  def call(self, inp):
      x_enc = self.encoder(inp)
      x_dec = self.decoder(x_enc)
      return x_dec

#### Here is the backpropagation with @tf.function decorator ####
# @tf.function
# def grad(model, inputs):
#     with tf.GradientTape() as tape:
#         loss_value = tf.losses.mean_squared_error(inputs, model(inputs))
#     return loss_value, tape.gradient(loss_value, model.trainable_variables)

@tf.function
def MSE(y_true, y_pred):
  return tf.keras.losses.MSE(y_true, y_pred)

def backprop(inputs, model):
  with tf.GradientTape() as tape:
    loss_value = MSE(inputs, model(inputs))
  return loss_value, tape.gradient(loss_value, model.trainable_variables)

def gradient_func(model, inputs):
  return backprop(inputs, model)

# Training loop function
def train(x_train, model, num_epochs, batch_size,optimizer):

    train_loss = []

    for epoch in range(num_epochs):
        x_train = tf.random.shuffle(x_train)  # shuffle returns a new tensor, so reassign; otherwise this line is a no-op
        for i in range(0, len(x_train), batch_size):
            x_inp = x_train[i: i + batch_size]
            loss_value, grads = gradient_func(model, x_inp)
            optimizer.apply_gradients(zip(grads, model.trainable_variables))
        train_loss.append(tf.reduce_mean(tf.losses.mean_squared_error(x_train, model(x_train))).numpy())

        if epoch % 100 == 0:
            print("Epoch: {}, Train loss: {:.9f}".format(epoch, train_loss[epoch]))

    return train_loss

#### Generating simple training and test data
num_train = 10000
num_test = 1000

x_train = np.random.uniform(0,1,(num_train,10)).astype(np.float32)
x_train[:,6:10] = 0

x_test = np.random.uniform(0,1,(num_test,10)).astype(np.float32)
x_test[:,6:10] = 0
###

batch_size = 8
num_epochs = 10000

test_loss = []

model_array = [0]  # dummy entry at index 0 so model_array[latent_dim] lines up with latent_dim
# Looping over the latent dimensions
for latent_dim in range(1,10):
  model_array.append(Autoencoder(latent_dim))  # Creating an instance of my Autoencoder
  optimizer = tf.keras.optimizers.Adam(learning_rate=0.00005) # Defining an optimizer
  train_loss = train(x_train, model=model_array[latent_dim], num_epochs=num_epochs, batch_size=batch_size, optimizer=optimizer) # Training the network
  test_loss.append(tf.reduce_mean(tf.losses.mean_squared_error(x_test, model_array[latent_dim](x_test))).numpy())

plt.figure()
plt.plot(range(1,10),test_loss,linewidth=1.5)
plt.grid(True)
plt.show()
@tf.function signatures are also briefly discussed in the tf documentation.
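For reference, here is a sketch of pinning an input signature so the traced loss is not retraced for every new batch shape (the (None, 10) shape is an assumption matching the data in this post):

@tf.function(input_signature=[tf.TensorSpec(shape=(None, 10), dtype=tf.float32),
                              tf.TensorSpec(shape=(None, 10), dtype=tf.float32)])
def MSE(y_true, y_pred):
  return tf.keras.losses.MSE(y_true, y_pred)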


Feel free to ask questions; I hope this helps.

Thank you very much for the great answer! Good tip about the array and the loop. There seems to be a problem with the backprop function, though: it tries to return tape.gradient(loss_value, model.trainable_variables), but since the function does not have access to the model, it throws an error. I don't immediately see a way around this that doesn't bring back the original problem.

Hi @nmucke, I have fixed that error, which I must have overlooked. I moved the @tf.function so that it optimizes the loss instead. This approach can be applied when you need to optimize a specific function more selectively. Cheers!

Nice, thank you very much! Maybe a stupid question, but doesn't doing it this way lose much of the effectiveness of @tf.function? I mean, the backpropagation is the computationally heavy part, so if that part is not "converted to a graph" we don't get much of a speedup.

Hi @nmucke, I see your point. But if you want to optimize the gradients, I suggest you generate training labels (y), not only training features (x). That way you can avoid depending on model(inputs), because that function call is what prevents you from repeatedly optimizing the model. Once you have done that, you can put the @tf.function decorator on the base function containing the loss and the gradients, provided there are no more errors. Cheers!

Thanks, very helpful answer!
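Regarding the speedup concern raised in the comments: one common pattern (my own sketch, not part of the original answer) is to create a separate @tf.function-decorated training step for each model. Each model then gets its own trace, so variables are only ever created on that trace's first call, and the whole forward/backward pass still runs as a graph:

def make_train_step(model, optimizer):
  # A fresh tf.function per model: each model owns its own trace, so
  # variable creation only happens on that trace's first call.
  @tf.function
  def train_step(inputs):
    with tf.GradientTape() as tape:
      loss_value = tf.reduce_mean(tf.keras.losses.MSE(inputs, model(inputs)))
    grads = tape.gradient(loss_value, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss_value
  return train_step

# Hypothetical usage inside the latent-dimension loop:
# step = make_train_step(model_array[latent_dim], optimizer)
# loss_value = step(x_inp)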