TensorFlow "detected edge(s) creating cycle(s)" in adapted Keras model training


I am currently trying to do adversarial training with projected gradient descent in TensorFlow 2. The idea is to iteratively compute the gradient of the negative loss with respect to the current iterate, and then perturb the batch along the negative of that gradient.
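To make the iteration concrete, here is a minimal NumPy sketch of a single projected step (this is my own illustration, not part of the failing code; `sigma` and `epsilon` match the constants used further down):

```python
import numpy as np

def pgd_step(x, grad_neg_loss, x0, sigma=10.0, epsilon=0.09):
    """One projected step: move against the gradient of the negative loss
    (i.e. ascend the true loss), then project back into the epsilon-box
    around the original input x0, clipped to the valid pixel range [0, 1]."""
    descent_direction = -grad_neg_loss        # ascent direction on the loss
    x_new = x + sigma * descent_direction
    lower = np.clip(x0 - epsilon, 0.0, 1.0)   # box constraint around x0
    upper = np.clip(x0 + epsilon, 0.0, 1.0)
    return np.clip(x_new, lower, upper)       # projection onto the box

# Example: the step overshoots and is projected back onto the box boundary.
x0 = np.array([0.5])
print(pgd_step(x0, np.array([-1.0]), x0))  # -> [0.59]
```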

I therefore adapted the example and tried to override the `train_step` method. However, this gives me the following error message, which I believe disables graph execution in TensorFlow and makes the code run eagerly instead:

E tensorflow/core/grappler/optimizers/meta_optimizer.cc:808] layout failed: Invalid argument: MutableGraphView::SortTopologically error: detected edge(s) creating cycle(s) {'while/body/_1/while/clip_by_value' -> 'while/body/_1/while/Identity_2', 'while/body/_1/while/AssignAddVariableOp' -> 'while/body/_1/while/Identity_2', 'Func/while/body/_1/output_control_node/_97' -> 'while/next_iteration/_43', 'while/body/_1/while/AssignAddVariableOp_1' -> 'while/body/_1/while/Identity_2'}.
What I don't understand is why this error occurs at all. The error message disappears if there is no convolutional layer, and it also disappears if I set the descent direction inside the loop body to `tf.zeros`. Is my approach wrong? What can I do to make it work? I don't really understand how the grappler optimizer works. Any help is appreciated.
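One diagnostic I am aware of (an assumption on my side, not a confirmed fix) is to switch off grappler's layout optimizer pass before building and fitting the model; if the "detected edge(s) creating cycle(s)" message then disappears, it is that pass that chokes on the while-loop graph:

```python
import tensorflow as tf

# Disable only the layout optimizer pass of grappler; all other graph
# optimizations stay enabled. Must run before the model is traced.
tf.config.optimizer.set_experimental_options({'layout_optimizer': False})
```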

My code is below (I only added a `tf.while_loop()` in `train_step()`):

import tensorflow as tf

class CustomModel(tf.keras.Model):
    def __init__(self):
        super(CustomModel, self).__init__()
        self.sequential = tf.keras.models.Sequential([
            tf.keras.layers.Conv2D(32, 3, activation='relu'),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(128, activation='relu'),
            tf.keras.layers.Dropout(0.2),
            tf.keras.layers.Dense(10)
        ])

    def call(self, inputs):
        return self.sequential(inputs)

    def train_step(self, data):
        x, y = data
        tol = 1e-7
        max_iterations = 5
        epsilon = 0.09
        sigma = 10.0

        x_lower = tf.clip_by_value(tf.subtract(x, epsilon), 0, 1)
        x_upper = tf.clip_by_value(tf.add(x, epsilon), 0, 1)

        def pgd_iterations_cond(xk, gradients_xk):
            """Should be a stopping criterion depending on the gradient."""
            return True

        def pgd_iterations_body(xk, gradients_xk):
            """Calculate gradient w.r.t. current iterate xk and do projected step of size sigma in descent direction."""
            with tf.GradientTape() as tape:
                tape.watch(xk)
                y_pred = self(xk, training=True)
                loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)
                neg_loss = tf.negative(loss)
            gradients_xk_step = tape.gradient(neg_loss, xk)
            descent_direction = tf.negative(gradients_xk_step)

            xk_step = tf.add(xk, tf.scalar_mul(sigma, descent_direction))
            xk_step = tf.clip_by_value(xk_step, x_lower, x_upper)
            return xk_step, gradients_xk_step

        gradients_x = tf.ones_like(x)
        x_perturbed, _ = tf.while_loop(pgd_iterations_cond, pgd_iterations_body, [x, gradients_x],
                                       maximum_iterations=max_iterations)

        with tf.GradientTape() as tape:
            y_pred = self(x_perturbed, training=True)
            loss = self.compiled_loss(y, y_pred, regularization_losses=self.losses)

        trainable_vars = self.trainable_variables
        gradients = tape.gradient(loss, trainable_vars)
        self.optimizer.apply_gradients(zip(gradients, trainable_vars))
        self.compiled_metrics.update_state(y, y_pred)
        return {m.name: m.result() for m in self.metrics}


if __name__ == "__main__":
    mnist = tf.keras.datasets.mnist
    (x_train, y_train), _ = mnist.load_data()
    x_train = x_train / 255.0
    x_train = x_train[..., tf.newaxis].astype('float32')

    model = CustomModel()
    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    model.compile(optimizer='adam',
                  loss=loss_fn,
                  metrics=['accuracy'])
    model.fit(x_train, y_train, epochs=5)