Python tf.keras GradientTape: getting gradients with respect to the inputs

TensorFlow version: 2.1

I want to get the gradients with respect to the inputs, not with respect to the trainable weights. I adapted the example from [link] accordingly.


However, this does not work: grads = tape.gradient(loss_value, model.inputs) returns [None]. Is this intended behavior? If so, what is the recommended way to get the gradients with respect to the inputs?
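
For context: a GradientTape only records operations involving trainable tf.Variables or tensors explicitly marked with tape.watch, and asking for a gradient with respect to anything else returns None. A minimal sketch of both cases (plain TensorFlow 2.x, no names from the question assumed):

    import tensorflow as tf

    x = tf.constant([[1.0, 2.0]])

    with tf.GradientTape() as tape:
        y = tf.reduce_sum(x ** 2)    # x is a plain tensor and is not watched
    print(tape.gradient(y, x))       # None

    with tf.GradientTape() as tape:
        tape.watch(x)                # explicitly watch the input tensor
        y = tf.reduce_sum(x ** 2)
    print(tape.gradient(y, x))       # tf.Tensor([[2. 4.]], shape=(1, 2), dtype=float32)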

To get this to work, two things need to be added:

  • Convert the image into a tf.Variable
  • Use tape.watch so the tape tracks gradients with respect to the desired variable

    import tensorflow as tf

    # `input`, `model`, `loss_fun`, and `target` are assumed to be defined
    # elsewhere: the input image, the Keras model, the loss function, and
    # the target labels.
    image = tf.Variable(input)

    # Create the optimizer once, outside the loop, so its internal state
    # (e.g. Adam's moment estimates) persists across iterations.
    optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)

    for iteration in range(400):
        with tf.GradientTape() as tape:
            # Variables are watched automatically; the explicit watch is
            # kept to make the intent obvious.
            tape.watch(image)

            # Run the forward pass. The operations the model applies to
            # its input are recorded on the GradientTape.
            prediction = model(image, training=False)  # Logits for this minibatch

            # Compute the loss value for this minibatch.
            loss_value = loss_fun(target, prediction)

        # Retrieve the gradients of the loss with respect to the input
        # image (not the trainable weights).
        grads = tape.gradient(loss_value, image)

        # Run one step of gradient descent by updating the image to
        # minimize the loss.
        optimizer.apply_gradients([(grads, image)])

        print('Iteration {}'.format(iteration))
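
If the goal is only to inspect the gradient rather than run optimizer updates, the tf.Variable conversion can be skipped: tape.watch on a plain tensor is enough (apply_gradients, by contrast, requires Variables). A sketch under the same assumed model / loss_fun / target / input names:

    image_t = tf.convert_to_tensor(input)    # plain tensor, not a Variable

    with tf.GradientTape() as tape:
        tape.watch(image_t)                  # required: plain tensors are not watched by default
        prediction = model(image_t, training=False)
        loss_value = loss_fun(target, prediction)

    # Gradient of the loss with respect to the raw input tensor.
    grads = tape.gradient(loss_value, image_t)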
    