Python 3.x: How to compute the gradient of a filter's activation in an intermediate layer with respect to the input image in Tensorflow 2.0?

Tags: python-3.x, tensorflow, tensorflow2.0

I am trying to visualize the image that activates a specific filter in an intermediate layer. To do this, I need to compute the gradient of the mean activation of that filter with respect to the input image, and then update the image using gradient ascent.

I have been looking into how to compute this gradient in Tensorflow 2.0. Here is what I tried; I am attempting to get the output of the filter at index 0 in the block3_conv1 layer:

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model

# model is the VGG16 network summarized below
inputs = tf.convert_to_tensor(np.random.random((1, 150, 150, 3)))

activation_model = Model(inputs=model.input,
                         outputs=model.get_layer("block3_conv1").output)

with tf.GradientTape() as tape:
    tape.watch(inputs)
    preds = activation_model.predict(inputs)
    loss = np.mean(preds[:,:,:,0]) # defining the mean of all activations as the loss, in the filter with index 0

grads = tape.gradient(tf.convert_to_tensor(loss), inputs)
But this gives me grads as None. Here is the model summary:

Model: "vgg16"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         [(None, None, None, 3)]   0         
_________________________________________________________________
block1_conv1 (Conv2D)        (None, None, None, 64)    1792      
_________________________________________________________________
block1_conv2 (Conv2D)        (None, None, None, 64)    36928     
_________________________________________________________________
block1_pool (MaxPooling2D)   (None, None, None, 64)    0         
_________________________________________________________________
block2_conv1 (Conv2D)        (None, None, None, 128)   73856     
_________________________________________________________________
block2_conv2 (Conv2D)        (None, None, None, 128)   147584    
_________________________________________________________________
block2_pool (MaxPooling2D)   (None, None, None, 128)   0         
_________________________________________________________________
block3_conv1 (Conv2D)        (None, None, None, 256)   295168    
_________________________________________________________________
block3_conv2 (Conv2D)        (None, None, None, 256)   590080    
_________________________________________________________________
block3_conv3 (Conv2D)        (None, None, None, 256)   590080    
_________________________________________________________________
block3_pool (MaxPooling2D)   (None, None, None, 256)   0         
_________________________________________________________________
block4_conv1 (Conv2D)        (None, None, None, 512)   1180160   
_________________________________________________________________
block4_conv2 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block4_conv3 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block4_pool (MaxPooling2D)   (None, None, None, 512)   0         
_________________________________________________________________
block5_conv1 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block5_conv2 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block5_conv3 (Conv2D)        (None, None, None, 512)   2359808   
_________________________________________________________________
block5_pool (MaxPooling2D)   (None, None, None, 512)   0         
=================================================================
Total params: 14,714,688
Trainable params: 14,714,688
Non-trainable params: 0
_________________________________________________________________
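For reference, a model with this exact summary can presumably be produced from the stock Keras VGG16 without its classifier head (an assumption; the question never shows how model was created):

import tensorflow as tf

# Assumption: ImageNet VGG16 without the dense head yields the summary
# above (fully convolutional, so spatial dims are None; 14,714,688 params).
model = tf.keras.applications.VGG16(weights="imagenet", include_top=False)
model.summary()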

Don't use model.predict. It returns numpy arrays, and you cannot backpropagate through numpy operations. The code below stays in tensor land by using the model's call function:

with tf.GradientTape() as tape:
    tape.watch(inputs)
    preds = activation_model(inputs)
    loss = tf.reduce_mean(preds[:,:,:,0]) # defining the mean of all activations as the loss, in the filter with index 0

grads = tape.gradient(loss, inputs)
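
Putting it together, here is a minimal gradient-ascent sketch along the lines the question describes (assumptions: the VGG16 model shown above; the step size, iteration count, and gradient normalization are illustrative choices, not part of the original answer):

import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Model

# Assumption: model built as shown earlier (VGG16 without the top).
model = tf.keras.applications.VGG16(weights="imagenet", include_top=False)
activation_model = Model(inputs=model.input,
                         outputs=model.get_layer("block3_conv1").output)

# Start from a random image; a tf.Variable is tracked by the tape automatically.
image = tf.Variable(np.random.random((1, 150, 150, 3)), dtype=tf.float32)

for step in range(30):  # illustrative iteration count
    with tf.GradientTape() as tape:
        activations = activation_model(image)           # stay in tensor land
        loss = tf.reduce_mean(activations[:, :, :, 0])  # filter at index 0
    grads = tape.gradient(loss, image)
    # Normalize the gradient so the step size is scale-independent
    # (a common trick in filter visualization; not from the original answer).
    grads = grads / (tf.norm(grads) + 1e-8)
    image.assign_add(0.1 * grads)  # gradient ascent: move *up* the loss

Because image is a tf.Variable here, the tape tracks it automatically, so tape.watch is no longer needed.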

Ah, I see! Thank you so much! :)