Python: Why does K.gradients(loss, input_img)[0] return "None"? (CNN)

Tags: python, tensorflow, machine-learning, keras, deep-learning

I have a network with the following structure:

Layer (type)                 Output Shape              Param #   
=================================================================
embedding_1 (Embedding)      (None, 7507, 2131)        15999548  
_________________________________________________________________
conv1d_1 (Conv1D)            (None, 7507, 256)         2727936   
_________________________________________________________________
max_pooling1d_1 (MaxPooling1 (None, 2503, 256)         0         
_________________________________________________________________
lstm_1 (LSTM)                (None, 256)               525312    
_________________________________________________________________
dense_1 (Dense)              (None, 4096)              1052672   
_________________________________________________________________
activation_1 (Activation)    (None, 4096)              0         
_________________________________________________________________
dropout_1 (Dropout)          (None, 4096)              0         
_________________________________________________________________
dense_2 (Dense)              (None, 2131)              8730707   
_________________________________________________________________
activation_2 (Activation)    (None, 2131)              0         
=================================================================
Total params: 29,036,175
Trainable params: 29,036,175
Non-trainable params: 0
_________________________________________________________________
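
For reference, here is a minimal Keras definition that reproduces this summary. The kernel size, pooling size, padding, dropout rate, activation types, and the vocabulary size of 7508 are not stated above; they are inferred from the parameter counts and output shapes and should be treated as assumptions.

from keras.models import Sequential
from keras.layers import (Embedding, Conv1D, MaxPooling1D, LSTM,
                          Dense, Activation, Dropout)

model = Sequential()
# 7508 * 2131 = 15,999,548 parameters, matching embedding_1
model.add(Embedding(input_dim=7508, output_dim=2131, input_length=7507))
# 5 * 2131 * 256 + 256 = 2,727,936 parameters, matching conv1d_1
model.add(Conv1D(filters=256, kernel_size=5, padding='same'))
# pool_size=3 with 'same' padding gives ceil(7507 / 3) = 2503 steps
model.add(MaxPooling1D(pool_size=3, padding='same'))
# 4 * (256*256 + 256*256 + 256) = 525,312 parameters, matching lstm_1
model.add(LSTM(256))
model.add(Dense(4096))              # 256*4096 + 4096 = 1,052,672
model.add(Activation('relu'))       # activation type is an assumption
model.add(Dropout(0.5))             # dropout rate is an assumption
model.add(Dense(2131))              # 4096*2131 + 2131 = 8,730,707
model.add(Activation('softmax'))    # activation type is an assumption

model.summary()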

Then I want to compute the gradient of a loss with respect to a given input, as follows:

from keras import backend as K

adv_list = []
# `layer_list` and `f` are defined earlier; this selects neuron f of the
# last Dense layer's output as the loss
loss = layer_list[-2][1].output[:, f]
grads = K.gradients(loss, model.input)[0]
iterate = K.function([model.input], [loss, grads])
However, when execution reaches this line:

grads = K.gradients(loss, model.input)[0]

I find that it only returns a None object, so the program cannot continue past this point. To see why grads is None, I printed some intermediate values:

('loss: ', <tf.Tensor 'strided_slice:0' shape=(?,) dtype=float32>)
('model.input', <tf.Tensor 'embedding_1_input:0' shape=(?, 7507) dtype=float32>)
('K.gradients(loss, model.input', [None])

What is the loss function? The loss is simply one selected neuron in the output layer, taken from layer_list[-2][1].output as shown in the code above.
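
For context, here is a minimal sketch of the same gradient request pointed at the embedding layer's output rather than model.input. The layer names ('dense_2', 'embedding_1') and the index f are assumptions based on the summary above; model.input feeds an embedding lookup (a gather over token indices), which does not propagate gradients back to the input tensor, whereas the embedding layer's float-valued output is differentiable.

from keras import backend as K

# Assumption: `model` is the network above and `f` is the index of the
# target neuron in the last Dense layer, as in the code above.
loss = model.get_layer('dense_2').output[:, f]

# The embedding lookup does not pass gradients back to model.input,
# so request the gradient with respect to the embedding output instead.
emb_out = model.get_layer('embedding_1').output
grads = K.gradients(loss, emb_out)[0]

# The function is still fed with model.input; Keras computes emb_out
# internally and evaluates the gradient from it.
iterate = K.function([model.input], [loss, grads])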