Python: "Gradients do not exist for variables"?


How can I fix the error shown below? The input and output shapes should consist of 1 or -1.

Here is my code:

# Assumed imports (N_c and the custom Lambda helpers are defined elsewhere)
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Lambda, BatchNormalization
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam

#Data Input
main_input=Input(shape=(2*N_c),name='main_input')
encoding_x=Dense(2*N_c,activation='relu',name='input_layer')(main_input)


#Channel Input
# channel_input=Input(shape=(4,),dtype='complex64',name='channel_input')
channel_input = Lambda(set_channel2)(encoding_x)

padded_channel = Lambda(z_padding,name='ppading_layerddddd')(channel_input)
ffted_channel = Lambda(ffting,name='ffting_channel')(padded_channel)
realed_ffted_channel = Lambda(complex_to_real,name='c_to_r')(ffted_channel)
realed_ffted_channel1 = Dense(2*N_c,activation='relu',name='channel_layer')(realed_ffted_channel)


#Precoding Encoder
precoded_data = Lambda(lambda x: tf.concat([x[0],x[1]],1),name='precoding_layer')([encoding_x,realed_ffted_channel1])
# encoder_data = Dense(2*N_c,activation='relu',name='prencoder_layer1')(precoded_dataasdasd)
# encoder_data_1 = Dense(4*N_c,activation='relu',name='prencoder_layer2')(encoder_data)
encoder_data1 = Dense(4*N_c,activation='relu',name='prencoder_layer3')(precoded_data)
encoder_data2= Dense(2*N_c,activation='linear',name='prencoder_layer4')(encoder_data1)
encoder_data3 = Lambda(real_to_complex,name='r_to_c')(encoder_data2)
encoder_data4=BatchNormalization(axis=-1, momentum=0.99, epsilon=0.001, center=True, scale=True, beta_initializer='zeros', gamma_initializer='ones', moving_mean_initializer='zeros', moving_variance_initializer='ones', beta_regularizer=None, gamma_regularizer=None, beta_constraint=None, gamma_constraint=None)(encoder_data3)
iffted_encoded_data = Lambda(iffting,name='iffting_layer')(encoder_data4)


#Channel
conved_data = Lambda(lambda x:conv_channel(x[0],x[1]),name='convolution_layer')([iffted_encoded_data,channel_input])
noised_data = Lambda(noising,name='adding_noise_layer')(conved_data)


#Decoder
ffted_data = Lambda(ffting, name='ffting')(noised_data)
ffted_data2 = Lambda(lambda x : tf.reshape(tf.reshape(x,(67,))[0:64:],(1,64)),name='removing_delay')(ffted_data)

#single_tap
single_tap_equalizer= Lambda(lambda x:tf.math.divide(x[0],x[1]),name='dividing')([ffted_data2,ffted_channel])

realed_received_data = Lambda(complex_to_real,name='c_to_r2')(ffted_data2)
decoder_y=Dense(2*N_c,activation='relu',name='decoder_y')(realed_received_data)
decoder_y1=Dense(4*N_c,activation='relu',name='decoder_y1')(decoder_y)
decoder_y2=Dense(4*N_c,activation='relu',name='decoder_y2')(decoder_y1)
main_output=Dense(2*N_c,activation='linear',name='main_output')(decoder_y1)
autoencoder = Model(inputs=[main_input],outputs=[main_output])
autoencoder.compile(optimizer=Adam(lr=0.001),loss=tf.keras.losses.KLDivergence())
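As context for the warning below, here is a much smaller toy model (my own sketch, not the code above) that produces the same "Gradients do not exist" warning: any Lambda whose op blocks gradients cuts every upstream layer out of backpropagation.

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Lambda
from tensorflow.keras.models import Model

# Toy reproduction: a Lambda wrapping an op with no gradient severs every
# layer before it from backpropagation, which is exactly what the
# "Gradients do not exist for variables" warning reports.
inp = Input(shape=(4,))
upstream = Dense(4, name='upstream')(inp)      # receives no gradient
barrier = Lambda(tf.stop_gradient)(upstream)   # gradient flow stops here
out = Dense(2, name='downstream')(barrier)     # still trainable
model = Model(inp, out)

with tf.GradientTape() as tape:
    loss = tf.reduce_sum(model(tf.ones((1, 4))))
grads = tape.gradient(loss, model.trainable_variables)
for var, grad in zip(model.trainable_variables, grads):
    print(var.name, 'gradient exists:', grad is not None)
```

The `upstream` kernel and bias come back with `None` gradients, while the `downstream` ones train normally.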

Model summary:

Model: "model_22"
__________________________________________________________________________________________________
Layer (type)                    Output Shape         Param #     Connected to                     
==================================================================================================
main_input (InputLayer)         [(None, 128)]        0                                            
__________________________________________________________________________________________________
input_layer (Dense)             (None, 128)          16512       main_input[0][0]                 
__________________________________________________________________________________________________
lambda_11 (Lambda)              (1, 4)               0           input_layer[0][0]                
__________________________________________________________________________________________________
ppading_layerddddd (Lambda)     (1, 64)              0           lambda_11[0][0]                  
__________________________________________________________________________________________________
ffting_channel (Lambda)         (1, 64)              0           ppading_layerddddd[0][0]         
__________________________________________________________________________________________________
c_to_r (Lambda)                 (1, 128)             0           ffting_channel[0][0]             
__________________________________________________________________________________________________
channel_layer (Dense)           (1, 128)             16512       c_to_r[0][0]                     
__________________________________________________________________________________________________
precoding_layer (Lambda)        (1, 256)             0           input_layer[0][0]                
                                                                 channel_layer[0][0]              
__________________________________________________________________________________________________
prencoder_layer3 (Dense)        (1, 256)             65792       precoding_layer[0][0]            
__________________________________________________________________________________________________
prencoder_layer4 (Dense)        (1, 128)             32896       prencoder_layer3[0][0]           
__________________________________________________________________________________________________
r_to_c (Lambda)                 (1, 64)              0           prencoder_layer4[0][0]           
__________________________________________________________________________________________________
batch_normalization_21 (BatchNo (1, 64)              256         r_to_c[0][0]                     
__________________________________________________________________________________________________
iffting_layer (Lambda)          (1, 64)              0           batch_normalization_21[0][0]     
__________________________________________________________________________________________________
convolution_layer (Lambda)      (1, 67)              0           iffting_layer[0][0]              
                                                                 lambda_11[0][0]                  
__________________________________________________________________________________________________
adding_noise_layer (Lambda)     (1, 67)              0           convolution_layer[0][0]          
__________________________________________________________________________________________________
ffting (Lambda)                 (1, 67)              0           adding_noise_layer[0][0]         
__________________________________________________________________________________________________
removing_delay (Lambda)         (1, 64)              0           ffting[0][0]                     
__________________________________________________________________________________________________
c_to_r2 (Lambda)                (1, 128)             0           removing_delay[0][0]             
__________________________________________________________________________________________________
decoder_y (Dense)               (1, 128)             16512       c_to_r2[0][0]                    
__________________________________________________________________________________________________
decoder_y1 (Dense)              (1, 256)             33024       decoder_y[0][0]                  
__________________________________________________________________________________________________
main_output (Dense)             (1, 128)             32896       decoder_y1[0][0]                 
==================================================================================================
Total params: 214,400
Trainable params: 214,272
Non-trainable params: 128
The error is:

Train on 10000 samples
Epoch 1/10
WARNING:tensorflow:Gradients do not exist for variables ['input_layer_31/kernel:0', 'input_layer_31/bias:0', 'channel_layer_21/kernel:0', 'channel_layer_21/bias:0', 'prencoder_layer3_21/kernel:0', 'prencoder_layer3_21/bias:0', 'prencoder_layer4_21/kernel:0', 'prencoder_layer4_21/bias:0', 'batch_normalization_21/gamma:0', 'batch_normalization_21/beta:0'] when minimizing the loss.
WARNING:tensorflow:Gradients do not exist for variables ['input_layer_31/kernel:0', 'input_layer_31/bias:0', 'channel_layer_21/kernel:0', 'channel_layer_21/bias:0', 'prencoder_layer3_21/kernel:0', 'prencoder_layer3_21/bias:0', 'prencoder_layer4_21/kernel:0', 'prencoder_layer4_21/bias:0', 'batch_normalization_21/gamma:0', 'batch_normalization_21/beta:0'] when minimizing the loss.
10000/10000 [==============================] - 36s 4ms/sample - loss: -1.6228
Epoch 2/10
10000/10000 [==============================] - 35s 3ms/sample - loss: -8.9771
Epoch 3/10
10000/10000 [==============================] - 34s 3ms/sample - loss: -10.0491
Epoch 4/10
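One way to see exactly which variables receive no gradient, before ever calling fit, is to push a single batch through a `tf.GradientTape`. This is a diagnostic sketch; `find_dead_variables` is a hypothetical helper name, not part of Keras:

```python
import tensorflow as tf

# Diagnostic sketch: run one batch through a GradientTape and report every
# trainable variable whose gradient comes back as None.
def find_dead_variables(model, x, y, loss_fn):
    with tf.GradientTape() as tape:
        loss = loss_fn(y, model(x, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    return [v.name
            for v, g in zip(model.trainable_variables, grads)
            if g is None]
```

Calling this with one (x, y) batch from the training data should list the same variables that appear in the warning above.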

real_to_complex
ffting
conv_channel
complex_to_real

These don't look like TensorFlow or Keras functions. Are they? — Those are custom Lambda layers I made! — Do they use TensorFlow functions that support gradients? Without seeing them it's impossible to answer your question. — Those Lambda layers are used for the conversions! — Having no weight and bias variables is standard for Lambda layers, but we need to see their code: they must be built from TensorFlow functions that can propagate gradients.
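For what it's worth, here is a sketch of how such conversion helpers can be written purely from TensorFlow ops so that gradients flow through them. The function names match the question, but the bodies are my assumptions about what they do:

```python
import tensorflow as tf

# Assumed implementations of the question's Lambda helpers, built only from
# TensorFlow ops so that gradients can propagate through them.
def complex_to_real(x):
    # complex (..., N) -> real (..., 2N): real parts, then imaginary parts
    return tf.concat([tf.math.real(x), tf.math.imag(x)], axis=-1)

def real_to_complex(x):
    # real (..., 2N) -> complex (..., N)
    n = tf.shape(x)[-1] // 2
    return tf.complex(x[..., :n], x[..., n:])

def ffting(x):
    # FFT over the last axis; tf.signal.fft requires a complex input
    return tf.signal.fft(tf.cast(x, tf.complex64))

def iffting(x):
    return tf.signal.ifft(tf.cast(x, tf.complex64))
```

Even with differentiable ops like these, note that standard Keras layers such as Dense and BatchNormalization generally expect real-valued tensors, so feeding them the complex64 output of `real_to_complex` may itself break gradient flow, which could be why `batch_normalization_21` also appears in the warning.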