Keras LSTM: categorical crossentropy drops to 0
I am currently trying to compare some RNNs, and I have a problem only with the LSTM, and I don't know why. I am training an LSTM, a SimpleRNN and a GRU with the same code and dataset. For all of them the loss decreases normally, but for the LSTM, after a certain point (loss around 0.4), the loss drops directly to 10e-8. If I then try to predict an output, I get only NaN. Here is the code:
from keras.models import Sequential
from keras.layers import LSTM
from keras.callbacks import ModelCheckpoint, EarlyStopping

nb_unit = 7
inp_shape = (maxlen, 7)
loss_ = "categorical_crossentropy"
metrics_ = "categorical_crossentropy"
optimizer_ = "Nadam"
nb_epoch = 250
batch_size = 64

model = Sequential()
model.add(LSTM(units=nb_unit,
               input_shape=inp_shape,
               return_sequences=True,
               activation='softmax'))  # I just change the cell name for SimpleRNN/GRU
model.compile(loss=loss_,
              optimizer=optimizer_,
              metrics=[metrics_])

checkpoint = ModelCheckpoint("lstm_simple.h5",
                             monitor=loss_,
                             verbose=1,
                             save_best_only=True,
                             save_weights_only=False,
                             mode='auto',
                             period=1)
early = EarlyStopping(monitor='loss',
                      min_delta=0,
                      patience=10,
                      verbose=1,
                      mode='auto')

history = model.fit(X_train, y_train,
                    validation_data=(X_test, y_test),
                    epochs=nb_epoch,
                    batch_size=batch_size,
                    verbose=2,
                    callbacks=[checkpoint, early])
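For what it's worth, this is how the collapse shows up after training; a minimal sketch (using the same X_test as in the fit() call above):

import numpy as np

# Once the LSTM has collapsed, every entry of the prediction is NaN.
preds = model.predict(X_test)
print("prediction shape:", preds.shape)              # (n_samples, maxlen, 7)
print("any NaN in predictions:", np.isnan(preds).any())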
Here are the outputs of the GRU and the LSTM for the same input:
Input :
[[[1 0 0 0 0 0 0]
[0 1 0 0 0 0 0]
[0 0 0 1 0 0 0]
[0 0 0 1 0 0 0]
[0 1 0 0 0 0 0]
[0 0 0 0 0 1 0]
[0 0 0 0 1 0 0]
[0 0 0 1 0 0 0]
[0 0 0 0 0 1 0]
[0 0 0 0 1 0 0]
[0 0 0 1 0 0 0]
[0 1 0 0 0 0 0]
[0 0 0 0 0 1 0]
[0 0 0 0 1 0 0]
[0 0 0 1 0 0 0]
[0 0 0 0 0 1 0]
[0 0 0 0 0 1 0]
[0 0 0 0 0 0 0]
[0 0 0 0 0 0 0]
[0 0 0 0 0 0 0]]]
LSTM predicts :
[[[ nan nan nan nan nan nan nan]
[ nan nan nan nan nan nan nan]
[ nan nan nan nan nan nan nan]
[ nan nan nan nan nan nan nan]
[ nan nan nan nan nan nan nan]
[ nan nan nan nan nan nan nan]
[ nan nan nan nan nan nan nan]
[ nan nan nan nan nan nan nan]
[ nan nan nan nan nan nan nan]
[ nan nan nan nan nan nan nan]
[ nan nan nan nan nan nan nan]
[ nan nan nan nan nan nan nan]
[ nan nan nan nan nan nan nan]
[ nan nan nan nan nan nan nan]
[ nan nan nan nan nan nan nan]
[ nan nan nan nan nan nan nan]
[ nan nan nan nan nan nan nan]
[ nan nan nan nan nan nan nan]
[ nan nan nan nan nan nan nan]
[ nan nan nan nan nan nan nan]]]
GRU predicts :
[[[ 0. 0.54 0. 0. 0.407 0. 0. ]
[ 0. 0.005 0.66 0.314 0. 0. 0.001]
[ 0. 0.001 0.032 0.957 0. 0.004 0. ]
[ 0. 0.628 0. 0. 0. 0.372 0. ]
[ 0. 0.555 0. 0. 0. 0.372 0. ]
[ 0. 0. 0. 0. 0.996 0.319 0. ]
[ 0. 0. 0.167 0.55 0. 0. 0. ]
[ 0. 0.486 0. 0.002 0. 0.51 0. ]
[ 0. 0.001 0. 0. 0.992 0.499 0. ]
[ 0. 0. 0.301 0.55 0. 0. 0. ]
[ 0. 0.396 0.001 0.007 0. 0.592 0. ]
[ 0. 0.689 0. 0. 0. 0.592 0. ]
[ 0. 0.001 0. 0. 0.997 0.592 0. ]
[ 0. 0. 0.37 0.55 0. 0. 0. ]
[ 0. 0.327 0.003 0.025 0. 0.599 0. ]
[ 0. 0.001 0. 0. 0.967 0.599 0.002]
[ 0. 0. 0. 0. 0. 0.002 0.874]
[ 0.004 0.076 0.128 0.337 0.02 0.069 0.378]
[ 0.006 0.379 0.047 0.113 0.029 0.284 0.193]
[ 0.006 0.469 0.001 0.037 0.13 0.295 0.193]]]
For the loss, the last lines of the fit() history, and the loss evolution over the epochs, were attached as screenshots (not reproduced here).
I tried it before without the softmax and with MSE as the loss function, and I didn't get any error.
If needed, you can find the notebook and the script used to generate the dataset on GitHub ().
Thanks a lot for your support,
Regards,
Nicolas
Edit 1:
The root cause seems to be that the softmax output vanishes. If I stop training just before it crashes and display the sum of the softmax outputs for each timestep, I get the values below (a sketch of how to compute these sums follows the numbers):
LSTM :
[[ 0.112]
[ 0.008]
[ 0.379]
[ 0.04 ]
[ 0.001]
[ 0.104]
[ 0.021]
[ 0. ]
[ 0.104]
[ 0.343]
[ 0.012]
[ 0. ]
[ 0.23 ]
[ 0.13 ]
[ 0.147]
[ 0.145]
[ 0.152]
[ 0.157]
[ 0.163]
[ 0.169]]
GRU :
[[ 0.974]
[ 0.807]
[ 0.719]
[ 1.184]
[ 0.944]
[ 0.999]
[ 1.426]
[ 0.957]
[ 0.999]
[ 1.212]
[ 1.52 ]
[ 0.954]
[ 0.42 ]
[ 0.83 ]
[ 0.903]
[ 0.944]
[ 0.976]
[ 1.005]
[ 1.022]
[ 1.029]]
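These per-timestep sums can be checked with a few lines; a minimal sketch (assuming X_sample is a single one-hot sequence of shape (1, maxlen, 7)):

import numpy as np

# Each timestep should be a probability distribution over the 7
# classes, so each row of the output should sum to ~1.0.
preds = model.predict(X_sample)        # shape: (1, maxlen, 7)
softmax_sums = preds.sum(axis=-1)      # shape: (1, maxlen)
print(np.round(softmax_sums.T, 3))     # one sum per timestep, as printed above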
When the softmax sum reaches 0, the next step effectively tries to divide by 0. I don't know how to fix it properly; I'm just posting my current workaround in case someone runs into this problem in the future. To avoid the vanishing, I added a simple fully connected layer with the same output size as the input, and it works fine afterwards. That layer allows another "configuration" of the LSTM/GRU/SimpleRNN output and prevents the output from vanishing. Here is the final code:
from keras.models import Sequential
from keras.layers import LSTM, Dense
from keras.callbacks import ModelCheckpoint, EarlyStopping

nb_unit = 7
inp_shape = (maxlen, 7)
loss_ = "categorical_crossentropy"
metrics_ = "categorical_crossentropy"
optimizer_ = "Nadam"
nb_epoch = 250
batch_size = 64

model = Sequential()
model.add(LSTM(units=nb_unit,
               input_shape=inp_shape,
               return_sequences=True))  # LSTM/GRU/SimpleRNN
model.add(Dense(7, activation='softmax'))  # New
model.compile(loss=loss_,
              optimizer=optimizer_,
              metrics=[metrics_])

checkpoint = ModelCheckpoint("lstm_simple.h5",
                             monitor=loss_,
                             verbose=1,
                             save_best_only=True,
                             save_weights_only=False,
                             mode='auto',
                             period=1)
early = EarlyStopping(monitor='loss',
                      min_delta=0,
                      patience=10,
                      verbose=1,
                      mode='auto')

history = model.fit(X_train, y_train,  # fit exactly as before
                    validation_data=(X_test, y_test),
                    epochs=nb_epoch,
                    batch_size=batch_size,
                    verbose=2,
                    callbacks=[checkpoint, early])
I hope this can help someone else :)
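As a quick sanity check of the fix (same assumed X_sample as above), the per-timestep sums should now stay at 1 and contain no NaN:

import numpy as np

# With Dense(7, activation='softmax') on top, every timestep is a
# proper distribution again: no NaN, probabilities summing to 1.
preds = model.predict(X_sample)
assert not np.isnan(preds).any()
print(np.round(preds.sum(axis=-1), 3))   # expect a column of 1.0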