Python Keras LSTM val_loss always returns NaN during training
So I'm training a model on stock data with the following code:
....
generator = batch_generator(
    sequence_length=SEQ, testsize=testsize, x_train_g=x_train, y_train_g=y_train)
test_generator = batch_generator(
    sequence_length=SEQ, testsize=testsize, x_train_g=x_test, y_train_g=y_test_reshaped)
x_batch, y_batch = next(generator)
...
model.add(Dense(num_y_signals, activation='sigmoid'))
model.compile(loss='mse', optimizer='rmsprop', metrics=["mae"])
history = model.fit_generator(generator=generator, verbose=1,
                              validation_data=test_generator, validation_steps=10,
                              epochs=80,
                              steps_per_epoch=20,
                              )
def batch_generator(sequence_length, testsize, x_train_g, y_train_g, batch_size=256):
    warmup_steps = 30
    num_x_signals = len(x_train_g[0])
    num_y_signals = 1
    while True:
        x_shape = (batch_size, sequence_length, num_x_signals)
        x_batch = np.zeros(shape=x_shape, dtype=np.float16)
        y_shape = (batch_size, sequence_length, num_y_signals)
        y_batch = np.zeros(shape=y_shape, dtype=np.float16)
        for i in range(batch_size):
            idx = np.random.randint(testsize - sequence_length)
            x_batch[i] = x_train_g[idx:idx+sequence_length]
            y_batch[i] = y_train_g[idx:idx+sequence_length]
        yield (x_batch, y_batch)
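A quick sanity check before training is to pull one batch from each generator and verify it is finite. The sketch below uses hypothetical random arrays standing in for `next(generator)`'s output, since the real stock data is not shown in the question:

```python
import numpy as np

# Hypothetical stand-ins for x_batch, y_batch = next(generator);
# shapes mirror (batch_size, sequence_length, num_signals) from the generator above.
x_batch = np.random.rand(256, 10, 4).astype(np.float16)
y_batch = np.random.rand(256, 10, 1).astype(np.float16)

# np.isfinite is False for both NaN and inf, so one check catches either problem.
assert np.isfinite(x_batch).all(), "x_batch contains NaN or inf"
assert np.isfinite(y_batch).all(), "y_batch contains NaN or inf"
```

Running the same two assertions on a batch from `test_generator` would have exposed the NaN values in the validation set before any epoch ran.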
However, during training the validation loss is always NaN.
I have tried different activation functions and optimizers, but nothing helped.
I believe the mistake is simple, but I just can't figure it out.

OK, I found the error: my validation set contained NaN values.

Did you scale the data? Yes, the data is scaled.
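Since the root cause was NaN values in the validation set, one common fix is to drop the affected rows before building the generators. This is a minimal sketch on hypothetical toy arrays (the real `x_test`/`y_test` from the question are not shown):

```python
import numpy as np

# Hypothetical validation data containing one NaN row.
x_test = np.array([[0.1, 0.2], [np.nan, 0.4], [0.5, 0.6]])
y_test = np.array([[0.3], [0.7], [0.9]])

# Keep only rows where every feature and every target value is finite.
mask = ~(np.isnan(x_test).any(axis=1) | np.isnan(y_test).any(axis=1))
x_clean, y_clean = x_test[mask], y_test[mask]
print(x_clean.shape)  # → (2, 2)
```

With NaN rows removed (or imputed, if dropping them loses too much of the time series), `val_loss` becomes a finite number again.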