Machine learning: validation accuracy fluctuates while training accuracy improves?

machine-learning, keras, deep-learning, lstm

I have a multi-class classification problem that depends on historical data. I am trying an LSTM with loss='sparse_categorical_crossentropy'. The training accuracy and loss improve and decrease, respectively; however, my test accuracy starts to fluctuate wildly.

What am I doing wrong?

Input data:

X = np.reshape(X, (X.shape[0], X.shape[1], 1))
X.shape
(200146, 13, 1)
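
Note that loss='sparse_categorical_crossentropy' expects y to hold integer class indices rather than one-hot vectors. A minimal sketch of the expected label format, with made-up values for illustration:

import numpy as np

# Hypothetical labels for a 4-class problem: one integer per sample,
# matching the Dense(4, activation='softmax') output layer in the model below.
y = np.array([0, 2, 1, 3, 2])   # shape (n_samples,), values in {0, 1, 2, 3}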
My model:

import numpy as np
from matplotlib import pyplot
from sklearn.model_selection import StratifiedKFold
from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense
from keras import regularizers
from keras.callbacks import EarlyStopping, ModelCheckpoint

# fix random seed for reproducibility
seed = 7
np.random.seed(seed)

# define 10-fold cross validation test harness
kfold = StratifiedKFold(n_splits=10, shuffle=False)  # random_state only applies when shuffle=True
cvscores = []
for train, test in kfold.split(X, y):
    regressor = Sequential()

    # units = the number of LSTM cells in this first layer; more units give the layer higher dimensionality
    # return_sequences=True because we are stacking another LSTM layer after this one
    # input_shape = (timesteps, features), here (13, 1)
    regressor.add(LSTM(units=50, return_sequences=True, input_shape=(X[train].shape[1], 1)))
    regressor.add(Dropout(0.2))

    # Extra LSTM layer
    regressor.add(LSTM(units=50, return_sequences=True))
    regressor.add(Dropout(0.2))
    # 3rd LSTM layer
    regressor.add(LSTM(units=50, return_sequences=True))
    regressor.add(Dropout(0.2))

    # 4th (final) LSTM layer: return_sequences is omitted, so it outputs a single vector
    regressor.add(LSTM(units=50))
    regressor.add(Dropout(0.2))

    # output layer
    regressor.add(Dense(4, activation='softmax', kernel_regularizer=regularizers.l2(0.001)))

    # Compile the RNN
    regressor.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

    # Set callback functions to early-stop training and save the best model so far
    callbacks = [EarlyStopping(monitor='val_loss', patience=9),
                 ModelCheckpoint(filepath='best_model.h5', monitor='val_loss', save_best_only=True)]


    history = regressor.fit(X[train], y[train], epochs=250, callbacks=callbacks, 
                        validation_data=(X[test], y[test]))

    # plot train and validation loss
    pyplot.plot(history.history['loss'])
    pyplot.plot(history.history['val_loss'])
    pyplot.title('model train vs validation loss')
    pyplot.ylabel('loss')
    pyplot.xlabel('epoch')
    pyplot.legend(['train', 'validation'], loc='upper right')
    pyplot.show()


    # evaluate the model
    scores = regressor.evaluate(X[test], y[test], verbose=0)
    print("%s: %.2f%%" % (regressor.metrics_names[1], scores[1]*100))
    cvscores.append(scores[1] * 100)
print("%.2f%% (+/- %.2f%%)" % (np.mean(cvscores), np.std(cvscores)))
Results:

(accuracy/loss plots omitted: the training curves improve smoothly from epoch to epoch, while the validation curves fluctuate sharply)


What you are describing here is overfitting: your model keeps memorizing your training data instead of generalizing, i.e., it learns the exact characteristics of your training set. This is the main problem you have to deal with in deep learning, and there is no universal fix; you have to experiment with different architectures, different hyperparameters, and so on.

You can start with a small model that underfits (i.e., both train and validation accuracy are low) and keep enlarging it until it overfits. Then you can work on the optimizer and the other hyperparameters, as in the sketch below.
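
As a minimal sketch of such an optimizer tweak (assuming the regressor from the question above), lowering the Adam learning rate is a common first step; a fluctuating validation curve sometimes smooths out with a smaller step size:

from keras.optimizers import Adam

# A smaller step size than Adam's default of 0.001; depending on your
# Keras version the argument is named learning_rate= or lr=.
regressor.compile(optimizer=Adam(learning_rate=1e-4),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])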


By a smaller model I mean one with fewer hidden units or fewer layers, for example the sketch below.
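
A minimal sketch of such a smaller starting point, assuming the same (13, 1) input shape and 4-class softmax output as in the question:

from keras.models import Sequential
from keras.layers import LSTM, Dropout, Dense

# One LSTM layer with 16 units instead of four stacked layers of 50 each;
# enlarge the model only once it visibly underfits.
model = Sequential()
model.add(LSTM(units=16, input_shape=(13, 1)))
model.add(Dropout(0.2))
model.add(Dense(4, activation='softmax'))
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])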

You seem to have too many LSTM layers stacked on top of one another, which eventually leads to overfitting. You should probably reduce the number of layers.

Thank you for your answer! For a multi-class classification problem where the predicted variable depends on past events, would you try a different approach?