How do I fix an underfitting LSTM model in Keras?


I am trying to train an LSTM in Keras, but it seems to be underfitting:

from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
from tensorflow.keras.optimizers import Adam

adam = Adam(learning_rate=0.1, beta_1=0.9, beta_2=0.999, amsgrad=False)
model = Sequential()
model.add(LSTM(124, input_shape=(trainX.shape[1], trainX.shape[2])))
model.add(Dense(62))
model.compile(loss='mse', optimizer=adam)
history = model.fit(
    trainX, trainY, epochs=100, batch_size=431,
    validation_data=(testX, testY), verbose=2, shuffle=False)
My data represents a sequence of data breaches. The first columns 0-7 are a one-hot vector for the company type, and columns 0-51 are a one-hot vector for the US state. My goal is to predict the next breach from this dataset:

      date     size    0    1    2    3    4    5    6    7    0    1  ...   40   41   42   43   44   45   46   47   48   49   50   51
0        1    32000  0.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  1.0  ...  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
1        2     3500  0.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  ...  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
2        3  1400000  0.0  0.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  ...  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
3        4   120000  0.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  ...  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
4        5    59000  0.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  ...  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
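One place things can go wrong is turning the 2-D table above into the 3-D (samples, timesteps, features) array that an LSTM expects; the question never shows this step. A minimal NumPy sketch of a sliding-window setup for "predict the next row" (the window length and feature count here are made up for illustration, not taken from the question):

```python
import numpy as np

def make_windows(data, window):
    """Slice a (rows, features) array into overlapping sequences of shape
    (rows - window, window, features); each window is labelled with the
    row that immediately follows it (predict-the-next-breach setup)."""
    X = np.stack([data[i:i + window] for i in range(len(data) - window)])
    y = data[window:]  # the "next" row for each window
    return X, y

# 10 fake rows with 3 features each, standing in for the breach table
rows = np.arange(30, dtype="float32").reshape(10, 3)
X, y = make_windows(rows, window=4)
print(X.shape, y.shape)  # (6, 4, 3) (6, 3)
```

With this layout, X.shape[1] is the window length and X.shape[2] is the feature count, matching the input_shape=(trainX.shape[1], trainX.shape[2]) argument in the model above.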

Each epoch takes 0s, and the loss on the test data stays essentially flat. Here are the last 10 epochs of my model:

Epoch 90/100
 - 0s - loss: 130445623221660.6562 - val_loss: 5098486354074.1934
Epoch 91/100
 - 0s - loss: 130445622565865.9375 - val_loss: 5098486346506.8730
Epoch 92/100
 - 0s - loss: 130445622191126.0625 - val_loss: 5098485465787.7988
Epoch 93/100
 - 0s - loss: 130445621816386.2344 - val_loss: 5098484149524.7568
Epoch 94/100
 - 0s - loss: 130445621347961.4219 - val_loss: 5098484141613.4668
Epoch 95/100
 - 0s - loss: 130445620785851.6406 - val_loss: 5098483698846.1465
Epoch 96/100
 - 0s - loss: 130445620317426.8281 - val_loss: 5098483255046.9189
Epoch 97/100
 - 0s - loss: 130445619942686.9844 - val_loss: 5098482810903.7217
Epoch 98/100
 - 0s - loss: 130445619380577.2188 - val_loss: 5098481494640.6797
Epoch 99/100
 - 0s - loss: 130445619005837.3750 - val_loss: 5098481050841.4521
Epoch 100/100
 - 0s - loss: 130445618724782.5000 - val_loss: 5098480170466.3477
Here is a chart showing the loss during training and testing (chart omitted).

After normalizing the data:
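The normalization step itself isn't shown in the question. One common choice for a wide-ranging column like the breach sizes is min-max scaling, fitted on the training split only so no test statistics leak into training (a sketch with the sizes from the sample table, not the OP's actual code):

```python
import numpy as np

# Breach sizes from the sample table, used as a stand-in training split
train = np.array([[32000.0], [3500.0], [1400000.0], [120000.0], [59000.0]])
test = np.array([[250000.0], [7000.0]])  # hypothetical held-out sizes

# Fit the scaling parameters on the training split only, then apply
# the same parameters to the test split
lo, hi = train.min(axis=0), train.max(axis=0)
train_scaled = (train - lo) / (hi - lo)
test_scaled = (test - lo) / (hi - lo)

print(train_scaled.ravel())  # all values now lie in [0, 1]
```

scikit-learn's MinMaxScaler does the same thing (fit on train, transform both splits) and also supports inverting the transform to read predictions back in the original units.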

Output:

Epoch 90/100
 - 0s - loss: 1425.8638 - val_loss: 8939.7717
Epoch 91/100
 - 0s - loss: 1397.5078 - val_loss: 8847.1606
Epoch 92/100
 - 0s - loss: 1403.1658 - val_loss: 8756.0660
Epoch 93/100
 - 0s - loss: 1453.6541 - val_loss: 9025.9873
Epoch 94/100
 - 0s - loss: 1435.4252 - val_loss: 8928.1981
Epoch 95/100
 - 0s - loss: 1414.3660 - val_loss: 8830.8435
Epoch 96/100
 - 0s - loss: 1391.6843 - val_loss: 8734.7518
Epoch 97/100
 - 0s - loss: 1365.5961 - val_loss: 8655.9235
Epoch 98/100
 - 0s - loss: 1431.5669 - val_loss: 8896.3086
Epoch 99/100
 - 0s - loss: 1409.6963 - val_loss: 9031.4488
Epoch 100/100
 - 0s - loss: 1466.6886 - val_loss: 9088.9677
Loss (chart omitted):

Comments:

Your model isn't underfitting, it simply isn't learning anything. What is your data? What are you trying to achieve?

@ThibaultBacqueyrisss I explained the data in the question. Is there a reason the model isn't learning?

Basically it's not learning at all, which is exactly the underfitting condition. You have to elaborate on your data and specify the problem type (classification, regression, other) with an appropriate loss function. Is `size` the target data? You should strongly consider normalizing it.

Your data is sparse, just one-hot vectors representing the state and the company. What I understand from your question is that you are trying to predict one of the 51 states for a particular company type. That makes your X the company type and your y one or more states. First, MSE won't work here, because you are classifying over one or more states. Second, as far as I can tell your only raw feature is the company type (one-hot). Why use an LSTM? Is there a sequence in your input?
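To illustrate the last comment's point about the loss function: if the target is a one-hot state vector, this is a classification problem, and categorical cross-entropy behaves very differently from MSE. A NumPy sketch comparing the two on a one-hot target (illustrative only, the 52-way layout mirrors the state columns above):

```python
import numpy as np

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

def categorical_crossentropy(y_true, y_pred, eps=1e-7):
    # Only the probability assigned to the true class contributes
    return -np.sum(y_true * np.log(np.clip(y_pred, eps, 1.0)))

y_true = np.zeros(52)
y_true[3] = 1.0  # one-hot state label

# Two softmax-like outputs (each sums to 1): one nearly correct,
# one confidently pointing at the wrong state
confident = np.full(52, 0.001); confident[3] = 0.949
wrong = np.full(52, 0.001); wrong[10] = 0.949

# Cross-entropy assigns a large loss whenever the probability on the
# true class is near zero, which is what drives classification training
print(mse(y_true, wrong), categorical_crossentropy(y_true, wrong))
```

In Keras this would correspond to a final Dense layer with a softmax activation compiled with loss='categorical_crossentropy' instead of Dense(62) with MSE, assuming the states really are the prediction target.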