Python Keras loss value not changing


I'm trying to apply a deep learning network to a loan-status dataset to check whether I can get better results than with traditional machine learning algorithms.

The accuracy seems low (even lower than with plain logistic regression). How can I improve it?
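For reference, the logistic regression baseline being compared against would look roughly like this (a minimal sketch, since the baseline code was not posted; it assumes scikit-learn and the same df_dummies used below):

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Same features/target as the Keras model below
X = df_dummies.drop('Loan_Status', axis=1).values
y = df_dummies['Loan_Status'].values

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# scikit-learn's solvers behave best on standardized inputs
scaler = StandardScaler()
clf = LogisticRegression()
clf.fit(scaler.fit_transform(X_train), y_train)
print(clf.score(scaler.transform(X_test), y_test))  # baseline test accuracy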

Things I've tried:
- changing the learning rate
- adding more layers
- increasing/decreasing the number of nodes

from keras.models import Sequential
from keras.layers import Dense
from keras import optimizers

# Features and binary target from the one-hot encoded dataframe
X = df_dummies.drop('Loan_Status', axis=1).values
y = df_dummies['Loan_Status'].values

model = Sequential()
model.add(Dense(50, input_dim=17, activation='relu'))
model.add(Dense(100, activation='relu'))
model.add(Dense(100, activation='relu'))
model.add(Dense(100, activation='relu'))
model.add(Dense(100, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

sgd = optimizers.SGD(lr=0.00001)

model.compile(optimizer=sgd, loss='binary_crossentropy', metrics=['accuracy'])

model.fit(X, y, epochs=50, shuffle=True, verbose=2)
model.summary()
Epoch 1/50
 - 1s - loss: 4.9835 - acc: 0.6873
Epoch 2/50
 - 0s - loss: 4.9830 - acc: 0.6873
Epoch 3/50
 - 0s - loss: 4.9821 - acc: 0.6873
Epoch 4/50
 - 0s - loss: 4.9815 - acc: 0.6873
Epoch 5/50
 - 0s - loss: 4.9807 - acc: 0.6873
Epoch 6/50
 - 0s - loss: 4.9800 - acc: 0.6873
Epoch 7/50
 - 0s - loss: 4.9713 - acc: 0.6873
Epoch 8/50
 - 0s - loss: 8.5354 - acc: 0.4397
Epoch 9/50
 - 0s - loss: 4.8322 - acc: 0.6743
Epoch 10/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 11/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 12/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 13/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 14/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 15/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 16/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 17/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 18/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 19/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 20/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 21/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 22/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 23/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 24/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 25/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 26/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 27/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 28/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 29/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 30/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 31/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 32/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 33/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 34/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 35/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 36/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 37/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 38/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 39/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 40/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 41/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 42/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 43/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 44/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 45/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 46/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 47/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 48/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 49/50
 - 0s - loss: 4.9852 - acc: 0.6873
Epoch 50/50
 - 0s - loss: 4.9852 - acc: 0.6873

Layer (type)                 Output Shape              Param #   
=================================================================
dense_19 (Dense)             (None, 50)                900       
_________________________________________________________________
dense_20 (Dense)             (None, 100)               5100      
_________________________________________________________________
dense_21 (Dense)             (None, 100)               10100     
_________________________________________________________________
dense_22 (Dense)             (None, 100)               10100     
_________________________________________________________________
dense_23 (Dense)             (None, 100)               10100     
_________________________________________________________________
dense_24 (Dense)             (None, 1)                 101       
=================================================================
Total params: 36,401
Trainable params: 36,401
Non-trainable params: 0
_________________________________________________________________
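A side note on the numbers above: the accuracy frozen at 0.6873 is presumably just the majority-class share of the dataset, and a completely flat loss usually means the network is emitting the same prediction for every row (for example, dead ReLUs or a saturated sigmoid caused by unscaled inputs). A quick check along these lines (a sketch; model and X as defined above):

import numpy as np

preds = model.predict(X)
# A single unique value here means the network has collapsed to a constant prediction
print(np.unique(np.round(preds, 4)))
# The positive-class share; compare with the frozen 0.6873 accuracy
print(y.mean())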

By making the network deeper and adding dropout I got a slight improvement, but I still think this can be improved further, since plain logistic regression gives better accuracy (80%+).

Does anyone know a way to improve it further?

from keras.layers import Dropout
from sklearn.model_selection import train_test_split

# Assumed train/test split producing the X_train/y_train used below
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = Sequential()
model.add(Dense(1000, input_dim=17, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1000, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1000, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1000, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1000, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(1000, activation='relu'))
model.add(Dense(1, activation='sigmoid'))

sgd = optimizers.SGD(lr=0.0001)

model.compile(optimizer=sgd, loss='binary_crossentropy', metrics=['accuracy'])

model.fit(X_train, y_train, epochs=20, shuffle=True, verbose=2, batch_size=30)



Epoch 1/20
 - 2s - loss: 4.8965 - acc: 0.6807
Epoch 2/20
 - 1s - loss: 4.6824 - acc: 0.7063
Epoch 3/20
 - 1s - loss: 4.6091 - acc: 0.7040
Epoch 4/20
 - 1s - loss: 4.5642 - acc: 0.7040
Epoch 5/20
 - 1s - loss: 4.6937 - acc: 0.7040
Epoch 6/20
 - 1s - loss: 4.6830 - acc: 0.7063
Epoch 7/20
 - 1s - loss: 4.6824 - acc: 0.7063
Epoch 8/20
 - 1s - loss: 4.6824 - acc: 0.7063
Epoch 9/20
 - 1s - loss: 4.6824 - acc: 0.7063
Epoch 10/20
 - 1s - loss: 4.6452 - acc: 0.7086
Epoch 11/20
 - 1s - loss: 4.6824 - acc: 0.7063
Epoch 12/20
 - 1s - loss: 4.6824 - acc: 0.7063
Epoch 13/20
 - 1s - loss: 4.7200 - acc: 0.7040
Epoch 14/20
 - 1s - loss: 4.6608 - acc: 0.7063
Epoch 15/20
 - 1s - loss: 4.6940 - acc: 0.7040
Epoch 16/20
 - 1s - loss: 4.7136 - acc: 0.7040
Epoch 17/20
 - 1s - loss: 4.6056 - acc: 0.7063
Epoch 18/20
 - 1s - loss: 4.5640 - acc: 0.7016
Epoch 19/20
 - 1s - loss: 4.7009 - acc: 0.7040
Epoch 20/20
 - 1s - loss: 4.6892 - acc: 0.7040
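
One thing the runs above don't include is input standardization, and loan data typically mixes one-hot dummy columns with raw amounts on very different scales, which can stall SGD at learning rates this small. A minimal sketch of what that would look like (assumes scikit-learn; Adam is swapped in for SGD here as a common default, not something from the original runs):

from sklearn.preprocessing import StandardScaler
from keras.models import Sequential
from keras.layers import Dense

# Standardize so no single raw-amount column dominates the gradients
scaler = StandardScaler()
X_train_s = scaler.fit_transform(X_train)
X_test_s = scaler.transform(X_test)

model = Sequential()
model.add(Dense(50, input_dim=17, activation='relu'))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train_s, y_train, epochs=50, shuffle=True, verbose=2)

# Evaluate on held-out data for a fair comparison with the logistic regression baseline
loss, acc = model.evaluate(X_test_s, y_test, verbose=0)
print('test accuracy: %.4f' % acc)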