Python neural network's loss and accuracy stay constant during training


I built a neural network with one hidden layer. I use relu for the hidden layer and softmax for the output layer. Here is the code:

import numpy as np
import pandas as pd
from sklearn import metrics
from keras import layers
from keras.utils import np_utils
from keras.models import Sequential
from keras import optimizers

data = pd.read_csv("/kaggle/input/breast-cancer-wisconsin-data/data.csv")
data = np.array(data)
train=data[0:400]
validation=data[400:500]
test=data[500:569]

x_train = train[:,2:-2]
y_train = train[:,1]
y_train_digit=[0]*len(y_train)
for i in range(len(y_train)):
    if y_train[i]=="B":
        y_train_digit[i]=0
    else:
        y_train_digit[i]=1

y_train_digit= np.eye(2)[y_train_digit]

x_val= validation[:,2:-2]
y_val = validation[:,1]
y_val_digit=[0]*len(y_val)

for i in range(len(y_val)):
    if y_val[i]=="B":
        y_val_digit[i]=0
    else:
        y_val_digit[i]=1

y_val_digit=np.eye(2)[y_val_digit]

print(np.shape(x_train))
print(y_val_digit)


model = Sequential()
model.add(layers.Dense(10, activation = "relu", input_shape=(29,)))

model.add(layers.Dense(2, activation = "softmax"))
model.summary()

sgd = optimizers.SGD(lr=0.00001, decay=1e-6, momentum=0.9, nesterov=True)  
model.compile(loss='categorical_crossentropy',
              optimizer="sgd",
              metrics=['accuracy'])


model.fit( x_train, y_train_digit,
          batch_size=30,
          epochs=1000,
          verbose=1,
          validation_data=(x_val, y_val_digit))
But during training, the loss and accuracy stay essentially constant:

Epoch 81/1000
400/400 [==============================] - 0s 56us/step - loss: 0.6840 - accuracy: 0.5675 - val_loss: 0.6231 - val_accuracy: 0.7800
Epoch 82/1000
400/400 [==============================] - 0s 57us/step - loss: 0.6841 - accuracy: 0.5675 - val_loss: 0.6230 - val_accuracy: 0.7800
Epoch 83/1000
400/400 [==============================] - 0s 57us/step - loss: 0.6841 - accuracy: 0.5675 - val_loss: 0.6231 - val_accuracy: 0.7800
Epoch 84/1000
400/400 [==============================] - 0s 55us/step - loss: 0.6841 - accuracy: 0.5675 - val_loss: 0.6232 - val_accuracy: 0.7800
Epoch 85/1000
400/400 [==============================] - 0s 56us/step - loss: 0.6841 - accuracy: 0.5675 - val_loss: 0.6239 - val_accuracy: 0.7800
Epoch 86/1000
400/400 [==============================] - 0s 56us/step - loss: 0.6841 - accuracy: 0.5675 - val_loss: 0.6240 - val_accuracy: 0.7800
Epoch 87/1000
400/400 [==============================] - 0s 56us/step - loss: 0.6841 - accuracy: 0.5675 - val_loss: 0.6240 - val_accuracy: 0.7800
Epoch 88/1000
400/400 [==============================] - 0s 55us/step - loss: 0.6841 - accuracy: 0.5675 - val_loss: 0.6241 - val_accuracy: 0.7800

What's wrong? Why isn't the network learning? Is it the loss function? Or the optimizer? I also suspect the learning rate is too low.
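One detail worth checking in the code above: a custom `SGD` optimizer is constructed, but `compile` is then given the string `"sgd"`, so Keras falls back to a default SGD and the custom learning rate, decay, and momentum are silently ignored. A minimal sketch of the corrected compile step, as a fragment (`model` is the `Sequential` model built above; in newer Keras versions the argument is `learning_rate` rather than `lr`):

```python
from keras import optimizers

# Pass the optimizer *object*, not the string "sgd",
# so the custom hyperparameters actually take effect.
sgd = optimizers.SGD(lr=0.001, decay=1e-6, momentum=0.9, nesterov=True)
model.compile(loss='categorical_crossentropy',
              optimizer=sgd,  # not optimizer="sgd"
              metrics=['accuracy'])
```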

Did you normalize the data? How are you normalizing it?

I did normalize it, but sometimes the accuracy still stays constant. It seems the optimizer is stuck in a local minimum rather than the global one... Is it possible to tell the neural network to keep training until the validation accuracy reaches a certain percentage?
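For reference, a minimal normalization sketch in plain NumPy (the `standardize` helper and the toy arrays are illustrative, not from the original code): z-score each feature using statistics computed on the training split only, then apply the same transform to validation/test data so no information leaks into the scaling.

```python
import numpy as np

def standardize(train, *others):
    """Z-score features with the training split's mean and std.

    Returns the standardized training array followed by the other
    arrays transformed with the *training* statistics.
    """
    mean = train.mean(axis=0)
    std = train.std(axis=0)
    std[std == 0] = 1.0  # guard against constant features
    return tuple((a - mean) / std for a in (train, *others))

# Toy example: validation row equals the training mean, so it maps to zeros.
x_train = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0]])
x_val = np.array([[3.0, 30.0]])
x_train_s, x_val_s = standardize(x_train, x_val)
```

With features on wildly different scales (as in the Wisconsin breast-cancer data), skipping this step often leaves SGD crawling along a badly conditioned loss surface, which matches the "constant loss" symptom.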
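On the last question: Keras has no built-in "train until X% validation accuracy" option, but a custom callback can stop training once a metric crosses a threshold. A minimal sketch, assuming `tf.keras` (the `StopAtValAccuracy` name, the 0.95 threshold, and the `val_accuracy` metric key are illustrative assumptions):

```python
import tensorflow as tf

class StopAtValAccuracy(tf.keras.callbacks.Callback):
    """Stop training once validation accuracy reaches a threshold."""

    def __init__(self, threshold=0.95):
        super().__init__()
        self.threshold = threshold

    def on_epoch_end(self, epoch, logs=None):
        # logs holds the epoch's metrics, e.g. {'val_accuracy': 0.78, ...}
        val_acc = (logs or {}).get('val_accuracy')
        if val_acc is not None and val_acc >= self.threshold:
            self.model.stop_training = True

# Usage: model.fit(..., callbacks=[StopAtValAccuracy(0.95)])
```

Note that if the optimizer is genuinely stuck, such a callback just runs all epochs without ever stopping early; it is a convenience, not a fix for the underlying training problem.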