
Python: validation accuracy is not improving


I am fairly new to deep learning, and I am currently trying to predict consumer choice from EEG data. The full dataset consists of 1045 EEG recordings, each with a corresponding label indicating like or dislike of a product. The class distribution is 44% like, 56% dislike. I read that convolutional neural networks are suitable for processing raw EEG data, so I tried to implement a Keras-based network with the following structure:

import numpy as np
from sklearn.model_selection import train_test_split
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense
from keras.optimizers import Adam

X_train, X_test, y_train, y_test = train_test_split(
    full_data, target, test_size=0.20, random_state=42)

y_train = np.asarray(y_train).astype('float32').reshape((-1, 1))
y_test = np.asarray(y_test).astype('float32').reshape((-1, 1))

# X_train.shape == (836, 512, 14)
# y_train.shape == (836, 1)

model = Sequential()
model.add(Conv1D(16, kernel_size=3, activation="relu", input_shape=(512, 14)))
model.add(MaxPooling1D())
model.add(Conv1D(8, kernel_size=3, activation="relu"))
model.add(MaxPooling1D())
model.add(Flatten())
model.add(Dense(1, activation="sigmoid"))

model.compile(optimizer=Adam(lr=0.001), loss='binary_crossentropy', metrics=['accuracy'])

model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=20, batch_size=64)

However, when I fit the model, the validation accuracy does not change at all, as the following output shows:


Epoch 1/20
14/14 [==============================] - 0s 32ms/step - loss: 292.6353 - accuracy: 0.5383 - val_loss: 0.7884 - val_accuracy: 0.5407
Epoch 2/20
14/14 [==============================] - 0s 7ms/step - loss: 1.3748 - accuracy: 0.5598 - val_loss: 0.8860 - val_accuracy: 0.5502
Epoch 3/20
14/14 [==============================] - 0s 6ms/step - loss: 1.0537 - accuracy: 0.5598 - val_loss: 0.7629 - val_accuracy: 0.5455
Epoch 4/20
14/14 [==============================] - 0s 6ms/step - loss: 0.8827 - accuracy: 0.5598 - val_loss: 0.7010 - val_accuracy: 0.5455
Epoch 5/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7988 - accuracy: 0.5598 - val_loss: 0.8689 - val_accuracy: 0.5407
Epoch 6/20
14/14 [==============================] - 0s 6ms/step - loss: 1.0221 - accuracy: 0.5610 - val_loss: 0.6961 - val_accuracy: 0.5455
Epoch 7/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7415 - accuracy: 0.5598 - val_loss: 0.6945 - val_accuracy: 0.5455
Epoch 8/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7381 - accuracy: 0.5574 - val_loss: 0.7761 - val_accuracy: 0.5455
Epoch 9/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7326 - accuracy: 0.5598 - val_loss: 0.6926 - val_accuracy: 0.5455
Epoch 10/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7338 - accuracy: 0.5598 - val_loss: 0.6917 - val_accuracy: 0.5455
Epoch 11/20
14/14 [==============================] - 0s 7ms/step - loss: 0.7203 - accuracy: 0.5610 - val_loss: 0.6916 - val_accuracy: 0.5455
Epoch 12/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7192 - accuracy: 0.5610 - val_loss: 0.6914 - val_accuracy: 0.5455
Epoch 13/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7174 - accuracy: 0.5610 - val_loss: 0.6912 - val_accuracy: 0.5455
Epoch 14/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7155 - accuracy: 0.5610 - val_loss: 0.6911 - val_accuracy: 0.5455
Epoch 15/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7143 - accuracy: 0.5610 - val_loss: 0.6910 - val_accuracy: 0.5455
Epoch 16/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7129 - accuracy: 0.5610 - val_loss: 0.6909 - val_accuracy: 0.5455
Epoch 17/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7114 - accuracy: 0.5610 - val_loss: 0.6907 - val_accuracy: 0.5455
Epoch 18/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7103 - accuracy: 0.5610 - val_loss: 0.6906 - val_accuracy: 0.5455
Epoch 19/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7088 - accuracy: 0.5610 - val_loss: 0.6906 - val_accuracy: 0.5455
Epoch 20/20
14/14 [==============================] - 0s 6ms/step - loss: 0.7075 - accuracy: 0.5610 - val_loss: 0.6905 - val_accuracy: 0.5455


Thanks in advance for any insights.

The phenomenon you are encountering is called underfitting. It occurs when the training data is of insufficient quality or quantity, or when the network architecture is too small to learn the problem.

Try normalizing the input data, and experiment with different network architectures, learning rates, and activation functions.
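A minimal sketch of one common way to normalize multichannel EEG input: per-channel z-scoring, where the mean and standard deviation are computed on the training set only and then applied to the test set to avoid leakage. The randomly generated arrays below are stand-ins for the question's (836, 512, 14) data; the shapes are the only assumption carried over.

```python
import numpy as np

# Hypothetical arrays standing in for the question's EEG data:
# (samples, timesteps, channels)
rng = np.random.default_rng(42)
X_train = rng.normal(loc=5.0, scale=3.0, size=(836, 512, 14))
X_test = rng.normal(loc=5.0, scale=3.0, size=(209, 512, 14))

# Per-channel statistics from the TRAINING set only
mean = X_train.mean(axis=(0, 1), keepdims=True)  # shape (1, 1, 14)
std = X_train.std(axis=(0, 1), keepdims=True)

# Apply the same statistics to both splits
X_train_norm = (X_train - mean) / std
X_test_norm = (X_test - mean) / std
```

After this, each channel of the training data has zero mean and unit variance, which typically keeps the first epochs from producing the huge initial losses seen in the log above (292.6 at epoch 1).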


As @Muhammad Shahzad stated in his comment, adding some Dense layers after flattening would be a concrete architectural adjustment worth trying.
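A sketch of that adjustment applied to the question's model: the convolutional stack is unchanged, with Dense layers inserted between Flatten and the sigmoid output. The layer widths (128 and 64) are illustrative choices, not tuned values.

```python
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

model = Sequential()
model.add(Conv1D(16, kernel_size=3, activation="relu", input_shape=(512, 14)))
model.add(MaxPooling1D())
model.add(Conv1D(8, kernel_size=3, activation="relu"))
model.add(MaxPooling1D())
model.add(Flatten())
# New fully connected layers before the output (illustrative sizes)
model.add(Dense(128, activation="relu"))
model.add(Dense(64, activation="relu"))
model.add(Dense(1, activation="sigmoid"))
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```

This gives the classifier head more capacity than a single sigmoid unit reading the flattened feature map directly, which is one way to address the underfitting described above.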

Did you normalize the data before feeding it to the model? If not, try it. Also, with convolutional layers I would suggest adding a fully connected layer before the final one, e.g. model.add(Dense(128, activation='tanh')). You could also try changing the position of the conv layers.

Yes, I normalized the data and added some dense layers as you said; still the same problem.

Can you share the data?

Here is the link to the data file:

Thanks for the suggestion, but I did try it, and it is still the same problem.