Python network isn't learning much

I am training a very simple network on my RGB image dataset, but it doesn't seem to learn much: the validation accuracy never improves from the start, and the training accuracy improves only slightly. What am I doing wrong? It is such a simple network that I can't see what could be going so terribly wrong.

import cv2
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from keras.preprocessing.image import ImageDataGenerator
import os
from keras import layers
from keras import models
from keras import optimizers
from keras.layers import Dropout
from keras.preprocessing.image import img_to_array, load_img

os.environ["CUDA_VISIBLE_DEVICES"]="0"

train_dir = '/home/d/Desktop/Bl/data/train'
eval_dir = '/home/d/Desktop/Bl/data/eval'
test_dir = '/home/d/Desktop/Bl/data/test'


# create a data generator
train_datagen = ImageDataGenerator(rescale=1./255,   #Scale the image between 0 and 1
                                    rotation_range=40,
                                    width_shift_range=0.2,
                                    height_shift_range=0.2,
                                    shear_range=0.2,
                                    zoom_range=0.2,
                                    horizontal_flip=True,)

val_datagen = ImageDataGenerator(rescale=1./255)  #We do not augment validation data. we only perform rescale

test_datagen = ImageDataGenerator(rescale=1./255)  #We do not augment test data either; we only rescale

# load and iterate training dataset
train_generator = train_datagen.flow_from_directory(train_dir, class_mode='categorical', batch_size=16, shuffle=True, seed=42)
# load and iterate validation dataset
val_generator = val_datagen.flow_from_directory(eval_dir, class_mode='categorical', batch_size=16, shuffle=True, seed=42)
# load and iterate test dataset (shuffle must be the boolean False; the string 'False' is truthy and would still shuffle)
test_generator = test_datagen.flow_from_directory(test_dir, class_mode=None, batch_size=1, shuffle=False, seed=42)




model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', input_shape=(256, 256, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(layers.Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_uniform'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(layers.Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_uniform'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(layers.Conv2D(128, (3, 3), activation='relu', kernel_initializer='he_uniform'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(Dropout(0.2))
model.add(layers.Flatten())
model.add(layers.Dropout(0.5))  #Dropout for regularization
model.add(layers.Dense(512, activation='relu'))
model.add(layers.Dense(3, activation='softmax'))  #Softmax at the end because we have three classes

#Let's see our model
model.summary()

model.compile(loss='categorical_crossentropy', optimizer=optimizers.SGD(lr=1e-7, momentum=0.9), metrics=['acc']) 
#Adam(lr=0.000001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0), metrics=['acc']) 

#The training part
history = model.fit_generator(train_generator,
                              steps_per_epoch=train_generator.n // train_generator.batch_size,
                              epochs=200,
                              validation_data=val_generator,
                              validation_steps=val_generator.n // val_generator.batch_size)

#Save the model
model.save_weights('/home/d/Desktop/Bl/model_weights.h5')
model.save('/home/d/Desktop/Bl/model_keras.h5')

#Let's plot the train and val curves
#get the details from the history object
acc = history.history['acc']
val_acc = history.history['val_acc']
loss = history.history['loss']
val_loss = history.history['val_loss']

epochs = range(1, len(acc) + 1)

#Train and validation accuracy
plt.plot(epochs, acc, 'b', label='Training accuracy')
plt.plot(epochs, val_acc, 'r', label='Validation accuracy')
plt.title('Training and Validation accuracy')
plt.legend()

plt.figure()
#Train and validation loss
plt.plot(epochs, loss, 'b', label='Training loss')
plt.plot(epochs, val_loss, 'r', label='Validation loss')
plt.title('Training and Validation loss')
plt.legend()

plt.show()

From a quick look I can say that a learning rate of 1e-7 is far too small; you have to tune this parameter so that the loss keeps decreasing. You could also try the Adam optimizer with its default learning rate.

Thanks, but I actually lowered it because more or less the same thing (possibly slightly worse) happens at lr=0.01 and at everything in between.

In that case you should be increasing the learning rate, not decreasing it. Have you tried Adam?

I tried Adam: 0.001 started off well (after I had tried 0.01) but quickly plateaued, so I tried 0.0001 and I am back to the same behaviour. For the first few epochs it seems to keep improving, then it starts to degrade; oddly, the changes are tiny, and it passes the best point very quickly, after only a few epochs. I also tried Adadelta with a scheduler (lowering the lr when improvement is small), yet the results seem no better than chance. So the network isn't learning anything: I have 3 classes and the val accuracy stays around 0.3333.
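As a sketch of what the answer suggests, assuming the rest of the question's script is unchanged: compile with Adam at its Keras default learning rate (0.001) and add a ReduceLROnPlateau callback, which is the kind of "lower the lr when improvement stalls" scheduler mentioned in the comments. The factor, patience and min_lr values below are illustrative assumptions, not values from the thread.

from keras import optimizers
from keras.callbacks import ReduceLROnPlateau

# Adam with its default learning rate (0.001)
model.compile(loss='categorical_crossentropy',
              optimizer=optimizers.Adam(),
              metrics=['acc'])

# Halve the learning rate when val_loss has not improved for 5 epochs
# (factor/patience/min_lr are illustrative, not tuned values)
reduce_lr = ReduceLROnPlateau(monitor='val_loss', factor=0.5,
                              patience=5, min_lr=1e-6, verbose=1)

history = model.fit_generator(train_generator,
                              steps_per_epoch=train_generator.n // train_generator.batch_size,
                              epochs=200,
                              validation_data=val_generator,
                              validation_steps=val_generator.n // val_generator.batch_size,
                              callbacks=[reduce_lr])

One sanity check worth keeping in mind: with three roughly balanced classes, a val accuracy stuck at ~0.3333 is exactly chance level, so any learning rate that moves val accuracy off that plateau is progress.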