Problem training a TensorFlow neural network in Python: how do I fix it?
I am currently training an image-classification model with three classes of vehicles (van/SUV, sedan, and truck). I have 1800 training images and 210 validation images. I preprocess the data with keras.preprocessing.image.ImageDataGenerator() and Val_Data.flow(). When I feed the data in, my accuracy stays completely constant. Below are my code and results. I have been trying to fix this for a long time but cannot figure it out.
The code:
# Creating Training Data Shuffled and Organized
Train_Data = keras.preprocessing.image.ImageDataGenerator()
Train_Gen = Train_Data.flow(
    Train_Img,
    Train_Labels,
    batch_size=BATCH_SIZE,
    shuffle=True)

# Creating Validation Data Shuffled and Organized
Val_Data = keras.preprocessing.image.ImageDataGenerator()
Val_Gen = Val_Data.flow(
    Train_Img,
    Train_Labels,
    batch_size=BATCH_SIZE,
    shuffle=True)
print(Train_Gen)
###################################################################################
###################################################################################
# Outline the Model
hidden_layer_size = 300
output_size = 3

# Model Core
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(IMG_HEIGHT, IMG_WIDTH, CHANNELS)),
    tf.keras.layers.Dense(hidden_layer_size, activation='relu'),
    tf.keras.layers.Dense(hidden_layer_size, activation='relu'),
    tf.keras.layers.Dense(hidden_layer_size, activation='relu'),
    tf.keras.layers.Dense(hidden_layer_size, activation='relu'),
    tf.keras.layers.Dense(hidden_layer_size, activation='relu'),
    tf.keras.layers.Dense(output_size, activation='softmax')
])
custom_optimizer = tf.keras.optimizers.SGD(learning_rate=0.001)

# Compile Model
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])

# Train Model
NUM_EPOCHS = 15
model.fit(Train_Gen, validation_steps=10, epochs=NUM_EPOCHS, validation_data=Val_Gen, verbose=2)
The results:
Epoch 1/15
180/180 - 27s - loss: 10.7153 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 2/15
180/180 - 23s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 3/15
180/180 - 23s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 4/15
180/180 - 22s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 5/15
180/180 - 22s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 6/15
180/180 - 21s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 7/15
180/180 - 22s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 8/15
180/180 - 22s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 9/15
180/180 - 23s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 10/15
180/180 - 22s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 11/15
180/180 - 22s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 12/15
180/180 - 22s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 13/15
180/180 - 22s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 14/15
180/180 - 22s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
Epoch 15/15
180/180 - 22s - loss: 10.7454 - accuracy: 0.3333 - val_loss: 10.7991 - val_accuracy: 0.3300
I think the first thing you should look into is convolutional neural networks, since I can see you are trying to solve an image-based problem with a dense network. That can work, but not as well as a CNN. There are many reasons these models get stuck in TensorFlow; the most common one I have run into is the learning rate, which you can lower with a custom optimizer:
custom_optimizer = tf.keras.optimizers.Adam(learning_rate=0.0001)
model.compile(optimizer=custom_optimizer, loss='sparse_categorical_crossentropy', metrics=['acc'])
Check this link for a CNN implementation that is close to yours.
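As a minimal sketch of what such a CNN could look like for this three-class problem (the filter counts, dense width, and image dimensions here are illustrative assumptions, not values from your setup):

```python
import tensorflow as tf

# Illustrative input dimensions; replace with your IMG_HEIGHT, IMG_WIDTH, CHANNELS.
IMG_HEIGHT, IMG_WIDTH, CHANNELS = 128, 128, 3

model = tf.keras.Sequential([
    # Convolution + pooling blocks learn spatial features a Flatten+Dense stack cannot.
    tf.keras.layers.Conv2D(32, 3, activation='relu',
                           input_shape=(IMG_HEIGHT, IMG_WIDTH, CHANNELS)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(3, activation='softmax'),  # 3 vehicle classes
])

# Same lowered learning rate as suggested above.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.0001),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.summary()
```

You would then train it with the same `model.fit(Train_Gen, validation_data=Val_Gen, ...)` call as in your code.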