Python: Why does Keras report 80% accuracy while my manual calculation gives 50%?
There is a problem with my model. After training and saving it, I load the model and try to predict images. To test it, I use the same images I trained the model on:
from tensorflow.keras.models import load_model
from keras.preprocessing.image import ImageDataGenerator
import csv

model = load_model('newmodel.h5')

# training and validation datasets
train_dir = "ROIClassifier/data/train/"
train_image_generator = ImageDataGenerator(rescale=1./255)  # Generator for our training data
train_dataset = train_image_generator.flow_from_directory(directory=train_dir,
                                                          target_size=(128, 128),
                                                          classes=['tumor', 'stroma'],
                                                          class_mode='binary',
                                                          )

validation_image_generator = ImageDataGenerator(rescale=1./255)  # Generator for our validation data
test_dir = "ROIClassifier/data/validation/"
validation_dataset = validation_image_generator.flow_from_directory(directory=test_dir,
                                                                    target_size=(128, 128),
                                                                    classes=['tumor', 'stroma'],
                                                                    class_mode='binary')

results = model.predict_classes(train_dataset, batch_size=None)
evaluation = model.evaluate(train_dataset)
print(train_dataset.class_indices)

# name of csv file
filename = "results.csv"
# field names
fields = ['ID', 'Class', 'Prediction']

with open(filename, 'w', newline='') as csvfile:
    # creating a csv writer object
    csvwriter = csv.writer(csvfile)
    # writing the fields
    csvwriter.writerow(fields)
    a = 0
    for i in range(0, 293):
        if train_dataset.classes[i] == int(*results[i]):
            a += 1
        csvwriter.writerow([i, train_dataset.classes[i], *results[i]])
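As an aside, `predict_classes` has since been removed from newer TensorFlow releases (it was dropped around TF 2.6). For a single-output sigmoid model, as `class_mode='binary'` implies here, the equivalent is thresholding the probabilities from `model.predict` at 0.5. A minimal numpy sketch of that thresholding, with made-up probabilities standing in for the model's output:

```python
import numpy as np

# Hypothetical sigmoid outputs, shaped (n_samples, 1) as model.predict
# would return them for a single-unit binary classifier.
probs = np.array([[0.10], [0.70], [0.52], [0.49]])

# Threshold at 0.5 to obtain hard class labels, replacing predict_classes.
classes = (probs > 0.5).astype("int32").ravel()

print(classes)  # [0 1 1 0]
```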
I load the model, generate the datasets, predict the classes, and evaluate the model. Finally, I save a CSV (to run some statistics in SPSS).
Model evaluation:
1/10 [==>...........................] - ETA: 2s - loss: 0.6935 - accuracy: 0.6562
2/10 [=====>........................] - ETA: 1s - loss: 0.6839 - accuracy: 0.6875
3/10 [========>.....................] - ETA: 1s - loss: 0.6657 - accuracy: 0.7188
4/10 [===========>..................] - ETA: 1s - loss: 0.6388 - accuracy: 0.7812
5/10 [==============>...............] - ETA: 0s - loss: 0.6361 - accuracy: 0.7812
6/10 [=================>............] - ETA: 0s - loss: 0.6315 - accuracy: 0.7865
7/10 [====================>.........] - ETA: 0s - loss: 0.6202 - accuracy: 0.7991
8/10 [=======================>......] - ETA: 0s - loss: 0.6241 - accuracy: 0.7891
9/10 [==========================>...] - ETA: 0s - loss: 0.6256 - accuracy: 0.7951
10/10 [==============================] - 1s 147ms/step - loss: 0.6205 - accuracy: 0.7959
But only 50% of the predictions match the actual classes.
Why?

I can't find the step in your code where the trained model is stored; you can see how that is done in this thread. It needs to be done carefully, otherwise you may not save the final weights, and when you load the model it will come back as an untrained object.

I used model.save('newmodel.h5') after fitting the model. Actually, I wrote history = model.fit(...); could that be the problem?

I found the problem: I had to set shuffle=False in flow_from_directory.
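The fix works because `flow_from_directory` shuffles batches by default (`shuffle=True`), so `model.predict` returns predictions in a shuffled order, while `train_dataset.classes` lists the labels in fixed, directory-sorted order. `model.evaluate` is unaffected, since Keras compares each batch's predictions to the labels delivered with that same batch, which is why it still reports ~80%. A minimal numpy sketch of the mismatch, with made-up labels and class counts:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ground-truth labels in the fixed, directory-sorted order
# that generator.classes reports them (counts made up for illustration).
labels = np.array([0] * 150 + [1] * 143)

# With shuffle=True (the default), flow_from_directory yields batches in
# a random order, so predictions come back in that same random order.
order = rng.permutation(len(labels))

# Even a perfect model's predictions, emitted in the shuffled order...
shuffled_predictions = labels[order]

# ...agree with the unshuffled generator.classes only at chance level.
naive_accuracy = np.mean(shuffled_predictions == labels)
print(f"index-by-index accuracy with shuffling: {naive_accuracy:.2f}")

# With shuffle=False the two orders coincide and the comparison is valid.
aligned_accuracy = np.mean(labels == labels)
print(f"index-by-index accuracy without shuffling: {aligned_accuracy:.2f}")
```

This is exactly the ~50% seen in the question: chance-level agreement between two sequences that are correct individually but no longer aligned.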