Different accuracy on test data for MNIST digit recognition in Keras (Python)


I am using Keras for handwritten digit recognition, and I have two files: predict.py and train.py.

train.py trains the model (if it is not already trained) and saves it to a directory; otherwise it just loads the trained model from the directory it was saved to, and prints the Test Loss and Test Accuracy:

def getData():
    (X_train, y_train), (X_test, y_test) = mnist.load_data()
    y_train = to_categorical(y_train, num_classes=10)
    y_test = to_categorical(y_test, num_classes=10)
    X_train = X_train.reshape(X_train.shape[0], 784)
    X_test = X_test.reshape(X_test.shape[0], 784)
    
    # normalizing the data to help with the training
    X_train /= 255
    X_test /= 255
    
 
    return X_train, y_train, X_test, y_test

def trainModel(X_train, y_train, X_test, y_test):
    # training parameters
    batch_size = 1
    epochs = 10
    # create model and add layers
    model = Sequential()    
    model.add(Dense(64, activation='relu', input_shape=(784,)))
    model.add(Dense(10, activation = 'softmax'))

  
    # compiling the sequential model
    model.compile(loss='categorical_crossentropy', metrics=['accuracy'], optimizer='adam')
    # training the model and saving metrics in history
    history = model.fit(X_train, y_train,
          batch_size=batch_size, epochs=epochs,
          verbose=2,
          validation_data=(X_test, y_test))

    loss_and_metrics = model.evaluate(X_test, y_test, verbose=2)
    print("Test Loss", loss_and_metrics[0])
    print("Test Accuracy", loss_and_metrics[1])
    
    # Save model structure and weights
    model_json = model.to_json()
    with open('model.json', 'w') as json_file:
        json_file.write(model_json)
    model.save_weights('mnist_model.h5')
    return model

def loadModel():
    json_file = open('model.json', 'r')
    model_json = json_file.read()
    json_file.close()
    model = model_from_json(model_json)
    model.load_weights("mnist_model.h5")
    return model

X_train, y_train, X_test, y_test = getData()

if(not os.path.exists('mnist_model.h5')):
    model = trainModel(X_train, y_train, X_test, y_test)
    print('trained model')
    print(model.summary())
else:
    model = loadModel()
    print('loaded model')
    print(model.summary())
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    loss_and_metrics = model.evaluate(X_test, y_test, verbose=2)
    print("Test Loss", loss_and_metrics[0])
    print("Test Accuracy", loss_and_metrics[1])
   
Here is the output (given that the model had already been trained earlier, this run only loads it):

('Test Loss', 1.741784990310669)

('Test Accuracy', 0.414)

predict.py, on the other hand, predicts handwritten digits:

def loadModel():
    json_file = open('model.json', 'r')
    model_json = json_file.read()
    json_file.close()
    model = model_from_json(model_json)
    model.load_weights("mnist_model.h5")
    return model

model = loadModel()

model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
print(model.summary())

(X_train, y_train), (X_test, y_test) = mnist.load_data()
y_test = to_categorical(y_test, num_classes=10)
X_test = X_test.reshape(X_test.shape[0], 28*28)


loss_and_metrics = model.evaluate(X_test, y_test, verbose=2)

print("Test Loss", loss_and_metrics[0])
print("Test Accuracy", loss_and_metrics[1])
In this case, to my surprise, I get the following results:

('Test Loss', 1.838037786674995)

('Test Accuracy', 0.8856)

In the second file, I get a Test Accuracy of 0.88 (more than twice what I got before).

Also, model.summary() is identical in both files:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
dense_1 (Dense)              (None, 64)                50240     
_________________________________________________________________
dense_2 (Dense)              (None, 10)                650       
=================================================================
Total params: 50,890
Trainable params: 50,890
Non-trainable params: 0
_________________________________________________________________

I can't figure out the reason behind this behavior. Is it normal, or am I missing something?

The difference is because in one case you call the evaluate() method on normalized data (i.e., divided by 255), and in the other case (i.e., in the predict.py file) you call it on un-normalized data. At inference (i.e., test) time, you should always apply the same preprocessing steps that were used on the training data.
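For clarity, here is a minimal sketch of the fix (assuming the same imports and the loaded, compiled model as in predict.py above): normalize X_test exactly the way train.py does before calling evaluate():

(X_train, y_train), (X_test, y_test) = mnist.load_data()
y_test = to_categorical(y_test, num_classes=10)
X_test = X_test.reshape(X_test.shape[0], 28*28)

# apply the same preprocessing as in train.py
X_test = X_test.astype('float32')
X_test /= 255.

loss_and_metrics = model.evaluate(X_test, y_test, verbose=2)
print("Test Loss", loss_and_metrics[0])
print("Test Accuracy", loss_and_metrics[1])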

Additionally, first convert the data to floats and then divide by 255 (to make sure a true division is performed, whether in Python 2.x or Python 3.x, instead of X_train /= 255 and X_test /= 255 on the raw integer arrays):


X_train = X_train.astype('float32')
X_test = X_test.astype('float32')

X_train /= 255.
X_test /= 255.
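As a side note, here is a minimal NumPy sketch (the array name is illustrative) of why the float conversion matters before the in-place division:

import numpy as np

a = np.arange(0, 256, 51, dtype='uint8')  # [  0  51 102 153 204 255]

# Python 2.x: `a /= 255` does in-place integer (floor) division, so every
# pixel except those equal to 255 becomes 0 -- the model then trains on
# nearly binary, mostly-zero inputs.
# Python 3.x: `a /= 255` raises a TypeError, since the float result of the
# true division cannot be cast back into the uint8 array.

b = a.astype('float32') / 255.  # safe in both versions: floats in [0, 1]
print(b)  # [0.  0.2 0.4 0.6 0.8 1. ]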
Comments:

Didn't you do any preprocessing before training the model? — I did. I have edited my question (the complete files are now included).

I guess you are using Python 2.x? — Yes, Python 2.7.15rc1