TF accuracy score and confusion matrix disagree. Does TensorFlow shuffle the data each time a BatchDataset is accessed?

Tags: tensorflow, scikit-learn, tensorflow2.0, tensorflow-datasets

The accuracy reported by model.evaluate() is very different from the accuracy computed from a scikit-learn or TF confusion matrix:

from sklearn.metrics import confusion_matrix
...

training_data, validation_data, testing_data = load_img_datasets()
# These ^ are tensorflow.python.data.ops.dataset_ops.BatchDataset

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    model = create_model(INPUT_SHAPE, NUM_CATEGORIES)
    optimizer = tf.keras.optimizers.Adam()
    metrics = ['accuracy']
    model.compile(loss='categorical_crossentropy',
                  optimizer=optimizer,
                  metrics=metrics)

history = model.fit(training_data, epochs=epochs,
                    validation_data=validation_data)

testing_data.shuffle(len(testing_data), reshuffle_each_iteration=False)
# I think this ^ is preventing additional shuffles on access

loss, accuracy = model.evaluate(testing_data)
print(f"Accuracy: {(accuracy * 100):.2f}%")
# Prints 
# Accuracy: 78.7%

y_hat = model.predict(testing_data)
y_test = np.concatenate([y for x, y in testing_data], axis=0)
c_matrix = confusion_matrix(np.argmax(y_test, axis=-1),
                            np.argmax(y_hat, axis=-1))
print(c_matrix)
# Prints result that does not agree:
# Confusion matrix:
#[[ 72 111  54  15  69]
# [ 82 100  44  16  78]
# [ 64 114  52  21  69]
# [ 71 106  54  21  68]
# [ 79 101  51  25  64]]
# Accuracy calculated from CM = 19.3%
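For reference, the accuracy quoted above is the standard one for a confusion matrix: correct predictions (the diagonal) divided by the total count. A quick check using the matrix printed above:

```python
import numpy as np

# Confusion matrix copied from the output above.
cm = np.array([[ 72, 111,  54,  15,  69],
               [ 82, 100,  44,  16,  78],
               [ 64, 114,  52,  21,  69],
               [ 71, 106,  54,  21,  68],
               [ 79, 101,  51,  25,  64]])

# Accuracy = correct predictions (diagonal) / all predictions.
accuracy = np.trace(cm) / cm.sum()
print(f"Accuracy calculated from CM = {accuracy:.1%}")  # 19.3%
```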
At first I thought TensorFlow was reshuffling testing_data on every access, so I added testing_data.shuffle(len(testing_data), reshuffle_each_iteration=False), but the results still disagree.
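One thing worth noting: tf.data.Dataset.shuffle is not an in-place operation. It returns a new dataset, so calling it without assigning the result back (as in the code above) has no effect on testing_data at all. A minimal sketch of the difference, using a toy range dataset in place of the image data:

```python
import tensorflow as tf

ds = tf.data.Dataset.range(5)

# No effect: shuffle() returns a NEW dataset; `ds` itself is unchanged.
ds.shuffle(5, reshuffle_each_iteration=False)
order_unassigned = [int(x) for x in ds]
print(order_unassigned)  # [0, 1, 2, 3, 4] -- still the original order

# Effective: keep the returned dataset and iterate over that instead.
fixed = ds.shuffle(5, seed=0, reshuffle_each_iteration=False)
first_pass = [int(x) for x in fixed]
second_pass = [int(x) for x in fixed]
print(first_pass == second_pass)  # True -- same order on every pass
```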

I also tried the TF confusion matrix:

y_hat = model.predict(testing_data)
y_test = np.concatenate([y for x, y in testing_data], axis=0)
true_class = tf.argmax(y_test, 1)
predicted_class = tf.argmax(y_hat, 1)
cm = tf.math.confusion_matrix(true_class, predicted_class, NUM_CATEGORIES)
print(cm)
…with similar results.


Obviously the predicted labels must be compared against the true labels. What am I doing wrong?

I can't find the source, but TensorFlow still seems to be shuffling testing_data behind the scenes. You can try iterating over the dataset to collect the predicted and true classes in the same pass:

predicted_classes = np.array([])
true_classes = np.array([])

for x, y in testing_data:
    predicted_classes = np.concatenate([predicted_classes,
                                        np.argmax(model(x), axis=-1)])
    true_classes = np.concatenate([true_classes,
                                   np.argmax(y.numpy(), axis=-1)])
model(x) is used instead of model.predict(x) for faster execution. From the TensorFlow documentation for Model.predict:

Computation is done in batches. This method is designed for performance in large scale inputs. For small amounts of inputs that fit in one batch, directly using __call__ is recommended for faster execution, e.g., model(x).


If that doesn't work, you can try model.predict(x) instead.
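The disagreement in the question is exactly what a dataset that reshuffles on each pass would produce: model.predict(testing_data) and the later comprehension over testing_data each trigger a separate pass, so the predictions and the collected labels end up in different orders. The loop above avoids this by reading features and labels from the same pass. A self-contained sketch of the effect, using an identity "model" on a toy labeled dataset (all names illustrative):

```python
import numpy as np
import tensorflow as tf

# Toy dataset: the label equals the feature, so an identity "model" is perfect.
features = np.arange(10, dtype=np.int64)
ds = (tf.data.Dataset.from_tensor_slices((features, features))
        .shuffle(10)   # reshuffle_each_iteration defaults to True
        .batch(4))

# Misaligned: two separate passes see two different shuffle orders.
preds = np.concatenate([x.numpy() for x, _ in ds])   # "predictions", pass 1
labels = np.concatenate([y.numpy() for _, y in ds])  # labels, pass 2
misaligned_acc = np.mean(preds == labels)
print(misaligned_acc)  # usually well below 1.0: the orders differ

# Aligned: collect predictions and labels from the same pass.
preds2, labels2 = [], []
for x, y in ds:
    preds2.append(x.numpy())   # identity "model"
    labels2.append(y.numpy())
aligned_acc = np.mean(np.concatenate(preds2) == np.concatenate(labels2))
print(aligned_acc)  # 1.0: same pass, same order
```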
