
Python: Why doesn't model.evaluate() produce the same accuracy as computing it manually with a for loop?


Following the transfer learning tutorial, I have a question about how model.evaluate() works compared to computing accuracy by hand.

At the very end, after fine-tuning, in the Evaluation and prediction section, we use model.evaluate() to compute accuracy on the test set like this:

loss, accuracy = model.evaluate(test_dataset)
print('Test accuracy :', accuracy)
6/6 [==============================] - 2s 217ms/step - loss: 0.0516 - accuracy: 0.9740
Test accuracy : 0.9739583134651184
Next, as part of a visualization exercise, we manually generate predictions from a batch of images in the test set:

# Retrieve a batch of images from the test set and run them through the model
image_batch, label_batch = test_dataset.as_numpy_iterator().next()
predictions = model.predict_on_batch(image_batch).flatten()

# Apply a sigmoid since our model returns logits
predictions = tf.nn.sigmoid(predictions)
predictions = tf.where(predictions < 0.5, 0, 1)
Does anyone know why there is a difference? Does model.evaluate() not use a sigmoid? Or does it use a threshold other than 0.5? Or is it something else I haven't considered? Note that my new model was trained on images other than the tutorial's cats and dogs, but the code is the same.


Thanks in advance for any help.
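One thing worth checking (a sketch of a possible mechanism, not a claim about what evaluate() actually does in your setup): in Keras, BinaryAccuracy applies its default threshold of 0.5 to whatever the model outputs. If the model outputs logits, thresholding probabilities at 0.5 is equivalent to thresholding logits at 0, not at 0.5, since sigmoid(0) = 0.5. A small numpy illustration of the difference:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

logits = np.array([-1.0, 0.2, 0.4, 0.7])

# Thresholding probabilities at 0.5 ...
preds_via_sigmoid = (sigmoid(logits) >= 0.5).astype(int)  # [0, 1, 1, 1]
# ... is the same as thresholding logits at 0, since sigmoid(0) = 0.5
preds_logits_at_zero = (logits >= 0.0).astype(int)        # [0, 1, 1, 1]
# But thresholding raw logits at 0.5 is a different, stricter rule
preds_logits_at_half = (logits >= 0.5).astype(int)        # [0, 0, 0, 1]

print(preds_via_sigmoid, preds_logits_at_zero, preds_logits_at_half)
```

If the compiled metric ever sees raw logits while the manual loop sees sigmoid outputs, the two 0.5 thresholds are not the same decision rule.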

Comment: The images you are feeding the model are new to it; it was never trained on them. So this becomes a model-tuning situation: my model performs well on the training set, acceptably on the test set, but poorly on new data. You would need to tune the model's hyperparameters and then check against the new images again, whatever the outcome.

Reply: Thanks for the response. I've updated the post to make it clearer. I did train this separate model on the new images; I followed the tutorial exactly, just with different images. The question is why applying the two accuracy methods to the same test set of the same model gives different results. My manual accuracy computation:
all_acc=tf.zeros([], tf.int32) #initialize array to hold all accuracy indicators (single element)
for image_batch, label_batch in test_dataset.as_numpy_iterator():
    predictions = model.predict_on_batch(image_batch).flatten() #run batch through model and return logits
    predictions = tf.nn.sigmoid(predictions) #apply sigmoid activation function to transform logits to [0,1]
    predictions = tf.where(predictions < 0.5, 0, 1) #round down or up accordingly since it's a binary classifier
    accuracy = tf.where(tf.equal(predictions,label_batch),1,0) #correct is 1 and incorrect is 0
    all_acc = tf.experimental.numpy.append(all_acc, accuracy)
all_acc = all_acc[1:]  #drop first placeholder element
avg_acc = tf.math.reduce_mean(tf.dtypes.cast(all_acc, tf.float16)) 
print('My Accuracy:', avg_acc.numpy()) 
My Accuracy: 0.974
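As an aside, the append-to-a-placeholder-tensor pattern above can be simplified by accumulating correct/total counts per batch, which avoids the dummy first element and the final slice. A sketch with plain numpy arrays standing in for the per-batch predictions and labels (the data here is illustrative, not from the model):

```python
import numpy as np

def accuracy_over_batches(batches):
    """batches: iterable of (predictions, labels) pairs of 0/1 arrays."""
    correct = 0
    total = 0
    for preds, labels in batches:
        correct += int((preds == labels).sum())  # count matches in this batch
        total += labels.size                     # count examples in this batch
    return correct / total

batches = [
    (np.array([1, 0, 1]), np.array([1, 0, 0])),  # 2 of 3 correct
    (np.array([0, 1]),    np.array([0, 1])),     # 2 of 2 correct
]
print(accuracy_over_batches(batches))  # 4/5 = 0.8
```

Counting per batch also handles a ragged final batch correctly, since the mean is taken over examples rather than over batches.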
current_set = set9 #define set to process. must do all nine, one at a time
all_acc=tf.zeros([], tf.int32) #initialize array to hold all accuracy indicators (single element)
loss, acc = model.evaluate(current_set) #now test the model's performance on the test set
for image_batch, label_batch in current_set.as_numpy_iterator():
    predictions = model.predict_on_batch(image_batch).flatten() #run batch through model and return logits
    predictions = tf.nn.sigmoid(predictions) #apply sigmoid activation function to transform logits to [0,1]
    predictions = tf.where(predictions < 0.5, 0, 1) #round down or up accordingly since it's a binary classifier
    accuracy = tf.where(tf.equal(predictions,label_batch),1,0) #correct is 1 and incorrect is 0
    all_acc = tf.experimental.numpy.append(all_acc, accuracy)
all_acc = all_acc[1:]  #drop first placeholder element
avg_acc = tf.math.reduce_mean(tf.dtypes.cast(all_acc, tf.float16))
print('My Accuracy:', avg_acc.numpy()) 
print('Tf Accuracy:', acc) 
My Accuracy: 0.7183
Tf Accuracy: 0.6240000128746033
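For what it's worth, a threshold mismatch alone can produce this kind of gap: scoring the same logits against the same labels with a 0.5 threshold on probabilities versus a 0.5 threshold on raw logits gives different accuracies. A deterministic toy example (the numbers are made up, not your data):

```python
import numpy as np

logits = np.array([0.2, 0.3, -0.4, 1.2, 0.1, -2.0])
labels = np.array([1,   1,    0,   1,   1,    0])

# sigmoid then threshold at 0.5 is equivalent to logits >= 0
preds_prob = (logits >= 0.0).astype(int)
# threshold of 0.5 applied directly to the logits
preds_raw = (logits >= 0.5).astype(int)

acc_prob = (preds_prob == labels).mean()  # 1.0
acc_raw = (preds_raw == labels).mean()    # 0.5
print(acc_prob, acc_raw)
```

Any logit in the open interval (0, 0.5) flips from class 1 to class 0 under the raw-logit rule, so the more borderline-positive examples a set contains, the larger the disagreement between the two scores.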