Python 3.x: Random results from a pretrained Inception V3 CNN

Tags: python-3.x, tensorflow, conv-neural-network, image-recognition

I am trying to create an InceptionV3 CNN that has previously been trained on ImageNet. While creating and loading the checkpoint appears to work fine, the results seem to be random: every time I run the script I get a different result, even though I change nothing at all. The network is recreated from scratch, the same unchanged checkpoint is loaded, and the same image is classified (which, as far as I know, should still lead to the same result, even if the network cannot tell what the image actually is).

I just noticed that the results are random even when I classify the same image several times within the same script execution.

I create the CNN like this:

import tensorflow as tf
from tensorflow.contrib import layers as layers_lib
from tensorflow.contrib import slim
from tensorflow.contrib.slim.nets import inception as nn_architecture

# the batch_norm arg_scope is a fix for the model not matching the checkpoint,
# see https://github.com/tensorflow/models/issues/2977
with slim.arg_scope([slim.conv2d, slim.fully_connected], normalizer_fn=slim.batch_norm,
                    normalizer_params={'updates_collections': None}):
    logits, endpoints = nn_architecture.inception_v3(input,               # input tensor (image placeholder)
                                                     1001,                # num_classes; 0 or None omits the logit
                                                                          # layer and returns its input instead
                                                     True,                # is_training (dropout is disabled when False, for eval)
                                                     0.8,                 # dropout keep probability
                                                     16,                  # min_depth
                                                     1.0,                 # depth_multiplier
                                                     layers_lib.softmax,  # prediction_fn
                                                     True,                # spatial_squeeze
                                                     tf.AUTO_REUSE,       # reuse
                                                     'InceptionV3')       # scope

Afterwards I load it like this:

saver = tf.train.Saver()
saver.restore(sess, CHECKPOINT_PATH)
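
For completeness, the snippets assume that `sess` is an already open session and that CHECKPOINT_PATH points at the pretrained weights. A minimal sketch of that setup, with a hypothetical checkpoint path:

import tensorflow as tf

CHECKPOINT_PATH = "checkpoints/inception_v3.ckpt"  # hypothetical path to the pretrained Inception V3 checkpoint
sess = tf.Session()                                # the session used by saver.restore and sess.run below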
Then I verify that it works by classifying an image.

I squash it from its original resolution down to 299x299, which the network requires as input:

from skimage import io
from scipy.ndimage import zoom
import numpy as np

car = io.imread("data/car.jpg")
# squash from the original resolution to the 299x299 input size the network expects
car_scaled = zoom(car, [299 / car.shape[0], 299 / car.shape[1], 1])

car_cnnable = np.array([car_scaled])  # add a batch dimension
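
As a side note, the same squashing can also be done with skimage itself instead of scipy's zoom. A minimal sketch, not the code from the question; the [-1, 1] scaling in the second-to-last line is the usual slim Inception preprocessing and is an assumption here:

from skimage import io
from skimage.transform import resize
import numpy as np

car = io.imread("data/car.jpg")
car_scaled = resize(car, (299, 299))                     # returns floats in [0, 1]
car_cnnable = np.expand_dims(car_scaled * 2.0 - 1.0, 0)  # assumed [-1, 1] scaling, plus a batch dimension
car_cnnable = car_cnnable.astype(np.float32)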
Then I try to classify the image and print which class it most likely belongs to and with what likelihood:

predictions = sess.run(logits, feed_dict={images: car_cnnable})
predictions = np.squeeze(predictions) #shape (1, 1001) to shape (1001)  

print(np.argmax(predictions))
print(predictions[np.argmax(predictions)])
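
If you want a human-readable label instead of the raw class index, you can map the index through an ImageNet label list. A minimal sketch, assuming a hypothetical labels file with 1001 lines whose order matches the checkpoint (index 0 being the extra background class of the 1001-class Inception models):

# hypothetical file, one label per line, 1001 entries matching the checkpoint's class order
with open("data/imagenet_labels.txt") as f:
    labels = [line.strip() for line in f]

top5 = np.argsort(predictions)[::-1][:5]  # indices of the five highest scores
for idx in top5:
    print(labels[idx], predictions[idx])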
The class is (or at least seems to be) random, and the likelihood varies as well. My last few executions were:

Class - likelihood 
899 - 0.98858
660 - 0.887204
734 - 0.904047
675 - 0.886952

Here is my full code:

Because I set is_training to True, dropout was applied every time the network was used, which randomly zeroes out activations and therefore changes the predictions from run to run. I was under the impression that this only happens during backpropagation, but in slim it is controlled purely by the is_training flag.

To make it work correctly, the code should be:

logits, endpoints = nn_architecture.inception_v3(input,               # input tensor (image placeholder)
                                                 1001,                # num_classes; 0 or None omits the logit
                                                                      # layer and returns its input instead
                                                 False,               # is_training, False disables dropout for evaluation
                                                 0.8,                 # dropout keep probability (ignored when is_training is False)
                                                 16,                  # min_depth
                                                 1.0,                 # depth_multiplier
                                                 layers_lib.softmax,  # prediction_fn
                                                 True,                # spatial_squeeze
                                                 tf.AUTO_REUSE,       # reuse
                                                 'InceptionV3')       # scope
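
To confirm the fix, the same image can be pushed through the graph twice; with is_training set to False, dropout is disabled and both runs should now produce identical logits. A minimal sketch, reusing `sess`, `logits`, the `images` placeholder and `car_cnnable` from the snippets above:

import numpy as np

first = sess.run(logits, feed_dict={images: car_cnnable})
second = sess.run(logits, feed_dict={images: car_cnnable})

print(np.allclose(first, second))           # expected: True, the forward pass is deterministic now
print(np.argmax(first), np.argmax(second))  # expected: the same class index both times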