Neural network Google Inception tensorflow.python.framework.errors.ResourceExhaustedError


When I try to run Google's Inception model in a loop over a list of images, I hit the error below after roughly 100 images. It looks like the process is running out of memory. I'm running on CPU only. Has anyone else run into this?

Traceback (most recent call last):
  File "clean_dataset.py", line 33, in <module>
    description, score = inception.run_inference_on_image(f.read())
  File "/Volumes/EXPANSION/research/dcgan-transfer/data/classify_image.py", line 178, in run_inference_on_image
    node_lookup = NodeLookup()
  File "/Volumes/EXPANSION/research/dcgan-transfer/data/classify_image.py", line 83, in __init__
    self.node_lookup = self.load(label_lookup_path, uid_lookup_path)
  File "/Volumes/EXPANSION/research/dcgan-transfer/data/classify_image.py", line 112, in load
    proto_as_ascii = tf.gfile.GFile(label_lookup_path).readlines()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/lib/io/file_io.py", line 110, in readlines
    self._prereadline_check()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/lib/io/file_io.py", line 72, in _prereadline_check
    compat.as_bytes(self.__name), 1024 * 512, status)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/errors.py", line 463, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors.ResourceExhaustedError: /tmp/imagenet/imagenet_2012_challenge_label_map_proto.pbtxt


real    6m32.403s
user    7m8.210s
sys     1m36.114s

The problem is that you can't simply import the original 'classify_image.py' into your own code as-is, especially when you put it inside a huge loop to classify thousands of images in "batch mode".
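
For context, the failing pattern from the question looks roughly like this (a minimal sketch; image_paths is a hypothetical list, and the returned tuple implies a locally modified run_inference_on_image, since the stock function only prints its results):

import classify_image as inception  # the stock script, imported as in the question

# Anti-pattern: every call repeats the script's full setup, and in
# particular constructs a brand-new NodeLookup instance per image.
for path in image_paths:  # hypothetical list of JPEG file names
    with open(path, 'rb') as f:
        description, score = inception.run_inference_on_image(f.read())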

Look at the original code here:

with tf.Session() as sess:
  # Some useful tensors:
  # 'softmax:0': A tensor containing the normalized prediction across
  #   1000 labels.
  # 'pool_3:0': A tensor containing the next-to-last layer containing 2048
  #   float description of the image.
  # 'DecodeJpeg/contents:0': A tensor containing a string providing JPEG
  #   encoding of the image.
  # Runs the softmax tensor by feeding the image_data as input to the graph.
  softmax_tensor = sess.graph.get_tensor_by_name('softmax:0')
  predictions = sess.run(softmax_tensor,
                         {'DecodeJpeg/contents:0': image_data})
  predictions = np.squeeze(predictions)

  # Creates node ID --> English string lookup.
  node_lookup = NodeLookup()

  top_k = predictions.argsort()[-FLAGS.num_top_predictions:][::-1]
  for node_id in top_k:
    human_string = node_lookup.id_to_string(node_id)
    score = predictions[node_id]
    print('%s (score = %.5f)' % (human_string, score))

As you can see, for every single classification it creates a new instance of the 'NodeLookup' class, which loads from the following files:

  • label_lookup = "imagenet_2012_challenge_label_map_proto.pbtxt"
  • uid_lookup_path = "imagenet_synset_to_human_label_map.txt"
Each instance is therefore quite large, and since your loop builds hundreds of instances of this class, you eventually hit 'tensorflow.python.framework.errors.ResourceExhaustedError'.
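
For reference, the load path visible in the traceback means every new instance re-reads both mapping files from disk, roughly like this (a simplified sketch of what NodeLookup.load does, not the verbatim code):

# Inside NodeLookup.load(); this runs again for every new NodeLookup() instance.
proto_as_ascii_lines = tf.gfile.GFile(uid_lookup_path).readlines()
proto_as_ascii = tf.gfile.GFile(label_lookup_path).readlines()
# Both files are then parsed line by line into a node ID --> string dict,
# so every instance pays the full file I/O and parsing cost again.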

I suggest writing a new script that adapts these classes and functions from 'classify_image.py', so that the NodeLookup class is not instantiated on every iteration: instantiate it once and reuse it inside the loop. Something like this:

with tf.Session() as sess:
        softmax_tensor = sess.graph.get_tensor_by_name('softmax:0')
        print 'Making classifications:'

        # Creates the node ID --> English string lookup exactly once,
        # outside the loop, so it is reused for every image.
        node_lookup = NodeLookup(label_lookup_path=self.Model_Save_Path + self.label_lookup,
                                 uid_lookup_path=self.Model_Save_Path + self.uid_lookup_path)

        current_counter = 1
        for (tensor_image, image) in self.tensor_files:
            print 'On ' + str(current_counter)

            predictions = sess.run(softmax_tensor, {'DecodeJpeg/contents:0': tensor_image})
            predictions = np.squeeze(predictions)

            top_k = predictions.argsort()[-int(self.filter_level):][::-1]

            for node_id in top_k:
                human_string = node_lookup.id_to_string(node_id)
                score = predictions[node_id]
                print('%s (score = %.5f)' % (human_string, score))

            current_counter += 1
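
Here self.tensor_files is assumed to hold the raw JPEG bytes for each image, read once up front; the 'DecodeJpeg/contents:0' input expects exactly that encoded string. A hypothetical way to build it:

# Hypothetical setup: pair each image's raw JPEG bytes with its file name.
self.tensor_files = []
for image_path in image_paths:  # assumed list of JPEG file names
    with open(image_path, 'rb') as f:
        self.tensor_files.append((f.read(), image_path))

With the session, graph, and NodeLookup all created once, the per-image cost inside the loop is a single sess.run() call, and memory use stays flat no matter how many images you classify.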