Python Inception: how to create an output function from an image input


How can I create a function for Inception v3 that:

  • takes an image as input

  • prints the logits for the labels as output

  • The original Inception v3 code is as follows:

    Below is example code in which they compute the output from the graph. I would like the model to use a checkpoint instead of the graph, but I do not know how to do the same thing as in the example below using a checkpoint.

    """Simple image classification with Inception.
    
    Run image classification with Inception trained on ImageNet 2012 Challenge data
    set.
    
    This program creates a graph from a saved GraphDef protocol buffer,
    and runs inference on an input JPEG image. It outputs human readable
    strings of the top 5 predictions along with their probabilities.
    
    Change the --image_file argument to any jpg image to compute a
    classification of that image.
    
    Please see the tutorial and website for a detailed description of how
    to use this script to perform image recognition.
    
    https://tensorflow.org/tutorials/image_recognition/
    """
    
    from __future__ import absolute_import
    from __future__ import division
    from __future__ import print_function
    
    import argparse
    import os.path
    import re
    import sys
    import tarfile
    
    import numpy as np
    from six.moves import urllib
    import tensorflow as tf
    
    FLAGS = None
    
    # pylint: disable=line-too-long
    DATA_URL = 'http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz'
    # pylint: enable=line-too-long
    
    
    class NodeLookup(object):
      """Converts integer node ID's to human readable labels."""
    
      def __init__(self,
                   label_lookup_path=None,
                   uid_lookup_path=None):
        if not label_lookup_path:
          label_lookup_path = os.path.join(
              FLAGS.model_dir, 'imagenet_2012_challenge_label_map_proto.pbtxt')
        if not uid_lookup_path:
          uid_lookup_path = os.path.join(
              FLAGS.model_dir, 'imagenet_synset_to_human_label_map.txt')
        self.node_lookup = self.load(label_lookup_path, uid_lookup_path)
    
      def load(self, label_lookup_path, uid_lookup_path):
        """Loads a human readable English name for each softmax node.
    
        Args:
          label_lookup_path: string UID to integer node ID.
          uid_lookup_path: string UID to human-readable string.
    
        Returns:
          dict from integer node ID to human-readable string.
        """
        if not tf.gfile.Exists(uid_lookup_path):
          tf.logging.fatal('File does not exist %s', uid_lookup_path)
        if not tf.gfile.Exists(label_lookup_path):
          tf.logging.fatal('File does not exist %s', label_lookup_path)
    
        # Loads mapping from string UID to human-readable string
        proto_as_ascii_lines = tf.gfile.GFile(uid_lookup_path).readlines()
        uid_to_human = {}
        p = re.compile(r'[n\d]*[ \S,]*')
        for line in proto_as_ascii_lines:
          parsed_items = p.findall(line)
          uid = parsed_items[0]
          human_string = parsed_items[2]
          uid_to_human[uid] = human_string
    
        # Loads mapping from string UID to integer node ID.
        node_id_to_uid = {}
        proto_as_ascii = tf.gfile.GFile(label_lookup_path).readlines()
        for line in proto_as_ascii:
          if line.startswith('  target_class:'):
            target_class = int(line.split(': ')[1])
          if line.startswith('  target_class_string:'):
            target_class_string = line.split(': ')[1]
            node_id_to_uid[target_class] = target_class_string[1:-2]
    
        # Loads the final mapping of integer node ID to human-readable string
        node_id_to_name = {}
        for key, val in node_id_to_uid.items():
          if val not in uid_to_human:
            tf.logging.fatal('Failed to locate: %s', val)
          name = uid_to_human[val]
          node_id_to_name[key] = name
    
        return node_id_to_name
    
      def id_to_string(self, node_id):
        if node_id not in self.node_lookup:
          return ''
        return self.node_lookup[node_id]
    
    
    def create_graph():
      """Creates a graph from saved GraphDef file and returns a saver."""
      # Creates graph from saved graph_def.pb.
      with tf.gfile.FastGFile(os.path.join(
          FLAGS.model_dir, 'classify_image_graph_def.pb'), 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        _ = tf.import_graph_def(graph_def, name='')
    
    
    def run_inference_on_image(image):
      """Runs inference on an image.
    
      Args:
        image: Image file name.
    
      Returns:
        Nothing
      """
      if not tf.gfile.Exists(image):
        tf.logging.fatal('File does not exist %s', image)
      image_data = tf.gfile.FastGFile(image, 'rb').read()
    
      # Creates graph from saved GraphDef.
      create_graph()
    
      with tf.Session() as sess:
        # Some useful tensors:
        # 'softmax:0': A tensor containing the normalized prediction across
        #   1000 labels.
        # 'pool_3:0': A tensor containing the next-to-last layer containing 2048
        #   float description of the image.
        # 'DecodeJpeg/contents:0': A tensor containing a string providing JPEG
        #   encoding of the image.
        # Runs the softmax tensor by feeding the image_data as input to the graph.
        softmax_tensor = sess.graph.get_tensor_by_name('softmax:0')
        predictions = sess.run(softmax_tensor,
                               {'DecodeJpeg/contents:0': image_data})
        predictions = np.squeeze(predictions)
    
        # Creates node ID --> English string lookup.
        node_lookup = NodeLookup()
    
        top_k = predictions.argsort()[-FLAGS.num_top_predictions:][::-1]
        for node_id in top_k:
          human_string = node_lookup.id_to_string(node_id)
          score = predictions[node_id]
          print('%s (score = %.5f)' % (human_string, score))
    
    
    def maybe_download_and_extract():
      """Download and extract model tar file."""
      dest_directory = FLAGS.model_dir
      if not os.path.exists(dest_directory):
        os.makedirs(dest_directory)
      filename = DATA_URL.split('/')[-1]
      filepath = os.path.join(dest_directory, filename)
      if not os.path.exists(filepath):
        def _progress(count, block_size, total_size):
          sys.stdout.write('\r>> Downloading %s %.1f%%' % (
              filename, float(count * block_size) / float(total_size) * 100.0))
          sys.stdout.flush()
        filepath, _ = urllib.request.urlretrieve(DATA_URL, filepath, _progress)
        print()
        statinfo = os.stat(filepath)
        print('Successfully downloaded', filename, statinfo.st_size, 'bytes.')
      tarfile.open(filepath, 'r:gz').extractall(dest_directory)
    
    
    def main(_):
      maybe_download_and_extract()
      image = (FLAGS.image_file if FLAGS.image_file else
               os.path.join(FLAGS.model_dir, 'cropped_panda.jpg'))
      run_inference_on_image(image)
    
    
    if __name__ == '__main__':
      parser = argparse.ArgumentParser()
      # classify_image_graph_def.pb:
      #   Binary representation of the GraphDef protocol buffer.
      # imagenet_synset_to_human_label_map.txt:
      #   Map from synset ID to a human readable string.
      # imagenet_2012_challenge_label_map_proto.pbtxt:
      #   Text representation of a protocol buffer mapping a label to synset ID.
      parser.add_argument(
          '--model_dir',
          type=str,
          default='/tmp/imagenet',
          help="""\
          Path to classify_image_graph_def.pb,
          imagenet_synset_to_human_label_map.txt, and
          imagenet_2012_challenge_label_map_proto.pbtxt.\
          """
      )
      parser.add_argument(
          '--image_file',
          type=str,
          default='',
          help='Absolute path to image file.'
      )
      parser.add_argument(
          '--num_top_predictions',
          type=int,
          default=5,
          help='Display this many predictions.'
      )
      FLAGS, unparsed = parser.parse_known_args()
      tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
    

    Just run it like this:
    python classify_image.py --image_file /path/to/file
    This takes an image as input and outputs the labels.
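
    If you want this as a reusable function (the "takes an image and returns the label scores" part of the question) rather than a command-line script, a minimal sketch built on the same frozen GraphDef that classify_image.py downloads could look like the following. The function name predictions_for_image and the default model_dir are placeholders of mine; the tensor names ('softmax:0', 'DecodeJpeg/contents:0') come from the script above. Note that this still loads the frozen graph rather than a checkpoint; a checkpoint-based variant is sketched further below.

    import os.path
    import numpy as np
    import tensorflow as tf

    def predictions_for_image(image_path, model_dir='/tmp/imagenet'):
      """Reads a JPEG file and returns the model's prediction vector."""
      # Import the frozen Inception GraphDef that classify_image.py downloads.
      with tf.gfile.FastGFile(os.path.join(
          model_dir, 'classify_image_graph_def.pb'), 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name='')

      image_data = tf.gfile.FastGFile(image_path, 'rb').read()
      with tf.Session() as sess:
        # 'softmax:0' holds the normalized predictions over the labels;
        # 'DecodeJpeg/contents:0' accepts the raw JPEG bytes as input.
        softmax_tensor = sess.graph.get_tensor_by_name('softmax:0')
        predictions = sess.run(softmax_tensor,
                               {'DecodeJpeg/contents:0': image_data})
        return np.squeeze(predictions)

    For example, scores = predictions_for_image('/path/to/file.jpg') returns the full score vector; the NodeLookup class from the script can then turn the top indices into readable labels.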

    You may also want to try adding the line below. It will pick up and classify the .jpg file most recently added to the specified folder:

    latest = max(glob.iglob('/home/l2grp/Jetty/src/ubiserv/simple/img/*.[Jj][Pp][Gg]'), key=os.path.getctime)

    """Please see the tutorial and website for a detailed description of how
    to use this script to perform image recognition.

    https://tensorflow.org/tutorials/image_recognition/
    """

    from __future__ import absolute_import
    from __future__ import division
    from __future__ import print_function

    import argparse
    import os.path
    import re
    import sys
    import tarfile
    import glob

    import numpy as np
    from six.moves import urllib
    import tensorflow as tf

    FLAGS = None

    # pylint: disable=line-too-long
    DATA_URL = 'http://download.tensorflow.org/models/image/imagenet/inception-2015-12-05.tgz'
    # pylint: enable=line-too-long

    latest = max(glob.iglob('/home/l2grp/Jetty/src/ubiserv/simple/img/*.[Jj][Pp][Gg]'), key=os.path.getctime)


    class NodeLookup(object):
      """Converts integer node ID's to human readable labels."""

      def __init__(self,
                   label_lookup_path=None,
                   uid_lookup_path=None):
        if not label_lookup_path:
          label_lookup_path = os.path.join(
              FLAGS.model_dir, 'imagenet_2012_challenge_label_map_proto.pbtxt')
        if not uid_lookup_path:
          uid_lookup_path = os.path.join(
              FLAGS.model_dir, 'imagenet_synset_to_human_label_map.txt')
        self.node_lookup = self.load(label_lookup_path, uid_lookup_path)

      def load(self, label_lookup_path, uid_lookup_path):
        """Loads a human readable English name for each softmax node.

        Args:
          label_lookup_path: string UID to integer node ID.
          uid_lookup_path: string UID to human-readable string.

        Returns:
          dict from integer node ID to human-readable string.
        """
        if not tf.gfile.Exists(uid_lookup_path):
          tf.logging.fatal('File does not exist %s', uid_lookup_path)
        if not tf.gfile.Exists(label_lookup_path):
          tf.logging.fatal('File does not exist %s', label_lookup_path)

        # Loads mapping from string UID to human-readable string
        proto_as_ascii_lines = tf.gfile.GFile(uid_lookup_path).readlines()
        uid_to_human = {}
        p = re.compile(r'[n\d]*[ \S,]*')
        for line in proto_as_ascii_lines:
          parsed_items = p.findall(line)
          uid = parsed_items[0]
          human_string = parsed_items[2]
          uid_to_human[uid] = human_string

        # Loads mapping from string UID to integer node ID.
        node_id_to_uid = {}
        proto_as_ascii = tf.gfile.GFile(label_lookup_path).readlines()
        for line in proto_as_ascii:
          if line.startswith('  target_class:'):
            target_class = int(line.split(': ')[1])
          if line.startswith('  target_class_string:'):
            target_class_string = line.split(': ')[1]
            node_id_to_uid[target_class] = target_class_string[1:-2]

        # Loads the final mapping of integer node ID to human-readable string
        node_id_to_name = {}
        for key, val in node_id_to_uid.items():
          if val not in uid_to_human:
            tf.logging.fatal('Failed to locate: %s', val)
          name = uid_to_human[val]
          node_id_to_name[key] = name

        return node_id_to_name

      def id_to_string(self, node_id):
        if node_id not in self.node_lookup:
          return ''
        return self.node_lookup[node_id]


    def create_graph():
      """Creates a graph from saved GraphDef file and returns a saver."""
      # Creates graph from saved graph_def.pb.
      with tf.gfile.FastGFile(os.path.join(
          FLAGS.model_dir, 'classify_image_graph_def.pb'), 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
        _ = tf.import_graph_def(graph_def, name='')


    def run_inference_on_image(image):
      """Runs inference on an image.

      Args:
        image: Image file name.

      Returns:
        Nothing
      """
      if not tf.gfile.Exists(image):
        tf.logging.fatal('File does not exist %s', image)
      image_data = tf.gfile.FastGFile(image, 'rb').read()

      # Creates graph from saved GraphDef.
      create_graph()

      with tf.Session() as sess:
        # Some useful tensors:
        # 'softmax:0': A tensor containing the normalized prediction across
        #   1000 labels.
        # 'pool_3:0': A tensor containing the next-to-last layer containing 2048
        #   float description of the image.
        # 'DecodeJpeg/contents:0': A tensor containing a string providing JPEG
        #   encoding of the image.
        # Runs the softmax tensor by feeding the image_data as input to the graph.
        softmax_tensor = sess.graph.get_tensor_by_name('softmax:0')
        predictions = sess.run(softmax_tensor,
                               {'DecodeJpeg/contents:0': image_data})
        predictions = np.squeeze(predictions)

        # Creates node ID --> English string lookup.
        node_lookup = NodeLookup()

        top_k = predictions.argsort()[-FLAGS.num_top_predictions:][::-1]
        for node_id in top_k:
          human_string = node_lookup.id_to_string(node_id)
          score = predictions[node_id]
          print('%s (score = %.5f)' % (human_string, score))


    def main(_):
      image = latest
      run_inference_on_image(image)


    if __name__ == '__main__':
      parser = argparse.ArgumentParser()
      # classify_image_graph_def.pb:
      #   Binary representation of the GraphDef protocol buffer.
      # imagenet_synset_to_human_label_map.txt:
      #   Map from synset ID to a human readable string.
      # imagenet_2012_challenge_label_map_proto.pbtxt:
      #   Text representation of a protocol buffer mapping a label to synset ID.
      parser.add_argument(
          '--model_dir',
          type=str,
          default='/tmp/imagenet',
          help="""\
          Path to classify_image_graph_def.pb,
          imagenet_synset_to_human_label_map.txt, and
          imagenet_2012_challenge_label_map_proto.pbtxt.\
          """
      )
      parser.add_argument(
          '--num_top_predictions',
          type=int,
          default=5,
          help='Display this many predictions.'
      )
      FLAGS, unparsed = parser.parse_known_args()
      tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
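
    The checkpoint part of the original question is not covered by either snippet above, which both load the frozen GraphDef. As a rough sketch only: with TF 1.x, the TF-Slim Inception v3 model definition (tf.contrib.slim) and a matching inception_v3.ckpt checkpoint, restoring the weights and returning the logits for one image could look like the code below. The function name and checkpoint path are placeholders, and the preprocessing (299x299 input scaled to [-1, 1]) and num_classes=1001 are what the published Slim checkpoint expects.

    import numpy as np
    import tensorflow as tf
    from tensorflow.contrib.slim.nets import inception

    slim = tf.contrib.slim

    def logits_from_checkpoint(image_path, checkpoint_path):
      """Builds Inception v3, restores a checkpoint, and returns the raw logits."""
      with tf.Graph().as_default():
        # Decode and preprocess the JPEG the way the Slim Inception models expect.
        image_data = tf.gfile.FastGFile(image_path, 'rb').read()
        image = tf.image.decode_jpeg(image_data, channels=3)
        image = tf.image.convert_image_dtype(image, tf.float32)
        image = tf.image.resize_images(image, [299, 299])
        image = (image - 0.5) * 2.0          # scale from [0, 1] to [-1, 1]
        images = tf.expand_dims(image, 0)    # add a batch dimension

        # Build the model and restore its variables from the checkpoint.
        with slim.arg_scope(inception.inception_v3_arg_scope()):
          logits, _ = inception.inception_v3(images, num_classes=1001,
                                             is_training=False)
        saver = tf.train.Saver()

        with tf.Session() as sess:
          saver.restore(sess, checkpoint_path)
          return np.squeeze(sess.run(logits))

    Applying tf.nn.softmax to the returned logits gives probabilities; note that this checkpoint uses a different label ordering than the frozen classify_image graph, so the NodeLookup mapping above does not apply to it directly.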