
Python: How do I save/restore a model after training?

Tags: python, tensorflow

After training a model in TensorFlow:

  • How do you save the trained model?
  • How do you later restore this saved model?

  • For TensorFlow versions < 0.11.0RC1:

    The checkpoints that are saved contain values for the `Variable`s in your model, not the model/graph itself, which means that the graph should be the same when you restore the checkpoint.

    Here's an example for linear regression where there is a training loop that saves variable checkpoints and an evaluation section that restores variables saved in a prior run and computes predictions. Of course, you can also restore the variables and continue training, if you'd like.

    x = tf.placeholder(tf.float32)
    y = tf.placeholder(tf.float32)
    
    w = tf.Variable(tf.zeros([1, 1], dtype=tf.float32))
    b = tf.Variable(tf.ones([1, 1], dtype=tf.float32))
    y_hat = tf.add(b, tf.matmul(x, w))
    
    ...more setup for optimization and what not...
    
    saver = tf.train.Saver()  # defaults to saving all variables - in this case w and b
    
    with tf.Session() as sess:
        sess.run(tf.initialize_all_variables())
        if FLAGS.train:
            for i in xrange(FLAGS.training_steps):
                ...training loop...
                if (i + 1) % FLAGS.checkpoint_steps == 0:
                    saver.save(sess, FLAGS.checkpoint_dir + 'model.ckpt',
                               global_step=i+1)
        else:
            # Here's where you're restoring the variables w and b.
            # Note that the graph is exactly as it was when the variables were
            # saved in a prior training run.
            ckpt = tf.train.get_checkpoint_state(FLAGS.checkpoint_dir)
            if ckpt and ckpt.model_checkpoint_path:
                saver.restore(sess, ckpt.model_checkpoint_path)
            else:
                ...no checkpoint found...
    
            # Now you can run the model to get predictions
            batch_x = ...load some data...
            predictions = sess.run(y_hat, feed_dict={x: batch_x})
    

    There is documentation for `Variable`s, which covers saving and restoring, and documentation for setting up the `Saver` as well. The model has two parts: the model definition, saved by `Supervisor` as `graph.pbtxt` in the model directory, and the numerical values of tensors, saved into checkpoint files like `model.ckpt-1003418`.

    The model definition can be restored using `tf.import_graph_def`, and the weights can be restored using `Saver`.

    However, `Saver` uses a special collection holding the list of variables that is attached to the model graph, and this collection does not get initialized by `import_graph_def`, so you can't use the two together at the moment (it's on our roadmap to fix). For now, you have to use Ryan Sepassi's approach: manually construct a graph with identical node names, and use `Saver` to load the weights into it.


    (或者,您可以通过使用
    import\u graph\u def
    ,手动创建变量,并使用
    tf.为每个变量向集合(tf.GraphKeys.variables,variable)
    添加变量,然后使用
    Saver
    )进行破解)

    As Yaroslav said, you can hack restoring from a graph_def and checkpoint by importing the graph, manually creating the variables, and then using a Saver.

    I implemented this for my personal use, so I'm sharing the code here.

    Link:


    (This is, of course, a hack, and there is no guarantee that models saved this way will stay readable in future versions of TensorFlow.)
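
    A minimal sketch of that hack, under illustrative assumptions: the file name `graph.pb`, the variable name `w`, and its shape are made up, and `tf.GraphKeys.VARIABLES` is the pre-0.12 name of today's `GLOBAL_VARIABLES` collection.

    import tensorflow as tf

    # Import the saved graph definition (a GraphDef protobuf).
    with open('graph.pb', 'rb') as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

    # Manually re-create a Variable matching a node of the imported graph.
    # collections=[] avoids automatic registration, so the explicit
    # add_to_collection call below is what makes it visible to Saver.
    w = tf.Variable(tf.zeros([1, 1]), name='w', collections=[])
    tf.add_to_collection(tf.GraphKeys.VARIABLES, w)

    saver = tf.train.Saver([w])
    with tf.Session() as sess:
        saver.restore(sess, 'model.ckpt')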

    You can also check out [this project], which provides `save` and `restore` methods that make it easy to manage models. It takes a few parameters, including one that lets you control how frequently your models are backed up.

    If it is an internally saved model, you just specify a restorer for all variables as

    restorer = tf.train.Saver(tf.all_variables())
    
    and use it to restore variables in the current session:

    restorer.restore(self._sess, model_file)
    
    For an external model, you need to specify the mapping from its variable names to your variable names. You can view the model's variable names using the command

    python /path/to/tensorflow/tensorflow/python/tools/inspect_checkpoint.py --file_name=/path/to/pretrained_model/model.ckpt
    
    The inspect_checkpoint.py script can be found in the './tensorflow/python/tools' folder of the TensorFlow source.
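
    As a hedged sketch of such a mapping: `tf.train.Saver` accepts a dictionary mapping checkpoint variable names to your variables. The names 'ext/weights' and 'ext/bias' below are hypothetical; substitute the names printed by inspect_checkpoint.py.

    # Your variables, named however you like in your own graph
    w = tf.Variable(tf.zeros([10, 5]), name='my_weights')
    b = tf.Variable(tf.zeros([5]), name='my_bias')

    # Keys are the names stored in the external checkpoint
    restorer = tf.train.Saver({'ext/weights': w, 'ext/bias': b})
    with tf.Session() as sess:
        restorer.restore(sess, '/path/to/pretrained_model/model.ckpt')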

    To specify the mapping, you can use my repository, which contains a set of classes and scripts for training and retraining different models; it includes an example of retraining ResNet models.

    In TensorFlow version 0.11.0RC1 (and after), you can save and restore your model directly by calling `tf.train.export_meta_graph` and `tf.train.import_meta_graph`.
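
    A minimal sketch of that pair of calls; the checkpoint prefix 'my-model' is illustrative. Note that `Saver.save` already calls `export_meta_graph` for you and writes 'my-model.meta' next to the checkpoint:

    saver = tf.train.Saver()
    with tf.Session() as sess:
        # ... train ...
        saver.save(sess, 'my-model')  # writes my-model.* plus my-model.meta

    # Later, in a fresh process: rebuild the graph from the .meta file,
    # then restore the weights with the Saver that import_meta_graph returns.
    new_saver = tf.train.import_meta_graph('my-model.meta')
    with tf.Session() as sess:
        new_saver.restore(sess, 'my-model')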

    Save the model, then restore it; full examples of this approach appear in the code further below. As described in this GitHub issue, instead of

    saver.restore('my_model_final.ckpt')
    
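    you must pass the session as the first argument. A sketch of the corrected call, assuming an open session `sess`:

    saver.restore(sess, 'my_model_final.ckpt')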

    You can also take an easier approach.

    Step 1: initialize all variables
    Step 2: save the session in a `Saver` and save the model
    Step 3: restore the model
    Step 4: check your variable

    While running in a different Python instance, use:

    with tf.Session() as sess:
        # Restore the latest checkpoint. Do not re-run the variable
        # initializer after restoring, or it would overwrite the
        # restored values with fresh initial values.
        saver.restore(sess, tf.train.latest_checkpoint('saved_model/.'))
    
        # Get default graph (supply your custom graph if you have one)
        graph = tf.get_default_graph()
    
        # It will give tensor object
        W1 = graph.get_tensor_by_name('W1:0')
    
        # To get the value (numpy array)
        W1_value = sess.run(W1)
    

    In most cases, saving and restoring from disk with a `tf.train.Saver` is your best option:

    ... # build your model
    saver = tf.train.Saver()
    
    with tf.Session() as sess:
        ... # train the model
        saver.save(sess, "/tmp/my_great_model")
    
    with tf.Session() as sess:
        saver.restore(sess, "/tmp/my_great_model")
        ... # use the model
    
    You can also save/restore the graph structure itself (see the MetaGraph documentation for details). By default, `Saver` saves the graph structure into a `.meta` file. You can call `import_meta_graph()` to restore it. It restores the graph structure and returns a `Saver` that you can use to restore the model's state:

    saver = tf.train.import_meta_graph("/tmp/my_great_model.meta")
    
    with tf.Session() as sess:
        saver.restore(sess, "/tmp/my_great_model")
        ... # use the model
    
    However, there are cases where you need something much faster. For example, if you implement early stopping, you want to save a checkpoint every time the model improves during training (as measured on the validation set), and then, if there is no progress for some time, roll back to the best model. If you save the model to disk every time it improves, it will tremendously slow down training. The trick is to save the variable states to memory, then just restore them later:

    ... # build your model
    
    # get a handle on the graph nodes we need to save/restore the model
    graph = tf.get_default_graph()
    gvars = graph.get_collection(tf.GraphKeys.GLOBAL_VARIABLES)
    assign_ops = [graph.get_operation_by_name(v.op.name + "/Assign") for v in gvars]
    init_values = [assign_op.inputs[1] for assign_op in assign_ops]
    
    with tf.Session() as sess:
        ... # train the model
    
        # when needed, save the model state to memory
        gvars_state = sess.run(gvars)
    
        # when needed, restore the model state
        feed_dict = {init_value: val
                     for init_value, val in zip(init_values, gvars_state)}
        sess.run(assign_ops, feed_dict=feed_dict)
    

    Quick explanation: when you create a variable `X`, TensorFlow automatically creates an assignment operation `X/Assign` to set the variable's initial value. Instead of creating placeholders and extra assignment ops (which would just make the graph messy), we use these existing assignment ops. The first input of each assignment op is a reference to the variable it is supposed to initialize, and the second input (`assign_op.inputs[1]`) is the initial value. So in order to set any value we want (instead of the initial value), we need to use a `feed_dict` and replace the initial value. Yes, TensorFlow lets you feed a value for any op, not just for placeholders, so this works fine.
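
    A small sketch (not from the original answer) of that last point: in graph mode you can feed a value for any feedable tensor, not just a placeholder, and the fed value short-circuits the computation that would normally produce it.

    import tensorflow as tf

    a = tf.constant(2.0)
    b = a * 3.0          # intermediate tensor, not a placeholder
    c = b + 1.0

    with tf.Session() as sess:
        print(sess.run(c))                       # 7.0, computed from a
        print(sess.run(c, feed_dict={b: 10.0}))  # 11.0, b's value overridden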

    Here are my simple solutions for the two basic cases, which differ on whether you want to load the graph from a file or build it during runtime.

    This answer holds for TensorFlow 0.12+ (including 1.0).

    Rebuilding the graph in code (saving, then loading), and also loading the graph from the file itself; both variants are shown in the snippets below. When using the load-graph-from-file technique, make sure all your layers/variables have explicitly set unique names. Otherwise TensorFlow will make the names unique itself, and they will then be different from the names stored in the file. This is not a problem with the previous technique, because the names are "mangled" the same way in both loading and saving.
    # Step 2 (from the step-by-step answer above): save the session
    model_saver = tf.train.Saver()
    
    # Train the model and save it in the end
    model_saver.save(session, "saved_models/CNN_New.ckpt")
    
    # Step 3: restore the model
    with tf.Session(graph=graph_cnn) as session:
        model_saver.restore(session, "saved_models/CNN_New.ckpt")
        print("Model restored.")
        print('Initialized')
    
        # Step 4: check the variable
        W1 = session.run(W1)
        print(W1)
    
    # Rebuilding the graph in code: saving
    graph = ... # build the graph
    saver = tf.train.Saver()  # create the saver after the graph
    with ... as sess:  # your session object
        saver.save(sess, 'my-model')
    
    # Rebuilding the graph in code: loading
    graph = ... # build the graph
    saver = tf.train.Saver()  # create the saver after the graph
    with ... as sess:  # your session object
        saver.restore(sess, tf.train.latest_checkpoint('./'))
        # now you can use the graph, continue training or whatever
    
    # Loading the graph from the file as well: saving
    graph = ... # build the graph
    
    for op in [ ... ]:  # operators you want to use after restoring the model
        tf.add_to_collection('ops_to_restore', op)
    
    saver = tf.train.Saver()  # create the saver after the graph
    with ... as sess:  # your session object
        saver.save(sess, 'my-model')
    
    # Loading the graph from the file as well: loading
    with ... as sess:  # your session object
        saver = tf.train.import_meta_graph('my-model.meta')
        saver.restore(sess, tf.train.latest_checkpoint('./'))
        ops = tf.get_collection('ops_to_restore')  # here are your operators in the same order in which you saved them to the collection
    
    import tensorflow as tf
    
    #Prepare to feed input, i.e. feed_dict and placeholders
    w1 = tf.placeholder("float", name="w1")
    w2 = tf.placeholder("float", name="w2")
    b1= tf.Variable(2.0,name="bias")
    feed_dict ={w1:4,w2:8}
    
    #Define a test operation that we will restore
    w3 = tf.add(w1,w2)
    w4 = tf.multiply(w3,b1,name="op_to_restore")
    sess = tf.Session()
    sess.run(tf.global_variables_initializer())
    
    #Create a saver object which will save all the variables
    saver = tf.train.Saver()
    
    #Run the operation by feeding input
    print(sess.run(w4, feed_dict))
    # Prints 24, which is (w1 + w2) * b1 = (4 + 8) * 2
    
    #Now, save the graph
    saver.save(sess, 'my_test_model',global_step=1000)
    
    import tensorflow as tf
    
    sess=tf.Session()    
    #First let's load meta graph and restore weights
    saver = tf.train.import_meta_graph('my_test_model-1000.meta')
    saver.restore(sess,tf.train.latest_checkpoint('./'))
    
    
    # Access saved Variables directly
    print(sess.run('bias:0'))
    # This will print 2, which is the value of bias that we saved
    
    
    # Now, let's access and create placeholders variables and
    # create feed-dict to feed new data
    
    graph = tf.get_default_graph()
    w1 = graph.get_tensor_by_name("w1:0")
    w2 = graph.get_tensor_by_name("w2:0")
    feed_dict ={w1:13.0,w2:17.0}
    
    #Now, access the op that you want to run. 
    op_to_restore = graph.get_tensor_by_name("op_to_restore:0")
    
    print(sess.run(op_to_restore, feed_dict))
    # This will print 60, which is (13 + 17) * 2
    
    # Some graph defined up here with specific names
    
    saver = tf.train.Saver()
    save_file = 'model.ckpt'
    
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        saver.save(sess, save_file)
    
    # Same graph defined up here
    
    saver = tf.train.Saver()
    save_file = './' + 'model.ckpt' # String addition used for emphasis
    
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        saver.restore(sess, save_file)
    
    import tensorflow as tf
    
    # define the TensorFlow network and do some training
    x = tf.placeholder("float", name="x")
    w = tf.Variable(2.0, name="w")
    b = tf.Variable(0.0, name="bias")
    
    h = tf.multiply(x, w)
    y = tf.add(h, b, name="y")
    sess = tf.Session()
    sess.run(tf.global_variables_initializer())
    
    # save the model
    export_path =  './savedmodel'
    builder = tf.saved_model.builder.SavedModelBuilder(export_path)
    
    tensor_info_x = tf.saved_model.utils.build_tensor_info(x)
    tensor_info_y = tf.saved_model.utils.build_tensor_info(y)
    
    prediction_signature = (
      tf.saved_model.signature_def_utils.build_signature_def(
          inputs={'x_input': tensor_info_x},
          outputs={'y_output': tensor_info_y},
          method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME))
    
    builder.add_meta_graph_and_variables(
      sess, [tf.saved_model.tag_constants.SERVING],
      signature_def_map={
          tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY:
              prediction_signature 
      },
      )
    builder.save()
    
    import tensorflow as tf
    sess=tf.Session() 
    signature_key = tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY
    input_key = 'x_input'
    output_key = 'y_output'
    
    export_path =  './savedmodel'
    meta_graph_def = tf.saved_model.loader.load(
               sess,
              [tf.saved_model.tag_constants.SERVING],
              export_path)
    signature = meta_graph_def.signature_def
    
    x_tensor_name = signature[signature_key].inputs[input_key].name
    y_tensor_name = signature[signature_key].outputs[output_key].name
    
    x = sess.graph.get_tensor_by_name(x_tensor_name)
    y = sess.graph.get_tensor_by_name(y_tensor_name)
    
    y_out = sess.run(y, {x: 3.0})
    
    # -------------------------
    # -----  Toy Context  -----
    # -------------------------
    import tensorflow as tf
    
    
    class Net(tf.keras.Model):
        """A simple linear model."""
    
        def __init__(self):
            super(Net, self).__init__()
            self.l1 = tf.keras.layers.Dense(5)
    
        def call(self, x):
            return self.l1(x)
    
    
    def toy_dataset():
        inputs = tf.range(10.0)[:, None]
        labels = inputs * 5.0 + tf.range(5.0)[None, :]
        return (
            tf.data.Dataset.from_tensor_slices(dict(x=inputs, y=labels)).repeat().batch(2)
        )
    
    
    def train_step(net, example, optimizer):
        """Trains `net` on `example` using `optimizer`."""
        with tf.GradientTape() as tape:
            output = net(example["x"])
            loss = tf.reduce_mean(tf.abs(output - example["y"]))
        variables = net.trainable_variables
        gradients = tape.gradient(loss, variables)
        optimizer.apply_gradients(zip(gradients, variables))
        return loss
    
    
    # ----------------------------
    # -----  Create Objects  -----
    # ----------------------------
    
    net = Net()
    opt = tf.keras.optimizers.Adam(0.1)
    dataset = toy_dataset()
    iterator = iter(dataset)
    ckpt = tf.train.Checkpoint(
        step=tf.Variable(1), optimizer=opt, net=net, iterator=iterator
    )
    manager = tf.train.CheckpointManager(ckpt, "./tf_ckpts", max_to_keep=3)
    
    # ----------------------------
    # -----  Train and Save  -----
    # ----------------------------
    
    ckpt.restore(manager.latest_checkpoint)
    if manager.latest_checkpoint:
        print("Restored from {}".format(manager.latest_checkpoint))
    else:
        print("Initializing from scratch.")
    
    for _ in range(50):
        example = next(iterator)
        loss = train_step(net, example, opt)
        ckpt.step.assign_add(1)
        if int(ckpt.step) % 10 == 0:
            save_path = manager.save()
            print("Saved checkpoint for step {}: {}".format(int(ckpt.step), save_path))
            print("loss {:1.2f}".format(loss.numpy()))
    
    
    # ---------------------
    # -----  Restore  -----
    # ---------------------
    
    # In another script, re-initialize objects
    opt = tf.keras.optimizers.Adam(0.1)
    net = Net()
    dataset = toy_dataset()
    iterator = iter(dataset)
    ckpt = tf.train.Checkpoint(
        step=tf.Variable(1), optimizer=opt, net=net, iterator=iterator
    )
    manager = tf.train.CheckpointManager(ckpt, "./tf_ckpts", max_to_keep=3)
    
    # Re-use the manager code above ^
    
    ckpt.restore(manager.latest_checkpoint)
    if manager.latest_checkpoint:
        print("Restored from {}".format(manager.latest_checkpoint))
    else:
        print("Initializing from scratch.")
    
    for _ in range(50):
        example = next(iterator)
        # Continue training or evaluate etc.
    
    
    # Create some variables.
    v1 = tf.get_variable("v1", shape=[3], initializer = tf.zeros_initializer)
    v2 = tf.get_variable("v2", shape=[5], initializer = tf.zeros_initializer)
    
    inc_v1 = v1.assign(v1+1)
    dec_v2 = v2.assign(v2-1)
    
    # Add an op to initialize the variables.
    init_op = tf.global_variables_initializer()
    
    # Add ops to save and restore all the variables.
    saver = tf.train.Saver()
    
    # Later, launch the model, initialize the variables, do some work, and save the
    # variables to disk.
    with tf.Session() as sess:
      sess.run(init_op)
      # Do some work with the model.
      inc_v1.op.run()
      dec_v2.op.run()
      # Save the variables to disk.
      save_path = saver.save(sess, "/tmp/model.ckpt")
      print("Model saved in path: %s" % save_path)
    
    tf.reset_default_graph()
    
    # Create some variables.
    v1 = tf.get_variable("v1", shape=[3])
    v2 = tf.get_variable("v2", shape=[5])
    
    # Add ops to save and restore all the variables.
    saver = tf.train.Saver()
    
    # Later, launch the model, use the saver to restore variables from disk, and
    # do some work with the model.
    with tf.Session() as sess:
      # Restore variables from disk.
      saver.restore(sess, "/tmp/model.ckpt")
      print("Model restored.")
      # Check the values of the variables
      print("v1 : %s" % v1.eval())
      print("v2 : %s" % v2.eval())
    
    import tensorflow as tf
    from tensorflow.saved_model import tag_constants
    
    with tf.Graph().as_default():
        with tf.Session() as sess:
            ...
    
            # Saving
            inputs = {
                "batch_size_placeholder": batch_size_placeholder,
                "features_placeholder": features_placeholder,
                "labels_placeholder": labels_placeholder,
            }
            outputs = {"prediction": model_output}
            tf.saved_model.simple_save(
                sess, 'path/to/your/location/', inputs, outputs
            )
    
    restored_graph = tf.Graph()
    with restored_graph.as_default():
        with tf.Session() as sess:
            tf.saved_model.loader.load(
                sess,
                [tag_constants.SERVING],
                'path/to/your/location/',
            )
            batch_size_placeholder = restored_graph.get_tensor_by_name('batch_size_placeholder:0')
            features_placeholder = restored_graph.get_tensor_by_name('features_placeholder:0')
            labels_placeholder = restored_graph.get_tensor_by_name('labels_placeholder:0')
            prediction = restored_graph.get_tensor_by_name('dense/BiasAdd:0')
    
            sess.run(prediction, feed_dict={
                batch_size_placeholder: some_value,
                features_placeholder: some_other_value,
                labels_placeholder: another_value
            })
    
    import os
    import shutil
    import numpy as np
    import tensorflow as tf
    from tensorflow.python.saved_model import tag_constants
    
    
    def model(graph, input_tensor):
        """Create the model which consists of
        a bidirectional rnn (GRU(10)) followed by a dense classifier
    
        Args:
            graph (tf.Graph): Tensors' graph
            input_tensor (tf.Tensor): Tensor fed as input to the model
    
        Returns:
            tf.Tensor: the model's output layer Tensor
        """
        cell = tf.nn.rnn_cell.GRUCell(10)
        with graph.as_default():
            ((fw_outputs, bw_outputs), (fw_state, bw_state)) = tf.nn.bidirectional_dynamic_rnn(
                cell_fw=cell,
                cell_bw=cell,
                inputs=input_tensor,
                sequence_length=[10] * 32,
                dtype=tf.float32,
                swap_memory=True,
                scope=None)
            outputs = tf.concat((fw_outputs, bw_outputs), 2)
            mean = tf.reduce_mean(outputs, axis=1)
            dense = tf.layers.dense(mean, 5, activation=None)
    
            return dense
    
    
    def get_opt_op(graph, logits, labels_tensor):
        """Create optimization operation from model's logits and labels
    
        Args:
            graph (tf.Graph): Tensors' graph
            logits (tf.Tensor): The model's output without activation
            labels_tensor (tf.Tensor): Target labels
    
        Returns:
            tf.Operation: the operation performing a stem of Adam optimizer
        """
        with graph.as_default():
            with tf.variable_scope('loss'):
                loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
                        logits=logits, labels=labels_tensor, name='xent'),
                        name="mean-xent"
                        )
            with tf.variable_scope('optimizer'):
                opt_op = tf.train.AdamOptimizer(1e-2).minimize(loss)
            return opt_op
    
    
    if __name__ == '__main__':
        # Set random seed for reproducibility
        # and create synthetic data
        np.random.seed(0)
        features = np.random.randn(64, 10, 30)
        labels = np.eye(5)[np.random.randint(0, 5, (64,))]
    
        graph1 = tf.Graph()
        with graph1.as_default():
            # Random seed for reproducibility
            tf.set_random_seed(0)
            # Placeholders
            batch_size_ph = tf.placeholder(tf.int64, name='batch_size_ph')
            features_data_ph = tf.placeholder(tf.float32, [None, None, 30], 'features_data_ph')
            labels_data_ph = tf.placeholder(tf.int32, [None, 5], 'labels_data_ph')
            # Dataset
            dataset = tf.data.Dataset.from_tensor_slices((features_data_ph, labels_data_ph))
            dataset = dataset.batch(batch_size_ph)
            iterator = tf.data.Iterator.from_structure(dataset.output_types, dataset.output_shapes)
            dataset_init_op = iterator.make_initializer(dataset, name='dataset_init')
            input_tensor, labels_tensor = iterator.get_next()
    
            # Model
            logits = model(graph1, input_tensor)
            # Optimization
            opt_op = get_opt_op(graph1, logits, labels_tensor)
    
            with tf.Session(graph=graph1) as sess:
                # Initialize variables
                tf.global_variables_initializer().run(session=sess)
                for epoch in range(3):
                    batch = 0
                    # Initialize dataset (could feed epochs in Dataset.repeat(epochs))
                    sess.run(
                        dataset_init_op,
                        feed_dict={
                            features_data_ph: features,
                            labels_data_ph: labels,
                            batch_size_ph: 32
                        })
                    values = []
                    while True:
                        try:
                            if epoch < 2:
                                # Training
                                _, value = sess.run([opt_op, logits])
                                print('Epoch {}, batch {} | Sample value: {}'.format(epoch, batch, value[0]))
                                batch += 1
                            else:
                                # Final inference
                                values.append(sess.run(logits))
                                print('Epoch {}, batch {} | Final inference | Sample value: {}'.format(epoch, batch, values[-1][0]))
                                batch += 1
                        except tf.errors.OutOfRangeError:
                            break
                # Save model state
                print('\nSaving...')
                cwd = os.getcwd()
                path = os.path.join(cwd, 'simple')
                shutil.rmtree(path, ignore_errors=True)
                inputs_dict = {
                    "batch_size_ph": batch_size_ph,
                    "features_data_ph": features_data_ph,
                    "labels_data_ph": labels_data_ph
                }
                outputs_dict = {
                    "logits": logits
                }
                tf.saved_model.simple_save(
                    sess, path, inputs_dict, outputs_dict
                )
                print('Ok')
        # Restoring
        graph2 = tf.Graph()
        with graph2.as_default():
            with tf.Session(graph=graph2) as sess:
                # Restore saved values
                print('\nRestoring...')
                tf.saved_model.loader.load(
                    sess,
                    [tag_constants.SERVING],
                    path
                )
                print('Ok')
                # Get restored placeholders
                labels_data_ph = graph2.get_tensor_by_name('labels_data_ph:0')
                features_data_ph = graph2.get_tensor_by_name('features_data_ph:0')
                batch_size_ph = graph2.get_tensor_by_name('batch_size_ph:0')
                # Get restored model output
                restored_logits = graph2.get_tensor_by_name('dense/BiasAdd:0')
                # Get dataset initializing operation
                dataset_init_op = graph2.get_operation_by_name('dataset_init')
    
                # Initialize restored dataset
                sess.run(
                    dataset_init_op,
                    feed_dict={
                        features_data_ph: features,
                        labels_data_ph: labels,
                        batch_size_ph: 32
                    }
    
                )
                # Compute inference for both batches in dataset
                restored_values = []
                for i in range(2):
                    restored_values.append(sess.run(restored_logits))
                    print('Restored values: ', restored_values[i][0])
    
        # Check if original inference and restored inference are equal
        valid = all((v == rv).all() for v, rv in zip(values, restored_values))
        print('\nInferences match: ', valid)
    
    $ python3 save_and_restore.py
    
    Epoch 0, batch 0 | Sample value: [-0.13851789 -0.3087595   0.12804556  0.20013677 -0.08229901]
    Epoch 0, batch 1 | Sample value: [-0.00555491 -0.04339041 -0.05111827 -0.2480045  -0.00107776]
    Epoch 1, batch 0 | Sample value: [-0.19321944 -0.2104792  -0.00602257  0.07465433  0.11674127]
    Epoch 1, batch 1 | Sample value: [-0.05275984  0.05981954 -0.15913513 -0.3244143   0.10673307]
    Epoch 2, batch 0 | Final inference | Sample value: [-0.26331693 -0.13013336 -0.12553    -0.04276478  0.2933622 ]
    Epoch 2, batch 1 | Final inference | Sample value: [-0.07730117  0.11119192 -0.20817074 -0.35660955  0.16990358]
    
    Saving...
    INFO:tensorflow:Assets added to graph.
    INFO:tensorflow:No assets to write.
    INFO:tensorflow:SavedModel written to: b'/some/path/simple/saved_model.pb'
    Ok
    
    Restoring...
    INFO:tensorflow:Restoring parameters from b'/some/path/simple/variables/variables'
    Ok
    Restored values:  [-0.26331693 -0.13013336 -0.12553    -0.04276478  0.2933622 ]
    Restored values:  [-0.07730117  0.11119192 -0.20817074 -0.35660955  0.16990358]
    
    Inferences match:  True
    
    import tensorflow as tf
    import os
    
    tf.enable_eager_execution()
    
    checkpoint_directory = "/tmp/training_checkpoints"
    checkpoint_prefix = os.path.join(checkpoint_directory, "ckpt")
    
    checkpoint = tf.train.Checkpoint(optimizer=optimizer, model=model)
    status = checkpoint.restore(tf.train.latest_checkpoint(checkpoint_directory))
    for _ in range(num_training_steps):
      optimizer.minimize( ... )  # Variables will be restored on creation.
    status.assert_consumed()  # Optional sanity checks.
    checkpoint.save(file_prefix=checkpoint_prefix)
    
    self.saver = tf.train.Saver()
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        ...
        self.saver.save(sess, filename)
    
    saver = tf.train.import_meta_graph(filename)
    name = 'name given when you saved the file' 
    with tf.Session() as sess:
          saver.restore(sess, name)
          print(sess.run('W1:0')) #example to retrieve by variable name
    
    saver = tf.train.Saver() 
    saver.save(sess, 'path of save/fileName.ckpt')
    
    saver = tf.train.Saver()
    saver.restore(sess, tf.train.latest_checkpoint('path of save/'))
    sess.run(....) 
    
    # Save the model
    model.save('path_to_my_model.h5')
    
    new_model = tensorflow.keras.models.load_model('path_to_my_model.h5')
    
    tensorflow (1.13.1)
    tensorflow-gpu (1.13.1)
    
    model.save("model.h5")
    
    model = tf.keras.models.load_model("model.h5")
    
    tf.keras.models.save_model(model_name, filepath, save_format)
    
    model = tf.keras.models.load_model(filepath)
    
    import tensorflow as tf
    from tensorflow import keras
    mnist = tf.keras.datasets.mnist
    
    #import data
    (x_train, y_train),(x_test, y_test) = mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0
    
    # create a model
    def create_model():
      model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(512, activation=tf.nn.relu),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation=tf.nn.softmax)
        ])
    # compile the model
      model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
      return model
    
    # Create a basic model instance
    model=create_model()
    
    model.fit(x_train, y_train, epochs=1)
    loss, acc = model.evaluate(x_test, y_test,verbose=1)
    print("Original model, accuracy: {:5.2f}%".format(100*acc))
    
    # Save entire model to a HDF5 file
    model.save('./model_path/my_model.h5')
    
    # Recreate the exact same model, including weights and optimizer.
    new_model = keras.models.load_model('./model_path/my_model.h5')
    loss, acc = new_model.evaluate(x_test, y_test)
    print("Restored model, accuracy: {:5.2f}%".format(100*acc))
    
    model.fit(x_train, y_train, epochs=5)
    loss, acc = model.evaluate(x_test, y_test,verbose=1)
    print("Original model, accuracy: {:5.2f}%".format(100*acc))
    
    # Save the weights
    model.save_weights('./checkpoints/my_checkpoint')
    
    # Restore the weights
    model = create_model()
    model.load_weights('./checkpoints/my_checkpoint')
    
    loss,acc = model.evaluate(x_test, y_test)
    print("Restored model, accuracy: {:5.2f}%".format(100*acc))
    
    # include the epoch in the file name. (uses `str.format`)
    checkpoint_path = "training_2/cp-{epoch:04d}.ckpt"
    checkpoint_dir = os.path.dirname(checkpoint_path)
    
    cp_callback = tf.keras.callbacks.ModelCheckpoint(
        checkpoint_path, verbose=1, save_weights_only=True,
        # Save weights every 5 epochs.
        period=5)
    
    model = create_model()
    model.save_weights(checkpoint_path.format(epoch=0))
    model.fit(train_images, train_labels,
              epochs = 50, callbacks = [cp_callback],
              validation_data = (test_images,test_labels),
              verbose=0)
    
    latest = tf.train.latest_checkpoint(checkpoint_dir)
    
    new_model = create_model()
    new_model.load_weights(latest)
    loss, acc = new_model.evaluate(test_images, test_labels)
    print("Restored model, accuracy: {:5.2f}%".format(100*acc))
    
    import tensorflow as tf
    from tensorflow import keras
    mnist = tf.keras.datasets.mnist
    
    (x_train, y_train),(x_test, y_test) = mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0
    
    # Custom Loss1 (for example) 
    @tf.function() 
    def customLoss1(yTrue,yPred):
      return tf.reduce_mean(yTrue-yPred) 
    
    # Custom Loss2 (for example) 
    @tf.function() 
    def customLoss2(yTrue, yPred):
      return tf.reduce_mean(tf.square(tf.subtract(yTrue,yPred))) 
    
    def create_model():
      model = tf.keras.models.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(512, activation=tf.nn.relu),  
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation=tf.nn.softmax)
        ])
      model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy', customLoss1, customLoss2])
      return model
    
    # Create a basic model instance
    model=create_model()
    
    # Fit and evaluate model 
    model.fit(x_train, y_train, epochs=1)
    loss, acc,loss1, loss2 = model.evaluate(x_test, y_test,verbose=1)
    print("Original model, accuracy: {:5.2f}%".format(100*acc))
    
    model.save("./model.h5")
    
    new_model=tf.keras.models.load_model("./model.h5",custom_objects={'customLoss1':customLoss1,'customLoss2':customLoss2})
    
    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.layers import Input, Lambda
    from tensorflow.keras import Model
    
    def my_fun(a):
      out = tf.tile(a, (1, tf.shape(a)[0]))
      return out
    
    a = Input(shape=(10,))
    #out = tf.tile(a, (1, tf.shape(a)[0]))
    out = Lambda(lambda x : my_fun(x))(a)
    model = Model(a, out)
    
    x = np.zeros((50,10), dtype=np.float32)
    print(model(x).numpy())
    
    model.save('my_model.h5')
    
    #load the model
    new_model=tf.keras.models.load_model("my_model.h5")
    
    new_model = tf.keras.models.load_model("./model.h5"})
    
    import tensorflow as tf
    
    model.save("model_name")
    
    model = tf.keras.models.load_model('model_name')