Running the forward prop function after loading a TensorFlow model

After loading a saved TensorFlow model, I'm unable to run the forward propagation function. I can extract the weights successfully, but when I try to pass new input to the forward prop function, it throws an "Attempting to use uninitialized value" error.

My placeholders are as follows:

x = tf.placeholder('int64', [None, 4], name='input')  # Number of examples x features
y = tf.placeholder('int64', [None, 1], name='output')  # Number of examples x output
Forward prop function:

def forwardProp(x, y):

    embedding_mat = tf.get_variable("EM", shape=[total_vocab, e_features], initializer=tf.random_normal_initializer(seed=1))

    # m x words x total_vocab * total_vocab x e_features = m x words x e_features
    # embed_x = tf.tensordot(x, tf.transpose(embedding_mat), axes=[[2], [0]])
    # embed_y = tf.tensordot(y, tf.transpose(embedding_mat), axes=[[2], [0]])

    embed_x = tf.gather(embedding_mat, x)  # m x words x e_features
    embed_y = tf.gather(embedding_mat, y)  # m x words x e_features

    #print("Shape of embed x", embed_x.get_shape())

    W1 = tf.get_variable("W1", shape=[n1, e_features], initializer=tf.random_normal_initializer(seed=1))
    B1 = tf.get_variable("b1", shape=[1, 4, n1], initializer=tf.zeros_initializer())

    # m x words x e_features *  e_features x n1 = m x words x n1
    Z1 = tf.add(tf.tensordot(embed_x, tf.transpose(W1), axes=[[2], [0]]), B1)
    A1 = tf.nn.tanh(Z1)

    W2 = tf.get_variable("W2", shape=[n2, n1], initializer=tf.random_normal_initializer(seed=1))
    B2 = tf.get_variable("B2", shape=[1, 4, n2], initializer=tf.zeros_initializer())

    # m x words x n1 *  n1 x n2 = m x words x n2
    Z2 = tf.add(tf.tensordot(A1, tf.transpose(W2), axes=[[2], [0]]), B2)
    A2 = tf.nn.tanh(Z2)

    W3 = tf.get_variable("W3", shape=[n3, n2], initializer=tf.random_normal_initializer(seed=1))
    B3 = tf.get_variable("B3", shape=[1, 4, n3], initializer=tf.zeros_initializer())

    # m x words x n2  * n2 x n3 = m x words x n3
    Z3 = tf.add(tf.tensordot(A2, tf.transpose(W3), axes=[[2], [0]]), B3)
    A3 = tf.nn.tanh(Z3)

    # Convert m x words x n3 to m x n3

    x_final = tf.reduce_mean(A3, axis=1)
    y_final = tf.reduce_mean(embed_y, axis=1)

    return x_final, y_final

Back prop function:

def backProp(X_index, Y_index):
    x_final, y_final = forwardProp(x, y)
    cost = tf.nn.l2_loss(x_final - y_final)
    optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
    init = tf.global_variables_initializer()
    saver = tf.train.Saver()
    total_batches = math.floor(m/batch_size)


    with tf.Session() as sess:
        sess.run(init)

        for epoch in range(epochs):
            batch_start = 0

            for i in range(int(m/batch_size)):

                x_hot = X_index[batch_start: batch_start + batch_size]
                y_hot = Y_index[batch_start: batch_start + batch_size]
                batch_start += batch_size

                _, temp_cost = sess.run([optimizer, cost], feed_dict={x: x_hot, y: y_hot})

                print("Cost at minibatch:  ", i , " and epoch ", epoch, " is ", temp_cost)

            if m % batch_size != 0:
                x_hot = X_index[batch_start: batch_start+m - (batch_size*total_batches)]
                y_hot = Y_index[batch_start: batch_start+m - (batch_size*total_batches)]
                _, temp_cost = sess.run([optimizer, cost], feed_dict={x: x_hot, y: y_hot})
                print("Cost at minibatch: (beyond floor)  and epoch ", epoch, " is ", temp_cost)


        # Saving the model
        save_path = saver.save(sess, "./model_neural_embeddingV1.ckpt")
        print("Model saved!")

Reloading the model by calling the predict function:

def predict_search():

    # Initialize variables
    total_features = 4
    extra = len(word_to_indice)
    query = input('Enter your query')
    words = word_tokenize(query)
    # For now, this will throw an error if a word is not in the dictionary
    features = [word_to_indice[w.lower()] for w in words]
    len_features = len(features)
    X_query = []
    Y_query = [[0]]  # Dummy variable, we don't care about the Y query while doing prediction
    if len_features < total_features:
        features += [extra] * (total_features - len_features)
    elif len_features > total_features:
        features = features[:total_features]

    X_query.append(features)
    X_query = np.array(X_query)
    print(X_query)
    Y_query = np.array(Y_query)

    # Load the model

    init_global = tf.global_variables_initializer()
    init_local = tf.local_variables_initializer()

    #X_final, Y_final = forwardProp(x, y)

    with tf.Session() as sess:
        sess.run(init_global)
        sess.run(init_local)
        saver = tf.train.import_meta_graph('./model_neural_embeddingV1.ckpt.meta')
        saver.restore(sess, './model_neural_embeddingV1.ckpt')
        print("Model loaded")
        print("Loaded variables are: ")
        print(tf.trainable_variables())
        print(sess.graph.get_operations())
        embedMat = sess.run('EM:0')  # Get the word embedding matrix
        W1 = sess.run('W1:0')
        b1 = sess.run('b1:0')
        W2 = sess.run('W2:0')
        b2 = sess.run('B2:0')
        print(b2)
        W3 = sess.run('W3:0')
        b3 = sess.run('B3:0')

        # This part is not working, calling forward prop gives an 'attempting to use uninitialized value' error.
        X_final = sess.run(forwardProp(x, y), feed_dict={x: X_query, y: Y_query})

        print(X_final)

After loading the graph variables from the meta graph, you accidentally created a whole new set of graph variables with the forwardProp function, effectively duplicating your variables without intending to.

You should refactor your code to follow the best practice of creating all graph variables before creating the session.

For example, create all of your variables in a function named build_graph. You can call build_graph before you create the session, but never afterwards. That will avoid confusion like this.
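
A minimal sketch of that idea (build_graph and its return signature are illustrative assumptions, reusing the forwardProp from the question):

def build_graph():
    # Create every placeholder and variable exactly once, before any
    # session exists; never call this again once a session is open.
    x = tf.placeholder('int64', [None, 4], name='input')
    y = tf.placeholder('int64', [None, 1], name='output')
    x_final, y_final = forwardProp(x, y)  # the question's forwardProp
    return x, y, x_final, y_final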

You should almost always avoid calling functions from inside sess.run like this:

X_final = sess.run(forwardProp(x, y), feed_dict={x: X_query, y: Y_query})
You are asking for bugs that way.

Notice what happens inside forwardProp(x, y): you are creating the TensorFlow graph structure, all the weights and the biases.

But notice that you had already created them with these two lines of code:

saver = tf.train.import_meta_graph('./model_neural_embeddingV1.ckpt.meta')
saver.restore(sess, './model_neural_embeddingV1.ckpt')
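
If you keep import_meta_graph, the fix is to look up the tensors that already exist in the imported graph instead of calling forwardProp again. A rough sketch, assuming the output op had been given an explicit name at build time (for example tf.reduce_mean(A3, axis=1, name='x_final'); 'x_final:0' below is that assumed name) and reusing the X_query and Y_query arrays from predict_search:

with tf.Session() as sess:
    saver = tf.train.import_meta_graph('./model_neural_embeddingV1.ckpt.meta')
    saver.restore(sess, './model_neural_embeddingV1.ckpt')
    graph = tf.get_default_graph()
    # Fetch the existing tensors by name; do not rebuild the graph here.
    x = graph.get_tensor_by_name('input:0')
    y = graph.get_tensor_by_name('output:0')
    x_final = graph.get_tensor_by_name('x_final:0')  # assumed op name
    result = sess.run(x_final, feed_dict={x: X_query, y: Y_query})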
The other option (and probably what you were attempting) is to not use import_meta_graph. You can create all of your TensorFlow ops and variables yourself, then run saver.restore to recover the checkpoint, which maps the checkpoint data into the variables you have already created.
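
A minimal sketch of this second option, assuming the question's forwardProp and the X_query and Y_query arrays are in scope:

# Re-create the graph first; note there is no import_meta_graph here.
x = tf.placeholder('int64', [None, 4], name='input')
y = tf.placeholder('int64', [None, 1], name='output')
x_final, y_final = forwardProp(x, y)

saver = tf.train.Saver()  # matches checkpoint entries to variables by name
with tf.Session() as sess:
    # restore() assigns every variable its checkpointed value, so there
    # is no need to run global_variables_initializer() here.
    saver.restore(sess, './model_neural_embeddingV1.ckpt')
    result = sess.run(x_final, feed_dict={x: X_query, y: Y_query})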

Note that, somewhat confusingly, you actually have these two options in TensorFlow. You have done both of them (importing the graph with all of its ops and variables, AND re-creating the graph). You have to pick one.


I usually go with the latter option: skip import_meta_graph and just re-create the graph programmatically by calling the build_graph function, then call saver.restore to bring in the checkpoint. Naturally, you reuse the same build_graph function for both training and inference, so you end up with the identical graph both times.
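
Sketched end to end, and again assuming the hypothetical build_graph from above, the training and inference scripts would share the graph-building call:

# Training (sketch):
x, y, x_final, y_final = build_graph()
cost = tf.nn.l2_loss(x_final - y_final)
train_op = tf.train.AdamOptimizer(learning_rate=learning_rate).minimize(cost)
saver = tf.train.Saver()
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    # ... run the minibatch loop from backProp here ...
    saver.save(sess, './model_neural_embeddingV1.ckpt')

# Inference (sketch): identical graph, weights restored from the checkpoint.
tf.reset_default_graph()
x, y, x_final, _ = build_graph()
saver = tf.train.Saver()
with tf.Session() as sess:
    saver.restore(sess, './model_neural_embeddingV1.ckpt')
    result = sess.run(x_final, feed_dict={x: X_query, y: Y_query})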

Thanks, that worked! So the basic idea is to first re-create the graph by calling the appropriate function, and then load the saved model, right?

Yes, that's probably the simplest way. Importing the meta graph amounts to re-creating the variables, so you have to pick one or the other, not both, or you end up with two sets of variables as before.