
Python Tensorflow: Copying an existing graph into a new graph multiple times


I want to paste an existing TensorFlow graph into a new graph.

Suppose I create a graph that computes
y = tanh(x @ w)
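A minimal sketch of how such a graph might be built (the shapes and names here are assumptions, chosen to match the (3, 4) inputs fed later):

import numpy as np
import tensorflow as tf

# Hypothetical reconstruction of the original graph-building code:
# x is a placeholder, w is a variable, and y = tanh(x @ w)
x = tf.placeholder(dtype=tf.float32, shape=(None, 4), name='x')
w = tf.Variable(initial_value=np.random.randn(4, 5), dtype=tf.float32, name='w')
y = tf.tanh(tf.matmul(x, w))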

Great. Now suppose I have lost the code that generated this graph, but I still have access to the tensors (x, y). Now I want to take this graph (with the current value of w) and copy it twice into a new graph (the two paths should share the same w), so that I can compute
d = tf.reduce_sum((tanh(x1 @ w) - tanh(x2 @ w)) ** 2)
by adding the lines:

# Starting with access to tensors: x, y
<SOMETHING HERE>
d = tf.reduce_sum((y1-y2)**2)
val_x1 = np.random.randn(3, 4)
val_x2 = np.random.randn(3, 4)
val_d = sess.run([d], feed_dict = {x1: val_x1, x2: val_x2})
What should I fill in for <SOMETHING HERE> to make this work? (Without, obviously, recreating the first graph.)

There is a module, the Graph Editor (tf.contrib.graph_editor), that can help with this kind of operation. Its main drawback is that you cannot have a running session while you modify the graph. You can, however, checkpoint the session, modify the graph, and then restore it back if needed.

The problem with what you want is that you basically need to replicate a subgraph, except that you do not want to replicate the variables. So you can simply exclude the variable op types (mainly Variable and VariableV2, and probably also VarHandleOp, although I found a few more in tensorflow/python/training/device_setter.py). You can do it with a function like this:

import tensorflow as tf

# Receives the outputs to recalculate and the input replacements
def replicate_subgraph(outputs, mappings):
    # Types of operation that should not be replicated
    # Taken from tensorflow/python/training/device_setter.py
    NON_REPLICABLE = {'Variable', 'VariableV2', 'AutoReloadVariable',
                      'MutableHashTable', 'MutableHashTableV2',
                      'MutableHashTableOfTensors', 'MutableHashTableOfTensorsV2',
                      'MutableDenseHashTable', 'MutableDenseHashTableV2',
                      'VarHandleOp', 'BoostedTreesEnsembleResourceHandleOp'}
    # Find subgraph ops
    ops = tf.contrib.graph_editor.get_backward_walk_ops(outputs, stop_at_ts=mappings.keys())
    # Exclude non-replicable operations
    ops_replicate = [op for op in ops if op.type not in NON_REPLICABLE]
    # Make a view of the subgraph
    sgv = tf.contrib.graph_editor.make_view(*ops_replicate)
    # Make the copy
    _, info = tf.contrib.graph_editor.copy_with_input_replacements(sgv, mappings)
    # Return new outputs
    return info.transformed(outputs)
For example, here is a case similar to yours (I edited it a bit so it is easy to see that the output is correct, because the second value is ten times the first):

import tensorflow as tf

def some_function(x):
    w = tf.Variable(initial_value=tf.random_normal((5,)), dtype=tf.float32)
    return 2 * (x * w)

x1 = tf.placeholder(shape=(), dtype=tf.float32, name='X1')
x2 = tf.placeholder(shape=(), dtype=tf.float32, name='X2')
y1 = some_function(x1)
y2, = replicate_subgraph([y1], {x1: x2})
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print(*sess.run([y1, y2], feed_dict={x1: 1, x2: 10}), sep='\n')

Output:

[ 2.3356955   2.277849    0.58513653  2.0919807  -0.15102367]
[23.356955  22.77849    5.851365  20.919807  -1.5102367]
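As a quick sanity check (my addition, not part of the original answer), you can confirm that the replication did not duplicate the variable:

# After replicate_subgraph, the graph should still contain exactly one variable
print([v.name for v in tf.global_variables()])  # e.g. ['Variable:0']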

EDIT:

Here is another solution, using tf.make_template. This requires you to actually have the code of the function, but it is a cleaner and "more official" way of supporting subgraph reuse:

import tensorflow as tf

def some_function(x):
    w = tf.get_variable('W', (5,), initializer=tf.random_normal_initializer())
    # Or if the variable is only local and not trainable
    # w = tf.Variable(initial_value=tf.random_normal((5,)), dtype=tf.float32, trainable=False)
    return 2 * (x * w)

x1 = tf.placeholder(shape=(), dtype=tf.float32, name='X1')
x2 = tf.placeholder(shape=(), dtype=tf.float32, name='X2')
some_function_tpl = tf.make_template('some_function', some_function)
y1 = some_function_tpl(x1)
y2 = some_function_tpl(x2)
init = tf.global_variables_initializer()
with tf.Session() as sess:
    sess.run(init)
    print(*sess.run([y1, y2], feed_dict={x1: 1, x2: 10}), sep='\n')
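Both calls to the template reuse a single variable; a quick way to confirm this (again my addition, assuming the snippet above has run):

# Both template calls should share one variable under the 'some_function' scope
print([v.name for v in tf.global_variables()])
# Expected something like: ['some_function/W:0']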

What are you starting from? A GraphDef? Or just some graph in memory, from which you copy a subgraph (delimited by inputs and outputs) and connect it somewhere else? Also, do you need to do this while building the graph, or within an active session?

I start with access to the tensors x and y, nothing else (I edited the question to clarify, thanks).

Nice! Thank you. The only thing still unclear is how to do this when the "subgraph" already has values associated with its variables (e.g. you have restored the state with new_saver = tf.train.import_meta_graph(…) and new_saver.restore(sess, …)), since it seems you cannot do this within an active session.

@Peter Yes, with the Graph Editor you have to save and close the session before editing, and then restore it afterwards. But the variable objects are actually the same, so if you do that, everything will work fine.
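A rough sketch of that save/edit/restore flow (the checkpoint path is a placeholder, and this assumes the replicate_subgraph helper and the x1, x2, y1 tensors from the example above):

import tensorflow as tf

saver = tf.train.Saver()

# 1. Save the variable values and close the session
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    saver.save(sess, '/tmp/model.ckpt')  # placeholder path

# 2. With no session active, edit the graph
y2, = replicate_subgraph([y1], {x1: x2})

# 3. Open a new session and restore; the variable objects are unchanged,
#    so the restored values drive both the original and the copied path
with tf.Session() as sess:
    saver.restore(sess, '/tmp/model.ckpt')
    print(*sess.run([y1, y2], feed_dict={x1: 1, x2: 10}), sep='\n')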