Multiple sequential Tensorflow operations in the same Session.run() call

As the title says, I would like to run multiple Tensorflow operations in the same Session.run() call. Specifically, to make the problem more concrete, suppose I want to run several training iterations in a single call.

The standard way of doing this, using multiple Session.run() calls, would be the following:

# Declare the function that we want to minimize
func = ...

# Create the optimizer which will perform a single optimization iteration
optimizer = tf.train.AdamOptimizer().minimize(func)

# Run N optimization iterations
N = 10
with tf.Session() as sess:

    sess.run( tf.global_variables_initializer() )
    for i in range(N):
        sess.run( optimizer )
However, this of course carries some overhead, since we are making multiple session calls. I assume we can eliminate a significant part of that overhead by grouping the operations together somehow. I assume tf.count_up_to is what I should use, but I cannot find any examples demonstrating how to use it in this setting. Could anyone point me in the right direction?

The end goal is to define some compound operation that runs N iterations within a single call, so that the above is converted into something like this:

# Declare the function that we want to minimize
func = ...

# Create the optimizer which will perform a single optimization iteration
optimizer = tf.train.AdamOptimizer().minimize(func)

# Create the compound operation that will run the optimizer 10 times
optimizeNIterations = ?????
with tf.Session() as sess:

    sess.run( tf.global_variables_initializer() )
    sess.run( optimizeNIterations )
EDIT:

As musically_ut pointed out, I can indeed chain the operations together by forcing the problem into using feed dictionaries. However, this feels like a solution to a very specific problem; my core concern remains how to execute operations sequentially within a single session run. Let me give another example of why you would want this.

Suppose now that, in addition to running the optimizer, I also want to retrieve the optimized values, which, say, live in the variable X. If I want to both optimize and obtain the optimized values, I might try to do this:

with tf.Session() as sess:

    sess.run( tf.global_variables_initializer() )
    o, x = sess.run( [ optimizer, X ] )
But in fact this will not work, because the two operations (optimizer, X) are not guaranteed to run sequentially. I essentially need two session calls:

with tf.Session() as sess:

    sess.run( tf.global_variables_initializer() )
    o = sess.run( optimizer )
    x = sess.run( X )

The question is how to merge these two calls into one.
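For this optimize-then-read case specifically, one possible way to merge the two calls, sketched here under the assumption that X is a resource variable in TF 1.x graph mode (so that read_value() observes the update), is to sequence the read of X after the training op with a control dependency:

# Sketch: force the read of X to happen after the optimizer has run.
optimizer = tf.train.AdamOptimizer().minimize(func)
with tf.control_dependencies([optimizer]):
    # This read is sequenced after the optimization step
    X_after_step = X.read_value()

with tf.Session() as sess:
    sess.run( tf.global_variables_initializer() )
    x = sess.run( X_after_step )  # a single call: runs the optimizer, then reads X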

It sounds like you can put whatever operations you want to run several times inside a tf.while_loop. If the operations are independent, you may have to set parallel_iterations to 1, or (better) use control dependencies to sequence the optimizer calls. For example:

import tensorflow as tf

with tf.Graph().as_default():
  opt = tf.train.AdamOptimizer(0.1)
  # Use a resource variable for well-defined read operations
  var = tf.get_variable(name="var", shape=[], use_resource=True)

  def _cond(i, _):
    return tf.less(i, 20)  # 20 iterations

  def _body(i, sequencer):
    # The dummy "sequencer" value makes this iteration wait for the previous one
    with tf.control_dependencies([sequencer]):
      loss = .5 * (var - 10.) ** 2
      print_op = tf.Print(loss, ["Evaluating loss", i, loss])
    with tf.control_dependencies([print_op]):
      train_op = opt.minimize(loss)
    with tf.control_dependencies([train_op]):
      next_sequencer = tf.ones([])
    return i + 1, next_sequencer

  initial_value = var.read_value()
  with tf.control_dependencies([initial_value]):
    _, sequencer = tf.while_loop(cond=_cond, body=_body, loop_vars=[0, 1.])
  with tf.control_dependencies([sequencer]):
    final_value = var.read_value()
  init_op = tf.global_variables_initializer()
  with tf.Session() as session:
    session.run([init_op])
    print(session.run([initial_value, final_value]))
Prints:

2017-12-21 11:40:35.920035: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][0][46.3987083]
2017-12-21 11:40:35.920317: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][1][45.4404]
2017-12-21 11:40:35.920534: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][2][44.4923515]
2017-12-21 11:40:35.920715: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][3][43.55476]
2017-12-21 11:40:35.920905: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][4][42.6277695]
2017-12-21 11:40:35.921084: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][5][41.711544]
2017-12-21 11:40:35.921273: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][6][40.8062363]
2017-12-21 11:40:35.921426: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][7][39.9120026]
2017-12-21 11:40:35.921578: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][8][39.028965]
2017-12-21 11:40:35.921732: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][9][38.1572723]
2017-12-21 11:40:35.921888: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][10][37.2970314]
2017-12-21 11:40:35.922053: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][11][36.4483566]
2017-12-21 11:40:35.922187: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][12][35.6113625]
2017-12-21 11:40:35.922327: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][13][34.7861366]
2017-12-21 11:40:35.922472: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][14][33.9727631]
2017-12-21 11:40:35.922613: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][15][33.1713257]
2017-12-21 11:40:35.922777: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][16][32.3818779]
2017-12-21 11:40:35.922942: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][17][31.6044941]
2017-12-21 11:40:35.923115: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][18][30.8392067]
2017-12-21 11:40:35.923253: I tensorflow/core/kernels/logging_ops.cc:79] [Evaluating loss][19][30.0860634]
[0.36685812, 2.3390481]
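Note the role of the sequencer loop variable: it carries no useful value, but threading it through the loop body with control dependencies forces each iteration's loss evaluation to run after the previous iteration's train_op, and final_value to be read only after the whole loop has finished.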

Since you are not using feed_dict, I assume you are using queue input? In that case, sess.run([optimizer]*N) might work just as well. However, I am not sure the overhead is significant enough to be worth optimizing; do you have any numbers at hand?

I have both scenarios (with and without feed_dict). My understanding is that what you propose is not guaranteed to run sequentially... I just checked: what you suggest does not execute the same operation multiple times; the result is equivalent to sess.run(optimizer).
I do understand what you are saying... I can chain the operations together using feed_dict. But there is another problem that then crops up, which I would like to see addressed (see the edit).
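A quick way to verify that duplicate fetches are deduplicated within a single run, as described above (a standalone sketch for TF 1.x; the names v and inc are illustrative, not from the question):

import tensorflow as tf

v = tf.Variable(0)
inc = tf.assign_add(v, 1)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run([inc] * 5)   # the op is fetched five times...
    print(sess.run(v))    # ...but prints 1, not 5: it executed only once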