TensorFlow: what is the effect of calling TensorArray.close()?


(TensorFlow version: "0.12.head")

The documentation of TensorArray.close says it closes the current TensorArray. What does that mean for the state of the TensorArray? I tried the following code:

import tensorflow as tf
sess = tf.InteractiveSession()
a1 = tf.TensorArray(tf.int32, 2)
a1.close().run()            # close the array right away
a2 = a1.write(0, 0)         # then write to it anyway
a2.close().run()            # close it again
print(a2.read(0).eval())    # and read back the value that was written
and there is no error. So what is close actually used for? I can't figure out what this comment means.


Update

For example,

import tensorflow as tf

sess = tf.InteractiveSession()

N = 3

def cond(i, arr):
    return i < N

def body(i, arr):
    arr = arr.write(i, i)
    i += 1
    return i, arr

arr = tf.TensorArray(tf.int32, N)
_, result_arr = tf.while_loop(cond, body, [0, arr])
reset = arr.close() # corresponds to https://github.com/deepmind/learning-to-learn/blob/6ee52539e83d0452051fe08699b5d8436442f803/meta.py#L370

NUM_EPOCHS = 3
for _ in range(NUM_EPOCHS):
    reset.run() # corresponds to https://github.com/deepmind/learning-to-learn/blob/6ee52539e83d0452051fe08699b5d8436442f803/util.py#L32
    print(result_arr.stack().eval())

Why doesn't arr.close() make the while loop fail? And what is the benefit of calling arr.close() at the beginning of each epoch?

This is a Python op that wraps a native op, and both have help strings, but the native op's help string is more informative. If you look at inspect.getsourcefile(fx_array.close) it will point you to tensorflow/python/ops/tensor_array_ops.py. Inside the implementation you can see that it defers to _tensor_array_close_v2, so you can do this:

> from tensorflow.python.ops import gen_data_flow_ops
> help(gen_data_flow_ops._tensor_array_close_v2)
Delete the TensorArray from its resource container.  This enables
the user to close and release the resource in the middle of a step/run.
The same docstring also appears under TensorArrayCloseV2.
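As a quick check (a minimal sketch against the same graph-mode API as the question, not something from the original answer), the operation returned by TensorArray.close() should show up in the graph as that native op type:

import tensorflow as tf

ta = tf.TensorArray(tf.float32, 1)
close_op = ta.close()
# The Python wrapper builds a native close node in the graph.
print(close_op.type)   # expected: 'TensorArrayCloseV2'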

Looking at the kernel source, you can see that TensorArrayCloseOp is the implementation registered for TensorArrayCloseV2, and it carries more information:

// Delete the TensorArray from its resource container.  This enables
// the user to close and release the resource in the middle of a step/run.
// TODO(ebrevdo): decide whether closing the grad op should happen
// here or on the python side.
class TensorArrayCloseOp : public OpKernel {
 public:
  explicit TensorArrayCloseOp(OpKernelConstruction* context)
      : OpKernel(context) {}

  void Compute(OpKernelContext* ctx) override {
    TensorArray* tensor_array;
    OP_REQUIRES_OK(ctx, GetTensorArray(ctx, &tensor_array));
    core::ScopedUnref unref(tensor_array);
    // Instead of deleting this TA from the ResourceManager, we just
    // clear it away and mark it as closed.  The remaining memory
    // consumed store its mutex and handle Tensor.  This will be
    // cleared out at the end of the step anyway, so it's fine to keep
    // it around until the end of the step.  Further calls to the
    // TensorArray will fail because TensorArray checks internally to
    // see if it is closed or not.

That description seems inconsistent with the behavior you are seeing, so it may be a bug.
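One way to see the case the docstring is actually describing, closing "in the middle of a step/run", is to force the close to run before a later operation inside a single session.run call. This is a minimal sketch against the graph-mode API used in the question; the control dependency is my addition and not part of either code base:

import tensorflow as tf

sess = tf.InteractiveSession()
ta = tf.TensorArray(tf.int32, 2)
close_op = ta.close()
# Force the close to run first, then try to use the array in the *same* run.
with tf.control_dependencies([close_op]):
    ta_after = ta.write(0, 0)
# Per the kernel comment above, this write is expected to fail, because the
# array has already been marked closed earlier in the same step.
print(sess.run(ta_after.read(0)))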

The TensorArray that is closed in the Learning to Learn example is not the original TensorArray passed to the while loop:

# original array (fx_array) declared here
fx_array = tf.TensorArray(tf.float32, size=len_unroll + 1,
                          clear_after_read=False)
# new array (fx_array) returned here
_, fx_array, x_final, s_final = tf.while_loop(
    cond=lambda t, *_: t < len_unroll,
    body=time_step,
    loop_vars=(0, fx_array, x, state),
    parallel_iterations=1,
    swap_memory=True,
    name="unroll")

If the TensorArray really were closed, this would fail, because the loss op tries to run pack() on the closed array.
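To make the distinction concrete, here is a minimal sketch (same graph-mode API as the question; the variable names are just illustrative): tf.while_loop hands back a new TensorArray wrapper, so which object close() is called on depends on whether the original name was rebound, as it is with fx_array above.

import tensorflow as tf

N = 3
arr = tf.TensorArray(tf.int32, N)

def body(i, ta):
    return i + 1, ta.write(i, i)

_, result_arr = tf.while_loop(lambda i, ta: i < N, body, [0, arr])

print(arr is result_arr)                # False: while_loop returns a new wrapper
reset_original = arr.close()            # the question's code closes via the pre-loop object
reset_loop_output = result_arr.close()  # learning-to-learn closes the rebound, post-loop one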

Thanks @Yaroslav Bulatov. Reading the docstring makes me even more confused. It says this enables the user to close and release the resource in the middle of a step/run, and that further calls to the TensorArray will fail because the TensorArray checks internally whether it has been closed. I have updated my question. In Learning to Learn, TensorArray.close is called at the beginning of every epoch, and I don't understand what "further calls to the TensorArray will fail" after close means here. I'm also confused because that usage seems to contradict the documentation. Based on @sirfz's answer, the state of the TensorArray seems to be independent in each run: session.run([reset, loss])
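A minimal sketch of what this last comment is getting at, assuming the same graph-mode API as the question: the TensorArray's per-step state does not appear to survive across session.run calls, and session.run([reset, loss]) simply folds the reset into the same run as the loss (here stacked stands in for the loss fetch):

import tensorflow as tf

sess = tf.InteractiveSession()
N = 3
arr = tf.TensorArray(tf.int32, N)
_, result_arr = tf.while_loop(lambda i, ta: i < N,
                              lambda i, ta: (i + 1, ta.write(i, i)),
                              [0, arr])
reset = arr.close()
stacked = result_arr.stack()

# reset in its own run, read in a separate run: the read still succeeds,
# matching the question's Update, since each run builds (and tears down)
# its own TensorArray state.
sess.run(reset)
print(sess.run(stacked))

# reset bundled into the same run as the read, like session.run([reset, loss]);
# there is no guaranteed order between the two fetches inside a single run,
# so whether the read sees a closed array is exactly the experiment here.
_, values = sess.run([reset, stacked])
print(values)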