
Python: Trainable weights in TensorFlow seq2seq


I want to use a trainable weight in seq2seq.sequence_loss_by_example(), for example:

w = tf.get_variable("w", [batch_size*num_steps])
loss = seq2seq.sequence_loss_by_example([logits_1],
            [tf.reshape(self._targets, [-1])],
            w, vocab_size_all)
However, running this code raises the following error:

seq2seq.py, line 654, in sequence_loss_by_example
    if len(targets) != len(logits) or len(weights) != len(logits):

According to the docstring of this function in seq2seq.py:

weights: list of 1D batch-sized float-Tensors of the same length as logits.

It expects a list of Tensors, but I want to pass a tf.Variable. Is there a way to use a trainable weight in this function?
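For intuition about why the weights must be a per-step list: the per-example loss this function computes is a weighted average of per-step cross-entropies, using one 1-D weight vector per step. Below is a minimal NumPy sketch of that computation (all sizes and values are made up for illustration; this is not the library code itself):

```python
import numpy as np

# Hypothetical sizes, chosen only for illustration.
batch_size, num_steps, vocab_size = 2, 3, 5
rng = np.random.default_rng(0)

# One logits matrix, one target vector, and one 1-D weight vector per step.
logits = [rng.normal(size=(batch_size, vocab_size)) for _ in range(num_steps)]
targets = [rng.integers(0, vocab_size, size=batch_size) for _ in range(num_steps)]
weights = [np.ones(batch_size) for _ in range(num_steps)]

def xent(logit_row, target):
    # Softmax cross-entropy for a single example (numerically stabilized).
    z = logit_row - logit_row.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[target]

# Weighted average over time steps: one loss value per example in the batch.
num = np.zeros(batch_size)
den = np.zeros(batch_size)
for step_logits, step_targets, step_w in zip(logits, targets, weights):
    step_loss = np.array([xent(step_logits[b], step_targets[b])
                          for b in range(batch_size)])
    num += step_w * step_loss
    den += step_w
loss = num / den  # shape: (batch_size,)
```

Because each step contributes its own weight vector, a single flat Variable of length batch_size*num_steps does not match the expected list structure, which is what the len() check in the traceback enforces.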

In TensorFlow, a tf.Variable can be used anywhere a tf.Tensor (of the same element type and shape) is required.

Therefore, if you want to define trainable weights, you can pass a list of tf.Variable objects as the weights argument to seq2seq.sequence_loss_by_example(). For example, you could do the following:

# Defines a list of `num_steps` variables, each 1-D with length `batch_size`.
# (Each variable needs a unique name, or tf.get_variable will raise a reuse error.)
weights = [tf.get_variable("w_%d" % i, [batch_size]) for i in range(num_steps)]

loss = seq2seq.sequence_loss_by_example([logits_1, ..., logits_n],
                                        [targets_1, ..., targets_n],
                                        weights,
                                        vocab_size_all)
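Alternatively, if you prefer to keep a single flat trainable vector of length batch_size*num_steps as in the question, you could split it into num_steps pieces of length batch_size before passing it in (in TensorFlow this would be a split of the Variable; the NumPy sketch below only illustrates the reshaping, with made-up sizes):

```python
import numpy as np

batch_size, num_steps = 2, 3

# Stand-in for the flat trainable vector w of shape [batch_size * num_steps].
w = np.arange(batch_size * num_steps, dtype=np.float32)

# Split into the list-of-per-step-1-D-vectors layout the function expects.
weights = np.split(w, num_steps)

print(len(weights), weights[0].shape)  # 3 (2,)
```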

Can you provide the full traceback (error) message, please?
@Tadhgmdonald Jensen OK, I will do that next time.