Python: How to handle dynamic input sizes in TensorFlow


I have a placeholder declared like this:

self._sentence_lengths = tf.placeholder('int32', shape=[None], name='sen_len')
I also have an embeddings tensor whose shape is (?, 300).

I want to split the embeddings according to the sentence lengths:

sentences = tf.split(embeddings, self._sentence_lengths)

However, I get the following error:

ValueError: Cannot infer num from shape Tensor("joint_architecture_1/encoder_1/sen_len:0", shape=(?,), dtype=int32)
Originally, I created self._sentence_lengths like this (and everything worked):

self._sentence_lengths = tf.placeholder('int32', shape=[self.batch_size], name='sen_len')

The reason I want to change this to a dynamic approach is that I don't want to be tied to the batch size. During training I can use a batch size of, say, 128, but at inference time I need smaller batches.

What I have been doing so far is changing self.batch_size when restoring the model, but that does not seem elegant.


Is there a way around this?
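
For context, here is a minimal sketch (my own, not from the question) that reproduces the error: tf.split is happy with a static Python list of sizes, because the number of output tensors is then known at graph-construction time, but it cannot take a tensor whose length is only known at run time.

import tensorflow as tf

embeddings = tf.placeholder('float32', shape=[None, 300], name='embeddings')
sen_len = tf.placeholder('int32', shape=[None], name='sen_len')

# A static list of sizes works: three output tensors, known when the graph is built.
static_split = tf.split(embeddings, [2, 1, 3], axis=0)

# A dynamic tensor of sizes does not: tf.split cannot infer how many tensors to produce.
try:
    dynamic_split = tf.split(embeddings, sen_len, axis=0)
except ValueError as e:
    print(e)  # Cannot infer num from shape Tensor("sen_len:0", shape=(?,), dtype=int32)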

I managed to solve this with a small trick. I created a constant tensor that holds the contents of the variable. The size of that tensor caps the size of sen_len, but if we pick it large enough this should not be a problem.

Here is a toy example of my solution:

import numpy as np
import tensorflow as tf

embeds_raw = tf.constant(np.array([
    [1, 1],
    [1, 1],
    [2, 2],
    [3, 3],
    [3, 3],
    [3, 3],
    [4, 4],
    [4, 4],
    [4, 4],
    [4, 4],
], dtype='float32'))
# These play the role of embeddings.
embeds = tf.Variable(initial_value=embeds_raw)
# This variable plays the role of a container. We chose zeros because they are neutral to addition.
container_variable = tf.zeros([512], dtype=tf.int32, name='container_variable')
# Our placeholder for sentence lengths.
sen_len = tf.placeholder('int32', shape=[None], name='sen_len')
# Getting the length of the longest sentence.
max_l = tf.reduce_max(sen_len)
# Number of sentences.
nbr_sentences = tf.shape(sen_len)[0]
# We pad the sentence length var to match that of the container variable.
padded_sen_len = tf.pad(sen_len, [[0, 512 - nbr_sentences]], 'CONSTANT')
# We add the sentence lengths to our container variable.
added_container_variable = tf.add(container_variable, padded_sen_len)
# Create a TensorArray that will contain the split.
u1 = tf.TensorArray(dtype=tf.float32, size=512, clear_after_read=False)
# Split the embeddings by the sentence lengths.
u1 = u1.split(embeds, added_container_variable)

# Loop variables. An index and a variable containing our concatenated arrays.
i = tf.constant(0, shape=(), dtype='int32', name='i')
x = tf.constant(0, shape=[1, 2], dtype=tf.float32)

def condition(_i, _):
    """Checking whether _i is less than the number of sentences."""
    return tf.less(_i, nbr_sentences)

def body(_i, _x):
    """Padding and concatenating with _x."""
    temp = tf.pad(u1.read(_i), [[0, max_l - sen_len[_i]], [0, 0]], 'CONSTANT')
    return _i + 1, tf.concat([_x, temp], 0)

# Looping.
idx, padded_concatenated_sentences = tf.while_loop(
    condition,
    body,
    [i, x],
    shape_invariants=[tf.TensorShape([]), tf.TensorShape([None, 2])]
)

# Getting rid of the first row since it contains 0s.
padded_concatenated_sentences = padded_concatenated_sentences[1:]

# Reshaping to obtain the desired results. In our case 2 would be the word embedding dimensionality.
reshaped_elements = tf.reshape(padded_concatenated_sentences, [nbr_sentences, max_l, 2])

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sents = sess.run(reshaped_elements, feed_dict={sen_len: [2, 1, 3, 4]})
    print(sents)
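
Two points worth noting about this design: the constant 512 caps the number of sentences a single batch can contain (both the container and the TensorArray have 512 slots, most of which stay empty), so it must be chosen larger than any batch you expect. And with sen_len fed as [2, 1, 3, 4], the script prints an array of shape (4, 4, 2): four sentences, each zero-padded up to the longest length, 4.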

You can have dynamically sized tensors, but not a dynamic number of tensors. A dynamic number of tensors would mean the graph structure depends on the output of a run call, whereas the graph structure must be independent of the data being fed. So you have to design the dynamic computation without split (perhaps with dynamic_partition and a fixed number of output tensors, some of which end up empty).

Actually, I am considering using a fixed-size variable large enough to hold the size of my placeholder, doing the computation, and then truncating the useless parts.
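
To illustrate the dynamic_partition route mentioned above, here is a minimal sketch under my own assumptions (the MAX_SENTENCES cap and the per-row partition ids are illustrative, not from the thread). Instead of per-sentence lengths, each embedding row is tagged with the id of the sentence it belongs to, and unused partitions simply come back empty:

import tensorflow as tf

embeddings = tf.placeholder('float32', shape=[None, 2], name='embeddings')
# One partition id per embedding row: row i belongs to sentence partitions[i].
partitions = tf.placeholder('int32', shape=[None], name='partitions')

# num_partitions must be a Python int, so the graph has a fixed number of
# output tensors; partitions beyond the actual sentence count come back empty.
MAX_SENTENCES = 128
sentences = tf.dynamic_partition(embeddings, partitions, MAX_SENTENCES)

with tf.Session() as sess:
    parts = sess.run(sentences, feed_dict={
        embeddings: [[1, 1], [1, 1], [2, 2], [3, 3]],
        partitions: [0, 0, 1, 2],
    })
    print(parts[0])  # the two rows of sentence 0; parts[3:] are empty arrays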