Python Tensorflow: reshape a tensor and pad some rows with zeros at the end


I am looking for a way to reshape a tensor in Tensorflow. I have a tensor containing sequences of rows, and I want to reshape it so that all rows of a given sequence end up on a single row of the reshaped tensor.

The difficulty is that the sequences have different lengths. In the example below, I know that a sequence has at most 3 rows: the first sequence has 2 rows, the second has 3, and the third has 1.

#Data Tensor
[
[1,1,1],
[2,2,2],
[4,4,4],
[5,5,5],
[6,6,6],
[7,7,7]]

#To be reshaped into
[
[1,1,1,2,2,2,0,0,0],
[4,4,4,5,5,5,6,6,6],
[7,7,7,0,0,0,0,0,0]]

#Argument could be of the form: rows to pad
[1 0 2]

#Or its complementary: sequence length
[2 3 1]
Does anyone know how to do this?

One approach would be to insert zero rows at the right positions in the initial tensor and then apply a simple tf.reshape. But I don't know how to insert the zero rows.


Another approach would be to do it directly while reshaping, and I don't know how to do that either.
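For illustration, the first idea (pad each sequence with zero rows, then flatten it into one output row) can be sketched outside of TensorFlow in plain NumPy. The helper name pad_and_reshape is hypothetical, not from the question:

```python
import numpy as np

def pad_and_reshape(data, lengths):
    # lengths[i] = number of rows in sequence i; each sequence is
    # padded to max(lengths) rows with zeros, then flattened.
    max_len = max(lengths)
    _, width = data.shape
    out = np.zeros((len(lengths), max_len * width), dtype=data.dtype)
    start = 0
    for i, l in enumerate(lengths):
        out[i, :l * width] = data[start:start + l].ravel()
        start += l
    return out

data = np.array([[x, x, x] for x in [1, 2, 4, 5, 6, 7]])
print(pad_and_reshape(data, [2, 3, 1]))
```

This reproduces the desired output above for the sequence lengths [2, 3, 1].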

This should work, and it is easy to extend (e.g., with different kinds of padding values). Please let me know if it works as you expect.

import tensorflow as tf

def split_and_pad_tensor(tensor, lengths):
    """
    Input: a rank-2 tensor of shape (A, B) and a collection of lengths that
    sum up to A (otherwise tf.split crashes).
    The tensor is split into len(lengths) tensors of the given lengths, and
    each split is zero-padded at the bottom until all have max(lengths) rows.
    Output is a rank-2 tensor of shape (len(lengths), B*max(lengths)).
    """
    num_splits, max_length = len(lengths), max(lengths)
    splits = tf.split(tensor, lengths, 0)
    # pad's second argument is [[top, bottom], [left, right]]
    padded = tf.stack([tf.pad(s, [[0, max_length - l], [0, 0]])
                       for l, s in zip(lengths, splits)])
    # flatten the last two axes:
    return tf.reshape(padded, [num_splits, tf.shape(tensor)[1] * max_length])

# make some data and test for different valid inputs:
DATA = tf.constant([[x, x, x] for x in [1, 2, 4, 5, 6, 7]])
with tf.Session() as sess:
    for lengths in ([4, 2], [2, 3, 1], [2, 2, 1, 1]):
        print(sess.run(split_and_pad_tensor(DATA, lengths)))
Output:

[[1 1 1 2 2 2 4 4 4 5 5 5]
 [6 6 6 7 7 7 0 0 0 0 0 0]]
[[1 1 1 2 2 2 0 0 0]
 [4 4 4 5 5 5 6 6 6]
 [7 7 7 0 0 0 0 0 0]]
[[1 1 1 2 2 2]
 [4 4 4 5 5 5]
 [6 6 6 0 0 0]
 [7 7 7 0 0 0]]
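The per-sequence padding step can be checked in isolation: tf.pad's second argument lists a (before, after) pair per axis, and NumPy's np.pad takes the same form, so a small sketch (assuming a 2-row sequence and max_length of 3) shows what each split receives:

```python
import numpy as np

seq = np.array([[1, 1, 1], [2, 2, 2]])  # a 2-row sequence
max_length = 3
# pad (max_length - 2) zero rows at the bottom of axis 0, nothing on axis 1
padded = np.pad(seq, [(0, max_length - len(seq)), (0, 0)])
print(padded)
```

Stacking the padded sequences and flattening the last two axes then yields the output rows shown above.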

Pure-TF version with placeholders: the following code has the same functionality as above, but the inputs are placeholders, and a combination of cumsum, map_fn, and gather ops is used to allow fully dynamic shapes:

import tensorflow as tf

class SplitAndPadGraph(object):
    def __init__(self):
        # minimal assumptions on the placeholders' shapes
        data_ph = tf.placeholder(tf.float32, shape=[None, None])
        lengths_ph = tf.placeholder(tf.int32, shape=[None])
        # extract information about input shapes
        data_len = tf.shape(data_ph)[0]
        out_dim0 = tf.shape(lengths_ph)[0]
        out_dim1 = tf.reduce_max(lengths_ph)
        out_dim2 = tf.shape(data_ph)[-1]
        # create a [[x,y,z], ...] tensor, where x=start_idx, y=length, z=pad_size
        start_idxs = tf.concat([[0], tf.cumsum(lengths_ph)], 0)[:-1]
        pads = tf.fill([out_dim0], out_dim1) - lengths_ph
        reconstruction_metadata = tf.stack([start_idxs, lengths_ph, pads], axis=1)
        # pass the xyz tensor to map_fn to create a tensor with the proper indexes,
        # then gather the indexes from data_ph and reshape
        reconstruction_data = tf.map_fn(lambda x: tf.concat([tf.range(x[0], x[0] + x[1]),
                                                             tf.fill([x[2]], data_len)],
                                                            0), reconstruction_metadata)
        output = tf.gather(tf.concat([data_ph, tf.zeros((1, out_dim2))], 0),
                           tf.reshape(reconstruction_data, [out_dim0 * out_dim1]))
        output = tf.reshape(output, [out_dim0, out_dim1 * out_dim2])
        # graph interface to access input and output nodes from outside
        self.data_ph = data_ph
        self.lengths_ph = lengths_ph
        self.output = output

DATA = [[x, x, x] for x in [1, 2, 4, 5, 6, 7]]
g = SplitAndPadGraph()
with tf.Session() as sess:
    for lengths in [[4, 2], [2, 3, 1], [2, 2, 1, 1]]:
        print("lengths =", lengths)
        print(sess.run(g.output, feed_dict={g.data_ph: DATA, g.lengths_ph: lengths}))
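To see what this graph computes, the same index-building logic (start index, length, and pad size per sequence, with padded slots pointing at an appended all-zeros row) can be mirrored in plain NumPy. This is a sketch of the idea, not the answer's code, and the helper name split_and_pad_np is hypothetical:

```python
import numpy as np

def split_and_pad_np(data, lengths):
    # Build gather indices into data; padded slots point at an
    # extra all-zeros row appended after the real data rows.
    data = np.asarray(data)
    n, max_len, width = len(lengths), max(lengths), data.shape[1]
    starts = np.concatenate([[0], np.cumsum(lengths)])[:-1]
    zero_row = data.shape[0]  # index of the appended zero row
    idx = np.full((n, max_len), zero_row)
    for i, (s, l) in enumerate(zip(starts, lengths)):
        idx[i, :l] = np.arange(s, s + l)
    padded = np.vstack([data, np.zeros((1, width), dtype=data.dtype)])
    return padded[idx.ravel()].reshape(n, max_len * width)

data = [[x, x, x] for x in [1, 2, 4, 5, 6, 7]]
print(split_and_pad_np(data, [2, 3, 1]))
```

The gather-from-augmented-data trick is what lets the TF graph avoid any conditional: every output slot is a plain lookup.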
Cheers!
Andrés

Nice! Thanks. No detailed explanation needed, it is very clear. I encapsulated the functionality, added better documentation, and merged in the missing final reshape. That should be it!

Thanks, it works as expected! But when I include this in my model I run into trouble. "lengths" is now a placeholder, and the splits are computed with several ops involving placeholders and variables. It seems I need to use tf.map_fn. Here is one line of the error message when computing padded: TypeError: Tensor objects are not iterable when eager execution is not enabled. To iterate over this tensor use tf.map_fn. I think I need to replace the for loop with tf.map_fn. I went back to your initial version, padded = [tf.pad(s, [[0, max(lengths) - tf.shape(s)[0]], [0, 0]]) ...], where I removed the l from the for loop. That way I do not loop over lengths, which is a placeholder in my model, and I avoid that error.

@Tom this took longer than expected and was more complicated than I would like to admit, but here you go! Let me know if this integrates better with your code.
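The commenter's problem (lengths only known at run time, so Python-level loops over them fail) is exactly why the placeholder version builds indices with graph ops. The same gather indices can also be built fully vectorized with a boolean mask, which maps onto graph ops such as tf.sequence_mask and tf.cumsum; here is a NumPy sketch of that idea, with the hypothetical helper name split_and_pad_masked:

```python
import numpy as np

def split_and_pad_masked(data, lengths):
    data = np.asarray(data)
    lengths = np.asarray(lengths)
    n, max_len, width = len(lengths), lengths.max(), data.shape[1]
    # mask[i, j] is True where slot j of output row i holds a real data row
    mask = np.arange(max_len)[None, :] < lengths[:, None]
    # running count of True slots gives the source row for each slot
    idx = np.cumsum(mask.ravel()) - 1
    idx[~mask.ravel()] = data.shape[0]  # padded slots -> appended zero row
    padded = np.vstack([data, np.zeros((1, width), dtype=data.dtype)])
    return padded[idx].reshape(n, max_len * width)

data = np.array([[x, x, x] for x in [1, 2, 4, 5, 6, 7]])
print(split_and_pad_masked(data, [2, 3, 1]))
```

No Python iteration over lengths occurs here, so the equivalent graph version would avoid the "Tensor objects are not iterable" error entirely.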