
Python: Input 0 of layer gru is incompatible with the layer: expected ndim=3, found ndim=2


I'm using TF 2.5 and following a tutorial step by step, just to build some confidence.

But I don't understand why, after building the model, I get a dimension mismatch: the layer expects a 3-D input but receives only two dimensions:

ValueError: Input 0 of layer gru is incompatible with the layer: expected ndim=3, found ndim=2. Full shape received: (64, 100)
Meanwhile, the tutorial doesn't run into this problem, even with their own dataset, which confuses me.

I then tried various combinations, and even added a dummy dimension to the dataset, but didn't get the result I wanted.

I suspect the problem is in how the dataset is constructed, because:

<PrefetchDataset shapes: ((64, 100), (64, 100)), types: (tf.int64, tf.int64)>
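To see where the error comes from: a recurrent layer such as GRU expects a 3-D input of shape (batch, timesteps, features), while the batches above are 2-D integer-id tensors of shape (64, 100). A minimal sketch (with small toy dimensions, chosen here for illustration) of how an Embedding layer supplies the missing feature axis:

```python
import tensorflow as tf

# 2-D batch of token ids, shaped like the dataset above: (batch, sequence)
batch = tf.zeros((64, 100), dtype=tf.int64)

gru = tf.keras.layers.GRU(8, return_sequences=True)
# Feeding `batch` directly into `gru` raises the ValueError from the question:
# "expected ndim=3, found ndim=2"

# Embedding maps each integer id to a dense vector, adding the feature axis
embedding = tf.keras.layers.Embedding(input_dim=70, output_dim=16)
x = embedding(batch)   # 3-D: (64, 100, 16)
y = gru(x)             # accepted: (64, 100, 8)
print(x.shape, y.shape)
```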

The line you wrote,

self.embedding = GRU(…)

is probably a typo introduced while following the tutorial. Embedding + GRU should fit this shape.
BATCH_SIZE = 64
BUFFER_SIZE = 10000

dataset = (dataset
           .shuffle(BUFFER_SIZE)
           .batch(BATCH_SIZE, drop_remainder=True)
           .prefetch(tf.data.experimental.AUTOTUNE))

vocab_size = len(vocab)
embedding_dim = 256
rnn_units = 1024


class MyModel(tf.keras.Model):
    def __init__(self, vocab_size, embedding_dim, rnn_units):
        super().__init__(self)
        self.embedding = tf.keras.layers.GRU(rnn_units, return_sequences=True, return_state=True)
        self.dense = tf.keras.layers.Dense(vocab_size)
        
    def call(self, inputs, states=None, return_state=False, training=False):
        x = inputs
        x = self.embedding(x, training=training)
        if states is None:
            states = self.gru.get_initial_state(x)
        x = self.dense(x, training=training)
        
        if return_state:
            return x, states
        else:
            return x

model = MyModel(vocab_size=len(ids_from_chars.get_vocabulary()), embedding_dim=embedding_dim, rnn_units=rnn_units)

for input_example_batch, target_example_batch in dataset.take(1):
    example_batch_predictions = model(input_example_batch)
    print (example_batch_predictions.shape, "# (batch_size, sequence_length, vocab_size)")
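Following the answer's suggestion, a corrected sketch of the model might look like the following (an assumption about what the tutorial intends, not the asker's verified fix): it restores the Embedding layer that `self.embedding` was meant to hold, defines the `self.gru` attribute that `call` already references, and actually invokes the GRU with the initial state.

```python
import tensorflow as tf

class MyModel(tf.keras.Model):
    def __init__(self, vocab_size, embedding_dim, rnn_units):
        super().__init__(self)
        # Embedding maps integer ids (batch, seq) -> (batch, seq, embedding_dim),
        # supplying the third dimension the GRU expects
        self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
        self.gru = tf.keras.layers.GRU(rnn_units,
                                       return_sequences=True,
                                       return_state=True)
        self.dense = tf.keras.layers.Dense(vocab_size)

    def call(self, inputs, states=None, return_state=False, training=False):
        x = self.embedding(inputs, training=training)
        if states is None:
            states = self.gru.get_initial_state(x)
        # The original code computed `states` but never called the GRU
        x, states = self.gru(x, initial_state=states, training=training)
        x = self.dense(x, training=training)
        if return_state:
            return x, states
        return x
```

With this structure, a (64, 100) integer batch produces logits of shape (batch_size, sequence_length, vocab_size), matching the print statement above.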