TensorFlow: simple LSTM implementation fails with "Consider casting elements to a supported type" TypeError
I am trying to implement a simple LSTM cell in TensorFlow to compare its performance against another implementation of mine:
x = tf.placeholder(tf.float32, [BATCH_SIZE, SEQ_LENGTH, FEATURE_SIZE])
y = tf.placeholder(tf.float32, [BATCH_SIZE, SEQ_LENGTH, FEATURE_SIZE])

weights = {'out': tf.Variable(tf.random_normal([FEATURE_SIZE, 8 * FEATURE_SIZE, NUM_LAYERS]))}
biases = {'out': tf.Variable(tf.random_normal([4 * FEATURE_SIZE, NUM_LAYERS]))}

def RNN(x, weights, biases):
    x = tf.unstack(x, SEQ_LENGTH, 1)
    lstm_cell = tf.keras.layers.LSTMCell(NUM_LAYERS)
    outputs = tf.keras.layers.RNN(lstm_cell, x, dtype=tf.float32)
    return outputs

pred = RNN(x, weights, biases)

# Define loss and optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=pred, labels=y))
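For reference, the per-timestep computation that an LSTM cell performs can be sketched in plain Python (a scalar toy version with a hypothetical weight dict `w`; illustrative only, not the TensorFlow implementation):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, w):
    # Standard LSTM gate equations, scalar input and state.
    # w holds input/recurrent weights and a bias per gate
    # (i = input, f = forget, o = output, g = candidate).
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"])
    c = f * c_prev + i * g      # new cell state
    h = o * math.tanh(c)        # new hidden state (the cell's output)
    return h, c
```

`tf.keras.layers.LSTMCell` implements exactly this single step (vectorized); the `RNN` layer is what loops it over the time dimension.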
I used an example I found on GitHub and tried to change it to get the desired behavior, but I get the following error message:
TypeError: Failed to convert object of type <class 'tensorflow.python.keras.layers.recurrent.RNN'> to Tensor. Contents: <tensorflow.python.keras.layers.recurrent.RNN object at 0x7fe437248710>. Consider casting elements to a supported type.
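The error arises because `tf.keras.layers.RNN(lstm_cell, x, dtype=tf.float32)` only *constructs* a layer object (`x` is swallowed as a positional constructor argument, not used as input), so `RNN()` returns the layer itself, and `tf.nn.softmax_cross_entropy_with_logits` then fails to convert that layer object to a tensor. The Keras construct-then-call pattern can be sketched with a plain class (hypothetical `MyLayer`, purely illustrative):

```python
class MyLayer:
    # Constructing the layer only stores configuration.
    def __init__(self, units):
        self.units = units

    # Only *calling* the layer on an input runs the computation.
    def __call__(self, x):
        return [v * self.units for v in x]

layer = MyLayer(3)    # like tf.keras.layers.RNN(lstm_cell): a layer object
out = layer([1, 2])   # like layer(x): now we get actual outputs, [3, 6]
```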
Try

outputs = tf.keras.layers.RNN(lstm_cell, dtype=tf.float32)(x)

instead: pass the cell to the RNN constructor, then call the resulting layer on the input. Here is an example:
from tensorflow import keras
from tensorflow.keras.layers import RNN

# MinimalRNNCell is the custom cell class from the tf.keras RNN
# documentation; any object implementing the cell interface works here.

# Let's use this cell in a RNN layer:
cell = MinimalRNNCell(32)
x = keras.Input((None, 5))
layer = RNN(cell)
y = layer(x)

# Here's how to use the cell to build a stacked RNN:
cells = [MinimalRNNCell(32), MinimalRNNCell(64)]
x = keras.Input((None, 5))
layer = RNN(cells)
y = layer(x)
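What `RNN(cells)` does with a list of cells can be illustrated with a toy sketch (hypothetical scalar cells, not the Keras internals): at each timestep the output of cell k becomes the input of cell k+1.

```python
def run_stacked(cells, sequence):
    # One scalar state per cell; each timestep flows through the stack.
    states = [0.0] * len(cells)
    outputs = []
    for x in sequence:
        for k, cell in enumerate(cells):
            x, states[k] = cell(x, states[k])
        outputs.append(x)
    return outputs

def accum_cell(x, state):
    # Trivial "cell": new state = input + old state, output = new state.
    s = x + state
    return s, s

print(run_stacked([accum_cell, accum_cell], [1.0, 2.0]))  # [1.0, 4.0]
```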