TensorFlow: how to get the hidden node representations of a Keras LSTM
I implemented a model in Keras using an LSTM. I am trying to obtain the representations of the LSTM layer's hidden nodes. Is the following the correct way to get the hidden node representations (stored in the activations variable)?
from keras.models import Sequential
from keras.layers import LSTM, Dense

model = Sequential()
model.add(LSTM(50, input_dim=sample_index))
activations = model.predict(testX)
model.add(Dense(no_of_classes, activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adagrad', metrics=['accuracy'])
hist = model.fit(trainX, trainY, validation_split=0.15, nb_epoch=5, batch_size=20, shuffle=True, verbose=1)
Edit: your way of getting the hidden representation is also correct. Reference: after training the model, you can save the model and its weights like this:
from keras.models import model_from_json

json_model = yourModel.to_json()
open('yourModel.json', 'w').write(json_model)
yourModel.save_weights('yourModel.h5', overwrite=True)
Then you can load it back and visualize the LSTM layer's weights like this:
from keras.models import model_from_json
import matplotlib.pyplot as plt

model = model_from_json(open('yourModel.json').read())
model.load_weights('yourModel.h5')
layer = model.layers[1]  # pick the LSTM layer you want to visualize; [1] is just an example
weights = layer.get_weights()  # a list of arrays; an LSTM layer has several weight matrices, not just one kernel and one bias
plt.matshow(weights[0], fignum=100, cmap=plt.cm.gray)  # visualize the first weight matrix
plt.show()
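As a cross-check on the approach above: the usual way to read out a hidden layer's activations is to build a second model that shares the trained layers and outputs the LSTM directly. A minimal sketch with the tf.keras functional API (the shapes 10 timesteps × 8 features, 50 units, and 3 classes are illustrative placeholders, not taken from the question):

```python
# Sketch: a second Keras model that exposes the LSTM layer's activations.
# Assumes tf.keras; all shapes and data below are illustrative placeholders.
import numpy as np
import tensorflow as tf

inputs = tf.keras.Input(shape=(10, 8))                 # 10 timesteps, 8 features
hidden = tf.keras.layers.LSTM(50)(inputs)              # hidden node representation
outputs = tf.keras.layers.Dense(3, activation='softmax')(hidden)

model = tf.keras.Model(inputs, outputs)                # the model you would train
hidden_model = tf.keras.Model(inputs, hidden)          # shares the same LSTM layer

testX = np.random.rand(4, 10, 8).astype('float32')     # 4 dummy sequences
activations = hidden_model.predict(testX)              # shape (4, 50)
print(activations.shape)
```

Because hidden_model reuses the same layer objects, its predictions always reflect the current (trained) weights; nothing needs to be copied over after training.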
Comments: What is wrong with your code? Do you think it is correct, and if not, why not? — The code runs without errors; I am just not sure whether this is the right way to get the LSTM's hidden representations.
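For intuition about what is being extracted: the hidden representation is simply the h_t vector of the LSTM recurrence, which can be computed from the saved weight matrices. A from-scratch NumPy sketch (the weight shapes and the [input, forget, cell, output] gate ordering are assumed to follow Keras conventions; all numbers are random placeholders):

```python
# From-scratch sketch of one LSTM step in NumPy, to show what the "hidden
# node representation" is. Assumed Keras-style shapes: kernel W (features, 4*units),
# recurrent kernel U (units, 4*units), bias b (4*units,),
# with gates ordered [input, forget, cell, output].
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    units = h_prev.shape[-1]
    z = x_t @ W + h_prev @ U + b           # all four gates at once, (batch, 4*units)
    i = sigmoid(z[:, :units])              # input gate
    f = sigmoid(z[:, units:2 * units])     # forget gate
    g = np.tanh(z[:, 2 * units:3 * units]) # candidate cell state
    o = sigmoid(z[:, 3 * units:])          # output gate
    c_t = f * c_prev + i * g               # new cell state
    h_t = o * np.tanh(c_t)                 # hidden node representation
    return h_t, c_t

rng = np.random.default_rng(0)
features, units, batch = 8, 50, 4
W = rng.normal(size=(features, 4 * units))
U = rng.normal(size=(units, 4 * units))
b = np.zeros(4 * units)
h = np.zeros((batch, units))
c = np.zeros((batch, units))
for t in range(10):                        # run 10 timesteps of random input
    x_t = rng.normal(size=(batch, features))
    h, c = lstm_step(x_t, h, c, W, U, b)
print(h.shape)                             # one 50-dim hidden vector per sequence
```

After the loop, h holds the final hidden state for each of the 4 sequences; since h_t = o * tanh(c_t), every entry lies strictly inside (-1, 1).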