Is there a way to use TensorFlow 2.0 to perform gradient descent on a neural network's output w.r.t. the input?

Tags: python, tensorflow, lstm, recurrent-neural-network, gradient-descent

I have trained a recurrent neural network for time series forecasting, and I am now trying to find the optimal input for the network (i.e., the input that minimizes the output). To do this, I am considering applying gradient descent (e.g., classic SGD) to the input-output function represented by the trained network itself. I managed to build a tf function that computes the gradient of the network output w.r.t. the input, but I don't really know how to use it to implement the descent algorithm. I attach the code I wrote for my network structure and the gradient function below. Any suggestions would be greatly appreciated.

import tensorflow as tf
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.layers import Dense, LSTM, Concatenate, Dropout
from tensorflow.keras import regularizers
from tensorflow.keras import optimizers
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras import backend as K
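#Assumed to be defined elsewhere: lb (look-back window length), ph2 (length of
#the second input window), nbr_features (number of input features), ph1
#(prediction horizon / output size), and the training/validation arrays below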

#Define the first RNN
first_model = Sequential()
first_model.add(LSTM(64, input_shape=(lb, nbr_features), return_sequences=False))

#Define the second RNN
second_model = Sequential()
second_model.add(LSTM(64, input_shape=(ph2, nbr_features-1), return_sequences=False))

#Concatenate the two models' outputs and pass the result to the overall output layer
MergedOutput = Concatenate()([first_model.output, second_model.output])
MergedOutput = Dense(ph1, activation='relu')(MergedOutput)

#Generate the overall model
final_model = Model([first_model.input, second_model.input], MergedOutput)

#Model summary
final_model.summary()

#Compile the model
opt = optimizers.Adam(learning_rate=0.001)
final_model.compile(optimizer=opt, loss='mean_squared_error')

#Train the model 
stopping = EarlyStopping(monitor='val_loss', patience=4)
history = final_model.fit([x_train1, x_train2], y_train, batch_size=40, epochs=300, validation_data=([x_valid1, x_valid2], y_valid), callbacks=[stopping], verbose=2, shuffle=True)

#Define the gradient of the output w.r.t. the inputs
#(note: K.gradients only works in graph mode; under the eager execution that
#TF 2.x enables by default, use tf.GradientTape instead, as sketched below)
grads = K.gradients(final_model.output, final_model.input)

#Define a backend function that computes this gradient
#(final_model.input is already a list for a multi-input model, so it is
#passed directly rather than wrapped in another list)
gradient_func = K.function(final_model.input, grads)
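
To run the descent itself under TF 2.x, one option is to skip the backend function entirely, wrap the candidate inputs in tf.Variables, and let a standard optimizer update them from the gradient of the model output. A minimal sketch, assuming x1_init and x2_init are hypothetical starting arrays with shapes (1, lb, nbr_features) and (1, ph2, nbr_features-1), and that the quantity to minimize is the mean of the model's output vector:

#Hypothetical starting points for the search; any arrays with the right
#shapes (e.g. samples drawn from the training set) would do
x1_var = tf.Variable(x1_init, dtype=tf.float32)
x2_var = tf.Variable(x2_init, dtype=tf.float32)

input_opt = optimizers.Adam(learning_rate=0.01)

for step in range(200):
    with tf.GradientTape() as tape:
        #Variables are watched by the tape automatically; reduce the model
        #output to a scalar so the gradient is well defined
        objective = tf.reduce_mean(final_model([x1_var, x2_var], training=False))
    #Gradient of the objective w.r.t. the inputs only; the trained weights
    #are left untouched
    grads = tape.gradient(objective, [x1_var, x2_var])
    input_opt.apply_gradients(zip(grads, [x1_var, x2_var]))

If the inputs have to stay within a valid range, the same loop can clip them after each update (projected gradient descent), e.g. x1_var.assign(tf.clip_by_value(x1_var, low, high)) with low and high chosen for the problem at hand.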