Same LSTM (GRU) implementation, different results (PyTorch and Keras)

Hi, I'm working with time-series data and there is a problem that has been bothering me. I built the same neural network in both Keras and PyTorch, but the results differ.

That's not the only issue. The Keras model gives the same result every time I train it, but the PyTorch model only matches the Keras results in roughly 10% of runs. Most of the time its results are very bad (not that I'm thrilled with the Keras results either). Please point me in the right direction. Thanks.

The Keras model:

# imports used by this snippet (from the rest of my script)
from tensorflow import keras
from tensorflow.keras import layers

# adam_optim is an Adam optimizer instance defined elsewhere in my script
model_input = keras.Input(shape=(x_train_T.shape[1], 8))
x_1 = layers.GRU(75, return_sequences=True)(model_input)
x_1 = layers.GRU(90)(x_1)
x_1 = layers.Dense(95)(x_1)
x_1 = layers.Dense(15)(x_1)
model = keras.models.Model(model_input, x_1)
model.compile(optimizer=adam_optim, loss="mse", metrics=["accuracy"])
model.fit(x_train_T, y_train, batch_size=1, epochs=100)
The PyTorch model:

import numpy as np
import torch
import torch.nn as nn

class GRU(nn.Module):
    def __init__(self,input_size, hidden_size_1, hidden_size_2, hidden_size_3, output_size, num_layers, device):
        super(GRU, self).__init__()
        self.input_size = input_size
        self.hidden_size_1 = hidden_size_1
        self.hidden_size_2 = hidden_size_2
        self.hidden_size_3 = hidden_size_3
        self.num_layers = num_layers
        self.device = device
        
        self.gru_1 = nn.GRU(input_size, hidden_size_1, num_layers, batch_first=True)
        self.gru_2 = nn.GRU(hidden_size_1, hidden_size_2, num_layers, batch_first=True)
        self.fc_1 = nn.Linear(hidden_size_2, hidden_size_3)
        self.fc_out = nn.Linear(hidden_size_3, output_size)  # was output_dim (a global); use the constructor argument

    def forward(self, x):
        input_X = x
        h_1 = torch.zeros(self.num_layers, input_X.size(0), self.hidden_size_1, device=self.device)
        h_2 = torch.zeros(self.num_layers, input_X.size(0), self.hidden_size_2, device=self.device)

        out_gru_1 , h_1 = self.gru_1(input_X, h_1)
        out_gru_2 , h_2 = self.gru_2(out_gru_1, h_2) 
        out_Dense_1 = self.fc_1(out_gru_2[:,-1,:]) 
        out_Dense_out = self.fc_out(out_Dense_1)

        return out_Dense_out
##############################
input_dim = 8
hidden_dim_1 = 75
hidden_dim_2 = 90
hidden_dim_3 = 95
num_layers = 1
output_dim = 15
num_epochs = 100

model = GRU(input_size=input_dim, hidden_size_1=hidden_dim_1, hidden_size_2=hidden_dim_2, hidden_size_3=hidden_dim_3, output_size=output_dim, num_layers=num_layers, device=device).to(device)
criterion = torch.nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

import time
for t in range(num_epochs ):
  start_time = time.time()
  loss_p = []
  for i in range(x_train_T.size(0)):
    inputs, target = x_train_T[i:i+1] , y_train[i:i+1]
    inputs = torch.as_tensor(inputs, dtype=torch.float32).to(device)  # as_tensor: the slice may already be a tensor
    target = torch.as_tensor(target, dtype=torch.float32).to(device)
    y_train_pred = model(inputs)

    loss_ = criterion(y_train_pred, target)

    optimizer.zero_grad()
    loss_.backward()
    optimizer.step()

    loss_p.append(loss_.item())  # .item() gives a plain float; appending the tensor itself breaks np.array() below
  loss_p = np.array(loss_p)
  loss_P = loss_p.sum(0) / loss_p.shape[0]
  end_time = time.time()
  print("Epoch ", t, "MSE: ", loss_P.item() , "///epoch time: {0} seconds".format(round(end_time - start_time, 2)))
##############################
In the rare good runs, the loss of both models starts at about 0.09 and ends at about 0.015. In most runs the Keras loss behaves the same as above, but the PyTorch loss stays stuck around 0.08.

I.e., sometimes the PyTorch model trains and sometimes it does not.
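
From what I have read, fixing the random seeds should at least make the PyTorch runs repeatable from run to run (this is only a minimal sketch of what I mean, it is not in my script above), though I don't expect it to make PyTorch match Keras:

import random
import numpy as np
import torch

def set_seed(seed=0):
    # Seed the Python, NumPy and PyTorch RNGs; the value 0 is arbitrary
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)

set_seed(0)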

I think the PyTorch layers should be initialized the same way as the Keras layers. But how?

The GRU layer in Keras is initialized like this:

def __init__(units, activation='tanh', recurrent_activation='sigmoid', use_bias=True, kernel_initializer='glorot_uniform', recurrent_initializer='orthogonal', bias_initializer='zeros', kernel_regularizer=None, recurrent_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, recurrent_constraint=None, bias_constraint=None, dropout=0.0, recurrent_dropout=0.0, return_sequences=False, return_state=False, go_backwards=False, stateful=False, unroll=False, time_major=False, reset_after=True, **kwargs)
And the Dense layer:

def __init__(units, activation=None, use_bias=True, kernel_initializer='glorot_uniform', bias_initializer='zeros', kernel_regularizer=None, bias_regularizer=None, activity_regularizer=None, kernel_constraint=None, bias_constraint=None, **kwargs)
How do I initialize the layers the same way in PyTorch?
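
From the docs, my guess at the mapping is glorot_uniform -> nn.init.xavier_uniform_, orthogonal -> nn.init.orthogonal_, zeros -> nn.init.zeros_, applied to the GRU input weights, recurrent weights and biases respectively. Below is a minimal sketch of what I have in mind (the init_like_keras helper is my own, not from any library, and I'm not sure it is really equivalent, since PyTorch stores the three gate matrices stacked in one tensor while Keras initializes each gate kernel separately):

import torch.nn as nn

def init_like_keras(model):
    # Rough attempt to mimic the Keras defaults:
    #   kernel_initializer='glorot_uniform' -> xavier_uniform_ on input-to-hidden weights
    #   recurrent_initializer='orthogonal'  -> orthogonal_ on hidden-to-hidden weights
    #   bias_initializer='zeros'            -> zeros_ on all biases
    for name, param in model.named_parameters():
        if name.startswith("gru"):
            if "weight_ih" in name:
                nn.init.xavier_uniform_(param)
            elif "weight_hh" in name:
                nn.init.orthogonal_(param)
            elif "bias" in name:
                nn.init.zeros_(param)
        elif name.startswith("fc"):
            if "weight" in name:
                nn.init.xavier_uniform_(param)
            elif "bias" in name:
                nn.init.zeros_(param)

# applied right after building the model, before training
init_like_keras(model)

Is this the right way to reproduce the Keras defaults, or is there a better/standard way to do it?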