Where is the window size in a PyTorch LSTM model?


I have built an LSTM model that takes input data with 3 features and a rolling window size of 18. The model has the layers shown in the code below. What I don't understand is how the rolling window size of 18 is accounted for in the model when the window size is never passed to it as a parameter. And if the model only ever receives one row of input at a time, isn't that the same as using a window size of 1?
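The rolling windows are built from the raw series in the usual way; a simplified sketch of what I mean (make_windows and the random array are placeholders, not the actual preprocessing):

import numpy as np
import torch

def make_windows(data, window_size=18):
    # Slice a (n_rows, n_features) array into overlapping windows;
    # each window is 18 consecutive rows, and the label is the value
    # of the first feature immediately after the window.
    pairs = []
    for i in range(len(data) - window_size):
        seq = torch.tensor(data[i:i + window_size], dtype=torch.float32)     # shape (18, 3)
        label = torch.tensor(data[i + window_size, 0], dtype=torch.float32)  # next value
        pairs.append((seq, label))
    return pairs

train_data = make_windows(np.random.rand(100, 3))  # stand-in for the real series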

import torch
import torch.nn as nn
import torch.nn.functional as F

class LSTMnetwork(nn.Module):
    def __init__(self, input_size=3, hidden_size1=24, hidden_size2=50, hidden_size3=20, output_size=1):
        super().__init__()
        self.hidden_size1 = hidden_size1
        self.hidden_size2 = hidden_size2
        self.hidden_size3 = hidden_size3

        # Add an LSTM and dropout layer:
        self.lstm1 = nn.LSTM(input_size, hidden_size1)
        self.dropout1 = nn.Dropout(p=0.2)

        # Add a second LSTM and dropout layer:
        self.lstm2 = nn.LSTM(hidden_size1, hidden_size2)
        self.dropout2 = nn.Dropout(p=0.2)

        # Add two fully-connected layers:
        self.fc1 = nn.Linear(hidden_size2, hidden_size3)
        self.fc2 = nn.Linear(hidden_size3, output_size)

        # Initialize (h0, c0) for the first LSTM:
        self.hidden1 = (torch.zeros(1, 1, self.hidden_size1),
                        torch.zeros(1, 1, self.hidden_size1))

        # Initialize (h0, c0) for the second LSTM:
        self.hidden2 = (torch.zeros(1, 1, self.hidden_size2),
                        torch.zeros(1, 1, self.hidden_size2))

    def forward(self, seq):
        # Reshape to (seq_len, batch=1, input_size) before each LSTM:
        lstm1_out, self.hidden1 = self.lstm1(seq.view(len(seq), 1, -1), self.hidden1)
        dropout1 = self.dropout1(lstm1_out)
        lstm2_out, self.hidden2 = self.lstm2(dropout1.view(len(dropout1), 1, -1), self.hidden2)
        dropout2 = self.dropout2(lstm2_out)
        fc1_out = F.relu(self.fc1(dropout2))
        fc2_out = self.fc2(fc1_out)
        # Return only the output at the last time step:
        return fc2_out[-1]
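To make the question concrete, here is how one window is passed through the model (the random tensor is a stand-in for a real 18-row window):

import torch

model = LSTMnetwork()
window = torch.randn(18, 3)   # one rolling window: 18 time steps, 3 features

# Inside forward(), seq.view(len(seq), 1, -1) reshapes this to (18, 1, 3),
# i.e. (seq_len, batch, input_size) for nn.LSTM, so the window size is
# carried by the length of the input sequence rather than by any
# constructor argument.
with torch.no_grad():
    prediction = model(window)
print(prediction.shape)       # torch.Size([1, 1]) -> one forecast per window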