Python PyTorch: what do the LSTM weights mean?


I have a simple LSTM PyTorch model with the following structure:

LSTM(
  (lstm): LSTM(1, 2)
  (fc): Linear(in_features=2, out_features=1, bias=True)
)
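
For context, a model that produces this repr could be defined roughly as follows (a minimal sketch; the question doesn't show the actual class or training code, so the forward logic here is an assumption):

import torch
import torch.nn as nn

class LSTM(nn.Module):  # the repr suggests the wrapper class is itself named LSTM
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(1, 2)  # input_size=1, hidden_size=2
        self.fc = nn.Linear(in_features=2, out_features=1, bias=True)

    def forward(self, x):
        # x has shape (seq_len, batch, 1), since batch_first=False (the default)
        out, (h_n, c_n) = self.lstm(x)
        return self.fc(out[-1])  # predict from the last time step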
It's a simple task: I want to predict the order of the alphabet.

That is, x = [[[1, 2, 3]]] -> [4] (1 = A, 2 = B, 3 = C); I want to feed in 'ABC' and get 'D' out. After training, I wanted to understand my LSTM layer better, so I printed the model parameters as follows:

In [7]: l = a._modules['lstm']

In [8]: l.__dict__
Out[8]: 
{'training': True,
 '_parameters': OrderedDict([('weight_ih_l0',
               Parameter containing:
               tensor([[ 0.2127],
                       [ 1.5807],
                       [-0.8566],
                       [ 1.0215],
                       [-0.7563],
                       [ 0.8248],
                       [-0.7307],
                       [ 1.4174]], dtype=torch.float64, requires_grad=True)),
              ('weight_hh_l0',
               Parameter containing:
               tensor([[ 0.0245, -0.5089],
                       [ 0.0338,  2.8269],
                       [-1.0781, -0.4691],
                       [-0.2368,  2.2788],
                       [-1.0743,  0.5130],
                       [ 0.8970, -0.0829],
                       [-0.7051, -4.8892],
                       [-0.5335,  1.8777]], dtype=torch.float64, requires_grad=True)),
              ('bias_ih_l0',
               Parameter containing:
               tensor([ 1.7890,  2.4625, -0.4471,  0.8364, -1.2260,  1.5116,  2.1067,  1.6485],
                      dtype=torch.float64, requires_grad=True)),
              ('bias_hh_l0',
               Parameter containing:
               tensor([ 1.5659,  2.6634, -0.2972,  0.6908, -1.1136,  0.8588,  1.4372,  1.6157],
                      dtype=torch.float64, requires_grad=True))]),
 '_buffers': OrderedDict(),
 '_non_persistent_buffers_set': set(),
 '_backward_hooks': OrderedDict(),
 '_is_full_backward_hook': None,
 '_forward_hooks': OrderedDict(),
 '_forward_pre_hooks': OrderedDict(),
 '_state_dict_hooks': OrderedDict(),
 '_load_state_dict_pre_hooks': OrderedDict(),
 '_modules': OrderedDict(),
 'mode': 'LSTM',
 'input_size': 1,
 'hidden_size': 2,
 'num_layers': 1,
 'bias': True,
 'batch_first': False,
 'dropout': 0.0,
 'bidirectional': False,
 'proj_size': 0,
 '_flat_weights_names': ['weight_ih_l0',
  'weight_hh_l0',
  'bias_ih_l0',
  'bias_hh_l0'],
 '_all_weights': [['weight_ih_l0',
   'weight_hh_l0',
   'bias_ih_l0',
   'bias_hh_l0']],
 '_flat_weights': [Parameter containing:
  tensor([[ 0.2127],
          [ 1.5807],
          [-0.8566],
          [ 1.0215],
          [-0.7563],
          [ 0.8248],
          [-0.7307],
          [ 1.4174]], dtype=torch.float64, requires_grad=True),
  Parameter containing:
  tensor([[ 0.0245, -0.5089],
          [ 0.0338,  2.8269],
          [-1.0781, -0.4691],
          [-0.2368,  2.2788],
          [-1.0743,  0.5130],
          [ 0.8970, -0.0829],
          [-0.7051, -4.8892],
          [-0.5335,  1.8777]], dtype=torch.float64, requires_grad=True),
  Parameter containing:
  tensor([ 1.7890,  2.4625, -0.4471,  0.8364, -1.2260,  1.5116,  2.1067,  1.6485],
         dtype=torch.float64, requires_grad=True),
  Parameter containing:
  tensor([ 1.5659,  2.6634, -0.2972,  0.6908, -1.1136,  0.8588,  1.4372,  1.6157],
         dtype=torch.float64, requires_grad=True)]}
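
As an aside, the same shapes can be read off more compactly with the standard nn.Module API (a sketch, assuming l is the nn.LSTM module bound above):

for name, param in l.named_parameters():
    print(name, tuple(param.shape))
# prints: weight_ih_l0 (8, 1), weight_hh_l0 (8, 2), bias_ih_l0 (8,), bias_hh_l0 (8,)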
Could you help me understand what the model's weights (weight_ih_l0, weight_hh_l0), biases, and the other attributes mean?

The documentation for this layer (torch.nn.LSTM) is very thorough; in particular, it describes exactly the tensors you are asking about. Read it first, and come back here if you find a discrepancy between the documentation and the code that you don't understand.
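
Briefly: per those docs, each *_l0 tensor stacks the four gate parameter blocks (input, forget, cell, output) along dim 0, which is why every tensor above has 4 * hidden_size = 8 rows. A minimal sketch of splitting them back out (assuming l is the nn.LSTM module from the question):

# Split the stacked gate parameters; gate order per the docs is i, f, g, o.
W_ii, W_if, W_ig, W_io = l.weight_ih_l0.chunk(4, dim=0)  # each (hidden_size, input_size) = (2, 1)
W_hi, W_hf, W_hg, W_ho = l.weight_hh_l0.chunk(4, dim=0)  # each (hidden_size, hidden_size) = (2, 2)
b_ii, b_if, b_ig, b_io = l.bias_ih_l0.chunk(4)           # each (hidden_size,) = (2,)
b_hi, b_hf, b_hg, b_ho = l.bias_hh_l0.chunk(4)

# For example, the input gate at step t is
#   i_t = sigmoid(W_ii @ x_t + b_ii + W_hi @ h_{t-1} + b_hi)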