Python: how can I transfer learning from a model created with PyTorch (model.pt) to a new model trained on a new dataset with Keras?


I want to freeze the layers and use this model for transfer learning on another dataset, but how?

By the way, this model (model.pt) was created with PyTorch, and I want to use it to transfer the learning into a new model that will be trained on a new dataset with Keras.

I have tried going from PyTorch to Keras and I am very confused. Can you help me transfer the learning to another dataset? Here is the printed model:

Model(
  (data_bn): BatchNorm1d(54, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
  (st_gcn_networks): ModuleList(
    (0): st_gcn(
      (gcn): ConvTemporalGraphical(
        (conv): Conv2d(3, 192, kernel_size=(1, 1), stride=(1, 1))
      )
      (tcn): Sequential(
        (0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
        (2): Conv2d(64, 64, kernel_size=(9, 1), stride=(1, 1), padding=(4, 0))
        (3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (4): Dropout(p=0, inplace=True)
      )
      (relu): ReLU(inplace=True)
    )
    (1): st_gcn(
      (gcn): ConvTemporalGraphical(
        (conv): Conv2d(64, 192, kernel_size=(1, 1), stride=(1, 1))
      )
      (tcn): Sequential(
        (0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
        (2): Conv2d(64, 64, kernel_size=(9, 1), stride=(1, 1), padding=(4, 0))
        (3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (4): Dropout(p=0, inplace=True)
      )
      (relu): ReLU(inplace=True)
    )
    (2): st_gcn(
      (gcn): ConvTemporalGraphical(
        (conv): Conv2d(64, 192, kernel_size=(1, 1), stride=(1, 1))
      )
      (tcn): Sequential(
        (0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
        (2): Conv2d(64, 64, kernel_size=(9, 1), stride=(1, 1), padding=(4, 0))
        (3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (4): Dropout(p=0, inplace=True)
      )
      (relu): ReLU(inplace=True)
    )
    (3): st_gcn(
      (gcn): ConvTemporalGraphical(
        (conv): Conv2d(64, 192, kernel_size=(1, 1), stride=(1, 1))
      )
      (tcn): Sequential(
        (0): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
        (2): Conv2d(64, 64, kernel_size=(9, 1), stride=(1, 1), padding=(4, 0))
        (3): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (4): Dropout(p=0, inplace=True)
      )
      (relu): ReLU(inplace=True)
    )
    (4): st_gcn(
      (gcn): ConvTemporalGraphical(
        (conv): Conv2d(64, 384, kernel_size=(1, 1), stride=(1, 1))
      )
      (tcn): Sequential(
        (0): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
        (2): Conv2d(128, 128, kernel_size=(9, 1), stride=(2, 1), padding=(4, 0))
        (3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (4): Dropout(p=0, inplace=True)
      )
      (residual): Sequential(
        (0): Conv2d(64, 128, kernel_size=(1, 1), stride=(2, 1))
        (1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (relu): ReLU(inplace=True)
    )
    (5): st_gcn(
      (gcn): ConvTemporalGraphical(
        (conv): Conv2d(128, 384, kernel_size=(1, 1), stride=(1, 1))
      )
      (tcn): Sequential(
        (0): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
        (2): Conv2d(128, 128, kernel_size=(9, 1), stride=(1, 1), padding=(4, 0))
        (3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (4): Dropout(p=0, inplace=True)
      )
      (relu): ReLU(inplace=True)
    )
    (6): st_gcn(
      (gcn): ConvTemporalGraphical(
        (conv): Conv2d(128, 384, kernel_size=(1, 1), stride=(1, 1))
      )
      (tcn): Sequential(
        (0): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
        (2): Conv2d(128, 128, kernel_size=(9, 1), stride=(1, 1), padding=(4, 0))
        (3): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (4): Dropout(p=0, inplace=True)
      )
      (relu): ReLU(inplace=True)
    )
    (7): st_gcn(
      (gcn): ConvTemporalGraphical(
        (conv): Conv2d(128, 768, kernel_size=(1, 1), stride=(1, 1))
      )
      (tcn): Sequential(
        (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
        (2): Conv2d(256, 256, kernel_size=(9, 1), stride=(2, 1), padding=(4, 0))
        (3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (4): Dropout(p=0, inplace=True)
      )
      (residual): Sequential(
        (0): Conv2d(128, 256, kernel_size=(1, 1), stride=(2, 1))
        (1): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
      )
      (relu): ReLU(inplace=True)
    )
    (8): st_gcn(
      (gcn): ConvTemporalGraphical(
        (conv): Conv2d(256, 768, kernel_size=(1, 1), stride=(1, 1))
      )
      (tcn): Sequential(
        (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
        (2): Conv2d(256, 256, kernel_size=(9, 1), stride=(1, 1), padding=(4, 0))
        (3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (4): Dropout(p=0, inplace=True)
      )
      (relu): ReLU(inplace=True)
    )
    (9): st_gcn(
      (gcn): ConvTemporalGraphical(
        (conv): Conv2d(256, 768, kernel_size=(1, 1), stride=(1, 1))
      )
      (tcn): Sequential(
        (0): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (1): ReLU(inplace=True)
        (2): Conv2d(256, 256, kernel_size=(9, 1), stride=(1, 1), padding=(4, 0))
        (3): BatchNorm2d(256, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
        (4): Dropout(p=0, inplace=True)
      )
      (relu): ReLU(inplace=True)
    )
  )
  (edge_importance): ParameterList(
      (0): Parameter containing: [torch.FloatTensor of size 3x18x18]
      (1): Parameter containing: [torch.FloatTensor of size 3x18x18]
      (2): Parameter containing: [torch.FloatTensor of size 3x18x18]
      (3): Parameter containing: [torch.FloatTensor of size 3x18x18]
      (4): Parameter containing: [torch.FloatTensor of size 3x18x18]
      (5): Parameter containing: [torch.FloatTensor of size 3x18x18]
      (6): Parameter containing: [torch.FloatTensor of size 3x18x18]
      (7): Parameter containing: [torch.FloatTensor of size 3x18x18]
      (8): Parameter containing: [torch.FloatTensor of size 3x18x18]
      (9): Parameter containing: [torch.FloatTensor of size 3x18x18]
  )
  (fcn): Conv2d(256, 400, kernel_size=(1, 1), stride=(1, 1))
) 
Look into the ONNX (Open Neural Network Exchange) file format; it is commonly used for exactly this kind of task. PyTorch provides `torch.onnx.export` for converting a PyTorch model to ONNX, and I believe there are also tools for loading an ONNX model into Keras/TensorFlow.