Accumulating and plotting regression results in PyTorch Lightning


I have developed a convolutional neural network for a regression problem using PyTorch Lightning. I have split my data into training, validation, and test dataloaders, and written the basic LightningModule code for the training and validation steps. Here is what I have so far:

import torch
import torch.nn as nn
import torch.nn.functional as F
import pytorch_lightning as pl

class Model(pl.LightningModule):
    def __init__(self,lr=3e-5):
        super(Model, self).__init__()
        self.save_hyperparameters('lr')

        self.cnn_layers = nn.Sequential(
            nn.Conv3d(4, 8, 5, stride=1, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv3d(8, 2, 5, stride=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(2, 2, 3, stride=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(2,1,3,stride=3, padding=0)
        )

    def forward(self, x):
        x = self.cnn_layers(x).squeeze()
        return x    

    def configure_optimizers(self):
        optimizer = torch.optim.Adam(self.parameters(), self.hparams.lr)
        return optimizer
    
    def L1_loss(self,pred,actual):
        return F.l1_loss(pred, actual)
    
    def L2_loss(self,pred,actual):
        return F.mse_loss(pred,actual)
    
    def training_step(self,batch,batch_idx):
        x,y = batch
        pred = self.forward(x)
        loss = self.L2_loss(pred,y)
        err = self.L1_loss(pred,y)
        res = {'loss':loss,'train_err':err}
        self.log_dict(res, on_step=True, on_epoch=True, prog_bar=True, logger=True)
        return res        
    
    def validation_step(self,batch,batch_idx):
        x,y = batch 
        pred = self.forward(x)
        loss = self.L2_loss(pred,y)
        err = self.L1_loss(pred,y)
        res = {'val_loss':loss,'val_err':err}
        self.log_dict(res, on_step=True, on_epoch=True, prog_bar=True, logger=True)
        return res
    
    def training_epoch_end(self,outputs):
        avg_loss = torch.stack([x['loss'] for x in outputs]).mean()
        avg_err = torch.stack([x['train_err'] for x in outputs]).mean()
        self.logger.experiment.add_scalar("Train/loss",avg_loss,self.current_epoch)
        self.logger.experiment.add_scalar("Train/err",avg_err,self.current_epoch)
    
    def validation_epoch_end(self,outputs):
        avg_loss = torch.stack([x['val_loss'] for x in outputs]).mean()
        avg_err = torch.stack([x['val_err'] for x in outputs]).mean()
        self.logger.experiment.add_scalar("Validation/loss",avg_loss,self.current_epoch)
        self.logger.experiment.add_scalar("Validation/err",avg_err,self.current_epoch)
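For reference, the convolutional stack above collapses a 4-channel volume down to one scalar per sample; a quick standalone shape check (the input size of 27³ is just an assumption I chose so the final conv layer produces a 1×1×1 output):

```python
import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv3d(4, 8, 5, stride=1, padding=2),   # spatial size preserved: 27 -> 27
    nn.ReLU(inplace=True),
    nn.Conv3d(8, 2, 5, stride=3, padding=1),   # 27 -> 9
    nn.ReLU(inplace=True),
    nn.Conv3d(2, 2, 3, stride=3, padding=1),   # 9 -> 3
    nn.ReLU(inplace=True),
    nn.Conv3d(2, 1, 3, stride=3, padding=0),   # 3 -> 1
)

x = torch.randn(2, 4, 27, 27, 27)  # batch of 2, assumed 27^3 input volume
out = cnn(x).squeeze()             # (2, 1, 1, 1, 1) -> (2,): one prediction per sample
print(out.shape)
```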
After the training stage, I would like to freeze the model and collect test-time predictions on all three datasets. Finally, I want to make a parity scatter plot of predicted versus actual values for all points, colour-coded by the set they belong to. One approach I have considered is modifying __init__ to declare six separate lists, like this:

class Model(pl.LightningModule):
    def __init__(self,lr=3e-5):
        super(Model, self).__init__()
        self.save_hyperparameters('lr')

        self.cnn_layers = nn.Sequential(
            nn.Conv3d(4, 8, 5, stride=1, padding=2),
            nn.ReLU(inplace=True),
            nn.Conv3d(8, 2, 5, stride=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(2, 2, 3, stride=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(2,1,3,stride=3, padding=0)
        )
        self.train_actual,self.train_pred = [],[]
        self.val_actual,self.val_pred = [],[]
        self.test_actual,self.test_pred = [],[]
I then tried running trainer.test() on each of the three dataloaders in turn, but I am not sure how to tell each test() call which pair of lists it should populate. I am also not sure whether this is the best way to record the final regression results. Any insight would be much appreciated.
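To make the question concrete, here is roughly the result I am after, sketched as a plain evaluation loop over named dataloaders instead of trainer.test() (the helper names and the dict-of-loaders interface are just my sketch, not something I have settled on):

```python
import torch

def gather_predictions(model, loaders):
    """Run the frozen model over each named dataloader.

    loaders: dict mapping a split name ('train'/'val'/'test') to its DataLoader.
    Returns a dict mapping each split name to a (pred, actual) tensor pair.
    """
    results = {}
    model.eval()              # fix dropout/batch-norm behaviour
    with torch.no_grad():     # no gradients needed just for plotting
        for name, loader in loaders.items():
            preds, actuals = [], []
            for x, y in loader:
                preds.append(model(x).cpu())
                actuals.append(y.cpu())
            results[name] = (torch.cat(preds), torch.cat(actuals))
    return results

def parity_plot(results):
    """Scatter predicted vs. actual for every split on one set of axes."""
    import matplotlib.pyplot as plt  # imported here so gathering works without it
    fig, ax = plt.subplots()
    for name, (pred, actual) in results.items():
        ax.scatter(actual.numpy(), pred.numpy(), s=10, alpha=0.6, label=name)
    ax.set_xlabel("actual")
    ax.set_ylabel("predicted")
    ax.legend()
    return fig
```

After trainer.fit(...), something like gather_predictions(model, {'train': train_loader, 'val': val_loader, 'test': test_loader}) would give one tensor pair per split, which parity_plot could then colour by split name; but I don't know whether this manual loop or repeated trainer.test() calls is the more idiomatic Lightning approach.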