Python multivariable & multi-target regression problem with deep learning
I want to design a neural network model with multiple input variables (4) and multiple output variables (3). I am not sure where it can be improved. Possible problem areas:

Data loader and data normalization:
- I found that my input data consists of 4 values with 3~4 digits each, so I normalized it. I am not sure whether my approach is correct.
- I also normalized the input values; although the output values also have large numbers, I want to predict the output values correctly.
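For tabular regression, per-feature standardization (zero mean, unit variance per column) is a common alternative to unit-norm scaling. A minimal sketch, not taken from the question (the `standardize` helper and the example values, rounded from the dump below, are illustrative):

```python
import torch

# Standardize each feature column: subtract its mean, divide by its std.
# Statistics should be computed once on the training split and reused.
def standardize(x, mean=None, std=None):
    if mean is None:
        mean = x.mean(dim=0, keepdim=True)
        std = x.std(dim=0, keepdim=True)
    return (x - mean) / std, mean, std

features = torch.tensor([[3671.4, 3920.8, 3562.9, 3409.9],
                         [2613.1, 4144.3, 3963.4, 2520.0],
                         [3217.1, 4907.7, 3618.3, 1625.6]])
x_norm, mu, sigma = standardize(features)
# Each column of x_norm now has zero mean and unit variance.
```

The same transform (with its own mean/std) is often applied to the targets as well, then inverted after prediction, which keeps the loss on a comparable scale across the 3 outputs.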
Model itself:
- I am trying to build a multi-input, multi-output model, so I feed in 4 input variables and get 3 output variables. How should I improve my model? (Thanks for the help, but answers that just say "use a CNN/RNN/LSTM..." do not really help.)
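For 4 numeric inputs and 3 numeric outputs, a plain fully connected network is a reasonable baseline. A sketch under that assumption (layer sizes are illustrative, not tuned):

```python
import torch
import torch.nn as nn

# Small MLP baseline: 4 inputs -> 3 outputs.
class MLP(nn.Module):
    def __init__(self, in_dim=4, hidden=64, out_dim=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            # No final activation: regression targets are unbounded
            # (they include negative values in the data dump below).
            nn.Linear(hidden, out_dim),
        )

    def forward(self, x):
        return self.net(x)

model = MLP()
out = model(torch.randn(8, 4))
# out.shape == (8, 3): one 3-value prediction per input row
```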
Loss function and optimizer:
- I am currently using the nn.MSELoss() loss function with the torch.optim.SGD(model.parameters(), lr=learning_rate) optimizer.
- However, this produces NaN, NaN, NaN outputs (maybe some of the numbers are too large?), so I also tried nn.L1Loss().
- The loss.item() value drops significantly, but I am not sure whether that is the value I want.
- I cannot see any change in the results (in the learning process).
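NaN losses with MSE on raw 3~4 digit values usually point to exploding gradients. A sketch of one training step that typically keeps the loss finite; Adam instead of SGD and the clipping threshold are assumptions, not the question's setup, and the data here is random stand-in data:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 3)
criterion = nn.MSELoss()
# Smaller learning rate than typical SGD defaults; Adam adapts per-parameter steps.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(16, 4)   # stand-in for standardized features
y = torch.randn(16, 3)   # stand-in for standardized targets

optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
# Clip the gradient norm so a single large batch cannot blow up the weights.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```

Standardizing the targets (not just the inputs) also shrinks the MSE itself, which is likely why switching to L1Loss made the printed loss value drop so much: L1 is on the scale of the error, MSE on its square.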
Evaluation:
- I cannot simply plot the predictions against y, because each is a 3-D vector, and I cannot do a simple equality comparison either, because the numbers usually do not match to many decimal places.
- How should I evaluate the model in this case?
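Exact equality is never the right test for regression; per-output error metrics are the usual answer. A sketch computing MAE and RMSE separately for each of the 3 outputs (the helper name and sample values are illustrative):

```python
import torch

# One MAE and one RMSE per output column, so each of the 3 targets
# can be judged on its own scale.
def per_target_errors(pred, target):
    mae = (pred - target).abs().mean(dim=0)
    rmse = ((pred - target) ** 2).mean(dim=0).sqrt()
    return mae, rmse

pred = torch.tensor([[86.0, -125.0, 515.0],
                     [600.0, -25.0, 870.0]])
target = torch.tensor([[86.8, -125.6, 514.9],
                       [599.5, -25.6, 869.7]])
mae, rmse = per_target_errors(pred, target)
# mae and rmse each have shape (3,), one value per output variable
```

For plotting, a common trick is one scatter plot of predicted vs. true values per output (3 small plots), with the diagonal y = x as the "perfect prediction" reference line.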
# This is the data loader; I normalized with f.normalize
import torch
import torch.nn.functional as f
from torch.utils.data import Dataset

class Loader(Dataset):
    def __init__(self, data):
        self.data = data
        # without normalization:
        # self.features = torch.tensor(data.iloc[:, 1:4].values)
        # with normalization:
        self.features = f.normalize(torch.tensor(data.iloc[:, 1:5].values), p=2, dim=[-2, 1])
        self.targets = torch.tensor(data.iloc[:, 5:8].values)

    def __getitem__(self, index):
        features = self.features[index]
        target = self.targets[index]
        return features, target

    def __len__(self):
        return self.data.shape[0]
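For reference, a self-contained sketch of how a Dataset like the one above is consumed by a DataLoader. The column layout (columns 1:5 as features, 5:8 as targets) mirrors the snippet; the random DataFrame and the `ToyLoader` name are stand-ins:

```python
import pandas as pd
import torch
from torch.utils.data import Dataset, DataLoader

class ToyLoader(Dataset):
    def __init__(self, data):
        self.data = data
        # float32 tensors: float64 inputs (as in the dump below) would force
        # the model weights to double precision as well.
        self.features = torch.tensor(data.iloc[:, 1:5].values, dtype=torch.float32)
        self.targets = torch.tensor(data.iloc[:, 5:8].values, dtype=torch.float32)

    def __getitem__(self, index):
        return self.features[index], self.targets[index]

    def __len__(self):
        return self.data.shape[0]

df = pd.DataFrame(torch.randn(10, 8).numpy())
loader = DataLoader(ToyLoader(df), batch_size=4, shuffle=True)
xb, yb = next(iter(loader))
# xb has shape (4, 4) and yb has shape (4, 3)
```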
Data format: before normalization
tensor([[3671.4275, 3920.7729, 3562.8547, 3409.9354],
[2613.0593, 4144.3052, 3963.4261, 2520.0331],
[3217.1497, 4907.6748, 3618.3077, 1625.5708],
[3495.5350, 3740.1072, 3372.8023, 3222.0030],
[2668.3124, 4059.1723, 3856.5733, 2555.0729],
[3148.7100, 3581.6413, 3238.2163, 2892.8446],
[3259.0295, 4951.7812, 3691.3874, 1729.6103],
[2383.1404, 4416.9705, 4282.0778, 2368.6338],
[3233.4030, 3509.1483, 3131.9635, 2950.0310],
[2796.2451, 4666.8626, 3963.1061, 2038.5555]], device='cuda:0',
dtype=torch.float64)
tensor([[ 86.8043, -125.5781, 514.8793],
[ 599.4942, -25.5526, 869.6853],
[ 767.3758, -575.2572, 932.4783],
[ 85.4733, -129.7880, 653.4727],
[ 539.4341, -38.1205, 870.6907],
[ 150.6393, -117.4500, 877.1568],
[ 766.7727, -558.0292, 871.3760],
[ 806.4800, 11.9661, 873.6658],
[ 98.3526, -130.0950, 883.7505],
[ 780.5635, -253.1192, 876.8691]], device='cuda:0',
dtype=torch.float64)
Data format: after normalization
tensor([[0.0063, 0.0069, 0.0061, 0.0058],
[0.0067, 0.0072, 0.0065, 0.0062],
[0.0064, 0.0070, 0.0062, 0.0059],
[0.0056, 0.0092, 0.0077, 0.0039],
[0.0051, 0.0081, 0.0078, 0.0049],
[0.0058, 0.0074, 0.0068, 0.0054],
[0.0054, 0.0078, 0.0073, 0.0051],
[0.0063, 0.0097, 0.0073, 0.0034],
[0.0047, 0.0087, 0.0084, 0.0046],
[0.0058, 0.0093, 0.0076, 0.0038]], device='cuda:0',
dtype=torch.float64)
tensor([[ 98.2855, -130.0667, 883.9658],
[ 86.6183, -130.0066, 716.1242],
[ 93.5488, -130.8305, 837.4871],
[ 778.3415, -295.5290, 876.0615],
[ 598.6805, -25.7674, 869.6824],
[ 322.8932, -82.1197, 873.7214],
[ 473.1678, -51.6608, 871.5651],
[ 768.4887, -535.2723, 871.2855],
[ 806.6830, 6.9960, 877.1038],
[ 775.0177, -365.2128, 875.5223]], device='cuda:0',
dtype=torch.float64)
Is there any improvement since then?
# Result
Epoch [1/100], Step [1000/5500], Loss: 320.4197
Epoch [1/100], Step [2000/5500], Loss: 262.3183
Epoch [1/100], Step [3000/5500], Loss: 220.1559
Epoch [1/100], Step [4000/5500], Loss: 236.1293
Epoch [1/100], Step [5000/5500], Loss: 34.1634
Epoch [2/100], Step [1000/5500], Loss: 440.7753
Epoch [2/100], Step [2000/5500], Loss: 494.4890
Epoch [2/100], Step [3000/5500], Loss: 224.8119
Epoch [2/100], Step [4000/5500], Loss: 145.8345
Epoch [2/100], Step [5000/5500], Loss: 97.7785