Linear regression with Torch in Lua gives NaN as the error


I am new to Torch. Recently I tried to do multivariate linear regression with Torch, but the error always ends up infinite.

For the first two epochs the error is clearly increasing. Here is my code:

dataset= 
124.0000   81.6900   64.5000  118.0000
 150.0000  103.8400   73.3000  143.0000
   ...
 137.0000   94.9600   67.0000  191.0000
 110.0000   99.7900   75.5000  192.0000
   ...
  94.0000   89.4000   64.5000  139.0000
  74.0000   93.0000   74.0000  148.0000
  89.0000   93.5900   75.5000  179.0000
require 'torch'
require 'nn'
require 'optim'

linLayer = nn.Linear(3, 1)
model = nn.Sequential()
model:add(linLayer)
criterion = nn.MSECriterion()

-- flattened parameter and gradient tensors referenced by feval below
x, dl_dx = model:getParameters()

feval = function(x_new)
   if x ~= x_new then
      x:copy(x_new)
   end
   -- cycle through the dataset, one sample per call
   _nidx_ = (_nidx_ or 0) + 1
   if _nidx_ > (#dataset_inputs)[1] then _nidx_ = 1 end

   local sample = dataset[_nidx_]
   local inputs = sample[{ {2,4} }]
   local target = sample[{ {1} }] 

   dl_dx:zero()

   local loss_x = criterion:forward(model:forward(inputs),target)
   model:backward(inputs, criterion:backward(model.output,target))

   -- return loss(x) and dloss/dx
   return loss_x, dl_dx
end


sgd_params = {
   learningRate = 1e-3,
   learningRateDecay = 1e-4,
   weightDecay = 0,
   momentum = 0
}
epochs = 100


for i = 1, epochs do
   current_loss = 0
   -- use a different loop variable so the epoch counter i is not shadowed
   for j = 1, (#dataset_inputs)[1] do
      _, fs = optim.sgd(feval, x, sgd_params)
      current_loss = current_loss + fs[1]
   end
   current_loss = current_loss / (#dataset_inputs)[1]
   print('epoch = ' .. i ..
         ' of ' .. epochs ..
         ' current loss = ' .. current_loss)
end

And the result:
epoch = 1 of 100 current loss = 8.1958765768632e+138    
epoch = 2 of 100 current loss = 5.0759297005752e+278    
epoch = 3 of 100 current loss = inf 
epoch = 4 of 100 current loss = inf 
epoch = 5 of 100 current loss = nan 
... ...
epoch = 97 of 100 current loss = nan    
epoch = 98 of 100 current loss = nan    
epoch = 99 of 100 current loss = nan    
epoch = 100 of 100 current loss = nan
How can I solve this? I trained a logistic regression the same way, and the result seemed better than this one, but still not good enough.
Is there something wrong? Thanks very much.

I have the same problem. I guess you are following Nando de Freitas's course? Did you solve it? Oh, I actually solved it, just by tuning the learning rate and the learning-rate decay. Convergence is very sensitive to these. Thank you very much, guillefix, that helped me a lot.
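For anyone hitting the same divergence: besides lowering the learning rate as the commenters did, a standard remedy (not from this thread, just a common fix for exploding MSE loss) is to standardize the input columns before training. Raw features in the hundreds make the MSE gradients large enough that SGD with learningRate = 1e-3 overshoots on every step. A minimal sketch in Torch, assuming the dataset layout above (target in column 1, inputs in columns 2-4); the three rows here are just placeholders for the full dataset:

```lua
require 'torch'

-- a few rows in the same layout as the question's dataset
local dataset = torch.Tensor{
   {124, 81.69, 64.5, 118},
   {150, 103.84, 73.3, 143},
   { 94, 89.40, 64.5, 139},
}

-- standardize each input column to zero mean, unit variance, in place
for col = 2, 4 do
   local c = dataset[{ {}, col }]        -- view of one column
   local mean, std = c:mean(), c:std()
   c:add(-mean):div(std)
end
```

After scaling, a learning rate around 1e-3 is usually stable; the alternative of keeping the raw data and shrinking the learning rate (and its decay) until the loss stops blowing up also works, which matches what the comment above reports.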