Numpy SGD diverges after changing the learning rate

I am writing a stochastic gradient descent function for ridge regression. I keep the step size constant for the first 1800 iterations and then change it to 1/n or 1/sqrt(n). When I use 1/sqrt(n), the loss decreases and nearly converges; however, when I use 1/n, the loss decreases and then starts increasing! Can anyone help? Below is the code for the SGD, followed by the function I use to compute the loss on the entire batch after each update.

import numpy as np
import pandas as pd

def stochastic_grad_descent_steptrial(x,y,thetha,alpha,num_iter,lambda_reg):

    N=x.shape[0]   # number of training samples
    loss_log=[]
    theta_log=[]
    ridge_log=[]

    total_loss_log=[]  # full-batch loss after each update (for plotting)
    total_iter_count=[]
    for j in range(num_iter): # j epochs
        for i in range(x.shape[0]): 


            diff=np.dot(x.iloc[i,:],thetha)-y[i]  # residual for sample i
            loss=np.sum(diff**2)+lambda_reg*np.sum(thetha**2)
            loss_log.append(loss)                 # log the per-sample loss

            grad=(2/N)*np.dot(x.iloc[i,:].T,diff)+2*lambda_reg*thetha  # gradient for sample i


            total_iter=((j+1)*(i+1)) # total step count so far (epoch j, sample i)
            total_iter_count.append(total_iter)

            if total_iter<1800:        # switch to a decaying step size only after 1800 steps
                step=alpha             # the cutoff and the decay function are hyperparameters
            else:
                step=1/(total_iter)
#                 step=1/np.sqrt(total_iter)   # the 1/sqrt(n) alternative

            thetha=thetha-step*grad #update


            theta_log.append(thetha)                        # log the parameters
            ridge_log.append(lambda_reg*np.sum(thetha**2))  # log the ridge penalty term


            # compute loss on entire data 
            total_loss=ridge_loss(x,y,thetha,lambda_reg) 
            total_loss_log.append(total_loss) # append

    normal_loss=cost(x,y,thetha)  # final loss without the ridge penalty; cost() is assumed to be defined elsewhere

    loss_log=np.array(loss_log)  # conversions to np,pd
    theta_log=pd.DataFrame(theta_log)
    ridge_log=np.array(ridge_log)

    return(loss_log,theta_log,ridge_log,thetha,normal_loss,total_loss_log)

def ridge_loss(x,y,thetha,lambda_reg):

    N=x.shape[0]                # number of training samples
    diff=np.dot(x,thetha)-y     # residuals on the full data set
    cost=(1/N)*np.sum(diff**2)+lambda_reg*np.sum(thetha**2)
    return(cost)
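For reference, ridge_loss evaluates the full-batch ridge objective that the updates above are meant to minimize (written out from the code; the per-sample loss logged inside the loop uses the same penalty but a single squared residual):

    J(\theta) = \frac{1}{N}\sum_{i=1}^{N}\left(x_i^{\top}\theta - y_i\right)^{2} + \lambda\,\lVert\theta\rVert_2^2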
Why is the loss increasing? (See the figures below.) How can the loss go up when it should be slowly decreasing?

[Loss plots omitted] Figure 1: constant step size; Figure 2: step = 1/sqrt(n); Figure 3: step = 1/n.


Incorrect iteration counter
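The step counter is the problem: total_iter=((j+1)*(i+1)) multiplies the epoch index by the within-epoch sample index instead of accumulating completed steps. It falls back to j+1 at the start of every epoch, produces the same value many times, and is usually much smaller than the true number of steps taken, so the code keeps flipping between the constant step alpha and a decayed step that is larger than the intended 1/n (or 1/sqrt(n)) value. A minimal sketch of a cumulative counter, keeping the question's variable names:

            total_iter=j*x.shape[0]+(i+1)   # steps completed across all epochs

A small standalone demo (hypothetical values: m = 5 samples, 3 epochs) makes the difference visible:

    m = 5
    for j in range(3):
        for i in range(m):
            buggy = (j + 1) * (i + 1)   # 1..5, then 2,4,..,10, then 3,6,..,15
            fixed = j * m + (i + 1)     # 1..15, strictly increasing
            print(j, i, buggy, fixed)

With the buggy counter the step size is re-evaluated on values that collapse at every epoch boundary, so neither 1/total_iter nor 1/sqrt(total_iter) behaves like a proper decaying schedule.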