
Python: my predicted values only ever decrease during gradient descent

Tags: python, machine-learning, gradient-descent

I am currently writing an implementation of gradient descent, and I've run into a problem where my predicted value (y_hat) only keeps decreasing. It never increases, even when the training label is 1 rather than 0. My train function code is below:

import numpy as np

def sigma(self, a):
    ans = 1/(1+np.exp(-a))
    return ans

def get_loss(self, y_i, y_hat):
    loss = -(y_i * np.log(y_hat) + (1 - y_i) * np.log(1 - y_hat))
    return loss

def train(self, X, y, step_size, num_iterations):
    b_0 = 0
    rows = X.shape[0]
    columns = X.shape[1]
    weights = np.zeros(columns)
    losses = []
    for iteration in range(num_iterations):
      # Step 1: calculate y_hat for each row
      summation = 0
      summation_k = np.zeros(columns)
      total_loss = 0
      for i in range(rows):
        row_total = np.sum(np.multiply(X[i], weights))
        y_hat = self.sigma(b_0 + row_total)
        y_i = y[i]
        # print('y_i: ', y_i)
        # print('y_hat: ', y_hat)
        # print()
        total_loss += self.get_loss(y_i, y_hat)
        diff = y_i - y_hat
        summation += diff

        # summation_k_i = summation_k_i + X[i] * diff
        summation_k = np.add(summation_k, np.multiply(diff, X[i]))
        
      # Compute change for each weight based on errors, then update the weights
      # Update b_0
      b_0 = b_0 + step_size * ((1/rows) * (-summation))

      # Update b_k
      # for j in range(columns):
      #   weights[j] = weights[j] + step_size * ((1/rows) * (-summation_k[j]))
      weights = np.add(weights, np.multiply(summation_k, (-step_size/rows)))
      
      # Keeping track of average loss for each iteration.
      losses.append(total_loss/rows)
    
    self.weights = np.insert(weights, 0, b_0)
    return np.array(losses)

When I run this, the y_hat value decreases for every row on every iteration, and I cannot find the bug that is causing it.

Comment: Isn't gradient descent about minimizing a cost function, something like 1/N * np.sum(np.sqrt(y_hat - y))? Is the error, error = y_predicted - y_actual, going down? That is the question.

Reply (OP): No, the error is increasing. y_actual is either 0 or 1, and y_predicted is a number in that range; what I'm getting is a y_predicted that keeps shrinking (it drops to 0.000000001 and beyond). And yes, we are trying to find the weights that minimize the loss. This is a class assignment: our professor gave us a worksheet with the equations for updating b_0 and the weights, so I am following those formulas.
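For reference, below is a minimal vectorized sketch of a single gradient-descent step for logistic regression with the binary cross-entropy loss used in get_loss above. The function name gradient_step and the standalone sigmoid helper are illustrative assumptions, not part of the question's class or the professor's worksheet; the names X, y, weights, b_0, and step_size mirror the question's code.

import numpy as np

def sigmoid(a):
    # Logistic function, same role as sigma() in the question.
    return 1 / (1 + np.exp(-a))

def gradient_step(X, y, weights, b_0, step_size):
    # One vectorized descent step on the mean binary cross-entropy loss.
    rows = X.shape[0]
    y_hat = sigmoid(X @ weights + b_0)   # predictions for all rows at once
    diff = y - y_hat                     # same quantity as y_i - y_hat in the loop
    # The gradient of the mean loss with respect to b_0 is -sum(diff)/rows,
    # so stepping *against* the gradient adds step_size * sum(diff)/rows.
    b_0 = b_0 + step_size * np.sum(diff) / rows
    # Likewise the gradient with respect to the weights is -(X.T @ diff)/rows.
    weights = weights + step_size * (X.T @ diff) / rows
    loss = -np.mean(y * np.log(y_hat) + (1 - y) * np.log(1 - y_hat))
    return weights, b_0, loss

For a single training example with label 1, this update moves y_hat upward rather than downward, which is the behaviour the question expects.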