Python softmax_loss function: converting a loop into matrix operations

I'm currently taking Stanford's cs231n course. While implementing the softmax_loss function, I found it hard to write a fully vectorized version, especially for the dW term. Below is my code. Can anyone optimize it? It would be much appreciated.

import numpy as np

def softmax_loss_vectorized(W, X, y, reg):

  loss = 0.0
  dW = np.zeros_like(W)


  num_train = X.shape[0]
  num_classes = W.shape[1]

  scores = X.dot(W)
  scores -= np.max(scores, axis = 1)[:, np.newaxis]
  exp_scores = np.exp(scores)
  sum_exp_scores = np.sum(exp_scores, axis = 1)
  correct_class_score = scores[range(num_train), y]

  loss = np.sum(np.log(sum_exp_scores)) - np.sum(correct_class_score)

  exp_scores = exp_scores / sum_exp_scores[:,np.newaxis]

  # maybe this loop can be rewritten as matrix operations
  for i in range(num_train):
    dW += exp_scores[i] * X[i][:,np.newaxis]
    dW[:, y[i]] -= X[i]

  loss /= num_train
  loss += 0.5 * reg * np.sum( W*W )
  dW /= num_train
  dW += reg * W


  return loss, dW
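
Before vectorizing the loop, it helps to confirm that dW is correct at all. Below is a minimal numerical gradient check sketch; the helper name, the data shapes, and the step size h are assumptions for illustration, not part of the assignment.

import numpy as np

def numeric_grad_check(f, W, analytic_grad, num_checks=5, h=1e-5):
  # Hypothetical helper: compare the analytic gradient against a centered
  # finite difference at a few randomly chosen coordinates of W.
  for _ in range(num_checks):
    idx = tuple(np.random.randint(d) for d in W.shape)
    old = W[idx]
    W[idx] = old + h
    loss_plus = f(W)
    W[idx] = old - h
    loss_minus = f(W)
    W[idx] = old  # restore the entry
    grad_numeric = (loss_plus - loss_minus) / (2 * h)
    grad_analytic = analytic_grad[idx]
    rel_err = abs(grad_numeric - grad_analytic) / max(1e-12, abs(grad_numeric) + abs(grad_analytic))
    print(idx, grad_numeric, grad_analytic, rel_err)

# Small random data; shapes are arbitrary.
X = np.random.randn(50, 10)
y = np.random.randint(3, size=50)
W = 0.001 * np.random.randn(10, 3)
loss, dW = softmax_loss_vectorized(W, X, y, reg=0.1)
numeric_grad_check(lambda W_: softmax_loss_vectorized(W_, X, y, reg=0.1)[0], W, dW)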

Here is a vectorized implementation. But I'd suggest you spend a little more time and try to work it out yourself. The idea is to build a matrix of all the softmax values and subtract 1 from the entries at the correct classes.
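
In symbols (a hedged restatement, writing P for the row-wise softmax of the score matrix, Y for the one-hot encoding of the labels y, and N for num_train), the gradient this builds is

\[
\frac{\partial L}{\partial W} \;=\; \frac{1}{N}\, X^{\top} (P - Y) \;+\; \mathrm{reg} \cdot W
\]

which is exactly the grad_coeff construction in the code below: the softmax matrix with 1 subtracted at each correct class, multiplied by X transposed.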

def softmax_loss_vectorized(W, X, y, reg):
  num_train = X.shape[0]

  scores = X.dot(W)
  scores -= np.max(scores)
  correct_scores = scores[np.arange(num_train), y]

  # Compute the softmax of the correct class scores in bulk, and average the negative logs.
  exponents = np.exp(scores)
  sums_per_row = np.sum(exponents, axis=1)
  softmax_array = np.exp(correct_scores) / sums_per_row
  information_array = -np.log(softmax_array)
  loss = np.mean(information_array)

  # Compute the softmax over the whole scores matrix; each row holds the coefficients
  # for the corresponding row of X, and their linear combination is, algebraically,
  # a dot product with X transposed.
  all_softmax_matrix = (exponents.T / sums_per_row).T
  grad_coeff = np.zeros_like(scores)
  grad_coeff[np.arange(num_train), y] = -1
  grad_coeff += all_softmax_matrix
  dW = np.dot(X.T, grad_coeff) / num_train

  # Regularization
  loss += 0.5 * reg * np.sum(W * W)
  dW += reg * W

  return loss, dW
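
As a quick sanity check that this matrix form matches the original loop, both definitions can be kept under different names and compared on small random data. The aliases loop_version and matrix_version below are hypothetical (the two functions above share a name), and the shapes are arbitrary.

import numpy as np

# loop_version: the questioner's implementation; matrix_version: the one above.
np.random.seed(0)
X = np.random.randn(100, 20)
y = np.random.randint(5, size=100)
W = 0.001 * np.random.randn(20, 5)

loss_a, dW_a = loop_version(W, X, y, reg=0.1)
loss_b, dW_b = matrix_version(W, X, y, reg=0.1)

print(abs(loss_a - loss_b))          # should be ~0 up to floating-point error
print(np.max(np.abs(dW_a - dW_b)))   # should be ~0 up to floating-point error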

Thank you for the answer and the advice! I'm still working through it as you suggested.