Python: gradient of L2 matrix row normalization


I am trying to implement an L2 normalization layer for a convolutional neural network, but I am stuck on the backward pass:

def forward(self, inputs):
    x, = inputs
    # per-row L2 norm, kept with shape (batch, 1) so it broadcasts over each row
    self._norm = np.expand_dims(np.linalg.norm(x, ord=2, axis=1), axis=1)
    z = np.divide(x, self._norm)
    return z,

def backward(self, inputs, grad_outputs):
    x, = inputs
    gz, = grad_outputs
    gx = None  # how to compute the gradient here?
    return gx,
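
For context, here is a quick standalone check (made-up example data, not from the original post) of what this forward pass produces:

import numpy as np

x = np.array([[3.0, 4.0],
              [1.0, 1.0]])
norm = np.expand_dims(np.linalg.norm(x, ord=2, axis=1), axis=1)  # shape (2, 1)
z = np.divide(x, norm)
print(z)                          # [[0.6 0.8] [0.7071... 0.7071...]]
print(np.linalg.norm(z, axis=1))  # [1. 1.] -- every row now has unit L2 norm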
How do I compute gx? My first guess was

gx = - gz * x / self._norm**2
but this seems to be wrong.
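
A quick way to confirm that is a finite-difference gradient check; here is a minimal self-contained sketch (the names l2_normalize and numerical_gx are mine, not part of the original layer):

import numpy as np

def l2_normalize(x):
    return x / np.linalg.norm(x, ord=2, axis=1, keepdims=True)

rng = np.random.default_rng(0)
x = rng.normal(size=(2, 3))
gz = rng.normal(size=(2, 3))
norm = np.linalg.norm(x, ord=2, axis=1, keepdims=True)

# numerical gradient of sum(gz * l2_normalize(x)) with respect to x
eps = 1e-6
numerical_gx = np.zeros_like(x)
for idx in np.ndindex(*x.shape):
    x_plus, x_minus = x.copy(), x.copy()
    x_plus[idx] += eps
    x_minus[idx] -= eps
    numerical_gx[idx] = np.sum(gz * (l2_normalize(x_plus) - l2_normalize(x_minus))) / (2 * eps)

guess_gx = -gz * x / norm**2
print(np.allclose(guess_gx, numerical_gx, atol=1e-4))  # False: the guess does not match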

The backward pass has to differentiate through both the numerator and the stored norm, which itself depends on x. For z = x / ||x||, the chain rule gives, row by row,

gx = gz / ||x|| - x * (x · gz) / ||x||**3

so in terms of the values already available in backward:

gx = np.divide(gz, self._norm) - x * np.sum(x * gz, axis=1, keepdims=True) / self._norm**3

The first term on its own, np.divide(gz, self._norm), only matches the true gradient when gz happens to be orthogonal to x.
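
Written as standalone functions and checked against a derivative you can do by hand (the names l2norm_forward and l2norm_backward are mine, for illustration only):

import numpy as np

def l2norm_forward(x):
    norm = np.expand_dims(np.linalg.norm(x, ord=2, axis=1), axis=1)
    return x / norm, norm

def l2norm_backward(x, gz, norm):
    # row-wise: gx = gz/||x|| - x * (x · gz)/||x||**3, i.e. gz with its
    # component along z = x/||x|| removed, then divided by ||x||
    return gz / norm - x * np.sum(x * gz, axis=1, keepdims=True) / norm**3

x = np.array([[3.0, 4.0]])
gz = np.array([[1.0, 0.0]])
z, norm = l2norm_forward(x)        # z = [[0.6, 0.8]], norm = [[5.0]]
gx = l2norm_backward(x, gz, norm)
print(gx)                          # [[ 0.128 -0.096]]
# by hand: d(x0/||x||)/dx0 = (||x||**2 - x0**2) / ||x||**3 =  16/125 =  0.128
#          d(x0/||x||)/dx1 = -x0*x1 / ||x||**3            = -12/125 = -0.096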