Optimization: Checking the gradient when doing gradient descent

Tags: optimization, neural-network, gradient, derivative, backpropagation

I am trying to implement training of a feed-forward, back-propagating autoencoder with gradient descent, and I want to verify that I am computing the gradient correctly. The suggested approach is to compute the derivative with respect to one parameter at a time: grad_i(theta) = (J(theta_i + epsilon) - J(theta_i - epsilon)) / (2*epsilon). I have written a piece of sample code in Matlab to do this, but without much luck: the difference between the gradient computed from the derivative and the numerically computed gradient tends to be large (the two disagree well before the 4th significant figure).

If anyone can offer any advice on my gradient computation, or on how I perform the check, I would greatly appreciate the help. Since I have simplified the code considerably to make it more readable, I have not included any biases, and I am no longer tying the weight matrices.
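For reference, the centered-difference check can be wrapped in a small reusable helper. This is a minimal sketch of the idea, not code from any library; the name numericalGradient and the cost-function handle J are mine:

function numGrad = numericalGradient(J, theta, epsilon)
% Centered-difference estimate of the gradient of J at theta.
%   J       - function handle returning a scalar cost, J(theta)
%   theta   - parameter vector (unrolled)
%   epsilon - perturbation size, e.g. 1e-4
numGrad = zeros(size(theta));
for i = 1:numel(theta)
    e = zeros(size(theta));
    e(i) = epsilon;                                   % perturb one parameter
    numGrad(i) = (J(theta + e) - J(theta - e)) / (2*epsilon);
end
end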

First, I initialize the variables:

numHidden = 200;
numVisible = 784;
low = -4*sqrt(6./(numHidden + numVisible));
high = 4*sqrt(6./(numHidden + numVisible));
encoder = low + (high-low)*rand(numVisible, numHidden);
decoder = low + (high-low)*rand(numHidden, numVisible);
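The snippets below call sigmoid, which is not defined in the excerpt; a minimal elementwise definition (my own sketch, in case no toolbox version is on the path) is:

sigmoid = @(h) 1 ./ (1 + exp(-h));   % elementwise logistic function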
Next, given some input image x, I do feed-forward propagation:

a = sigmoid(x*encoder);
z = sigmoid(a*decoder); % (reconstruction of x)
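Assuming a single MNIST image is stored as a row vector (the rest of the code is consistent with this), the shapes are:

% x : 1 x 784 (numVisible)  one input image
% a : 1 x 200 (numHidden)   hidden activations
% z : 1 x 784 (numVisible)  reconstruction of x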
The cost function I am using is the standard sum(0.5*(z - x).^2):

% First calculate the error by finding the derivative of sum(0.5*(z-x).^2),
% which is (f(h)-x).*f'(h), where z = f(h), h = a*decoder, and
% f is the sigmoid. Since the derivative of the sigmoid is
% sigmoid.*(1 - sigmoid), we get:
error_0 = (z - x).*z.*(1-z);

% The gradient \Delta w_{ji} = error_j*a_i
gDecoder = error_0'*a;   % numVisible x numHidden

% not important, but included for completeness:
% back-propagate the error one layer down
error_1 = (error_0*encoder).*a.*(1-a);
gEncoder = error_1'*x;
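As a shape sanity check: writing the per-element rule \Delta w_{ji} = error_j*a_i in matrix form for a row-vector x (a sketch, using the dimensions from the initialization above) gives:

% error_0 is 1 x numVisible and a is 1 x numHidden, so
gDecoderMat = a'*error_0;   % numHidden x numVisible, the same shape as decoder

Note that this is the transpose of gDecoder above; once both matrices are unrolled with (:), the element orderings differ, which matters for an element-by-element comparison against perturbations of decoder(:).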
Finally, I check whether the gradient is correct (in this case, just for the decoder):

epsilon = 10e-5;
check = gDecoder(:); % the values we obtained above
for i = 1:size(decoder(:), 1)
    % calculate J+
    theta = decoder(:); % unroll
    theta(i) = theta(i) + epsilon;
    decoderp = reshape(theta, size(decoder)); % re-roll
    a = sigmoid(x*encoder);
    z = sigmoid(a*decoderp);
    Jp = sum(0.5*(z - x).^2);

    % calculate J-
    theta = decoder(:);
    theta(i) = theta(i) - epsilon;
    decoderp = reshape(theta, size(decoder));
    a = sigmoid(x*encoder);
    z = sigmoid(a*decoderp);
    Jm = sum(0.5*(z - x).^2);

    grad_i = (Jp - Jm) / (2*epsilon);
    diff = abs(grad_i - check(i));
    fprintf('%d: %f <=> %f: %f\n', i, grad_i, check(i), diff);
end
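A common refinement, not used in the check above, is to report a relative rather than an absolute difference, so the verdict does not depend on the scale of the individual gradients. Inside the loop this would look like (my own variant; eps is Matlab's machine epsilon, used to avoid division by zero):

relDiff = abs(grad_i - check(i)) / max(abs(grad_i) + abs(check(i)), eps);

With epsilon = 10e-5 (i.e. 1e-4) and double-precision arithmetic, a correct implementation typically shows relative differences around 1e-7 or smaller.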
Running this for the first entry of the MNIST dataset yields results like these:

2: 0.093885 <=> 0.028398: 0.065487
3: 0.066285 <=> 0.031096: 0.035189
5: 0.053074 <=> 0.019839: 0.033235
6: 0.108249 <=> 0.042407: 0.065843
7: 0.091576 <=> 0.009014: 0.082562

Do not use the sigmoid on both a and z. Just use it on z:

a = x*encoder;
z = sigmoid(a*decoderp);
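One consequence of making the hidden layer linear, if the back-propagation step is kept: the .*a.*(1-a) factor in error_1 no longer applies, since the derivative of a linear activation is 1.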