Adam optimizer in MatConvNet

Tags: optimization, neural-network, deep-learning, stochastic, matconvnet

I am trying to implement Adam instead of the default SGD optimizer by changing the following code in cnn_train:

opts.solver = [] ;  % Empty array means use the default SGD solver
[opts, varargin] = vl_argparse(opts, varargin) ;
if ~isempty(opts.solver)
  assert(isa(opts.solver, 'function_handle') && nargout(opts.solver) == 2,...
    'Invalid solver; expected a function handle with two outputs.') ;
  % Call without input arguments, to get default options
  opts.solverOpts = opts.solver() ;
end
to:

opts.solver = 'adam' ;
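
As an aside, the last line of the original snippet, opts.solverOpts = opts.solver(), calls the solver with no inputs to obtain its default options. With the adam solver shown further down, that call yields roughly the following (a minimal check, assuming adam.m is on the MATLAB path):

    solverOpts = adam()
    % solverOpts =
    %     beta1: 0.9000
    %     beta2: 0.9990
    %       eps: 1.0000e-08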

However, I get an error:

Insufficient number of outputs from right hand side of equal sign to satisfy assignment.
Error in cnn_train>accumulateGradients (line 508)
params.solver(net.layers{l}.weights{j}, state.solverState{l}{j}, ...
Have any of you tried changing the default solver? What else do I need to change in cnn_train?
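
A candidate solver can be checked against the same contract that the assert above enforces; a small hypothetical sanity check at the MATLAB prompt (nothing here comes from the original post):

    % Mirror cnn_train's assert for a candidate solver handle
    solver = @adam ;
    assert(isa(solver, 'function_handle') && nargout(solver) == 2, ...
      'Invalid solver; expected a function handle with two outputs.') ;
    defaults = solver()   % should display the default options struct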


The code of the adam function:

function [w, state] = adam(w, state, grad, opts, lr)
%ADAM
%   Adam solver for use with CNN_TRAIN and CNN_TRAIN_DAG
%
%   See [Kingma et al., 2014](http://arxiv.org/abs/1412.6980)
%    |  ([pdf](http://arxiv.org/pdf/1412.6980.pdf)).
%
%   If called without any input argument, returns the default options
%   structure. Otherwise provide all input arguments.
%   
%   W is the vector/matrix/tensor of parameters. It can be single/double
%   precision and can be a `gpuArray`.
%
%   STATE is as defined below and so are supported OPTS.
%
%   GRAD is the gradient of the objective w.r.t W
%
%   LR is the learning rate, referred to as \alpha by Algorithm 1 in
%   [Kingma et al., 2014].
%
%   Solver options: (opts.train.solverOpts)
%
%   `beta1`:: 0.9
%      Decay for 1st moment vector. See Algorithm 1 in [Kingma et al., 2014]
%
%   `beta2`:: 0.999
%      Decay for 2nd moment vector
%
%   `eps`:: 1e-8
%      Additive offset when dividing by state.v
%
%   The state is initialized as 0 (number) to start with. The first call to
%   this function will initialize it with the default state consisting of
%
%   `m`:: 0
%      First moment vector
%
%   `v`:: 0
%      Second moment vector
%
%   `t`:: 0
%      Global iteration number across epochs
%
%   This implementation is borrowed from torch optim.adam

% Copyright (C) 2016 Aravindh Mahendran.
% All rights reserved.
%
% This file is part of the VLFeat library and is made available under
% the terms of the BSD license (see the COPYING file).

if nargin == 0 % Returns the default solver options
  w = struct('beta1', 0.9, 'beta2', 0.999, 'eps', 1e-8) ;
  return ;
end

if isequal(state, 0) % start off with state = 0 so as to get default state
  state = struct('m', 0, 'v', 0, 't', 0);
end

% update first moment vector `m`
state.m = opts.beta1 * state.m + (1 - opts.beta1) * grad ;

% update second moment vector `v`
state.v = opts.beta2 * state.v + (1 - opts.beta2) * grad.^2 ;

% update the time step
state.t = state.t + 1 ;

% This implicitly corrects for biased estimates of first and second moment
% vectors
lr_t = lr * (((1 - opts.beta2^state.t)^0.5) / (1 - opts.beta1^state.t)) ;
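% Equivalently (up to where eps enters the denominator), this matches
% Algorithm 1 of [Kingma et al., 2014]:
%   m_hat = m / (1 - beta1^t),   v_hat = v / (1 - beta2^t)
%   w     = w - lr * m_hat ./ (sqrt(v_hat) + eps)
% with both bias corrections folded into the scalar rate lr_t.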

% Update `w`
w = w - lr_t * state.m ./ (state.v.^0.5 + opts.eps) ;
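
The solver can also be exercised on its own before wiring it into cnn_train; a minimal standalone smoke test (the data below is made up for illustration):

    % Standalone test of the adam solver above; illustrative values only
    opts  = adam() ;                 % default beta1/beta2/eps
    state = 0 ;                      % state is lazily initialized on first call
    w     = single(randn(10, 1)) ;
    grad  = single(randn(10, 1)) ;
    [w, state] = adam(w, state, grad, opts, 1e-3) ;   % one Adam step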
"Insufficient number of outputs from right hand side of equal sign to satisfy assignment." The number of outputs of your solver does not seem to match what cnn_train expects. Can you show your adam function?

In the latest version of MatConvNet, the solver is called as

    [net.layers{l}.weights{j}, state.solverState{l}{j}] = ...
        params.solver(net.layers{l}.weights{j}, state.solverState{l}{j}, ...
        parDer, params.solverOpts, thisLR) ;
This call seems to match your adam function.

Why don't you try this:

opts.solver = @adam;

instead of opts.solver = 'adam'?
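
In other words, pass a function handle so that both the assert and the two-output call succeed. A minimal sketch of a full training call, assuming the standard cnn_train signature and that adam.m is on the path (net, imdb, and getBatch are placeholders):

    % Hypothetical training call; net, imdb, getBatch stand in for your own
    [net, info] = cnn_train(net, imdb, getBatch, ...
        'solver', @adam, ...         % function handle, not the string 'adam'
        'learningRate', 1e-3, ...
        'numEpochs', 20) ;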

Hi, does anyone know the answer? Thanks in advance! I used the default function; I have modified my question and added it there.