New Calling Sequences
Version 2 of the toolbox makes these changes in the calling sequences:
- options.GradObj = 'on' indicates the user-supplied gradient of the objective function is available.
- options.GradConstr = 'on' indicates the user-supplied gradient of the constraints is available.
- options.Hessian = 'on' indicates the user-supplied Hessian of the objective function is available.
- Each function takes an options structure to adjust parameters to the optimization functions (see optimset, optimget).
- options.Display = 'final' displays only the final output.
- exitflag is a new output argument that denotes the termination state.
- options.LargeScale = 'off' selects the medium-scale algorithms instead of the new large-scale algorithms.
- Algorithm terminating conditions have been fine-tuned. The stopping conditions relating to TolX and TolFun for the large-scale and medium-scale code are joined using OR instead of AND for these functions: fgoalattain, fmincon, fminimax, fminunc, fseminf, fsolve, and lsqnonlin. As a result, you may need to specify stricter tolerances; the defaults reflect this change.
- Each function now has an output structure that contains information about the problem solution relevant to that function.
- lambda is now a structure in which each field contains the Lagrange multipliers for one type of constraint. For more information, see the individual functions in the Function Reference chapter.
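For example, a minimal sketch (with illustrative option values, not taken from the original manual) of setting and reading these parameters:

OPTIONS = optimset('GradObj','on','Display','final','LargeScale','off');
optimget(OPTIONS,'Display')     % returns 'final'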
The sections below describe how to convert from the old function names and calling sequences to the new ones. The calls shown are the most general cases, involving all possible input and output arguments. Note that many of these arguments are optional; see the online help for these functions for more information.
Converting from attgoal to fgoalattain
In Version 1.5, you used this call to attgoal.
OPTIONS = foptions;
[X,OPTIONS] = attgoal('FUN',x0,GOAL,WEIGHT,OPTIONS,VLB,VUB,'GRADFUN',P1,P2,...);
with [F] = FUN(X,P1,...) and [DF] = GRADFUN(X,P1,...).
In Version 2, you call fgoalattain like this.
OPTIONS = optimset('fgoalattain');
[X,FVAL,ATTAINFACTOR,EXITFLAG,OUTPUT,LAMBDA] = fgoalattain(@FUN,x0,GOAL,WEIGHT,A,B,Aeq,Beq,VLB,VUB,@NONLCON,OPTIONS,P1,P2,...);
with [F,DF] = FUN(X,P1,P2,...) and NONLCON = [].
The fgoalattain function now allows nonlinear constraints, so you can define
[Cineq,Ceq,DCineq,DCeq] = NONLCON(X,P1,...)
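As a minimal sketch (a hypothetical problem, not from this guide), FUN and NONLCON could be written as follows, each saved in its own M-file; the derivative matrices have one column per objective or constraint:

function [F,DF] = FUN(x)
% Two objectives and their gradients; column j of DF is the gradient of F(j).
F  = [x(1)^2+x(2)^2;
      (x(1)-1)^2];
DF = [2*x(1)  2*(x(1)-1);
      2*x(2)  0];

function [Cineq,Ceq,DCineq,DCeq] = NONLCON(x)
% One nonlinear inequality, x(1) + x(2) - 1 <= 0, and no equalities.
Cineq  = x(1)+x(2)-1;
Ceq    = [];
DCineq = [1; 1];    % gradient of the inequality constraint
DCeq   = [];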
Converting from conls to lsqlin
In Version 1.5, you used this call to conls.
[X,LAMBDA,HOW] = conls(A,b,C,d,VLB,VUB,X0,N,DISPLAY);
In Version 2, convert the input arguments to the correct form for lsqlin
by separating the equality and inequality constraints.
Ceq = C(1:N,:);
deq = d(1:N);
C = C(N+1:end,:);
d = d(N+1:end,:);
Now call lsqlin
like this.
OPTIONS = optimset('Display','final');
[X,RESNORM,RESIDUAL,EXITFLAG,OUTPUT,LAMBDA] = lsqlin(A,b,C,d,Ceq,deq,VLB,VUB,X0,OPTIONS);
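For instance, with hypothetical data (not from the original manual) in which the first N = 1 constraint row is an equality:

A = [1 2; 3 4; 5 6];  b = [7; 8; 9];   % least-squares system ||A*x - b||
C = [1 1; 1 0; 0 1];  d = [1; 2; 3];   % constraint rows; the first N are equalities
N = 1;
Ceq = C(1:N,:);   deq = d(1:N);        % x(1) + x(2) = 1
C = C(N+1:end,:); d = d(N+1:end,:);    % x(1) <= 2 and x(2) <= 3
OPTIONS = optimset('Display','final');
[X,RESNORM] = lsqlin(A,b,C,d,Ceq,deq,[],[],[],OPTIONS);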
Converting from constr to fmincon
In Version 1.5, you used this call to constr.
[X,OPTIONS,LAMBDA,HESS] = constr('FUN',x0,OPTIONS,VLB,VUB,'GRADFUN',P1,P2,...);
with [F,C] = FUN(X,P1,...) and [G,DC] = GRADFUN(X,P1,...).
In Version 2, replace FUN and GRADFUN with two new functions:
- OBJFUN, which returns the objective function, the gradient (first derivative) of this function, and its Hessian matrix (second derivative):
  [F,G,H] = OBJFUN(X,P1,...)
- NONLCON, which returns the functions for the nonlinear constraints (both inequality and equality constraints) and their gradients:
  [C,Ceq,DC,DCeq] = NONLCON(X,P1,...)
Now call fmincon
like this.
% OBJFUN supplies the objective gradient and Hessian;
% NONLCON supplies the constraint gradient.
OPTIONS = optimset('GradObj','on','GradConstr','on','Hessian','on');
[X,FVAL,EXITFLAG,OUTPUT,LAMBDA,GRAD,HESSIAN] = fmincon(@OBJFUN,x0,A,B,Aeq,Beq,VLB,VUB,@NONLCON,OPTIONS,P1,P2,...);
See Example of Converting from constr to fmincon for a detailed example.
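For a quick sketch (a hypothetical problem, distinct from the detailed example referenced above), the two functions might look like this, each in its own M-file:

function [F,G,H] = OBJFUN(x)
% Objective, gradient, and Hessian returned together.
F = x(1)^2+3*x(2)^2;
G = [2*x(1); 6*x(2)];
H = [2 0; 0 6];

function [C,Ceq,DC,DCeq] = NONLCON(x)
% One nonlinear inequality, x(1)^2 + x(2)^2 - 1 <= 0, and no equalities.
C    = x(1)^2+x(2)^2-1;
Ceq  = [];
DC   = [2*x(1); 2*x(2)];   % gradient of the inequality constraint
DCeq = [];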
Converting from curvefit to lsqcurvefit
In Version 1.5, you used this call to curvefit.
[X,OPTIONS,FVAL,JACOBIAN] = curvefit('FUN',x0,XDATA,YDATA,OPTIONS,'GRADFUN',P1,P2,...);
with F = FUN(X,P1,...) and G = GRADFUN(X,P1,...).
In Version 2, replace FUN and GRADFUN with a single function that returns both F and J, the objective function and the Jacobian. (The Jacobian is the transpose of the gradient.)
[F,J] = OBJFUN(X,P1,...)
Now call lsqcurvefit
like this.
OPTIONS = optimset('Jacobian','on');   % Jacobian is supplied
VLB = []; VUB = [];                    % New arguments not in curvefit
[X,RESNORM,F,EXITFLAG,OUTPUT,LAMBDA,JACOB] = lsqcurvefit(@OBJFUN,x0,XDATA,YDATA,VLB,VUB,OPTIONS,P1,P2,...);
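As an illustration only (a hypothetical exponential model, assuming lsqcurvefit passes the XDATA column vector as the second input to OBJFUN), the combined function might be:

function [F,J] = OBJFUN(x,XDATA)
% Model F = x(1)*exp(x(2)*XDATA) evaluated at the column vector XDATA;
% row i of J holds the derivatives of F(i) with respect to x(1) and x(2).
F = x(1)*exp(x(2)*XDATA);
J = [exp(x(2)*XDATA)  x(1)*XDATA.*exp(x(2)*XDATA)];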
Converting from fmin to fminbnd
In Version 1.5, you used this call to fmin.
[X,OPTIONS] = fmin('FUN',x1,x2,OPTIONS,P1,P2,...);
In Version 2, you call fminbnd
like this.
[X,FVAL,EXITFLAG,OUTPUT] = fminbnd(@FUN,x1,x2,OPTIONS,P1,P2,...);
Converting from fmins to fminsearch
In Version 1.5, you used this call to fmins.
[X,OPTIONS] = fmins('FUN',x0,OPTIONS,[],P1,P2,...);
In Version 2, you call fminsearch
like this.
[X,FVAL,EXITFLAG,OUTPUT] = fminsearch(@FUN,x0,OPTIONS,P1,P2,...);
Converting from fminu to fminunc
In Version 1.5, you used this call to fminu.
[X,OPTIONS] = fminu('FUN',x0,OPTIONS,'GRADFUN',P1,P2,...);
with F = FUN(X,P1,...) and G = GRADFUN(X,P1,...).
In Version 2, replace FUN and GRADFUN with a single function that returns both F and G (the objective function and the gradient).
[F,G] = OBJFUN(X,P1, ...)
(This function can also return the Hessian matrix as a third output argument.)
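For example, a combined function for a hypothetical quadratic objective (an illustration, not from the original manual) might be:

function [F,G] = OBJFUN(x)
% Objective and its gradient computed in one call.
F = 3*x(1)^2+2*x(1)*x(2)+x(2)^2;
G = [6*x(1)+2*x(2);
     2*x(1)+2*x(2)];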
Now call fminunc
like this.
OPTIONS = optimset('GradObj','on');   % Gradient is supplied
[X,FVAL,EXITFLAG,OUTPUT,GRAD,HESSIAN] = fminunc(@OBJFUN,x0,OPTIONS,P1,P2,...);
If you have an existing FUN and GRADFUN that you do not want to rewrite, you can pass them both to fminunc by placing them in a cell array.
OPTIONS = optimset('GradObj','on');   % Gradient is supplied
[X,FVAL,EXITFLAG,OUTPUT,GRAD,HESSIAN] = fminunc({@FUN,@GRADFUN},x0,OPTIONS,P1,P2,...);
Converting to the new form of fsolve
In Version 1.5, you used this call to fsolve.
[X,OPTIONS] = fsolve('FUN',x0,OPTIONS,'GRADFUN',P1,P2,...);
with F = FUN(X,P1,...) and G = GRADFUN(X,P1,...).
In Version 2, replace FUN and GRADFUN with a single function that returns both F and G (the objective function and the gradient).
[F,G] = OBJFUN(X,P1, ...)
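As a small sketch (a hypothetical pair of equations to drive to zero, chosen so that the matrix of derivatives is symmetric and its orientation is therefore unambiguous):

function [F,G] = OBJFUN(x)
% Two equations F(x) = 0 and their derivatives.
F = [2*x(1)-x(2)-1;
     -x(1)+2*x(2)-1];
G = [2 -1;
     -1 2];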
Now call fsolve
like this.
OPTIONS = optimset('GradObj','on');   % Gradient is supplied
[X,FVAL,EXITFLAG,OUTPUT,JACOBIAN] = fsolve(@OBJFUN,x0,OPTIONS,P1,P2,...);
If you have an existing FUN and GRADFUN that you do not want to rewrite, you can pass them both to fsolve by placing them in a cell array.
OPTIONS = optimset('GradObj','on');   % Gradient is supplied
[X,FVAL,EXITFLAG,OUTPUT,JACOBIAN] = fsolve({@FUN,@GRADFUN},x0,OPTIONS,P1,P2,...);
Converting to the new form of fzero
In Version 1.5, you used this call to fzero.
X = fzero('F',X,TOL,TRACE,P1,P2,...);
In Version 2, replace the TRACE and TOL arguments with
if TRACE == 0
   val = 'none';
elseif TRACE == 1
   val = 'iter';
end
OPTIONS = optimset('Display',val,'TolX',TOL);
Now call fzero
like this.
[X,FVAL,EXITFLAG,OUTPUT] = fzero(@F,X,OPTIONS,P1,P2,...);
Converting from leastsq to lsqnonlin
In Version 1.5, you used this call to leastsq.
[X,OPTIONS,FVAL,JACOBIAN] = leastsq('FUN',x0,OPTIONS,'GRADFUN',P1,P2,...);
with F = FUN(X,P1,...) and G = GRADFUN(X,P1,...).
In Version 2, replace FUN and GRADFUN with a single function that returns both F and J, the objective function and the Jacobian. (The Jacobian is the transpose of the gradient.)
[F,J] = OBJFUN(X,P1, ...)
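As a sketch (a hypothetical two-residual problem), the combined function might be written as follows; each row of J differentiates one residual, which is why J is the transpose of the old gradient:

function [F,J] = OBJFUN(x)
% Residual vector and its Jacobian (row i contains the derivatives of F(i)).
F = [10*(x(2)-x(1)^2);
     1-x(1)];
J = [-20*x(1)  10;
     -1        0];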
Now call lsqnonlin
like this.
OPTIONS = optimset('Jacobian','on');   % Jacobian is supplied
VLB = []; VUB = [];                    % New arguments not in leastsq
[X,RESNORM,F,EXITFLAG,OUTPUT,LAMBDA,JACOBIAN] = lsqnonlin(@OBJFUN,x0,VLB,VUB,OPTIONS,P1,P2,...);
Converting from lp to linprog
In Version 1.5, you used this call to lp.
[X,LAMBDA,HOW] = lp(f,A,b,VLB,VUB,X0,N,DISPLAY);
In Version 2, convert the input arguments to the correct form for linprog
by separating the equality and inequality constraints.
Aeq = A(1:N,:);
beq = b(1:N);
A = A(N+1:end,:);
b = b(N+1:end,:);
if DISPLAY
   val = 'final';
else
   val = 'none';
end
OPTIONS = optimset('Display',val);
Now call linprog
like this.
[X,FVAL,EXITFLAG,OUTPUT,LAMBDA] = linprog(f,A,b,Aeq,beq,VLB,VUB,X0,OPTIONS);
Converting from minimax to fminimax
In Version 1.5, you used this call to minimax.
[X,OPTIONS] = minimax('FUN',x0,OPTIONS,VLB,VUB,'GRADFUN',P1,P2,...);
with F = FUN(X,P1,...) and G = GRADFUN(X,P1,...).
In Version 2, you call fminimax like this.
OPTIONS = optimset('fminimax');
[X,FVAL,MAXFVAL,EXITFLAG,OUTPUT,LAMBDA] = fminimax(@OBJFUN,x0,A,B,Aeq,Beq,VLB,VUB,@NONLCON,OPTIONS,P1,P2,...);
with [F,DF] = OBJFUN(X,P1,...) and [Cineq,Ceq,DCineq,DCeq] = NONLCON(X,P1,...).
Converting from nnls to lsqnonneg
In Version 1.5, you used this call to nnls.
[X,LAMBDA] = nnls(A,b,tol);
In Version 2, replace the tol argument with
OPTIONS = optimset('Display','none','TolX',tol);
Now call lsqnonneg
like this.
[X,RESNORM,RESIDUAL,EXITFLAG,OUTPUT,LAMBDA] = lsqnonneg(A,b,X0,OPTIONS);
Converting from qp to quadprog
In Version 1.5, you used this call to qp.
[X,LAMBDA,HOW] = qp(H,f,A,b,VLB,VUB,X0,N,DISPLAY);
In Version 2, convert the input arguments to the correct form for quadprog
by separating the equality and inequality constraints.
Aeq = A(1:N,:);
beq = b(1:N);
A = A(N+1:end,:);
b = b(N+1:end,:);
if DISPLAY
   val = 'final';
else
   val = 'none';
end
OPTIONS = optimset('Display',val);
Now call quadprog
like this.
[X,FVAL,EXITFLAG,OUTPUT,LAMBDA] = quadprog(H,f,A,b,Aeq,beq,VLB,VUB,X0,OPTIONS);
Converting from seminf to fseminf
In Version 1.5, you used this call to seminf.
[X,OPTIONS] = seminf('FUN',N,x0,OPTIONS,VLB,VUB,P1,P2,...);
with [F,C,PHI1,PHI2,...,PHIN,S] = FUN(X,S,P1,P2,...).
In Version 2, call fseminf
like this.
[X,FVAL,EXITFLAG,OUTPUT,LAMBDA] = fseminf(@OBJFUN,x0,N,@NONLCON,A,B,Aeq,Beq,VLB,VUB,OPTIONS, P1,P2,...);
with F = OBJFUN(X,P1,...) and [Cineq,Ceq,PHI1,PHI2,...,PHIN,S] = NONLCON(X,S,P1,...).
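A minimal sketch of such a constraint function for N = 1 (a hypothetical semi-infinite constraint; the structure, in particular initializing the sampling interval S when it arrives as NaN, follows the pattern fseminf expects, but the constraint itself is invented):

function [Cineq,Ceq,PHI1,S] = NONLCON(x,S)
% Initialize the sampling interval for the semi-infinite variable w.
if isnan(S(1,1))
   S = [0.2 0];
end
w = 1:S(1,1):100;                 % sample points for the semi-infinite variable
PHI1 = sin(w*x(1))-x(2)-1;        % semi-infinite constraint PHI1(x,w) <= 0
Cineq = [];                       % no ordinary nonlinear inequalities
Ceq = [];                         % no nonlinear equalities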