
Nonlinear Minimization with Gradient and Hessian Sparsity Pattern

Next we solve the same problem, but this time the Hessian matrix is approximated by sparse finite differences instead of being computed explicitly. To use the large-scale method in fminunc, you must compute the gradient in fun; it is not optional as it is in the medium-scale method.

The M-file function brownfg computes the objective function and gradient.

Step 1: Write an M-file brownfg.m that computes the objective function and the gradient of the objective
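A minimal sketch of what brownfg.m might contain is shown below, using the tridiagonal Brown function from the related fminunc examples; the exact objective formula is an assumption here, since the text does not restate it.

    function [f,g] = brownfg(x)
    % BROWNFG  Objective function and gradient (sketch; Brown function
    % formula assumed from the related fminunc example).
    n = length(x);
    i = 1:(n-1);
    % Objective: sum of (x_i^2)^(x_{i+1}^2+1) + (x_{i+1}^2)^(x_i^2+1)
    y = zeros(n,1);
    y(i) = (x(i).^2).^(x(i+1).^2+1) + (x(i+1).^2).^(x(i).^2+1);
    f = sum(y);
    % Evaluate the gradient only when it is requested
    if nargout > 1
        g = zeros(n,1);
        g(i) = 2*(x(i+1).^2+1).*x(i).*((x(i).^2).^(x(i+1).^2)) + ...
               2*x(i).*((x(i+1).^2).^(x(i).^2+1)).*log(x(i+1).^2);
        g(i+1) = g(i+1) + ...
               2*x(i+1).*((x(i).^2).^(x(i+1).^2+1)).*log(x(i).^2) + ...
               2*(x(i).^2+1).*x(i+1).*((x(i+1).^2).^(x(i).^2));
    end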

To allow efficient computation of the sparse finite-difference approximation of the Hessian matrix H(x), the sparsity structure of H must be predetermined. In this case, assume this structure, Hstr, a sparse matrix, is available in the file brownhstr.mat. Using the spy command, you can see that Hstr is indeed sparse (only 2998 nonzeros). Use optimset to set the HessPattern parameter to Hstr. When a problem as large as this one has an obvious sparsity structure, not setting the HessPattern parameter requires a huge amount of unnecessary memory and computation, because fminunc attempts to use finite differencing on a full Hessian matrix of one million entries.
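For example, loading and inspecting the pattern might look like this (assuming brownhstr.mat is on the MATLAB path):

    load brownhstr      % loads the sparse matrix Hstr from brownhstr.mat
    spy(Hstr)           % visualize the nonzero structure of Hstr
    nnz(Hstr)           % count of nonzeros (2998 for this pattern)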

You must also set the GradObj parameter to 'on' using optimset, since the gradient is computed in brownfg.m. Then execute fminunc as shown in Step 2.

Step 2: Call a nonlinear minimization routine with a starting point xstart
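A minimal sketch of this call, assuming a 1000-element starting point that alternates between -1 and 1 (the particular xstart is an assumption; any reasonable starting vector can be used):

    fun = @brownfg;                    % objective and gradient from Step 1
    load brownhstr                     % loads Hstr, the Hessian sparsity pattern
    n = 1000;
    xstart = -ones(n,1);               % assumed starting point: -1 at odd indices ...
    xstart(2:2:n) = 1;                 % ... and 1 at even indices
    options = optimset('GradObj','on','HessPattern',Hstr);
    [x,fval,exitflag,output] = fminunc(fun,xstart,options);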

This 1000-variable problem is solved in 8 iterations and 7 conjugate gradient iterations, with a positive exitflag indicating convergence. The final function value and the measure of optimality at the solution x are both close to zero. (For fminunc, the first-order optimality measure is the infinity norm of the gradient of the function, which is zero at a local minimum.)
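To confirm these quantities yourself, one way (not part of the original example) is to query the output structure and re-evaluate the gradient at the returned solution:

    output.iterations                  % number of iterations taken
    output.cgiterations                % number of conjugate gradient iterations
    output.firstorderopt               % first-order optimality measure at x
    [fval, grad] = brownfg(x);         % re-evaluate objective and gradient at x
    norm(grad, Inf)                    % infinity norm of the gradient, ~0 at a local minimum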

