- In this chapter we present the most popular methods of unconstrained optimization.
All the methods allow you to experiment with predefined test cases with well-known
properties using different choices of algorithmic parameters (in order to study
their effects and the limits of the methods), but you can also provide your own
test cases. It may be instructive to try several solvers on the same test case
using, as far as possible, the same settings.
Again, we limit the problem size here to keep the computations manageable on our
server, but in principle you can also use this as a compute service for small
examples.
- We present here so-called gradient-related methods, which make explicit use
of the gradient of the function (and, at least for their convergence analysis,
assume the existence of continuous second derivatives), as well as methods
which use function values only.
For the concepts behind the gradient-related methods look
here.
These are
- the classical gradient method, already used by Cauchy (a minimal sketch
follows this list),
- the method of conjugate gradients, originally designed for
convex quadratic functions only, but later generalized,
- the method of coordinate descent, very popular (in its block form)
e.g. in the field of support vector machines and for other
large-scale convex problems,
- the BFGS quasi-Newton method, the most popular and most
successful method, but unfortunately limited to medium-scale problems,
- a limited-memory quasi-Newton method,
- the damped Newton method, here enhanced by the use of directions of
negative curvature in order to identify second-order necessary points.
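To make the first of these concrete, here is a minimal sketch of the classical
gradient method with an Armijo backtracking rule. The test function (a simple
convex quadratic) and all parameter values are illustrative choices only, not
the defaults used on this site.

```python
# A minimal sketch of steepest descent with Armijo backtracking.
# Test function and parameters are illustrative, not the site's defaults.
import numpy as np

def gradient_method(f, grad, x0, tol=1e-6, max_iter=1000,
                    sigma=1e-4, beta=0.5):
    """Steepest descent with the Armijo step-size rule."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:   # first-order stopping test
            break
        d = -g                         # steepest-descent direction
        t = 1.0
        # Armijo test: f(x + t*d) <= f(x) + sigma * t * g.d
        while f(x + t * d) > f(x) + sigma * t * (g @ d):
            t *= beta                  # backtrack
        x = x + t * d
    return x

# Example: minimize the convex quadratic f(x) = 0.5 x'Ax - b'x
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b
print(gradient_method(f, grad, np.zeros(2)))
```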
- From the growing class of derivative-free methods we present the
popular Nelder-Mead simplex search, but in a mathematically sound version
(a rough stand-in is sketched below), and Powell's method NEWUOA, which is
based on quadratic interpolation and the trust-region concept.
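For offline experimentation, SciPy's general-purpose Nelder-Mead implementation
can serve as a rough stand-in. Note that this is not the mathematically sound
variant described above, and the Rosenbrock test function, starting point, and
tolerances below are illustrative choices.

```python
# A rough stand-in using SciPy's Nelder-Mead implementation -- NOT the
# variant described above. Test function and settings are illustrative.
import numpy as np
from scipy.optimize import minimize

rosenbrock = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
res = minimize(rosenbrock, x0=np.array([-1.2, 1.0]),
               method='Nelder-Mead',
               options={'xatol': 1e-8, 'fatol': 1e-8})
print(res.x, res.fun)
```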
- For the specialized one-dimensional case you have the choice between
four classical approaches: golden section search (sketched below), iterated
quadratic interpolation, and the descent tests used in the gradient-related
methods for determining the step length.
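Of these, golden section search is the easiest to sketch. The following minimal
version assumes the function is unimodal on the given interval; the test
function, interval, and tolerance are illustrative.

```python
# A minimal sketch of golden section search on [a, b], assuming f is
# unimodal there. Test function and interval are illustrative.
import math

def golden_section(f, a, b, tol=1e-8):
    invphi = (math.sqrt(5) - 1) / 2    # 1/phi, approx. 0.618
    c = b - invphi * (b - a)
    d = a + invphi * (b - a)
    fc, fd = f(c), f(d)
    while b - a > tol:
        if fc < fd:                    # minimizer lies in [a, d]
            b, d, fd = d, c, fc
            c = b - invphi * (b - a)
            fc = f(c)
        else:                          # minimizer lies in [c, b]
            a, c, fc = c, d, fd
            d = a + invphi * (b - a)
            fd = f(d)
    return 0.5 * (a + b)

print(golden_section(lambda x: (x - 2.0)**2 + 1.0, 0.0, 5.0))  # ~2.0
```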
- For the nonlinear least-squares problem we present the popular Levenberg-Marquardt
method, based on the trust-region concept; a minimal sketch follows. For this
type of problem you will also find the Gauss-Newton method in the chapter on
nonlinear systems of equations.
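For orientation, here is a minimal sketch of a Levenberg-Marquardt iteration
using a simple damping update rather than the full trust-region logic used
here; the exponential model, the synthetic data, and all parameter values are
illustrative.

```python
# A minimal Levenberg-Marquardt sketch with simple damping control --
# not the trust-region version described above. Model/data illustrative.
import numpy as np

def levenberg_marquardt(r, J, x0, lam=1e-3, max_iter=100, tol=1e-10):
    """Minimize 0.5*||r(x)||^2 given residual r and Jacobian J."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        rx, Jx = r(x), J(x)
        g = Jx.T @ rx                  # gradient of 0.5*||r||^2
        if np.linalg.norm(g) <= tol:
            break
        # Damped normal equations: (J'J + lam*I) p = -J'r
        p = np.linalg.solve(Jx.T @ Jx + lam * np.eye(x.size), -g)
        if 0.5 * (r(x + p) @ r(x + p)) < 0.5 * (rx @ rx):
            x, lam = x + p, lam * 0.5  # accept step, relax damping
        else:
            lam *= 10.0                # reject step, increase damping
    return x

# Fit y ~ exp(a*t) to synthetic data (a is the single unknown)
t = np.linspace(0.0, 1.0, 20)
y = np.exp(0.7 * t)
r = lambda x: np.exp(x[0] * t) - y
J = lambda x: (t * np.exp(x[0] * t)).reshape(-1, 1)
print(levenberg_marquardt(r, J, np.array([0.0])))  # approx. [0.7]
```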