Overview for this chapter

 
  • In this chapter we present the most popular methods of constrained optimization. All the methods let you experiment with predefined testcases with well-known properties under different choices of the algorithmic parameters (in order to study their effects and the possible limits of the methods), but you can also provide your own testcases. It may be instructive to try several solvers on the same testcase using settings as similar as possible. Again, we limit the problem size here in order to keep things manageable on our server, but in principle you can (for small examples) also use this as a compute service.
  • There are specialized solvers for the LP problem (Dantzig's classical simplex method and Karmarkar's dual affine scaling) and for the convex QP problem (the Goldfarb-Idnani dual QP solver), but of course all the solvers for nonlinear problems also accept LP and QP testcases (and sometimes do a better job here than the specialized codes). For the nonlinear case we provide the classical penalty methods, the augmented Lagrangian method, the SQP method and the active set method (generalized reduced gradient).
  • Our general form of the constrained problem is f(x) = min, h(x) = 0, g(x) >= 0. A tiny testcase written in this form is sketched after this list.
    If you are not acquainted with the basic theory of constrained optimization, reading this short introduction should suffice.
  • For the simplex method we use the classical ''tableau'' form here, since it is most easily understood and can be presented compactly on the screen. In terms of numerical stability and efficiency this is far from the state of the art, but understanding the method in terms of matrix decompositions and their updates may be cumbersome for the beginner. A minimal tableau implementation is sketched after this list.
  • For the dual affine scaling we employ the QR decomposition in full matrix form, which is sufficient for small testcases but likewise far from the state of the art for real-life problems (see the sketch after this list).
  • The Goldfarb-Idnani method is implemented exactly as described in the original paper (D. Goldfarb and A. Idnani, Mathematical Programming 27, 1983), with the extension of allowing equality constraints.
  • For LP and QP an active set method is also available as part of the code BFGS-grg in the section for nonlinear problems. In the QP case we work with the exact Hessian and exact one-dimensional minimization, so that this is an ''exact'' (that means finite) primal QP solver, as opposed to the dual approach of Goldfarb and Idnani. In the LP case you can present the problem with general inequality constraints (i.e. the dual of the form used in the standard simplex formulation); hence this provides you with the dual simplex method.
  • The nonlinear solvers presently available here all require exact gradients. We spare you the task of providing these by using a tool known as "automatic differentiation": a code which reads the function code, decodes it like a compiler and produces from the result a new code for computing the gradient analytically. This is achieved by applying the elementary rules of differentiation to the computing formulas of the function evaluation. We use JAKEF from netlib for this; a toy illustration of the principle follows after this list.
  • The penalty method implements the classical ''quadratic loss'' function, in which weighted squares of the constraint values are added to the objective function before unconstrained minimization (here the iterates usually stay infeasible while approaching the solution), as well as the ''logarithmic barrier'' function (for inequalities), which adds terms that grow to infinity on the boundary of the feasible set but are damped more and more as the weighting parameter goes to zero; here the iterates stay strictly feasible. Both variants are sketched after this list.
  • The augmented Lagrangian method reformulates the constrained optimization problem as an unconstrained max-min problem. This works well in the convex case but can be quite problematic in the general case. See the sketch after this list.
  • The SQP solver used here employs the classical L1 exact penalty function to guarantee convergence (a so-called ''merit function'' approach). This code is quite competitive with the best currently known solvers, but is restricted to small to medium scale problems. A simplified sketch follows after this list.
  • The grg code uses the BFGS unconstrained minimizer (via a projected Hessian approximation) on the constraint surfaces and performs ''restoration'' (the return to the constraining surfaces) by the simplified Newton method; hence it is a classical active set method. As mentioned above, the code contains special sections for dealing with LP and QP problems, maintaining finite solvability of these problems. The restoration step is sketched after this list.
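The following sketches are small Python/NumPy illustrations written for this overview; they are not the Fortran codes actually running on this server, and all concrete functions, parameter values and tolerances in them are simplified choices of ours. First, a tiny testcase written in our general form f(x) = min, h(x) = 0, g(x) >= 0 (the concrete functions are an illustrative choice, not one of the predefined testcases):

    import numpy as np

    def f(x):                    # objective: squared distance to the point (2, 2)
        return (x[0] - 2.0)**2 + (x[1] - 2.0)**2

    def h(x):                    # equality constraint: x lies on the unit circle
        return np.array([x[0]**2 + x[1]**2 - 1.0])

    def g(x):                    # inequality constraint: first coordinate nonnegative
        return np.array([x[0]])

    x = np.array([1.0, 0.0])     # a feasible point: h(x) = 0 and g(x) >= 0
    print(f(x), h(x), g(x))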
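A minimal tableau simplex in the same didactic spirit as the one offered here. To stay short it assumes b >= 0 (so the all-slack basis is feasible and no phase I is needed) and uses Dantzig's pivot rule without any anti-cycling safeguard; both restrictions are simplifications of ours:

    import numpy as np

    def simplex_tableau(c, A, b):
        """Maximize c@x subject to A@x <= b, x >= 0, assuming b >= 0."""
        m, n = A.shape
        T = np.zeros((m + 1, n + m + 1))
        T[:m, :n], T[:m, n:n + m], T[:m, -1] = A, np.eye(m), b
        T[-1, :n] = -c                       # objective row
        basis = list(range(n, n + m))        # start from the slack basis
        while True:
            j = int(np.argmin(T[-1, :-1]))   # entering column (most negative entry)
            if T[-1, j] >= -1e-12:
                break                        # optimal tableau reached
            ratios = [T[i, -1] / T[i, j] if T[i, j] > 1e-12 else np.inf
                      for i in range(m)]
            i = int(np.argmin(ratios))       # leaving row: minimum-ratio test
            if ratios[i] == np.inf:
                raise ValueError("problem is unbounded")
            T[i] /= T[i, j]                  # pivot: normalize row i ...
            for k in range(m + 1):
                if k != i:
                    T[k] -= T[k, j] * T[i]   # ... and eliminate column j elsewhere
            basis[i] = j
        x = np.zeros(n + m)
        x[basis] = T[:m, -1]
        return x[:n], T[-1, -1]              # optimal point and objective value

    # Maximize 3*x1 + 5*x2  s.t.  x1 <= 4, 2*x2 <= 12, 3*x1 + 2*x2 <= 18:
    c = np.array([3.0, 5.0])
    A = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]])
    b = np.array([4.0, 12.0, 18.0])
    print(simplex_tableau(c, A, b))          # -> (array([2., 6.]), 36.0)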
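A dense sketch of one standard variant of dual affine scaling using the QR decomposition, as mentioned above. The starting point (y, s) must be strictly dual feasible; the step factor gamma and the stopping test are simplified choices of ours:

    import numpy as np

    def dual_affine_scaling(A, b, c, y, s, gamma=0.66, tol=1e-8, maxit=100):
        """Dual affine scaling for  max b@y  s.t.  A.T@y + s = c, s > 0
        (the dual of  min c@x  s.t.  A@x = b, x >= 0)."""
        for _ in range(maxit):
            M = A.T / s[:, None]             # M = S^{-1} A^T with S = diag(s)
            Q, R = np.linalg.qr(M)           # so that A S^{-2} A^T = R^T R
            dy = np.linalg.solve(R, np.linalg.solve(R.T, b))  # two triangular solves
            x = (M @ dy) / s                 # primal estimate, satisfies A@x = b
            if x.min() > -tol and c @ x - b @ y < tol:
                return y, x                  # duality gap closed: done
            ds = -A.T @ dy                   # keeps A.T@y + s = c exactly
            neg = ds < -1e-14                # damped step staying strictly in s > 0
            alpha = gamma * np.min(-s[neg] / ds[neg]) if neg.any() else 1.0
            y, s = y + alpha * dy, s + alpha * ds
        return y, x

    # Example: min x1 + x2  s.t.  x1 + x2 = 1, x >= 0, from the interior point y = 0:
    A = np.array([[1.0, 1.0]]); b = np.array([1.0]); c = np.array([1.0, 1.0])
    print(dual_affine_scaling(A, b, c, y=np.zeros(1), s=c.copy()))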
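A toy illustration of the principle behind automatic differentiation. JAKEF works differently (it transforms the Fortran source into new source code for the gradient), but the underlying idea is the same: the elementary rules of differentiation are applied to each operation of the function evaluation. Here we show this with forward-mode dual numbers via operator overloading:

    import math

    class Dual:
        """A value together with its derivative; each arithmetic operation
        applies the corresponding rule of differentiation."""
        def __init__(self, val, der=0.0):
            self.val, self.der = val, der
        def __add__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val + o.val, self.der + o.der)       # sum rule
        __radd__ = __add__
        def __sub__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val - o.val, self.der - o.der)
        def __rsub__(self, o):
            return Dual(o) - self
        def __mul__(self, o):
            o = o if isinstance(o, Dual) else Dual(o)
            return Dual(self.val * o.val,
                        self.der * o.val + self.val * o.der)      # product rule
        __rmul__ = __mul__
        def sin(self):
            return Dual(math.sin(self.val),
                        math.cos(self.val) * self.der)            # chain rule

    def grad(f, x):
        """Gradient of f at x: one forward pass per coordinate."""
        g = []
        for i in range(len(x)):
            xs = [Dual(v, 1.0 if j == i else 0.0) for j, v in enumerate(x)]
            g.append(f(xs).der)
        return g

    # f(x, y) = x*y + sin(x);  df/dx = y + cos(x), df/dy = x:
    f = lambda v: v[0] * v[1] + v[0].sin()
    print(grad(f, [1.0, 2.0]))   # ~ [2 + cos(1), 1]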
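Both penalty variants in miniature, on an illustrative testcase of ours (min (x1-2)^2 + (x2-2)^2 s.t. g(x) = 1 - x1 - x2 >= 0, solution x1 = x2 = 0.5); the inner unconstrained minimization is delegated to scipy.optimize.minimize, and the weight schedules are arbitrary choices:

    import numpy as np
    from scipy.optimize import minimize

    f = lambda x: (x[0] - 2)**2 + (x[1] - 2)**2
    g = lambda x: 1 - x[0] - x[1]

    def quadratic_loss(x0, mus=(1, 10, 100, 1000)):
        """Add the weighted squared constraint violation and minimize without
        constraints; the iterates approach the solution from the infeasible side."""
        x = np.asarray(x0, float)
        for mu in mus:
            P = lambda x: f(x) + mu * min(0.0, g(x))**2
            x = minimize(P, x, method="BFGS").x
            print(f"mu={mu:6}: x={x}, g(x)={g(x):+.4f}")   # g(x) < 0: infeasible
        return x

    def log_barrier(x0, mus=(1, 0.1, 0.01, 0.001)):
        """-mu*log(g) grows to infinity on the boundary, so the iterates stay
        strictly feasible; damping mu -> 0 releases them toward the boundary."""
        x = np.asarray(x0, float)
        for mu in mus:
            B = lambda x: f(x) - mu * np.log(g(x)) if g(x) > 0 else np.inf
            x = minimize(B, x, method="Nelder-Mead").x
            print(f"mu={mu:6}: x={x}, g(x)={g(x):+.4f}")   # g(x) > 0: feasible
        return x

    quadratic_loss([0.0, 0.0])
    log_barrier([0.0, 0.0])     # [0, 0] is strictly feasible, as the barrier requires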
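The max-min structure of the augmented Lagrangian in miniature: descent in x on the augmented Lagrangian, then an ascent (multiplier) step in lambda. The testcase, the fixed weight mu and the plain first-order multiplier update are simplified choices of ours:

    import numpy as np
    from scipy.optimize import minimize

    # Testcase: min (x1-2)^2 + (x2-2)^2  s.t.  h(x) = x1 + x2 - 1 = 0
    # (solution x1 = x2 = 0.5 with multiplier lambda = 3).
    f = lambda x: (x[0] - 2)**2 + (x[1] - 2)**2
    h = lambda x: x[0] + x[1] - 1

    def augmented_lagrangian(x0, mu=10.0, iters=10):
        x, lam = np.asarray(x0, float), 0.0
        for _ in range(iters):
            LA = lambda x: f(x) + lam * h(x) + 0.5 * mu * h(x)**2
            x = minimize(LA, x, method="BFGS").x   # inner unconstrained minimization
            lam += mu * h(x)                       # first-order multiplier update
        return x, lam

    print(augmented_lagrangian([0.0, 0.0]))        # -> x ~ (0.5, 0.5), lam ~ 3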
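An equality-constrained SQP step with a backtracking line search on the L1 merit function. This is only a skeleton of the approach named above: we replace the Lagrangian Hessian by the constant Hessian of f (here 2I), use a crude sufficient-decrease test and a fixed penalty weight nu (chosen larger than the optimal multiplier, as the exact penalty theory requires); all of these are simplifications of ours:

    import numpy as np

    # Testcase: min (x1-2)^2 + (x2-2)^2  s.t.  h(x) = x1^2 + x2^2 - 1 = 0
    # (closest point on the unit circle to (2,2): x = (1,1)/sqrt(2)).
    f  = lambda x: (x[0] - 2)**2 + (x[1] - 2)**2
    df = lambda x: np.array([2*(x[0] - 2), 2*(x[1] - 2)])
    h  = lambda x: np.array([x[0]**2 + x[1]**2 - 1])
    J  = lambda x: np.array([[2*x[0], 2*x[1]]])

    def sqp_l1(x0, nu=10.0, iters=40):
        x = np.asarray(x0, float)
        phi = lambda x: f(x) + nu * np.abs(h(x)).sum()   # L1 merit function
        for _ in range(iters):                           # fixed count, no stop test
            H, A = 2.0 * np.eye(2), J(x)
            K = np.block([[H, A.T], [A, np.zeros((1, 1))]])  # KKT system of the QP
            sol = np.linalg.solve(K, np.concatenate([-df(x), -h(x)]))
            d = sol[:2]                                  # step (sol[2]: multiplier)
            t = 1.0                                      # backtracking on the merit
            while phi(x + t * d) > phi(x) - 1e-4 * t * (d @ d) and t > 1e-10:
                t *= 0.5
            x = x + t * d
        return x

    print(sqp_l1([2.0, 0.0]))   # -> approximately (0.7071, 0.7071)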
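Finally, the restoration phase of the grg code in isolation: return to the constraining surface c(x) = 0 by the simplified Newton method, i.e. the Jacobian (and hence its factorization) is frozen at the starting point and only c is re-evaluated. The minimum-norm correction and the illustrative constraint are our own choices:

    import numpy as np

    def restore(c, J0, x, tol=1e-10, maxit=30):
        """Simplified Newton restoration onto c(x) = 0 with frozen Jacobian J0."""
        JJT = J0 @ J0.T                      # formed once, reused every iteration
        for _ in range(maxit):
            r = c(x)
            if np.linalg.norm(r) < tol:
                break
            x = x - J0.T @ np.linalg.solve(JJT, r)   # minimum-norm correction step
        return x

    # Pull a point back onto the unit circle c(x) = |x|^2 - 1 = 0:
    c = lambda x: np.array([x @ x - 1.0])
    x0 = np.array([1.2, 0.3])
    print(restore(c, np.array([2 * x0]), x0))        # Jacobian frozen at x0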
 

29.06.2010