The dual QP solver of Goldfarb and Idnani

- The method of Goldfarb and Idnani solves strictly convex quadratic optimization problems of
the form
    f(x) = 1/2 x^T G x - g0^T x  ->  min over x,
subject to the constraints
    H^T x + h0 = 0,
    C^T x + c0 >= 0.
- The matrix G, of dimension n × n, is assumed to be symmetric and positive definite. This is
checked first via its Cholesky decomposition.
The matrix H (of dimension n × p) must have rank p.
If these conditions are not met, the code terminates unsuccessfully.
The number of columns of C, and hence the number of inequality constraints,
is m. Of course p or m may be zero.
(A small sketch of these admissibility checks follows this list.)
- The theory of this method is briefly explained
here.
- There are two different input pages:
you may either generate a QP problem randomly, fixing some important
properties of your choice, or you define the problem explicitly by typing the
matrices and vectors into the corresponding input fields.
For the artificially generated problem we restrict the inequality constraints
to the simple form x >= 0 (hence m = n, C = Id, c0 = 0).
Indeed, the general constraints formulated above can always be rewritten as a system
of equality constraints with nonnegativity constraints on all the variables, so
this is no severe restriction.
- The artificially generated problems give you the chance to test the method by varying
the properties that are of fundamental importance for the conditioning of the problem.
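For illustration, here is a minimal numpy sketch of the admissibility checks and of the objective and constraint functions defined above. It is not part of the solver itself, and the helper name check_qp_data is our own.

    import numpy as np

    def check_qp_data(G, g0, H, h0, C, c0):
        """Check the assumptions stated above: G symmetric positive definite
        (tested via its Cholesky decomposition) and H of full column rank p."""
        p = H.shape[1]
        if not np.allclose(G, G.T):
            raise ValueError("G is not symmetric")
        try:
            np.linalg.cholesky(G)        # fails iff G is not positive definite
        except np.linalg.LinAlgError:
            raise ValueError("G is not positive definite")
        if p > 0 and np.linalg.matrix_rank(H) < p:
            raise ValueError("H does not have full column rank p")
        # objective and constraint functions as in the problem statement
        f = lambda x: 0.5 * x @ G @ x - g0 @ x
        eq = lambda x: H.T @ x + h0      # must be  = 0
        ineq = lambda x: C.T @ x + c0    # must be >= 0
        return f, eq, ineq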

Input: artificial problem generation

- The primal dimension n of x, and hence of G. Important: 1 <= n <= 100!
- The number of equality constraints p. Important: 0 <= p <= n!
- The number i0 of bound constraints active at the solution.
Important: 0 <= i0 + p <= n!
- The reciprocal condition number lammin of the reduced Hessian: this is the
matrix Q^T G Q with Q an orthonormal basis of the span of the
gradients of the constraints active at the solution.
Important: 0 < lammin < 1!
Useful: 1.E-8 < lammin.
The maximal eigenvalue is set to 1.
- The reciprocal condition number sigmin of H (i.e. the smallest eigenvalue of
H^T H is (sigmin)^2).
Important: 0 < sigmin < 1!
Useful: 1.E-8 < sigmin.
The maximal singular value of H is set to 1.
H is generated using the singular value decomposition.
- The lower bound xlow for those variables x(i)
which are above their lower bound at the solution.
Important: 0 < xlow < 1!
Useful: 1.E-8 < xlow.
Each ''free'' x(i) lies in [xlow, 1];
the binding variables x(i) are 0 at the solution.
- From your inputs lammin, sigmin and lammax = sigmax = 1,
the matrices G and H are generated using the spectral resp. the singular
value decomposition with randomly generated orthogonal matrices.
- Likewise, the optimal solution x* is generated from random numbers satisfying the given
restrictions (the bounds are always attained).
Then, from G, H and x* we compute the vectors h0 and g0
such that x* satisfies the optimality conditions, i.e. the Kuhn-Tucker conditions.
(A simplified sketch of this construction follows this list.)
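The numpy sketch below conveys the construction principle only: random orthogonal factors around prescribed spectra, then h0 and g0 from the Kuhn-Tucker conditions. It is our own simplification; in particular it prescribes the conditioning of G itself rather than of the reduced Hessian Q^T G Q, and the names generate_problem and random_orthogonal are hypothetical.

    import numpy as np

    def random_orthogonal(k, rng):
        # QR factorization of a random matrix yields a random orthogonal factor
        Q, _ = np.linalg.qr(rng.standard_normal((k, k)))
        return Q

    def generate_problem(n, p, i0, lammin, sigmin, xlow, seed=0):
        rng = np.random.default_rng(seed)
        # G = U diag(lam) U^T with eigenvalues between lammin and 1 (largest = 1)
        U = random_orthogonal(n, rng)
        G = U @ np.diag(np.linspace(1.0, lammin, n)) @ U.T
        # H = V diag(sig) W^T with singular values between sigmin and 1 (largest = 1)
        V = random_orthogonal(n, rng)[:, :p]
        W = random_orthogonal(p, rng)
        H = V @ np.diag(np.linspace(1.0, sigmin, p)) @ W.T
        # x*: the first i0 bound constraints are active (x_i = 0),
        # the remaining components lie in [xlow, 1]
        xstar = np.concatenate((np.zeros(i0), rng.uniform(xlow, 1.0, n - i0)))
        # multipliers: u free for the equalities, v >= 0 and complementary to x*
        u = rng.standard_normal(p)
        v = np.concatenate((rng.uniform(0.1, 1.0, i0), np.zeros(n - i0)))
        # choose h0 and g0 so that x* satisfies the Kuhn-Tucker conditions
        # for  min 1/2 x^T G x - g0^T x  s.t.  H^T x + h0 = 0,  x >= 0
        h0 = -H.T @ xstar                 # feasibility of the equality constraints
        g0 = G @ xstar - H @ u - v        # stationarity of the Lagrangian
        return G, g0, H, h0, xstar

    # example: n = 10 variables, p = 3 equalities, 4 active bounds
    G, g0, H, h0, xstar = generate_problem(10, 3, 4, 1e-4, 1e-3, 1e-2)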

Input: your own data

- A name for your problem (it appears again in the output).
- The dimensions n, p, m.
We have the restrictions 1 <= n <= 100, 0 <= p <= n, 0 <= m <= 300!
- The matrices G, H and C as well as the vectors
g0, h0 and c0.
Please observe the dimensions:
G: n × n, g0: n
H: n × p, h0: p
C: n × m, c0: m
The code requires the input of nonzero values only. These are given in the form
index1, index2, value for a matrix, resp. index, value for a vector.
Which matrix or vector is meant is indicated by the input field, so no confusion is
possible here.
For setting G(3,2) = 7
you type the line
3,2,7
Important: each matrix or vector entry must appear on a separate line.
Important for the constraints: you type C, not the transpose C^T appearing in the definition
of the constraints.
This means that the first index you type corresponds to the variable number and the second to the
inequality number.
The same applies to the equality constraints.
(A small sketch of this triplet format follows this list.)
- You can choose to display the intermediate steps or only the final results.
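As an illustration of this input convention, here is a small numpy sketch (our own, not part of the web interface) that turns such triplet and pair lines into dense matrices and vectors; the helper names triplets_to_matrix and pairs_to_vector are hypothetical.

    import numpy as np

    def triplets_to_matrix(lines, n_rows, n_cols):
        """Turn input lines of the form 'index1, index2, value' into a dense
        matrix.  For H and C the first index is the variable number and the
        second the constraint number, as described above."""
        A = np.zeros((n_rows, n_cols))
        for line in lines:
            i, j, v = (s.strip() for s in line.split(","))
            A[int(i) - 1, int(j) - 1] = float(v)   # input indices are 1-based
        return A

    def pairs_to_vector(lines, n):
        """'index, value' lines for the vectors g0, h0, c0."""
        v = np.zeros(n)
        for line in lines:
            i, val = (s.strip() for s in line.split(","))
            v[int(i) - 1] = float(val)
        return v

    # the example from the text: setting G(3,2) = 7 in a 3 x 3 matrix G
    G = triplets_to_matrix(["3,2,7"], 3, 3)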

Output

- Either your problem name or the parameters you used for problem generation.
- The computing time (zero means less than 0.01 seconds).
- In case of a generated problem the exact solution is known; hence the
final errors are printed here. These errors are, of course, pure roundoff effects. Hence this
allows you to study the sensitivity of the method to roundoff effects by
varying the three parameters sigmin, lammin and xlow.
- In case of a problem of your own you see the results and, if requested, all the
intermediate results.
- A plot showing the infeasibility, measured as
    sum_{i=1..p} |h_i(x)|  -  sum_{j=1..m} min{0, g_j(x)},
with h(x) = H^T x + h0 and g(x) = C^T x + c0.
(A small sketch of this measure follows this list.)
- A second plot showing the number of partial steps versus the number of full steps.
- The plots appear only if the number of full steps is at least 3.
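For reference, a small numpy sketch of the infeasibility measure used in the first plot, assuming h(x) = H^T x + h0 and g(x) = C^T x + c0 as in the problem statement; the function name infeasibility is our own.

    import numpy as np

    def infeasibility(x, H, h0, C, c0):
        """Infeasibility measure of the first plot:
        sum_i |h_i(x)| - sum_j min{0, g_j(x)}."""
        h = H.T @ x + h0
        g = C.T @ x + c0
        return np.sum(np.abs(h)) - np.sum(np.minimum(0.0, g))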