The LP problem: the simplex method
- Here you find the classical LP simplex method. We present it for the
standard form (primal problem)
    f(x) = c^T x = max, subject to Ax = b, x >= 0,
where x has n components and A has p < n rows and full rank. (Otherwise
either the number of rows can be reduced, or the problem is trivial and
ends in a linear system solve.) We also assume b >= 0.
Every LP problem can be rewritten in this standard form, but in real applications
this is not necessary; without the standard form, however, the explanation of the
algorithm becomes more involved.
Using the saddle point theorem of convex optimization we obtain
the dual problem
    b^T y = max subject to -c - A^T y >= 0 .
The following is known:
if the primal problem is unbounded, then the dual problem is infeasible, and if the
dual problem is unbounded, then the primal problem is infeasible; if the primal
problem is infeasible, then the dual problem is unbounded or also infeasible.
If the primal problem is feasible, then there exists at least one vertex (a point
where at least n of the constraints are fulfilled with equality
(''active constraints'') and the matrix of their gradients has full rank n) at which
the optimum is attained. It is, however, possible that an entire face of the
feasible set is optimal.
If a vertex has more than n active constraints, it is called degenerate.
Assuming feasibility of the primal problem one has the necessary and
sufficient optimality conditions (the Kuhn--Tucker conditions,
written for a minimum problem; remember we maximize above, hence
the sign reversal in c):
 (1)  -c - A^T μ - λ = 0
 (2)  Ax - b = 0
 (3)  x_i λ_i = 0 for all i
 (4)  x >= 0 , λ >= 0 .
For the following derivation we assume that no vertex is degenerate.
Then from the Kuhn--Tucker conditions it follows that for each feasible vertex x^0
there is an index set B (called the ''basis'') with exactly p elements such
that
    x_i^0 > 0 for i in B and x_i^0 = 0 otherwise.
In the following, C(B) denotes the complement of B in {1,..,n}.
Given such a basis we solve conditions (1), (2) and (3) for the other unknowns:
because of λ_B = 0 we have
    -c_B - (A_B)^T μ = 0 , hence
    μ = -((A_B)^T)^{-1} c_B
and
    λ_C(B) = -c_C(B) - (A_C(B))^T μ .
These are the so-called ''reduced costs'' and must be nonnegative in the optimum.
The reason for this: if we restrict x to vary on the linear manifold defined by the
equality constraints, then the objective function can be expressed as
    -f(x) = -f(x^0) + λ_C(B)^T x_C(B) .
Obviously we can take these reduced costs as the driving forces of the algorithm:
as long as there is a negative one, say with index ''in'', we can reduce -f by
increasing the variable x_in from zero into the strictly positive range. This is
the so-called ''incoming'' variable, hence the index name. Maintaining the
feasibility of x with respect to the equality constraints means
    x_B = x_B^0 - (A_B)^{-1} A_in σ
    x_in = σ
where σ grows from zero, and
    x_i = 0 for the indices of C(B) other than ''in''.
This defines a path along an edge of the feasible domain. It may be that x_B stays
nonnegative along this path for arbitrarily large σ. Then the primal problem is
unbounded and we stop. This is the case if and only if
    (A_B)^{-1} A_in <= 0 componentwise.
Otherwise at least one component of this column is strictly positive and the
corresponding component of x_B decreases. The first of these components to become
zero defines the maximum steplength σ possible for maintaining primal feasibility.
If the nondegeneracy condition holds, two or more components cannot reach zero
simultaneously, since otherwise we would get more than n active constraints.
The component becoming zero is x_out, the ''outgoing'' variable. We have reached a
new vertex with a better objective value and repeat this cycle. Since there are only
finitely many vertices (though exponentially many in the worst case), the algorithm
is finite. To this day it is unclear whether there is a version of this algorithm
(i.e. a choice rule for the incoming variable) with polynomial complexity. There are
polynomial algorithms in the class of the so-called interior point methods.
In the printing scheme of this application you can see the so-called ''tableau''
containing the essential data:
    (A_B)^{-1} A_C(B) |  x_B
       -λ_C(B)        | -f
In the code we maximize! Hence the red minus sign.
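The cycle just described (solve for μ, form the reduced costs, choose the incoming index, do the ratio test for the outgoing one) can be condensed into a short pure-Python sketch. This is our own illustration, not the code of this application: the function names and the toy problem are invented, Bland's least-index rule is hard-wired for the incoming variable, and a dense Gaussian solve stands in for the factorization updates a real implementation would use.

```python
def solve(M, rhs):
    """Solve M z = rhs by Gaussian elimination with partial pivoting."""
    m = len(M)
    a = [row[:] + [rhs[i]] for i, row in enumerate(M)]   # augmented copy
    for k in range(m):
        piv = max(range(k, m), key=lambda r: abs(a[r][k]))
        a[k], a[piv] = a[piv], a[k]
        for r in range(k + 1, m):
            f = a[r][k] / a[k][k]
            for j in range(k, m + 1):
                a[r][j] -= f * a[k][j]
    z = [0.0] * m
    for k in range(m - 1, -1, -1):
        z[k] = (a[k][m] - sum(a[k][j] * z[j] for j in range(k + 1, m))) / a[k][k]
    return z

def simplex(A, b, c, basis, eps=1e-9):
    """Maximize c^T x subject to A x = b, x >= 0, from a feasible basis."""
    p, n = len(A), len(c)
    basis = list(basis)
    while True:
        AB = [[A[i][j] for j in basis] for i in range(p)]        # A_B
        xB = solve(AB, b)                                        # x_B = (A_B)^{-1} b
        ABt = [[AB[i][j] for i in range(p)] for j in range(p)]   # (A_B)^T
        mu = solve(ABt, [-c[j] for j in basis])                  # mu = -((A_B)^T)^{-1} c_B
        nonbasic = [j for j in range(n) if j not in basis]
        # reduced costs: lambda_j = -c_j - (A_j)^T mu  for j in C(B)
        lam = {j: -c[j] - sum(A[i][j] * mu[i] for i in range(p)) for j in nonbasic}
        incoming = next((j for j in nonbasic if lam[j] < -eps), None)  # Bland: least index
        if incoming is None:                 # all reduced costs >= 0: optimal vertex
            x = [0.0] * n
            for i, j in enumerate(basis):
                x[j] = xB[i]
            return x
        d = solve(AB, [A[i][incoming] for i in range(p)])        # (A_B)^{-1} A_in
        ratios = [(xB[i] / d[i], i) for i in range(p) if d[i] > eps]
        if not ratios:                       # whole column <= 0: unbounded edge
            raise ValueError("primal problem is unbounded")
        sigma, out = min(ratios)             # ratio test; ties broken by least position
        basis[out] = incoming                # exchange outgoing against incoming index

# toy example: max x1 + 2*x2  s.t.  x1 + x2 <= 4, x2 <= 2, rewritten with slacks
A = [[1.0, 1.0, 1.0, 0.0],
     [0.0, 1.0, 0.0, 1.0]]
x = simplex(A, [4.0, 2.0], [1.0, 2.0, 0.0, 0.0], basis=[2, 3])
print(x)   # -> [2.0, 2.0, 0.0, 0.0], i.e. the vertex x1 = x2 = 2 with f = 6
```

Starting from the slack basis, the sketch walks along two edges exactly as the derivation above prescribes and stops when all reduced costs are nonnegative.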
- Finding an initial vertex is difficult in general, hence ''Phase 1'' is included here:
the problem is extended to
    -∑ y_i = max subject to Ax + y = b, x >= 0, y >= 0,
starting with the vertex x = 0, y = b. If the optimal objective value of this extended
problem is zero, we obtain (possibly after additional eliminations) a vertex of the
original problem; if it stays strictly negative, then the original problem is infeasible.
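The construction of this Phase-1 problem is mechanical and can be sketched in a few lines of pure Python. This is a hedged illustration with invented names, not the code of this application; it assumes b >= 0 as stated above (rows with negative b(i) would first be multiplied by -1).

```python
def phase1_problem(A, b):
    """Build the extension  -sum(y_i) = max,  A x + y = b,  x >= 0, y >= 0."""
    p, n = len(A), len(A[0])
    assert all(bi >= 0 for bi in b), "the standard form here assumes b >= 0"
    # append the p x p identity as the columns of the artificial variables y
    A1 = [row[:] + [1.0 if k == i else 0.0 for k in range(p)]
          for i, row in enumerate(A)]
    c1 = [0.0] * n + [-1.0] * p      # objective: maximize -sum(y_i)
    basis = list(range(n, n + p))    # starting vertex: x = 0, y = b
    return A1, c1, basis
```

Running any standard-form simplex on A1, c1 from this basis either drives the objective to zero, in which case the basic indices below n (possibly after eliminating leftover artificial indices) give a feasible vertex of the original problem, or it stops at a strictly negative optimum, proving infeasibility.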

Input

- An identification text for the output. (You may run several cases, saving the output
by ''cut and paste''; the identification text lets you tell these cases apart later.)
- The dimension n of x.
(n = dim(x) = dim(c))
Important : 1 <= n <= 99 !
- The row number p of A and b.
Of course : 1 <= p <= 99 !
- There are two input formats:
  - dense matrix:
    Direct input of c (free format).
    Rowwise input of A=(a(i,j)) followed by b(i). That is, you type
        a(i,1),...,a(i,n),b(i),
    hence n+1 entries. A row may extend over several input lines but always begins on a new line.
  - sparse matrix:
    For the two vectors c and b you type, rowwise,
        index, value   (i.e. i, c(i) or i, b(i))
    and for A
        index1, index2, value   (i.e. i, j, a(i,j)),
    for the nonzero entries only, each such tuple or triple on a separate line.
- You have the choice to start without specifying a basis. In this case the so-called
''Phase 1'' is initialized.
  - Phase 1 computes a feasible initial vertex
    (this requires n+p <= 99 !)
    and decides feasibility.
  - Or you specify a ''basis'' B yourself: an integer list
    basis(i), i=1,..,p.
    In this case feasibility is checked, and in case of infeasibility the run is terminated.
    If b > 0 componentwise and A contains the identity matrix as a submatrix,
    then the indices of these identity columns form a basis.
- Choice of the rule for the incoming variable:
  - most negative reduced cost (inequality multiplier), or
  - Bland's rule (least index rule).
  Bland's rule also works in the degenerate case, provided the least index rule is
  applied to the outgoing variable as well.
- The maximum number of vertex changes allowed: maxstep
- you may wish to see the full tableau.
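To make the sparse format concrete, here is a hypothetical input fragment for a small case with n = 2, p = 1, objective c = (1, 0) and the single constraint x(1) + x(2) = 3. The bracketed annotations are not part of the input, and the order in which c, b and A are requested follows the dialog of the application:

```
1 1.0        [c: i, c(i)]
1 3.0        [b: i, b(i)]
1 1 1.0      [A: i, j, a(i,j)]
1 2 1.0
```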

Output

- For every step, the index lists of ''basic'' and ''nonbasic'' variables (i.e. B and C(B)).
- If desired, the full tableau.
- The final solution.