The LP problem - dual affine scaling (Karmarkar)

 


 
  • We deal with the LP problem in standard form:
    c^T x = max,  subject to  Ax = b,  x >= 0.

    The dimensions are n for c and x, resp. p × n and p for A and b, with p < n. We assume that b > 0 and that A has full rank p.
  • Here we present the method of dual affine scaling (published by Karmarkar in Math. Prog. 44, 1989). We apply it either to the dual problem directly or to the extended dual problem. Hence
    b^T y = max,  subject to  -c - A^T y >= 0
    resp.
    b^T y - big × y_{p+1} = max,
    subject to
    -c - A^T y + e × y_{p+1} >= 0,  e = (1,...,1)^T,
    y_{p+1} >= 0.

    If the penalization parameter "big" is large enough, namely big >= ||x*||_1, and the primal problem is feasible and bounded, then the slack variable y_{p+1} becomes zero in the optimum. If the primal problem is unbounded, then the dual problem is infeasible, and hence y_{p+1} stays strictly positive.
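    The extended dual can be brought back into the same standard shape by appending one row and one slack column, so that the same iteration applies unchanged. The following sketch shows one way to do this embedding; the construction (and the choice of starting point) is an assumption, not taken from the page itself:

    ```python
    import numpy as np

    def extend(A, b, c, big):
        """Embed the extended dual problem into the same standard shape
        (a sketch; this construction is assumed, not given in the text).

        Appending the row -e^T and a slack column turns
          b^T y - big*y_{p+1} = max,  -c - A^T y + e*y_{p+1} >= 0,  y_{p+1} >= 0
        back into  b~^T y~ = max,  -c~ - A~^T y~ >= 0."""
        p, n = A.shape
        A_ext = np.zeros((p + 1, n + 1))
        A_ext[:p, :n] = A
        A_ext[p, :n] = -1.0     # row -e^T couples y_{p+1} to every slack
        A_ext[p, n] = -1.0      # slack column enforcing y_{p+1} >= 0
        b_ext = np.append(b, -big)
        c_ext = np.append(c, 0.0)
        # trivially strictly feasible start: y~ = (0,...,0,t) with t > max(0, max_i c_i)
        t = max(0.0, c.max()) + 1.0
        y0 = np.append(np.zeros(p), t)
        return A_ext, b_ext, c_ext, y0
    ```

    With this embedding, -c~ - A~^T y~0 = (-c + e·t, t) is strictly positive by the choice of t, which is exactly the "trivial" strictly feasible first guess mentioned below.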
  • Observe that the variable y corresponds to the Lagrange multipliers of the equality constraints in the primal problem, and v to the Lagrange multipliers λ of the lower-bound constraints on x. The iterates for these are kept strictly positive; hence this method belongs to the class of interior point methods.
  • The method corrects the dual solution y as follows:
    1. v^k = -c - A^T y^k
    2. D_k = diag(v^k_1, ..., v^k_n)
    3. (A D_k^{-2} A^T) h^k = b   (a linear system solve)
    4. d^k = -A^T h^k
    5. γ = α × min{ -v^k_i / d^k_i : d^k_i < 0 }
    6. y^{k+1} = y^k + γ × h^k

    In order to preserve numerical stability (and since our dimensions are tiny) we solve the linear system using a QR decomposition of D_k^{-1} A^T:
    Q_k D_k^{-1} A^T = [R_k]
                       [ O ]

    resulting in two triangular solves:
    R_k^T R_k h^k = b.
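    The steps above can be sketched compactly in numpy. This is a minimal illustration of one iteration, not the page's actual implementation; it assumes the current iterate y is strictly dual feasible and that at least one component of d^k is negative (i.e. the dual is bounded in the direction h^k):

    ```python
    import numpy as np

    def dual_affine_step(A, b, c, y, alpha=0.5):
        """One step of the dual affine scaling iteration sketched above.

        A is p x n, b is p, c is n; y must satisfy v = -c - A^T y > 0."""
        v = -c - A.T @ y                    # step 1: dual slacks, must stay > 0
        Dinv_AT = A.T / v[:, None]          # D^-1 A^T with D = diag(v)
        _, R = np.linalg.qr(Dinv_AT)        # then A D^-2 A^T = R^T R
        # step 3 as two triangular solves:  R^T R h = b
        h = np.linalg.solve(R, np.linalg.solve(R.T, b))
        d = -A.T @ h                        # step 4: direction of change of v
        neg = d < 0                         # step 5: damped ratio test
        gamma = alpha * np.min(-v[neg] / d[neg])
        return y + gamma * h                # step 6
    ```

    For instance, for the tiny assumed problem A = (1 1), b = (1), c = (-1, -2), the standard start y^0 = 0 is admissible (since -c > 0) and the iterates converge to the dual optimum y* = 1.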
  • The iteration is linearly convergent with an asymptotic rate of roughly 2/e for α near 0.5, but for small step counts the error reduction also depends very much on y^0. It is unknown whether the method has polynomial complexity.
  • After a successful solution of the dual problem we solve the primal with the help of its Kuhn-Tucker conditions:
    1. -c - A^T y - v = 0
    2. Ax - b = 0
    3. x_i v_i = 0
    4. x >= 0,  v >= 0,
    but in the case of the extended problem only if the slack variable has become sufficiently small:
    y_{p+1} <= sqrt(tiny),  tiny = 2.2d-16 × (∑_{i,j} |A_{i,j}| + ∑_i |b_i| + p × (1 + big)).
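    A common way to obtain a primal iterate in dual affine scaling is the estimate x = D^{-2} A^T h, which satisfies Ax = b exactly and drives the products x_i v_i to zero near the dual optimum. The sketch below assumes this is how x is recovered; the page does not spell out its formula:

    ```python
    import numpy as np

    def primal_estimate(A, b, c, y):
        """Primal iterate associated with a dual iterate y (a sketch).

        With v = -c - A^T y and h from (A D^-2 A^T) h = b, the choice
        x = D^-2 A^T h satisfies A x = b exactly; near the dual optimum
        x_i v_i -> 0, so Kuhn-Tucker conditions 1-3 hold in the limit."""
        v = -c - A.T @ y
        M = (A / v**2) @ A.T        # A D^-2 A^T
        h = np.linalg.solve(M, b)
        return (A.T @ h) / v**2     # x = D^-2 A^T h
    ```

    For the tiny assumed problem A = (1 1), b = (1), c = (-1, -2) with dual optimum y* = 1, a dual iterate close to y* yields x close to the primal solution (1, 0).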
  • There are two different input pages for the entries c, A and b. If A is a dense matrix, the usual full rowwise input is used. Otherwise one may specify the nonzero entries only.
 

Input

 
  • An identification text for the output. (You may run several cases, saving the output by ''cut and paste''; this text allows you to identify these cases later.)
  • The dimension n of x (n = dim(x) = dim(c)).
    Important: 1 <= n <= 99!
  • The row number p of A and b. Of course: 1 <= p <= 99!
  • There are two input formats
    1. dense matrix:
      Direct input of c (free format).
      Rowwise input of A = (a(i,j)) followed by b(i). That is, you type a(i,1), ..., a(i,n), b(i), hence n+1 entries. A row may extend over several input lines but always begins on a new line.
    2. sparse matrix:
      For the two vectors c and b you type, rowwise, index and value: i, c(i) resp. i, b(i).
      For A you type two indices and a value: i, j, a(i,j).
      Only the nonzero entries are given, each such tuple or triple on a separate line.
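    As an illustration, the sparse input convention could be parsed as follows. This is only a sketch under the assumption of whitespace-separated, 1-based indices (the page itself accepts free-format input); entries not listed stay zero:

    ```python
    import numpy as np

    def read_sparse(n, p, c_lines, b_lines, a_lines):
        """Assemble c, b, A from sparse input tuples as described above.

        c_lines hold "i value", b_lines hold "i value", a_lines hold
        "i j value"; indices are 1-based, one tuple/triple per line."""
        c = np.zeros(n)
        b = np.zeros(p)
        A = np.zeros((p, n))
        for line in c_lines:
            i, val = line.split()
            c[int(i) - 1] = float(val)
        for line in b_lines:
            i, val = line.split()
            b[int(i) - 1] = float(val)
        for line in a_lines:
            i, j, val = line.split()
            A[int(i) - 1, int(j) - 1] = float(val)
        return c, b, A
    ```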
  • You may vary some algorithmic parameters:
    1. Standard settings: alpha = 0.5, epsc = 1.d-10, rho = 2.2d-16, maxstep = 100, or
    2. you specify alpha, epsc, rho, maxstep directly.
    alpha = α is the stepsize reduction factor used above and must be chosen < 2/3. We terminate the iteration if the dual objective function varies by less than epsc. 1/rho is an upper bound for the diagonal entries of the triangular factor of the QR decomposition of D_k^{-1} A^T. maxstep is the maximum number of iteration steps allowed.
  • You may use the extended problem, which trivially allows the choice of a strictly feasible first guess.
    1. If you decide so, then you must specify the penalty parameter big for the slack variable. (big must be larger than ||x*||_1, otherwise the original problem will not be solved. Clearly you must guess here, but avoid extreme choices!)
    2. If you do not want to use this feature, then either you take the standard initial value -c for v (i.e. y^0 = 0), provided this is strictly positive, or you specify a dual initial guess yourself: y(1), ..., y(p) such that -c - A^T y > 0. (This input will be checked.)
  • In order to keep the output manageable on the screen you must select a short list of variables to be displayed in the iteration protocol.
 

Output

 
  • You get an iteration protocol showing the dual objective function and the values of selected variables.
  • The final solution and the termination criterion, plus the primal solution. For the extended problem you also get the final slack variable.
 


21.09.2010