Lanczos' eigensolver

 


 
  • To determine some of the eigenvalues of a large real symmetric matrix A, the method of Lanczos is often used.
  • The idea of this method is simple in principle: as in the power method one computes the sequence Akx0, k=0,...,i, but unlike that method one regards these vectors as the basis of a subspace of Rn, a so-called Krylov subspace. One computes an orthonormal basis Qi of this subspace and uses the eigenvalues of the so-called matrix Rayleigh quotient QiTAQi, that is, the projection of A onto this subspace, as approximate eigenvalues of A.
  • Fortunately this can be cast into an algorithm which combines iteration and orthogonalization in a single step, yielding a short recursion:
    Let x0 be the normalized initial vector.
    Q(.,1) = x0
    δ1 = Q(.,1)T *A* Q(.,1)
    r1 = A*Q(.,1)- δ1 * Q(.,1)
    Iterate for i=2,3,... as long as ||ri-1|| > 0 :
    εi-1 = ||ri-1||
    Q(.,i)= ri-1/ ||ri-1||
    v = A*Q(.,i)
    δi = Q(.,i)T * v
    ri = v - δi* Q(.,i) -εi-1 * Q(.,i-1)
  • The matrix QiT * A * Qi = Ti turns out to be tridiagonal, with the entries δj on the diagonal and εj on the sub- and superdiagonal as defined above, j=1,...,i.
  • The eigenproblem of Ti can be solved efficiently to high accuracy.
  • Some of its eigenvalues converge very fast to eigenvalues of A, and the corresponding eigenvectors are the corresponding columns of the matrix
    Y(.,j=1,...,i) = Q(.,j=1,...,i)*Vi
    where Vi denotes the complete eigensystem of Ti .
  • Which of the eigenvalues are of this type can be decided in different ways. Here we use Parlett's test: accept λj in step i if
    εi*|V(i,j)| <= εaccept*||Ti|| , with ||.|| the Frobenius norm.
  • The following provides an estimate of the relative error in λj:
    ||A*Y(.,j)-λj*Y(.,j)||/||A*Y(.,j)|| >= |λj - μ|/|μ|
    for some exact eigenvalue μ of A. This relative error bound can be much larger than the bound in the acceptance criterion.
  • The method terminates as soon as some εj vanishes, which however will almost never occur under roundoff. Whether it does depends on the initial vector and on the eigenvalue distribution of A: for example, A must not have multiple eigenvalues and x0 must have contributions from all eigenvectors of A.
  • This looks very nice, but unfortunately the orthogonality of the columns of Q is lost to roundoff effects, and very early. If one employs no remedy for this, the method does not break down completely, but strange effects can occur: for example, multiple approximations to one and the same simple eigenvalue of A, completely wrong "approximations", etc. Hence the method is usually enhanced by some variant of reorthogonalization. With so-called complete reorthogonalization the i-th column of Q is orthogonalized by a complete Gram-Schmidt step against all previously computed columns, whereas with selective orthogonalization this takes place only with respect to the columns belonging to eigenvalues which have been accepted by the acceptance test.
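The recursion above, together with complete reorthogonalization and Parlett's acceptance test, can be sketched in Python as follows. This is a minimal illustration, not the implementation behind this page; the names lanczos, matvec and eps_accept are assumptions for the example.

```python
# Sketch of the Lanczos recursion with complete reorthogonalization
# and Parlett's acceptance test (names are illustrative assumptions).
import numpy as np

def lanczos(matvec, x0, numstep, eps_accept=1e-8):
    n = x0.size
    Q = np.zeros((n, numstep))
    delta = np.zeros(numstep)      # diagonal of T_i
    eps = np.zeros(numstep)        # off-diagonal of T_i
    Q[:, 0] = x0 / np.linalg.norm(x0)
    v = matvec(Q[:, 0])
    delta[0] = Q[:, 0] @ v
    r = v - delta[0] * Q[:, 0]
    m = numstep
    for i in range(1, numstep):
        eps[i - 1] = np.linalg.norm(r)
        if eps[i - 1] == 0.0:      # invariant subspace found: terminate
            m = i
            break
        Q[:, i] = r / eps[i - 1]
        # complete reorthogonalization: Gram-Schmidt against all
        # previously computed columns of Q
        Q[:, i] -= Q[:, :i] @ (Q[:, :i].T @ Q[:, i])
        Q[:, i] /= np.linalg.norm(Q[:, i])
        v = matvec(Q[:, i])
        delta[i] = Q[:, i] @ v
        r = v - delta[i] * Q[:, i] - eps[i - 1] * Q[:, i - 1]
    # the matrix Rayleigh quotient T_m = Q^T A Q is tridiagonal
    T = (np.diag(delta[:m]) + np.diag(eps[:m - 1], 1)
         + np.diag(eps[:m - 1], -1))
    lam, V = np.linalg.eigh(T)
    # Parlett's test: accept lambda_j if eps_m*|V(m,j)| <= eps_accept*||T||_F
    accepted = (np.linalg.norm(r) * np.abs(V[-1, :])
                <= eps_accept * np.linalg.norm(T, 'fro'))
    Y = Q[:, :m] @ V               # Ritz vectors, approximate eigenvectors
    return lam, Y, accepted
```

For a small test matrix with distinct eigenvalues and an initial vector with contributions from all eigenvectors, a full n-step run reproduces the whole spectrum.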
 

The inputs

 
  • The matrix A is to be defined only via a piece of code which, given the vector x, computes the vector y=A*x. If you want to combine the method with inverse iteration in order to get the small eigenvalues, then your piece of code must instead solve A*y=x.
  • Algorithmic variables are
    1. n, the dimension
    2. numew, the number of desired eigenvalues (attention: these are not necessarily only the dominant ones)
    3. numstep, the maximum number of steps (and hence matrix-vector multiplies) allowed
    4. the type of reorthogonalization desired
    5. the precision εaccept required in the acceptance test
    6. the initial vector x0 (with possible automatic generation)
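The two ways of supplying A can be sketched as follows, here with a small explicit test matrix; the matrix A and the function names matvec and inv_matvec are assumptions for this example.

```python
# Illustrative sketch of the two ways to supply A: a plain matrix-vector
# product y = A*x, or a solve of A*y = x for use with inverse iteration.
# The matrix A and the function names are assumptions for this example.
import numpy as np

A = np.array([[4., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])

def matvec(x):
    """y = A*x: Lanczos then approximates the extreme eigenvalues of A."""
    return A @ x

def inv_matvec(x):
    """Solve A*y = x: Lanczos applied to this operator converges to
    eigenvalues of the inverse, i.e. it targets the smallest eigenvalues of A."""
    return np.linalg.solve(A, x)
```

For a large sparse A one would of course replace the dense product and solve by a sparse matrix-vector product and a precomputed factorization.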
 

The results

 
  • A table of length numew or numstep of the λj with an indicator whether they are accepted (a=1) or rejected (a=-1) by the acceptance test, together with the corresponding error bound (which, of course, can be much larger)
  • A plot of the number of accepted eigenvalues over the step number
 

Questions?!

 
  • How does the eigenvalue distribution of A influence the convergence?
  • How does the convergence depend on the initial vector?
  • How do the results depend on the type of reorthogonalization?
 


 

27.03.2013