- The Lanczos method is often used to determine a part of the eigenvalues of a large real symmetric matrix A.
- The idea of this method is simple in principle: as in the power method one computes a sequence
Akx0, k=0,...,i, but unlike the power method one regards these vectors as the basis of a subspace
of Rn, a so-called Krylov subspace. One computes an orthonormal basis Qi of this subspace
and uses the eigenvalues of the so-called matrix Rayleigh quotient
QiT*A*Qi, that is, the projection of A
onto this subspace, as approximate eigenvalues of A.
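As a toy illustration of this idea (not the efficient algorithm), one can form the Krylov basis explicitly, orthogonalize it with a QR decomposition, and inspect the eigenvalues of the projection. The test matrix, its sizes, and the step count below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 20
# made-up test matrix with a well separated largest eigenvalue 150
A = np.diag(np.concatenate([np.arange(1.0, n), [150.0]]))

x = rng.standard_normal(n)
K = np.empty((n, m))
K[:, 0] = x / np.linalg.norm(x)
for k in range(1, m):
    y = A @ K[:, k - 1]               # power-method sequence A^k x0 ...
    K[:, k] = y / np.linalg.norm(y)   # ... normalized (same span)

Q, _ = np.linalg.qr(K)   # orthonormal basis of the Krylov subspace

# eigenvalues of the Rayleigh quotient Q^T A Q approximate those of A
ritz = np.linalg.eigvalsh(Q.T @ A @ Q)
print(ritz[-1])          # close to the dominant eigenvalue 150
```

Building the Krylov basis explicitly like this is numerically poor (the columns become nearly dependent); the short recursion below avoids it.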
- Fortunately, iteration and orthogonalization can be combined in a single step, which yields a short recursion:
Let x0 be the normalized initial vector.
Q(.,1) = x0
δ1 = Q(.,1)T *A* Q(.,1)
r1 = A*Q(.,1)- δ1 * Q(.,1)
Iterate for i=2,3,... as long as ||ri-1|| > 0 :
εi-1 = ||ri-1||
Q(.,i)= ri-1/ ||ri-1||
v = A*Q(.,i)
δi = Q(.,i)T * v
ri = v - δi* Q(.,i) -εi-1 * Q(.,i-1)
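In NumPy the recursion above might be sketched as follows (a straightforward transcription, without any of the reorthogonalization discussed later):

```python
import numpy as np

def lanczos(A, x0, m):
    """Short Lanczos recursion: returns the orthonormal Krylov basis Q
    and the diagonals delta, eps of the tridiagonal projection."""
    n = A.shape[0]
    Q = np.zeros((n, m))
    delta = np.zeros(m)
    eps = np.zeros(m - 1)
    Q[:, 0] = x0 / np.linalg.norm(x0)
    delta[0] = Q[:, 0] @ A @ Q[:, 0]
    r = A @ Q[:, 0] - delta[0] * Q[:, 0]
    for i in range(1, m):
        eps[i - 1] = np.linalg.norm(r)
        if eps[i - 1] == 0.0:          # invariant subspace found: stop
            return Q[:, :i], delta[:i], eps[:i - 1]
        Q[:, i] = r / eps[i - 1]
        v = A @ Q[:, i]
        delta[i] = Q[:, i] @ v
        r = v - delta[i] * Q[:, i] - eps[i - 1] * Q[:, i - 1]
    return Q, delta, eps
```

Note that each step touches only the two most recent columns of Q; this is what makes the recursion short.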
- The matrix Ti = QiT * A * Qi turns out to be tridiagonal, with the entries
εj-1, δj, εj defined above,
for j=1,...,i.
- The eigenproblem of Ti can be solved efficiently to high accuracy.
- Some of its eigenvalues converge very fast to eigenvalues of A, and the
corresponding approximate eigenvectors of A are the columns of the matrix
Y(.,j=1,...,i) = Q(.,j=1,...,i)*Vi
where Vi denotes the complete eigensystem of Ti .
- Which of the eigenvalues have already converged can be decided in different ways. Here we use Parlett's test:
accept λj in step i if
|εi*V(i,j)| <= εaccept*norm(Ti) ,
where norm denotes the Frobenius norm.
- The relative residual provides an error estimate for the relative error in λj:
|λj - μ|/|μ| <= ||A*Y(.,j)-λj*Y(.,j)||/||A*Y(.,j)||
for some exact eigenvalue μ of A.
This relative error bound can be much larger than the bound used in the acceptance criterion.
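The acceptance test and the residual-based error estimate can be sketched as follows; the test matrix, the step count m, and εaccept are made up for illustration. In exact arithmetic |εi*V(i,j)| equals the residual norm ||A*Y(.,j)-λj*Y(.,j)||, which is why the test is cheap: it needs no Ritz vector.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 60, 12
B = rng.standard_normal((n, n))
A = (B + B.T) / 2

# m steps of the short recursion from above
Q = np.zeros((n, m))
delta = np.zeros(m)
eps = np.zeros(m)                  # eps[i-1] holds epsilon_i; eps[m-1] is needed below
q = rng.standard_normal(n)
Q[:, 0] = q / np.linalg.norm(q)
r = A @ Q[:, 0]
delta[0] = Q[:, 0] @ r
r = r - delta[0] * Q[:, 0]
for i in range(1, m):
    eps[i - 1] = np.linalg.norm(r)
    Q[:, i] = r / eps[i - 1]
    v = A @ Q[:, i]
    delta[i] = Q[:, i] @ v
    r = v - delta[i] * Q[:, i] - eps[i - 1] * Q[:, i - 1]
eps[m - 1] = np.linalg.norm(r)     # epsilon_m

# eigensystem of the tridiagonal T_m and the Ritz vectors
T = np.diag(delta) + np.diag(eps[:m - 1], 1) + np.diag(eps[:m - 1], -1)
theta, V = np.linalg.eigh(T)
Y = Q @ V

# Parlett's test: accept theta_j if |eps_m * V(m,j)| <= eps_accept * ||T||_F
eps_accept = 1e-8
accepted = np.abs(eps[m - 1] * V[-1, :]) <= eps_accept * np.linalg.norm(T)
# (np.linalg.norm of a matrix defaults to the Frobenius norm)

# relative residual: computable bound on the relative error |theta_j - mu|/|mu|
rel_res = np.linalg.norm(A @ Y - Y * theta, axis=0) / np.linalg.norm(A @ Y, axis=0)
```

With this few steps the test typically accepts nothing yet; `accepted` fills in as individual Ritz values converge.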
- The method terminates as soon as some εj vanishes,
which, however, almost never occurs under roundoff. When this happens depends on
the initial vector and on the eigenvalue distribution of A:
for the recursion to run the full n steps, A must have no multiple eigenvalues and x0
must have contributions from all eigenvectors of A.
- This looks very nice, but unfortunately the orthogonality of the columns of Q is lost through roundoff effects, and this
happens very early.
If no remedy is employed, the method does not break down completely,
but strange effects can occur: for example, multiple approximations to one and the same
simple eigenvalue of A, completely wrong "approximations", etc.
Hence the method is usually enhanced by some variant of reorthogonalization.
With so-called complete reorthogonalization one applies a complete Gram-Schmidt orthogonalization
of the i-th column of Q with respect to all previously computed columns, whereas with selective
orthogonalization this takes place only with respect to the columns which belong to eigenvalues
that have been accepted by the acceptance test.
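The complete-reorthogonalization variant might be sketched as follows; the twofold Gram-Schmidt pass is a common safeguard, an implementation choice here rather than something prescribed by the text:

```python
import numpy as np

def lanczos_full_reorth(A, x0, m):
    """Lanczos with complete reorthogonalization: each new residual is
    explicitly orthogonalized against all previous columns of Q
    (Gram-Schmidt, applied twice for numerical safety)."""
    n = A.shape[0]
    Q = np.zeros((n, m))
    delta = np.zeros(m)
    eps = np.zeros(m - 1)
    Q[:, 0] = x0 / np.linalg.norm(x0)
    r = A @ Q[:, 0]
    delta[0] = Q[:, 0] @ r
    r = r - delta[0] * Q[:, 0]
    for i in range(1, m):
        # complete reorthogonalization against Q[:, :i], done twice
        for _ in range(2):
            r -= Q[:, :i] @ (Q[:, :i].T @ r)
        eps[i - 1] = np.linalg.norm(r)
        if eps[i - 1] == 0.0:          # invariant subspace found: stop
            return Q[:, :i], delta[:i], eps[:i - 1]
        Q[:, i] = r / eps[i - 1]
        v = A @ Q[:, i]
        delta[i] = Q[:, i] @ v
        r = v - delta[i] * Q[:, i] - eps[i - 1] * Q[:, i - 1]
    return Q, delta, eps
```

The extra cost is O(n*i) per step, against O(n) for the plain recursion; selective orthogonalization reduces this by projecting only against the accepted directions.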