- In this chapter we present the most important methods for solving the
matrix eigenvalue problem. We restrict ourselves to the case of real matrices;
the complex case has analogous solutions, but the input and output would be more cumbersome.
- For general matrices of moderate size (up to a few thousand rows/columns, say)
the QR algorithm with Wilkinson shifts and the Francis double-step technique
is the method of choice for solving the complete problem, i.e. determining
all eigenvalues and eigenvectors. We present it here in the netlib/eispack version.
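To illustrate the core QR→RQ similarity step, here is a toy single-shift version with deflation in Python/NumPy. This is a sketch for matrices with real eigenvalues only, not the Wilkinson-shift/Francis double-step routine the text presents (robust codes also reduce to Hessenberg form first):

```python
import numpy as np

def qr_eigenvalues(A, tol=1e-10, max_iter=1000):
    """Shifted QR iteration with deflation (shift = trailing diagonal
    entry). A toy sketch for matrices with real eigenvalues; production
    codes use Wilkinson shifts and the Francis double step."""
    H = np.array(A, dtype=float)
    eigs = []
    while H.shape[0] > 1:
        n = H.shape[0]
        for _ in range(max_iter):
            # deflate as soon as the last row has decoupled
            if np.linalg.norm(H[n - 1, : n - 1]) < tol:
                break
            mu = H[n - 1, n - 1]                      # shift
            Q, R = np.linalg.qr(H - mu * np.eye(n))
            H = R @ Q + mu * np.eye(n)                # similarity transform
        eigs.append(H[n - 1, n - 1])
        H = H[: n - 1, : n - 1]                       # deflation
    eigs.append(H[0, 0])
    return np.sort(np.array(eigs))
```

Each pass computes a QR factorization of the shifted matrix and recombines it in reverse order, which is an orthogonal similarity transformation, so the eigenvalues are preserved while the last row converges to zero.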
- The simplest and best-known method for computing a single eigenvector and
its associated eigenvalue is undoubtedly the von Mises method, also known as
direct (power) iteration. In its basic form it determines the eigenvalue of largest absolute value,
assuming that this absolute value is attained by only one eigenvalue. Its counterpart
for determining the eigenvalue nearest to zero (or, after a shift of the origin, nearest to
any chosen point) is the so-called inverse iteration, first proposed by Wielandt;
it solves a linear system in each step and thus avoids explicit use of the inverse.
We present both methods in one application. You may experiment with the prepared examples
using different shifts of the origin, or use your own matrix.
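Both iterations fit in a few lines; the following NumPy sketch (convergence tolerances and the starting vector are our choices, not prescribed by the methods) shows the pair side by side:

```python
import numpy as np

def power_iteration(A, max_iter=1000, tol=1e-12):
    """Von Mises (power) iteration: eigenpair with the largest |eigenvalue|."""
    x = np.ones(A.shape[0])
    x /= np.linalg.norm(x)
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = x @ y                      # Rayleigh quotient
        x = y / np.linalg.norm(y)
        if abs(lam_new - lam) < tol * max(abs(lam_new), 1.0):
            break
        lam = lam_new
    return lam_new, x

def inverse_iteration(A, shift=0.0, max_iter=1000, tol=1e-12):
    """Wielandt inverse iteration: eigenpair nearest to `shift`. Solves a
    linear system in each step instead of forming the inverse explicitly."""
    n = A.shape[0]
    M = A - shift * np.eye(n)
    x = np.ones(n) / np.sqrt(n)
    lam = shift
    for _ in range(max_iter):
        x = np.linalg.solve(M, x)            # one linear solve per step
        x /= np.linalg.norm(x)
        lam_new = x @ (A @ x)
        if abs(lam_new - lam) < tol * max(abs(lam_new), 1.0):
            break
        lam = lam_new
    return lam_new, x
```

Note that inverse iteration with shift σ is just power iteration applied to (A − σI)⁻¹, which is why it converges to the eigenvalue closest to σ.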
- For real symmetric matrices we have the counterpart of the QR method, the QL algorithm
with Wilkinson shift. It is applied after the matrix has been transformed to tridiagonal form
by Householder transformations. The iteration is always convergent, normally even at supercubic
speed.
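The Householder reduction step can be sketched as follows (a plain NumPy version applying full reflections for clarity; the subsequent QL/Wilkinson-shift phase on the tridiagonal matrix is omitted here):

```python
import numpy as np

def householder_tridiag(A):
    """Reduce a real symmetric matrix to tridiagonal form by a sequence
    of Householder reflections (orthogonal similarity transformations)."""
    T = np.array(A, dtype=float)
    n = T.shape[0]
    for k in range(n - 2):
        x = T[k + 1:, k].copy()
        norm_x = np.linalg.norm(x)
        if norm_x == 0.0:
            continue                         # column already in shape
        sign = -1.0 if x[0] < 0.0 else 1.0
        alpha = -sign * norm_x               # sign choice avoids cancellation
        v = x
        v[0] -= alpha
        v /= np.linalg.norm(v)
        # two-sided update with P = I - 2 v v^T acting on rows/cols k+1..n-1
        T[k + 1:, :] -= 2.0 * np.outer(v, v @ T[k + 1:, :])
        T[:, k + 1:] -= 2.0 * np.outer(T[:, k + 1:] @ v, v)
    return T
```

Because every reflection is an orthogonal similarity transformation, the tridiagonal result has exactly the same eigenvalues as the input matrix.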
- The method of Jacobi applies plane rotations in the (x_i, x_j)-plane
that annihilate a_ij and a_ji,
in order to drive the sum of the squared off-diagonal elements to zero. Developed
more than 100 years before QL, it is no longer a competitor to QL in terms of total
operation count, but it has some advantages concerning the precision of the small eigenvalues.
We use it here in Rutishauser's implementation.
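A textbook cyclic-sweep version of the Jacobi method in NumPy looks like this (rotations are applied as full matrix products for clarity; Rutishauser's code updates only the affected rows and columns, and the sweep/tolerance parameters here are our choices):

```python
import numpy as np

def jacobi_eigenvalues(A, sweeps=30, tol=1e-12):
    """Cyclic Jacobi method: rotate in the (x_p, x_q)-plane to annihilate
    a_pq and a_qp, sweeping over all off-diagonal pairs until the sum of
    squared off-diagonal elements is negligible."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    for _ in range(sweeps):
        off = np.sqrt(np.sum(np.tril(A, -1) ** 2))
        if off < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if A[p, q] == 0.0:
                    continue
                # rotation angle that zeroes A[p, q]
                theta = 0.5 * np.arctan2(2.0 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q] = s
                J[q, p] = -s
                A = J.T @ A @ J              # orthogonal similarity
    return np.sort(np.diag(A))
```

Each rotation may re-introduce small entries at positions annihilated earlier, but the off-diagonal norm decreases monotonically, which is why the sweeps converge.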
- For high-dimensional problems the methods mentioned above are no longer applicable.
Moreover, in the large-scale case typically only a small number of eigenvalues from the lower part of the spectrum
is desired. Two methods are popular here: simultaneous vector iteration (a generalization
of inverse iteration to a subspace), which we present in Rutishauser's implementation RITZIT
(which contains many enhancements for speeding up convergence), and the Lanczos method,
which is now a standard despite its possible pitfalls. We let you play with different versions
of this method. For the simultaneous vector iteration we also provide an application example,
the vibration of a clamped membrane.
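The basic Lanczos recurrence can be sketched as follows (a plain version without the reorthogonalization that practical codes need; with m = n steps and a generic starting vector it reproduces all eigenvalues of a small symmetric matrix):

```python
import numpy as np

def lanczos(A, m):
    """Plain Lanczos: build an m-step symmetric tridiagonal matrix T whose
    eigenvalues (Ritz values) approximate eigenvalues of symmetric A.
    No reorthogonalization -- the well-known pitfall in floating point."""
    n = A.shape[0]
    Q = np.zeros((n, m))
    alpha = np.zeros(m)
    beta = np.zeros(max(m - 1, 0))
    q = np.ones(n) / np.sqrt(n)              # starting vector (our choice)
    q_prev = np.zeros(n)
    b = 0.0
    for j in range(m):
        Q[:, j] = q
        w = A @ q - b * q_prev                # three-term recurrence
        alpha[j] = q @ w
        w = w - alpha[j] * q
        if j < m - 1:
            b = np.linalg.norm(w)
            beta[j] = b
            q_prev = q
            q = w / b
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return T, Q
```

The attraction for large sparse problems is that A enters only through matrix-vector products, and extreme eigenvalues are typically well approximated long before m approaches n.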
- If only a selection of eigenvalues with associated eigenvectors is sought, then interval
bisection combined with inverse iteration (for the eigenvectors) is a natural choice.
The bisection uses Sylvester's law of inertia, which in this case says that the numbers
of positive and negative pivots in the LU decomposition of a regular tridiagonal matrix
equal the numbers of positive and negative eigenvalues, respectively.
We present this method together with Householder's tridiagonalization for a general symmetric matrix.
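The counting argument behind the bisection can be sketched in a few lines of Python (function names and the zero-pivot guard are our own; applied to T − σI, the negative-pivot count gives the number of eigenvalues below σ):

```python
def count_below(d, e, sigma):
    """Number of eigenvalues below sigma for the symmetric tridiagonal
    matrix with diagonal d and off-diagonal e, by Sylvester's law of
    inertia: count the negative pivots of T - sigma*I."""
    count = 0
    pivot = d[0] - sigma
    if pivot < 0:
        count += 1
    for i in range(1, len(d)):
        if pivot == 0.0:
            pivot = 1e-300                   # guard against exact breakdown
        pivot = (d[i] - sigma) - e[i - 1] ** 2 / pivot
        if pivot < 0:
            count += 1
    return count

def bisect_eigenvalue(d, e, k, lo, hi, tol=1e-12):
    """k-th smallest eigenvalue (k = 1, 2, ...) inside [lo, hi] by
    interval bisection on the eigenvalue count."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_below(d, e, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Because the count function is monotone in σ, each bisection step halves the interval containing the wanted eigenvalue, so any single eigenvalue can be located independently of the others.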
- Finally, for the general eigenvalue problem
Ax = λ Bx
we present the QZ method, a generalization of the QR algorithm.
This method is applicable to problems up to medium size and is the only one
that works without restrictive assumptions on A and B.
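For readers who want to experiment, SciPy's LAPACK-based routines expose both the generalized eigensolver and the QZ (generalized Schur) decomposition itself; the small matrices below are chosen arbitrarily for illustration:

```python
import numpy as np
from scipy.linalg import eig, qz

# Arbitrary small example of A x = lambda B x.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[1.0, 0.0],
              [0.0, 2.0]])

# Generalized eigenvalues and eigenvectors (LAPACK ggev).
w, v = eig(A, B)

# The QZ decomposition: orthogonal Q, Z with A = Q @ AA @ Z.T and
# B = Q @ BB @ Z.T, where AA is quasi-triangular and BB is triangular.
AA, BB, Q, Z = qz(A, B)
```

In the real QZ form, the generalized eigenvalues are read off from the diagonal blocks of AA and BB; this also works when B is singular, where forming B⁻¹A would fail.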