The QR iteration for the non-Hermitian eigenvalue problem


- In order to determine all eigenvalues (including the complex ones) and the
corresponding eigenvectors of a nonsymmetric real matrix A, one normally
uses the QR iteration with double shifts.
- This procedure can be interpreted as a combination of inverse iteration
with a variable shift μ and the Schur transformation, which, in the case
that μ is an exact eigenvalue, transforms the matrix into block triangular
form, so that the remaining eigenvalues can be found from a problem of
reduced dimension. In this smaller matrix the eigenvalue just determined
does not occur again. The procedure is repeated until all eigenvalues are
obtained. Since only unitary similarity transformations are used, the whole
matrix is reduced to upper triangular form, the so-called Schur normal form.
Eigenvectors are found by computing the eigenvectors of this Schur form and
transforming back with the accumulated product of the transformations used.
The matrix is first transformed to upper Hessenberg form, so that convergence
can be observed from the vanishing of the last (or second-to-last)
subdiagonal element. If the second-to-last subdiagonal element becomes
sufficiently small, two eigenvalues can be determined from the lower right
2 × 2 block. If the Hessenberg form decomposes, the problem splits into a
series of smaller ones.
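The Hessenberg reduction mentioned above can be sketched with Householder similarity transformations. This is a minimal illustration assuming NumPy; the function name and interface are not from the original program:

```python
import numpy as np

def hessenberg(A):
    """Reduce A to upper Hessenberg form by Householder similarity
    transformations; returns (H, U) with U.T @ A @ U = H.
    Illustrative sketch, not the program's actual routine."""
    H = np.array(A, dtype=float)
    n = H.shape[0]
    U = np.eye(n)
    for k in range(n - 2):
        x = H[k+1:, k].copy()
        # Choose the sign of alpha opposite to x[0] for numerical stability
        alpha = -np.sign(x[0] if x[0] != 0 else 1.0) * np.linalg.norm(x)
        if alpha == 0:
            continue                      # column already zero below the diagonal
        v = x.copy()
        v[0] -= alpha
        v /= np.linalg.norm(v)
        # Apply P = I - 2 v v^T from the left and from the right (similarity)
        H[k+1:, k:] -= 2.0 * np.outer(v, v @ H[k+1:, k:])
        H[:, k+1:] -= 2.0 * np.outer(H[:, k+1:] @ v, v)
        # Accumulate the product of transformations for the back transformation
        U[:, k+1:] -= 2.0 * np.outer(U[:, k+1:] @ v, v)
    return H, U
```

The accumulated matrix U is exactly the "accumulated product of the transformations" used later to transform eigenvectors of the Schur form back to eigenvectors of A.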
- Formally (and entirely in complex arithmetic) the method can be described by means of the QR decomposition as follows:
      A_k - μ_k I = Q_k R_k
      A_{k+1} = R_k Q_k + μ_k I
  Algorithmically, the explicit use of the shifts is avoided; they are applied only implicitly.
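Assuming NumPy, the two formulas above can be sketched as an explicitly shifted QR iteration with deflation. This is for illustration only: as the text notes, practical codes apply the shifts implicitly, and a robust code would also need the exceptional shift discussed below. The function name and the simple Rayleigh-quotient shift choice are illustrative assumptions:

```python
import numpy as np

def shifted_qr(A, tol=1e-12, max_iter=500):
    """Explicitly shifted QR iteration in complex arithmetic with
    deflation: once the last subdiagonal entry vanishes, the eigenvalue
    in the lower right corner is accepted and the problem shrinks."""
    A = np.array(A, dtype=complex)
    n = A.shape[0]
    eigs = []
    while n > 1:
        for _ in range(max_iter):
            mu = A[n-1, n-1]                       # shift from the lower right entry
            Q, R = np.linalg.qr(A - mu * np.eye(n))  # A_k - mu_k I = Q_k R_k
            A = R @ Q + mu * np.eye(n)               # A_{k+1} = R_k Q_k + mu_k I
            if abs(A[n-1, n-2]) < tol * (abs(A[n-1, n-1]) + abs(A[n-2, n-2])):
                break
        eigs.append(A[n-1, n-1])                   # converged eigenvalue
        A = A[:n-1, :n-1]                          # deflate to the leading block
        n -= 1
    eigs.append(A[0, 0])
    return np.array(eigs)

# The matrix [[4, 1], [2, 3]] has eigenvalues 5 and 2.
vals = shifted_qr([[4.0, 1.0], [2.0, 3.0]])
```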
- Convergence is in general quadratic if the shifts are chosen as described below. If the last (or second-to-last) subdiagonal element does not decrease rapidly, e.g. when several eigenvalues have the same absolute value and μ = 0, this situation is handled by an "exceptional shift" μ, which breaks up any configuration of several eigenvalues at the same distance from μ.
- Since a real nonsymmetric matrix may well have complex conjugate eigenvalues,
and one wants to avoid complex arithmetic, the method uses two successive
shifts obtained from the lower right 2 × 2 block together with Francis'
theorem: the first unitary transformation applied maps the first column of
(A - μ_k I)(A - μ_{k+1} I), which is real, onto a multiple of the first unit
vector. Applying this transformation to A_k destroys the Hessenberg form
(which is invariant under the original algorithm); a succession of further
unitary transformations then reestablishes it. From Francis' theorem it
follows that this is equivalent to two successive steps of the complex
iteration above. (This is also called the implicit shift technique.)
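The real first column of (A - μ_k I)(A - μ_{k+1} I) never requires the complex shifts themselves: for an upper Hessenberg matrix only its first three entries are nonzero, and they can be formed from the trace s = μ_k + μ_{k+1} and determinant p = μ_k μ_{k+1} of the lower right 2 × 2 block, which are both real. A minimal sketch assuming NumPy (the function name is illustrative):

```python
import numpy as np

def double_shift_first_column(H):
    """First three entries of the first column of
    (H - mu1*I)(H - mu2*I) = H^2 - s*H + p*I for upper Hessenberg H,
    where mu1, mu2 are the eigenvalues of the trailing 2x2 block.
    Only s = mu1 + mu2 and p = mu1*mu2 are needed, so the computation
    stays real even when the shifts are complex conjugates."""
    n = H.shape[0]
    s = H[n-2, n-2] + H[n-1, n-1]            # trace of the 2x2 block
    p = (H[n-2, n-2] * H[n-1, n-1]
         - H[n-2, n-1] * H[n-1, n-2])        # determinant of the 2x2 block
    # For Hessenberg H, all entries of the column below these three are zero:
    x = H[0, 0]**2 + H[0, 1] * H[1, 0] - s * H[0, 0] + p
    y = H[1, 0] * (H[0, 0] + H[1, 1] - s)
    z = H[1, 0] * H[2, 1]
    return np.array([x, y, z])
```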


Input data

- Here there are several possibilities for selecting the matrix A:
- A nonsymmetric predefined matrix with eigenvalues 1, 2, 3, 4.
- A nonsymmetric predefined matrix with eigenvalues 1 +/- 5i, 2, 12.
- A nonsymmetric predefined matrix with eigenvalues 1, 1, 2 +/- i, 3, 3.
This matrix is not diagonalizable.
- You can construct a matrix A by prescribing its n complex eigenvalues
(in conjugate pairs) and an internally fixed complete eigensystem.
Eigenvalues are entered as pairs (x, y) for the complex value x + iy.
- Finally, you may type in a nonsymmetric matrix of your own.
- Further inputs: the dimension n in cases 4 and 5. Important: 2 <= n <= 30!
- You may request a printout of A.
- You may request a printout of the complete eigensystem.


The output

- A printout of A, if requested.
- The computed eigenvalues and their errors, as far as these are known (cases 1 to 4).
- The complete eigensystem, if requested.
- A short iteration record indicating the number of iterations needed for
the computation of each individual eigenvalue. The table lists the last and
second-to-last (or only the last) subdiagonal element of the current
Hessenberg form.


Questions?!

- How does the convergence depend on the distribution of the eigenvalues?
- How do the eigenvalues and eigenvectors change if the matrix is perturbed
slightly? Try case 3 in particular!