Bisection/Inverse iteration


 
 
  • In the following, A is assumed to be real and symmetric.
  • In order to find all eigenvalues (or all eigenvalues in a given interval) of a Hermitian matrix one can use Sylvester's law of inertia: provided that the matrix is invertible and the LU-decomposition of A is possible without interchanges,
    A = L U ,  U = D Lᵀ ,  i.e.  A = L D Lᵀ ,
    then the number of positive and negative diagonal elements of D equals the number of positive and negative eigenvalues of A, respectively. Applied to A − μ I this gives the number of eigenvalues greater than and less than μ. By shifting μ and simply counting one can therefore isolate any desired eigenvalue of A. The corresponding eigenvectors may be found afterwards using inverse iteration with the eigenvalue as shift. Unfortunately this form of the LU-decomposition is unstable and hence cannot be used for a full matrix. If A is tridiagonal, however, the process is stable, and even a zero pivot may be replaced by a tiny ε which, by the perturbation theory for eigenvalues of Hermitian matrices, changes the eigenvalues only by ε without introducing any trouble. Hence a general matrix is first transformed to tridiagonal form using a similarity transformation with Householder matrices.
  • Hence we now assume that the matrix is already real symmetric tridiagonal, T = tridiag( β_{i−1}, α_i, β_i ).
  • If μ is a test value for an eigenvalue, we simply compute
    1. γ_1 = α_1 − μ
    2. γ_i = α_i − μ − β_{i−1}² / γ_{i−1} ,   i = 2, …, n
    replacing a zero denominator by ε (as if we had changed α_{i−1} to α_{i−1} + ε). The number of negative γ_i is the number of eigenvalues of T less than μ. Evaluating this count at two different values of μ gives the number of eigenvalues in the interval between them. Clearly the interval [-||T||, ||T||] contains all eigenvalues, and by bisection we finally find every eigenvalue to any desired precision (within the roundoff level); a minimal sketch of this counting and of the bisection is given after this list. Note that a multiple eigenvalue can occur only if some β_i = 0, but in practice extremely close eigenvalues do not necessarily produce a small β_i, as you may try out.
  • Once an eigenvalue has been found accurately, a very rudimentary version of inverse iteration will produce a corresponding eigenvector; a sketch is given after this list. However, computing these eigenvectors individually may produce an eigensystem of very poor orthogonality. Hence this implementation allows you to require that the current eigenvector be orthogonalized against the eigenvectors already found.
  • The eigenvectors of the tridiagonal matrix can be retransformed to the eigenvectors of the original A using the Householder transformations in reverse order. This does not influence their orthogonality.
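
The counting recursion and the bisection are short enough to sketch. The following Python fragment is only a minimal illustration of the idea described above, not the code behind this page; the function names (count_less_than, kth_eigenvalue), the pure-Python style and the Gershgorin-type starting interval are choices made for this sketch.

```python
def count_less_than(alpha, beta, mu, eps=1e-300):
    """Number of eigenvalues of T = tridiag(beta, alpha, beta) that are < mu.

    alpha: diagonal entries (length n), beta: off-diagonal entries (length n-1).
    Runs the gamma-recursion of the LDL^T factorization of T - mu*I;
    a vanishing pivot is replaced by a tiny eps, as described above.
    """
    count = 0
    gamma = alpha[0] - mu
    if gamma == 0.0:
        gamma = eps
    if gamma < 0.0:
        count += 1
    for i in range(1, len(alpha)):
        gamma = alpha[i] - mu - beta[i - 1] ** 2 / gamma
        if gamma == 0.0:
            gamma = eps
        if gamma < 0.0:
            count += 1
    return count


def kth_eigenvalue(alpha, beta, k, tol=1e-14):
    """Approximate the k-th smallest eigenvalue (k = 0, ..., n-1) by bisection."""
    # Gershgorin-type bound: all eigenvalues lie in [-bound, bound].
    bound = max(abs(a) for a in alpha) + 2.0 * max((abs(b) for b in beta), default=0.0)
    lo, hi = -bound, bound
    while hi - lo > tol * max(1.0, abs(lo), abs(hi)):
        mid = 0.5 * (lo + hi)
        if count_less_than(alpha, beta, mid) <= k:
            lo = mid    # at most k eigenvalues below mid: the k-th lies to the right
        else:
            hi = mid    # more than k eigenvalues below mid: the k-th lies to the left
    return 0.5 * (lo + hi)
```

For example, for T = tridiag(-1, 2, -1) of dimension 3 (alpha = [2, 2, 2], beta = [-1, -1]) this sketch should reproduce the known eigenvalues 2 − √2, 2 and 2 + √2 up to the chosen tolerance.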
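A correspondingly rudimentary sketch of the inverse-iteration step with optional reorthogonalization might look as follows. Again this is only an illustration: lam is assumed to be an already computed eigenvalue, prev_vectors are the previously accepted, normalized eigenvectors, and for brevity the shifted system is solved with a dense solver instead of exploiting the tridiagonal structure.

```python
import numpy as np

def inverse_iteration(alpha, beta, lam, prev_vectors=(), n_iter=3, seed=0):
    """Eigenvector of T = tridiag(beta, alpha, beta) for the eigenvalue lam."""
    n = len(alpha)
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    # Perturb the shift slightly so that T - shift*I is not exactly singular.
    shift = lam + np.finfo(float).eps * max(1.0, abs(lam))
    M = T - shift * np.eye(n)
    v = np.random.default_rng(seed).standard_normal(n)
    for _ in range(n_iter):
        v = np.linalg.solve(M, v)       # one step of inverse iteration
        for w in prev_vectors:          # optional Gram-Schmidt reorthogonalization
            v = v - (w @ v) * w
        v = v / np.linalg.norm(v)
    return v
```

If A was first reduced to tridiagonal form by a Householder similarity transformation A = Q T Qᵀ (for instance scipy.linalg.hessenberg(A, calc_q=True) returns such a Q and, for symmetric A, a tridiagonal matrix), the eigenvector of A is obtained as Q @ v, which, as stated above, does not affect orthogonality.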
 

Input

 
  • The input matrix can be chosen from five cases:
    1. A predefined 8 × 8 matrix with known exact eigenvalues.
    2. A quindiagonal matrix which occurs in the discretized bar-bending problem. Here the dimension is variable and the exact eigenvalues are known.
    3. You can provide a set of eigenvalues yourself. A is then generated as
      A = V diag(λ_i) Vᵀ
      with an internally stored fixed orthonormal matrix V. This allows you to study the effect of different eigenvalue distributions on the efficiency and accuracy of the method (and on the tridiagonal form); a small sketch of such a construction follows this list.
    4. You may use a matrix of your own, typing the lower triangle in an input field.
    5. You may directly type a tridiagonal matrix using only its two nontrivial diagonals.
  • The dimension of the matrix, except in case 1 (where it is fixed).
  • You can require a printed output of A.
  • You can require a printed output of the eigensystem.
  • You can choose whether to reorthogonalize the eigenvectors or not.
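
Case 3 is easy to reproduce offline. The following minimal sketch (not the page's code) builds a symmetric matrix with prescribed eigenvalues; since the internally stored V of the page is not available here, a random orthogonal matrix obtained from a QR factorization stands in for it.

```python
import numpy as np

def matrix_with_eigenvalues(lambdas, seed=0):
    """Symmetric matrix A = V diag(lambda) V^T with prescribed eigenvalues."""
    lambdas = np.asarray(lambdas, dtype=float)
    n = lambdas.size
    rng = np.random.default_rng(seed)
    V, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthonormal V
    return V @ np.diag(lambdas) @ V.T
```

Calling np.linalg.eigvalsh(matrix_with_eigenvalues([1.0, 1.0 + 1e-8, 3.0])), for instance, lets you check how well such a clustered spectrum is reproduced.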
 

Output

 
  • If required, the matrix A.
  • A table of the computed eigenvalues (provided no problem was encountered) and their errors, as far as the exact eigenvalues are known (cases 1 to 3).
  • The eigensystem, if required.
  • The deviation of the eigensystem from orthogonality, measured in the Frobenius norm.
  • A plot which shows the positions of the test values μ. Since every eigenvalue is computed to machine precision, the bisection has to halve the starting interval roughly 50 times (2^(-50) is already close to the double-precision roundoff level), so you will see about 50 grid positions for each eigenvalue.
 

Questions?!

 
  • Does the eigenvalue distribution essentially influence the convergence?
  • How sensitive are the eigenvalues and eigenvectors to changes in the input matrix?


 
 

12.10.2010