The QL method

 


 
  • In order to determine all eigenvalues and eigenvectors of a real symmetric matrix one usually uses the QL method, which is based on the QL decomposition. This is analogous to the QR decomposition, but it is obtained by working through the matrix from right to left and producing a lower triangular instead of an upper triangular factor. This has advantages with respect to roundoff for so-called "graded" matrices, whose entries grow along the diagonal and which occur in transforming a general symmetric eigenproblem
    A x = λ B x
    to standard form. In order to minimize the computational effort the matrix is first transformed to tridiagonal form T using Householder transformations. The tridiagonal structure is maintained by the algorithm.
    The method can be interpreted as inverse iteration using variable shifts combined with the transformation to Schur normal form, which in this case is diagonal. With T0=T the computation goes as follows:
    Tk - μk I = Qk Lk
    Tk+1 = Lk Qk + μk I .
    For numerical reasons, however, we use the implicit form, in which the shift enters only the construction of the unitary transformation and is not applied explicitly to the diagonal. That means a Givens rotation is constructed to annihilate element (1,2) of Tk - μk I but is applied (as a similarity transformation) to Tk. This introduces an element (3,1) which destroys the tridiagonal form and which is driven out by a sequence of further Givens rotations. By Francis' theorem this has the same effect as the step with the explicit shift, up to a similarity transformation with a diagonal matrix of entries 1 and -1. Here Lk is lower bidiagonal, Qk a product of n-1 Givens rotations, and T the tridiagonal matrix obtained from A by the preliminary transformation using Householder matrices. For μk one chooses here the so-called "Wilkinson shift", that is, the eigenvalue of the upper left 2 by 2 submatrix of Tk which is nearer to the (1,1) element, and in case of a tie the lower one.
    It can be shown that this method is always convergent and in general even converges faster than cubically. That means that the element (1,2) of the sequence Tk approaches zero extremely fast.
  • After accepting the (1,1) element as an eigenvalue, the dimension of the problem is reduced by one and the process is repeated until all eigenvalues are found. The product of all unitary transformations used is the approximate complete eigensystem (see the sketch below).
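
The following is a minimal sketch in Python/NumPy of the iteration just described, in its explicit rather than implicit form, assuming the input T is already symmetric tridiagonal: one shifted QL step with the Wilkinson shift, repeated with deflation at the upper left corner. It is not the code behind this page; the function names (wilkinson_shift, ql_step, ql_eigenvalues) and the trick of obtaining the QL decomposition from numpy.linalg.qr by flipping rows and columns are illustrative choices only.

    import numpy as np

    def wilkinson_shift(T):
        # eigenvalue of the upper left 2x2 submatrix of T nearer to the (1,1) element
        ev = np.linalg.eigvalsh(T[:2, :2])
        return ev[np.argmin(np.abs(ev - T[0, 0]))]

    def ql_step(T, mu):
        # explicit shifted QL step:  T - mu*I = Q L,  return  L Q + mu*I
        # (numpy only offers QR; the QL factors of M are obtained from QR of J M J,
        #  where J is the flip matrix that reverses rows and columns)
        n = T.shape[0]
        J = np.eye(n)[::-1]
        Qf, R = np.linalg.qr(J @ (T - mu * np.eye(n)) @ J)
        Q, L = J @ Qf @ J, J @ R @ J      # Q orthogonal, L lower triangular
        return L @ Q + mu * np.eye(n)     # = Q^T T Q, again (nearly) tridiagonal

    def ql_eigenvalues(T, tol=1e-14, maxit=50):
        # shifted QL iteration with deflation at the upper left corner
        T = np.array(T, dtype=float)
        lam = []
        while T.shape[0] > 1:
            for _ in range(maxit):
                if abs(T[0, 1]) <= tol * (abs(T[0, 0]) + abs(T[1, 1])):
                    break                 # element (1,2) negligible
                T = ql_step(T, wilkinson_shift(T))
            lam.append(T[0, 0])           # accept the (1,1) element as eigenvalue
            T = T[1:, 1:]                 # reduce the dimension by one
        lam.append(T[0, 0])
        return np.array(lam)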
 

The inputs

 
  • You can choose from 4 cases:
    1. An 8x8 matrix with known eigenvalues.
    2. A quindiagonal matrix representing the standard discretization of a bar bending problem, with variable dimension. Here the eigenvalues are also known analytically. (A purely illustrative example of such a matrix is sketched after this list.)
    3. Matrix generation from an internally fixed complete eigensystem and eigenvalues taken from an input list.
    4. Direct input of a matrix (assumed symmetric) via its lower triangle.
  • The dimension n in cases 2, 3, 4: 3 <= n <= 30 in case 2 and 2 <= n <= 30 otherwise.
  • You may request a printout of the matrix.
  • You may request a printout of the eigensystem.
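
For case 2 the page does not state which discretization is used; purely for illustration, a common quindiagonal (pentadiagonal) model of this kind is the square of the second-difference matrix, i.e. the stencil 1, -4, 6, -4, 1 with corner diagonal entries 5, corresponding to a simply supported bending bar, whose eigenvalues are known in closed form and can serve as reference values. A small NumPy sketch (the names are again illustrative only):

    import numpy as np

    def quindiagonal_beam_matrix(n):
        # K^2 with K = tridiag(-1, 2, -1): symmetric, positive definite, pentadiagonal
        K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
        return K @ K

    def beam_eigenvalues_exact(n):
        # eigenvalues of K^2 in closed form: 16 sin^4(k*pi/(2*(n+1))), k = 1..n
        k = np.arange(1, n + 1)
        return 16.0 * np.sin(k * np.pi / (2.0 * (n + 1))) ** 4

    # consistency check of the two formulas against a library eigensolver
    n = 10
    A = quindiagonal_beam_matrix(n)
    print(np.allclose(np.linalg.eigvalsh(A), beam_eigenvalues_exact(n)))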
 

The output

 
  • The matrix A if requested.
  • The computed eigenvalues and their errors, as far as the exact eigenvalues are known (cases 1, 2, 3).
  • The eigensystem, if requested.
  • A short iteration protocol showing the current element (1,2) for each dimension in the reduction process.
 

Questions?!

 
  • How does the eigenvalue distribution influence the speed of convergence?
  • How sensitive is the eigensystem if you change the matrix entries?
 
