Simultaneous vector iteration: A real symmetric

 


 
  • Simultaneous vector iteration proceeds similarly to the power method: from a computed sequence X_k = A^k X_0 one gains eigenvalue information using an approximated eigenvector system. Here one uses not a single vector, but a system of p linearly independent vectors. Linear independence, which would be lost numerically by simply iterating as indicated above (why?), is maintained by reorthogonalizing this vector system after a fixed number of steps. Since there are many possible orthogonal bases of such a system, one uses the basis which is optimal in the sense of minimizing the defect ||AU - UD||_F with a diagonal matrix D, which constitutes the matrix of eigenvalue estimates. This way one finds the p largest eigenvalues of A under conditions which correspond exactly to those of the simple power method. The rate of convergence is linear and differs for the different eigenvalues. It behaves like
    |λ_{p+1}/λ_i|, i = 1, ..., p,
    provided |λ_{p+1}| < |λ_p|. This means that the largest eigenvalue converges fastest, followed by the lower ones. For this reason one usually chooses p a bit larger than the number of eigenvalues actually desired.
  • Here we assume A real symmetric, but there also exists a variant for the general case, in which D becomes triangular, corresponding to the Schur normal form.
    In order to find that optimal U, one must solve the complete eigenvalue problem for V^T A V, with V some orthogonal basis of X_k. Hence all this makes sense only if n >> p.
  • If one discretizes the eigenvalue problem of an elliptic partial differential equation (here the problem of a clamped square membrane, i.e. the negative Laplacian operator), one obtains a huge finite-dimensional eigenvalue problem of which only some of the smallest eigenvalues are of interest. In this case one combines the method with the technique of inverse iteration, obtaining approximations of the p smallest eigenvalues.
  • The eigenvectors represent the vibrational modes of the membrane, and the eigenvalues determine the corresponding frequencies (the frequencies are proportional to the square roots of the eigenvalues).
  • In our example we have a square membrane which is clamped along its complete circumference, that is, we have zero boundary values. Because of the symmetry of the problem, multiple eigenvalues occur, and for this reason one can get into trouble with an unsuitably chosen p.
  • The computational algorithm implemented here is Rutishauser's RITZIT from the Handbook for Automatic Computation (Linear Algebra). If p eigenvalues are required, we iterate with p+3 vectors simultaneously. In step k we have an orthogonal matrix Q_{k-1} whose columns are an approximate basis of the desired eigenspace. We use Gram-Schmidt orthogonalization in order to get a first orthogonal basis of the next iterate:
    A X_k = Q_{k-1},  Q_{k-1} orthogonal  (inverse iteration case: X_k solves a linear system)
    resp.
    X_k = A Q_{k-1}  (direct iteration case)
    X_k = U_k R_k  (Gram-Schmidt orthogonalization)
    The next basis of X_k is now chosen such that the error in the matrix Rayleigh quotient is minimal:
    R_k R_k^T = V_k D_k^2 V_k^T  (complete eigensystem computation)
    Q_k = U_k V_k
    For properly chosen Q_0 = X_0 the entries of D_k converge to the reciprocals of the smallest eigenvalues of A (in the inverse iteration case), resp. to the largest eigenvalues (in the direct case), and the columns of Q_k converge to the corresponding eigenvectors. This process is quite expensive. Hence Rutishauser carries out the construction of the new orthogonal basis only from time to time and introduces additional speed-ups. Therefore the number of steps in the reported ''Lambda-History'' is much smaller than the number of matrix-vector multiplies.
  • You also have the option to use a test case of your own; in this case direct iteration is used, to avoid the need of providing your own (sparse) linear system solver. This means that X_k is now obtained as
    X_k = A Q_{k-1}
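    The direct-iteration scheme above can be sketched in a few lines of NumPy (a minimal illustration without Rutishauser's acceleration devices; the function name and the test matrix are ours, not part of RITZIT):

```python
import numpy as np

def ritzit_step(A, Q):
    """One step of direct simultaneous iteration with Ritz acceleration."""
    X = A @ Q                           # X_k = A Q_{k-1}
    U, R = np.linalg.qr(X)              # X_k = U_k R_k  (Gram-Schmidt/QR)
    d2, V = np.linalg.eigh(R @ R.T)     # R_k R_k^T = V_k D_k^2 V_k^T
    order = np.argsort(d2)[::-1]        # largest eigenvalue estimates first
    return U @ V[:, order], np.sqrt(d2[order])  # Q_k = U_k V_k, diag(D_k)

# Usage: approximate the 2 largest eigenvalues of a small symmetric matrix.
rng = np.random.default_rng(0)
A = np.diag([5.0, 4.0, 3.0, 2.0, 1.0])
Q, _ = np.linalg.qr(rng.standard_normal((5, 2)))
for _ in range(200):
    Q, d = ritzit_step(A, Q)
# d now approximates the two largest eigenvalues, 5 and 4.
```

    In the inverse-iteration case one would replace X = A @ Q by solving the linear system A X = Q_{k-1}; the entries of D_k then approximate the reciprocals of the smallest eigenvalues of A.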
 

The inputs

 
  • You either define a problem of your own, simply by providing a piece of code which realizes the matrix-vector multiplication y = A*x (in which case the p largest eigenvalues are found), or you may simply run the example of the vibrating membrane (getting the p smallest eigenvalues).
  • In the first case your required inputs are:
  • In the corresponding text field, a piece of code which realizes y = A*x. This of course does not require that you store A as a full or sparse matrix. Look at the prepared input to see how this might be done.
  • The necessary information beyond this is
    1. n the dimension of the matrix
    2. p the number of eigenvalues required.
    3. steps the maximum number of steps allowed.
    4. the wanted precision epsaccept
    5. optionally, the initial vector system X_0, which need not be orthogonal (it is orthogonalized internally).
  The example of the vibrating membrane:
 
  • The grid size hx of the discretization, in the form
    nx = 1/hx.
    We use a square grid. The size of the matrix is then formally n = (nx-1)². Important: 2 <= nx <= 33 !
  • The number p of desired eigenvalues. Important: 1 <= p <= 30 !
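  For a problem of your own, the y = A*x code can apply the operator without ever storing A. As an illustration (our own sketch, not the applet's prepared input), here is the 5-point stencil of the negative Laplacian on the (nx-1) x (nx-1) interior grid of the unit square:

```python
import numpy as np

def laplacian_matvec(x, nx):
    """y = A*x for the 5-point discretization of the negative Laplacian
    on the unit square with zero boundary values; A is never stored."""
    h2 = (1.0 / nx) ** 2
    u = x.reshape(nx - 1, nx - 1)      # vector -> interior grid values
    y = 4.0 * u
    y[1:, :] -= u[:-1, :]              # subtract north neighbor
    y[:-1, :] -= u[1:, :]              # subtract south neighbor
    y[:, 1:] -= u[:, :-1]              # subtract west neighbor
    y[:, :-1] -= u[:, 1:]              # subtract east neighbor
    return y.ravel() / h2
```

  Boundary neighbors contribute nothing because the clamped membrane has zero boundary values, so they are simply omitted from the stencil.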
 

The results

 
  • The computed eigenvalues, each with a bound for its relative error
    ||A*x - λ*x|| / ||A*x||,
    and
  • a plot which shows the convergence of the different eigenvalues versus the step number k. These are shown in groups of five eigenvalues each, so that you will see several plots: for the first five, the second five, etc. Here you can see the different speeds of convergence for the different eigenvalues, depending on the distribution of the eigenvalues.
  • In the case of the membrane, additionally an animated view of the different eigenmodes, repeated with a period of a few seconds, plotted as a surface over the unit square.
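  The reported error bound can be reproduced for any computed pair (λ, x); a short sketch (the function name is ours):

```python
import numpy as np

def relative_residual(A, lam, x):
    """Bound for the relative error: ||A*x - lam*x|| / ||A*x||."""
    Ax = A @ x
    return np.linalg.norm(Ax - lam * x) / np.linalg.norm(Ax)

# An exact eigenpair gives residual 0; a perturbed eigenvalue does not.
A = np.diag([2.0, 1.0])
x = np.array([1.0, 0.0])
```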
 

Questions?!

 
  • Do the practical results correspond to the theoretical claims above? How does this depend on X0?
  • In the case of the membrane: how does the error in the computed eigenvalues of the discretized problem depend on the grid size? (The true eigenvalues of the clamped unit square are (k² + m²)π², k, m = 1, 2, ....)
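  For the second question, the behavior can be checked against the known closed-form eigenvalues of the 5-point discretization (assuming that is the stencil used here). The true eigenvalues of the clamped unit square are (k² + m²)π², the smallest being 2π²; its discrete counterpart is (8/hx²) sin²(π hx/2):

```python
import numpy as np

# Error of the smallest discrete eigenvalue (k = m = 1) versus the true
# value 2*pi^2, for a sequence of grid sizes hx = 1/nx.
true_val = 2.0 * np.pi ** 2
for nx in (4, 8, 16, 32):
    h = 1.0 / nx
    lam_h = (8.0 / h ** 2) * np.sin(np.pi * h / 2.0) ** 2
    print(f"nx = {nx:2d}: lambda_h = {lam_h:.5f}, error = {true_val - lam_h:.5f}")
# Each halving of hx reduces the error by roughly a factor of 4: O(hx^2).
```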
 


23.02.2012