
Preconditioned Shifted Power Method

The preconditioned power method with a shift is the simplest preconditioned iterative method. It can find only one extreme eigenvalue, the largest or the smallest depending on the choice of shift, and the corresponding eigenvector.

We present the single-vector version of the method for the pencil $B - \mu A$ in Algorithm 11.5. On output, $\mu^{(k)}$ and $x^{(k)}$ approximate the largest eigenvalue $\mu_{1}$ and its corresponding eigenvector.

We note that the shift $\tau $ requires knowledge of $\mu_{\min}$, or an estimate of it from below. If $B$ is nonnegative definite, then all eigenvalues of the pencil are nonnegative, so $\mu_{\min}$ may simply be replaced with $0$.


Algorithm 11.5: Preconditioned Power Method for GHEP

(1) select an initial guess $x^{(0)}$ and a shift $\tau \leq \mu_{\min}$
(2) for $i = 0, 1, \ldots$ until convergence
(3)   $\mu^{(i)} = \left( x^{(i)}, B x^{(i)} \right) / \left( x^{(i)}, A x^{(i)} \right)$
(4)   $r = B x^{(i)} - \mu^{(i)} A x^{(i)}$
(5)   $w = T r$
(6)   $x = w + \left( \mu^{(i)} - \tau \right) x^{(i)}$
(7)   $x^{(i+1)} = x / \Vert x \Vert _A$
    end for
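
To make the steps concrete, the following is a minimal NumPy sketch of Algorithm 11.5, not a reference implementation: the function name, the dense-matrix representation of $A$ and $B$, the callable preconditioner T, and the residual-based stopping test with the tolerances maxit and tol are all assumptions made for this illustration.

\begin{verbatim}
import numpy as np

def preconditioned_power_method(B, A, T, x0, tau, maxit=500, tol=1e-8):
    """Sketch of Algorithm 11.5 for the pencil B - mu*A.

    B, A : Hermitian matrices, A positive definite.
    T    : callable preconditioner, w = T(r).
    tau  : shift, assumed to satisfy tau <= mu_min.
    Returns approximations to the largest eigenvalue mu_1
    and its eigenvector.
    """
    x = x0 / np.sqrt(x0 @ (A @ x0))   # normalize initial guess in the A-norm
    mu = None
    for _ in range(maxit):
        Bx, Ax = B @ x, A @ x         # the 2 matrix-vector products per step
        mu = (x @ Bx) / (x @ Ax)      # step (3): Rayleigh quotient, 2 dots
        r = Bx - mu * Ax              # step (4): residual
        if np.linalg.norm(r) <= tol:  # illustrative stopping test (assumed)
            break
        w = T(r)                      # step (5): preconditioned residual
        x = w + (mu - tau) * x        # step (6): the single vector update
        x /= np.sqrt(x @ (A @ x))     # step (7): normalize in the A-norm
    return mu, x
\end{verbatim}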

The standard arguments, based on the eigendecomposition of $A^{-1}B$ for the pencil $B - \mu A$, do not suffice to show that the power method converges, because the eigenvectors in that decomposition are not, in general, eigenvectors of the iteration operator. This makes the convergence theory quite delicate. D'yakonov and his colleagues [150,146,147] obtained explicit estimates of linear convergence for Algorithm 11.5 using assumption (11.9). Somewhat simplified (see [264,265,268]), the convergence rate estimate for Algorithm 11.5 can be written as

\begin{displaymath}
\frac{\mu_1 - \mu^{(n)}}{\mu^{(n)} - \mu_2} \leq (1 - \xi)^{2n} \,
\frac{\mu_1 - \mu^{(0)}}{\mu^{(0)} - \mu_2},
\qquad
\xi = \frac{\delta_0}{\delta_1} \,
\frac{\mu_1 - \mu_2}{\mu_1 - \mu_{\min}},
\end{displaymath} (11.10)

under the assumption that $\mu^{(0)} > \mu_2$.

The estimate shows that the convergence is (at least) linear. Note that the condition numbers of $A$ and $B$ do not appear in the estimate.
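
For a purely illustrative set of numbers: if the preconditioner satisfies (11.9) with $\delta_0 / \delta_1 = 1/2$ and the spectrum gives $(\mu_1 - \mu_2)/(\mu_1 - \mu_{\min}) = 1/10$, then $\xi = 1/20$, and each iteration reduces the ratio $(\mu_1 - \mu^{(n)})/(\mu^{(n)} - \mu_2)$ in (11.10) by the factor $(1 - \xi)^2 \approx 0.90$, independently of the mesh size or the condition number of $A$.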

Similar results can be obtained if $\mu_p \geq \mu^{(0)} > \mu_{p+1}$ holds for some $p > 1$ [147].

In numerical experiments, Algorithm 11.5 usually converges to $\mu_1$ for a random initial guess. When $\mu_p \geq \mu^{(0)} > \mu_{p+1}$, the sequence $x^{(k)}$ must pass $p$ eigenvectors, which are saddle points of the Rayleigh quotient, before it can reach $u_1$. The convergence may slow down, in principle, near every such saddle point. For a general preconditioner $T$, it is hard to predict whether this happens for a given initial guess $x^{(0)}$.

We have collected the dominant costs per iteration of Algorithm 11.5, in terms of storage and floating point operations, in the following two tables.

Item                       Storage
Residual                   $1$ $n$-vector
Approximate eigenvector    $1$ $n$-vector
Temporary vectors          $1$-$2$ $n$-vectors

Action                             Major cost
Rayleigh quotient $\mu^{(i)}$      $2$ dot products
Residual $r$                       $2$ matrix-vector products
Preconditioned residual $w$        depends on preconditioner $T$
Approximate eigenvector            $1$ vector update
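
As a hypothetical usage of the sketch above, one can pair Algorithm 11.5 with a simple Jacobi (diagonal) preconditioner; the test problem below (a 1D Laplacian for $A$, the identity for $B$, and $\tau = 0$, which is admissible here since $B$ is nonnegative definite) is chosen purely for illustration.

\begin{verbatim}
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Laplacian, SPD
B = np.eye(n)                       # B nonnegative definite, so tau = 0 works
T = lambda r: r / np.diag(A)        # Jacobi preconditioning, w = D^{-1} r
rng = np.random.default_rng(0)
mu, x = preconditioned_power_method(B, A, T, rng.standard_normal(n), tau=0.0)
# mu approximates mu_1 = 1/lambda_min(A), the largest eigenvalue of B - mu*A
\end{verbatim}

With such a weak preconditioner, convergence is slow; the point of the method is that a higher-quality $T$, e.g., one based on an incomplete factorization of $A$, improves the ratio $\delta_0/\delta_1$ in (11.9) and hence the rate $\xi$.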

The main advantages of the preconditioned shifted power method are, of course, its algorithmic simplicity and its low cost per iteration. An a priori chosen shift gives complete control over the iterative process. The method is very stable and robust, and a solid convergence theory exists.

The need for bounds on $\delta_1$ and $\mu_{\min}$ to calculate the shift is a clear disadvantage. Moreover, the other preconditioned eigensolvers considered below also converge linearly, but typically at a better rate.

