

Restarting a Block Arnoldi Reduction.

In §7.6, we motivated and explained how to implicitly restart the Arnoldi method. The experimental results in [292,290,24] show that an implicitly restarted scheme is superior to other block methods that have appeared in the literature [391,398]. These explicit restarting techniques select a subsequent starting block based on some criterion and then restart from scratch. Not only does BIRAM, the block implicitly restarted Arnoldi method, use dramatically fewer matrix-vector products than these explicitly restarted schemes; it also requires significantly fewer Arnoldi vectors to be stored. The reader is referred to [391,227,245,398] for further information on explicitly restarted Arnoldi methods.

The extension of the scheme in §7.6 to a block Arnoldi method is straightforward. The chief difference is that an implicitly shifted QR algorithm that retains band Hessenberg form is needed. As with the standard implicitly shifted QR algorithm, a sequence of unitary matrices is constructed and applied so that an updated band Hessenberg matrix results.
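To make the mechanics concrete, the following sketch (Python with NumPy; the function name shifted_qr_restart and its arguments are illustrative and not part of the original text) carries out the contraction phase for the unblocked case $b=1$: each shift is applied through an explicit QR factorization of the shifted Hessenberg matrix, the basis is updated, and the factorization is truncated to length $r$. A practical block implementation would instead chase bulges so that the band Hessenberg form is retained and only the leading $r \cdot b$ columns are touched.

import numpy as np

def shifted_qr_restart(V, H, f, shifts, keep):
    # V : n-by-m Arnoldi basis, H : m-by-m Hessenberg matrix,
    # f : length-n residual vector, so that  A V = V H + f e_m^T.
    # 'shifts' are the unwanted eigenvalue estimates, 'keep' = r.
    m = H.shape[0]
    Q = np.eye(m, dtype=complex)
    for mu in shifts:
        # One implicit QR step: unitary similarity built from (H - mu I) = Qj Rj.
        Qj, _ = np.linalg.qr(H - mu * np.eye(m))
        H = Qj.conj().T @ H @ Qj          # eigenvalues are preserved
        Q = Q @ Qj                        # accumulate the unitary updates
    # Truncate to  A V_r = V_r H_r + f_r e_r^T  with the updated residual.
    beta  = H[keep, keep - 1]             # subdiagonal entry below the cut
    sigma = Q[m - 1, keep - 1]            # last row of the accumulated Q
    f_new = (V @ Q[:, keep]) * beta + f * sigma
    return V @ Q[:, :keep], H[:keep, :keep], f_new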

Numerous choices are possible for the selection of the $p$ shifts used during the QR algorithm, including the specific choice of $p.$ If the shifts are in complex conjugate pairs, the implicit double shift [198, pp. 355-358] can be used to avoid complex arithmetic.
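This works because the two shifts of a conjugate pair $\mu$, $\bar{\mu}$ combine into a real quadratic polynomial in $H$:

\begin{displaymath}
(H - \mu I)(H - \bar{\mu} I) = H^2 - 2\,\mathrm{Re}(\mu)\,H + \vert\mu\vert^2 I .
\end{displaymath}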

Typically, the $p$ shifts are selected by utilizing the spectral information contained in $H_{[m]}$, where $m=r+p$. Partition the eigenvalues of ${H}_{[m]}$ so that

\begin{displaymath}
\{ \underbrace{\theta_1,\ldots,\theta_r}_{\mbox{wanted}}\} \cup
\{ \underbrace{\theta_{r+1},\ldots,\theta_{m\cdot b}}_{\mbox{unwanted}}\}.
\end{displaymath} (137)

For an unblocked reduction, the $p$ shifts are selected from the unwanted eigenvalues of $H_{[m]}$ where $r=k.$ Sorensen [419] proposed this as an exact shift strategy. This strategy is equivalent to restarting the subsequent reduction with a linear combination of the approximate Schur vectors associated with the $k$ wanted eigenvalues. Other interesting strategies include the roots of Chebyshev polynomials [383], harmonic Ritz values [331,337,349,411], the roots of Leja polynomials [23], the roots of least squares polynomials [384], and refined shifts [244]. In particular, the Leja and harmonic Ritz values have been used to estimate the interior eigenvalues of $A$.
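As an illustration of the exact shift strategy, the sketch below (Python/NumPy; the function name exact_shifts and the criterion labels are ours, not from the text) sorts the eigenvalues of the projected matrix by a selection criterion, keeps the $k$ best matches as wanted values, and returns the remainder as candidate shifts.

import numpy as np

def exact_shifts(H, k, criterion="LM"):
    # Exact shift selection: the eigenvalues of the (band) Hessenberg matrix H
    # are split into k 'wanted' values and the remaining 'unwanted' values,
    # which serve as candidate shifts for the implicit QR steps.
    theta = np.linalg.eigvals(H)
    if criterion == "LM":                       # largest magnitude wanted
        order = np.argsort(-np.abs(theta))
    elif criterion == "LR":                     # largest real part wanted
        order = np.argsort(-theta.real)
    else:
        raise ValueError("unknown selection criterion")
    wanted   = theta[order[:k]]
    unwanted = theta[order[k:]]                 # candidates for the p shifts
    return wanted, unwanted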

In Algorithm 7.12, the integer $r$ is typically set to $k$, the number of wanted eigenvalues, during the input step. Once the iteration loop has been entered, the values of $r$, $p$, and thus $m=r+p$ may vary for every value of $i.$ When $b > 1$, we cannot apply all $p=m \cdot b - k$ unwanted eigenvalues as shifts. We are then faced with the question of selecting which $p$ shifts to apply and whether we should consider applying more than $p$ shifts.

For example, $m$ shifts can be applied until a Ritz pair satisfies the convergence tolerance. The converged Ritz pairs can then be deflated (or locked). (This is equivalent to the deflated iterative Arnoldi algorithm given by Saad [387, p. 181] and used in the implementations in [24,398].) This approach allows us to implicitly apply a polynomial filter of the maximum degree. (Applying more than $r+p$ shifts would require explicit polynomials in $A$.) However, as more shifts are applied, the cost of computing the subsequent Arnoldi reduction increases.

A strategy that varies $r$, $p$ (relative to $k$), and the shifts used at every iteration should give the best results; this is the subject of current research. The recent report [421] discusses an adaptive strategy for symmetric eigenvalue problems.

