
1.3 Candidate Methods for Sparse Matrices

  Preferred methods for computing singular triplets of large sparse matrices via equivalent symmetric eigenproblems include the Lanczos [11] and Arnoldi [20] methods. Other related methods include subspace iteration [19] and trace minimization [3]. Most of these methods obtain approximations to the eigenvalues and eigenvectors of a symmetric matrix A by constructing elements of a Krylov-like subspace [19] through repeated application of A to a vector s. Thus, the matrix A is used only to compute the matrix-vector product As, and implementations of these algorithms need not make any assumptions about the structure or storage format of A. The efficiency of these methods is therefore determined primarily by the performance of the matrix-vector product and by the storage scheme used for the matrix.
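To illustrate the matrix-free character of these methods, the sketch below (a hypothetical example, not code from the text) approximates the largest eigenvalue of a symmetric operator using nothing but matrix-vector products; the operator here is the 1-D discrete Laplacian, applied without ever forming A explicitly.

```python
import numpy as np

# The iterative methods discussed in the text touch the matrix only
# through products A @ s, so A can be kept in any sparse format -- or,
# as here, never formed at all (hypothetical illustrative operator).
n = 50
def matvec(s):
    # 1-D discrete Laplacian: tridiag(-1, 2, -1), applied matrix-free.
    y = 2.0 * s
    y[:-1] -= s[1:]
    y[1:] -= s[:-1]
    return y

# Power iteration: the simplest method built solely on the product As.
rng = np.random.default_rng(0)
v = rng.standard_normal(n)
v /= np.linalg.norm(v)
for _ in range(3000):
    w = matvec(v)
    v = w / np.linalg.norm(w)
lam = v @ matvec(v)   # Rayleigh quotient estimate of the largest eigenvalue
```

The exact largest eigenvalue of this operator is 2 + 2 cos(pi/(n+1)), so the quality of the estimate is easy to check; any Krylov-type solver can be driven by the same `matvec` callback.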

The Lanczos method solves the eigenvalue problem through partial tridiagonalizations of the matrix C using Krylov bases. Unlike direct methods such as QR factorization, it generates no intermediate dense submatrices resulting from fill-in. Also, information about C's extremal eigenvalues tends to emerge long before the tridiagonalization is complete. Hence, the Lanczos algorithm is particularly useful in situations where only a few of C's largest or smallest eigenvalues are desired.
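A minimal sketch of this behavior, assuming a simple diagonal test matrix with one well-separated extremal eigenvalue (the function and test matrix are illustrative, not the implementation referenced in the text; no reorthogonalization is performed):

```python
import numpy as np

def lanczos(matvec, n, k, seed=0):
    """k steps of plain Lanczos; returns eigenvalues of the k x k
    tridiagonal T_k, whose extremes approximate those of C."""
    rng = np.random.default_rng(seed)
    q = rng.standard_normal(n)
    q /= np.linalg.norm(q)
    q_prev = np.zeros(n)
    alpha, beta = np.zeros(k), np.zeros(k - 1)
    for j in range(k):
        w = matvec(q)                 # the only access to C
        alpha[j] = q @ w
        w -= alpha[j] * q + (beta[j - 1] * q_prev if j > 0 else 0.0)
        if j < k - 1:
            beta[j] = np.linalg.norm(w)
            q_prev, q = q, w / beta[j]
    # Eigenvalues of the small tridiagonal T_k (dense solve: k is tiny).
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return np.linalg.eigvalsh(T)

# Hypothetical test matrix: diagonal C with an isolated eigenvalue 100.
n = 300
d = np.concatenate([np.linspace(1.0, 10.0, n - 1), [100.0]])
ritz = lanczos(lambda s: d * s, n, k=20)
```

After only 20 steps on a 300 x 300 matrix, the largest Ritz value already matches the isolated extremal eigenvalue to high accuracy, illustrating why the method suits problems where only a few extreme eigenvalues are wanted.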

An alternative method (referred to as CSI-MSVD) for producing tridiagonal matrices whose eigenvalues approximate those of the original matrix C was presented in [10]. This method, which extracts modified moments from the Chebyshev semi-iterative method [23], is attractive for its potential for parallelism and its reduced memory requirements.
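The sketch below is not the CSI-MSVD algorithm of [10]; it only illustrates the kind of quantity involved: modified moments m_k = v' p_k(C) v for Chebyshev polynomials p_k = T_k, computed via the three-term recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x) using matrix-vector products and just two work vectors (the source of the reduced memory requirement). The diagonal test matrix is a hypothetical example with spectrum in [-1, 1], where the recurrence is well behaved.

```python
import numpy as np

def chebyshev_moments(matvec, v, num_moments):
    """Modified moments m_k = v' T_k(C) v via the Chebyshev
    three-term recurrence; only two work vectors are kept."""
    t_prev = v.copy()          # T_0(C) v = v
    t_curr = matvec(v)         # T_1(C) v = C v
    moments = [v @ t_prev, v @ t_curr]
    for _ in range(2, num_moments):
        t_next = 2.0 * matvec(t_curr) - t_prev   # T_{k+1} = 2 C T_k - T_{k-1}
        t_prev, t_curr = t_curr, t_next
        moments.append(v @ t_curr)
    return np.array(moments)

# Hypothetical diagonal C with spectrum inside [-1, 1].
rng = np.random.default_rng(1)
d = rng.uniform(-1.0, 1.0, size=100)
v = rng.standard_normal(100)
v /= np.linalg.norm(v)
m = chebyshev_moments(lambda s: d * s, v, 8)
```

Each moment costs one matrix-vector product and one dot product, and the products for independent starting vectors are embarrassingly parallel, which is the parallelism the text alludes to.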

Michael W. Berry
Sun May 19 11:34:27 EDT 1996