CSI-MSVD is a new
procedure for approximating both the singular values and singular
vectors of large, sparse matrices. When compared to popular
single-processor algorithms such as the Lanczos method, CSI-MSVD has
the attractive properties of reduced memory requirements and
accelerated convergence to the largest singular values. In addition,
the CSI-MSVD algorithm allows both functional and data parallelism,
making it suitable for heterogeneous computing environments. This
method can also be used to accelerate the convergence of other eigensystem
or SVD methods. Since the Chebyshev semi-iterative method requires
matrix-vector products that are also required by Krylov subspace
methods, the computations of CSI-MSVD could be interleaved with those
of a Krylov subspace method such as block Lanczos, so that the
eigenvalue estimates from CSI-MSVD could be used within the Krylov
method to solve a *shifted* eigenvalue problem.
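To make the shared cost concrete, the sketch below shows the classical Chebyshev (semi-)iteration for a symmetric positive definite system. This is a textbook recurrence, not the CSI-MSVD code itself; the function name and the assumption that spectral bounds `lmin` and `lmax` are known in advance are illustrative. Note that the only access to the matrix is through products `A @ v`, the same kernel a Krylov subspace method would use.

```python
import numpy as np

def chebyshev_iteration(A, b, lmin, lmax, iters=50):
    """Classical Chebyshev semi-iteration for A x = b, with A symmetric
    positive definite and known spectral bounds lmin <= lambda(A) <= lmax.
    Each step requires only one matrix-vector product with A."""
    theta = (lmax + lmin) / 2.0   # center of the spectrum
    delta = (lmax - lmin) / 2.0   # half-width of the spectrum
    sigma1 = theta / delta
    rho = 1.0 / sigma1
    x = np.zeros_like(b)
    r = b - A @ x                 # initial residual
    d = r / theta
    for _ in range(iters):
        x = x + d
        r = r - A @ d             # the shared matrix-vector product
        rho_new = 1.0 / (2.0 * sigma1 - rho)
        # three-term (Chebyshev) recurrence for the update direction
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x
```

Because the iteration touches the matrix only through these products, the same products could in principle be reused by an interleaved Krylov method, as suggested above.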

The biggest challenge encountered with the CSI-MSVD algorithm is the absence of a known relationship between the moments of the iterative method and the eigenvectors of the iteration matrix. Without such a relationship, external refinement schemes such as SYMMLQ are needed when high accuracy in the singular vector estimates is desired. It has been pointed out in [5] and [20] that the Lanczos procedure is the Stieltjes algorithm for computing a sequence of orthogonal polynomials with respect to an inner product. Deriving a relationship between the Lanczos vectors and the Stieltjes moments could provide a better understanding of the connection between the eigenvectors and the moments arising from quadrature formulae, so that convergence to a given eigenvalue and its eigenvector could be obtained simultaneously; more efficient deflation schemes could then be used.
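The Lanczos/Stieltjes connection mentioned above can be illustrated with a minimal symmetric Lanczos sketch (again illustrative, not taken from the CSI-MSVD implementation): the same three-term recurrence that the Stieltjes procedure uses to build orthogonal polynomials here builds an orthonormal basis V and a tridiagonal matrix T = V^T A V whose eigenvalues approximate those of A.

```python
import numpy as np

def lanczos(A, v0, k):
    """Symmetric Lanczos process: k steps of the three-term recurrence
    produce an orthonormal basis V and a tridiagonal T = V^T A V.
    Eigenvalues of T (Ritz values) approximate eigenvalues of A."""
    n = v0.size
    V = np.zeros((n, k))
    alpha = np.zeros(k)       # diagonal of T
    beta = np.zeros(k - 1)    # off-diagonal of T
    v = v0 / np.linalg.norm(v0)
    v_prev = np.zeros(n)
    b = 0.0
    for j in range(k):
        V[:, j] = v
        w = A @ v - b * v_prev        # three-term recurrence step
        alpha[j] = v @ w
        w = w - alpha[j] * v
        if j < k - 1:
            b = np.linalg.norm(w)
            beta[j] = b
            v_prev, v = v, w / b
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return V, T
```

The Lanczos vectors in V play the role of the orthogonal polynomials evaluated against the underlying measure; relating them explicitly to the Stieltjes moments is the open question raised above.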

Michael W. Berry (berry@cs.utk.edu)

Sun May 19 11:34:27 EDT 1996