Although [2] describes an implementation of CSI-MSVD that approximates singular values, it does not estimate singular vectors. Two problems that arise from the lack of singular vector approximations are listed below.
- Estimating the error in the approximated singular values requires a comparison with true singular values generated by some other scheme, in order to detect whether the current approximation to a singular value is within some acceptable bound.
- Singular values that have already been approximated to acceptable accuracy must not be recomputed in succeeding iterations, to prevent wasteful computation.
Also, since CSI-MSVD approximates the largest singular values first, it is necessary to deflate converged singular values out of the computation to allow convergence of the approximations to the smaller singular values. Most candidate deflation schemes (see [20]), however, require singular vector approximations.
The implementation described in [2] addresses these
two issues in the following manner.
Estimating singular vectors from the eigenvalue approximations derived by CSI-MSVD is a non-trivial task. The tridiagonal Jacobi matrix defined by Equation (17) is constructed so that its characteristic equation has the same roots as that of the iteration matrix. The method therefore never explicitly constructs a basis for the subspace spanned by the eigenvectors, so the techniques used in Lanczos and other popular Krylov-subspace methods to recover eigenvectors have no obvious analogs in CSI-MSVD.
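Because CSI-MSVD works with the two-cyclic form, the connection between eigenvalues and singular values can be checked numerically. The sketch below, with a hypothetical matrix A, verifies that the nonzero eigenvalues of the two-cyclic matrix occur in plus/minus pairs equal to the singular values of A:

```python
import numpy as np

# Hypothetical small rectangular matrix (illustration only).
A = np.array([[3.0, 1.0],
              [1.0, 2.0],
              [0.0, 1.0]])
m, n = A.shape

# Two-cyclic (Jordan-Wielandt) matrix M = [[0, A], [A^T, 0]].
M = np.block([[np.zeros((m, m)), A],
              [A.T, np.zeros((n, n))]])

sing = np.linalg.svd(A, compute_uv=False)   # singular values, descending
eig = np.sort(np.linalg.eigvalsh(M))        # eigenvalues of M, ascending

# Nonzero eigenvalues of M are +/- the singular values of A;
# the remaining |m - n| eigenvalues are zero.
print(np.round(eig, 6))
```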
However, as discussed in Section 2.1, especially when parameters such as the scaling factor defined in Equation (10) are chosen optimally, the iterative method defined by Equation (11) should converge to a vector x satisfying x = Mx + b. As the number of iterations grows, the iterates approach the eigenvector of the appropriately scaled matrix M corresponding to the eigenvalue nearest 1. The parameters that affect the convergence of CSI-MSVD are:
- the scaling factor, chosen so that the scaled matrix has singular values of magnitude at most 1 (or, equivalently, so that the two-cyclic matrix M has eigenvalues λ satisfying |λ| ≤ 1). Ideally, since the eigenvector is the solution of a system of the form (I - M)x = b, an optimal choice for the case b=0 would scale the matrix by its largest singular value.
- the damping parameter for the Chebyshev polynomials defined in Equations (21) and (22). As discussed in [2], the damping parameter effectively suppresses all singular values having magnitude less than its value. It is therefore desirable to set it just below the largest singular value in order to accelerate convergence to that singular value.
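The suppression effect of the damping parameter can be sketched as follows; the damping value q, the step count k, and the sample eigenvalues are illustrative choices, not values from the paper. After k Chebyshev-accelerated steps over the damping interval [-q, q], the eigencomponent at eigenvalue lam is scaled by T_k(lam/q) / T_k(1/q), which is tiny for |lam| < q:

```python
import numpy as np

def cheb(k, x):
    """Chebyshev polynomial T_k(x) via the three-term recurrence."""
    x = np.asarray(x, dtype=float)
    t_prev, t_cur = np.ones_like(x), x
    if k == 0:
        return t_prev
    for _ in range(k - 1):
        t_prev, t_cur = t_cur, 2.0 * x * t_cur - t_prev
    return t_cur

q, k = 0.6, 20                        # hypothetical damping parameter and step count
lam = np.array([1.0, 0.9, 0.5, 0.2])  # hypothetical eigenvalues of the scaled matrix

# After k steps, the eigencomponent at eigenvalue lam is scaled by
# T_k(lam/q) / T_k(1/q): components with |lam| < q are suppressed to
# near zero, while the component at lam = 1 is preserved.
gain = np.abs(cheb(k, lam / q)) / cheb(k, 1.0 / q)
print(np.round(gain, 10))
```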
The convergence of the iterative method to the eigenvector is affected by the choice of the above parameters, which are not known a priori. Moreover, after k steps of the iterative method, the recurrences provided by Equations (21) and (22) allow the calculation of 2k moments from the Chebyshev iterates, so that the construction of the Jacobi matrix is accelerated. However, in the absence of any obvious method for approximating Chebyshev iterates from the Jacobi matrix, the iterative method converges to the eigenvector more slowly than the Jacobi matrix converges to the eigenvalues. To overcome these problems, a two-pass scheme has been developed.
Figure 3: Simple flow chart for computing the k-largest singular triplets
using CSI-MSVD.
Figure 3 illustrates the following steps of the two-pass
scheme.
- PASS1: In the first pass, perform at most a prescribed maximum number of steps of CSI-MSVD. This involves the execution of all of the steps shown in Figure 2. The iteration is terminated when either
- the convergence tests of [2] are satisfied, or
- the prescribed maximum number of iterates has been calculated.
- PASS2: Let the eigenvalues of the Jacobi matrix at the end of PASS1 and the current Chebyshev iterate be given. If the residual norm of the current approximation is not within some desired tolerance, the scaling and damping parameters should be reset using the eigenvalue estimates from PASS1. Then at most the same maximum number of steps of the Chebyshev iteration are performed while examining convergence as in Step 1.
- ACCEPT: Accept the approximate singular value and the corresponding singular vector (as defined in Section 1.2). If higher accuracy in the current approximations to the singular value and corresponding singular vector is desired, some refinement procedure may be used, with starting values set to the current estimates.
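The PASS1/PASS2/ACCEPT control flow above can be sketched in Python. Plain power iteration stands in for the Chebyshev iteration and the Rayleigh quotient for the eigenvalue estimates taken from the Jacobi matrix, so this illustrates only the two-pass structure, not CSI-MSVD itself:

```python
import numpy as np

def two_pass_sketch(B, max_steps=50, tol=1e-8):
    """Illustrative PASS1/PASS2/ACCEPT loop.

    Power iteration stands in for the Chebyshev iteration; the Rayleigh
    quotient stands in for the Jacobi-matrix eigenvalue estimate.
    """
    n = B.shape[0]
    x = np.ones(n) / np.sqrt(n)
    theta = x @ B @ x
    for pass_no in (1, 2):                       # PASS1, then PASS2
        for _ in range(max_steps):               # at most max_steps iterations
            x = B @ x
            x /= np.linalg.norm(x)
            theta = x @ B @ x                    # current eigenvalue estimate
            if np.linalg.norm(B @ x - theta * x) < tol:
                return theta, x                  # ACCEPT
        # The real scheme would reset the scaling and damping parameters
        # here from the PASS1 eigenvalue estimates before continuing.
    return theta, x

# Hypothetical symmetric stand-in for the two-cyclic matrix.
B = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
theta, x = two_pass_sketch(B)
print(round(theta, 6))
```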
In practice, the accelerated construction of the Jacobi matrix produces eigenvalue approximations of acceptable accuracy, and, as discussed in [19], a single step of inverse iteration could be used to approximate the corresponding eigenvectors to comparable accuracy. However, as mentioned in Section 1.3, for large, sparse matrices it is desirable to avoid the fill-in incurred by direct methods. The current PVM implementation of CSI-MSVD therefore uses an ANSI-C translation of the subroutine SYMMLQ [1,18] for refinement of the eigenvector approximation. SYMMLQ is a conjugate-gradient-type method for symmetric indefinite systems of the form (B - σI)x = b, where σ is a specified scalar value. By setting b=0 and σ to the approximate eigenvalue, the computed vector x may approximate an eigenvector of the matrix B.
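For illustration, a single refinement step via inverse iteration (as discussed in [19]) might look as follows. A dense direct solve is used here only for compactness; the implementation instead solves the shifted system iteratively with SYMMLQ to avoid fill-in, and the matrix B below is a hypothetical symmetric stand-in:

```python
import numpy as np

# Hypothetical symmetric stand-in for the two-cyclic matrix B.
B = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

lam = np.linalg.eigvalsh(B)[-1]   # exact largest eigenvalue (reference only)
sigma = lam + 1e-4                # approximate eigenvalue, as from the Jacobi matrix
v = np.ones(3) / np.sqrt(3.0)     # rough starting vector

# One step of inverse iteration: solve (B - sigma*I) x = v and normalize.
# The paper's implementation solves this shifted system with SYMMLQ rather
# than a direct factorization, to avoid fill-in for large sparse matrices.
x = np.linalg.solve(B - sigma * np.eye(3), v)
x /= np.linalg.norm(x)

residual = np.linalg.norm(B @ x - (x @ B @ x) * x)
print(residual)
```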
After the residual error has been reduced to the
desired tolerance, Wielandt deflation
[20] can be employed to repeat the above 3 steps
and thereby approximate the next largest singular triplet.
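A minimal sketch of the deflation step, using a hypothetical symmetric matrix: for symmetric matrices, Wielandt deflation with the converged eigenvector as the deflating vector reduces to subtracting a rank-one term, which shifts the converged eigenvalue to zero while leaving the remaining eigenpairs intact:

```python
import numpy as np

# Hypothetical symmetric stand-in for the two-cyclic matrix.
B = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

w, V = np.linalg.eigh(B)
lam1, u1 = w[-1], V[:, -1]        # converged largest eigenpair

# Rank-one deflation (Wielandt deflation with u1 as the deflating vector):
# the converged eigenvalue moves to zero; all other eigenpairs are unchanged,
# so a subsequent iteration converges to the next largest eigenvalue.
B_deflated = B - lam1 * np.outer(u1, u1)

w2 = np.sort(np.linalg.eigvalsh(B_deflated))
print(np.round(w2, 6))
```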
Michael W. Berry (berry@cs.utk.edu)
Sun May 19 11:34:27 EDT 1996