Next: Stability and Accuracy Assessments Up: Generalized Hermitian Eigenvalue Problems Previous: Numerical Example.   Contents   Index

# Jacobi-Davidson Methods

G. Sleijpen and H. van der Vorst

We consider the application of the Jacobi-Davidson approach to the GHEP (5.1), $Ax = \lambda Bx$ with $B$ Hermitian positive definite. We can, similarly as for the Lanczos method treated in the previous section (see also [444]), apply the Jacobi-Davidson method to (5.1) with a $B$-orthogonal basis for the search subspace; that is,

$$V^\ast B V = I_m,$$

if we let $V$ denote the matrix with columns $v_1, \ldots, v_m$.

The Ritz-Galerkin condition for vectors $u = Vs$ in this subspace leads to

$$V^\ast A V s - \theta\, V^\ast B V s = 0, \qquad (88)$$

or, because of the $B$-orthogonality of $V$:

$$V^\ast A V s - \theta s = 0.$$

This leads to Ritz vectors $u = Vs$ and Ritz values $\theta$. We will assume that these Ritz vectors are normalized with respect to the $B$-inner product: $u^\ast B u = 1$.
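As an illustration (not part of the original algorithm), this projection step can be sketched in a few lines of numpy; the pencil $A$, $B$ and the basis below are hypothetical stand-ins. With a $B$-orthonormal basis, the projected problem reduces to a small standard Hermitian eigenproblem, and the residual of every Ritz pair is orthogonal to the search subspace:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 3

# Hypothetical test pencil: A Hermitian, B Hermitian positive definite
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
C = rng.standard_normal((n, n)); B = C @ C.T + n * np.eye(n)

# B-orthonormalize a random basis by modified Gram-Schmidt in the B-inner product
V = rng.standard_normal((n, m))
for j in range(m):
    for i in range(j):
        V[:, j] -= (V[:, i] @ B @ V[:, j]) * V[:, i]
    V[:, j] /= np.sqrt(V[:, j] @ B @ V[:, j])

M = V.T @ A @ V               # projected matrix; since V^T B V = I, (88) becomes M s = theta s
theta, S = np.linalg.eigh(M)  # Ritz values and primitive Ritz vectors
u = V @ S[:, 0]               # Ritz vector, already B-normalized
r = A @ u - theta[0] * B @ u  # residual of the Ritz pair

# Galerkin condition: the residual is orthogonal to the search subspace
assert np.allclose(V.T @ r, 0, atol=1e-8)
assert np.isclose(u @ B @ u, 1.0)
```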

The correction equation for the eigenvector component $t$, with $t \perp Bu$, for the generalized eigenproblem can be written as

$$(I - B u u^\ast)(A - \theta B)(I - u u^\ast B)\, t = -r, \qquad r \equiv Au - \theta Bu. \qquad (89)$$

If linear systems with the matrix $A - \theta B$ are efficiently solvable, then we can compute the exact solution $t$. In other cases we can, as is usual for the Jacobi-Davidson methods, compute approximate solutions to $t$ with a Krylov solver applied to (5.25). Note that the operator

$$(I - B u u^\ast)(A - \theta B)(I - u u^\ast B)$$

maps the space $(Bu)^\perp$ onto the space $u^\perp$, so that preconditioning is always required if we use a Krylov solver, in order to get a mapping between $(Bu)^\perp$ and $(Bu)^\perp$ (see item (31) of Algorithm 5.6).
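This mapping property is easy to check numerically. The sketch below (hypothetical pencil, numpy) builds the projected operator of the correction equation for a $B$-normalized vector $u$ and verifies that it takes a vector satisfying $u^\ast B t = 0$ to one orthogonal to $u$, which is also the space in which the right-hand side $-r$ lives:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6

# Hypothetical pencil and a B-normalized vector playing the role of a Ritz vector
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
C = rng.standard_normal((n, n)); B = C @ C.T + n * np.eye(n)
u = rng.standard_normal(n); u /= np.sqrt(u @ B @ u)
theta = u @ A @ u                         # its Ritz value (u* B u = 1)

P_left  = np.eye(n) - np.outer(B @ u, u)  # I - B u u*
P_right = np.eye(n) - np.outer(u, B @ u)  # I - u u* B
op = P_left @ (A - theta * B) @ P_right   # operator of the correction equation

t = rng.standard_normal(n)
t -= (u @ B @ t) * u                      # enforce u* B t = 0, i.e. t in (Bu)-perp
z = op @ t

assert np.isclose(u @ B @ t, 0, atol=1e-8)              # input lies in (Bu)-perp
assert np.isclose(u @ z, 0, atol=1e-8)                  # output lies in u-perp
assert np.isclose(u @ (A @ u - theta * B @ u), 0, atol=1e-8)  # and so does the residual
```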

As for the standard Hermitian case, the resulting scheme can be combined with restart and deflation. If we want to work with orthogonal operators in the deflation, then we have to work with $B$-orthogonal matrices that reduce the given generalized system to Schur form:

$$A Q_k = Z_k D_k,$$

in which $Z_k \equiv B Q_k$ and $Q_k$ is $B$-orthogonal: $Q_k^\ast B Q_k = I_k$. The matrix $D_k$ is a diagonal matrix with the $k$ computed eigenvalues on its diagonal; the columns of $Q_k$ are eigenvectors of $A - \lambda B$. This leads to skew projections for the deflation with the first $k$ eigenvectors:

$$(I - Z_k Q_k^\ast)(A - \theta B)(I - Q_k Z_k^\ast).$$

It is easy to verify that the deflated operator $(I - Z_k Q_k^\ast)\, B\, (I - Q_k Z_k^\ast)$ is still symmetric positive definite with respect to the space $Z_k^\perp$. We can simply use the $B$-inner product in that space, since $B$ and the deflated $B$ coincide over that space.
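These deflation identities can be checked with a small numpy experiment (hypothetical matrices; the Cholesky reduction below is used only to manufacture exact eigenpairs for the test). The skew projections annihilate the computed eigenvector directions, while the deflated $B$ agrees with $B$ on $Z_k^\perp$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, k = 6, 2

A = rng.standard_normal((n, n)); A = (A + A.T) / 2
C = rng.standard_normal((n, n)); B = C @ C.T + n * np.eye(n)

# Manufacture k exact generalized eigenpairs via a Cholesky reduction
L = np.linalg.cholesky(B)
Linv = np.linalg.inv(L)
lam, W = np.linalg.eigh(Linv @ A @ Linv.T)
Q = Linv.T @ W[:, :k]          # k B-orthonormal eigenvectors: Q* B Q = I
Z = B @ Q                      # Z_k = B Q_k
D = np.diag(lam[:k])

assert np.allclose(A @ Q, Z @ D)             # Schur form: A Q_k = Z_k D_k
assert np.allclose(Q.T @ B @ Q, np.eye(k))   # B-orthogonality

theta = 0.1                                  # arbitrary shift, for illustration only
Pl = np.eye(n) - Z @ Q.T                     # I - Z_k Q_k*
Pr = np.eye(n) - Q @ Z.T                     # I - Q_k Z_k*
assert np.allclose(Pl @ (A - theta * B) @ Pr @ Q, 0, atol=1e-8)  # deflated directions vanish

y = rng.standard_normal(n)
y -= Q @ (Z.T @ y)                           # project so that Z* y = 0
assert np.allclose(Pl @ B @ Pr @ y, B @ y)   # deflated B coincides with B on Z-perp
```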

If $B$ is not well-conditioned, that is, if the $B$-inner product $x^\ast B y$ is highly distorted with respect to the standard inner product, then we suggest using the QZ approach with Jacobi-Davidson (see §8.4). The QZ approach does not exploit symmetry of the involved matrices.

Algorithm 5.6 represents a Jacobi-Davidson template with restart and deflation for exterior eigenvalues. A template for a left-preconditioned Krylov solver is given in Algorithm 5.7.

To apply this algorithm we need to specify a tolerance $\epsilon$, a target value $\tau$, and a number $k_{\max}$ that specifies how many eigenpairs near $\tau$ should be computed. The value $m_{\max}$ denotes the maximum dimension of the search subspace. If it is exceeded then a restart takes place with a subspace of specified dimension $m_{\min}$. We also need to give a starting vector $v_0$.

On completion the $k_{\max}$ largest eigenvalues are delivered when $\tau$ is chosen larger than the largest eigenvalue; the $k_{\max}$ smallest eigenvalues are delivered if $\tau$ is chosen smaller than the smallest eigenvalue. The computed eigenpairs $(\widetilde{\lambda}_j, \widetilde{x}_j)$ satisfy $\|A\widetilde{x}_j - \widetilde{\lambda}_j B\widetilde{x}_j\| \leq j\epsilon$, where $\widetilde{x}_j$ denotes the $j$th column of $\widetilde{X}$. The eigenvectors are $B$-orthogonal: $\widetilde{x}_i^\ast B\, \widetilde{x}_j = 0$ for $i \neq j$.

Let us now discuss the different steps of Algorithm 5.6.

(1)
Initialization phase.

(3)-(5)
The new vector $t$ is made $B$-orthogonal with respect to the current search subspace by means of modified Gram-Schmidt. This can be replaced, for improved numerical stability, by a template as in Algorithm 4.14, in which all inner products and norms should be interpreted as $B$-inner products and $B$-norms, respectively.

If $m = 1$ then this is an empty loop.
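A sketch of this orthogonalization step, with an optional repeated sweep in the spirit of Algorithm 4.14 (the reorthogonalization threshold of 0.25 is a common choice, not prescribed by the template):

```python
import numpy as np

def b_mgs(t, V, B, kappa=0.25):
    """Make t B-orthogonal to the columns of V by modified Gram-Schmidt,
    repeating the sweep once if heavy cancellation is detected, and
    return the B-normalized result."""
    norm_before = np.sqrt(t @ B @ t)
    for _ in range(2):
        for i in range(V.shape[1]):
            t = t - (V[:, i] @ B @ t) * V[:, i]
        norm_after = np.sqrt(t @ B @ t)
        if norm_after > kappa * norm_before:  # little cancellation: done
            break
        norm_before = norm_after              # otherwise reorthogonalize once more
    return t / norm_after

# Demonstration with a hypothetical SPD matrix B
rng = np.random.default_rng(3)
n = 7
C = rng.standard_normal((n, n)); B = C @ C.T + n * np.eye(n)

V = np.zeros((n, 0))
for _ in range(3):                            # build a B-orthonormal basis
    V = np.column_stack([V, b_mgs(rng.standard_normal(n), V, B)])
t = b_mgs(rng.standard_normal(n), V, B)       # expand it by one more vector
```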

(7)
We expand the $n$ by $m$ matrices $V$, $V^A \equiv AV$, and $V^B \equiv BV$; $V$ denotes the matrix with the current basis vectors for the search subspace as its columns; likewise $V^A$ and $V^B$.

(8)-(10)
The $m$th column of the symmetric matrix $M \equiv V^\ast A V$ (of order $m$) is computed.

(11)
The eigenproblem for the $m$ by $m$ matrix $M$ can be solved with a suitable routine for Hermitian dense matrices from LAPACK. We have chosen to compute the standard Ritz values, which makes the algorithm suitable for computing exterior eigenvalues of the pencil $(A, B)$ close to a specified target value $\tau$. If eigenvalues in the interior part of the spectrum have to be computed, then the computation of harmonic Petrov values is advocated; see §8.4.
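In numpy, for instance, `numpy.linalg.eigh` calls the LAPACK dense Hermitian solvers; selecting the Ritz value closest to the target is then a one-line sort (the matrix and target below are illustrative stand-ins):

```python
import numpy as np

rng = np.random.default_rng(4)
m = 5
M = rng.standard_normal((m, m)); M = (M + M.T) / 2  # stands in for V* A V

theta, S = np.linalg.eigh(M)   # LAPACK Hermitian eigensolver under the hood
tau = 0.3                      # hypothetical target value
order = np.argsort(np.abs(theta - tau))
theta, S = theta[order], S[:, order]
# (theta[0], S[:, 0]) is now the Ritz pair closest to tau
```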

(13)
The stopping criterion is to accept an eigenvector approximation as soon as the norm of the residual (for the normalized eigenvector approximation) is below $\epsilon$. This means that we accept inaccuracies in the order of $\epsilon^2$ in the computed eigenvalues and inaccuracies (in angle) in the order of $\epsilon$ in the eigenvectors (provided that the concerned eigenvalue is simple and well separated from the others and $B$ is not ill-conditioned; see (5.33) and (5.34) in §5.7).

Detection of all wanted eigenvalues cannot be guaranteed; see item (13) of Algorithm 4.17.

(17)-(20)
After acceptance of a Ritz pair, we continue the search for the next pair, with the remaining Ritz vectors as the basis for the initial search space.

(23)-(29)
We restart as soon as the dimension of the search space for the current eigenvector exceeds $m_{\max}$. The process is restarted with the subspace spanned by the $m_{\min}$ Ritz vectors corresponding to the Ritz values closest to the target value $\tau$. The construction of that subspace is done in (25)-(27).

(31)
We have collected the locked (computed) eigenvectors in $\widetilde{Q}$, and the matrix $\widehat{Q}$ is $\widetilde{Q}$ expanded with the current eigenvector approximation $\widetilde{u}$: $\widehat{Q} \equiv [\widetilde{Q}, \widetilde{u}\,]$. This is done in order to obtain a more compact formulation; the correction equation in step (31) of Algorithm 5.6 is equivalent to the one in equation (5.25). The new correction $t$ has to be orthogonal to the columns of $\widetilde{Z} \equiv B\widetilde{Q}$ as well as to $B\widetilde{u}$.

Of course, the correction equation can be solved by any suitable process, for instance, a preconditioned Krylov subspace method that is designed to solve unsymmetric systems. However, because of the skew projections, we always need a preconditioner (which may be the identity operator if nothing else is available) that is deflated by the same skew projections so that we obtain a mapping between $\mathrm{span}(\widehat{Z})^\perp$ and itself, where $\widehat{Z} \equiv B\widehat{Q}$. Because of the occurrence of $\widehat{Q}$ and $\widehat{Z}$, one has to be careful with the usage of preconditioners for the matrix $A - \theta B$. The inclusion of preconditioners can be done as in Algorithm 5.7. Make sure that the starting vector $t_0$ for an iterative solver satisfies the orthogonality constraints $\widehat{Z}^\ast t_0 = 0$. Note that significant savings per step can be made in Algorithm 5.7 if the preconditioner $K$ is kept the same for a (few) Jacobi-Davidson iterations. In that case, columns of $\widehat{Y} \equiv K^{-1}\widehat{Z}$ can be saved from previous steps. Also the matrix $\widehat{Z}^\ast \widehat{Y}$ can be updated from previous steps, as well as its decomposition.
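For the single-vector case (no locked eigenvectors yet), the deflated preconditioning step can be sketched as follows. Everything below is a hypothetical stand-in: $K$ is simply taken symmetric positive definite so the sketch is well defined, and the explicit inverse stands in for a factorization or approximate solve. The vector $\hat y = K^{-1} B u$ is exactly the quantity that can be reused while $K$ and $u$ are kept fixed:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 6
A = rng.standard_normal((n, n)); A = (A + A.T) / 2
C = rng.standard_normal((n, n)); B = C @ C.T + n * np.eye(n)
u = rng.standard_normal(n); u /= np.sqrt(u @ B @ u)  # B-normalized Ritz vector
theta = u @ A @ u                                    # its Ritz value

# Hypothetical preconditioner K (in practice an approximation of A - theta*B)
K = B + np.eye(n)
Kinv = np.linalg.inv(K)
y_hat = Kinv @ (B @ u)      # reusable while K and u are frozen

def apply_prec(v):
    """Deflated preconditioning step: solve K z = v, then project the
    solution so that it again satisfies u* B z = 0."""
    z = Kinv @ v
    return z - ((u @ B @ z) / (u @ B @ y_hat)) * y_hat

r = A @ u - theta * B @ u   # residual of the Ritz pair
t0 = apply_prec(-r)         # a starting vector obeying the constraint
```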

It is not necessary to solve the correction equation very accurately. A strategy, often used for inexact Newton methods [113], works well here also: increase the accuracy with the Jacobi-Davidson iteration step; for instance, solve the correction equation with a residual reduction of $2^{-j}$ in the $j$th Jacobi-Davidson iteration ($j$ is reset to 0 when an eigenvector is detected).
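The suggested tolerance schedule is trivial to implement; a minimal sketch (the function name is ours):

```python
def correction_tolerance(j):
    """Residual-reduction target for the inner Krylov solve in the j-th
    Jacobi-Davidson iteration since the last accepted eigenpair: 2**-j."""
    return 2.0 ** (-j)

# The first few inner-solve tolerances after a (re)start:
tolerances = [correction_tolerance(j) for j in range(1, 5)]
# -> 0.5, 0.25, 0.125, 0.0625
```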

For a full theoretical background of this method see [172]. For details on the deflation technique with eigenvectors see also §4.7.3.

Susan Blackford 2000-11-20