

Direct Methods

The primary direct method used in practice to solve the NHEP (7.1) (and (7.2)) is the QR algorithm. It first computes the Schur canonical form (or simply the Schur decomposition) of a non-Hermitian matrix $A$:

\begin{displaymath}
A = U T U^{\ast},
\end{displaymath}

where $U$ is a unitary matrix, $U^{\ast} U = I$, and $T$ is an upper triangular matrix. The eigenvalues $\lambda$ of $A$ are the diagonal entries of $T$ (see also §2.5). The eigenvectors $x$ of $A$ are $U s$, where $s$ are the eigenvectors of $T$, which can be obtained by solving triangular systems. When $A$ is real, the QR algorithm computes the real Schur decomposition, which avoids complex arithmetic and thus saves floating point operations and memory.
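
For illustration, here is a minimal sketch of these relations using NumPy/SciPy (an assumption; the book itself works in terms of LAPACK and MATLAB). The test matrix is arbitrary, scipy.linalg.schur is called with complex output so that $T$ is triangular, and one eigenvector of $A$ is recovered from an eigenvector of $T$ by a triangular solve:

import numpy as np
from scipy.linalg import schur, solve_triangular

# A small non-Hermitian test matrix (arbitrary choice for illustration).
A = np.array([[1.0, 2.0, 0.0],
              [0.5, 1.0, 3.0],
              [0.0, 0.2, 4.0]])

# Complex Schur decomposition A = U T U^*, with T upper triangular.
T, U = schur(A, output='complex')
eigenvalues = np.diag(T)                 # eigenvalues of A = diagonal of T

# Eigenvector s of T for the eigenvalue T[k,k]: set s[k] = 1, s[j] = 0 for
# j > k, and solve the leading triangular system for s[:k]; then x = U s.
k = 2
n = A.shape[0]
s = np.zeros(n, dtype=complex)
s[k] = 1.0
if k > 0:
    s[:k] = solve_triangular(T[:k, :k] - T[k, k] * np.eye(k), -T[:k, k])
x = U @ s

print(np.allclose(U @ T @ U.conj().T, A))       # A = U T U^*
print(np.allclose(A @ x, eigenvalues[k] * x))   # x is an eigenvector of A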

The QR algorithm originates from the simple QR iteration:


Let $A_0 = A$.
For $k = 1, 2, \ldots,$
    QR-factorize $A_{k-1} = Q_k R_k$
    Compute $A_{k} = R_k Q_k$

Under certain conditions, $A_k$ converges to the Schur form $T$ of $A$. However, the convergence of this simple iteration is far too slow for practical use. To make the QR iteration a fast, effective method for computing the Schur decomposition, a number of crucial improvements have been developed, including Hessenberg reduction, implicit shifting, deflation, and matrix balancing. We refer the interested reader to [198,114] and the references therein for details of the related theory and implementations.
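
For concreteness, the basic iteration can be coded in a few lines; the following NumPy sketch (an illustration only, not the practical algorithm, which relies on the Hessenberg reduction, shifting, and deflation mentioned above) applies it to a small companion matrix whose eigenvalues have distinct modulus:

import numpy as np

def qr_iteration(A, num_steps=500):
    # Unshifted QR iteration: factor A_{k-1} = Q_k R_k, then form
    # A_k = R_k Q_k.  Each A_k is orthogonally similar to A, and under the
    # conditions noted above its subdiagonal part decays to zero.
    Ak = np.array(A, dtype=float)
    for _ in range(num_steps):
        Q, R = np.linalg.qr(Ak)
        Ak = R @ Q
    return Ak

# Companion matrix of (x-1)(x-2)(x-3): eigenvalues 1, 2, 3.
A = np.array([[6.0, -11.0, 6.0],
              [1.0,   0.0, 0.0],
              [0.0,   1.0, 0.0]])
T = qr_iteration(A)
print(np.sort(np.diag(T)))            # approximate eigenvalues from diag(T)
print(np.sort(np.linalg.eigvals(A)))  # reference values: [1. 2. 3.]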

The QR algorithm costs $O(n^3)$ floating point operations and requires $O(n^2)$ memory for a general $n \times n$ matrix. A crude estimate of the cost for a real $n \times n$ matrix is $25n^3$ floating point operations if both $U$ and $T$ are computed; if only eigenvalues are desired, about $10n^3$ floating point operations suffice.

The QR algorithm is backward stable; i.e., for the computed matrix $\widehat{U}$ (unitary to within machine precision) and the computed upper triangular matrix $\widehat{T}$, we have

\begin{displaymath}
A + E = \widehat{U}\widehat{T}\widehat{U}^{\ast},
\end{displaymath}

where $\Vert E\Vert \leq p(n)\epsilon\Vert A\Vert$ and $p(n)$ is a modestly growing polynomial function of $n$.
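
This backward error bound is easy to check numerically; a small sketch, assuming SciPy (whose schur routine is built on the LAPACK QR algorithm drivers), is:

import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n))

T, U = schur(A, output='complex')   # computed factors corresponding to T-hat, U-hat
E = U @ T @ U.conj().T - A          # backward error: A + E = U T U^*

# The ratio ||E|| / (eps ||A||) should be a modest function of n.
print(np.linalg.norm(E) / (np.finfo(float).eps * np.linalg.norm(A)))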

A subroutine implementing the QR algorithm is available in almost every linear algebra software package; it underlies the eig command in MATLAB. In LAPACK [12], the following driver routines perform a variety of tasks, computing Schur decompositions, eigenvalues, eigenvectors, and estimates of the accuracy of the computed results:

xGEES Schur decomposition with eigenvalue ordering,
xGEESX xGEES plus condition estimates,
xGEEV eigenvalues and eigenvectors,
xGEEVX xGEEV plus condition estimates,
where ${\tt x}$ is S or D for real single or double precision data types, or C or Z for complex single or double precision data types.
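
From Python, these LAPACK drivers can be reached through SciPy; a brief sketch, assuming the low-level wrapper scipy.linalg.lapack.dgeev (which, like DGEEV itself, returns the eigenvalues as separate real and imaginary parts) and the higher-level eig and schur functions built on xGEEV and xGEES:

import numpy as np
from scipy.linalg import eig, schur
from scipy.linalg.lapack import dgeev

A = np.random.default_rng(1).standard_normal((5, 5))

# Direct call to the double precision driver DGEEV: wr + i*wi are the
# eigenvalues, vl/vr the left/right eigenvectors, info == 0 means success.
wr, wi, vl, vr, info = dgeev(A)
assert info == 0

# Higher-level wrappers around the same drivers.
w, v = eig(A)      # eigenvalues and eigenvectors (xGEEV)
T, Z = schur(A)    # real Schur form, quasi-triangular T (xGEES)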

In ScaLAPACK [52], computational routines are provided for parallel Hessenberg reduction and for the parallel QR iteration with implicit shifting; they are PxGEHRD and PxLAHQR, respectively.

