From the theory of summability of sequences, consider the more general iterative procedure
\[
y^{(n)} = \sum_{j=0}^{n} \nu_j(n)\, x^{(j)}, \qquad n \ge 0.
\]
The requirement that if \(x^{(j)} = x\) for all \(j \ge 0\), then \(y^{(n)}\) must be \(x\), results in the constraint
\[
\sum_{j=0}^{n} \nu_j(n) = 1.
\]
The iterative method resulting from the sequence \(\{y^{(n)}\}\) will be referred to as a semi-iterative method, and the error vector corresponding to \(y^{(n)}\) is given by the expression
\[
\tilde{e}^{(n)} = y^{(n)} - x = p_n(M)\, e^{(0)},
\]
where \(p_n(x) = \sum_{j=0}^{n} \nu_j(n)\, x^j\) is an \(n\)th degree polynomial with \(p_n(1) = 1\) and \(e^{(0)} = x^{(0)} - x\).
In order to accelerate the convergence of the semi-iterative method, it is necessary to maximize the average rate of convergence or, equivalently, to obtain the solution of the minimization problem
\[
\min_{p_n(1) = 1} \; \max_{1 \le i \le N} \, \lvert p_n(\lambda_i) \rvert .
\]
The solution of this problem requires a priori determination of the eigenvalues \(\lambda_i\) of \(M\). In its place, consider the new minimization problem
\[
\min_{p_n(1) = 1} \; \max_{-\rho \le \lambda \le \rho} \, \lvert p_n(\lambda) \rvert ,
\]
where \(\rho = \rho(M) < 1\) is the spectral radius of \(M\). The solution of the new minimization problem is given in terms of the Chebyshev polynomials, \(c_n(x)\), defined by
\[
c_n(x) =
\begin{cases}
\cos\!\left(n \cos^{-1} x\right), & \lvert x \rvert \le 1, \\[2pt]
\cosh\!\left(n \cosh^{-1} x\right), & \lvert x \rvert \ge 1.
\end{cases}
\]
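As an illustrative aside (this snippet and its function names are not part of the original text), the two branches of this closed form can be checked numerically against the standard three-term recurrence \(c_{n+1}(x) = 2x\,c_n(x) - c_{n-1}(x)\):

```python
# Numerical sanity check (illustrative, not from the paper): the closed form
# c_n(x) = cos(n arccos x) for |x| <= 1 and (sign x)^n cosh(n arccosh |x|)
# for |x| > 1 satisfies c_{n+1}(x) = 2x c_n(x) - c_{n-1}(x).
import numpy as np

def cheb(n, x):
    """Chebyshev polynomial of the first kind via the closed form."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= 1,
                    np.cos(n * np.arccos(np.clip(x, -1, 1))),
                    np.sign(x) ** n * np.cosh(n * np.arccosh(np.maximum(np.abs(x), 1))))

xs = np.linspace(-2, 2, 401)          # straddles both branches
for n in range(1, 8):
    assert np.allclose(cheb(n + 1, xs), 2 * xs * cheb(n, xs) - cheb(n - 1, xs))
```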
Using the 3-term recurrence for Chebyshev polynomials,
\[
c_{n+1}(x) = 2x\, c_n(x) - c_{n-1}(x), \qquad c_0(x) = 1, \quad c_1(x) = x,
\]
and the fact that \(p_n(x) = c_n(x/\rho)\,/\,c_n(1/\rho)\), one can generate an iteration of the form (see )
\[
y^{(n+1)} = \omega_{n+1}\left( M y^{(n)} + g - y^{(n-1)} \right) + y^{(n-1)}, \qquad n \ge 1,
\]
where \(\omega_{n+1} = 2\, c_n(1/\rho) \,/\, \bigl( \rho\, c_{n+1}(1/\rho) \bigr)\) and \(y^{(0)} = x^{(0)}\), \(y^{(1)} = M y^{(0)} + g\). This iterative procedure specifies the Chebyshev semi-iterative method with respect to the original iteration defined in Equation (3). Discussions of the convergence of this method are presented in  and .
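A minimal sketch of the method, assuming a symmetric iteration matrix with known spectral radius \(\rho < 1\) (the test matrix, seed, and iteration count below are illustrative choices, not from the paper):

```python
# Illustrative sketch (not the paper's code) of the Chebyshev semi-iterative
# method for the fixed point x = M x + g, assuming rho(M) is known and < 1.
import numpy as np

rng = np.random.default_rng(0)
N = 50
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
M = Q @ np.diag(np.linspace(-0.9, 0.9, N)) @ Q.T   # symmetric, rho(M) = 0.9
x_true = rng.standard_normal(N)
g = x_true - M @ x_true                            # so x_true solves x = M x + g
rho = 0.9

def cheb_val(n, t):
    # c_n(t) for t > 1, via the hyperbolic closed form
    return np.cosh(n * np.arccosh(t))

y_prev = np.zeros(N)                               # y^(0) = x^(0) = 0
y = M @ y_prev + g                                 # y^(1)
for n in range(1, 60):
    omega = 2 * cheb_val(n, 1 / rho) / (rho * cheb_val(n + 1, 1 / rho))
    y, y_prev = omega * (M @ y + g - y_prev) + y_prev, y

print(np.linalg.norm(y - x_true))                  # error shrinks like 1/c_n(1/rho)
```

After \(n\) steps the error is bounded by \(\lVert e^{(0)} \rVert / c_n(1/\rho)\), so the residual above is far smaller than the roughly \(0.9^{60}\) factor plain iteration would give.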
As shown in , the iteration defined by Equation (10) can be used in combination with the theory of modified moments to produce approximations to the largest eigenvalues of the matrix \(M\). Specifically, Equation (10) with \(g = 0\) may be used to generate iterates \(z^{(n)} = p_n(M)\, z^{(0)}\) through the iteration
\[
z^{(n+1)} = \omega_{n+1}\left( M z^{(n)} - z^{(n-1)} \right) + z^{(n-1)}, \qquad n \ge 1.
\]
In a manner similar to the Lanczos algorithm (see  and ), the next section will show how modified moments derived from the iterates may be used to generate a sequence of bidiagonal matrices whose largest singular values approximate those of the sparse matrix \(A\).
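The homogeneous (\(g = 0\)) iterate-generating recurrence can be sketched and verified against a direct evaluation of \(p_n(M)\); all names and the small symmetric test matrix are illustrative assumptions, not from the paper:

```python
# Sketch (assumed setup, not the paper's code): with g = 0 the Chebyshev
# recurrence produces iterates z^(n) = p_n(M) z^(0), where
# p_n(x) = c_n(x/rho) / c_n(1/rho).  Checked here against direct evaluation
# of p_n(M) through the eigendecomposition of a small symmetric M.
import numpy as np

rng = np.random.default_rng(1)
N = 30
Q, _ = np.linalg.qr(rng.standard_normal((N, N)))
lam = np.linspace(-0.8, 0.8, N)
M = Q @ np.diag(lam) @ Q.T                 # symmetric, rho(M) = 0.8
rho = 0.8
z0 = rng.standard_normal(N)

def cheb_val(n, t):
    """Chebyshev polynomial c_n(t), valid on both branches."""
    t = np.asarray(t, dtype=float)
    return np.where(np.abs(t) <= 1,
                    np.cos(n * np.arccos(np.clip(t, -1, 1))),
                    np.sign(t) ** n * np.cosh(n * np.arccosh(np.maximum(np.abs(t), 1))))

z_prev, z = z0, M @ z0                     # z^(0) and z^(1) = p_1(M) z^(0), p_1(x) = x
for n in range(1, 10):
    omega = 2 * cheb_val(n, 1 / rho) / (rho * cheb_val(n + 1, 1 / rho))
    z, z_prev = omega * (M @ z - z_prev) + z_prev, z

# z is now z^(10); compare with p_10(M) z^(0) computed spectrally
p10 = cheb_val(10, lam / rho) / cheb_val(10, 1 / rho)
z_direct = Q @ (p10 * (Q.T @ z0))
assert np.allclose(z, z_direct)
```

Because \(\lvert p_n \rvert \le 1 / c_n(1/\rho)\) on the spectrum, these iterates decay rapidly in norm while remaining exactly expressible in the Chebyshev polynomial basis, which is what makes them usable for forming modified moments.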