Background

Let us begin by observing that problems such as vertex cover have long been known to be solvable in time O(f(k)n^c), where k is the relevant problem parameter (in this case the size of the cover) and c is a constant independent of both n and k. Yet other problems, such as dominating set, seem to yield only to methods that essentially check all candidate solutions of size at most k, a task requiring time O(f(k)n^{g(k)}). Thus, although both problems are NP-complete, and although both can be solved in polynomial time for each fixed k, vertex cover suffers only a growing multiplicative factor as k increases, while dominating set appears to require a growing exponent.
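To make the contrast concrete, here is a minimal sketch of the classic bounded-search-tree algorithm for vertex cover, written in OCaml over an assumed edge-list representation (the function name vertex_cover and the representation are illustrative choices, not taken from any particular implementation). Any cover must contain an endpoint of every edge, so we branch on the two endpoints of an arbitrary uncovered edge; the search tree has depth at most k and therefore at most 2^k leaves, giving a running time of the form O(2^k m), exactly the f(k)n^c shape described above.

    (* Bounded search tree for k-Vertex Cover, sketched over an edge list. *)
    let rec vertex_cover (edges : (int * int) list) (k : int) : int list option =
      match edges with
      | [] -> Some []                  (* every edge is covered *)
      | _ when k = 0 -> None           (* uncovered edges remain, budget exhausted *)
      | (u, v) :: _ ->
          (* Any cover of edge (u, v) contains u or v; try each with one less to spend. *)
          let remove w = List.filter (fun (a, b) -> a <> w && b <> w) edges in
          (match vertex_cover (remove u) (k - 1) with
           | Some cover -> Some (u :: cover)
           | None ->
               (match vertex_cover (remove v) (k - 1) with
                | Some cover -> Some (v :: cover)
                | None -> None))

    (* Example: the 4-cycle 1-2-3-4-1 has a cover of size 2. *)
    let () =
      match vertex_cover [ (1, 2); (2, 3); (3, 4); (4, 1) ] 2 with
      | Some cover -> List.iter (Printf.printf "%d ") cover; print_newline ()
      | None -> print_endline "no cover of the requested size"

No comparable trick is known for dominating set; capturing that difference is precisely the job of the W-hierarchy introduced below.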

Problems like vertex cover are said to be fixed-parameter tractable (FPT). As with the theory of NP-completeness, in which we cannot say with absolute certainty that even a single problem in NP is not also in P, we cannot at this juncture say with certainty that any NP-complete problem is not FPT. What we can do is employ the notion of W-completeness. In this context, completeness serves as mathematical evidence that a growing exponent represents an intrinsic computational barrier. It has been proved, for example, that dominating set is complete for W[2], and hence exceedingly unlikely to be FPT.

The study of FPT algorithms has opened up something of a cottage industry in algorithm design. Researchers at several sites around the world are now developing remarkably fast and practical algorithms for problems previously considered intractable. Vertex cover, as just one example, has been found to be decidable in time O(1.28^k + n). Thus the requisite exponential growth (assuming P is not equal to NP) is relegated to a mere additive term. Vertex cover is now considered well solved for all parameter values up to about 200. Yet only a few years ago, most researchers in complexity theory would probably have scoffed at any attempt to produce optimal solutions for realistic-sized instances of an NP-complete problem.
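The additive structure comes from kernelization: a polynomial-time preprocessing step shrinks the instance to a kernel whose size depends only on k, and the exponential-time search then touches the kernel alone. The sketch below illustrates the idea for vertex cover using the classic high-degree observation usually attributed to Buss; it is our illustration of the general technique, not the refined algorithm behind the O(1.28^k + n) bound.

    (* Kernelization sketch for k-Vertex Cover over an edge list.
       Any vertex of degree greater than k must belong to every cover of size
       at most k, so such vertices are forced into the cover; the remainder is
       either rejected outright or has at most k * k' edges. *)
    let kernelize (edges : (int * int) list) (k : int)
        : ((int * int) list * int list * int) option =
      let degree = Hashtbl.create 16 in
      let bump v =
        let d = try Hashtbl.find degree v with Not_found -> 0 in
        Hashtbl.replace degree v (d + 1)
      in
      List.iter (fun (u, v) -> bump u; bump v) edges;
      let forced =
        Hashtbl.fold (fun v d acc -> if d > k then v :: acc else acc) degree []
      in
      if List.length forced > k then None        (* more than k forced vertices: reject *)
      else begin
        let k' = k - List.length forced in
        (* Keep only the edges not already covered by the forced vertices. *)
        let kernel =
          List.filter (fun (u, v) -> not (List.mem u forced || List.mem v forced)) edges
        in
        (* Every surviving vertex has degree at most k, so a yes-instance can
           have at most k * k' kernel edges. *)
        if List.length kernel > k * k' then None else Some (kernel, forced, k')
      end

Running the earlier search-tree routine on the returned kernel with the reduced budget k', and adding the forced vertices to whatever cover it finds, yields a total running time of the form f(k) + O(n + m), which is the shape of the bound quoted above.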

Sample Applications

Applications arise in a surprising variety of computational domains. Consider, for example, cryptography. Using randomization and elliptic curve factorization, it has been shown that, if one fixes merely the size or Hamming weight of keys, then the problem of finding prime divisors is FPT. On the other hand, fixed-parameter problems relying on k-subset sum, k-subset product, or k-perfect code are hard for W[1].

FPT algorithms have also been devised for problems in artificial intelligence and nonmonotonic reasoning. The same can be said for the study of logic programs, particularly for stable model semantics and directional type checking. Another example is type checking in the functional programming language ML. This problem is EXP-complete, and thus believed to be extremely hard in classical terms. It has long been observed, however, that this fact does not deter practical implementations of ML. The explanation is that, given a program of size n, type checking can be handled by an FPT algorithm in time O(2^k + n), where the parameter k represents the nesting depth of the type declarations.
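To see why the nesting depth is the telling parameter, consider the small OCaml (a dialect of ML) fragment below. It is a standard textbook-style illustration, not the algorithm behind the bound just quoted: each declaration applies the previous function twice, squaring the size of the inferred type, so the printed types grow explosively with the depth of such declarations.

    (* Each declaration applies the previous one twice, so the size of the
       inferred type squares at every step: 2, 4, 16, 256, ... leaves. *)
    let f0 = fun x -> (x, x)      (* 'a -> 'a * 'a                        *)
    let f1 = fun x -> f0 (f0 x)   (* 'a -> ('a * 'a) * ('a * 'a)          *)
    let f2 = fun x -> f1 (f1 x)   (* result type has 16 occurrences of 'a *)
    let f3 = fun x -> f2 (f2 x)   (* 256 occurrences, and so on           *)

Because realistic programs keep such nesting shallow, the exponential term in the bound above is harmless in practice.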

Parameterized methods seem to be intensely relevant in computational biology, where an abundance of natural parameters possess ranges well below 100. Here a parameter may represent the number of sequences to be aligned, the size of the sequence alphabet (e.g., 4 for DNA and 20 for proteins), the number of hydrophobic contact points allowed in folding, the number of species in an evolutionary tree, or even something as mundane as the maximum number of errors permitted in a dataset. The parameterized complexity toolkit includes a variety of new techniques for devising robust algorithms for these applications.