Constraint Implementation

In this set of notes we describe a number of algorithms for satisfying constraints.

Eager versus Lazy Evaluation

Constraints can be evaluated in either a lazy or eager fashion. A lazy evaluator re-evaluates a constraint only if it affects a result the user requests. Eager evaluation re-evaluates a constraint as soon as one of its inputs changes. Thus a lazy evaluation system can contain variables that are out-of-date. Lazy evaluation avoids unnecessary work if relatively few values are needed to compute the result the user requests. For example, if portions of a drawing are off screen, they might not have to be re-evaluated. Lazy evaluation has two main drawbacks:

  1. A constraint may not be evaluated when a user expects the constraint to be evaluated. This could happen because the application does not request the constraint's value. A simple fix to this problem is to allow a constraint to be marked as "eager", which means that the constraint should always be evaluated if one of its inputs changes.

  2. If an edit introduces an error, the error may not be detected until much later in the program. The reason that the error may not be detected until later is that the constraint which detects the error may not be evaluated until much later. This problem is not so easy to solve and requires good debugging tools to help the programmer find where the error originates when the error is detected.

Eager evaluation is useful for immediately showing all the effects of any change. It will immediately detect any errors caused by the edited value. However, it can also unnecessarily re-evaluate constraints whose values are not currently needed. This unnecessary evaluation can slow an application's response time. If there are a large number of off-screen graphics or a lot of invisible graphics, this can be a particular problem.

Another problem with eager evaluation is that it can prematurely evaluate constraints. In other words, one or more of a constraint's inputs may not have been initialized when the evaluator evaluates the constraint. In this case the constraint may crash the application. Typically the programmer must use the language debugger to find the source of the crash, which is very difficult unless the programmer has an intimate knowledge of how the constraint solver is implemented.

A solution to the premature evaluation problem can be devised as follows:

  1. Allow the user to provide a default value for the constraint.

  2. If the constraint is evaluated prematurely, it can throw an exception.

  3. When a constraint throws an exception, the constraint solver can catch it and return the constraint's default value.
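The three steps above can be sketched as follows. This is a minimal illustration, not code from any particular solver; the class and function names (`PrematureEvaluation`, `Constraint`, `half_parent_width`) are all hypothetical:

```python
# Sketch of the default-value fix for premature evaluation.
class PrematureEvaluation(Exception):
    """Raised by a constraint body when an input is not yet initialized."""

class Constraint:
    def __init__(self, fn, default):
        self.fn = fn            # the constraint's evaluation function
        self.default = default  # user-supplied default value (step 1)

    def evaluate(self):
        try:
            return self.fn()            # may raise (step 2)
        except PrematureEvaluation:
            return self.default         # solver falls back to the default (step 3)

# A constraint whose input may not be populated yet.
inputs = {}

def half_parent_width():
    if "parent_width" not in inputs:
        raise PrematureEvaluation()
    return inputs["parent_width"] // 2

width = Constraint(half_parent_width, default=100)
print(width.evaluate())          # 100: default used before initialization
inputs["parent_width"] = 640
print(width.evaluate())          # 320: normal evaluation afterwards
```

The point of routing the exception through the solver, rather than letting it escape, is that the programmer never has to chase a crash whose cause is buried inside the solver's evaluation order.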

Data Structures Used by a Dataflow Constraint Solver

The fundamental data structure used by a dataflow constraint solver is a dataflow graph. The dataflow graph is a bipartite graph that keeps track of dependencies among variables and constraints. Variables and constraints comprise the two sets of vertices for the graph. There is a directed edge from a variable to a constraint if the constraint uses that variable as a parameter. There is a directed edge from a constraint to a variable if the constraint assigns a value to that variable. Formally the dataflow graph can be represented as G = {V, C, E}, where V represents the set of variables, C represents the set of constraints, and E represents the set of edges.
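As a concrete sketch, the two vertex sets and the two kinds of directed edges might be represented with adjacency lists stored on the vertices themselves (the `Variable` and `Constraint` classes here are illustrative, not from any particular system):

```python
# Minimal sketch of the bipartite dataflow graph G = {V, C, E}.
class Variable:
    def __init__(self, name, value=None):
        self.name = name
        self.value = value
        self.dependencies = []   # variable -> constraint edges (constraints that read it)

class Constraint:
    def __init__(self, fn, inputs, output):
        self.fn = fn
        self.inputs = inputs     # parameter variables
        self.output = output     # constraint -> variable edge (the variable it assigns)
        for v in inputs:
            v.dependencies.append(self)

# width = left + right: edges left -> cn, right -> cn, and cn -> width.
left = Variable("left", 10)
right = Variable("right", 5)
width = Variable("width")
cn = Constraint(lambda: left.value + right.value, [left, right], width)
width.value = cn.fn()
print(width.value)   # 15
```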

When a variable is edited, a constraint solver can find all the constraints that depend on this variable by using a depth-first search to follow the edges in the dataflow graph.

A constraint solver also typically uses a number of fields for each variable:

  • Value: the variable's current value.

  • Out_Of_Date: whether the variable's value needs to be recomputed.

  • Dependencies: the constraints that use this variable as a parameter.

Finally, each constraint contains a number of fields:

  • Eval: a method that computes the constraint's value.

  • Output: the variable whose value this constraint computes.

  • Out_Of_Date: whether this constraint needs to be re-evaluated (the solver also uses this flag to tell whether the constraint has already been visited).

    The Importance of Incremental Solvers

    One possible approach to constraint satisfaction is to re-evaluate all the constraints when a variable changes. However, many constraints may not depend on the changed variable so a great many constraints could be unnecessarily evaluated. This would decrease the responsiveness of the application. As a result, almost all constraint solvers use some sort of incremental algorithm that tries to evaluate only those constraints that depend on a changed variable.

    The simplest possible incremental algorithm is the following one:

    	Change(V, new_value) {
    	  if V != new_value then
    	    V = new_value
    	    For each cn in V.dependencies
    		temp = cn.Eval()
    		Change( cn.Output , temp )

    Unfortunately, in the worst case, this algorithm is exponential in the number of variables that must be re-evaluated. In other words, if n variables have to be re-evaluated, this algorithm can perform as many as 2^n constraint evaluations.

    The following example graph shows the exponential case:

    	A -------> C --------> E       (plus the edges A -> B, B -> C,
    	 \        /  \        /              C -> D, and D -> E)
    	  \      /    \      /
    	   \    /      \    /
    	     B            D

    	Change A
    	    Change(A) calls Change(C)
    	    Change(C) calls Change(E)
    	    Change(C) calls Change(D)
    	    Change(D) calls Change(E)
    	    Change(A) calls Change(B)
    	    Change(B) calls Change(C)
    	    Change(C) calls Change(E)	/* C has changed again, via B */
    	    Change(C) calls Change(D)
    	    Change(D) calls Change(E)

    Note that C and D are each re-evaluated twice and E four times; chaining another diamond onto E would double these counts again.
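    The blow-up is easy to reproduce. The sketch below (with illustrative names `Var`, `Cn`, and `build`) chains ten of the diamond graphs above and counts calls to Eval under the naive Change algorithm; with only 20 constraints, thousands of evaluations are performed:

```python
# Counting Eval calls made by the naive Change algorithm on a chain of
# "diamond" graphs like the one above.
class Var:
    def __init__(self):
        self.value = 0
        self.dependencies = []       # constraints that read this variable

class Cn:
    evals = 0                        # global count of constraint evaluations

    def __init__(self, inputs, output):
        self.inputs, self.output = inputs, output
        for v in inputs:
            v.dependencies.append(self)

    def eval(self):
        Cn.evals += 1
        return sum(v.value for v in self.inputs)

def change(v, new_value):            # the naive algorithm from the notes
    if v.value != new_value:
        v.value = new_value
        for cn in v.dependencies:
            change(cn.output, cn.eval())

def build(k):                        # chain k diamonds: A -> C and A -> B -> C per stage
    a = head = Var()
    for _ in range(k):
        b, c = Var(), Var()
        Cn([a, b], c)                # direct edge, evaluated first (sees a stale b)
        Cn([a], b)                   # indirect path through b
        a = c
    return head

change(build(10), 1)
print(Cn.evals)                      # thousands of evaluations for only 20 constraints
```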

    A Spreadsheet Solver

    In order to reduce this exponential complexity, we need to be a little smarter about how we evaluate the constraints. Basically we want to topologically sort the constraints and then evaluate them in topological order. A list of constraints is topologically sorted if for any two constraints, ci and cj, such that i < j (assuming that i and j denote the constraints' positions in the list), either:

    1. there is a directed path from ci to cj in the dataflow graph, or

    2. there is no path between cj and ci in either direction in the dataflow graph.

    The former condition says that cj either directly or indirectly uses ci as an input. Hence ci should be evaluated prior to cj. The latter condition says that ci and cj are independent of one another, and hence it does not matter in which order they are evaluated.

    A simple way of topologically ordering constraints is to perform a depth-first search of the dataflow graph starting at an edited variable. The depth-first search maintains a list of constraints. It adds a constraint to the list only after it has visited all of the constraint's successors. Once all of the constraint's successors have been added to the list, the constraint itself can be added to the front of the list, which guarantees that it will be evaluated before all of its successors. When the depth-first search terminates, the solver evaluates the constraints in the order they appear on the list. This is the approach used by most spreadsheet solvers. It is an eager evaluation approach because all constraints are immediately brought up to date.

    The algorithm is as follows:

    	Change (cell, new_value) {
    	    cell.value = new_value
    	    constraints_to_be_evaluated = empty
    	    Collect_Constraints(constraints_to_be_evaluated, cell)
    	    for each cn in constraints_to_be_evaluated do
    	        cn.output.value = cn.Eval();
    		cn.visited = false;
    		cn.output.visited = false;
    	    cell.visited = false
    	Collect_Constraints(constraints_to_be_evaluated, cell)
    	    cell.visited = true
    	    for each cn in cell.Dependencies do
    	        if cn.visited = false then
    		    cn.visited = true
    		    if cn.output.visited = false then
    		        Collect_Constraints(constraints_to_be_evaluated, cn.output)
    		    add cn to the front of constraints_to_be_evaluated

    This algorithm is linear in the number of constraints that must be evaluated. That is, if n constraints must be evaluated, the algorithm's time complexity is O(n).
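    A runnable sketch of this spreadsheet algorithm follows (the `Variable` and `Constraint` classes are hypothetical, carrying the visited flags described above; a deque's `appendleft` implements "add to the front of the list"):

```python
from collections import deque

# Spreadsheet-style solver: DFS topological sort, then evaluate in order.
class Variable:
    def __init__(self, value=0):
        self.value = value
        self.visited = False
        self.dependencies = []       # constraints that read this variable

class Constraint:
    def __init__(self, fn, inputs, output):
        self.fn, self.output = fn, output
        self.visited = False
        for v in inputs:
            v.dependencies.append(self)

def collect_constraints(cns, cell):
    cell.visited = True
    for cn in cell.dependencies:
        if not cn.visited:
            cn.visited = True
            if not cn.output.visited:
                collect_constraints(cns, cn.output)
            cns.appendleft(cn)       # prepend: cn precedes its successors

def change(cell, new_value):
    cell.value = new_value
    cns = deque()
    collect_constraints(cns, cell)
    for cn in cns:
        cn.output.value = cn.fn()    # evaluate in topological order
        cn.visited = cn.output.visited = False
    cell.visited = False

# The diamond a -> b -> c, a -> c: each constraint is evaluated exactly once.
a, b, c = Variable(), Variable(), Variable()
Constraint(lambda: a.value + b.value, [a, b], c)
Constraint(lambda: a.value, [a], b)
change(a, 5)
print(b.value, c.value)              # 5 10
```

Compare this with the naive algorithm on the same diamond: here the constraint computing c runs once, with b already up to date.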

    While simple, this algorithm has two drawbacks that tend to make it unusable for graphical interfaces:

    1. It cannot support lazy evaluation.

    2. The edges of the dataflow graph cannot change during constraint satisfaction. An edge can change as the result of a pointer variable changing. The algorithm can fail if an edge changes during constraint evaluation because the topological order may change, thus forcing the constraint list to be recalculated.

    Mark/Sweep Algorithms

    We can remedy the deficiencies in the above algorithm and still obtain an O(n) running time for our constraint solver by using a strategy known as mark/sweep. A mark/sweep algorithm has two phases:

    1. mark phase: The mark phase marks as out-of-date all constraints that depend either directly or indirectly on a modified variable.

    2. sweep phase: The sweep phase evaluates out-of-date constraints and marks them up-to-date. A constraint is evaluated by executing its function. The evaluation process is recursive. If the function requests the value of an out-of-date parameter, the constraint solver evaluates the parameter's constraint before providing the parameter's value to the function.

    Mark Phase

    The mark phase is a simple depth-first search that starts at a modified variable and marks all the constraints and variables downstream of that variable out-of-date:

    	Change(V, new_value) {
    	  if V.value != new_value then
    	    V.value = new_value
    	    Mark(V)
    	    V.out_of_date = false
    	Mark(v) {
    	    v.out_of_date = true
    	    for each cn in v.dependencies do
    	        if (cn.out_of_date == false) then
    		    cn.out_of_date = true
    		    if (cn.output.out_of_date == false) then
    		        Mark(cn.output)

    Note that at the end of the Change routine we set V's out_of_date flag back to false. This guarantees that if V has a constraint and is involved in a cycle, the sweep phase will not change V's value. It would be disconcerting to a user to explicitly set the value of a variable and then have the constraint solver set it to some other value!

    Sweep Phase

    The sweep phase is fairly simple, provided that the edges in the dataflow graph are known in advance and are static. We do have to add a field to a variable that keeps track of the constraints that compute the variable's value. Although we have not made a big point of this yet, it is possible to attach multiple constraints to a variable. If multiple constraints are marked out-of-date at the same time, the last constraint to be evaluated will be the one that sets the variable's value. The fields for a variable now look as follows:

      • Value: the variable's current value.

      • Out_Of_Date: whether the variable's value needs to be recomputed.

      • Dependencies: the constraints that use this variable as a parameter.

      • Constraints: the constraints that compute this variable's value.

    The reason for allowing multiple constraints to compute a variable's value is that we might want to allow multiple views to see the variable's value. For example, we might want a variable to be set by either a text box or a slider. So we might have constraints that make the variable's value depend on both the text box and on the slider. To be honest I am not sure that I would advocate setting constraints up so that they determine the values of variables in the model. In general you want constraints to determine the properties of graphical objects in the interface. You can use callback procedures to have the graphical objects set values in the model.

    In any event, the sweep phase can be implemented as follows:

    	Get(v) {
    	   if v.out_of_date = true then
    	       v.out_of_date = false /* essential for cycles */
    	       for each cn in v.constraints
    	           if cn.out_of_date = true then
    		      cn.out_of_date = false
    		      v.value = cn.eval()
    	   return v.value

    The important thing to notice is that the out_of_date flags are set to false before a constraint is evaluated. Doing so ensures that any cycles will terminate when they revisit this variable. For example, suppose we have the two constraints a = b and b = a . Suppose that both a and b are marked invalid and that we call Get(a). a is marked up-to-date and then its constraint is evaluated. The constraint requests b's value. b is out-of-date, so it is marked up-to-date and its constraint is evaluated. Its constraint requests a's value. Since a has been marked up-to-date, it returns whatever old value it has and b's constraint terminates, followed by a's constraint terminating (both variables get a's old value). If we did not set the out_of_date flags to false until after a constraint is evaluated, we could get into an infinite loop with cycles. Check out what happens in the above circular case and you will see that a and b end up in an infinite cycle, requesting each other's value.
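    Here is a runnable sketch of the mark and sweep phases together (hypothetical classes following the fields above), including the a = b, b = a cycle just described:

```python
# Mark/sweep constraint solver sketch.
class Variable:
    def __init__(self, value=0):
        self.value = value
        self.out_of_date = False
        self.dependencies = []   # constraints that read this variable
        self.constraints = []    # constraints that compute this variable

class Constraint:
    def __init__(self, fn, inputs, output):
        self.fn = fn
        self.out_of_date = False
        self.output = output
        output.constraints.append(self)
        for v in inputs:
            v.dependencies.append(self)

def mark(v):                     # mark phase: DFS marking everything downstream
    v.out_of_date = True
    for cn in v.dependencies:
        if not cn.out_of_date:
            cn.out_of_date = True
            if not cn.output.out_of_date:
                mark(cn.output)

def change(v, new_value):
    if v.value != new_value:
        v.value = new_value
        mark(v)
        v.out_of_date = False    # an edited variable keeps its value

def get(v):                      # sweep phase: lazy, demand-driven
    if v.out_of_date:
        v.out_of_date = False    # essential for cycles
        for cn in v.constraints:
            if cn.out_of_date:
                cn.out_of_date = False
                v.value = cn.fn()
    return v.value

# The cycle a = b and b = a from the discussion above.
a, b = Variable(1), Variable(2)
Constraint(lambda: get(b), [b], a)
Constraint(lambda: get(a), [a], b)
mark(a)
mark(b)
print(get(a), b.value)           # 1 1: both variables get a's old value
```

Clearing the out_of_date flags before evaluation is exactly what makes the circular request for a's value bottom out instead of looping forever.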

    Also note that the sweep phase supports lazy evaluation, since a variable's constraint is not evaluated until the variable's value is requested. The mark/sweep strategy can also support eager evaluation: the mark phase keeps a list of the constraints that it marks out_of_date and that have no successors. When the sweep phase is called, it simply iterates through this list and evaluates each of the constraints (evaluating a constraint with no successors recursively brings all of its out-of-date parameters up to date as well).

    You may also recall that earlier in these notes we said that in lazy evaluation, we can allow a constraint to be marked "eager". If we add this feature to the constraint system, the mark phase needs to be modified so that it checks whether a constraint is marked eager and, if it is, adds the constraint to a list of constraints that must be evaluated. When the sweep phase is called, it will iterate through this list and evaluate each of the constraints.

    In both the eager evaluation case and the lazy evaluation case in which constraints can be marked "eager", the constraint solver needs to allow an explicit call to be made to it to bring constraints up-to-date. For example:

        satisfy_constraints() {
            for each cn in cns_to_evaluate do
    	    cn.output.value = cn.eval()

    The cns_to_evaluate list would be constructed during the mark phase.
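    The combination of an eager-marked constraint, the modified mark phase, and the explicit satisfy_constraints call can be sketched as follows (all names are illustrative, following the notes):

```python
# "Eager" constraints in an otherwise lazy mark/sweep solver.
cns_to_evaluate = []                 # built during the mark phase

class Variable:
    def __init__(self, value=0):
        self.value = value
        self.out_of_date = False
        self.dependencies = []

class Constraint:
    def __init__(self, fn, inputs, output, eager=False):
        self.fn, self.output, self.eager = fn, output, eager
        self.out_of_date = False
        for v in inputs:
            v.dependencies.append(self)

def mark(v):
    v.out_of_date = True
    for cn in v.dependencies:
        if not cn.out_of_date:
            cn.out_of_date = True
            if cn.eager:
                cns_to_evaluate.append(cn)   # remember for satisfy_constraints
            if not cn.output.out_of_date:
                mark(cn.output)

def change(v, new_value):
    if v.value != new_value:
        v.value = new_value
        mark(v)
        v.out_of_date = False

def satisfy_constraints():
    for cn in cns_to_evaluate:
        cn.out_of_date = cn.output.out_of_date = False
        cn.output.value = cn.fn()
    cns_to_evaluate.clear()

a = Variable(1)
lazy_out, eager_out = Variable(), Variable()
Constraint(lambda: a.value + 1, [a], lazy_out)               # ordinary lazy constraint
Constraint(lambda: a.value * 10, [a], eager_out, eager=True) # marked "eager"
change(a, 5)
satisfy_constraints()
print(eager_out.value, lazy_out.out_of_date)   # 50 True
```

After the edit, the eager constraint is already up to date while the lazy one still waits for a Get.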

    A More Sophisticated Sweep Phase

    In a constraint system, it is often nice to be able to automatically construct the edges of the dataflow graph without forcing the user to declare what the edges are. In a spreadsheet, the user does not have to declare the edges because the formulas are simple enough that a parser can extract the variables they reference. However, if you allow a constraint to contain arbitrary code, you may not want to write a parser to find all the variables that the constraint uses. Further, if you also allow a constraint to call a function, then your parser may not even be able to discover all the variables that the constraint may reference. Consequently it would be nice if the constraint solver could automatically construct the edges of the dataflow graph. It turns out that this is possible if we make the sweep phase a bit more sophisticated.

    What we need to do to automatically construct dependency edges is to keep track of which constraint requested a variable's value. If we know which constraint requested a variable's value, then we can add the constraint to the variable's dependency list. An easy way to remember which constraint requested a variable's value is to keep a stack of constraints. Each time a constraint is about to be evaluated, the constraint is pushed onto the stack. When the constraint is finished executing, it is popped off the stack. The constraint that requests a variable's value is always the topmost constraint on the stack. So a variable can establish a dependency to the appropriate constraint by simply looking at the top of the stack.

    Here is the algorithm for doing that (the arrows denote the statements that have been added to the new Get and Change routines):

    	Get(v) {
    ->	   if constraint_stack != empty then
    ->		v.dependencies.insert(constraint_stack.top())
    	   if v.out_of_date = true then
    	       v.out_of_date = false /* essential for cycles */
    	       for each cn in v.constraints
    	           if cn.out_of_date = true then
    		      cn.out_of_date = false
    ->		      constraint_stack.push(cn)
    		      v.value = cn.eval()
    ->		      constraint_stack.pop()
    	   return v.value

    	Change(v, new_value) {
    ->	  if constraint_stack != empty then
    ->	      cn = constraint_stack.top()
    ->	      if (v ∉ cn.outputs)
    ->	          cn.outputs.insert(v)
    	          // if we are adding the variable to the constraint's list
    	          // of outputs, then we need to mark all variables and
    	          // constraints that depend on v as out-of-date. If v
    	          // was a previously known output of the constraint, then
    	          // we previously marked the variables and constraints that
    	          // depended on v as out-of-date and we do not need to do
    	          // so again
    	          if v.value != new_value then
    ->	              v.value = new_value
    ->	              Mark(v)
    	      else if v.value != new_value then
    	          v.value = new_value
    	  else if v.value != new_value then
    	      v.value = new_value
    	      Mark(v)
    	  v.out_of_date = false

    A number of things should be noted about this algorithm. First, the constraint_stack is a global variable that is initialized at the start of the program. We need to check whether the constraint_stack is empty because the variable may be requested by the application rather than a constraint. In this case, no dependency should be created.

    Second, the code that inserts a constraint into the dependency list should check to see that the constraint is not already on the dependency list.

    Third, note that we have not changed the mark phase in any fashion. If we were trying to be really sophisticated, we would remove any dependency edges that are no longer needed (e.g., if a conditional executes a different branch, the constraint no longer depends on the variables in the previously executed branch). This might be done by modifying the mark phase or the sweep phase. However, in the first GUI project I was associated with, Garnet, we did not remove out-of-date dependency edges and they never caused us any trouble.

    Of course, when a constraint is deleted, something needs to be done to remove the dependency edges that point to the constraint. Since Java has garbage collection, the easiest thing to do is probably "lazy" dependency removal. We would add a flag to a constraint that indicates whether the constraint has been deleted. The mark phase would then check this flag before invalidating a constraint. If the flag is true, the mark phase would remove the constraint from the variable's dependency list. Eventually the constraint will be removed from all dependency lists and it will be garbage collected.

    If we want to speed up the garbage collection process or if we are implementing the constraint solver in a non-garbage collected language like C++, the constraint will need to keep a list of backpointers to the variables that it depends on. When a constraint is deleted, it can then traverse the backpointers and remove itself from the appropriate variable dependency lists. A backpointer would be created at the same time that a dependency is created.

    Finally, note that in addition to automatically constructing dependencies, the algorithm will gracefully handle changes to pointer variables. When a pointer variable changes, the algorithm will automatically construct a dependency for the new variable to which the pointer variable points.
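    A runnable sketch of the automatic dependency construction (hypothetical classes; the stack discipline follows the algorithm above) shows both edge discovery and the pointer-variable case:

```python
# Automatic dependency construction with a global constraint stack.
constraint_stack = []

class Variable:
    def __init__(self, value=None):
        self.value = value
        self.out_of_date = False
        self.dependencies = []       # built automatically by get()
        self.constraints = []        # constraints computing this variable

class Constraint:
    def __init__(self, fn, output):
        self.fn = fn                 # note: no input edges are declared
        self.out_of_date = True
        self.output = output
        output.constraints.append(self)
        output.out_of_date = True

def get(v):
    if constraint_stack:             # requested by a constraint, not the app
        cn = constraint_stack[-1]
        if cn not in v.dependencies: # avoid duplicate edges
            v.dependencies.append(cn)
    if v.out_of_date:
        v.out_of_date = False        # essential for cycles
        for cn in v.constraints:
            if cn.out_of_date:
                cn.out_of_date = False
                constraint_stack.append(cn)
                v.value = cn.fn()
                constraint_stack.pop()
    return v.value

# ptr is a pointer variable selecting which input the constraint reads.
x, y = Variable(3), Variable(4)
ptr = Variable(x)
out = Variable()
cn_double = Constraint(lambda: 2 * get(get(ptr)), out)
print(get(out))                      # 6: edges ptr -> cn and x -> cn discovered
print(cn_double in x.dependencies, cn_double in y.dependencies)  # True False
```

If ptr were later changed to point at y and the graph re-marked, the next evaluation would discover the y edge in exactly the same way, with no parser and no user declarations.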

    Deleting Constraints

    When a constraint is deleted, it needs to remove the edges that point to it in the dataflow graph. Stated another way, the constraint needs to remove itself from the dependency lists of its parameters. There are two techniques that can be used to delete a constraint--eager removal of edges using backpointers and lazy removal of edges using a deletion flag.

    Eager Dependency Removal

    Eager dependency removal requires that each constraint keep backpointers to its parameters. When the constraint is deleted, it can simply go through its backpointers and remove itself from each parameter's dependency list. The backpointers can be established at the same time as the dependencies are established.

    To implement eager removal, we need to add one field to the constraint data structure called parameters. We can modify the get method to construct parameters as follows (the arrows denote added or modified code):

    	Get(v) {
    	   if constraint_stack != empty then
    ->	        cn = constraint_stack.top()
    ->		v.dependencies.insert(cn)
    ->		cn.parameters.insert(v)
    	   if v.out_of_date = true then
    	       v.out_of_date = false /* essential for cycles */
    	       for each cn in v.constraints
    	           if cn.out_of_date = true then
    		      cn.out_of_date = false
    		      constraint_stack.push(cn)
    		      v.value = cn.eval()
    		      constraint_stack.pop()
    	   return v.value

    The deletion procedure can be written as follows:

    	Delete(cn) {
    	   for each v in cn.parameters
    	       v.dependencies.remove(cn)
    	   destroy cn
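    A minimal sketch of eager removal follows; the names are illustrative, and `record_dependency` stands in for the bookkeeping the Get routine performs when it creates a dependency:

```python
# Eager dependency removal via backpointers.
class Variable:
    def __init__(self):
        self.dependencies = []       # constraints that read this variable

class Constraint:
    def __init__(self):
        self.parameters = []         # backpointers to input variables

def record_dependency(v, cn):
    # The forward edge and the backpointer are created at the same time.
    if cn not in v.dependencies:
        v.dependencies.append(cn)
        cn.parameters.append(v)

def delete_constraint(cn):
    for v in cn.parameters:
        v.dependencies.remove(cn)    # remove cn from each parameter's list
    cn.parameters.clear()            # cn can now be reclaimed

a, b = Variable(), Variable()
cn = Constraint()
record_dependency(a, cn)
record_dependency(b, cn)
delete_constraint(cn)
print(a.dependencies, b.dependencies)   # [] []
```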

    Lazy Dependency Removal

    Lazy dependency removal requires that a constraint maintain a deletion flag that indicates whether the constraint has been deleted. Instead of removing the dependencies right away, the constraint solver waits until the mark phase wants to traverse a dependency in order to remove it. When the mark phase is about to visit a constraint, it first checks the constraint's deletion flag. If the flag is true, then the mark phase removes the constraint from the dependency list of the variable being visited.

    To implement lazy removal, we need to add a flag to a constraint's data structure called deleted. The deletion procedure can then be written as follows:

    	Delete(cn) {
    	   cn.deleted = true

    Notice that we do not destroy the constraint since there are still pointers to it on various variables' dependency lists. Once the constraint has been removed from all the dependency lists, it can be reclaimed, presumably by a garbage collector.

    The mark phase needs to be modified as follows (the arrows denote modified or added code):

    	Mark(v) {
    	    v.out_of_date = true
    	    for each cn in v.dependencies do
    ->	        if (cn.deleted == true) then
    ->		    v.dependencies.remove(cn)
    ->	        else if (cn.out_of_date == false) then
    		    cn.out_of_date = true
    		    if (cn.output.out_of_date == false) then
    		        Mark(cn.output)
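    The flagged-deletion scheme can be sketched as follows (illustrative names; note the copy of the dependency list, since we remove edges while iterating over it):

```python
# Lazy dependency removal: deletion sets a flag, and the mark phase
# unlinks flagged constraints when it encounters them.
class Variable:
    def __init__(self):
        self.out_of_date = False
        self.dependencies = []

class Constraint:
    def __init__(self, output):
        self.out_of_date = False
        self.deleted = False
        self.output = output

def mark(v):
    v.out_of_date = True
    for cn in list(v.dependencies):      # copy: we may remove while iterating
        if cn.deleted:
            v.dependencies.remove(cn)    # lazy unlink during the mark phase
        elif not cn.out_of_date:
            cn.out_of_date = True
            if not cn.output.out_of_date:
                mark(cn.output)

a, out = Variable(), Variable()
cn = Constraint(out)
a.dependencies.append(cn)
cn.deleted = True                        # "delete" the constraint: just a flag
mark(a)
print(cn in a.dependencies, out.out_of_date)   # False False
```

The deleted constraint is unlinked as a side effect of the next mark phase, and its output is not invalidated.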

    Comparison of Eager and Lazy Removal

    Eager and lazy removal both have their advantages. The advantage of eager removal is that the constraint can be immediately reclaimed and it works in non-garbage collected systems. The disadvantage is that each constraint needs to allocate space to hold the parameters list. Since the out-of-date flag and output field only occupy 8 bytes, the parameters list will almost certainly double the constraint's storage.

    Lazy removal has the advantage that it uses less space. The deletion flag comes for free since we already need to allocate 4 bytes for the out-of-date flag. The constraint cannot be immediately reclaimed, but unless constraints are being continuously destroyed, it is unlikely that the storage consumed by lazily removed constraints will outweigh the space required by eager removal's parameters lists. Lazy removal has the disadvantages that it only works in a garbage-collected language and that it requires an extra check in the mark phase. However, the cost of the extra check should be minimal.