CS302 Lecture Notes - Breadth First Search
- James S. Plank
- November 2, 2009.
- Latest Modification: Mon Mar 28 12:39:55 EDT 2022
- Directory: /home/jplank/cs302/Notes/BFS
Reference Material Online
Topcoder Practice Problems
- BFS: The Topcoder "CarrotJumping" problem (SRM 478, D1, 250). I go over this one in class.
- BFS: The Topcoder "EmoticonsDiv1" problem (SRM 612, D1, 250). This link has an explanation of the BFS and commented code. This problem is similar in flavor to "CarrotJumping," because you build the graph as you go.
- BFS: The Topcoder "StepsConstruct" problem (SRM 707, D2, 500). I give you hints and programming tips. There is a commented solution at the end.
- BFS: The Topcoder "OneRegister" problem (SRM 486, D1, 250). I give you hints here, and not code.
- BFS: The Topcoder "CsCourses" problem (SRM 340, D1, 500). I do include a link to my code, but this is more useful if you write the code yourself.
- BFS: CollectingRiders (SRM 382, D1, 250). Hints and no code.
- BFS: FromToDivisible (SRM 699, D1, 500). Hints and no code.
- BFS: Leetcode "Word-Ladder-I". Hints and no code. The BFS is straightforward. Here the challenge is creating your adjacency lists quickly.
- Dijkstra: ColorfulRoad (SRM 596, D2, 500). Hints and no code.
- Dijkstra: The Topcoder "ThreeTeleports" problem (SRM 519, D2, 600). I give you hints here, and not code.
- Dijkstra: InsertSort (SRM 351, D2, 1000-pointer). Hints and no code.
BFS: Breadth First Search
Breadth First Search (BFS) is complementary to Depth First Search (DFS).
DFS works by visiting a node and then recursively visiting children.
You can view it as relying on a stack -- push a node onto a stack,
then go through the following algorithm:
- Pop a node off the stack.
- Do some processing on the node.
- Push all of the node's children onto the stack in reverse order.
- Repeat until the stack is empty.
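Here's a minimal sketch of that stack-based traversal in C++. This is my own sketch, not
the class code; it assumes the graph is stored as adjacency lists in a
vector <vector <int> >, and "processing" a node just prints it:

#include <cstdio>
#include <vector>
#include <stack>
using namespace std;

/* Visit every node reachable from "start", using an explicit stack instead
   of recursion.  adj[i] holds the adjacency list of node i. */
void DFS_Stack(const vector < vector <int> > &adj, int start)
{
  vector <bool> visited(adj.size(), false);
  stack <int> s;
  int n, i;

  s.push(start);
  while (!s.empty()) {
    n = s.top();
    s.pop();
    if (visited[n]) continue;                     /* Already visited -- skip it. */
    visited[n] = true;
    printf("%d\n", n);                            /* "Do some processing." */
    for (i = (int) adj[n].size()-1; i >= 0; i--) {  /* Push unvisited children in reverse order. */
      if (!visited[adj[n][i]]) s.push(adj[n][i]);
    }
  }
}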
In fact, it will be useful to revisit DFS with this view. Let's use the following
graph as an example:
Adjacency Lists:
0: 1, 3
1: 0, 2, 3
2: 1, 5
3: 0, 1, 4
4: 3, 5, 6
5: 4, 6, 7
6: 4, 5
7: 5
A recursive visiting of all nodes using DFS, starting with node zero, marks each node as
visited and then recursively visits its unvisited children.
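Here's a minimal recursive sketch (again my own, using the same headers and
vector <vector <int> > adjacency lists as the stack sketch above, with a visited vector
passed in):

/* Recursive DFS: process node n, then recursively visit its unvisited children. */
void DFS(const vector < vector <int> > &adj, vector <bool> &visited, int n)
{
  size_t i;

  visited[n] = true;
  printf("%d\n", n);
  for (i = 0; i < adj[n].size(); i++) {
    if (!visited[adj[n][i]]) DFS(adj, visited, adj[n][i]);
  }
}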
Were we to print out the nodes, they would be printed out in the order in which they are visited:
0, 1, 2, 5, 4, 3, 6, 7
Instead of recursion, let's use a stack. We'll push 0 onto the stack, then repeatedly
pop a node off the stack, visit the node, then push the non-visited children onto the stack
in reverse order.
Visited?
Node Print 01234567 Action Stack (Push-back and pop-back)
-------- Push 0 0
0 0 x------- Push 3, 1 3, 1
1 1 xx------ Push 3, 2 3, 3, 2
2 2 xxx----- Push 5 3, 3, 5
5 5 xxx--x-- Push 7, 6, 4 3, 3, 7, 6, 4
4 4 xxx-xx-- Push 6, 3 3, 3, 7, 6, 6, 3
3 3 xxxxxx-- No pushing 3, 3, 7, 6, 6
6 6 xxxxxxx- No pushing 3, 3, 7, 6
6 xxxxxxx- Visited 3, 3, 7
7 7 xxxxxxxx No pushing 3, 3
3 xxxxxxxx Visited 3
3 xxxxxxxx Visited
Done
As you see, the order of the nodes is the same as in the recursive case.
Now, breadth-first search works in the same manner, only we use a queue instead of a stack,
and we push the children onto the queue in their proper order.
Additionally, instead of a "visited" field, we are going to keep two pieces of data with
each node:
- The node's distance from node zero. Whenever a node is pushed onto the queue,
you set its distance to be the distance of the node that pushed it, plus one.
- The node's "back-link". This is the node that pushed it.
You can use the distance as a kind of "visited" field. If it is set,
then the node is either on the queue, or it has been processed already,
so you don't push the node in that case.
Here's how breadth-first search works on the graph above.
Distances Back Links Queue
Node Print Action 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 (Push-back and pop-front)
Push 0 0 - - - - - - - - - - - - - - - 0
0 0 Push 1, 3 0 1 - 1 - - - - - 0 - 0 - - - - 1, 3
1 1 Push 2 0 1 2 1 - - - - - 0 1 0 - - - - 3, 2
3 3 Push 4 0 1 2 1 2 - - - - 0 1 0 3 - - - 2, 4
2 2 Push 5 0 1 2 1 2 3 - - - 0 1 0 3 2 - - 4, 5
4 4 Push 6 0 1 2 1 2 3 3 - - 0 1 0 3 2 4 - 5, 6
5 5 Push 7 0 1 2 1 2 3 3 4 - 0 1 0 3 2 4 5 6, 7
6 6 Nothing 0 1 2 1 2 3 3 4 - 0 1 0 3 2 4 5 7
7 7 Nothing 0 1 2 1 2 3 3 4 - 0 1 0 3 2 4 5
The order of the nodes is now 0, 1, 3, 2, 4, 5, 6, 7.
BFS still visits all nodes and edges, but it does so in order of distance from the starting node.
Think about it.
- After node 0 itself, the first two nodes visited are those that are one edge from node 0: nodes 1 and 3.
- Next are the nodes that are two edges away: nodes 2 and 4.
- Next are the nodes that are three edges away: nodes 5 and 6.
- Finally comes the node that is four edges away: node 7.
These distances are conveniently stored in the "Distance" value of each node. And if
you want to find the shortest path from 0 to a node, you can do so by traversing the
back-links of the node to node zero.
Let's rehash the algorithm for BFS:
- For all nodes, set their back-links to NULL and their distances to -1.
- Set node 0's distance to zero and put it on the queue.
- Repeat the following:
- Remove a node n from the queue.
- For each edge e from n to n2 such that n2's distance is -1:
- Set n2's distance to n's distance plus one.
- Set n2's back-link to the edge e (or to n, as described above).
- Append n2 to the queue.
When the algorithm terminates, each node contains its shortest distance to node zero, and the
path to node zero can be obtained by traversing the back-links.
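Here's a minimal sketch of that algorithm in C++ (my own sketch, not the lab code). The graph
is adjacency lists in a vector <vector <int> >, and -1 stands in for "no back-link":

#include <cstdio>
#include <vector>
#include <deque>
using namespace std;

/* BFS from node "start".  On return, dist[i] is the number of edges on the
   shortest path from start to i (-1 if unreachable), and back[i] is the node
   that pushed i onto the queue (-1 for the start node / unreachable nodes). */
void BFS(const vector < vector <int> > &adj, int start,
         vector <int> &dist, vector <int> &back)
{
  deque <int> q;
  int n, n2;
  size_t i;

  dist.assign(adj.size(), -1);
  back.assign(adj.size(), -1);
  dist[start] = 0;
  q.push_back(start);

  while (!q.empty()) {
    n = q.front();
    q.pop_front();
    for (i = 0; i < adj[n].size(); i++) {
      n2 = adj[n][i];
      if (dist[n2] == -1) {             /* Not on the queue and not processed yet. */
        dist[n2] = dist[n] + 1;
        back[n2] = n;
        q.push_back(n2);
      }
    }
  }
}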
For example, if we traverse the back-links from node 7 to node zero, they go 7-5-2-1-0, so
the shortest path from node 0 to node 7 is 0-1-2-5-7.
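Using the dist and back vectors from the sketch above, recovering that path is a simple loop
(Print_Path() is a hypothetical helper, not something from the notes or lab):

/* Print the shortest path from the BFS start node to node n,
   by walking the back-links and then reversing. */
void Print_Path(const vector <int> &back, int n)
{
  vector <int> path;
  int i;

  for (i = n; i != -1; i = back[i]) path.push_back(i);   /* E.g. 7, 5, 2, 1, 0. */
  for (i = (int) path.size()-1; i >= 0; i--) {
    printf("%d%s", path[i], (i == 0) ? "\n" : " ");
  }
}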
Dijkstra's Algorithm
Dijkstra's algorithm is a simple modification to breadth first search. It is used to find the
shortest path from a given node to all other nodes, where the edges have lengths, which must
be non-negative. I will use the terms "length" and "weight" interchangeably here.
The modification uses a multimap instead of the queue. The multimap uses the distance
from the starting node to the node as its key, and the node itself as its value.
The algorithm is as follows:
- For all nodes, set their back-links to NULL, their distances to -1, and their "visited"
field to be false.
- Set node 0's distance to zero and put it on the multimap.
- Repeat the following:
- Remove a node n from the front of the multimap and set its visited field to true.
- For each edge e from n to n2 such that n2 has not been visited:
Let d be n's distance plus the weight of edge e.
If n2's distance is -1, or if d is less than node n2's current distance:
- If n2 was in the multimap, remove it. [* We're going to revisit this below. *]
- Set n2's distance to d.
- Set n2's back-link to e.
- Insert n2 into the multimap, keyed on distance.
When the algorithm terminates, all the nodes will contain their shortest distance to node 0,
and their back-links will define the shortest paths.
[* Revisiting Here *]: You actually have a choice of whether or not to remove a node from the
multimap. It is often easier to code up Dijkstra's algorithm to leave nodes on the multimap
rather than remove them. In that case, when a node reaches the front of the multimap for
you to process, you need to check its distance versus its key in the multimap. If they
differ, you simply ignore the node, because you have processed it already. The tradeoff
is memory and potentially performance vs coding complexity. When you code, it is much
easier to leave the node on the multimap. However, if you end up replacing a lot of nodes
on the multimap, it can make performance and memory consumption suffer. Ideally, it is
better to remove the node before you re-insert it. To do that properly, you need to store
an iterator to the node's place in the multimap in the node's class definition. Think
about that, especially if you decide to remove the node in your own implementation. BTW,
I do advocate that you try removing the node in your lab. Not only is the code better,
but it forces you to think about data structure design.
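Here's a minimal sketch of the "leave it on the multimap" version in C++ (my own sketch, not
the lab code). The graph is stored as adjacency lists of (neighbor, weight) pairs, and stale
multimap entries are simply skipped when they reach the front:

#include <cstdio>
#include <vector>
#include <map>
#include <utility>
using namespace std;

/* Dijkstra's algorithm from node "start".  adj[i] holds (neighbor, weight)
   pairs.  On return, dist[i] is the shortest weighted distance from start
   to i (-1 if unreachable), and back[i] is the previous node on that path. */
void Dijkstra(const vector < vector < pair <int, int> > > &adj, int start,
              vector <int> &dist, vector <int> &back)
{
  multimap <int, int> mm;                   /* Key = distance, val = node. */
  multimap <int, int>::iterator mit;
  size_t i;
  int n, n2, d;

  dist.assign(adj.size(), -1);
  back.assign(adj.size(), -1);
  dist[start] = 0;
  mm.insert(make_pair(0, start));

  while (!mm.empty()) {
    mit = mm.begin();                       /* Smallest key = closest unprocessed node. */
    d = mit->first;
    n = mit->second;
    mm.erase(mit);
    if (d != dist[n]) continue;             /* Stale entry -- n was already processed. */
    for (i = 0; i < adj[n].size(); i++) {
      n2 = adj[n][i].first;
      d = dist[n] + adj[n][i].second;
      if (dist[n2] == -1 || d < dist[n2]) {
        dist[n2] = d;                       /* Found a better path to n2. */
        back[n2] = n;
        mm.insert(make_pair(d, n2));        /* Re-insert; any old entry for n2 stays
                                               on the multimap as a stale entry. */
      }
    }
  }
}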
As an example, suppose we enrich the graph above with edge weights:
The following shows how Dijkstra's algorithm runs on the graph:
Distances Back Links Multimap
Node Action 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 (key=distance,val=node)
Add[0,0] 0 - - - - - - - -1 - - - - - - - [0,0]
0 Add[9,1] 0 9 - - - - - - -1 0 - - - - - - [9,1]
Add[12,3] 0 9 - 12 - - - - -1 0 - 0 - - - - [9,1] [12,3]
1 Add[11,2] 0 9 11 12 - - - - -1 0 1 0 - - - - [11,2][12,3]
Del[12,3],Add[10,3] 0 9 11 10 - - - - -1 0 1 1 - - - - [10,3][11,2]
3 Add[20,4] 0 9 11 10 20 - - - -1 0 1 1 3 - - - [11,2][20,4]
2 Add[16,5] 0 9 11 10 20 16 - - -1 0 1 1 3 2 - - [16,5][20,4]
5 Del[20,4],Add[18,4] 0 9 11 10 18 16 - - -1 0 1 1 5 2 - - [18,4]
Add[24,6] 0 9 11 10 18 16 24 - -1 0 1 1 5 2 5 - [18,4][24,6]
Add[31,7] 0 9 11 10 18 16 24 31 -1 0 1 1 5 2 5 5 [18,4][24,6][31,7]
4 Do nothing 0 9 11 10 18 16 24 31 -1 0 1 1 5 2 5 5 [24,6][31,7]
6 Do nothing 0 9 11 10 18 16 24 31 -1 0 1 1 5 2 5 5 [31,7]
7 Do nothing 0 9 11 10 18 16 24 31 -1 0 1 1 5 2 5 5
As with the BFS run above, you can use the back-links to find the shortest paths. For example,
the shortest path from node 0 to node 6 has a path length of 24, and contains the following
nodes in reverse order: 6-5-2-1-0. The following shows the path in forward order with
the weights below the edges:
0 ----- 1 ----- 2 ----- 5 ----- 6
    9       2       5       8
Running Times
The running time of BFS (and therefore the unweighted shortest path problem) is O(|V| + |E|).
As with DFS, it visits each node and edge once.
The running time of Dijkstra's algorithm (and therefore the weighted shortest path problem)
is a little more complex: O(|V| + |E|log(|V|)).
This is because Dijkstra's algorithm visits each node and edge once,
and at each edge, it potentially inserts a node into the multimap, which is an
O(log(|V|)) operation.
Memorize those running times!