# CS140 Lecture notes -- More Big O

• Jim Plank with modifications by Brad Vander Zanden
• Directory: ~cs140/www-home/notes/MoreBigO
• Lecture notes: http://www.cs.utk.edu/~cs140/notes/MoreBigO
• Tue Nov 3 11:05:32 EST 1998

## More Big O

The goal with this big-O stuff is to be able to classify the running times of programs and procedures. For example, take the following code example, from section 2.4.1 of the book:
```
int sum(int n)
{
  int i, partial_sum;

  partial_sum = 0;                          /* Line 1 */
  for (i = 1; i <= n; i++) {                /* Line 2 */
    partial_sum += i*i*i;                   /* Line 3 */
  }
  return partial_sum;                       /* Line 4 */
}
```
This returns the sum from i = 1 to n of i cubed. Suppose we count the number of statements that are executed in this procedure. You have one for line 1 and one for line 4. The for loop on line 2 can be more easily counted if you re-write it as:
```
i = 1;
while (i <= n) {
  ...
  i++;
}
```
You should now be able to see that its statements are executed 2n+2 times:
• One for the i = 1.
• The i <= n is executed once for each value of i from 1 to n+1 (for n+1 it evaluates to false). That makes n+1 executions of the statement.
• The i++ is executed once for each time the body of the loop is executed. This is n times.
Thus, the sum is 1+(n+1)+n = 2n+2 times. Finally, the body of the loop is executed n times, so line 3 is executed n times. The total sum of statements is 1+(2n+2)+n+1 = 3n+4.

In terms of big-O notation, this is O(n). Why? Informally, it is because n is the largest degree polynomial. Formally, it is because if we choose c=4, then we see that cn ≥ 3n+4 for all n ≥ 4.

Of course, it is also O(n*log(n)) and O(n*n), since n*n > n*log(n) > 3n+4 for sufficiently large n. However, to best characterize it, we choose the smallest function: it is O(n).

Given the definitions of last class, you'll also see that it is Omega(n), and therefore Theta(n) too. Why Omega(n)? Because if we choose c=1, then we see that cn < 3n+4. Since 3n+4 = O(n), and 3n+4 = Omega(n), 3n+4 = Theta(n) too.

Now, in line three, should we count each execution of that statement as one or more? Certainly executing ``partial_sum += i*i*i'' should be more expensive than executing ``partial_sum = 0.'' In fact, the book counts each execution of line three as four statements -- two multiplications, an addition, and an assignment. This is reasonable. However, in terms of big-O, it doesn't matter. Why? Well, suppose you do count it as 4 statements. Then line three contributes 4n statements to the running time, and the total running time is now 1+(2n+2)+4n+1 = 6n+4.

In terms of big-O, Omega and Theta, this changes nothing: 6n+4 = O(n) (choose c to be 7); 6n+4 = Omega(n) (choose c to be 1), and therefore 6n+4 = Theta(n) too.

The bottom line is that in terms of big-O, you can count statements as one, and not worry about their relative execution time, as long as it is O(1).

### General Rules

The book, in section 2.4.2, gives rules for counting the number of statements in a program or procedure:
• Rule 1: The running time of a for loop is at most the running time of the statements inside the for loop (including tests) times the number of iterations. Sometimes this can be tricky:
```
for (i = 0; i < n; i++) {
  j = sum(n);
}
```
Here the body of the loop is 3n+4 since sum requires 3n+4 statements. I could add an additional statement for the assignment to j, but I think that complicates matters, especially since it only adds a constant to the running time of the body. Since the loop iterates n times, the running time of the body is n * (3n + 4) = 3n² + 4n. As before, the for statement requires 2n+2 statements, so the total running time is 3n² + 4n + (2n + 2) = 3n² + 6n + 2, which is O(n²).

We also could have analyzed the above loop slightly differently. We could include the increment of i and the comparison i < n as part of each loop iteration. In this case the running time of each loop iteration would be 3n+6 and, since the loop iterates n times, the running time of the body becomes n * (3n + 6) = 3n² + 6n. The extra 2 statements come from the initial assignment of 0 to i and the final comparison of i with n that causes the loop to exit.

Now look at:

```
for (i = sum(0); sum(i) < sum(n); i++) {
  j = sum(n);
}
```
Now, the loop body is 3n+4 and the test is (3i+4)+(3n+4). Since i is always less than or equal to n, the loop test requires at most 6n+8 statements. The increment is one, and the loop iterates n times, so the total running time of the body, tests, and increments is at most n*(3n+4+6n+8+1) = 9n²+13n. We also have to count the initial execution of i = sum(0) and the final execution of sum(i) < sum(n). sum(0) executes 3(0) + 4 = 4 statements. The final execution of sum(i) < sum(n) occurs when i = n, so this comparison requires 2 * (3n + 4) = 6n + 8 statements. Summing it all up, we get a total running time of at most:
```
running time = 9n² + 13n + (4) + (6n + 8)
             = 9n² + 19n + 12
```
Again, this is O(n²).

• Rule 2: Analyze nested for loops inside out. The total running time of a statement inside a group of nested loops is the running time of the statement multiplied by the product of the sizes of all the for loops.

Look at:

```
for (i = sum(0); sum(i) < sum(n); i++) {
  for (k = 0; k < n; k++) {
    j = sum(n);
  }
}
```
Earlier we analyzed the inner loop and determined that it executes 3n²+6n+2 statements. The test in the outer loop requires at most 6n+8 statements and the outer loop iterates n times, so the running time of the outer loop is at most n(3n²+6n+2+6n+8+1) = 3n³+12n²+11n. Did we forget something? Yes! We forgot to include the execution time for the initial assignment in the outer loop, which is i = sum(0), and the final comparison, when i = n. We also did this analysis earlier and determined that the initial assignment requires 4 statements and the final comparison requires 6n+8 statements. Our final running time is therefore at most:
```
running time = 3n³+12n²+11n + 4 + (6n + 8)
             = 3n³+12n²+17n + 12
```
which is O(n³). It is easy to get sloppy counting statements, because more often than not, skipping the initial assignment, the final comparison, or the loop increment does not affect the running time in terms of big-O notation. In this course, however, I will expect you to count everything.

• Rule 3: Add the running time of consecutive statements to get the running time of the statements taken as a whole. For example, above we simply added the running times of lines 1-4 to get the running time of the procedure.

• Rule 4: The running time of an if-then-else statement is never more than the running time of the test plus the larger of the running time of the then clause and the running time of the else clause. For example, the running time of:
```
if (i == 3) {
  j = 4;
} else {
  j = sum(n);
}
```
is 1 plus the maximum of the running time of ``j = 4'' and ``j = sum(n)''. Taking the running time of sum(n) to be 3n+4, we get the running time of ``j = sum(n)'' to be 3n+4, and thus the running time of the if-then-else statement to be 3n+5. Of course, that is O(n), so we can just say it is O(n).

As you get better at this stuff, you learn to just quantify everything as big-O, and you use the following identities. (You might think about how you could prove these).

• If f(n) = O(g(n)) and h(n) = O(g(n)) too, then f(n)+h(n) = O(g(n)) as well. For example, if both f(n) and h(n) are O(n²), then f(n) + h(n) = O(n²).
• If f(n) = O(g(n)) and h(n) = O(y(n)) and g(n) = O(y(n)) (that is, y(n) grows at least as fast as g(n)), then f(n)+h(n) = O(y(n)). For example, if f(n) is O(n²) and h(n) is O(n³), then f(n) + h(n) = O(n³).
• If f(n) = O(g(n)) and h(n) = O(y(n)), then f(n)*h(n) = O(g(n)*y(n)). For example, if f(n) is O(n²) and h(n) is O(n³), then f(n) * h(n) = O(n⁵).
For example, in the sum() procedure, you see that lines 1, 3 and 4 are O(1). The for loop in line 2 iterates O(n) times. Therefore, by Rule 1 the running time of lines 2 and 3 is O(n*1) = O(n). Adding it all up, the running time of the whole program is O(1)+O(n)+O(1), which is O(n).

Suppose line 2 of the sum program was:

```
for (i = 0; i <= n; i++)
```
Does that change the running time of the program? No, because the for loop is still O(n). Similarly, if line 2 is
```
for (i = 0; i <= n/2; i++)
```
then it is still O(n).

If we change line 2 to be:

```
for (i = 1; i <= n; i *= 2)
```
then the loop iterates roughly log₂(n) times, since i doubles on every iteration. Now the running time of the program is O(1)+O(log(n))+O(1) = O(log(n)).

## Significance

Once again, quantifying the running time of the program in terms of big-O is important, because it allows you to classify your programs and the algorithms that it uses, independent of things like the speed of the machine that your program is running on. In general, you want to get it to be the lowest O(f(n)) that you can.

Since 4n+1 and 1000n+400000 are both O(n), does this mean that you shouldn't care whether your program's running time is one or the other? No. If you can get it to be 4n+1 rather than 1000n+400000, you should. However, it is more fundamental to make sure that your program's running time is O(n) instead of O(n*log(n)), or O(n*log(n)) instead of O(n*n), if that is possible.