  1. Problem: How do we compare the running times of different solutions to the same problem?
    1. Experimental solution: try both on a few sample inputs and see how they do
      1. Problem: You cannot test an exhaustive set of inputs
      2. Problem: They may run at different speeds on different machines
    2. Mathematical solution: try to develop equations to describe the running times of different solutions (T(n))
      1. Problems
        1. Need to think about the size of the input (what is n)
        2. What do we do about the fact that the same instruction requires different amounts of time to execute on different machines?
        3. What do we do about the fact that some instructions (e.g., division) require more time than other instructions (e.g., addition)?
        4. What do we do about the fact that it is hard to count every instruction in a program?
          1. Example: for (i = 0; i < n; i++) -- it is hard to count exactly how many times each of the three parts (initialization, test, increment) executes (it's 1, n+1, and n times, respectively)
      2. Solutions
        1. Ultimately the running time of an algorithm is determined by the term that dominates the equation as n grows large (i.e., the costliest term in the limit)
        2. In the limit constants are unimportant, so we can ignore them
        3. Since constants are unimportant, we can assume that all atomic instructions take the same amount of time
        4. Since constants are unimportant, we don't lose sleep over the fact that we can't count each instruction. We simply try to get a rough estimate.
        5. n is usually related to the input size, such as the number of lines in a file, or the number of words in a file
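        For example, for the loop header for (i = 0; i < n; i++) above, the three parts execute 1, n+1, and n times, so a rough count of the header's cost is

          T(n) = 1 + (n+1) + n = 2n + 2

        In the limit the 2n term dominates, and once constants are dropped the header contributes O(n).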
  2. Counting Rules
    1. Costs of sequential statements are summed
    2. Costs of conditional statements are determined by taking the maximum cost of the various branches
    3. Costs of loops are determined by multiplying the cost of one execution of the loop body by the number of iterations
    4. Declarations are ignored
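    As a quick illustration, here is a minimal hypothetical fragment (the function name and body are made up for this sketch) annotated with the rule that applies to each part:

        /* Hypothetical fragment illustrating the counting rules. */
        int counting_demo(int n) {
          int x = 0;                  /* rule 4: declarations are ignored            */
          int i;
          if (n % 2 == 0)             /* rule 2: conditional costs max(branch costs) */
            x = n * n;                /* this branch is constant cost                */
          else
            x = n;                    /* so is this one, so the if is O(1)           */
          for (i = 0; i < n; i++)     /* rule 3: n iterations * O(1) body = O(n)     */
            x = x + i;
          return x;                   /* rule 1: sum the parts: O(1) + O(n) = O(n)   */
        }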
  3. Examples
    1. Linear Search
           /* Return the index of key in array, or -1 if key is absent. */
           int linear_search(int array[], int size, int key) {
             int i;
             for (i = 0; i < size; i++) {   /* examine each element in turn */
               if (array[i] == key)
                 return i;
             }
             return -1;
           }
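       In the worst case (key is absent or in the last slot) the loop body runs size times, so the running time is roughly T(n) = c1 + c2*n for some constants c1 and c2, i.e., O(n) where n is the size of the array.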
      
    2. Binary Search
        
            /* Return the index of key in the sorted array, or -1 if key is absent. */
            int binary_search(int array[], int size, int key) {
              int mid, hi, low;
              low = 0;
              hi = size - 1;
              mid = (low + hi) / 2;
              while (low <= hi) {
                if (array[mid] == key)
                  return mid;
                else if (array[mid] < key) {  /* key can only be in the upper half */
                  low = mid + 1;
                  mid = (low + hi) / 2;
                }
                else {                        /* key can only be in the lower half */
                  hi = mid - 1;
                  mid = (low + hi) / 2;
                }
              }
              return -1;
            }
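       Each iteration either finds key or throws away half of the remaining range, so after k iterations about n / 2^k elements remain. The loop stops once n / 2^k < 1, i.e., after roughly log2(n) iterations, so the worst case is O(log n).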
      
    3. Selection Sort
            /* Sort array into ascending order using selection sort. */
            void selection_sort(int array[], int size) {
              int i, j;
              int min;
              int temp;
              for (i = 0; i < size - 1; i++) {
                min = i;                       /* index of the smallest element seen so far */
                for (j = i + 1; j < size; j++) {
                  if (array[j] < array[min]) {
                    min = j;
                  }
                }
                if (i != min) {                /* swap the smallest element into position i */
                  temp = array[i];
                  array[i] = array[min];
                  array[min] = temp;
                }
              }
            }
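       The inner loop runs size-1 times on the first pass, size-2 on the second, and so on, so the total number of comparisons is (n-1) + (n-2) + ... + 1 = n(n-1)/2, which is O(n^2).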
      
  4. Different Types of Analysis
    1. Best-Case: Too optimistic; most algorithms perform roughly the same on their best-case input
    2. Average-Case: Often too hard to compute because it is unclear what the "average" case is
    3. Worst-Case: Gives us an upper bound on how badly the algorithm can perform
  5. Big-O notation
    1. Practical: Big-O abstracts away constants and lower order terms, leaving you with just the costliest term in an equation
    2. Mathematical: We will say that one function f(n) grows at least as fast as another g(n) if there are a constant c > 0 and a value x so that for all i >= x:

      c * f(i) >= g(i)

      Put graphically, it means that after a certain point on the x axis, as we go right, the curve for c * f(n) will always be at or above the curve for g(n).

      1. If the above condition holds, then we say that g(n) is O(f(n))
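      For example, 5n + 3 is O(n): take c = 6 and x = 3; then for all i >= 3, 6i >= 5i + 3.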
    3. Some Big-O Properties
      1. Constant times are expressed as O(1): O(c) = O(1)
      2. Multiplicative constants are omitted: O(cT) = cO(T) = O(T)
      3. Addition is performed by taking the maximum
         
        O(T1) + O(T2) = O(T1 + T2) = max(O(T1), O(T2))
        
      4. Multiplication is not changed but often is rewritten more compactly:
        O(T1)O(T2) = O(T1T2)
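
        For example, applying these properties to T(n) = 3n^2 + 5n + 7:

        O(3n^2 + 5n + 7) = max(O(3n^2), O(5n), O(7)) = O(3n^2) = O(n^2)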
                 
  6. Common Cases
    1. O(1) -- fetching an element from an array. Example: return the ith element.
    2. O(log n) -- splitting a set of data in half with one operation and throwing away one of the two halves. Example: binary search.
    3. O(n) -- traversing a set of data once. Example: linear search.
    4. O(n log n) -- splitting a set of data in half repeatedly and traversing each half. Example: quicksort.
    5. O(n^2) -- traversing a set of data once for each member, or traversing all the rows in a two-dimensional array and, for each row, all of its columns. Examples: selection sort; negating all elements of a pgm file.
    6. O(2^n) -- generating all possible subsets of a set of data. Example: the knapsack problem, where you try to divide a set of weights between two knapsacks so that they have the minimal possible difference in weight; the best known algorithm requires you to take all subsets of the weights and, for each subset, put it in knapsack 1 and the remainder in knapsack 2.
    7. O(n!) -- generating all possible permutations of a set of data. Example: the traveling salesman problem, where the salesman must visit each city once and you want to find the cheapest total fare; one algorithm is to generate all permutations of the cities and test each permutation for the cheapest fare.
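    As a sketch of the two-dimensional O(n^2)-style case, here is a minimal hypothetical routine that negates a gray image stored as a rows-by-COLS array (the function name, the COLS constant, and the maxval parameter are illustrative, not from the notes):

        #define COLS 640                 /* assumed fixed image width for this sketch */

        /* Negate every gray value of a rows-by-COLS image.  The outer loop runs  */
        /* rows times and the inner loop COLS times per row, so the total work is */
        /* O(rows * COLS).                                                        */
        void negate(int image[][COLS], int rows, int maxval) {
          int r, c;
          for (r = 0; r < rows; r++) {
            for (c = 0; c < COLS; c++) {
              image[r][c] = maxval - image[r][c];
            }
          }
        }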

  7. Graphical Comparison of Growth Rates of Different Big-O Functions
    [Figure omitted: growth-rate curves for the common Big-O functions, plus a zoomed-in version that better shows the difference between some of the cheaper functions.]