CS461

Midterm Solutions


  1. For each question below, fill in the blank with the best term from the following list:
    1. lexical analyzer Part of the compiler that generates a stream of tokens

    2. flow dependency A type of dependency between two statements that occurs when the first one writes a variable and the second one reads the same variable.

    3. symbol table The data structure used by a compiler to hold information about names so that it can perform type checking, verify that the number of arguments to a function is correct, verify that the types of arguments are correct, etc.

    4. parser Part of the compiler that organizes tokens into syntactic constructs

    5. output dependency This type of dependence between two instructions is caused by each instruction writing to the same variable.

    6. recursive descent parser This type of parser is a predictive parser that must choose a production to expand a non-terminal as soon as it sees the first token in the string generated by that production.

    7. semantic analyzer Part of the compiler that extracts the "meaning" of a program and performs tasks such as type checking.

    8. predict set This type of set is used by a predictive parser to determine which production to expand a non-terminal by when it sees the next token in the program.

    9. registers The fastest type of memory available to the compiler.

    10. static link The name of the pointer used by a compiler to determine the value of a non-local, statically scoped variable at run-time.
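The read/write patterns behind the flow, anti, and output dependence definitions above can be illustrated at the source level. This is an invented C++ fragment, not part of the exam:

```cpp
#include <cassert>

// A hypothetical straight-line function annotated with the three
// dependence types from the list above.
int dependences(int a, int b) {
    int r = a;      // (1) writes r
    int s = r + b;  // (2) reads r: flow dependence 1 -> 2
    r = b * 2;      // (3) writes r after (2) read it: anti-dependence 2 -> 3;
                    //     (3) also writes r after (1) wrote it: output dependence 1 -> 3
    return s + r;
}
```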

  2. Answer true or false for the following questions

    1. false The number of CPU cycles that must be delayed while awaiting a load from main memory (not cache, but main memory) has been declining over time.

      The speed of CPUs has been increasing faster than the speed of memory, meaning that the number of CPU cycles that must be delayed because of a cache miss is increasing.

    2. true An anti-dependence may be eliminated by renaming registers.

    3. true The set of languages expressible by LL(1) grammars is a strict subset of the set of languages expressible by LR(1) grammars.

      An LR(1) grammar can be recognized by a bottom-up parser that can see the whole right-hand side of a production plus the next token, while an LL(1) grammar must be recognized by a top-down parser that can only see the next token before deciding which production to use to expand the current non-terminal. Hence LR(1) grammars are much more expressive than LL(1) grammars.

    4. true A grammar is LL(1) if, for each non-terminal in the grammar, the predict sets for the productions associated with that non-terminal do not intersect.

      If the predict sets do not intersect, then a predictive parser will always be able to decide which production to use to expand the current non-terminal by looking at the next token and choosing the production whose predict set contains that token.

    5. false Pointers make it easier to determine the flow, anti and output dependencies between instructions.

      Pointers set up aliases that make it difficult for a compiler to determine which variables may be affected by a read or a write, thus making it more difficult to determine dependencies between instructions.
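As a hypothetical C++ illustration of the aliasing problem (invented for this write-up, not part of the exam), the compiler cannot tell at compile time whether the statements below are dependent:

```cpp
#include <cassert>

// With pointers, the compiler cannot tell at compile time whether *p
// aliases x, so it cannot safely decide whether statement (2) must be
// ordered after statement (1).
int aliasing(int* p, int& x) {
    x = 10;    // (1) writes x
    *p = 20;   // (2) may or may not write x, depending on whether p == &x
    return x;  // flow dependence on (1) or (2)? Unknown without alias analysis.
}
```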

  3. Short answer. In two sentences or less, describe the functions of each of the following programs:

    1. compiler: Translate a file written in a high-level language into machine code and output an "export" table that lists the variables and functions that this file makes available to other files and an "import" table that lists the variables and functions that this file uses but which are declared in other files.

    2. linker: Resolve external variable and function references in each object file and replace their stubs with the addresses of these variables and functions. Then combine the object files into a single executable.

    3. loader: Assigns physical memory addresses to an executable.

  4. Consider the following grammar for pre-fix expressions:
        Exp      -> Atom | Opn
        Atom     -> number | id
        Opn      -> ( + Exp_List )
                 |  ( * Exp_List )
        Exp_List -> Exp_List Exp
                 |  Exp
    
    1. Write a leftmost derivation for the string (+ a 23 (* 16 44))
      Exp_List -> Exp
               -> Opn
               -> ( + Exp_List )
               -> ( + Exp_List Exp )
               -> ( + Exp_List Exp Exp )
               -> ( + Exp Exp Exp )
               -> ( + Atom(a) Exp Exp )
               -> ( + Atom(a) Atom(23) Exp )
               -> ( + Atom(a) Atom(23) Opn )
               -> ( + Atom(a) Atom(23) ( * Exp_List ) )
               -> ( + Atom(a) Atom(23) ( * Exp_List Exp ) )
               -> ( + Atom(a) Atom(23) ( * Exp Exp ) )
               -> ( + Atom(a) Atom(23) ( * Atom(16) Exp ) )
               -> ( + Atom(a) Atom(23) ( * Atom(16) Atom(44) ) )
      
    2. Draw a parse tree for the string of part (a).
                                 Exp_List
                                    |
                                   Exp
                                    |
                                   Opn
                                    |
                            ( +  Exp_List  )
                                    |
                         |----------------------|
                      Exp_List                 Exp
                      /       \                 |
                  Exp_List    Exp              Opn
                     |         |                |
                    Exp      Atom(23)  (  *  Exp_List  )
                     |                        /      \
                  Atom(a)                Exp_List    Exp
                                            |         |
                                           Exp     Atom(44)
                                            |
                                         Atom(16)
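A top-down parser for this grammar can be sketched in C++ (an invented illustration, not part of the exam solution). The left-recursive Exp_List production is rewritten as a loop, since a predictive parser cannot expand left recursion directly; the choice between Atom and Opn is made by looking at the next token, as in question 2.4:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Minimal recursive-descent recognizer for the prefix-expression grammar.
struct Parser {
    std::vector<std::string> toks;
    size_t pos;

    bool parseExp() {
        if (pos >= toks.size()) return false;
        const std::string& t = toks[pos];
        if (t == "(") return parseOpn();                     // Exp -> Opn
        if (t == ")" || t == "+" || t == "*") return false;  // not an Atom
        ++pos;                                               // Atom -> number | id
        return true;
    }

    bool parseOpn() {
        ++pos;  // consume "("
        if (pos >= toks.size() || (toks[pos] != "+" && toks[pos] != "*"))
            return false;
        ++pos;  // consume the operator
        if (!parseExp()) return false;      // Exp_List needs at least one Exp...
        while (pos < toks.size() && toks[pos] != ")")
            if (!parseExp()) return false;  // ...followed by zero or more
        if (pos >= toks.size()) return false;
        ++pos;  // consume ")"
        return true;
    }
};

bool accepts(const std::vector<std::string>& tokens) {
    Parser p;
    p.toks = tokens;
    p.pos = 0;
    return p.parseExp() && p.pos == p.toks.size();
}
```

For example, accepts({"(", "+", "a", "23", "(", "*", "16", "44", ")", ")"}) recognizes the string from part (a).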
      

  5. Suppose you have the following C++ code:
    class Menu {
      protected:
        string selectedItem;
        list *menuItems;
      public:
        string getSelectedItem() { return selectedItem; }
        Menu(list *items) { menuItems = items; }
        virtual void draw() = 0;
    };
    
    class PulldownMenu : public Menu {
      public:
        void draw() { 
           ...
           code that references selectedItem and menuItems
           ...
        }
    };
    
    Answer the following questions:

    1. Logically, is PulldownMenu lexically nested within Menu? Answer "yes" or "no".

      yes: If a method in PulldownMenu references a variable that is not declared in PulldownMenu, then the compiler will look to see if that variable is declared within Menu. Hence PulldownMenu is logically nested within Menu.

    2. Write down the list of scope ids that get created and which name creates them. I will start your list:
           0: predefined names
           1: global names
           2: Menu
      
           3: getSelectedItem
           4: Menu
           5: PulldownMenu
           6: draw
      
      
      The declaration of draw as virtual in Menu does not create a new scope because it is simply a declaration, and not a definition of a function.

    3. Based on the LeBlanc-Cook symbol table presented in class, draw a picture of what the symbol table records for the following names might look like:

      1. Menu
        -------------------------------------     --------------------------------
        | Name     Category    Scope   Type | --> | Name  Category    Scope Type |
        | Menu      type         1     ---- |     | Menu  constructor   2   ---- |
        -------------------------------------     --------------------------------
        
      2. selectedItem
        -------------------------------------------------
        | Name              Category    Scope   Type    |
        | selectedItem      inst. var.    2     string  |
        -------------------------------------------------
        
      3. draw in PulldownMenu
        -------------------------------------
        | Name     Category    Scope   Type |
        | draw     function      5     void |
        -------------------------------------
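The records drawn above could be represented by a struct along these lines. This is an invented sketch mirroring the Name/Category/Scope/Type columns; the field names are assumptions rather than the exact LeBlanc-Cook layout:

```cpp
#include <cassert>
#include <string>

// Hypothetical LeBlanc-Cook style symbol-table record. Records for the
// same name (e.g., the two "Menu" entries above) would be chained
// together in the table, one per scope.
struct SymbolRecord {
    std::string name;      // e.g., "draw"
    std::string category;  // e.g., "function", "inst. var.", "type"
    int scope;             // scope id from part 2, e.g., 5 for PulldownMenu
    std::string type;      // e.g., "void", "string"
};
```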
        

  6. Suppose you have the following assembly pseudo-code:
    (1) r1 = A
    (2) r2 = B
    (3) r2 = r1 + r2
    (4) C = r2
    (5) r1 = D
    (6) r2 = r1 + 5
    (7) E = r2
    
    Assume that loads and stores to memory have a one-cycle delay (i.e., require 2 CPU cycles) and that arithmetic can be done in a single cycle. Answer the following questions:

    1. For each CPU "cycle" shown below, write the number of the instruction that would execute during that cycle or write "delay" if there is a load/store delay. Complete the first row of cycles before starting the second row of cycles. I have intentionally provided more cycles than are required--leave the excess ones blank.

      (1) (2) delay (3) (4) (5) delay (6) (7) delay
      
    2. List one flow dependence between two instructions (e.g., 7->5)

      1->3, 2->3, 3->4, 5->6, 6->7

    3. List one anti-dependence between two instructions (e.g., 7->5)

      3->5, 3->6, 4->6

    4. List one output dependence between two instructions (e.g., 7->5)

      2->3, 3->6, 1->5

    5. Rewrite the above instructions in the same order, but use register renaming to remove the anti and output dependencies.
      (1) r1 = A
      (2) r2 = B
      (3) r2 = r1 + r2
      (4) C = r2
      (5) r3 = D
      (6) r4 = r3 + 5
      (7) E = r4
      

    6. Using your register renaming, reorder the instructions to produce a more efficient instruction schedule. Rather than writing out the instructions, just show me the new schedule below by filling in the CPU slots with either an instruction or the word "delay". You will not be able to eliminate all the delays (minimally there will be a delay after the last store instruction).
      (1) (2) (5) (3) (4) (6) (7) delay
      
      Rationale:
      1. (1): We start by loading A
      2. (2): While waiting for the load of A, we can start loading B
      3. (5): Instructions (2) and (3) have a flow dependency so we cannot execute (3) until B has finished loading, and (3) and (4) have a flow dependence, so we cannot start (4) either. However, we can start loading D (instruction 5), while waiting for B.
      4. (3): Now A and B are available so we can add them together
      5. (4): We have our choice of either storing r2 into C (instruction 4) or adding 5 to D (instruction 6) since the two instructions have no dependencies. Since the store will have a one cycle delay and the addition will have no delay, we start the store.
      6. (6): While waiting for the store to complete we add 5 to D
      7. (7): We store r4 into E and now have an unavoidable delay as our code is complete
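A source-level analogue of the renaming in part 5 can be sketched in C++ (an invented illustration; the variables stand in for registers). Reusing a name creates the anti- and output dependences that block reordering; giving each new value a fresh name leaves only the true (flow) dependences:

```cpp
#include <cassert>

// Before renaming: r1 and r2 are reused, so anti- and output
// dependences constrain the instruction order.
void beforeRenaming(int A, int B, int D, int& C, int& E) {
    int r1 = A;
    int r2 = B;
    r2 = r1 + r2;  // output dependence with the previous write of r2
    C = r2;
    r1 = D;        // anti/output dependences on r1 and r2 block moving this up
    r2 = r1 + 5;
    E = r2;
}

// After renaming: each value gets its own name, so the load of D may be
// scheduled earlier, as in the improved schedule above.
void afterRenaming(int A, int B, int D, int& C, int& E) {
    int r1 = A;
    int r2 = B;
    int r3 = r1 + r2;  // only flow dependences remain
    C = r3;
    int r4 = D;
    int r5 = r4 + 5;
    E = r5;
}
```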