CS560 Midterm Exam: March 15, 2006 - Answers
Question 1
- a. Interrupt vector: No. Sure, the OS has to define
the interrupt vector, but each entry simply points to the code in the
OS that handles various interrupts. That has nothing to do with
processes.
- b. Information about open files: Yes. Processes
have file descriptors, which are their handles to open files. The
OS has to maintain the meaning of the file descriptors when the
process is suspended.
- c. Register state: Yes. Obviously.
- d. Timer interrupt: No. The timer interrupt is
implemented in hardware and is a means of giving the OS access to
the CPU at regular intervals.
- e. Control status register: Yes/No. Now that
I think about it, either the CSR must be stored, or user programs
cannot be interrupted before branch operations. I'm going to
throw this one out, since you may not have remembered what the
control status register is.
- f. Console interrupt: No. This is like the
timer interrupt.
- g. Memory management information (e.g. main_memory pointer): Yes.
This is necessary to reload the process and have it use the correct memory.
- h. Kernel/User mode bit: No. This bit is set and unset
when the OS gets and releases control. It is not managed on behalf of
a process.
- i. Context switches: No. A context switch is a tool
that the OS uses to suspend and resume processes. It is not data to be
managed.
- j. Information about CPU speed: No. This is a
characteristic of a machine.
- k. The process' cache: No. Caches get invalidated
on context switches. They are not stored.
- l. The PC location of the process' next instruction: Yes.
This is part of the register set.
- m. Turnaround time: No. While a scheduler might
try to store/predict turnaround time, that is not information necessary
to resume a process.
- n. Disk controller information: No.
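The "Yes" items above could be collected into a process control block. The following is a minimal sketch, not any particular OS's layout; the field names, register count, and table size are all illustrative assumptions:

```c
#include <assert.h>
#include <stddef.h>

#define MAX_OPEN_FILES 16

/* A hypothetical PCB holding exactly the per-process state above:
   register state (c), the PC (l), open-file handles (b), and the
   memory-management pointer (g).  Timer/console interrupt state and
   the kernel/user bit are absent on purpose -- they belong to the
   machine and the OS, not to any one process. */
struct pcb {
    unsigned long regs[16];          /* c: general register state    */
    unsigned long pc;                /* l: next instruction          */
    int open_files[MAX_OPEN_FILES];  /* b: file-descriptor table     */
    void *main_memory;               /* g: memory-management pointer */
};

/* A context switch copies machine state into and out of the PCB.  */
void save_pc(struct pcb *p, unsigned long pc) { p->pc = pc; }
unsigned long resume_pc(const struct pcb *p)  { return p->pc; }
```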
Grading
1.5 points for each of b, c, g and l. I ignored your
answer to e. Minus 0.5 for k.
Minus 1 point for any other answer. Your score could not
go negative.
Question 2
When the scheduler determines that it is time to schedule a STARTING
thread, it calls setjmp(), then modifies the stack and
frame pointers of the jmpbuf so that they point to the
base of the new thread's stack. Then the scheduler calls longjmp()
so that it starts executing on the new stack. Obviously, at this point,
the program cannot rely upon the values of any local variables. It
calls the thread's function on its argument, and when that returns,
it exits by either shutting down the thread system or switching to
another thread, never to return again.
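The control flow above can be sketched with setjmp()/longjmp(). Actually rewriting the sp and fp fields of a jmp_buf is machine-dependent, so this sketch only shows the save/transfer/exit pattern on the current stack; the trace values and function names are illustrative assumptions:

```c
#include <assert.h>
#include <setjmp.h>

static jmp_buf sched_ctx;  /* the scheduler's saved context */
static int trace;

/* The thread's function: it runs, then "exits" by jumping back to
   the scheduler, never to return. */
static void thread_func(void) {
    trace = trace * 10 + 2;
    longjmp(sched_ctx, 1);
}

int start_thread_demo(void) {
    if (setjmp(sched_ctx) == 0) {   /* scheduler saves its context */
        trace = 1;
        /* A real thread system would now overwrite the saved stack
           and frame pointers in a jmp_buf so control lands at the
           base of the new thread's stack; those fields are machine-
           dependent, so here we just transfer control directly. */
        thread_func();
    }
    trace = trace * 10 + 3;  /* resumed here when the thread exits */
    return trace;            /* records the order: 1, then 2, then 3 */
}
```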
Grading
- Setjmp is called: 1 point
- It is called by the suspending thread: 1 point
- The sp and fp are modified to point to the new stack: 1 point
- Longjmp is called: 1 point
- It is called so that you are executing on the new stack: 1 point
- You then call the function on the argument, so that it executes
on the new stack: 1 point
Question 3
The following will get you a merit raise: "Freddy's got the right
idea -- you can implement a mutex using a semaphore whose
initial value is one: lock() is implemented by calling P() on the
semaphore, and unlock() is implemented by calling V(). However,
there are two fundamental differences between mutexes and semaphores.
First, calling V() on a semaphore with a positive value is a
well-defined operation. Calling unlock() on an unlocked mutex
is, however, not defined. Second, there is no restriction on
which threads call P() and V() on semaphores, while only the
thread that has locked a mutex may unlock it.
Therefore, if you want the correct semantics of mutexes, you
can implement them with semaphores, but you have to augment
the implementation to enforce these two restrictions. That
is why we have mutexes as a separate data structure."
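The two restrictions can be enforced in a small amount of wrapper code. Below is a sketch, not a real mutex implementation: the "semaphore" is a bare counter standing in for a blocking P()/V() pair, and ownership is tracked with a caller-supplied thread id, both assumptions for illustration:

```c
#include <assert.h>

/* Stand-in semaphore: a real P() would block instead of asserting. */
typedef struct { int value; } Sem;
static void P(Sem *s) { assert(s->value > 0); s->value--; }
static void V(Sem *s) { s->value++; }

/* Mutex = semaphore with initial value one, plus an owner field.   */
typedef struct { Sem s; int owner; } Mutex;   /* owner -1: unlocked */

void mutex_init(Mutex *m) { m->s.value = 1; m->owner = -1; }

void mutex_lock(Mutex *m, int tid) {
    P(&m->s);            /* lock() is P() on the semaphore */
    m->owner = tid;
}

/* Returns 0 on success, -1 on either violation: unlocking an
   unlocked mutex, or unlocking from a thread that doesn't hold it. */
int mutex_unlock(Mutex *m, int tid) {
    if (m->owner == -1 || m->owner != tid) return -1;
    m->owner = -1;
    V(&m->s);            /* unlock() is V() on the semaphore */
    return 0;
}
```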
Grading
- 2.5 points for mentioning the distinction about calling
unlock on an unlocked semaphore.
- 2.5 points for mentioning that only the process that
locks the mutex may unlock it.
- 1 point for saying something positive about Freddy's claim,
so that the boss thinks that you are a team player.
- If you missed the first two points, but mentioned that you can
implement one with the other and vice versa, I gave you 1 point
(allocated as 0.5 for each of the first two points).
Question 4
The easiest way to do this is to prevent circular wait. Simply
put the resources into a red-black tree, keyed on name/instance,
and traverse it:
void lock_multiple(Dllist resources)
{
  JRB t;
  Dllist dtmp;
  Resource *r;
  char *key;

  t = make_jrb();
  dll_traverse(dtmp, resources) {
    r = (Resource *) dtmp->val.v;
    key = (char *) malloc(sizeof(char) * (strlen(r->name) + 20));
    sprintf(key, "%10d %s", r->instance, r->name);
    jrb_insert_str(t, key, new_jval_v((void *) r));
  }
  while (!jrb_empty(t)) {
    r = (Resource *) t->flink->val.v;
    key = t->flink->key.s;
    P(r->s);
    jrb_delete_node(t->flink);
    free(key);
  }
  jrb_free_tree(t);
}
Unlock_multiple() can remain unchanged.
This removes circular wait and thus prevents deadlock.
You need the first condition, because otherwise, processes
can lock resources in non-ascending order, and circular wait
can occur. The second condition is necessary for the
same reason -- if processes can lock resources without
calling lock_multiple, then they can try to lock them
in any order, and circular wait can occur again.
Grading
- Correctly implementing a way to lock the resources in
ascending (or descending) order: 5 points
- If you implemented this approach, but didn't sort on
both the name and instance, you lost 2 points.
- Correctly implementing unlock_multiple(): 1.5
points.
- Saying that you prevent circular wait: 1.5 points
- Stating that breaking each restriction no longer
ensures that processes lock resources in ascending/descending
order: 1 point each. I also accepted saying that this prevented
further hold-and-wait.
Question 5
- A. Round robin scheduling with a time quantum of 0.05 seconds.
Clearly there is no preference between the CPU and IO bound jobs, since
each IO bound job has to wait for roughly 0.5 seconds each time it wants
the CPU. This is seen by both the average time in the ready queue (> 300
seconds for the VI processes, and 133 seconds for the WAV processes) and
the max ready-queue time.
- B. The IO-bound jobs are spending too much time in the
ready queue, and the CPU-bound jobs are spending too much time context
switching.
- C. A multi-level feedback queue fixes these problems
nicely. The CPU-bound jobs quickly move to the third level queue,
and thus the IO-bound jobs get scheduled ahead of them, which reduces
both their overall ready queue time, and their maximum waiting time.
Additionally, the third-level queue has a larger time quantum, so
the CPU-bound jobs keep the CPU longer, and suffer less context switch
overhead. A good predictive scheduler can achieve the same effect,
but, as detailed in the lecture notes, must be programmed with care
so that the CPU-bound jobs are identified correctly and treated fairly
with respect to one another.
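The feedback rule above can be sketched in a few lines. The number of levels, the per-level quanta, and the promotion-on-IO rule are illustrative assumptions, not values from the exam:

```c
#include <assert.h>

#define LEVELS 3

/* Per-level time quanta: small at the top for IO-bound jobs, large
   at the bottom so CPU-bound jobs suffer fewer context switches. */
static const double quantum[LEVELS] = { 0.05, 0.10, 0.40 };

double level_quantum(int level) { return quantum[level]; }

/* One feedback decision: a job that used its full quantum is
   CPU-bound and sinks a level; a job that blocked for IO rises,
   so it gets scheduled ahead of the CPU-bound jobs. */
int next_level(int level, int used_full_quantum) {
    if (used_full_quantum && level < LEVELS - 1)
        return level + 1;
    if (!used_full_quantum && level > 0)
        return level - 1;
    return level;
}
```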
Grading
- A. 3 points for the answer and justification.
- B. 2 points per problem/justification.
- C. 3 points for stating correctly how these problems are fixed.