CS560 Midterm Exam - March 17, 2005 - Answers
Jim Plank
Question 1
Part 1:
With a three-level multilevel feedback queue, ready processes can
be in one of three queues, which for the sake of simplicity, we
will call Q1, Q2 and Q3. When it is time to schedule a process,
the first process in Q1 is always selected. If there is no process
in Q1, then the first process in Q2 is selected. If there is no
process in Q2, then the first process in Q3 is selected.
When a process first enters the ready
state, it enters Q1.
If it expires its time quantum, it is moved to Q2. If it expires
its time quantum in Q2, it moves to Q3, and stays there until
it leaves the READY/RUNNING state. When it becomes ready again
(e.g., after blocking for I/O), it goes back to Q1.
The three queues have different time quanta. Q2's is either the
same as Q1's (which is how we implemented it in class), or something
like double Q1's quantum (which is how the book described it).
Q3 either has no time quantum at all (that's how the book describes
it), or has a much longer time quantum than Q1 and Q2. For example, in
our implementation, Q3's quantum was Q1's increased by a factor of 10.
This algorithm is a good one, because it does a decent job of identifying
I/O bound processes (those that stay in Q1/Q2), and these get serviced
in preference to the CPU bound processes. Thus, the I/O bound jobs
have decent response time, and the "convoy" effect is avoided. The CPU-bound
jobs get the rest of the CPU's resources, and if a time quantum is used,
they still get time-sliced. However, since the quantum in Q3 is either
big or infinite, these processes do not absorb the overhead that they
would absorb were the quantum lower.
Part 2:
This is subtle. Suppose this job gets scheduled with less than
400 seconds left until the timer interrupts. Note that we don't reset
the timer at each scheduling point, so the timer can interrupt a job
before it has received its full time quantum.
When this happens, the job is moved to Q2, and suppose Q2's time
quantum is double Q1's. Then at least one time quantum will be
dedicated to the job, and it will voluntarily give up the CPU and
move back to Q1. If we didn't have that second queue, the
job would go instantly onto the third queue, and may have to
wait a significant period of time for the CPU-bound jobs to
execute before it can get the CPU and go back to Q1.
Thus, that second queue makes sure that the I/O bound jobs
do not get put into the CPU-bound queue erroneously.
Grading
Correct description including:
- 2 points: All processes enter queue 1, both at first and when they
give up the CPU voluntarily.
- 1 point: Queues are serviced in order.
- 2 points: Processes move down to queues with higher numbers when they
expire their time quanta.
- 2 points: The queues (at least the last one) have different quanta, or
even none on the bottom queue.
- 1 point: It identifies I/O bound processes and prefers them to CPU-bound ones.
- 1 point: CPU-bound processes get longer quanta so that they may absorb less
context switch overhead.
- Part 2: 1 point: The process can start near the end of the quantum,
thereby expiring its time quantum prematurely.
- Part 2: 2 points: The second level queue helps prevent processes
from being erroneously sent into the CPU-bound queue.
Question 2
Schedule A is serializable, since the schedule is
equivalent to one where Transaction 1 is executed, and
then Transaction 2 is executed. However, it cannot result
from two-phase locking, because Transaction 1 would have to hold
the lock on A at the same time that Transaction 2 holds it.
Schedule B is serializable, since the schedule is
equivalent to one where Transaction 1 is executed before
Transaction 2.
Moreover, it could result from a two phase locking
protocol. Assume that Transaction 1 first locks
A, C and D, and then it unlocks each item after
reading/writing it. It works.
Schedule C is neither serializable nor could it result
from a two-phase locking protocol.
Schedule D is serializable, since the schedule is
equivalent to one where Transaction 1 is executed before
Transaction 2. It could also result from a two-phase
locking protocol -- assume that Transaction 1 locks
all three items at the beginning, and then unlocks
them all after reading E.
Grading
Eight points -- one for each schedule & question.
Question 3
The seg fault occurs at line 78. What has happened? We have
just started running a new thread, and a fresh, empty stack was created
for it in lines 62/63 or 66/67, which set the sp/fp around a blank
area of memory. Then longjmp() is called on line 71.
That longjmp() returns to the setjmp() call on line
57. At that point, the local variables have lost their values, since
the stack is a brand new one. Specifically, ktrun will be 0, and
when the program attempts to access ktrun->exitbuf on line
78, it seg faults.
The solution to this is pretty simple -- do not attempt to have a
local variable whose values are retained across the longjmp()
call. Instead of using ktrun, just use a global variable
(this is how the real kt.c does it -- it uses ktRunning
instead of ktrun).
Grading
5 points -- 1 for identifying the proper line, 2 for the proper
reason, 2 for the proper fix.
Question 4
A safe state is one in which we can be guaranteed that there is an
ordering, p_1, ..., p_n, of the processes such that
all of the resources that p_1 needs are available, and,
for each i, once p_i completes and releases all of its resources,
p_(i+1) may allocate all of the resources that it needs.
Safe states are used to avoid deadlocks in the following way.
Each process declares its resource needs to the system. Then,
whenever a process attempts to allocate a resource, the system
must ensure that it will still be in a safe state once the resource
is allocated. Otherwise, the process must block. In this manner,
deadlocks are avoided.
Grading
5 points -- three for the definition of safe states (and I was picky
here), and two for how it is used -- you must say that the
process is blocked if its resource acquisition would result in an
unsafe state to receive these latter two points.
Question 5
One thread may read another thread's memory. By not freeing a
thread's stack until a thread is joined, we can be sure that
the memory locations on the thread's stack will be valid even
if the thread has exited.
Several of you said that threads cannot free their own stacks.
That is true; however, in the kthreads library, a thread's
stack is freed instantly when it exits -- it is just freed
by a different thread (the one that gets scheduled when the
thread dies).
Grading
3 points. Either you got it right, or you didn't.
Question 6
Ok -- the basics are straightforward -- traverse the list and check
whether each resource is unlocked. If they are all unlocked, great --
lock them! The real problem is what to do when one of them is locked.
Unfortunately, no one gave a completely correct answer, which I have
below. Here is a decent spectrum of answers. Almost everyone's fell
into one of these classes.
- CORRECT:
We will keep a global list of blocked threads, and whenever a thread
unlocks resources, we will wake up all of the blocked threads so that
they may retest themselves.
This will prevent deadlock, although starvation is
a very real possibility. Also, if there are a lot of threads, this
will not be very efficient. It would be more efficient to
keep track of which threads are blocking on which resources, and then
only have those threads retest themselves when a resource is freed.
- UNBLOCK-ONE: This was the most popular solution. With
this solution, when a thread must block, it blocks on a condition
variable or semaphore, and then when unlock_resources() is
called, it unblocks one blocked thread. The problem is that you
don't know which thread or threads to unblock -- you really have to
unblock them all, or figure out which ones should be unblocked, and
unblock them. Regardless, this was a straightforward answer, and
received good credit.
- BUSY-WAIT: With this answer, a thread never blocks -- instead
it just keeps checking whether its resources are free, over and over.
While technically this solution should work, it is brutal in terms of
resource usage.
- LIVE-LOCK: With this answer, threads call lock_resource(),
until they can no longer lock a resource. Then they unlock all of the resources
and try again. This is worse than the BUSY-WAIT solution, because you
could get live-lock -- two threads could repeatedly block each other, and neither
gets any resources. At least with the BUSY-WAIT solution, one of them
would get the resources.
- ONE-LOOP: With this answer, the resource loop is traversed once,
and if a resource is busy, the thread waits until it is free. At the end
of the loop, it tries to get all of the resources. Unfortunately, by the
time the loop is done, resources that were once free may no longer be free,
so this does not work.
- NO-CONTROL: With this answer, there is really no deadlock control --
basically, the loop is traversed, and lock_resource() (or its equivalent)
is called.
Here is a CORRECT solution.
mutex_t lock;
Dllist blocked;
Lock is initialized in the obvious way, and blocked
is initialized to be empty. Then the two routines may be implemented
as follows:
void lock_resources(Dllist rlist)
{
  int ok;
  Dllist tmp;
  cond_t cond;
  Resource *r;

  mutex_lock(lock);
  while (1) {

    /* First, test whether every resource on the list is free. */
    ok = 1;
    dll_traverse(tmp, rlist) {
      r = (Resource *) tmp->val.v;
      if (r->inuse) ok = 0;
    }

    /* If so, lock them all at once and return. */
    if (ok) {
      dll_traverse(tmp, rlist) {
        r = (Resource *) tmp->val.v;
        resource_lock(r);
      }
      mutex_unlock(lock);
      return;
    } else {

      /* Otherwise, block on a fresh condition variable until some
         thread calls unlock_resources(), then loop and retest. */
      cond = new_cond();
      dll_append(blocked, new_jval_v((void *) cond));
      cond_wait(cond, lock);
      destroy_cond(cond);
    }
  }
}

void unlock_resources(Dllist rlist)
{
  Dllist tmp;
  cond_t cond;
  Resource *r;

  mutex_lock(lock);

  /* Release every resource on the list... */
  dll_traverse(tmp, rlist) {
    r = (Resource *) tmp->val.v;
    resource_unlock(r);
  }

  /* ...then wake up all blocked threads so that they may retest. */
  while (!dll_empty(blocked)) {
    cond = (cond_t) blocked->flink->val.v;
    dll_delete_node(blocked->flink);
    cond_signal(cond);
  }
  mutex_unlock(lock);
}
Grading
12 points. Basically, you get graded on which answer you gave, and then
you get extra points for good things and lose points for bad ones.
- CORRECT: 12 points. Irrelevant, because no one got a
correct answer.
- UNBLOCK-ONE: 10 points. You got an extra point or two if
you unblocked threads more often than once per unlock_resources()
call.
- BUSY-WAIT: 8 points. While technically correct, this will be a CPU
killer. If you held the global mutex in your while loop, you lost two more points.
- LIVE-LOCK: 6 points.
- ONE-LOOP: 4 points.
- NO-CONTROL: 1 point.
A standard deduction was 1 point if you didn't protect your loop traversal
with a semaphore or mutex.