B: c. Hopefully everyone got that one. Yes, one process can deadlock itself....
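As a minimal illustration (an assumed example, not part of the exam), a single thread can deadlock itself simply by trying to re-acquire a non-recursive pthread mutex that it already holds:

/* Assumed sketch: self-deadlock on a default (non-recursive) mutex.
   Strictly, re-locking a held default mutex is undefined behavior;
   in practice it typically blocks forever. */
#include <pthread.h>
#include <stdio.h>

int main()
{
  pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

  pthread_mutex_lock(&m);
  printf("acquired the lock once\n");
  pthread_mutex_lock(&m);     /* waits on itself -- never returns */
  printf("this line is never reached\n");
  return 0;
}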
C: a. This one required you to think a bit. As discussed in class, timestamps are much more liberal in the operations that they allow -- as long as there are no conflicts, there are no restrictions. However, when a conflict is discovered, transactions must be rolled back. Thus, timestamps will work very well when there are few conflicts. On the other hand, two-phase commit is restrictive in the access patterns that it allows. Thus, even though transactions cannot conflict, they may not execute as efficiently under two-phase commit. The other answers are not correct because those properties can affect either technique in either way.
D: d and e. See the lecture notes on dining philosophers. Yes, the lecture notes say that there is a possibility of starvation if the thread system is pathological, but that possibility is so remote that it shouldn't even be mentioned in the lecture notes.
E: c. Answer a is close to correct; however, Pj and Pi may use instances of the same resource class.
F: e. Clearly, a time quantum that is too small will cause too much time to be spent context-switching. There are no problems with fairness (eliminating answers a through d). So what is the difference between answers e and f? As discussed in class, the reason that context switches are expensive has more to do with cache-flushing than operating system overhead. Thus, e is the correct answer.
G: d. Longer time quanta will be harder on I/O-bound processes. However, this has nothing to do with CPU utilization -- FCFS scheduling with an infinite time quantum will have better CPU utilization than any round-robin scheduling. Process turnaround time, however, is greatly increased when CPU-bound jobs get more of the CPU.
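As a hypothetical illustration (the numbers are made up, not from the exam): suppose a CPU-bound job needs 100 ms of CPU and an interactive job needs only 1 ms. With a 1 ms quantum, the interactive job finishes within roughly 2 ms of arriving; with a 100 ms quantum (effectively FCFS), it can sit behind the CPU-bound job for the full 100 ms, so its turnaround time explodes even though the CPU is kept equally busy in both cases.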
H: c. This one has to do with the definition of ``protects.'' Certainly kernel-only memory locations prevent users from corrupting devices and the interrupt vector, so a and b are true. Similarly, d is true, because without a hardware timer, a user process could prevent other processes from running. How about c? Certainly DMA frees up the CPU -- without DMA, the CPU would have to be more involved in mass transfers. But that is a performance feature, not a correctness feature. Therefore, the answer is c. However, I will also accept ``none of the above'' if you want to argue that c is true. However, a, b and d are clearly true, and will not be accepted as answers.
I: c. See the kthreads implementation lecture.
The other dining philosophers solution breaks hold-and-wait. You only pick up chopsticks when you can be assured that both are available. Otherwise, you block on a condition variable (since you release the lock when you block on a condition variable, that is not hold-and-wait). You can modify this code in the same way -- have a condition variable upon which you block unless you can lock all three mutexes.
Here are the three pieces of code:
/* #1 -- break circular wait: every thread acquires its three locks in
   increasing index order, so no cycle of waiting threads can form. */

while (1) {
  if (id == 0) {
    pthread_mutex_lock(locks[0]);
    pthread_mutex_lock(locks[1]);
    pthread_mutex_lock(locks[NTHREADS-1]);
  } else if (id == NTHREADS-1) {
    pthread_mutex_lock(locks[0]);
    pthread_mutex_lock(locks[NTHREADS-2]);
    pthread_mutex_lock(locks[NTHREADS-1]);
  } else {
    pthread_mutex_lock(locks[id-1]);
    pthread_mutex_lock(locks[id]);
    pthread_mutex_lock(locks[id+1]);
  }

  /* The rest is the same as the original code. */
}

/* #2 -- break hold-and-wait: a thread only grabs its three locks when
   neither neighbor is executing, so it never holds some of its locks
   while waiting for the others. */

/* Global variables */
int executing[NTHREADS];        /* Initialized to all zeros */
pthread_mutex_t lock;
pthread_cond_t cvs[NTHREADS];

while (1) {
  pthread_mutex_lock(&lock);
  while (executing[(id+NTHREADS-1)%NTHREADS] || executing[(id+1)%NTHREADS]) {
    pthread_cond_wait(cvs+id, &lock);
  }
  executing[id] = 1;
  pthread_mutex_lock(locks[(id+NTHREADS-1)%NTHREADS]);
  pthread_mutex_lock(locks[id]);
  pthread_mutex_lock(locks[(id+1)%NTHREADS]);
  pthread_mutex_unlock(&lock);

  sleep(random()%10+1);

  pthread_mutex_unlock(locks[(id+NTHREADS-1)%NTHREADS]);
  pthread_mutex_unlock(locks[id]);
  pthread_mutex_unlock(locks[(id+1)%NTHREADS]);

  pthread_mutex_lock(&lock);
  executing[id] = 0;
  pthread_cond_signal(cvs+(id+NTHREADS-1)%NTHREADS);
  pthread_cond_signal(cvs+(id+1)%NTHREADS);
  pthread_mutex_unlock(&lock);
}
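For reference, here is a minimal setup sketch (not part of the graded answer; NTHREADS, the array declarations, and the philosopher() name are assumptions) showing how the shared state used by the fragments above might be initialized and the threads created:

/* Hypothetical setup sketch: declare and initialize the shared state,
   then start one philosopher thread per chopstick mutex. */
#include <pthread.h>
#include <stdlib.h>

#define NTHREADS 5

pthread_mutex_t *locks[NTHREADS];   /* one chopstick mutex per thread */
int executing[NTHREADS];            /* used by solution #2 */
pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cvs[NTHREADS];

void *philosopher(void *arg)
{
  int id = *(int *)arg;

  (void) id;
  /* ... the body is one of the while(1) loops shown above ... */
  return NULL;
}

int main()
{
  pthread_t tids[NTHREADS];
  int ids[NTHREADS], i;

  for (i = 0; i < NTHREADS; i++) {
    locks[i] = (pthread_mutex_t *) malloc(sizeof(pthread_mutex_t));
    pthread_mutex_init(locks[i], NULL);
    pthread_cond_init(cvs+i, NULL);
    executing[i] = 0;
    ids[i] = i;
  }
  for (i = 0; i < NTHREADS; i++) pthread_create(tids+i, NULL, philosopher, ids+i);
  for (i = 0; i < NTHREADS; i++) pthread_join(tids[i], NULL);
  return 0;
}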
1 point was deducted if you didn't state (or incorrectly stated) which assumption was broken.
If you didn't have the threads lock all three mutexes (i.e. each thread just locks mutex i), then you only got one point. It certainly changes the behavior of the program.
There is one exception to this. Suppose east has the singleton queen of diamonds, and west has the A984. Then you can hold your losers to one. Look just at the diamond suit:
              D JT52
D A984                      D Q
              D K763

You start by leading the king. This will lose to the ace, but it will also capture the queen. Now when you get back to your hand, here's the suit:
              D JT5
D 984                       D -
              D 763

Lead the D7. If west ducks, your problems are over. So assume he covers with the 8 or 9. Win the trick and go back to your hand:
              D T5
D 84                        D -
              D 63

Now play the D6. Again, if west ducks, the six wins. If he covers it instead, win it, and now your D5 is a winner!
To give yourself maximum chances with the entire hand, you should take the HA at trick one, and run off 5 rounds of spades. Pitch two clubs and a heart from dummy. If the opponents pitch diamonds, trying to protect their hearts and clubs (they have no idea what your hand looks like), then you may be able to hold your diamond losers to one. For example, suppose west starts with A4 of diamonds, and east starts with Q98. If each of them discards a diamond on the run of the spades, then a low diamond out of hand will hold the diamond losers to one.
If neither of them pitches diamonds, then you play for the holding above.