CS360 Lecture notes -- Thread #2

  • Jim Plank
  • Directory: /blugreen/homes/plank/cs360/notes/Thread2
  • Lecture notes: http://www.cs.utk.edu/~plank/plank/classes/cs360/360/notes/Thread2/lecture.html
    In this lecture, we cover preemption, the distinction between user and system threads in Solaris, and race conditions and mutexes.

    Preemption

    The previous lecture notes talked a bit about preemption. Here is some more specific information about preemption in Solaris. It is kind of confusing, but once you understand all the details, you'll see that the Solaris thread system is very well designed.

    There are two kinds of threads in Solaris: user-level threads, and system-level threads. The distinction between the two is kind of confusing, but I'll try to enlighten you. User-level threads exist solely in the running process -- they have no operating system support. That means that if a program has many user-level threads, it looks the same to the operating system as a ``normal'' Unix program with just one thread. In Solaris, user-level threads are non-preemptive. In other words, when a thread is running, it will not be interrupted by another user-level thread unless it voluntarily blocks, through a call such as pthread_exit() or pthread_join().

    When one thread stops executing and another starts, we call that a thread context switch. To restate the above then, user level threads only context switch when they voluntarily block. If you think about it, you can implement thread context switching with setjmp()/longjmp(). What this means is that you don't need the operating system in order to do thread context switching. This in turn means that context switching between user-level threads can be very fast, since there are no system calls involved.
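
    To see why no system calls are needed, here is a tiny sketch of a voluntary "context switch" done with setjmp()/longjmp(). A real thread package would also have to give each thread its own stack, which this sketch does not do, and the function names (cooperative_run, worker_step) are mine, not from any of the lecture's programs:

```c
#include <setjmp.h>

static jmp_buf scheduler;

/* One "thread" step: it does its work, then voluntarily yields
   control back to the scheduler with longjmp() -- no system call. */
static void worker_step(int i)
{
  longjmp(scheduler, i + 1);   /* the voluntary context switch */
}

/* Run n cooperative steps; returns how many steps actually ran. */
int cooperative_run(int n)
{
  int i;

  i = setjmp(scheduler);       /* control comes back here on every yield */
  if (i < n) worker_step(i);
  return i;
}
```

    Every transfer of control here is just a register save/restore in user space, which is why user-level context switches are so cheap.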

    So what is a system-level thread? It is a unit of execution as seen by the operating system. Standard non-threaded Unix programs are each managed by a separate system-level thread. The operating system performs time-slicing by periodically interrupting the system-level thread that is currently running, saving its state, and running a different system-level thread. This is how you can have multiple programs running simultaneously. Such an action is also called context switching.

    When you call pthread_create(), you create a new user-level thread that is managed by the same system-level thread as the calling thread. These two threads will execute non-preemptively in relation to each other. In fact, whenever a collection of user-level threads is serviced by the same system-level thread, they all execute non-preemptively in relation to each other. All of the programs in the previous threads lecture work in this way.

    Let's look at a few more. First, look at preempt1.c. This is a program that forks off two threads, each of which runs an infinite loop. When you run it:

    UNIX> preempt1
    thread 0.  i =          0
    thread 0.  i =          1
    thread 0.  i =          2
    thread 0.  i =          3
    thread 0.  i =          4
    thread 0.  i =          5
    ...
    
    You'll see that only thread 0 runs. (If you can't kill this with control-c, go into another window and kill the process with the kill command). The reason that thread 1 never runs is that thread 0 never voluntarily gives up the CPU. This is called starvation.

    Now, you can bind different user-level threads to different system-level threads. This means that if one user-level thread is running, then at some point the operating system will interrupt it and run another user-level thread. This is because the two user-level threads are bound to different system level threads.

    One way to bind a user-level thread to a different system level thread is to call pthread_create() in a different way. Look at preempt2.c. You'll see that you give an ``attribute'' to pthread_create() that says ``create this thread with a different system-level thread.'' Now when you run it, you'll see that the two threads interleave -- every now and then, the running thread is preempted, and the other thread gets to run:

    UNIX> preempt2
    thread 0.  i =          0
    thread 1.  i =          0
    thread 0.  i =          1
    thread 1.  i =          1
    thread 1.  i =          2
    thread 0.  i =          2
    thread 0.  i =          3
    thread 1.  i =          3
    thread 0.  i =          4
    thread 1.  i =          4
    thread 0.  i =          5
    thread 1.  i =          5
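
    The attribute that preempt2.c passes is not reproduced in these notes, but in pthreads the standard way to ask for a thread bound to its own system-level thread is to set the contention scope to PTHREAD_SCOPE_SYSTEM. A sketch (the function create_bound_thread and the loop body are my assumptions, not Plank's code):

```c
#include <pthread.h>
#include <stdio.h>

/* The thread body: print a few loop iterations, preempt2-style. */
static void *loop(void *arg)
{
  long id = (long) arg;
  int i;

  for (i = 0; i < 5; i++) printf("thread %ld.  i = %d\n", id, i);
  return NULL;
}

/* Create a thread with "system" contention scope, i.e. bound
   to its own system-level thread, so it can be preempted. */
int create_bound_thread(pthread_t *tid, long id)
{
  pthread_attr_t attr;

  pthread_attr_init(&attr);
  pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
  return pthread_create(tid, &attr, loop, (void *) id);
}
```

    Two threads created this way may be time-sliced against each other by the operating system, which is what produces the interleaved output above.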
    
    Now, here's the tricky part of Solaris. If a thread makes a blocking system call, then if there are other user-level threads bound to the same system-level thread, a new system-level thread is created and the blocking thread is bound to it. What this does is let the other user-level threads run while the thread is blocked.

    This is a fundamental difference between Solaris and older operating systems such as SunOS (the precursor to Solaris). SunOS allows only one system-level thread per process. Therefore, if a user-level thread makes a blocking system call in SunOS, all threads block until the system call completes. This is a drag. The design in Solaris is very nice.

    So, look at preempt3.c. First, you should see that the threads are created as user-level threads bound to the same system-level thread. Next, you'll see that thread 0 first reads a character from standard input before beginning its loop. This is a blocking system call. Therefore, it results in thread 0 being bound to a separate system-level thread from the main thread and thread 1. Thus, while it blocks, thread 1 can run. Go ahead and run it:

    UNIX> preempt3
    Thread 0: stopping to read
    thread 1.  i =          0
    thread 1.  i =          1
    thread 1.  i =          2
    thread 1.  i =          3
    ...
    
    So, thread 0 is blocked, and thread 1 is running. They are thus bound to separate system threads. Now, type RETURN, and thread 0 will start up again, and you'll see that they interleave as in preempt2:
    ...
    thread 1.  i =          3
                                    ( RETURN was typed here )
    Thread 0: Starting up again
    thread 0.  i =          0
    thread 1.  i =          4
    thread 0.  i =          1
    thread 1.  i =          5
    thread 0.  i =          2
    thread 1.  i =          6
    thread 0.  i =          3
    ...
    
    That's user/system level threads and preemption in a nutshell. Go over these examples again if you are confused.

    Race conditions and mutexes

    Look at race1.c. This is a pretty simple program. The command line arguments call for the user to specify the number of threads, a string size and a number of iterations. Then the program does the following. It allocates an array of stringsize characters. Then it forks off nthreads threads, passing each thread its id, the number of iterations, and the character array. Each thread is a user-level thread, so threads are non-preemptive. Now each thread loops for the specified number of iterations. At each iteration, it fills in the character array with one character -- thread 0 uses 'A', thread 1 uses 'B' and so on. At the end of an iteration, the thread prints out the character array. So, if we call it with the arguments 4, 4, 1, we'd expect the following output, and indeed that is what we get:
    UNIX> race1 4 4 1
    Thread 0: AAA
    Thread 1: BBB
    Thread 2: CCC
    Thread 3: DDD
    
    Similarly, the following make sense:
    UNIX> race1 4 4 2
    Thread 0: AAA
    Thread 0: AAA
    Thread 1: BBB
    Thread 1: BBB
    Thread 2: CCC
    Thread 2: CCC
    Thread 3: DDD
    Thread 3: DDD
    UNIX> race1 4 30 2
    Thread 0: AAAAAAAAAAAAAAAAAAAAAAAAAAAAA
    Thread 0: AAAAAAAAAAAAAAAAAAAAAAAAAAAAA
    Thread 1: BBBBBBBBBBBBBBBBBBBBBBBBBBBBB
    Thread 1: BBBBBBBBBBBBBBBBBBBBBBBBBBBBB
    Thread 2: CCCCCCCCCCCCCCCCCCCCCCCCCCCCC
    Thread 2: CCCCCCCCCCCCCCCCCCCCCCCCCCCCC
    Thread 3: DDDDDDDDDDDDDDDDDDDDDDDDDDDDD
    Thread 3: DDDDDDDDDDDDDDDDDDDDDDDDDDDDD
    UNIX> 
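
    The source of race1.c is not reproduced in these notes, but based on the description above, each thread's body plausibly looks something like the sketch below (the ThreadArg struct and the function name fill_and_print are my assumptions, not the actual code):

```c
#include <pthread.h>
#include <stdio.h>
#include <string.h>

/* What each race1 thread is given: its id, the iteration
   count, and the character array shared by all the threads. */
typedef struct {
  int id;
  int iterations;
  char *s;        /* shared among all threads -- the danger zone */
  int size;
} ThreadArg;

void *fill_and_print(void *arg)
{
  ThreadArg *t = (ThreadArg *) arg;
  int it;

  for (it = 0; it < t->iterations; it++) {
    memset(t->s, 'A' + t->id, t->size - 1);  /* thread 0 -> 'A', 1 -> 'B', ... */
    t->s[t->size - 1] = '\0';
    printf("Thread %d: %s\n", t->id, t->s);
  }
  return NULL;
}
```

    Note that the fill and the printf() are two separate steps on shared memory -- that gap is exactly where the trouble in race2 comes from.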
    
    Now, look at race2.c. The only difference here is that the threads are all bound to different system-level threads. This means that they may be preempted.

    Look at the output of the same calls to race2:

    UNIX> race2 4 4 1
    Thread 0: DDD
    Thread 1: DDD
    Thread 2: DDD
    Thread 3: DDD
    UNIX> race2 4 4 2
    Thread 0: CCC
    Thread 0: AAA
    Thread 3: AAA
    Thread 3: DDD
    Thread 1: DDD
    Thread 1: BBB
    Thread 2: BBB
    Thread 2: CCC
    UNIX> race2 4 30 1
    Thread 1: DDDDDDDDDDDDDDDDDDDDDDDDDDDDD
    Thread 0: DDDDDDDDDDDDDDDDDDDDDDDDDDDDD
    Thread 2: DDDDDDDDDDDDDDDDDDDDDDDDDDDDD
    Thread 3: DDDDDDDDDDDDDDDDDDDDDDDDDDDDD
    UNIX> 
    
    Is that what you expected? Obviously, the first and third calls look weird, but the second call is messed up too: I'd expect to see the threads print out in a random order, but I wouldn't expect Thread 0 to print C's, or thread 1 to print D's. What is happening?

    What is happening is that threads can be preempted anywhere. In particular, a thread may be preempted right before its printf statement, which means that other threads can modify s in the meantime; when the original thread finally executes the printf(), the contents of s are not what that thread wrote.

    The file output.txt illustrates the same point in another way. It was created by:

    UNIX> race2 10 70 100 > output.txt
    
    You'll notice that line 181 of this file is:
    Thread 1: HHHHHHHHHHHHHHHHHHHHHHHHHHFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
    
    So, not only is thread 1 printing out a line of characters other than 'B', but the line mixes H's and F's. This means that the flow of control was something like the following: thread 5 filled s with F's; thread 7 then began overwriting s with H's, but was preempted partway through; finally thread 1, which had been preempted just before its printf(), printed s, yielding H's followed by F's.

    The bottom line is that race2.c has a race condition. This means that some memory shared by the threads -- in this case the string s -- is accessed in an uncontrolled way, leading to very confusing results. When you program with threads, you must take care with shared memory. If more than one thread can modify the shared memory, then you often need to protect it so that weird things do not happen.

    In our race program, we can fix the race condition by ensuring that no other thread can modify or print s while one thread is in the middle of modifying and printing it. This can be done with a mutex, sometimes called a ``lock'' or a ``binary semaphore.'' There are three procedures for dealing with mutexes in pthreads:

    pthread_mutex_init(pthread_mutex_t *mutex, const pthread_mutexattr_t *attr);
    pthread_mutex_lock(pthread_mutex_t *mutex);
    pthread_mutex_unlock(pthread_mutex_t *mutex);
    
    You create a mutex with pthread_mutex_init() (passing NULL for the attributes gives you a default mutex). Then any thread may lock or unlock the mutex. When a thread locks the mutex, no other thread may lock it. If another thread calls pthread_mutex_lock() while the mutex is locked, it will block until the mutex is unlocked. Only one thread may hold the mutex at a time.
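
    Here is a minimal, self-contained sketch of that lock/modify/unlock pattern, using a shared counter instead of the string s from the race programs (the names run_counter and incr are mine). Without the mutex, the final count would usually come out short:

```c
#include <pthread.h>

#define NTHREADS 4
#define ITERS 100000

/* Static initializer -- same effect as pthread_mutex_init(&lock, NULL). */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;          /* shared by all the threads */

static void *incr(void *arg)
{
  int i;

  (void) arg;
  for (i = 0; i < ITERS; i++) {
    pthread_mutex_lock(&lock);    /* no other thread gets past this until we unlock */
    counter++;                    /* the protected critical section */
    pthread_mutex_unlock(&lock);
  }
  return NULL;
}

/* Create the threads, wait for them all, and return the final count. */
long run_counter(void)
{
  pthread_t tid[NTHREADS];
  int i;

  counter = 0;
  for (i = 0; i < NTHREADS; i++) pthread_create(tid + i, NULL, incr, NULL);
  for (i = 0; i < NTHREADS; i++) pthread_join(tid[i], NULL);
  return counter;
}
```

    With the mutex in place, run_counter() always returns exactly NTHREADS * ITERS, no matter how the threads are preempted.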

    So, we fix the race program with race3.c. You'll notice that a thread locks the mutex just before modifying s and it unlocks the mutex just after printing s. This fixes the program so that the output makes sense:

    UNIX> race3 4 4 1
    Thread 0: AAA
    Thread 1: BBB
    Thread 2: CCC
    Thread 3: DDD
    UNIX> race3 4 4 2
    Thread 0: AAA
    Thread 0: AAA
    Thread 2: CCC
    Thread 2: CCC
    Thread 1: BBB
    Thread 1: BBB
    Thread 3: DDD
    Thread 3: DDD
    UNIX> race3 4 70 1
    Thread 0: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
    Thread 1: BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
    Thread 2: CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
    Thread 3: DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD
    UNIX> race3 10 70 100 > output3.txt
    

    Terse advice on mutexes

    One of the challenges in dealing with synchronization primitives is to get what you want without constricting the threads system too much. For example, in your jtalk_server program, you will have to have a data structure that holds all of the current connections. When someone attaches to the socket, you add the connection to that data structure. When someone quits his/her jtalk session, then you delete the connection from the data structure. And when someone sends a line to the server, you will traverse the data structure, and send the line to all the connections. You will need to protect the data structure with a mutex. For example, you do not want to be traversing the data structure and deleting a connection at the same time. One thing you want to think about is how to protect the data structure, but at the same time not cause too many threads to block on the mutex if they really don't have to. We'll talk more about this later.
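
    As a sketch of that idea, here is a hypothetical connection list where every add, delete, and traversal takes a single mutex, so a delete can never run in the middle of a traversal. None of these names come from the actual jtalk assignment:

```c
#include <pthread.h>
#include <stdlib.h>

/* Hypothetical connection table for a jtalk-style server. */
typedef struct conn {
  int fd;                 /* the socket for this connection */
  struct conn *next;
} Conn;

static Conn *connections = NULL;
static pthread_mutex_t conn_lock = PTHREAD_MUTEX_INITIALIZER;

void add_connection(int fd)
{
  Conn *c = malloc(sizeof(Conn));
  c->fd = fd;
  pthread_mutex_lock(&conn_lock);
  c->next = connections;           /* push onto the front of the list */
  connections = c;
  pthread_mutex_unlock(&conn_lock);
}

void delete_connection(int fd)
{
  Conn **p, *dead;

  pthread_mutex_lock(&conn_lock);
  for (p = &connections; *p != NULL && (*p)->fd != fd; p = &(*p)->next) ;
  if (*p != NULL) { dead = *p; *p = dead->next; free(dead); }
  pthread_mutex_unlock(&conn_lock);
}

/* Traverse under the lock -- a stand-in for "send this line to everyone". */
int count_connections(void)
{
  Conn *c;
  int n = 0;

  pthread_mutex_lock(&conn_lock);
  for (c = connections; c != NULL; c = c->next) n++;
  pthread_mutex_unlock(&conn_lock);
  return n;
}
```

    The cost of this simplicity is that one slow traversal blocks every other operation on the list -- that is the "constricting the threads system too much" concern, and we'll see finer-grained alternatives later.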

    Threaded telnet

    Look at cat1.c. This is a simple cat program using a subroutine inout() that we will use in a bit.

    Next, look at cat2.c. This is another cat program that uses a separate inout thread.

    Finally, look at th_telnet1.c. This requests a connection to the desired host and port, and then forks off two inout() threads. The first reads from standard input, and sends the output to the socket connection. The second reads from the socket, and sends the output to standard output. If either detects that the socket or standard input has closed, then the telnet session ends.
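
    The notes don't reproduce inout()'s actual interface, but a plausible version -- copy bytes from one file descriptor to another until end-of-file or a write error -- can be sketched as follows (the InOut struct and its field names are my assumptions, not the real cat1.c/th_telnet1.c code):

```c
#include <unistd.h>

/* The two file descriptors an inout thread shuttles bytes between. */
typedef struct {
  int from;
  int to;
} InOut;

/* Copy from->to until EOF or error; shaped as a pthread body. */
void *inout(void *arg)
{
  InOut *io = (InOut *) arg;
  char buf[4096];
  ssize_t n;

  while ((n = read(io->from, buf, sizeof(buf))) > 0) {
    if (write(io->to, buf, n) != n) break;   /* write failed: give up */
  }
  return NULL;                               /* n <= 0: EOF or read error */
}
```

    In th_telnet1.c, one thread would run this with from = standard input and to = the socket, and the other with the roles reversed.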

    This is very simple code, and should show you the power of threads. This is much more straightforward than writing telnet with select, isn't it?