CS360 Lecture notes -- Thread #2

  • Jim Plank
  • Directory: http://www.cs.utk.edu/~mbeck/classes/cs560/360/notes/Thread2
  • Lecture notes: http://www.cs.utk.edu/~mbeck/classes/cs560/360/notes/Thread2/lecture.html
    In this lecture, we cover preemption and the distinction between user and system threads in Solaris, plus race conditions and mutexes.

    Preemption

    Unfortunately, we no longer use non-preemptive Solaris-based systems; POSIX threads on current versions of Solaris and on Linux are preemptive. So, read the text below with that in mind. The programs run as specified on old versions of Solaris, but not on current versions of Solaris or on Linux.

    The previous lecture notes talked a bit about preemption. Here is some more specific information about preemption in Solaris. It is kind of confusing, but once you understand all the details, you'll see that the Solaris thread system is very well designed.

    There are two kinds of threads in Solaris: user-level threads, and system-level threads. The distinction between the two is kind of confusing, but I'll try to enlighten you. User-level threads exist solely in the running process -- they have no operating system support. That means that if a program has many user-level threads, it looks the same to the operating system as a ``normal'' Unix program with just one thread. In Solaris, user-level threads are non-preemptive. In other words, when a thread is running, it will not be interrupted by another user-level thread unless it voluntarily blocks, through a call such as pthread_exit() or pthread_join().

    When one thread stops executing and another starts, we call that a thread context switch. To restate the above then, user level threads only context switch when they voluntarily block. If you think about it, you can implement thread context switching with setjmp()/longjmp(). What this means is that you don't need the operating system in order to do thread context switching. This in turn means that context switching between user-level threads can be very fast, since there are no system calls involved.

    So what is a system-level thread? It is a unit of execution as seen by the operating system. Standard non-threaded Unix programs are each managed by a separate system-level thread. The operating system performs time-slicing by periodically interrupting the system-level thread that is currently running, saving its state, and running a different system-level thread. This is how you can have multiple programs running simultaneously. Such an action is also called context switching.

    When you call pthread_create(), you create a new user-level thread that is managed by the same system-level thread as the calling thread. These two threads will execute non-preemptively in relation to each other. In fact, whenever a collection of user-level threads is serviced by the same system-level thread, they all execute non-preemptively in relation to each other. All of the programs in the previous threads lecture work in this way.

    Let's look at a few more. First, look at preempt1.c. This is a program that forks off two threads, each of which runs an infinite loop. When you run it:

    UNIX> preempt1
    thread 0.  i =          0
    thread 0.  i =          1
    thread 0.  i =          2
    thread 0.  i =          3
    thread 0.  i =          4
    thread 0.  i =          5
    ...
    
    You'll see that only thread 0 runs. (If you can't kill this with control-c, go into another window and kill the process with the kill command). The reason that thread 1 never runs is that thread 0 never voluntarily gives up the CPU. This is called starvation.

    Now, you can bind different user-level threads to different system-level threads. This means that if one user-level thread is running, then at some point the operating system will interrupt it and run another user-level thread. This is because the two user-level threads are bound to different system-level threads.

    One way to bind a user-level thread to a different system-level thread is to call pthread_create() in a different way. Look at preempt2.c. You'll see that you give an ``attribute'' to pthread_create() that says ``create this thread with a different system-level thread.'' Now when you run it, you'll see that the two threads interleave -- every now and then, the running thread is preempted, and the other thread gets to run:

    UNIX> preempt2
    thread 0.  i =          0
    thread 1.  i =          0
    thread 0.  i =          1
    thread 1.  i =          1
    thread 1.  i =          2
    thread 0.  i =          2
    thread 0.  i =          3
    thread 1.  i =          3
    thread 0.  i =          4
    thread 1.  i =          4
    thread 0.  i =          5
    thread 1.  i =          5
    
    Now, here's the tricky part of Solaris. If a thread makes a blocking system call, then if there are other user-level threads bound to the same system-level thread, a new system-level thread is created and the blocking thread is bound to it. What this does is let the other user-level threads run while the thread is blocked.

    This is a fundamental difference between Solaris and older operating systems such as SunOS (the precursor to Solaris). SunOS allows only one system-level thread per process. Therefore, if a user-level thread makes a blocking system call in SunOS, all threads block until the system call completes. This is a drag. The design in Solaris is very nice.

    So, look at preempt3.c. First, you should see that the threads are created as user-level threads bound to the same system-level thread. Next, you'll see that thread 0 first reads a character from standard input before beginning its loop. This is a blocking system call. Therefore, it results in this thread being bound to a separate system-level thread from the main thread and thread 1. Therefore, while it blocks, thread 1 can run. Go ahead and run it:

    UNIX> preempt3
    Thread 0: stopping to read
    thread 1.  i =          0
    thread 1.  i =          1
    thread 1.  i =          2
    thread 1.  i =          3
    ...
    
    So, thread 0 is blocked, and thread 1 is running. They are thus bound to separate system threads. Now, type RETURN, and thread 0 will start up again, and you'll see that they interleave as in preempt2:
    ...
    thread 1.  i =          3
                                    ( RETURN was typed here )
    Thread 0: Starting up again
    thread 0.  i =          0
    thread 1.  i =          4
    thread 0.  i =          1
    thread 1.  i =          5
    thread 0.  i =          2
    thread 1.  i =          6
    thread 0.  i =          3
    ...
    
    That's user/system level threads and preemption in a nutshell. Go over these examples again if you are confused.

    Race conditions and mutexes

    From here on down, the lecture notes pertain to Linux and Solaris. Look at race2.c. This is a pretty simple program. The command line arguments call for the user to specify the number of threads, a string size and a number of iterations. Then the program does the following. It allocates an array of stringsize characters. Then it forks off nthreads threads, passing each thread its id, the number of iterations, and the character array. Now each thread loops for the specified number of iterations. At each iteration, it fills in the character array with one character -- thread 0 uses 'A', thread 1 uses 'B' and so on. At the end of an iteration, the thread prints out the character array. So, if we call it with the arguments 4, 4, 1, we'd expect the following output:
    UNIX> race2 4 4 1
    Thread 0: AAA
    Thread 1: BBB
    Thread 2: CCC
    Thread 3: DDD
    
    Similarly, the following make sense:
    UNIX> race2 4 4 2
    Thread 0: AAA
    Thread 0: AAA
    Thread 1: BBB
    Thread 1: BBB
    Thread 2: CCC
    Thread 2: CCC
    Thread 3: DDD
    Thread 3: DDD
    UNIX> race2 4 30 2
    Thread 0: AAAAAAAAAAAAAAAAAAAAAAAAAAAAA
    Thread 0: AAAAAAAAAAAAAAAAAAAAAAAAAAAAA
    Thread 1: BBBBBBBBBBBBBBBBBBBBBBBBBBBBB
    Thread 1: BBBBBBBBBBBBBBBBBBBBBBBBBBBBB
    Thread 2: CCCCCCCCCCCCCCCCCCCCCCCCCCCCC
    Thread 2: CCCCCCCCCCCCCCCCCCCCCCCCCCCCC
    Thread 3: DDDDDDDDDDDDDDDDDDDDDDDDDDDDD
    Thread 3: DDDDDDDDDDDDDDDDDDDDDDDDDDDDD
    UNIX> 
    
    While that is what we do get on our Linux boxes, that output is not guaranteed. The reason is that threads can be preempted anywhere. In particular, they may be preempted in the middle of the for() loop, or in the middle of the printf() statement. This can lead to strange output. For example, try the following:
    UNIX> race2 2 70 100000 | grep 'AB'
    
    This searches for output lines where the character 'A' is followed by a B. When I ran this, I got:
    UNIX> race2 2 70 100000 | grep 'AB'
    Thread 0: AAAAAAAAAAAAAAAAAAAAAABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
    Thread 1: AAAABBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
    UNIX> 
    
    This shows two instances where thread 0 was interrupted by thread 1, which had been interrupted in the middle of its for loop. When thread 1 resumed, it overwrote the string with B's.

    The bottom line is that race2.c has a race condition. This means that some memory shared by the threads, in this case the string s, is accessed in an uncontrolled way, leading to very confusing results. When you program with threads, you must take care of shared memory. If more than one thread can modify the shared memory, then you often need to protect the memory so that weird things do not happen to the memory.

    In our race program, we can fix the race condition by enforcing that no thread can be interrupted by another thread when it is modifying and printing s. This can be done with a mutex, sometimes called a ``lock'' or sometimes a ``binary semaphore.'' There are three procedures for dealing with mutexes in pthreads:

    pthread_mutex_init(pthread_mutex_t *mutex, pthread_mutexattr_t *attr);
    pthread_mutex_lock(pthread_mutex_t *mutex);
    pthread_mutex_unlock(pthread_mutex_t *mutex);
    
    You create a mutex with pthread_mutex_init() (passing NULL for the attribute gives the default behavior). Then any thread may lock or unlock the mutex. When a thread locks the mutex, no other thread may lock it. If other threads call pthread_mutex_lock() while the mutex is locked, then they will block until the mutex is unlocked. Only one thread may hold the mutex at a time.

    So, we fix the race program with race3.c. You'll notice that a thread locks the mutex just before modifying s and it unlocks the mutex just after printing s. This fixes the program so that the output makes sense:

    UNIX> race3 4 4 1
    Thread 0: AAA
    Thread 1: BBB
    Thread 2: CCC
    Thread 3: DDD
    UNIX> race3 4 4 2
    Thread 0: AAA
    Thread 0: AAA
    Thread 2: CCC
    Thread 2: CCC
    Thread 1: BBB
    Thread 1: BBB
    Thread 3: DDD
    Thread 3: DDD
    UNIX> race3 4 70 1
    Thread 0: AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
    Thread 1: BBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBBB
    Thread 2: CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
    Thread 3: DDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDDD
    UNIX> race3 10 70 100 > output3.txt
    

    Terse advice on mutexes

    One of the challenges in dealing with synchronization primitives is to get what you want without constricting the threads system too much. For example, in your jtalk_server program, you will have to have a data structure that holds all of the current connections. When someone attaches to the socket, you add the connection to that data structure. When someone quits his/her jtalk session, then you delete the connection from the data structure. And when someone sends a line to the server, you will traverse the data structure, and send the line to all the connections. You will need to protect the data structure with a mutex. For example, you do not want to be traversing the data structure and deleting a connection at the same time. One thing you want to think about is how to protect the data structure, but at the same time not cause too many threads to block on the mutex if they really don't have to. We'll talk more about this later.

    Threaded telnet

    Look at cat1.c. This is a simple cat program using a subroutine inout() that we will use in a bit.

    Next, look at cat2.c. This is another cat program that uses a separate inout thread.

    Finally, look at th_telnet1.c. This requests a connection to the desired host and port, and then forks off two inout() threads. The first reads from standard input, and sends the output to the socket connection. The second reads from the socket, and sends the output to standard output. If either detects that the socket or standard input has closed, then the telnet session ends.

    This is very simple code, and should show you the power of threads. This is much more straightforward than writing telnet with select, isn't it?