CS361: Operating System

Jian Huang — Spring 2012

EECS | University of Tennessee - Knoxville

I/O

In many ways, the common perception is that I/O is of lesser importance than other parts of a computer. That perception is wrong. Without I/O, computers are simply useless. Remember the example where we tried to write a program without making any system call?

The I/O system is also very complex, because a general-purpose OS may need to support literally thousands of different devices. The question is: how can we standardize the interfaces to these devices? The challenge does not stop there: I/O devices are also unreliable due to potential media failures, transmission errors, etc., and they can be unpredictable and/or slow.

How can we elegantly manage them without knowing what they will do, when they will do it, or how they will perform? Despite the wide range of different devices, the I/O subsystem of the OS needs to provide a uniform interface. Let's start by defining a few basic dimensions of I/O devices.

From a Unix point of view, we can now think about the system calls that we have learned and see how they fit different kinds of devices. open(), read(), write(), and seek() are mostly for accessing blocks of data via raw I/O or file-system access; get(), put(), etc. are for single-character devices like keyboards, mice, and serial ports; pipes, FIFOs, and streams are mostly for network devices, such as Ethernet and wireless.
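Here is a minimal user-space sketch in C of that distinction; the device paths (/dev/sda, /dev/tty) are Linux conventions assumed for illustration, and reading the raw disk typically requires root privileges. The point is only that the same open()/read() interface serves both classes, with block devices supporting seeks and character devices delivering bytes one at a time.

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[512];

    /* Block-style access: seek to a position, then read a whole block. */
    int disk = open("/dev/sda", O_RDONLY);      /* assumed raw disk device */
    if (disk >= 0) {
        lseek(disk, 0, SEEK_SET);
        ssize_t n = read(disk, buf, sizeof buf);
        printf("read %zd bytes from the block device\n", n);
        close(disk);
    }

    /* Character-style access: bytes arrive one at a time, no seeking. */
    int tty = open("/dev/tty", O_RDONLY);       /* the controlling terminal */
    if (tty >= 0) {
        char c;
        if (read(tty, &c, 1) == 1)
            printf("got character '%c'\n", c);
        close(tty);
    }
    return 0;
}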

Now, let us look at a second way to categorize I/O calls, from the perspective of timing. There are three kinds of interfaces: blocking, non-blocking, and asynchronous. I explained the major differences among these three in class; they are very important.
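As a small illustration of the timing difference between the first two, here is a C sketch contrasting a blocking read with a non-blocking read on standard input. The asynchronous style (e.g., POSIX AIO) is omitted here.

#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[64];

    /* Blocking: the call does not return until data is available. */
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
    printf("blocking read returned %zd bytes\n", n);

    /* Non-blocking: the call returns immediately; if no data is ready,
     * it fails with EAGAIN and the caller is expected to try again later. */
    int flags = fcntl(STDIN_FILENO, F_GETFL);
    fcntl(STDIN_FILENO, F_SETFL, flags | O_NONBLOCK);
    n = read(STDIN_FILENO, buf, sizeof buf);
    if (n < 0 && errno == EAGAIN)
        printf("non-blocking read: no data ready right now\n");
    else
        printf("non-blocking read returned %zd bytes\n", n);
    return 0;
}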

Device Driver

A device driver is device-specific code in the kernel that interacts directly with the device hardware. The rest of the kernel does not deal with each particular device directly; instead, each device driver implements and supports an (internal) standard interface. Device drivers are part of the kernel I/O subsystem, and device-specific configurations are handled through the ioctl() system call.
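To make the idea of a standard interface concrete, here is a simplified user-space sketch in C. The struct and function names (dev_ops, null_read, and so on) are made up for illustration, loosely modeled on the role that struct file_operations plays in Linux: the generic I/O code calls every driver through the same table of function pointers, and ioctl is just one more entry in that table.

#include <stddef.h>
#include <stdio.h>

struct dev_ops {                      /* the (internal) standard interface */
    int  (*open)(void);
    long (*read)(char *buf, size_t len);
    long (*write)(const char *buf, size_t len);
    int  (*ioctl)(unsigned cmd, unsigned long arg);
};

/* A trivial "null" driver that implements the interface. */
static int  null_open(void)                             { return 0; }
static long null_read(char *buf, size_t len)            { (void)buf; (void)len; return 0; }  /* always EOF */
static long null_write(const char *buf, size_t len)     { (void)buf; return (long)len; }     /* swallow data */
static int  null_ioctl(unsigned cmd, unsigned long arg) { (void)cmd; (void)arg; return -1; } /* no device knobs */

static const struct dev_ops null_dev = {
    .open = null_open, .read = null_read, .write = null_write, .ioctl = null_ioctl,
};

/* The generic I/O layer never needs to know which device this is. */
static long kernel_write(const struct dev_ops *dev, const char *buf, size_t len)
{
    return dev->write(buf, len);
}

int main(void)
{
    printf("wrote %ld bytes to the null device\n", kernel_write(&null_dev, "hello", 5));
    return 0;
}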

A device driver typically has two pieces: the top half and the bottom half. The kernel accesses the driver through the top half, which is invoked when the corresponding system calls are made and must implement the calls that are standard in its interface. The bottom half does the work "asynchronously"; when the operation is done, an interrupt is raised, and at that point the OS knows that the I/O device has completed an operation and whether the operation encountered an error.
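The following is a user-space simulation of that split, assuming POSIX threads: the "top half" blocks the caller on a condition variable, while a separate thread stands in for the device and its interrupt-driven bottom half and wakes the sleeper when the operation completes. Real drivers run in the kernel and use kernel sleep/wakeup primitives rather than pthreads.

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  done = PTHREAD_COND_INITIALIZER;
static int io_complete = 0;
static int io_error    = 0;

/* Stand-in for the device plus the bottom half (interrupt handler). */
static void *device_thread(void *arg)
{
    (void)arg;
    sleep(1);                          /* the "device" works asynchronously */
    pthread_mutex_lock(&lock);
    io_complete = 1;                   /* record completion status ... */
    io_error    = 0;                   /* ... and whether an error occurred */
    pthread_cond_signal(&done);        /* the "interrupt": wake the top half */
    pthread_mutex_unlock(&lock);
    return NULL;
}

/* Top half: called from the system-call path; blocks the caller until
 * the bottom half reports completion. */
static int driver_read_top_half(void)
{
    pthread_mutex_lock(&lock);
    while (!io_complete)
        pthread_cond_wait(&done, &lock);
    int err = io_error;
    pthread_mutex_unlock(&lock);
    return err;
}

int main(void)
{
    pthread_t dev;
    pthread_create(&dev, NULL, device_thread, NULL);
    printf("request submitted, waiting for completion...\n");
    printf("I/O finished with status %d\n", driver_read_top_half());
    pthread_join(dev, NULL);
    return 0;
}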

The kernel can also poll the device, as opposed to receiving an asynchronous interrupt. In many cases, this is implemented by the kernel periodically checking a device-specific status register, which is updated at a fine time granularity.
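A tiny sketch of what that polling loop looks like; the READY bit mask is hypothetical, and a plain variable stands in for the memory-mapped status register so the example is self-contained.

#include <stdint.h>
#include <stdio.h>

#define STATUS_READY 0x01u            /* assumed "operation complete" bit */

static void poll_until_ready(volatile uint32_t *status_reg)
{
    while ((*status_reg & STATUS_READY) == 0)
        ;                             /* busy-wait; a real kernel bounds this or yields */
}

int main(void)
{
    volatile uint32_t fake_status = 0;
    fake_status = STATUS_READY;       /* pretend the device just finished */
    poll_until_ready(&fake_status);
    printf("device reported ready\n");
    return 0;
}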

In practice, both polling and interrupts can be used at the same time. For example, in high-bandwidth networking devices, the first incoming packet raises an interrupt, and after that polling is used to receive all remaining packets.
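A rough sketch of that hybrid scheme, with stub functions standing in for the real device operations (all names here are hypothetical):

#include <stdbool.h>
#include <stdio.h>

/* Stubs simulating the device. */
static int pending_packets = 3;
static void disable_device_interrupts(void) { printf("interrupts off\n"); }
static void enable_device_interrupts(void)  { printf("interrupts on\n"); }
static bool packet_available(void)          { return pending_packets > 0; }
static void receive_one_packet(void)        { printf("received packet %d\n", 4 - pending_packets--); }

/* Invoked on the first packet's interrupt; then switch to polling. */
static void rx_interrupt_handler(void)
{
    disable_device_interrupts();      /* no more interrupts during the burst */
    while (packet_available())        /* poll: drain the whole burst */
        receive_one_packet();
    enable_device_interrupts();       /* re-arm for the next "first packet" */
}

int main(void)
{
    rx_interrupt_handler();           /* pretend the first packet just arrived */
    return 0;
}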

Disk

In many of our previous courses, we have covered the mechanical mechanisms of how a disk functions. From an OS point of view, a disk access goes through the following steps: queueing, controller overhead, seek, rotational latency, and data transfer.

Looking at queueing, it is no surprise that this is treated as a scheduling problem in the OS, and the goal is to make one physical disk appear as a virtualized disk that can concurrently serve many processes. There are many scheduling algorithms, such as FIFO, shortest seek time first, and SCAN (taking the closest request in the direction of travel). The mathematical tool behind this is queueing theory.
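As a concrete example, here is a short C sketch of shortest seek time first, where the head repeatedly moves to the closest pending request; the request queue and starting head position are made-up example data.

#include <stdio.h>
#include <stdlib.h>

/* Return the index of the closest pending request, or -1 if none remain. */
static int pick_sstf(const int *tracks, const int *served, int n, int head)
{
    int best = -1, best_dist = 0;
    for (int i = 0; i < n; i++) {
        if (served[i])
            continue;
        int dist = abs(tracks[i] - head);
        if (best < 0 || dist < best_dist) {
            best = i;
            best_dist = dist;
        }
    }
    return best;
}

int main(void)
{
    int tracks[] = { 98, 183, 37, 122, 14, 124, 65, 67 };   /* example queue */
    int n = sizeof tracks / sizeof tracks[0];
    int served[8] = { 0 };             /* same length as the example queue */
    int head = 53;                     /* example starting head position */

    for (int k = 0; k < n; k++) {
        int i = pick_sstf(tracks, served, n, head);
        printf("move %d -> %d (seek distance %d)\n", head, tracks[i], abs(tracks[i] - head));
        head = tracks[i];
        served[i] = 1;
    }
    return 0;
}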

The controller is a very efficient, simple, and reliable module. It serves one request at a time, and the overhead due to the controller is very low.

Seek is an operation that runs into physical limits. For spinning disks, the seek time is between 5 and 10 ms. The same physical law applies to rotational latency: on a 7200 RPM disk, one revolution takes roughly 8 ms. Transfer time depends on where you are reading on the disk, the recording density, the RPM, and the sector size. Using one disk, the bandwidth is typically in the range of 2 to 50 MB/second. Assuming 4 MB/second with 1 KB sectors (disk blocks), you would expect 4K sectors/second.

If you randomly read a disk block, aside from queueing time, the overheads on average are: seek time (5 ms), average rotational latency (4 ms at 7200 RPM), and transfer time (1/4096 of a second, roughly 0.25 ms). That amounts to about 100 KB/second. There is no way you can get the advertised bandwidth this way. To accelerate, you should make sure that a request does not pay for the seek time or the rotational latency. Therefore, the way to get the highest bandwidth is to transfer large groups of blocks sequentially from one track.
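To make the arithmetic explicit, here is a quick back-of-the-envelope calculation in C using the same figures as above (5 ms average seek, 7200 RPM, 4096 sectors/second, 1 KB blocks):

#include <stdio.h>

int main(void)
{
    double seek_ms     = 5.0;                 /* average seek */
    double rotation_ms = 60000.0 / 7200 / 2;  /* half a revolution at 7200 RPM, about 4.17 ms */
    double transfer_ms = 1000.0 / 4096;       /* one sector at 4096 sectors/second, about 0.24 ms */
    double total_ms    = seek_ms + rotation_ms + transfer_ms;

    /* One 1 KB block delivered every total_ms milliseconds. */
    printf("per-block time: %.2f ms\n", total_ms);
    printf("effective bandwidth: %.0f KB/second\n", 1000.0 / total_ms);
    return 0;
}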


Jian Huang / EECS / UTK / revised 01/2012