Answer to Question 1 -- 12 points

It's probably easiest to explain this in terms of the allocation algorithms. If the file is of fixed size and rather small (into the tens of blocks, I'd imagine), then contiguous allocation is best. All file operations are fast: sequential access is extremely fast, since the blocks are laid out contiguously, and random access is efficient as well. Since the files are small, there shouldn't be too many problems with fragmentation. As the files get bigger, external fragmentation becomes more of an issue, although sequential access will still be excellent. Interestingly, in a combined system like this one, you won't have to worry about external fragmentation as much, because the linked and indexed files will occupy the space that would otherwise be wasted under purely contiguous allocation. Contiguous allocation also copes poorly with changing file sizes.
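To make the random-access point concrete, here is a minimal C sketch (the struct and field names are hypothetical, not from any particular system): with contiguous allocation, mapping a logical block number to a physical one is just a bounds check and an addition.

    #include <stdio.h>

    /* Hypothetical descriptor for a contiguously allocated file:
     * just a starting block and a length. */
    struct contig_file {
        unsigned start;   /* first physical block */
        unsigned length;  /* number of blocks     */
    };

    /* Random access is O(1): one addition, no pointer chasing.
     * Returns -1 if the logical block is out of range. */
    long contig_block(const struct contig_file *f, unsigned logical)
    {
        if (logical >= f->length)
            return -1;
        return (long)(f->start + logical);
    }

    int main(void)
    {
        struct contig_file f = { .start = 1000, .length = 20 };
        printf("logical 7 -> physical %ld\n", contig_block(&f, 7));  /* 1007 */
        return 0;
    }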

If a file is dynamically sized, and either not too large or accessed mostly sequentially, then linked allocation is a good scheme. For good performance, the file allocation table should live on disk but be cached in memory, so that random access does not cost a disk operation per file block. However, if the files are very large, then random access involves too many pointer chases to be efficient.
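As a rough sketch of why the cached table helps, here is a toy C version (the table size and names are assumptions, not a real FAT layout): each pointer chase becomes a memory lookup instead of a disk read, though reaching the n-th logical block still takes n hops.

    #include <stdio.h>

    #define FAT_SIZE 16
    #define FAT_EOF  (-1)  /* end-of-chain marker */

    /* In-memory copy of the file allocation table: fat[b] is the
     * physical block that follows block b in its file's chain. */
    static int fat[FAT_SIZE];

    /* Follow the chain from the file's first block to the n-th
     * logical block. Still O(n) hops, but each hop is a memory
     * lookup rather than a disk read. Returns -1 past end of file. */
    int linked_block(int first, unsigned logical)
    {
        int b = first;
        while (logical-- > 0 && b != FAT_EOF)
            b = fat[b];
        return b;
    }

    int main(void)
    {
        /* Build a toy chain: 3 -> 9 -> 4 -> end. */
        for (int i = 0; i < FAT_SIZE; i++)
            fat[i] = FAT_EOF;
        fat[3] = 9;
        fat[9] = 4;

        printf("logical 2 -> physical %d\n", linked_block(3, 2));  /* 4 */
        return 0;
    }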

If a file is large and randomly accessed, then indexed allocation is the best scheme, because any file block can be found in a few pointer operations. Double and triple indirection can be used for extremely large files. For smaller files, the fixed overhead of the index block makes the other schemes preferable.
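Here is a minimal C sketch of the lookup cost (the pointers-per-block value and names are toy assumptions): single indexing resolves any block with one array access, and double indirection squares the capacity at the price of one extra hop.

    #include <stdio.h>

    #define PTRS_PER_BLOCK 4  /* toy value; really block_size / sizeof(ptr) */

    /* Hypothetical index block: an array of physical block numbers. */
    struct index_block {
        int ptr[PTRS_PER_BLOCK];
    };

    /* Single-level indexed lookup: one array access. */
    int indexed_block(const struct index_block *idx, unsigned logical)
    {
        if (logical >= PTRS_PER_BLOCK)
            return -1;
        return idx->ptr[logical];
    }

    /* Double indirection: an outer table points at inner index blocks,
     * growing capacity to n_inner * PTRS_PER_BLOCK for one extra hop. */
    int doubly_indexed_block(struct index_block *outer[], unsigned n_inner,
                             unsigned logical)
    {
        if (logical >= n_inner * PTRS_PER_BLOCK)
            return -1;
        return outer[logical / PTRS_PER_BLOCK]->ptr[logical % PTRS_PER_BLOCK];
    }

    int main(void)
    {
        struct index_block a = { { 10, 11, 12, 13 } };
        struct index_block b = { { 20, 21, 22, 23 } };
        struct index_block *outer[] = { &a, &b };

        printf("single, logical 2 -> %d\n", indexed_block(&a, 2));            /* 12 */
        printf("double, logical 5 -> %d\n", doubly_indexed_block(outer, 2, 5)); /* 21 */
        return 0;
    }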

Examples of files and their best allocation schemes: