Normally, the calculation translating a byte-position index i to a file-block location is straightforward and follows the familiar translation from virtual memory to physical memory:

    file block number = i div block size
    file block offset = i mod block size
assuming zero-origin indexing for both the block numbers and block offsets. However, when the file metadata are stored in the file blocks, the calculation becomes a bit more difficult, although not overly so.
First, some of the space in each disk block is taken up by metadata; assume the space used is given by metadata size, expressed in the same units as block size. Then the file block number is given by

    file block number = i div (block size - metadata size)
Second, depending on the design, the file block offset may have to be adjusted to skip over the metadata. A well-designed file system would place the metadata at the end of the block, in which case no skipping is necessary and the offset calculation keeps the same form as before:

    file block offset = i mod (block size - metadata size)
If the metadata is stored at the start of the block, it has to be skipped over:

    file block offset = (i mod (block size - metadata size)) + metadata size
The metadata could be dispersed throughout the file data, but this design is too difficult to contemplate.
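To make the arithmetic concrete, here is a minimal C sketch of the start-of-block case, assuming zero-origin indexing throughout; the names translate, block_size, and metadata_size are invented for illustration, not taken from any particular file system.

    /* Translate a byte-position index i into a (block number, offset)
       pair, assuming each block begins with metadata_size bytes of
       metadata that file data must skip over. */
    #include <stdio.h>

    struct block_loc { unsigned block; unsigned offset; };

    struct block_loc translate(unsigned i, unsigned block_size,
                               unsigned metadata_size)
    {
        unsigned data_size = block_size - metadata_size; /* usable bytes per block */
        struct block_loc loc;
        loc.block  = i / data_size;                  /* file block number */
        loc.offset = i % data_size + metadata_size;  /* skip the leading metadata */
        return loc;
    }

    int main(void)
    {
        /* With 512-byte blocks each holding 16 bytes of metadata,
           byte 1000 falls in block 2 at offset 24 (8 + 16). */
        struct block_loc loc = translate(1000, 512, 16);
        printf("block %u, offset %u\n", loc.block, loc.offset);
        return 0;
    }

Placing the metadata at the end of the block instead reduces the offset calculation to i mod data_size, with no additive adjustment.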
Explain a) the reasoning behind your colleague's analogy and b) why you agree or disagree with your colleague's reasoning.
When memory is managed in dynamically-sized segments, compaction is useful because it eliminates external fragmentation: All occupied storage moves to one end of the address space while all the unoccupied storage moves to the other end, where it is coalesced and allocated to waiting processes.
As an analogy with memory management, your colleague's conclusion isn't bad. Disks managed by file systems that allocate files contiguously are subject to external fragmentation, and compaction can help. However, unlike paged memory management, compaction is also useful when files are allocated block by block, for at least two reasons. First, compacting blocks at one end of the disk decreases arm motion, in both an absolute and a relative sense. Second, ambitious disk defragmenters can rearrange a file's disk blocks to make them contiguous (or more contiguous than they were previously), improving their transfer characteristics into and out of the disk.
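The segment-compaction idea itself is simple enough to sketch in a few lines of C; this is an illustrative toy, assuming a segment table of (base, size) entries sorted by base address, with all names invented.

    /* Slide every occupied segment toward address 0; the free space
       coalesces into a single hole at the high end of memory. */
    #include <stdio.h>
    #include <string.h>

    #define MEM_SIZE 64
    #define NSEGS    3

    struct segment { unsigned base, size; };

    static char memory[MEM_SIZE];

    void compact(struct segment segs[], int n)   /* segs sorted by base */
    {
        unsigned next = 0;                       /* next free address */
        for (int i = 0; i < n; i++) {
            memmove(&memory[next], &memory[segs[i].base], segs[i].size);
            segs[i].base = next;
            next += segs[i].size;
        }
    }

    int main(void)
    {
        /* Three segments scattered through memory, with external
           fragmentation between them. */
        struct segment segs[NSEGS] = { {10, 8}, {30, 4}, {50, 6} };
        compact(segs, NSEGS);
        for (int i = 0; i < NSEGS; i++)
            printf("segment %d: base %u, size %u\n", i, segs[i].base, segs[i].size);
        return 0;                                /* one hole remains, from 18 to 63 */
    }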
RAM disks work best when they implement the same file system as is implemented on the true secondary-storage devices; that way a process can switch between the RAM-disk file system and the regular file systems without change. On the other hand, disk-based file systems, such as ext2 or NTFS, are complex to implement and incur high storage overhead. Taking these facts into consideration, one useful approach is a simple file system that simulates the disk-based file systems in most particulars.
Sketch out the design of a RAM-disk file system compatible with a typical tree-based file system such as the Unix file system or NTFS. You needn't go into great detail; just describe and justify your choice of directory structure and your choice for representing files and their metadata.
The two principal RAM-disk characteristics that influence file system design are high I/O rates and small storage size. High I/O rates eliminate the system-device bottleneck, making possible techniques that formerly ran afoul of it. In the other direction, the small storage size works against techniques that rely on expansive storage and favors techniques that are space efficient.
The small amount of space available means that all the extravagances that go with a UNIX-style hierarchical directory system are too expensive: multiple directories, indirection through i-nodes, excess capacity for expansion, and so on. A flat directory structure would be more appropriate for its simplicity and minimal need for space. Hierarchical file names would seem to be a problem, but they are easily handled by treating each one as a single, long name with no other structure (these names also compress nicely under the prefix-compression scheme described in the text). Explicitly manipulating directories is a problem, but this can either be prohibited or simulated by a layer of software above the RAM-disk file system.
A simple linked list would be best for linking data blocks into files. Data-block access is fast, and the cost of traversing the linked list is small (and can be made smaller by saving a pointer into the list to remember the last data block accessed). Data blocks are kept in main memory, which makes the data-block size more or less arbitrary.
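The design boils down to a handful of data structures; the C fragment below is one possible rendering, with every type and field name (ramfile, data_block, and so on) an illustrative assumption rather than any real system's API. The flat directory is a single list searched by full-path name, and each file is a linked list of data blocks with a cached pointer to the last block accessed.

    /* Sketch of the flat RAM-disk structures described above. */
    #include <string.h>

    #define BLOCK_SIZE 1024        /* arbitrary, since blocks live in RAM */

    struct data_block {
        struct data_block *next;   /* linked allocation */
        char data[BLOCK_SIZE];
    };

    struct ramfile {
        char *name;                /* full hierarchical path stored as one
                                      flat string, e.g. "/a/b/c" */
        size_t size;
        struct data_block *blocks; /* head of the block list */
        struct data_block *last;   /* cached last block accessed, cutting
                                      the cost of list traversal */
        unsigned last_index;       /* block number of the cached block */
        struct ramfile *next;      /* flat directory: one linked list */
    };

    /* The whole directory is a single list searched by full-path name. */
    static struct ramfile *directory;

    struct ramfile *lookup(const char *path)
    {
        for (struct ramfile *f = directory; f != NULL; f = f->next)
            if (strcmp(f->name, path) == 0)
                return f;
        return NULL;
    }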
Encryption involves knowing a secret (the key) that can transform information from one form to another. If unique keys are assigned to individuals, then claims of identity can be verified by asking the claimant to translate known information using the key. If the claimant can transform the information into something recognizable, the claim to identity stands.
Alternatively, possession of an encryption key could be considered an authorization to access (via decryption) any information encrypted with the key.
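A toy C sketch makes the challenge-response idea concrete; the XOR transform below stands in for a real cipher (it is not secure), and all names are invented for illustration. The verifier checks that the claimant's transformation of known information matches what the genuine key would produce.

    #include <stdio.h>
    #include <string.h>

    /* Transform text under a shared secret key; applying the same
       transformation twice recovers the original, standing in for an
       encrypt/decrypt pair. */
    void transform(char *buf, size_t len, const char *key, size_t keylen)
    {
        for (size_t i = 0; i < len; i++)
            buf[i] ^= key[i % keylen];
    }

    int main(void)
    {
        const char *key = "sekrit";            /* the claimant's unique key */
        char challenge[] = "known plaintext";  /* information known to both */
        char expected[sizeof challenge], response[sizeof challenge];

        /* The verifier computes what a genuine key-holder would produce. */
        memcpy(expected, challenge, sizeof challenge);
        transform(expected, sizeof challenge, key, strlen(key));

        /* The claimant transforms the challenge with its own key; a
           match supports the claim of identity. */
        memcpy(response, challenge, sizeof challenge);
        transform(response, sizeof challenge, key, strlen(key));

        puts(memcmp(response, expected, sizeof challenge) == 0
                 ? "identity verified" : "verification failed");
        return 0;
    }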