Because the devices don't have mechanisms that would allow them to perform tasks independently (tasks such as moving data or signaling a change in state), they have to rely on the system's CPU to do everything. If the CPU is spending cycles managing devices, it can't be running processes. However, because processes do seem to be executing at a reasonable rate, as measured by the process throughput, it must be that there's a second CPU around to pick up the slack left by the device-managing CPU. The system has a multiprocessor hardware architecture.
Alternatively, you could argue that devices improve system performance by incorporating a CPU, which lets the system offload device-management functions onto the devices themselves. In the absence of device-resident CPUs, the host system has to provide the device-managing CPU; otherwise device management takes away from the system CPU's process-execution duties, decreasing performance.
The answers to this question were interesting, ranging from semaphores to a RAID system, neither of which could be considered a system hardware architecture.
a) When I discussed spooling and deadlock in class, I was talking about deadlock due to competition over the printer itself. Spooling - virtual printers - ensures that competition for access to printers will not result in deadlock. However, spooling only solves the deadlock problem with respect to printer access; all other shared resources - which are not managed by spooling - can still lead to deadlock, which is what the book describes.
The textbook's example of deadlock occurs when processes fill up the printer buffer with partial print requests so that no request can start printing (because every request is incomplete) and no request can be completed (because there's no space left in the buffer).
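For concreteness, here is a small, self-contained C sketch of that scenario; the block counts and names are invented for the illustration. Three processes spool five-block jobs into a ten-block buffer, the buffer fills with partial jobs, and no job can either start or finish.

    #include <stdio.h>

    #define SPOOL_BLOCKS 10   /* total blocks in the shared spool buffer   */
    #define NPROCS        3   /* processes writing print jobs              */
    #define JOB_BLOCKS    5   /* blocks a job needs before it can print    */

    int main(void)
    {
        int free_blocks = SPOOL_BLOCKS;
        int written[NPROCS] = {0};
        int p, progress = 1;

        /* Round-robin: each process spools one block at a time until no */
        /* one can make progress.  The buffer fills with partial jobs    */
        /* (4 + 3 + 3 blocks here), so no job can start printing and no  */
        /* job can be completed: the deadlock the textbook describes.    */
        while (progress) {
            progress = 0;
            for (p = 0; p < NPROCS; p++) {
                if (written[p] < JOB_BLOCKS && free_blocks > 0) {
                    free_blocks--;
                    written[p]++;
                    progress = 1;
                }
            }
        }

        for (p = 0; p < NPROCS; p++)
            printf("process %d spooled %d of %d blocks\n",
                   p, written[p], JOB_BLOCKS);
        printf("free blocks left: %d\n", free_blocks);
        return 0;
    }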
b) The deadlock prevention techniques are pre-emption, pre-declaration, release and retry, and cycle breaking.
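As an illustration of one of these techniques, here is a minimal sketch of release and retry using POSIX mutexes; the two-resource scenario and all names are assumptions made for the example, not taken from the course notes.

    #include <pthread.h>
    #include <unistd.h>

    pthread_mutex_t res_a = PTHREAD_MUTEX_INITIALIZER;
    pthread_mutex_t res_b = PTHREAD_MUTEX_INITIALIZER;

    /* Acquire two resources without ever blocking while holding one:   */
    /* if the second lock isn't free, release the first and start over, */
    /* so the hold-and-wait condition (and hence a cycle) never arises. */
    void acquire_both(void)
    {
        for (;;) {
            pthread_mutex_lock(&res_a);
            if (pthread_mutex_trylock(&res_b) == 0)
                return;                    /* holding both; go ahead */
            pthread_mutex_unlock(&res_a);  /* release ...            */
            usleep(1000);                  /* ... back off and retry */
        }
    }

    void release_both(void)
    {
        pthread_mutex_unlock(&res_b);
        pthread_mutex_unlock(&res_a);
    }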
Almost all the answers to a) were correct, although they could have been a bit more clearly stated. The answers to b) were less impressive, with people confusing "prevention" and "avoidance" as well as selecting among rebooting, prevention, avoidance, and detection and recovery.
It does not. A shared device is one that has no conflicts when undergoing concurrent access. A virtual device is a re-implementation of a non-sharable device to eliminate the problems that occur when the original device is accessed concurrently.
On the other hand, implementing a virtual-device interface to a sharable device lets you re-implement the device interface, to make it more convenient to use or to present a uniform device interface to the rest of the system (e.g., hiding an IDE device behind a SCSI-like virtual interface). In addition, implementing a virtual device moves device management from each individual device to a more global position in the operating system, which enables better decision making.
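To make the idea concrete, here is a hedged C sketch of a uniform virtual-device interface hiding a device-specific implementation behind it; the structure and function names (block_dev, ide_read, and so on) are illustrative only, not drawn from any particular system.

    #include <stdint.h>
    #include <stddef.h>

    /* The rest of the system programs against this interface only. */
    struct block_dev {
        int  (*read_block)(void *priv, uint32_t block, void *buf);
        int  (*write_block)(void *priv, uint32_t block, const void *buf);
        void *priv;                        /* device-specific state */
    };

    /* A device-specific implementation (say, IDE) hidden behind it. */
    static int ide_read(void *priv, uint32_t block, void *buf)
    {
        (void)priv; (void)block; (void)buf;
        /* ... program the IDE controller and wait for the data ... */
        return 0;
    }

    static int ide_write(void *priv, uint32_t block, const void *buf)
    {
        (void)priv; (void)block; (void)buf;
        /* ... program the IDE controller to write the block ... */
        return 0;
    }

    struct block_dev ide_as_generic = { ide_read, ide_write, NULL };

    /* Callers never know, or care, which physical device is underneath. */
    int read_sector(struct block_dev *d, uint32_t block, void *buf)
    {
        return d->read_block(d->priv, block, buf);
    }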
Most people voted no, with one voting yes.
I also mentioned that, in general, output requests could be expected to unlatch locks earlier than would input requests. Explain why this could be expected.
The problem is contention over shared resources. In this case, the buffer passed along with the system call is shared between the calling user process and the device driver responsible for the requested I-O operation. The buffer lock should be released as early as possible so that the calling process can issue as many I-O operations per unit time as possible, but releasing the buffer lock too early can result in unmediated concurrent access to the buffer, leading to data corruption.
In general, although it depends on implementation, output requests should result in the buffer lock being released earlier than would input requests. Given an output request, the device manager can copy the user data into a manager-local buffer, freeing up the user buffer in advance of the device itself being available for the requested operation.
On the other hand, given an input request, the device manager must wait for the device because that's where the data are coming from, and the buffer can be released only when the device has finished executing the request. For some input devices, such as a disk, it may be possible for the device manager to prefetch data, but in general that can't be relied on, and it doesn't work at all for devices such as modems and network interfaces.
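Here is a minimal, self-contained sketch of the two paths, using a POSIX mutex as a stand-in for the buffer lock; all names (user_buf, dev_write, dev_read) are hypothetical, and the device interaction is reduced to comments.

    #include <pthread.h>
    #include <string.h>
    #include <stdlib.h>

    struct user_buf {
        pthread_mutex_t lock;
        char data[512];
    };

    /* Output path: copy into a manager-local buffer, then release the  */
    /* user buffer immediately, well before the device is available.    */
    void dev_write(struct user_buf *ub, size_t n)
    {
        char *local = malloc(n);           /* manager-local staging buffer */
        if (local == NULL)
            return;

        pthread_mutex_lock(&ub->lock);
        memcpy(local, ub->data, n);        /* data now safe in the manager */
        pthread_mutex_unlock(&ub->lock);   /* early release: caller may reuse */

        /* ... hand `local` to the (hypothetical) device queue; the device */
        /* drains it whenever it becomes available ...                     */
        free(local);                       /* stand-in for "device done"   */
    }

    /* Input path: the data come from the device, so the user buffer must */
    /* stay locked until the device has finished filling it.              */
    void dev_read(struct user_buf *ub, size_t n)
    {
        pthread_mutex_lock(&ub->lock);
        /* ... the (hypothetical) device fills ub->data here; there is no  */
        /* earlier point at which the lock can safely be dropped ...       */
        memset(ub->data, 0, n);            /* stand-in for the device transfer */
        pthread_mutex_unlock(&ub->lock);   /* released only on completion  */
    }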
Most answers had a lot of ambiguous (and safe) references to "resources" without explicitly mentioning buffers, which was odd because, on re-reading my message, I saw that I had given the answer.