It does, because you can devote one of the CPUs to managing the devices.
Most answers were correct, with varying degrees of clarity.
Divide the tape drives evenly into two groups, called the first and second groups; if there's an odd number of tape drives, put the extra drive in the first group. Now require that each process get a tape drive from the first group before it can get a tape drive from the second group.
Every process that gets a tape drive from the first group will eventually get a tape drive from the second group: any process holding a second-group drive also holds a first-group drive, so it has everything it needs and will eventually release both drives. There is therefore no possibility of deadlock. This holds as long as there's at least one tape drive in the second group; adding more drives to the second group allows for greater concurrency, up to the smaller of the two group sizes.
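A minimal sketch of this discipline, assuming five drives split three and two and using POSIX counting semaphores, one per group, to stand in for the drive allocator (the sizes and names are illustrative, not part of the original answer):

    #include <semaphore.h>
    #include <stdio.h>

    #define FIRST_GROUP   3   /* e.g., 5 drives split 3 and 2 */
    #define SECOND_GROUP  2

    static sem_t first_free;    /* free drives in the first group  */
    static sem_t second_free;   /* free drives in the second group */

    /* A process needing two drives must take one from the first
       group before it may take one from the second group. */
    static void acquire_two_drives(void)
    {
        sem_wait(&first_free);    /* always the first group first */
        sem_wait(&second_free);   /* only first-group holders wait here */
    }

    static void release_two_drives(void)
    {
        sem_post(&second_free);
        sem_post(&first_free);
    }

    int main(void)
    {
        sem_init(&first_free, 0, FIRST_GROUP);
        sem_init(&second_free, 0, SECOND_GROUP);

        acquire_two_drives();
        puts("holding one drive from each group");
        release_two_drives();

        sem_destroy(&second_free);
        sem_destroy(&first_free);
        return 0;
    }

Because only processes already holding a first-group drive ever wait on the second group, every waiter is waiting on a process that can finish.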
None of the answers to this question were correct. Many answers didn't bother with resource ordering at all. Those that did put the tape drives in a linear order, which, at best, limits concurrency and, at worst, still allows deadlock (although, in fairness, most answers weren't clear enough to determine whether deadlock was possible).
If the job scheduler is able to divine the types of non-CPU resources required by a program, it could bias its program selection towards programs that require under-used resources, or at least avoid selecting programs that require heavily used resources. The information about current resource load would come from the resource manager.
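A toy sketch of such a bias, assuming the resource manager reports a per-resource load figure and each program declares which resources it needs; the data, names, and scoring rule are made up for illustration:

    #include <stdio.h>

    #define NUM_RESOURCES 3               /* e.g., tape, disk, printer */

    /* Per-resource load (0-100) as reported by the resource manager;
       the values here are invented. */
    static const int resource_load[NUM_RESOURCES] = { 90, 20, 55 };

    struct program {
        const char *name;
        int needs[NUM_RESOURCES];         /* 1 if the program uses the resource */
    };

    /* Total load on the resources a program needs; the scheduler
       prefers the program with the lowest score. */
    static int load_score(const struct program *p)
    {
        int score = 0;
        for (int r = 0; r < NUM_RESOURCES; r++)
            if (p->needs[r])
                score += resource_load[r];
        return score;
    }

    int main(void)
    {
        struct program candidates[] = {
            { "tape-heavy job", { 1, 0, 0 } },
            { "disk-only job",  { 0, 1, 0 } },
            { "disk+print job", { 0, 1, 1 } },
        };
        int best = 0;
        for (int i = 1; i < 3; i++)
            if (load_score(&candidates[i]) < load_score(&candidates[best]))
                best = i;
        printf("schedule next: %s\n", candidates[best].name);
        return 0;
    }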
Most answers to this question were correct.
try-take() is a conditional version of the take() binary-semaphore operation. Rather than blocking when s is false, s.try-take() returns false to indicate that s is unavailable; when s is available, s.try-take() returns true to indicate that the calling process has seized s.

One motivation for implementing try-take() is to allow optimizations not possible with take(). Give an example of such an optimization.
There are two main optimizations, one at the application level and the other at the system level. At the application level, try-take() allows an application to do other things while it waits for a semaphore to become available; the penalty for such concurrency is an increased chance that some other process will grab the semaphore first.
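A sketch of the application-level pattern, modeling try-take() with POSIX sem_trywait(), which likewise returns immediately instead of blocking; the retry loop and the do_other_work() placeholder are assumptions for illustration:

    #include <errno.h>
    #include <semaphore.h>
    #include <stdio.h>

    static sem_t s;

    static void do_other_work(void)
    {
        /* placeholder for work the process can do while s is unavailable */
    }

    int main(void)
    {
        sem_init(&s, 0, 1);

        /* Instead of blocking in take(), keep doing useful work between
           attempts; another process may seize s before the next retry. */
        while (sem_trywait(&s) == -1 && errno == EAGAIN)
            do_other_work();

        puts("seized the semaphore");
        sem_post(&s);
        sem_destroy(&s);
        return 0;
    }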
At the system level, checking the semaphore for availability is a read-only operation, and can be performed in a way that doesn't require entry into the kernel (which is not to suggest it's an unprotected operation). Once try-take() determines that the semaphore is available, it can then enter the kernel and actually try for the semaphore.
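A sketch of that fast path, assuming the kernel exports a word of semaphore state that user code may read; the flag, the enter_kernel_and_take() stand-in, and the rest of the scaffolding are invented for illustration, and a real implementation would still re-check the state under kernel protection:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdio.h>

    /* A word of semaphore state readable from user space; in a real
       system the kernel would export and maintain it. */
    static _Atomic bool s_available = true;

    /* Stand-in for the protected path: in a real implementation this
       would be a system call, and the kernel would re-check the state
       under protection before granting the semaphore. */
    static bool enter_kernel_and_take(void)
    {
        return atomic_exchange(&s_available, false);
    }

    static bool try_take(void)
    {
        /* Read-only fast path: no kernel entry when s is clearly taken. */
        if (!atomic_load(&s_available))
            return false;

        /* s looked free; only now pay for the kernel entry, which may
           still fail if another process got there first. */
        return enter_kernel_and_take();
    }

    int main(void)
    {
        printf("first try-take:  %d\n", try_take());   /* 1: seized      */
        printf("second try-take: %d\n", try_take());   /* 0: unavailable */
        return 0;
    }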
Most answers described the application-level non-blocking optimization; none described the kernel optimization.
This page last modified on 2 November 2003.