After implementing the fair time-slice scheduler, your colleague's system essentially grinds to a halt: the average job completion time skyrockets, and system throughput flatlines near zero. How do you explain this behavior to your colleague?
Your colleague's ready queue most likely consists of IO-bound processes: processes that gain the CPU and almost immediately issue an IO request. Because these processes use almost no CPU time, their quantum is largely untouched, so each one re-enters the ready queue near the head, starving the processes further back in the queue.
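To make the failure mode concrete, here is a toy simulation of that policy (all process names and numbers are invented): a process that blocks before exhausting its quantum re-enters near the head, and the CPU-bound job at the tail never runs.

    # Toy model of the starvation described above; names and numbers invented.
    from collections import deque

    ready = deque(["io-1", "io-2", "io-3", "cpu-1"])   # cpu-1 sits at the tail

    for step in range(8):
        proc = ready.popleft()                 # dispatch the head of the queue
        if proc.startswith("io"):
            # Uses almost none of its quantum, issues an IO request, and
            # re-enters near the head rather than at the tail.
            ready.insert(1, proc)
        else:
            ready.append(proc)                 # a CPU-bound job would rotate
        print(f"step {step}: ran {proc}, queue now {list(ready)}")

Running this, only io-1 and io-2 ever get the CPU; io-3 and cpu-1 starve, exactly as described.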
Some people confused a process's quantum with its total execution time. Others tried to argue about the size of the total execution time, which doesn't really matter here.
Let the execution time of process pi be Ei. In strictly sequential scheduling, the total execution time for the two processes is

    (a)    E1 + S + E2

where S is the cost of context switching from p1 to p2.
In a time-slicing scheduler with quantum q, a process with execution time Ei incurs floor(Ei/q) context switches, where floor(x) is the largest integer not greater than x. This analysis assumes that the process is compute-bound, staying resident on the CPU until its quantum expires. The analysis can be adjusted for IO-bound processes by reducing the quantum size to the processes' average CPU residency time.
Assuming the scheduler always time-slices, independent of the number of processes in the run queue, the total execution time for both processes is

    (b)    S * floor((E1 + E2)/q) + E1 + E2

Comparing (a) and (b) shows that time-slicing pays a context-switching overhead of S * floor((E1 + E2)/q), against the single S paid by sequential execution.
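A quick numeric check of (a) and (b), with made-up values E1 = 300, E2 = 500, q = 100, and S = 1 (units arbitrary):

    # Worked check of formulas (a) and (b); all numbers are invented.
    from math import floor

    E1, E2, q, S = 300, 500, 100, 1

    sequential  = E1 + S + E2                          # (a) = 801
    time_sliced = S * floor((E1 + E2) / q) + E1 + E2   # (b) = 808

    overhead = S * floor((E1 + E2) / q)                # 8 context switches
    print(sequential, time_sliced, overhead)           # 801 808 8

Eight slices of 100 units cover the 800 units of work, so time-slicing pays for eight context switches where sequential execution pays for one.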
Most of the derived equations were either close to this or well off the mark. The most common mistake was to divide each Ei by q and then multiply their sum by q again, which just gave back the original execution-time sum.
Is hand-off scheduling more useful in a system that uses message passing to implement inter-process communication (IPC), or in a system that uses shared memory IPC? Explain.
In either case, the argument is roughly the same: a hand-off scheduler lets the blocking process suggest to the scheduler that it run the just-unblocked process next, which decreases overall execution time for the cooperating processes.
In a message-passing system, you could argue that hand-offs aren't necessary because the hand-off target is easily identified by the address given in the message. Shared memory, on the other hand, is anonymous, particularly when shared among more than two processes, so the OS could use a hint about exactly which process is blocked on a particular shared-memory communication.
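As a sketch of what such a hint might look like (the interface is hypothetical, not any particular kernel's API), a hand-off scheduler can simply prefer the suggested target when it is ready:

    # Hypothetical hand-off interface: a blocking process may pass a hint
    # naming the process it just unblocked.
    from collections import deque

    class HandoffScheduler:
        def __init__(self, procs):
            self.ready = deque(procs)

        def pick_next(self, hint=None):
            # Honor the hand-off hint if the target is ready; otherwise
            # fall back to plain FIFO dispatch.
            if hint is not None and hint in self.ready:
                self.ready.remove(hint)
                return hint
            return self.ready.popleft()

    sched = HandoffScheduler(["A", "B", "C"])
    print(sched.pick_next(hint="C"))   # C runs next, jumping over A and B
    print(sched.pick_next())           # back to FIFO: A

In the message-passing case the kernel could fill in the hint itself from the message's destination address; in the shared-memory case the blocking process has to supply it.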
Given that the question could be answered either way, it should not be surprising that most people got it mostly right.
The principal advantage of using kernel threads in a single process is the ability to replace expensive IPC with easily and cheaply shared resources. Because these are kernel threads, there isn't a big savings in system-call overhead, but certain operations, context switches in particular, can be cheaper when they occur between kernel threads of the same process.
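A minimal illustration of that trade, using Python's threading module (whose threads are kernel threads on the major platforms): two threads in one process update a shared counter directly, where two separate processes would need a pipe, socket, or shared-memory segment plus the attendant system calls.

    # Two kernel threads sharing state directly; no IPC machinery required.
    import threading

    counter = 0
    lock = threading.Lock()

    def work():
        global counter
        for _ in range(100_000):
            with lock:             # ordinary in-process synchronization
                counter += 1

    threads = [threading.Thread(target=work) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)                 # 200000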
Most people got this question right, but too many forgot to mention the advantage of intra-process threads over IPC.
This page last modified on 18 October 2003.