Test 1 - Processes

CS 438, Operating Systems Concepts


  1. A time-slice scheduler is unfair in the sense that a process can be ejected from the CPU before its quantum has expired. A colleague of yours, recognizing this unfairness, has implemented a fair time-slice scheduler with the following behavior: a process ejected from the CPU after using n% of its quantum is inserted into the ready queue so that only n% of the ready queue is ahead of it. For example, suppose the ready queue contains 10 processes and the executing process is ejected after using only 10% of its quantum. The ejected process would be inserted into the ready queue so that 10% of the queue was ahead of it; that is, it would be inserted just after the head of the queue. A process ejected after using half its quantum would be inserted into the ready queue at the half-way point (with five processes ahead of it), and a process using all its quantum would be inserted at the end of the ready queue (100% of the queue is ahead of it).

    After implementing the fair time-slice scheduler, your colleague's system essentially grinds to a halt: the average job completion time skyrockets, and system throughput flatlines near zero. How do you explain this behavior to your colleague?


    Your colleague's ready queue most likely consists of IO-bound processes: processes that gain the CPU and immediately issue an IO request. Because these processes use almost no CPU time, their quanta are largely untouched, and so each one re-enters the ready queue near the head, starving the processes further back in the queue.
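
    As a minimal sketch of the insertion rule (assuming a simple array-backed ready queue; the names ready_queue, insert_fair, used_pct, and struct pcb are hypothetical, not from the exam):

        /* Insert an ejected process so that roughly used_pct percent of the
         * ready queue is ahead of it.  used_pct is the percentage of its
         * quantum the process consumed, in the range 0..100. */
        #include <assert.h>

        #define MAX_READY 64

        struct pcb { int pid; };

        static struct pcb *ready_queue[MAX_READY];
        static int ready_count = 0;

        void insert_fair(struct pcb *p, int used_pct)
        {
            assert(ready_count < MAX_READY && used_pct >= 0 && used_pct <= 100);

            /* Position 0 is the head; used_pct == 100 lands at the tail. */
            int pos = (used_pct * ready_count) / 100;

            for (int i = ready_count; i > pos; i--)   /* shift the tail back */
                ready_queue[i] = ready_queue[i - 1];
            ready_queue[pos] = p;
            ready_count++;
        }

    An IO-bound process that used, say, 5% of its quantum gets a position near 0 every time it unblocks, which is exactly the starvation described above.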

    Some people confused a process's quantum with its total execution time. Others tried to argue about the size of the total execution time, which doesn't really matter here.


  2. Let p1 and p2 be compute-bound processes. Derive a formula for the overhead in total execution time of time-slicing p1 and p2 with a quantum of q when compared to just running p1 to completion followed by p2. Don't forget to clearly label and identify your parameters.


    Let the execution time of process pi be Ei. In strictly sequential scheduling, the total execution time is

        (a)  E1 + S + E2

    where S is the cost of context switching from p1 to p2.

    In a time-slicing scheduler with quantum q, a process with execution time Ei incurs floor(Ei/q) context switches, where floor(x) is the largest integer not greater than x. This analysis assumes that the process is compute-bound, staying resident on the CPU until its quantum expires. The analysis can be adjusted for IO-bound processes by reducing the quantum size to the average CPU residency time for the processes.
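
    For instance (illustrative numbers, not from the exam), a process with Ei = 35 ms and q = 10 ms has its quantum expire at 10, 20, and 30 ms, incurring floor(35/10) = 3 context switches before it finishes part-way through its fourth quantum.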

    Assuming the scheduler always time-slices, independent of the number of processes in the run queue, the total execution time for both processes is

        (b)  S * floor((E1 + E2)/q) + E1 + E2

    Comparing (a) and (b) shows the context-switching overhead of time-slicing is

        (b) - (a)  =  S * floor((E1 + E2)/q) - S  =  S * (floor((E1 + E2)/q) - 1)
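
    A minimal numeric sketch of the derivation (the parameter values are illustrative only, not part of the exam):

        /* Overhead of time-slicing two compute-bound processes versus
         * running them back to back, using formulas (a) and (b) above. */
        #include <stdio.h>

        int main(void)
        {
            double E1 = 400.0;   /* execution time of p1 (ms) -- illustrative */
            double E2 = 250.0;   /* execution time of p2 (ms) -- illustrative */
            double S  = 0.5;     /* context-switch cost (ms)  -- illustrative */
            double q  = 10.0;    /* quantum (ms)              -- illustrative */

            double sequential = E1 + S + E2;                  /* formula (a) */
            long   switches   = (long)((E1 + E2) / q);        /* floor       */
            double sliced     = S * switches + E1 + E2;       /* formula (b) */

            printf("sequential = %.1f ms, time-sliced = %.1f ms, overhead = %.1f ms\n",
                   sequential, sliced, sliced - sequential);
            return 0;
        }

    With these numbers floor((E1 + E2)/q) = 65, so the overhead is S * (65 - 1) = 32 ms, matching the formula above.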

    Most of the equations derived were either close to correct or far off the mark. The most common mistake was to divide each Ei by q and then multiply their sum by q again, which gave back the original execution-time sum.


  3. A hand-off scheduler is a scheduler (of any kind) that accepts hints from the process leaving the CPU about which process should next occupy the CPU (that is, the outgoing process gets the chance to hand off the CPU to the incoming process).

    Is hand-off scheduling more useful in a system that uses message passing to implement inter-process communication (IPC), or in a system that uses shared memory IPC? Explain.


    In either case, the argument is roughly the same: a hand-off scheduler allows the process giving up the CPU to suggest that the scheduler next run its communication partner (the process it has just unblocked, or the one it is waiting on), which decreases the overall execution time for the cooperating processes.

    In a message-passing system, you could argue that hand-offs aren't necessary because the blocked process is easily identifiable from the address on the message. Shared memory, on the other hand, is anonymous, particularly when shared among more than two processes, and so the OS can use a hint about exactly which process is blocked on a particular shared-memory communication.
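
    As a sketch of how such a hint might look on the shared-memory side (the calls fill_buffer, shm_post, and sched_handoff, and the type struct shared_buf, are hypothetical, not part of any real kernel API):

        #include <sys/types.h>

        /* Hypothetical interfaces -- declarations only, for illustration. */
        struct shared_buf;                          /* region shared by both processes */
        void fill_buffer(struct shared_buf *buf);   /* write the shared data           */
        void shm_post(struct shared_buf *buf);      /* unblock the waiting consumer    */
        int  sched_handoff(pid_t next);             /* hint: run this process next     */

        /* Producer side of a shared-memory exchange: after filling the buffer
         * and waking the consumer, hint that the consumer should run next
         * rather than going back through the generic ready queue. */
        void produce(struct shared_buf *buf, pid_t consumer)
        {
            fill_buffer(buf);
            shm_post(buf);
            (void) sched_handoff(consumer);
        }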

    Given that this was a question that could be answered either way, it should not be surprising that most people got this question mostly right.


  4. In an operating system with kernel threads, is there any advantage to structuring a user computation as multiple threads in one process instead of multiple processes? Explain.


    The principal advantage of using kernel threads in a single process is the ability to replace expensive IPC with easily and cheaply shared resources. Because these are kernel threads, there isn't a big saving in system-call overhead, but certain system calls, context switches between threads in particular, can be cheaper because threads in the same process share an address space.
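
    A minimal sketch of the shared-resource point, using POSIX threads (illustrative only, not part of the exam answer):

        /* Two kernel threads in one process update a shared counter directly;
         * two separate processes would need shared memory or message passing
         * (and the associated system calls) to do the same thing. */
        #include <pthread.h>
        #include <stdio.h>

        static long counter = 0;                     /* shared by both threads */
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        static void *worker(void *arg)
        {
            (void) arg;
            for (int i = 0; i < 100000; i++) {
                pthread_mutex_lock(&lock);
                counter++;                           /* plain memory access, no IPC */
                pthread_mutex_unlock(&lock);
            }
            return NULL;
        }

        int main(void)
        {
            pthread_t t1, t2;
            pthread_create(&t1, NULL, worker, NULL);
            pthread_create(&t2, NULL, worker, NULL);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            printf("counter = %ld\n", counter);      /* prints 200000 */
            return 0;
        }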

    Most people got this question right, but too many forgot to mention the advantage intra-process threads have over IPC.



This page last modified on 18 October 2003.