Test 1 - Processes

CS 438, Operating Systems Concepts


  1. A hand-off scheduler is a scheduler (of any kind) that accepts hints from the process leaving the CPU about which process should next occupy the CPU (that is, the outgoing process gets the chance to hand off the CPU to the incoming process).

    Is hand-off scheduling more appropriate for a system that supports inter-process communication, or for a system that does not support inter-process communication? Explain.


    In a system that doesn't support inter-process communication, processes are isolated from one another. In such an environment, you would expect that the advice one process could give about another wouldn't be particularly useful.

    However, in a system with IPC, communicating processes can give an important piece of information to the operating system: which other process is needed to complete the communication. The OS can use this to schedule a process which is almost guaranteed to be ready to run, increasing responsiveness and system throughput.
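
    As an illustration, here is a minimal sketch of the idea, assuming a simple FIFO ready queue of PIDs in which the scheduler promotes a hinted communication partner to the head of the queue; all names here are invented for the example:

        /* Hypothetical hand-off scheduler over a FIFO ready queue. */
        #include <stdio.h>

        #define QMAX 16

        static int ready_q[QMAX];   /* PIDs waiting for the CPU, head first */
        static int q_len = 0;

        /* Remove the PID at index i, shifting the rest forward. */
        static void q_remove(int i) {
            for (; i < q_len - 1; i++)
                ready_q[i] = ready_q[i + 1];
            q_len--;
        }

        /* Append a PID at the tail of the ready queue. */
        static void q_push(int pid) {
            if (q_len < QMAX)
                ready_q[q_len++] = pid;
        }

        /* The outgoing process names the peer it just sent a message to.
         * If the hinted peer is on the ready queue, dispatch it next;
         * otherwise fall back to plain FIFO order. */
        static int yield_with_hint(int outgoing, int hint) {
            q_push(outgoing);                /* outgoing goes to the tail */
            for (int i = 0; i < q_len; i++) {
                if (ready_q[i] == hint) {
                    q_remove(i);
                    return hint;
                }
            }
            int next = ready_q[0];           /* no usable hint: FIFO */
            q_remove(0);
            return next;
        }

        int main(void) {
            q_push(2); q_push(3); q_push(4);
            /* PID 1 sends to PID 4, then yields, hinting at the receiver. */
            printf("next to run: %d\n", yield_with_hint(1, 4)); /* prints 4 */
            return 0;
        }

    Without the hint, PID 2 would run next and PID 4 would wait behind two other processes before it could receive the message.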

    Most people got this right. Most of those who didn't tried to argue that a system without IPC could use extra help in scheduling, but didn't really say what kind of extra help the processes could provide through hand-off scheduling.


  2. Let p1 and p2 be compute-bound processes. Derive a formula for the overhead in total execution time of time-slicing p1 and p2 with a quantum of q when compared to just running p1 to completion followed by p2.


    Let the execution time of process pi be Ei. In strictly sequential scheduling, the total execution time is

        (a)  E1 + S + E2

    where S is the cost of context switching from p1 to p2.

    In a time-slicing scheduler with quantum q, a process with execution time Ei incurs floor(Ei/q) context switches, where floor(x) returns the largest integer not greater than x. This analysis assumes that the process is compute-bound, staying resident on the CPU until its quantum expires. The analysis can be adjusted for IO-bound processes by reducing the quantum size to the average CPU residency time of the processes.

    Assuming the scheduler always time-slices, independent of the number of processes in the run queue, the total time for executing both processes is

        (b)  S * floor((E1 + E2)/q) + E1 + E2

    Comparing (a) and (b) shows the overhead for context switching is

        S * (floor((E1 + E2)/q) - 1)

    (the sequential schedule already pays for one context switch).
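
    As a sanity check, a short C program can evaluate (a), (b), and the overhead; the values E1 = 95 ms, E2 = 45 ms, q = 10 ms, and S = 0.1 ms are invented for the example:

        #include <math.h>
        #include <stdio.h>

        int main(void) {
            double E1 = 95.0, E2 = 45.0;  /* execution times, ms */
            double q = 10.0;              /* quantum, ms */
            double S = 0.1;               /* context-switch cost, ms */

            double seq    = E1 + S + E2;                         /* (a) */
            double sliced = S * floor((E1 + E2) / q) + E1 + E2;  /* (b) */

            printf("sequential:  %.1f ms\n", seq);             /* 140.1 */
            printf("time-sliced: %.1f ms\n", sliced);          /* 141.4 */
            printf("overhead:    %.1f ms\n", sliced - seq);    /* 1.3   */
            return 0;
        }

    Here floor((E1 + E2)/q) = 14, so the overhead is S * (14 - 1) = 1.3 ms, matching the formula above.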

    Most of the derived equations were either close to correct or far off. The most common mistake was to divide each Ei by q and then multiply their sum by q again, which just gave back the original execution-time sum.


  3. A time-slice scheduler is unfair in the sense that a process could get ejected from the CPU before its quantum has expired. A colleague of yours, recognizing this unfairness, has implemented a fair time-slice scheduler with the following behavior:

        A process ejected from the CPU after using n% of its quantum is inserted into the ready queue so that only n% of the ready queue is ahead of it.

    For example, suppose the ready queue contains 10 processes and the executing process is ejected after using only 10% of its quantum. The ejected process would be inserted into the ready queue so that 10% of the queue was ahead of it; that is, it would be inserted just after the head of the queue. A process ejected after using half its quantum would be inserted into the ready queue at the half-way point, and a process using all its quantum would be inserted at the end of the ready queue (100% of the queue is ahead of it).

    After implementing the fair time-slice scheduler, your colleague's system runs pretty much as it did before. What kind of jobs are running on your colleague's system? Explain.


    Your colleague's ready queue most likely consists of CPU-bound processes: processes that gain the CPU stay for their full quantum. Because these processes use all their CPU time, they re-enter the ready queue at the end, effectively implementing round-robin scheduling.
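
    A small sketch of the insertion rule makes the degenerate case concrete; the array-based queue and function names are hypothetical:

        #include <stdio.h>

        #define QMAX 32

        static int ready_q[QMAX];
        static int q_len = 0;

        /* Insert pid so that pct_used% of the existing queue is ahead
         * of it. pct_used = 100 (a full quantum) lands it at the tail,
         * which is exactly round-robin. */
        static void fair_insert(int pid, int pct_used) {
            int pos = (q_len * pct_used) / 100;  /* entries ahead of pid */
            for (int i = q_len; i > pos; i--)
                ready_q[i] = ready_q[i - 1];
            ready_q[pos] = pid;
            q_len++;
        }

        int main(void) {
            for (int pid = 1; pid <= 10; pid++)  /* ten waiting processes */
                fair_insert(pid, 100);
            fair_insert(11, 10);   /* used 10%: one entry stays ahead */
            fair_insert(12, 100);  /* used it all: goes to the tail */
            for (int i = 0; i < q_len; i++)
                printf("%d ", ready_q[i]);  /* 1 11 2 3 4 5 6 7 8 9 10 12 */
            printf("\n");
            return 0;
        }

    With a purely CPU-bound workload, every call passes pct_used = 100, so the scheduler never exercises anything but the round-robin case.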

    Many people had the general idea for this question, but were unable to identify the processes as being compute-bound.


  4. In an operating system with kernel threads, is there any advantage to structuring a user computation as multiple threads in one process instead of multiple processes? Explain.


    The principal advantage of using kernel threads in a single process is the ability to replace expensive IPC with easily and cheaply shared resources. Because these are kernel threads, there is no big savings in system-call overhead, but certain system calls, context-switching ones in particular, can be cheaper when kernel threads are involved.
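
    A short pthreads sketch illustrates the sharing advantage: the two threads below update the same counter directly, with only a mutex for coordination, where two separate processes would need a pipe, socket, or shared-memory segment. The program is illustrative only:

        #include <pthread.h>
        #include <stdio.h>

        static long counter = 0;   /* shared by both threads for free */
        static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

        static void *worker(void *arg) {
            (void)arg;
            for (int i = 0; i < 100000; i++) {
                pthread_mutex_lock(&lock);    /* cheap in-process sync */
                counter++;
                pthread_mutex_unlock(&lock);
            }
            return NULL;
        }

        int main(void) {
            pthread_t t1, t2;
            pthread_create(&t1, NULL, worker, NULL);
            pthread_create(&t2, NULL, worker, NULL);
            pthread_join(t1, NULL);
            pthread_join(t2, NULL);
            printf("counter = %ld\n", counter);   /* 200000 */
            return 0;
        }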

    Most people got this question right, but too many forgot to mention the advantage of intra-process threads over IPC.



This page last modified on 18 October 2003.