Because the graph is a tree, it has no cycles. Because it is finite, there exists some n (which is no more than the number of states in the graph) such that any path ends after at most n state transitions.
A pre-emptive scheduler creates a cycle in the process-state graph between the ready and running states. The absence of cycles in the state graph means the scheduler must be non-pre-emptive.
You could imagine a scheduler that, for example, pre-empts a process at most twice; say once for input at the start of execution and again for output at the end of execution. This would result in a tree-shaped process-state graph. However, because this is a scheduler for a general-purpose system, it is easy to imagine a process that triggers the pre-emption condition more than the fixed number of times, which a tree-shaped graph cannot represent.
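The cycle argument above can be checked mechanically. The sketch below (state names and graph layout are illustrative, not from the text) runs a depth-first search over a process-state graph: the non-pre-emptive graph is a tree, while adding the pre-emptive running-to-ready edge introduces the cycle.

```python
# Minimal sketch: detect a cycle in a process-state graph with
# three-color depth-first search (WHITE = unvisited, GRAY = on the
# current path, BLACK = fully explored).

def has_cycle(graph):
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}
    for start in graph:
        if color[start] != WHITE:
            continue
        color[start] = GRAY
        stack = [(start, iter(graph[start]))]
        while stack:
            node, children = stack[-1]
            for child in children:
                if color[child] == GRAY:      # back edge: cycle found
                    return True
                if color[child] == WHITE:
                    color[child] = GRAY
                    stack.append((child, iter(graph[child])))
                    break
            else:                             # all successors explored
                color[node] = BLACK
                stack.pop()
    return False

# Illustrative state graphs (simplified; blocking states omitted).
non_preemptive = {"new": ["ready"], "ready": ["running"],
                  "running": ["exit"], "exit": []}
preemptive = {"new": ["ready"], "ready": ["running"],
              "running": ["ready", "exit"], "exit": []}  # pre-emption edge

print(has_cycle(non_preemptive))  # False: the graph is a tree
print(has_cycle(preemptive))      # True: ready and running form a cycle
```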
On a uniprocessor system, the answer must be yes; on a multi-processor system, the answer can be no.
On uniprocessor systems, the put() call is non-blocking, so it requires no service from the scheduler (although you could argue that the scheduler might like to know when the stone is put back in the bowl so it can deal with waiting processes). The get() call can block, which can be implemented either by spin-locking or by blocking the process. Blocking the process clearly requires the services of the scheduler. However, spin-locking also requires the services of the scheduler, at least indirectly: a process spin-locking on a non-pre-emptive system locks up the system, because nothing can move it out of the CPU. A semaphore implementing get() with a spin lock must have a pre-emptive scheduler to ensure that progress can be made.
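A minimal sketch of the spin-locking get() may make the dependence on pre-emption concrete. The Bowl class below is hypothetical (not from the text); the test-and-set primitive is modeled with a non-blocking Lock.acquire(), and the spin loop only terminates because the scheduler can pre-empt the spinner and let the stone holder run.

```python
# Sketch of the one-stone bowl with a spin-locking get().  The spin
# loop relies on pre-emption: on a non-pre-emptive uniprocessor, a
# spinner could never be moved out of the CPU and the loop would hang.
import threading

class Bowl:
    def __init__(self):
        self._stone = threading.Lock()    # held == stone is absent

    def get(self):
        # Spin until the test-and-set succeeds.
        while not self._stone.acquire(blocking=False):
            pass                          # pre-emption lets the holder run

    def put(self):
        self._stone.release()             # non-blocking: no scheduler help

bowl = Bowl()
result = []

def worker(name):
    bowl.get()
    result.append(name)                   # critical section
    bowl.put()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# All four workers eventually pass through the critical section.
```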
On multiprocessor systems, you can argue that un-pre-empted spin locking is acceptable given the number of CPUs available, and involvement with the scheduler is unnecessary (although possibly helpful). You might want to argue that all the CPUs could be engaged in get() spin-locking; however, that's not usually considered an OS problem, but rather a problem in the application that's deadlocked the system with spin locks.
The principal advantage to scheduling batch jobs over interactive jobs is the amount and quality of the information available about batch-job execution behavior. The more and better the information available about a process's execution behavior, the better the decisions the scheduler can make, and the easier it is to make those decisions. Because batch jobs tend to be less I-O driven than interactive jobs, it is easier to derive complete and accurate execution-time information about batch jobs than it is for interactive jobs.
A second scheduling advantage of batch jobs over interactive jobs is time insensitivity. Batch jobs tend to have global time considerations at large grain: throughput, turn-around time, and the like. Interactive jobs have moment-to-moment time considerations at small grain: interactivity, response time, and the like. Because batch time constraints can be met with schedulers simpler than those needed for interactive time constraints, scheduling batch jobs is more efficient, in terms of work required per result, than scheduling interactive jobs.
A third scheduling advantage of batch jobs over interactive jobs is the batch jobs' orientation towards the CPU and the interactive jobs' orientation towards I-O. A batch job can be shoved into the CPU with a relatively large quantum (if that's being used) and left alone until the quantum expires. An interactive job needs a small quantum to ensure good interactive performance, and it probably needs to be interrupted (both into and out of the CPU) on I-O. The relative lack of scheduling overhead when compared to interactive jobs makes batch jobs more efficient.
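The quantum-size point can be illustrated with a quick worked example (the numbers are assumed for illustration, not from the text): the fraction of CPU time lost to context switching is roughly switch-cost / (quantum + switch-cost), so a large batch quantum wastes far less than a small interactive one.

```python
# Illustrative arithmetic: context-switch overhead as a fraction of
# CPU time, for a large (batch) vs. a small (interactive) quantum.
# The 0.1 ms switch cost and the quantum sizes are assumptions.

def overhead(quantum_ms, switch_ms):
    return switch_ms / (quantum_ms + switch_ms)

switch = 0.1                           # assumed cost of one switch, ms
batch = overhead(100.0, switch)        # large quantum: ~0.10% overhead
interactive = overhead(5.0, switch)    # small quantum: ~1.96% overhead
print(f"batch: {batch:.2%}, interactive: {interactive:.2%}")
```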
A process scheduler can do little about a particular process's I-O behavior, apart from handling it as best as possible when it occurs. The problem almost certainly lies with the job scheduler, which admitted at one time too many processes competing for the same resource.
This page last modified on 13 October 2002.