This test has six questions. The test runs the entire class period, starting at 7:25 and ending at 9:15. When you finish the test, you may leave.
Always justify answers unless the question explicitly states it’s not necessary. Some questions require reasonable assumptions in order to be answered. In such cases the answer should clearly state the assumptions made.

• Aging
• Interactive processes
• Non-pre-emptive scheduling
• Round-robin scheduling
• Starvation
• Aging: A time-based technique for compensating entities being resource starved by scheduling decisions.
• Interactive processes: A class of I-O bound processes characterized by long system residency, medium I-O wait times, and short CPU residency.
• Non-pre-emptive scheduling: Scheduling without the ability to unilaterally separate an entity and its allocated resource (typically the CPU).
• Round-robin scheduling: A pre-emptive scheduler that allocates a resource to an entity for a fixed time (a quantum) and then moves on to the next entity. The pre-empted entity is returned to the wait queue, hence “round-robin.”
• Starvation: A scheduler property which causes an entity to wait an unbounded, possibly infinite, amount of time to be allocated a resource.
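The aging definition can be made concrete with a small sketch (not part of the test; all names, and the aging rate, are illustrative). In a hypothetical priority scheduler, raising an entity’s effective priority in proportion to its wait-queue residence bounds waiting and so prevents starvation:

```python
# Sketch of aging in a hypothetical priority scheduler.
# Higher numbers mean higher priority.

def effective_priority(base_priority, wait_time, aging_rate=1.0):
    """Waiting raises an entity's effective priority over time."""
    return base_priority + aging_rate * wait_time

def pick_next(ready, now):
    """ready: list of (name, base_priority, enqueue_time) tuples.
    Picks the entity with the highest aged priority, so a low-priority
    entity that has waited long enough eventually wins."""
    return max(ready, key=lambda e: effective_priority(e[1], now - e[2]))
```

At time 10, a base-priority-1 entity enqueued at time 0 (aged priority 11) beats a base-priority-3 entity enqueued at time 9 (aged priority 4).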
The answers, approximately verbatim, and in no particular order:
• Aging: In a priority-based scheduler, the act of raising the priority of a process that has not yet received CPU time based on amount of time in the wait queue.
• Interactive processes: Processes that are heavily dependent on real-time I-O, and they are less CPU intensive but require frequent on-demand CPU time.
• Non-pre-emptive scheduling: A scheduler in which a process is loaded into the CPU and is not kicked out until it terminates.1
• Round-robin scheduling: A scheduler that shares CPU-time between processes by allotting a time-slice (quantum) to allow processing before interruption.
• Starvation: The act of preventing a process from continuing execution due to the unavailability of a resource for an unacceptable period of time.
1: Or yields the CPU voluntarily.
• Aging: Change priorities2 of a process usually in a priority scheduler.
• Interactive processes: Processes that run in short bursts w/ some sort of IO or other task between the bursts.
• Non-pre-emptive scheduling: Scheduling that ignores factors like execution time, or priority and do not evict processes.
• Round-robin scheduling: Uses any parameter of eviction algorithm3 and will put evicted processes at end of queue.
• Starvation: Long residency time tasks being overtaken by smaller tasks like in a shortest job first scheduler.
2: Changing priorities for what purpose?
3: “any parameter of eviction algorithm”? What’s that?
• Aging: As an entity’s wait time increases, it ages. Scheduler may increase priority if entity ages too much.
• Interactive processes: Processes in the foreground, requiring user input, not high utilization.
• Non-pre-emptive scheduling: Form of scheduling that doesn’t evict,4 easy to implement but poor at handling misbehaving processes.
• Round-robin scheduling: Form of scheduling with uniform residency times, ejected entities added to the end of the queue. Simple, but requires more timers.
• Starvation: Common in shortest-job first, long residency entity will constantly be pushed back in queue by shorter residency times.
4: Doesn’t evict what?
• Aging: [nothing]5
• Interactive processes: Processes that have the ability to change mid-cycle6 in order to work in the most efficient manner possible.
• Non-pre-emptive scheduling: Scheduling that executes tasks (processes) to completion before moving onto other tasks. There is no eviction in non-pre-emptive scheduling.
• Round-robin scheduling: A scheduling technique that utilizes a queue and any eviction algorithm. If a task is not completed before it is evicted then it moves to the end of the queue.
• Starvation: When specific tasks have a long wait time before execution due to the selection algorithm being used by the scheduler.
5: Always answer every question.
6: To change what mid-cycle? And what is the cycle?
• Aging: Increasing priority7 after some time8 to avoid starvation.
• Interactive processes: Foreground processes that require some I/O,9 and generally have higher priority.10
• Non-pre-emptive scheduling: A process executing in the CPU will stay until it or its time limit11 is up, even if a higher priority process enters the queue.12
• Round-robin scheduling: Each process gets 1/n, n = # of processes, of CPU time of at most 1 time quantum.13
• Starvation: When a process sits too long in the waiting queue.
7: Increasing priority on what?
8: What kind of time? You mean like time executing in the CPU?
9: Almost all processes require I-O. What’s so special about foreground processes? And what are foreground processes?
10: Higher priority than what?
11: What time limit? Like in round-robin scheduling?
12: Which queue? There’s lots of them.
13: The part after “CPU time” doesn’t make much sense.
• Aging: A process or thread gaining priority inside a scheduler.14
• Interactive processes: A process that depends and waits on user input to execute tasks.15 These processes generally have high priority, but don’t have to.
• Non-pre-emptive scheduling: A scheduling architecture that does not interrupt a process until it has finished executing. The scheduler switches to another process when the current process goes from running to waiting or the process terminates.
• Round-robin scheduling: A pre-emptive scheduling algorithm that will also switch processes if the running process has exceeded a specified time quantum.
• Starvation: A scheduled task that takes a while to complete because the task keeps being switched out in favor of other processes.16
14: Or in a queue.
15: Repeatedly, or just once?
16: But the task still makes progress?
A single scheduler is unlikely to provide both good throughput and good responsiveness. Good throughput requires long, uninterrupted residence in the CPU while good responsiveness requires a quick trip from I-O waiting to the CPU through the ready queue. Long CPU residence means slow trips through the ready queue, which is bad for responsiveness. Quick trips through the ready queue means frequent context switches, which is bad for throughput.
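The trade-off can be put in back-of-the-envelope form (a sketch, not part of the test): with quantum q and context-switch cost s, the CPU does useful work q/(q+s) of the time, while under round-robin a ready process may wait (n−1)(q+s) for its turn among n processes. Growing the quantum helps the first quantity (throughput) and hurts the second (responsiveness):

```python
def cpu_utilization(quantum, switch_cost):
    """Fraction of CPU time spent on useful work rather than switching."""
    return quantum / (quantum + switch_cost)

def worst_case_response(quantum, switch_cost, n_ready):
    """Longest a ready process can wait for the CPU under round-robin."""
    return (n_ready - 1) * (quantum + switch_cost)
```

With a switch cost of 1, growing the quantum from 10 to 100 raises utilization from about 91% to about 99%, but multiplies the worst-case response delay by roughly nine: the two goals pull in opposite directions.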
The answers, approximately verbatim, and in no particular order:
1: True, but what would the responsiveness be in such a system? Would there be a relation between throughput and responsiveness? Why?
2: And what happens to throughput when the jobs get larger?
3: And is it your argument that responsiveness is good in this case?
4: Ok, but the problem asks about the relation between responsiveness and throughput, not wait times.
5: Which is starvation; what about responsiveness and throughput?
Round-robin processing is better for responsiveness due to it having set quantum, so when that expires, the next set of jobs are processed. 80% of jobs should be able to execute before the allotted quantum expires, which would give a decent throughput as well, but it isn’t directly causal.7
6: Why is SJF best for throughput but poor for responsiveness?
7: So you’re saying it’s possible to provide good throughput and responsiveness?
8: How is picking the most sensible task for throughput detrimental to responsiveness?
9: Or queue?
10: But isn’t that just getting through the ready queue as quickly as possible? It sounds like scheduling for good throughput provides a quick ready queue.
11: Why is this assumption useful?
If you have many short processes, and one long process, your throughput will be high,13 but responsiveness will be lower once that long process gets on the CPU.
12: Ok, but what about in this particular case?
13: Why is throughput high in this case?
Poor throughput and starvation for CPU-intensive tasks under shortest-job next scheduling suggests a stream of shorter jobs are cutting in front of the CPU-intensive tasks in the ready queue. One possible fix is to age the CPU-intensive jobs by reducing their expected job time in proportion to their queue residence time. Another fix is to replace shortest-job next scheduling with round-robin scheduling, which eliminates starvation and improves throughput for CPU-intensive tasks, although probably at the expense of the throughput for other tasks due to increased context-switch overhead (and the CPU-intensive throughput would not be as good as it could be). A third solution is to move to a multi-level queue scheduler (with feedback or not) with two queues, one for CPU-intensive tasks and the other for other tasks. The two queues could be non-preemptively scheduled in proportion to the desired level of throughput.
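The first fix, aging CPU-intensive jobs by shrinking their expected job times, can be sketched as follows (the decay rate k is an illustrative parameter, not from the problem):

```python
def pick_aged_sjn(ready, now, k=0.5):
    """ready: list of (name, expected_time, enqueue_time) tuples.
    Shortest-job-next selection on an aged job time: the expected time is
    reduced in proportion to queue residence, so a long job that has
    waited long enough eventually looks 'shortest' and cannot starve."""
    return min(ready, key=lambda j: j[1] - k * (now - j[2]))
```

At time 10, a 100-unit job enqueued at time 0 has aged time 95 and still loses to a fresh 5-unit job; by time 200 its aged time has reached 0 and it finally wins, which is exactly the anti-starvation behavior the fix is after.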
The answers, approximately verbatim, and in no particular order:
I would try to increase throughput by increasing the time quantum for each executing process on the CPU, hopefully completing more processes per time quantum2 rather than sending uncompleted processes back to the waiting queue.
1: Wait — what happened to throughput?
2: That was a clumsy phrase. I suspect you meant “allowing processes to complete in fewer quanta.”
Or, replace SJF with round-robin so every task gets the same quantum and if the process doesn’t finish, there will be another chance after all other tasks get a chance to execute.
3: The scheduler could run in short bursts? Does this mean relative to the other queue, or something else?
4: But what would such a scheme do for the CPU-intensive tasks? Starvation is, I guess, eased, but what about throughput?
Or, change to round-robin, with quantums that allow all the shorter jobs to execute, but also avoid the starvation problem with long service time.5
5: What about throughput?
2: Another solution would be to incorporate multilevel queue scheduling. By doing this you essentially create several queues that each handle a specific type of task.7 Grouping similar tasks together and using separate scheduling techniques that work best for each group8 would eliminate starvation since longer tasks would be on the same playing field as shorter tasks. And throughput would be higher since each group of similar tasks would be using a selection algorithm that works best for the given task type.9
6: Just a quantum, or would the scheduler have to change too?
7: What would these task types be?
8: Which scheduling techniques would these be?
9: What kind of scheduler would that be?
In priorities with pre-emptive time quantum, each time a task takes longer than the time quantum, the task will be switched, but gain priority. Eventually the task priority will exceed priority of remaining process time.10
In lottery pre-emptive time quantum, as tasks exceed the time quantum they are awarded more lottery tickets. Each task starts with a certain amount of tickets but as a task gains more tickets it will have a greater chance of being selected to run next.11
10: This helps throughput or starvation?
11: Again, better throughput or less starvation? And aren’t these two approaches just variations on assigning priorities?
Approach 2: Introduce a multi-level queue system. We add a second queue to the scheduler and fill it exclusively with CPU-intensive processes. Then, by having the scheduler get and run processes from this queue 30% of the time, we can allow moderate throughput for large tasks while still prioritizing short ones.
False. Whether or not the priority scheduler is pre-emptive depends on the scheduling invariant it maintains. For example, a priority scheduler implementing first-come first-serve scheduling doesn’t need pre-emption, while a traditional scheduling invariant (the CPU-resident entity has a priority no less than any priority in the ready queue) does require pre-emption to maintain.
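The invariant-maintenance view can be sketched directly (an illustration, not part of the test; higher number = higher priority): under the traditional invariant, pre-emption is required exactly when an arrival leaves a higher priority in the ready queue.

```python
def invariant_holds(running_priority, ready_priorities):
    """Traditional invariant: the CPU-resident entity's priority is no
    less than any priority in the ready queue."""
    return all(running_priority >= p for p in ready_priorities)

def must_preempt(running_priority, ready_priorities):
    """Pre-emption is needed exactly when an arrival breaks the invariant."""
    return not invariant_holds(running_priority, ready_priorities)
```

A first-come first-serve priority scheduler never calls `must_preempt` at all, which is why that variant needs no pre-emption.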
The answers, approximately verbatim, and in no particular order:
1: Or select the task that satisfies the scheduling invariant.
2: Ok, but is that true for all cases? And what happens for those other cases?
3: Re-arranging processes in a queue doesn’t have much to do with pre-emption.
In this manner, the scheduler wouldn’t have to switch out processes.
4: True, but would such a scheduler be considered a priority scheduler?
5: High-priority pre-emption is one use for priority schedulers; is it the only one?
In pre-emptive priority scheduling, once a scheduler finds a higher priority task to run, the scheduler can stop the task currently in the CPU and switch out the task for the higher priority one.7
6: Ok, but is that a priority scheduler?
7: Ok, but is pre-emptive priority scheduling the only form of priority scheduling, or can there be other, non-pre-emptive forms?
No, a NDN scheduler generally won’t produce minimal wait times. Assuming processes are handled well enough so that missing deadlines aren’t a problem, the relation between a process’s deadline and its execution time is too weak to produce minimal wait times. In particular, the only thing a NDN scheduler knows about a process’s execution time is that the execution time is no greater than the deadline. Producing minimal wait times, as does SJN, requires knowing exactly the execution time, not just an upper-bound on the execution time.
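A small worked example (hypothetical jobs, not from the test) shows the gap. When deadlines do not track execution times, nearest-deadline-next ordering can run a long job first and inflate the average wait that SJN would minimize:

```python
def avg_wait(times):
    """Average wait for a non-preemptive run of jobs given their execution
    times in run order; each job waits for all the jobs ahead of it."""
    wait = elapsed = 0
    for t in times:
        wait += elapsed
        elapsed += t
    return wait / len(times)

# (exec_time, deadline) pairs; each deadline is only an upper bound
# on the job's execution time.
jobs = [(2, 50), (10, 12), (4, 40)]
sjn = [t for t, _ in sorted(jobs)]                      # run order 2, 4, 10
ndn = [t for t, d in sorted(jobs, key=lambda j: j[1])]  # run order 10, 4, 2
```

Here SJN averages (0 + 2 + 6)/3 ≈ 2.67 while NDN averages (0 + 10 + 14)/3 = 8, because the long job’s near deadline pushes it to the front.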
The answers, approximately verbatim, and in no particular order:
1: What does “or any process after” mean?
2: Why is this true?
3: What is it about the difference causes the problem?
4: Up to the deadline, anyway.
5: It’s true that SJN and NDN run different task sets differently, but what do they do given the same task set with equivalent task lengths and deadlines?
6: Assuming non-pre-emptive what?
7: What is the relation between deadlines and run times, which is what produces wait times in the ready-queued processes?
8: How is this different from deadlines? Deadlines maintain their same relative order (time passes the same for all deadlines) in the ready queue.
9: The minimal wait-time property usually assumes a fixed set of processes; overtaking isn’t a problem.
10: So why not kill that process, since it’s missed its deadline?
[diagram]
11: A long process with a near deadline sounds like a problem in any case.
12: Is throughput why SJN scheduling produces minimal wait times?
13: What is a “residency time-intensive process”?
14: How could a deadline-limited task starve?
Examples: [a table]
In SJN: [a table]
In NDN: [a table]
15: There has to be some correlation between CPU times and deadlines, otherwise it wouldn’t make sense to have deadlines.
16: At least until the process is killed for running past its deadline.
Shortest-job next scheduling provides good wait times by ordering the ready queue by increasing job times, but the process-type mix is a difficulty, as is starvation for compute-bound processes. One alternative is to move to multi-queue scheduling. Assuming a single-core CPU, having three ready queues segregated by CPU-residency times (short, intermediate, and long, for example) would eliminate starvation and improve the wait times for larger tasks, at the cost of worsening wait times for shorter tasks. But apart from inter-queue scheduling, multiple queues are just a variant of SJN. Further adjustments could be made by moving to a quantum-based scheduler and adjusting the quantum to be proportional to the task size (that is, small tasks get small quanta).
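The inter-queue part of this alternative can be sketched as weighted selection among the queues (queue contents and weights below are illustrative, not from the problem):

```python
import itertools

def make_multiqueue_picker(queues, weights):
    """queues: list of FIFO lists of job names; weights: parallel list of
    integer shares. Cycles through queue indices in proportion to the
    weights; within a queue, jobs run non-preemptively in FIFO order."""
    pattern = [i for i, w in enumerate(weights) for _ in range(w)]
    turn = itertools.cycle(pattern)
    def pick():
        # Give each slot in the pattern one chance, skipping empty queues.
        for _ in range(len(pattern)):
            i = next(turn)
            if queues[i]:
                return queues[i].pop(0)
        return None  # all queues empty
    return pick
```

With weights (2, 1, 1), the short-job queue gets half the slots, so short jobs drain quickly while the long-job queue still makes guaranteed progress, which is the proportional-throughput behavior the answer describes.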
The answers, approximately verbatim, and in no particular order:
1: SJN scheduling where? For all queues? Or to choose among queues (which, admittedly, doesn’t make much sense).
2: What kind of feedback?
3: Maximize utilization of what?
4: From where do the priority assignments come? What are they supposed to achieve?
5: What would the priorities mean?
6: Would the SJF scheduler need or use a quantum?
[ diagram ]
7: Wouldn’t longer processes cutting in front of shorter processes have a bad effect on wait times?
8: Perhaps, but those aren’t properties the problem asks you to consider.
9: CPU residency, or queue residency?
10: How would that process be characterized? Is it even possible to pick such a process?
11: Throughput is not a characteristic this problem asks you to consider.
12: Another characteristic with an unclear relation to wait times.
13: How would the priorities be set to achieve good wait times? Are such priorities possible?
However, rather than have a single queue, which could lead to indefinite wait times for long processes, there will be three queues, one for sub-20 ms processes, one for 20-60 ms processes, and one for 60+ ms processes.15 Each process queue gets 50%, 30%, and 20% CPU time respectively. In this way, as short jobs come in, they get run through the short job queue very quickly, weighing the average towards short wait times. Long CPU processes eventually get completed, but since there will be inherently less of them,16 they will only loosely pull the average wait time higher.
14: Probably?
15: What scheduling discipline applies to each queue?
16: You mean by assumption?