Operating Systems • CS 438

Test 2, 2 October 2014


This test has six questions. The test runs the entire class period, starting at 7:25 and ending at 9:15. When you finish the test, you may leave.

Always justify answers unless the question explicitly states it’s not necessary. Some questions require reasonable assumptions in order to be answered. In such cases the answer should clearly state the assumptions made.
  1. Keeping in mind that giving an example of X is not defining X, briefly (in one or two sentences, 25 to 30 words total) define:

    • Aging

    • Interactive processes

    • Non-pre-emptive scheduling

    • Round-robin scheduling

    • Starvation


    • Aging:
A time-based technique for compensating entities being starved of resources by scheduling decisions.

    • Interactive processes:
    A class of I-O bound processes characterized by long system residency, medium I-O wait times, and short CPU residency.

    • Non-pre-emptive scheduling:
    Scheduling without the ability to unilaterally separate an entity and its allocated resource (typically the CPU).

    • Round-robin scheduling:
    A pre-emptive scheduler that allocates a resource to an entity for a fixed time (a quantum) and then moves on to the next entity. The pre-empted entity is returned to the wait queue, hence “round-robin.”

    • Starvation:
    A scheduler property which causes an entity to wait an unbounded, possibly infinite, amount of time to be allocated a resource.
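The round-robin definition above can be made concrete with a small simulation. This is an illustrative sketch only (the burst times and quantum in the example are made up), not any real dispatcher:

```python
from collections import deque

def round_robin(burst_times, quantum):
    """Simulate round-robin scheduling: each process runs for at most one
    quantum, and a pre-empted process rejoins the tail of the ready queue.
    Returns the order in which processes (by index) finish."""
    ready = deque(enumerate(burst_times))
    finished = []
    while ready:
        pid, remaining = ready.popleft()
        if remaining <= quantum:
            finished.append(pid)                      # done within its slice
        else:
            ready.append((pid, remaining - quantum))  # pre-empted, re-queued
    return finished
```

Note that with a quantum larger than every burst time, `round_robin([3, 1, 4], 10)` degenerates to first-come first-served order.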

    The answers, approximately verbatim, and in no particular order:

    1. • Aging:
      In a priority-based scheduler, the act of raising the priority of a process that has not yet received CPU time based on amount of time in the wait queue.

      • Interactive processes:
      Processes that are heavily dependent on real-time I-O, and they are less CPU intensive but require frequent on-demand CPU time.

      • Non-pre-emptive scheduling:
      A scheduler in which a process is loaded into the CPU and is not kicked out until it terminates.1

      • Round-robin scheduling:
      A scheduler that shares CPU-time between processes by allotting a time-slice (quantum) to allow processing before interruption.

      • Starvation:
The act of preventing a process from continuing execution due to the unavailability of a resource for an unacceptable period of time.

      1: Or yields the CPU voluntarily.

    2. • Aging:
      Change priorities2 of a process usually in an priority scheduler.

      • Interactive processes:
      Processes that run in short bursts w/ some sort of IO or other task between the bursts.

      • Non-pre-emptive scheduling:
      Scheduling that ignores factors like execution time, or priority and do not evict processes.

      • Round-robin scheduling:
      Uses any parameter of eviction algorithm3 and will put evicted processes at end of queue.

      • Starvation:
      Long residency time tasks being overtaken by smaller tasks like in a shortest job first scheduler.

      2: Changing priorities for what purpose?

      3: “any parameter of eviction algorithm”? What’s that?

    3. • Aging:
      As a entity wait time increases, it ages. Scheduler may increase priority if entity ages too much.

      • Interactive processes:
      Processes in the foreground, requiring user input, not high utilization.

      • Non-pre-emptive scheduling:
      Form of scheduling that doesn’t evict,4 easy to implement but poor at handling misbehaving processes.

      • Round-robin scheduling:
      Form of scheduling with uniform residency times, ejected entities added to the end of the queue. Simple, but requires more timers.

      • Starvation:
      Common in shortest-job first, long residency entity will constantly be pushed back in queue by shorter residency times.

      4: Doesn’t evict what?

    4. • Aging:
      [nothing]5

      • Interactive processes:
      Processes that have the ability to change mid-cycle6 in order to work in the most efficient manner possible.

      • Non-pre-emptive scheduling:
      Scheduling that executes tasks (processes) to completion before moving onto other tasks. There is no eviction in non-pre-emptive scheduling.

      • Round-robin scheduling:
A scheduling technique that utilizes a queue and any eviction algorithm. If a task is not completed before it is evicted then it moves to the end of the queue.

      • Starvation:
      When specific tasks have a long wait time before execution due to the selection algorithm being used by the scheduler.

      5: Always answer every question.

6: To change what mid-cycle? What is the cycle?

    5. • Aging:
      Increasing priority7 after some time8 to avoid starvation.

      • Interactive processes:
      Foreground processes that require some I/O,9 and generally have higher priority.10

      • Non-pre-emptive scheduling:
      A process executing in the CPU will stay until it or its time limit11 is up, even if a higher priority process enters the queue.12

      • Round-robin scheduling:
      Each process gets 1/n, n = # of processes, of CPU time of at most 1 time quantum.13

      • Starvation:
      When a process sits too long in the waiting queue.

      7: Increasing priority on what?

      8: What kind of time? You mean like time executing in the CPU?

      9: Almost all processes require I-O. What’s so special about foreground processes? And what are foreground processes?

      10: Higher priority than what?

      11: What time limit? Like in round-robin scheduling?

      12: Which queue? There’s lots of them.

      13: The part after “CPU time” doesn’t make much sense.

    6. • Aging:
      A process or thread gaining priority inside a scheduler.14

      • Interactive processes:
      A process that depends and waits on user input to execute tasks.15 These processes generally have high priority, but don’t have to.

      • Non-pre-emptive scheduling:
      A scheduling architecture that does not interrupt a process until it has finished executing. The scheduler switches to another process when the current process goes from running to waiting or the process terminates.

      • Round-robin scheduling:
      A pre-emptive scheduling algorithm that will also switch processes if the running process has exceeded a specified time quantum.

      • Starvation:
      A scheduled task that takes a while to complete because the tasks keeps being switched out in favor of other processes.16

      14: Or in a queue.

      15: Repeatedly, or just once?

      16: But the task still makes progress?

  2. A colleague of yours isn’t sure about the relation between scheduling for good throughput and scheduling for good responsiveness. In particular, your colleague believes that a scheduler providing one property (good throughput, for example) automatically provides the other (good responsiveness, in this example). Your colleague, recognizing your expertise in CPU scheduling, asks for your opinion. What do you say to your colleague?


    A single scheduler is unlikely to provide both good throughput and good responsiveness. Good throughput requires long, uninterrupted residence in the CPU while good responsiveness requires a quick trip from I-O waiting to the CPU through the ready queue. Long CPU residence means slow trips through the ready queue, which is bad for responsiveness. Quick trips through the ready queue means frequent context switches, which is bad for throughput.
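The tension can be seen with a back-of-the-envelope model. This is a rough sketch with invented parameters (job counts, lengths, and context-switch costs are all hypothetical), not a measurement of any real system:

```python
def quantum_tradeoff(n_jobs, job_len, quantum, switch_cost):
    """Sketch of the throughput/responsiveness tension for n identical
    CPU-bound jobs under quantum-based scheduling. A small quantum means
    a quick first trip through the ready queue (good responsiveness) but
    more context switches (overhead that hurts throughput)."""
    slices = -(-job_len // quantum)                # quanta needed per job (ceiling)
    total = n_jobs * job_len + n_jobs * slices * switch_cost
    throughput = n_jobs / total                    # jobs completed per time unit
    # Worst-case wait before the last-queued job gets its first slice:
    first_response = (n_jobs - 1) * (min(quantum, job_len) + switch_cost)
    return throughput, first_response
```

Shrinking the quantum improves the first-response time while lowering throughput, and vice versa, which is the trade-off described above.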

    The answers, approximately verbatim, and in no particular order:

    1. My colleague assuming that there is a direct relationship between responsiveness + throughput is not correct. Because his system has good throughput doesn’t define anything, it could even mean that his processes are extremely small in which any scheduler would provide good throughput.1 However in the case of shortest job first scheduling, the throughput is high (because of short jobs) but as the jobs get larger, the responsiveness goes down.2 If his system hits a job in need of many resources, then the throughput will suffer assuming there is no eviction algorithm in place.3

      1: True, but what would the responsiveness be in such a system? Would there be a relation between throughput and responsiveness? Why?

      2: And what happens to throughput when the jobs get larger?

      3: And is it your argument that responsiveness is good in this case?

    2. A scheduler that has good throughput does not necessarily have good responsiveness. A shortest-job-first scheduler is a scheduler that runs the queued processes in order of shortest to longest estimated job length, which provides for a very short average wait time.4 However, the scheduler does not consider time in queue when deciding which process to run next, so a medium-to-long job request can wait indefinitely without responding.5

      4: Ok, but the problem asks about the relation between responsiveness and throughput, not wait times.

      5: Which is starvation; what about responsiveness and throughput?

    3. This is not necessarily true. Throughput is based on jobs/time, and in the case of scheduling, SJF would be best for throughput, but is poor on responsiveness.6

Round-robin processing is better for responsiveness due to it having a set quantum, so when that expires, the next set of jobs are processed. 80% of jobs should be able to execute before the allotted quantum expires, which would give a decent throughput as well, but it isn’t directly causational.7

      6: Why is SJF best for throughput but poor for responsiveness?

      7: So you’re saying it’s possible to provide good throughput and responsiveness?

    4. Considering that throughput and responsiveness are properties that are, for the most part, unrelated, I would tell my colleague that his/her opinion is not necessarily correct. Throughput is how many jobs that a system is able to complete per second. Schedulers that have the goal of maximizing throughput typically focus on picking a task that, at a given moment, makes the most sense to execute8 so that in the end the pool9 of tasks finishes in the quickest time. Responsiveness, on the other hand, is the time between I-O completion and execution in response to that. In order to have good responsiveness the scheduler’s mentality changes from getting as many tasks through the CPU as quick as possible to coordinating I-O completion and execution in an effective way.10

      8: How is picking the most sensible task for throughput detrimental to responsiveness?

      9: Or queue?

10: But isn’t that just getting through the ready queue as quickly as possible? It sounds like scheduling for good throughput provides a quick ready queue.

    5. A scheduler providing good throughput does not automatically provide for good responsiveness. For example, a scheduler that is scheduling based on shortest remaining time will have high throughput if there are a lot of small tasks being scheduled. This will cause larger tasks such as I-O bursts to constantly be pushed back. These I-O tasks will therefore cause poor responsiveness. Assuming shortest remaining time is non-pre-emptive.11

      11: Why is this assumption useful?

    6. In this example, my colleague would be wrong. In general, if one property is good, it does not guarantee the other properties will be good as well.12

      If you have many short processes, and one long process, your throughput will be high,13 but responsiveness will be lower once that long process gets on the CPU.

      12: Ok, but what about in this particular case?

      13: Why is throughput high in this case?

2. An operating system uses a shortest-job next scheduler for CPU scheduling, and the CPU-intensive tasks are experiencing poor throughput and starvation. Describe two different approaches to, if not eliminating, easing the CPU-intensive tasks’ scheduling woes. Justify your approaches.


Poor throughput and starvation for CPU-intensive tasks under shortest-job next scheduling suggests that a stream of shorter jobs is cutting in front of the CPU-intensive tasks in the ready queue. One possible fix is to age the CPU-intensive jobs by reducing their expected job time in proportion to their queue residence time. Another fix is to replace shortest-job next scheduling with round-robin scheduling, which eliminates starvation and improves throughput for CPU-intensive tasks, although probably at the expense of the throughput for other tasks due to increased context-switch overhead (and the CPU-intensive throughput would not be as good as it could be). A third solution is to move to a multi-level queue scheduler (with feedback or not) with two queues, one for CPU-intensive tasks and the other for other tasks. The two queues could be non-preemptively scheduled in proportion to the desired level of throughput.
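The first fix, aging by discounting expected job time, can be sketched as follows. The `decay` rate and the job names are invented for illustration; any real scheduler would tune these:

```python
def pick_next(jobs, now, decay=0.5):
    """Shortest-job next with aging: each job's expected run time is
    discounted in proportion to its ready-queue residence time, so a
    CPU-intensive job eventually reaches the head of the queue.
    `jobs` holds (name, expected_run_time, arrival_time) tuples."""
    def effective_time(job):
        _, expected, arrival = job
        return expected - decay * (now - arrival)   # queue age pays down the estimate
    return min(jobs, key=effective_time)[0]
```

Early on the short job still wins; once the long job has waited long enough, its discounted estimate beats a freshly arrived short job, easing starvation.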

    The answers, approximately verbatim, and in no particular order:

    1. If the throughput is poor, it could mean that even the shortest jobs are fairly long, so to solve the starvation,1 you could implement a round-robin scheduler to ensure every process gets some time on the CPU.

      I would try to increase throughput by increasing the time quantum for each executing process on the CPU, hopefully completing more processes per time quantum2 rather than sending uncompleted processes back to the waiting queue.

      1: Wait — what happened to throughput?

      2: That was a clumsy phrase. I suspect you meant “allowing processes to complete in fewer quanta.”

2. One way to alleviate the poor throughput and starvation is to add a second scheduler that only handles CPU intensive tasks. The intensive task scheduler could run in short bursts3 with a quantum of time for intensive tasks so the non-intensive tasks can breeze through.4

      Or, replace SJF with round-robin so every task gets the same quantum and if the process doesn’t finish, there will be another chance after all other tasks get a chance to execute.

      3: The scheduler could run in short bursts? Does this mean relative to the other queue, or something else?

      4: But what would such a scheme do for the CPU-intensive tasks? Starvation is, I guess, eased, but what about throughput?

    3. Change to a hybrid of SJN and priority. Once a process ages[?] increase its priority to assure it doesn’t starve.

      Or, change to round-robin, with quantums that allow all the shorter jobs to execute, but also avoid the starvation problem with long service time.5

      5: What about throughput?

    4. 1: One solution would be to incorporate a quantum.6 This will limit the given task’s time in CPU if it is taking longer to execute than expected. Using this technique you could start out with shortest job next, and then throw any tasks that exceed the max quantum to the end of the queue. (Round robin) This will allow long jobs at the end of the original queue to reach the CPU faster, and overall would increase the amount of jobs done per second.

      2: Another solution would be to incorporate multilevel queue scheduling. By doing this you essentially create several queues that each handle a specific type of task.7 Grouping similar tasks together and using separate scheduling techniques that work best for reach group8 would eliminate starvation since longer tasks would be on the same playing field as shorter tasks. And throughput would be higher since each group of similar tasks would be using a selection algorithm that works best for the given task type.9

      6: Just a quantum, or would the scheduler have to change too?

      7: What would these task types be?

      8: Which scheduling techniques would these be?

      9: What kind of scheduler would that be?

    5. Assuming the scheduler is non-pre-emptive, priorities with pre-emptive time quantum scheduling would be implemented, or a pre-emptive time quantum lottery scheduler could be implemented.

      In priorities with pre-emptive time quantum, each time a task takes longer than the time quantum, the task will be switched, but gain priority. Eventually the task priority will exceed priority of remaining process time.10

In lottery pre-emptive time quantum, as tasks exceed the time quantum they are awarded more lottery tickets. Each task starts with a certain amount of tickets but as a task gains more tickets it will have a greater chance of being selected to run next.11

      10: This helps throughput or starvation?

      11: Again, better throughput or less starvation? And aren’t these two approaches just variations on assigning priorities?

    6. Approach 1: Adding an aging system to the scheduler. Since the CPU intensive tasks are experiencing extremely long wait times, we can add a parameter to the system that decreases the estimated processing time of each process for each because it has been waiting. They will eventually have a low enough priority to be first in queue without drastically changing the scheduler algorithm.

      Approach 2: Introduce a multi-level queue system. We add a second queue to the scheduler and fill it exclusively with CPU-intensive processes. Then, we denote the scheduler to get and run processes from this queue 30% of the time, we can allow moderate throughput for large tasks while still prioritizing short ones.

4. True or false: a priority scheduler necessarily has to be a pre-emptive scheduler. Justify your answer.


    False. Whether or not the priority scheduler is pre-emptive depends on the scheduling invariant it maintains. For example, a priority scheduler implementing first-come first-serve scheduling doesn’t need pre-emption, while a traditional scheduling invariant (the CPU-resident entity has a priority no less than any priority in the ready queue) does require pre-emption to maintain.
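The non-pre-emptive case can be sketched in a few lines: priorities are consulted only at dispatch time, never to evict. The class and process names below are hypothetical:

```python
import heapq

class NonPreemptivePriorityScheduler:
    """A priority scheduler that never pre-empts: the highest-priority
    ready process is chosen only when the CPU becomes free. A
    higher-priority arrival waits for the running process to finish
    or yield."""
    def __init__(self):
        self.ready = []      # min-heap of (priority, name); lower number = more urgent
        self.running = None

    def admit(self, priority, name):
        heapq.heappush(self.ready, (priority, name))
        # Deliberately no eviction here, even if this arrival outranks
        # the running process.

    def dispatch(self):
        """Called only when the running process terminates or yields."""
        self.running = heapq.heappop(self.ready)[1] if self.ready else None
        return self.running
```

Whether such a scheduler still counts as a "priority scheduler" depends on the invariant one insists on, which is exactly the point of the model answer.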

    The answers, approximately verbatim, and in no particular order:

1. False, there is a possibility of setting all priorities to the same value so no one is more or less important than another, which is basically FCFS since there are no other factors. Assuming the only factor in this priority scheduler is the priority it gets assigned and there will be no eviction because everyone has priority (X).
    2. This is true. In order to fulfill the definition of being a priority scheduler, a scheduler must, at a given time, select the task that has the highest priority.1 In many cases the priority arrangement of a group of tasks can change mid-execution of a task.2 In this case it is necessary to evict that task and load a new one into the CPU. Without the ability to evict tasks a scheduler is not a true priority scheduler.

      1: Or select the task that satisfies the scheduling invariant.

      2: Ok, but is that true for all cases? And what happens for those other cases?

    3. True. Queue organized and re-organized based on process priority. Must be done pre-emptively to determine what to process next. Also changing priorities to avoid starvation.3

      3: Re-arranging processes in a queue doesn’t have much to do with pre-emption.

    4. False, it does not necessarily have to be pre-emptive. If, for instance, a process that had the next highest priority is on the CPU, and a higher priority process lands on the queue, the process already on the CPU can finish its time quantum if the scheduler is programmed to work that way.4

      In this manner, the scheduler wouldn’t have to switch out processes.

      4: True, but would such a scheduler be considered a priority scheduler?

    5. True. One of the main features of a priority scheduler is that it allows for great responsiveness to real-time, time-critical operations. If a long-CPU intensive process has the CPU when a high-priority action needs to be taken, the process will have to wait an indefinite period of time before the CPU process finishes, allowing it to take action.5 This can lead to slow responsiveness to user input, or worse emergency operations. Pre-empting would allow these processes to kick out the CPU inhabitant so that it may run.

      5: High-priority pre-emption is one use for priority schedulers; is it the only one?

    6. False because a scheduler may choose the next task to enter the CPU and not dispatch the task until the task currently in the CPU has finished.6

      In pre-emptive priority scheduling, once a scheduler finds a higher priority task to run, the scheduler can stop the task currently in the CPU and switch out the task for the higher priority one.7

      6: Ok, but is that a priority scheduler?

      7: Ok, but is pre-emptive priority scheduling the only form of priority scheduling, or can there be other, non-pre-emptive forms?

5. Assume each process (not thread) has an associated deadline, a time by which it must finish execution. The nearest deadline next (NDN) scheduler is similar to the shortest job next (SJN) scheduler, except the NDN scheduler chooses the waiting process with the nearest deadline rather than the smallest total execution time. Is the NDN scheduler similar enough to the SJN scheduler to produce minimal process wait times, as the SJN scheduler does? Justify your answer.


No, an NDN scheduler generally won’t produce minimal wait times. Assuming processes are handled well enough that missing deadlines aren’t a problem, the relation between a process’s deadline and its execution time is too weak to produce minimal wait times. In particular, the only thing an NDN scheduler knows about a process’s execution time is that the execution time is no greater than the deadline. Producing minimal wait times, as SJN does, requires knowing the execution time exactly, not just an upper bound on it.
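A tiny simulation makes the point. The job set below is invented: one long job with a tight deadline is enough to separate the two schedulers:

```python
def avg_wait(jobs, key):
    """Average wait time when all jobs are ready at time 0 and a
    non-pre-emptive scheduler runs them in the order given by `key`.
    Each job is a (run_time, deadline) pair."""
    total_wait = clock = 0
    for run_time, _ in sorted(jobs, key=key):
        total_wait += clock           # this job sat in the queue until now
        clock += run_time
    return total_wait / len(jobs)

# Deadlines only upper-bound run times, so NDN may front-load a long job.
jobs = [(9, 10), (1, 30), (1, 40)]          # hypothetical (run time, deadline)
sjn = avg_wait(jobs, key=lambda j: j[0])    # shortest job next
ndn = avg_wait(jobs, key=lambda j: j[1])    # nearest deadline next
```

Here SJN averages 1 time unit of waiting, while NDN, forced to run the 9-unit job first, averages over 6.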

    The answers, approximately verbatim, and in no particular order:

    1. No, it is not. If the next nearest deadline process had a large execution time, or any process after,1 the produced process wait times would be very different than a SJN scheduler.2

      1: What does “or any process after” mean?

      2: Why is this true?

    2. Since the NDN scheduler is ordering the execution queue differently it is not similar enough to SJN to produce minimal process wait times.3 In the case of NDN if a process has a deadline it needs to be in the CPU for as long as it needs to finish executing regardless of how long that is.4 So, in the same time that NDN finishes executing one process, SJN could have finished executing several processes.5

      3: What is it about the difference causes the problem?

      4: Up to the deadline, anyway.

      5: It’s true that SJN and NDN run different task sets differently, but what do they do given the same task set with equivalent task lengths and deadlines?

3. Assuming non-preemptive6 The NDN algorithm will produce shorter wait times because as wait time increases, the deadline acts as a priority and will eventually be next to enter CPU.7 In SJN, the remaining time of a process doesn’t change as wait time increases.8 This allows SJN to keep scheduling shorter processes ahead of larger processes. While the larger processes are unable to get higher priority scheduled.9

      6: Assuming non-pre-emptive what?

      7: What is the relation between deadlines and run times, which is what produces wait times in the ready-queued processes?

      8: How is this different from deadlines? Deadlines maintain their same relative order (time passes the same for all deadlines) in the ready queue.

      9: The minimal wait-time property usually assumes a fixed set of processes; overtaking isn’t a problem.

    4. No, the NDN scheduler is not similar enough to the SJN scheduler with minimal wait time because a process’s estimated wait time, and deadline could be drastically different. There can be a process whose deadline is so close that it becomes the next to be executed but the execution time can take forever10 so all other processes, short or long must wait, and their wait times depend on the long process that is being executed.

      10: So why not kill that process, since it’s missed its deadline?

5. Not necessarily, one long process with a very near deadline11 would cause avg. wait time to rise dramatically, while SJN doesn’t have this issue due to also having the benefit of high throughput.12 Should another residency time-intensive process13 with a nearest deadline occur, short service time tasks may even starve.14

      [diagram]

      11: A long process with a near deadline sounds like a problem in any case.

      12: Is throughput why SJN scheduling produces minimal wait times?

      13: What is a “residency time-intensive process”?

      14: How could a deadline-limited task starve?

    6. No, it will not produce similar wait times to STN schedulers, since deadlines have no correlation to actual CPU time and wait time.15 If a long process has the shortest deadline, it can drastically increase the total average wait time,16 while SJN has a provably optimal average wait time.

      Examples: [a table]

      In an STN: [a table]

      In NDN: [a table]

      15: There has to be some correlation between CPU times and deadlines, otherwise it wouldn’t make sense to have deadlines.

      16: At least until the process is killed for running past its deadline.

6. The computing center at which you work wants to change its process (not thread) scheduling policy so it provides good wait times. That is, they want to ensure that processes spend as little time as possible in the ready queue. Your job is to reconfigure the CPU scheduler to provide good wait times. Describe the scheduler you would create, and how your scheduler provides good wait times. Assume a general and uniform mix of process types; be sure to state any other assumptions you may make.


    Shortest-job next scheduling provides good wait times by ordering the ready queue in increasing job times, but the difficulty is the process-type mix, as is starvation for compute-bound processes. One alternative is to move to multi-queue scheduling. Assuming a single single-core CPU, having three ready queues segregated by CPU-residency times (short, intermediate and long for example) would eliminate starvation and improve the wait times for larger tasks, at the cost of worsening wait times for shorter tasks. But apart from inter-queue scheduling, multiple queues are just a variant of SJN. Further adjustments could be made by moving to a quantum-based scheduler and adjusting the quantum to be proportional to the task size (that is, small tasks get small quanta).
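The inter-queue scheduling piece of this answer can be sketched as a weighted dispatch pattern. The weights and job names below are illustrative only; within each queue this sketch just runs jobs first-come first-served:

```python
from itertools import cycle, islice

def dispatch_order(queues, weights, slots):
    """Sketch of inter-queue scheduling for a multi-level queue setup:
    queue i gets dispatch slots in proportion to weights[i], and within
    a queue jobs run first-come first-served. Empty slots are skipped."""
    pattern = [i for i, w in enumerate(weights) for _ in range(w)]
    order = []
    for i in islice(cycle(pattern), slots):
        if queues[i]:
            order.append(queues[i].pop(0))
    return order
```

With queues segregated by residency time and weights like 5/3/2, short jobs dominate the dispatch order while long jobs still make progress, which is the anti-starvation behavior described above.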

    The answers, approximately verbatim, and in no particular order:

    1. Multilevel feedback queue
      • Queues of increasing quantum size for more residency time processes.
      • SJN for shortest residency time processes.1
      • Feedback2 used to maximize utilization3 assuming a queue may become empty.
      Having a short queue allows for less wait time to accrue, but having other queues to support it avoid hangups like starvation.

      1: SJN scheduling where? For all queues? Or to choose among queues (which, admittedly, doesn’t make much sense).

      2: What kind of feedback?

      3: Maximize utilization of what?

    2. I would like to use a multilevel feedback queue, a round-robin within priority assignment4 for tasks with a long residency time and then a SJF scheduler to breeze through any of the smaller processes that will not require much time or resources. One queue would assign priorities5 to intensive processes with a quantum, it will run between the quantum given for the SJF scheduler.6 If there is a larger or extremely large job, it would be dropped in the round-robin queue. This way, the responsiveness is high, the throughput is also high with so many small jobs competing, and the utilization is high since any downtime would be used to execute pieces of another intensive process.

      4: From where do the priority assignments come? What are they supposed to achieve?

      5: What would the priorities mean?

      6: Would the SJF scheduler need or use a quantum?

3. Assume single-core CPU. Would use a pre-emptive round-robin algorithm with a decent size time quantum and priority implementation. This would essentially implement a type of FCFS algorithm, however interactive processes could receive higher priority allowing them to interrupt other processes in favor of the interactive processes. As processes exceed time quantum their priority would be increased as well.7 This would decrease starvation and improve starvation and throughput,8 for longer processes. Priorities could be categorized and numbered, allowing classes of processes but also allowing starved processes eventually take first priority.

      [ diagram ]

      7: Wouldn’t longer processes cutting in front of shorter processes have a bad effect on wait times?

      8: Perhaps, but those aren’t properties the problem asks you to consider.

    4. In order to provide good wait times I would incorporate a pre-emptive scheduler that utilizes residency9 times to pick and choose the process, that at the given moment, would be the best to execute.10 This will give the system a high throughput,11 but also will lower the average time that a process is in the ready queue purely due to the fact that the set of processes will be executed with the greatest system utilization.12

      9: CPU residency, or queue residency?

      10: How would that process be characterized? Is it even possible to pick such a process?

      11: Throughput is not a characteristic this problem asks you to consider.

      12: Another characteristic with an unclear relation to wait times.

    5. I would design a prioritized SJN scheduler. It would handle the shortest jobs first, but could also start longer processes between, if prioritized to do so, depending on the computing center’s needs.13

      13: How would the priorities be set to achieve good wait times? Are such priorities possible?

    6. Assuming a single-core CPU and that wait times are all that matters and terminating processes, shortest-job-first probably has the smallest wait times.14

However, rather than have a single queue, which could lead to indefinite wait times for long processes, there will be three queues, one for sub-20 ms processes, one for 20-60 ms processes, and one for 60+ ms processes.15 Each process queue gets 50%, 30%, and 20% CPU time respectively. In this way, as short jobs come in, they get run through the short job queue very quickly, weighing the average towards short wait times. Long CPU processes get eventually completed, but since there will be inherently less of them,16 they will only loosely pull the average wait time higher.

      14: Probably?

      15: What scheduling discipline applies to each queue?

      16: You mean by assumption?


This page last modified on 2014 October 7.