Operating Systems

Test 3, 30 April 2012


This test has ten questions; answer at least eight of them.

Up to the first 75 words appearing after a question are taken to be the answer to the question. Words following the 75th word are not part of the answer and will be ignored.

  1. Is it possible for an operating system to distinguish between a process that's waiting for a long time (possibly forever) and a process that's deadlocked? If it is, explain how; if it's not, give an example the operating system would fail to distinguish. Your answer need not consider efficiency or performance.


    Yes, it is. If a process is deadlocked, the OS can trace an allocation-request chain through processes and resources starting and ending at the blocked process; that is, the OS can find a cycle in the resource-allocation graph. Because cycles are a necessary condition for deadlock, a process not on such a cycle can't be deadlocked and must instead be waiting for a long time.
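
    As a minimal sketch of the cycle check, in Python, assuming single-unit resources (for which a cycle is also sufficient for deadlock) and invented process and resource names:

      # Resource-allocation graph: a request edge runs process -> resource,
      # an allocation edge runs resource -> process.  A blocked process that
      # lies on a cycle is deadlocked; one that doesn't is merely waiting.
      def on_cycle(graph, start):
          """Depth-first search for a path from `start` back to `start`."""
          stack, seen = [start], set()
          while stack:
              node = stack.pop()
              for succ in graph.get(node, ()):
                  if succ == start:
                      return True           # back at the blocked process
                  if succ not in seen:
                      seen.add(succ)
                      stack.append(succ)
          return False

      # P1 holds R1 and requests R2; P2 holds R2 and requests R1; P3 requests R3.
      g = {"P1": ["R2"], "R2": ["P2"], "P2": ["R1"], "R1": ["P1"], "P3": ["R3"]}
      print(on_cycle(g, "P1"))   # True  -> deadlocked
      print(on_cycle(g, "P3"))   # False -> just waiting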

    The answers, approximately verbatim, and in no particular order:

    1. Yes, it is possible for an operating system to distinguish between a process that's waiting for a long time and a process that's deadlocked. A deadlocked process must meet four requirements: 1) mutual exclusion condition, 2) hold and wait condition, 3) No preemption condition, 4) Circular wait condition. All four of these conditions must be present for a deadlock to occur. If one of them is absent a deadlock is impossible. Therefore by having the […]1

      1: … OS check if a process meets all four of the conditions above it knows a process is deadlocked. If the process does not meet all four of the conditions, that means the process is not deadlocked and can just be waiting.

    2. It is possible. A deadlocked process would be demanding resources that are indefinitely unavailable, but the waiting process is a victim of the algorithm used by the system,2 and thus would not hold resources in the same way a deadlocked process would.3

      2: The system uses a lot of algorithms; which one is referred to here?

      3: And what does that difference look like?

    3. A process is deadlocked usually when two processes are left open.4 An example would be if process A is running and is left open, then process B, which needs the same resources, starts running. This process is considered deadlocked.5 The operating system can distinguish between a process and a deadlock. If a process is running forever6 the operating system can check the processes being repeated and what processes are running. If multiple processes are running […]7

      4: What does it mean for a process to be left open?

      5: Which process? A or B? And why is the process deadlocked? Why doesn't it just wait until the other process is done with the resource?

      6: How does an operating system know if a process is running forever?

      7: … it can be assumed that the two or more processes are deadlocked. A counter that counts ticks and processes can be used to decide this. By seeing two processes are not doing anything rather then a process that is doing something but is not ending, the operating system can determine what process is deadlocked.

    4. An operating system can tell if a process is deadlocked. The operating system manages and allocates resources so it can tel if a resource is used and where it is. The operating system can check if a requested resource is in use and if two processes are requesting resources held by another process it can detect that there is a deadlock.8

      8: But is that a circumstance that leads to deadlock?

    5. Yes, the operating system can distinguish between the two by keeping track of the resources the process is using and requesting in a resource graph as well as all other processes.9 Let arrows lead from the process to the resources they are requesting and waiting for. If you can follow the lines of waiting requests in a cycle at any point from the process in question, then there is a knot, meaning a deadlock.

      9: Who does the requesting in a resource graph? The requesting process or the OS?

    6. Yes, an operating system can do this by not only periodically checking for long waits10 but also check if there is cyclical resource allocation that can lead to deadlock. If there is in fact cyclical resource allocation and both processes have been waiting a very long time it is almost safe to assume deadlock.11

      10: Which indicate what? And when is a wait long?

      11: Suppose there is a cycle but the wait time isn't very long; what's the assumption then?

    7. No, it is not possible for a operating system to distinguish if a process has been waiting longer than another process.12 An example that I can think of is if an operating system stored process that are waiting for the CPU in a stack, last in first out.13 It is possible that the first process never gets executed.14

      12: Is that the problem given by this question?

      13: Why would an operating system do that?

      14: Which process is the first one? The one at the bottom of the stack, or the one at the top? Or some other one?

    8. It is possible to tell the difference. A process that is waiting has no resources allocated to it,15 and is simply waiting for resources to be freed up. A deadlocked process is a process that has resources but needs more in order to complete. An operating system could tell the difference by seeing which processes have no allocated resources and which ones do.16 This will show which processes are deadlocked and which processes are waiting […]17

      15: Why not? Can't a process allocate enough of one resource and then have to wait for another resource to become available?

      16: So a process with no resources can't deadlock? Is that true?

      17: … to begin.

    9. The operating system should be able to distinguish between a deadlocked process and a process that has been waiting by looking at the resources each process posses.18 If process t1 needs resource A but A is in use by another process, then it's deadlocked if the process wanting A also needs a resource from t119 (*). So if the operating system knows that a process is waiting for a resource it has20 being executed and […]21

      18: But if a process is deadlocked, it's waiting for something it doesn't have.

      19: But isn't t1 the process wanting A?

      20: What has the resource? The operating system?

      21: … has halted compared to a process that has been waiting in the queue for an extended period of time. Also using system states in a way to distinguish.

      * t1 -> A needs B ; t2 -> B needs A : deadlocked

    10. No. Process A is waiting for process B to release an event or information that only process B can provide. The OS has no way of knowing that process A is waiting is due to deadlock, circular wait condition, or for any other reason.
    11. Yes, the OS could terminate all other processes22 to free up resources for the potentially deadlocked process to use and remove itself from deadlock.23 Or the OS could take note of the amount of time the potentially deadlocked process has been running,24 terminate it, and restart it to see if it waits for the same amount of time or more to determine if deadlock exists.25

      22: All other processes? Even those that aren't deadlocked? And which process gets spared?

      23: But does this solve the problem raised by the question, which isn't about recovering from deadlock but about detecting it?

      24: How does an operating system determine if a process is potentially deadlocked? And what does “potentially deadlocked” mean?

      25: How would the operating system ensure the process runs in the same environment in both executions?

    12. It is not possible for an operating system to distinguish between a process that is waiting for a long time and a process that is deadlocked in the OS. An example of what the OS would fail to distinguish is between the processes since deadlock is kind of a long process26 the OS would be confused as to whether if is one or the other.27 The OS would categorize the two processes as the same […]28

      26: Long in what sense?

      27: Why? What is the operating system looking for?

      28: … thing. thus making it hard to distinguish between the two.

    13. Yes it is possible by utilizing a deadlock detection algorithm much like the Banker's algorithm is used for deadlock prevention. By monitoring available resources, what is allocated to each process, and the current requests for each process, a detection algorithm29 can be used to investigate all possible allocation sequences for the processes that remain to be completed.30 Consider allocation and requests to both be m×n matrices, and work and finished vectors of lengths m and […]31

      29: Does such a detection algorithm exist?

      30: That's a lot of allocation sequences. And how does the algorithm know these sequences? (Hint: it's impossible to know those sequences in general; see the halting problem).

      31: … n. Work = available for all is 0 - n, and if allocation 0 then finish[i] is false and if allocation[i] = 0 it is then true. Once we find an i where finish = 0 and request ≤ work then we can proceed to calculate work = work + allocation · finish[i] = true and continue by finding another; meeting the previously defined conditions if no such index exists then the system may be declared deadlocked.

    14. No, there is no way to distinguish between the two; waiting versus deadlock. In the case such that three process A, B, and C each access[?] a resource, but only 232 are available at any one time. A + B will both eventually use both resources, and C wants both after A and B finish.33 A will request and then B. Once both has a resource they are deadlocked34 + C forever waits. The OS […]35

      32: Two resource units, I'm guessing.

      33: How does this lead to deadlock?

      34: Why are they deadlocked?

      35: … knows no different.

    15. No, this is not possible because of congestion control for best throughput is to send everything at once and keep the number of unacknowledged packets at max value, the window size.
    16. Yes, the OS can distinguish between the two possible as when they are deadlocked or just waiting to allocation that[?] is usually deleted.36 The system should have an algorithm that runs[?] periodically to detect how many processes are waiting.37 The algorithm executes such that if the number of processes are still there38 and then the work hasn't been completed,39 then system I-O [?] [?] but if it processes numbers are more than needed it's deadlocked.

      36: What?

      37: How does this algorithm do that?

      38: What?

      39: What work?

    17. Yes, an OS could tell the difference by examining the resource matrix. Resource matrices keep track of which process went/own a given resource.40 If the operating system kept track of process idle time, time spent not executing, and cross referenced this to the resource matrix, the OS would know if the process is waiting for a resource or not.41 Process with high idle time but not waiting for a resource may be hung, but with […]42

      40: What would the OS be looking for in the resource matrix to determine deadlock?

      41: Why is the idle time necessary? Why isn't the request-grant matrix enough?

      42: … performing inter-process communication it is hard to know for sure. A process may be running a very long loop.

  2. Every once in a while a file system is compacted. After compaction, the file-system free-list size is unchanged. Assuming compaction changes the file system for the better, how is the free-list implemented? Justify your answer.


    The free-list implementation choices are bit maps and interval lists. A bit map never changes size, no matter what compaction does, so an unchanged free-list size points to a bit map. An interval list, in contrast, would shrink: after compaction there's a single free-space block, so an interval list would have just one entry.
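
    A small sketch of the size comparison, in Python (the 16-block layout and the helper names are invented for illustration):

      before = "A.A..AA.A...A..."   # '.' = free block, 'A' = allocated block
      after  = "AAAAAA.........."   # same data after compaction

      def bitmap(layout):           # one bit per block: size never changes
          return [1 if b == '.' else 0 for b in layout]

      def intervals(layout):        # (start, length) runs of free blocks
          runs, start = [], None
          for i, b in enumerate(layout + 'A'):
              if b == '.' and start is None:
                  start = i
              elif b != '.' and start is not None:
                  runs.append((start, i - start))
                  start = None
          return runs

      print(len(bitmap(before)), len(bitmap(after)))        # 16 16: unchanged
      print(len(intervals(before)), len(intervals(after)))  # 5 1: shrinks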

    The answers, approximately verbatim, and in no particular order:

    1. When a file system is compacted or defragmented the operating system takes all of the data written to the disk and shifts it down to the starting address and begins to assemble it into logical chunks of data without space between them. In doing this the free list size is in fact unchanged because the same amount of data still resides on the disk.1 The free list is implemented in a way that changes the […]2

      1: True, but perhaps the same information, when rearranged, can be represented more compactly?

      2: … address of the free space on the drive until there is one area o the disk that is solid data and another consisting of solid free space. it is much like shuffling a deck of cards, there's still 52 of them, just in different locations.

    2. Since the size of the free list is unchanged and set to a certain amount, the free list can be implemented to accept objects of a certain size3 and know how much space it can fill.4 This manages how much information a file system can take it and even set priorities over what information is saved. Data compaction allows information that needs to be saved for long periods of time and data that has to […]5

      3: A free list accepting objects of a certain size? What does that mean?

      4: “It” is what?

      5: … be sent across expensive transmission lines to be stored in smaller data.

    3. It is free list implemented because middleware. This is a layer between an OS and host-resident system. A middleware hosts OS functions and spreads them over a network.
    4. [ unanswered ]
    5. One way is to use a bit map. A disk with n blocks requires a bitmap with n bits. Free blocks are represented by 1s in the map, allocated blocks by 0s. We can use free blocks to hold the free list. Only one block of pointers6 is needed to be kept in main memory. When a file is created, the needed blocks are taken from the block of pointers.7 When it runs out, a […]8

      6: Pointers? Pointers to what?

      7: What does this have to do with the problem raised by the question?

      8: … new block of pointers is read in from the disk. When a file is deleted, its blocks are freed and added to the block of pointers in memory. When this block fills up, it is written to disk.

    6. Better performance is achieved when files are closer together and as contiguous as possible.9 Once compaction has taken place, the files in the disk are listed closer together; the remaining free space can be listed in the free-list implementation in a contiguous manner as well. A linked list implementation would do.10

      9: True, but is that relevant to the question?

      10: But would a linked-list implementation have the property suggested in the question?

    7. The free list is implemented in a way in which the size of the file would not change.11 The implementation would not focus[?] on the size of the file but however compact the details or content within the file.12 Some example would be that if change some of the content in it to make it easier to read or distinguish. The compaction process in this case would make it13 better without changing its size.

      11: Which file?

      12: What does a free list have to do with file contents?

      13: “It” again is what? Compaction works on a file system; was there a switch from files to file systems?

    8. The free list should be implemented as an array14 because the size does not change15 and there is no need for it to be dynamic like a linked list or a tree.

      14: What kind of array?

      15: The size of what?

    9. The free list has nothing to do with the file system being compacted.16 That is why the free list didn't change. The free-list must take a broad view of the file system, meaning that compaction changed the location of the files but it didn't not change the amount of files there was.17

      16: Why not?

      17: True, but changing the location of the files also changes the location of the free blocks.

    10. One of the main factors of compaction is the cost, and it can only be done dynamically and while the system is in execution.18 Compaction moves all the free space in one big block. The free list is implemented the same way19 but the big difference is instead[?] of having 1 & 0 periodically now all the 0s are clustered together e.g. 000000011111.

      18: Why does the system, as opposed to some small part of the system, have to be in execution?

      19: The same way as what?

    11. The free list would be implemented as a hash table20 such that reorganizing the list would not be too complex.21 When the system is compacted, items in the list that need to be moved for optimization are given a new hash key[?] and tacked on to the back of the list in that spot.

      20: How would a free disk block get allocated from a hash table?

      21: What does it mean to reorganize a hash table?

    12. [ unanswered ]
    13. The free list is implemented like a journal.22 The journal keeps track of the file system regardless of its size and will allow for the same performance before and after the compression.23

      22: What would it mean for a free list to be implemented as a journal?

      23: But wouldn't the journal grow larger as the free blocks were shifted around and the free list updated?

    14. The free list is implemented in a double array24 where it acts as an access control list.25 It provides the file system with a list of agents and objects where it provides the actions each agent can perform on an object.26 Since the file system is completed the size of the free list is unchanged because there still the same number of agents and objects.

      24: A double array? Is that like a two-dimensional array?

      25: An access control list? Why would it want to act like that?

      26: A free list does that? Really?

    15. [ unanswered ]
    16. There is no free list being implemented,27 or the free list is not being recalculated after compaction.28 Not sure if compaction is defragmentation or compression or either.29 Compaction changing the file system for the better does not imply it uses all the space in a contiguous manner.30

      27: Then how are free disk blocks found and allocated?

      28: Same question here.

      29: Doesn't matter; the answer's the same for all three.

      30: Oh? Why not?

    17. The free list is most likely implemented as an array.31 Compaction will shuffle the files or "squeeze" the free space or fragmentation out between files. For this to happened and the free-space count remain the same it32 must know exactly how many free blocks exist between files.33 The easiest way to know this is to know how many blocks are in use subtracted from the total blocks.

      31: Any particular kind of array?

      32: “It” being what?

      33: Doesn't any free list know this information? And how does that affect whether the size changes?

  3. A colleague of yours thinks that capabilities are nothing more than access control lists applied to agents. Is your colleague correct? Justify your answer.


    No, the colleague is wrong. An ACL contains (agent, operation) pairs, while a capability contains (object, operation) pairs. An ACL describes what agents can perform what operations on a particular object; a capability describes what operations on what objects a particular agent can perform.
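
    A minimal sketch of the structural difference, in Python (the agents, objects, and operations are invented for illustration):

      # An ACL hangs off an object: it pairs agents with permitted operations.
      acl = {
          "file1": {"alice": {"read", "write"}, "bob": {"read"}},
          "file2": {"bob": {"read", "write"}},
      }

      # A capability list hangs off an agent: it pairs objects with operations.
      capabilities = {
          "alice": {"file1": {"read", "write"}},
          "bob":   {"file1": {"read"}, "file2": {"read", "write"}},
      }

      # Same protection matrix, sliced along different axes: the ACL for file1
      # answers "who may do what to file1?", while alice's capability list
      # answers "what may alice do, and to which objects?".
      def acl_allows(obj, agent, op):
          return op in acl.get(obj, {}).get(agent, set())

      def cap_allows(agent, obj, op):
          return op in capabilities.get(agent, {}).get(obj, set())

      print(acl_allows("file1", "bob", "write"))   # False
      print(cap_allows("bob", "file2", "write"))   # True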

    The answers, approximately verbatim, and in no particular order:

    1. Yes, my colleague is correct. Not being able to access something in a control list1 means that you don't have the capability to access it.2 When you do have access to the control list means you are capable of accessing it.

      1: What's a control list? Do you mean an access control list?

      2: What is the justification for making this claim?

    2. He is correct. Capabilities specify what the agent is able to do, in the same way that access control lists control what actions are valid for an agent to perform during an operation.3

      3: But are they the same? Can they be exchanged without affecting how the system operates?

    3. Because of framework tools. This can be integrated via data, control of presentation. It also exploits component portability manage context to simplify services provides wizard and other tools.
    4. Capabilities and ACL are not the same. Capability for an agent is described by cutting along a row [diagram]. Access control: when cutting along the column creates ACL. [diagram]4

      4: Is that difference meaningful? How?

    5. He is correct. An ACL is nothing more than a list where it provides an object and a series of agents and in each row what operation that agent can perform in that object. Capabilities are what the agent is allowed to do or perform, since this list does exactly what ACLs are basically list of agents capabilities on objects.5 [diagram]

      5: An ACL for object 1 indicates what agents A and B can do to object 1. Does a capability for agent A indicate what agent B can do to object 1?

    6. Not quite, although they appear similar, they are entirely different structures. An ACL deals with a single object and every agent that can access it with certain permissions. However, a capability takes the opposite approach, breaking down objects and operations by agent. They seem the same, but ACLs are much more efficient,6 and capabilities don't allow for the same implementations.7

      6: Efficient in what sense?

      7: Implementations of what?

    7. Yes, my colleague is correct, an access control list is a matrix of permissions between agents and objects.8 With agents on the rows and objects on the columns. If you go along a row for a single agent it lists in each column the operations that agent can perform on the object. The entire row holds the capabilities for that agent on all objects and is known as the agent's capabilities.9

      8: The whole matrix, or just a part of it?

      9: Does this mean that capabilities and access control lists are interchangeable?

    8. In the most basic sense, yes, my colleague is right. When considering a capability-based system, the system must often implements a set of possible access rights of the access control list.10 This set controls the ability to read, write, or execute a memory segment. There can be additional user defined rights as well as system rights, all of which are controlled by the set of defined access rights.11

      10: Why is that?

      11: Is this describing a capability or an access control list?

    9. The colleague who thinks capabilities are nothing more than access control lists applied to agents would happen to be incorrect. This reason is being is because capabilities are more capable to do other things other than control access lists to agents.12 Capabilities are other things that exist in networks and OS such as when connectivity and capable things.13 They can also access needs in the operating system.

      12: What would be an example of the things that capabilities can do that access control lists can't?

      13: What?

    10. It is similar but work totally different. ACL uses the column in a matrix to allow access to different ways when a request is processed. the system checks the column if user has access or not. Capabilities uses rows in a matrix where each row has its own domain[?] & access possession to each [?] Capabilities are also very[?] expensive[?] to the [?] process directly.
    11. My colleague is incorrect. Capabilities are the list of all agents and what things they can do to every object.14 ACLs are what agents can do to a single object. Capabilities determine what an agent can do to every object, even if it is nothing.15 ACLs determine what agents can do to one object.

      14: A capability is what?

      15: It is nothing? Do you mean the object is nothing, or what the agent can do is nothing?

    12. He is correct. During[?] matrix selection[?] each column is an access control list that shows each agent and each agent's permissible operations.16 This applies directly to capabilities. In order to get a permission to transfer17 you18 need to receive a certificate. A certificate can be directly related to an ACL19 which gives you access to certain agents or transferable rights. These two systems are very much alike.

      16: Does this describe a capability or an access control list?

      17: Transfer what? From where to where?

      18: Who's you?

      19: How?

    13. No, the fundamental difference between the two is one of implementation. A capability still carries with it the problem of having to seek out the object that the agent must perform operations on whereas ACLs detail the permission for each agent about to operate on the object in question making them far easier to implement and less prone to permission confusion.
    14. The colleague is not correct in thinking access control list are only capabilities applied to agents. Capabilities show how objects can be worked on by a specific agent. Access control lists show what a specific object can have done to it by agents. Access control contains all the permissions an object and show who can do what to it. A capability shows what an agent can do but not what all agents can do.
    15. Capabilities partition access matrices into separate rows so that information for a single subject are all together.20 An access list is separated into columns separated by objects, each with all subjects and their privileges over that object. The difference between the two is how the subject is listed. Capabilities separates the subject based all the info given on the subject.21 Access control lists all the subjects and the privileges over the objects. The colleague based […]22

      20: What's a subject?

      21: What does it mean to separate subjects? And what might “info” be?

      22: … on what an access list is and what a capabilities is, is right. Capabilities a no more the access control lists applied to agents.

    16. Yes. Both of these things can explain how a process is able to gain rights.23 The capabilities approach is a way of giving a process a specific certificate or right. The use of agents and the ACL is an abstraction of how this is done as it can give a process the same rights but in an indirect manner.24

      23: True, but do capabilities and access control lists grant rights in the same way, and with the same consequences?

      24: True, but do they work in the same way? Can a capability be used interchangeably with an ACL?

    17. My colleague is incorrect. An ACL associates with each object an ordered list containing all the domains that may access the object and how. The ACL method slices up the protection matrix by columns. Compared to the ACL, capability slices up the protection matrix by rows, with the method associated with each process is a list of object that may be accessed, along with an indication of which operations are permitted on each (its domain). […]25

      25: … Both these methods differ by how they slice the protection matrix and how its store only non-empty elements.

  4. True or false: the worst-case utilization for free space by first-fit is approximately 50%. Assume that 1) allocation requests are approximately the same size (that is, are randomly distributed with small variance) and much less (< 10%) than total free-space size, and 2) there is sufficient demand to have 100% free-space utilization with some perfect allocation scheme. Justify your answer.


    Ignoring the ends (or wrapping them to form a ring), the managed space contains \(M\) contiguous sequences of allocated blocks. Each sequence is surrounded by free-space blocks; arbitrarily assign the trailing free-space block to its allocated sequence. Because allocation sizes are more or less the same, a free gap as large as a request would be filled given sufficient demand, so each gap is no larger than its allocated sequence and worst-case utilization is approximately 50%.
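
    In symbols, with \(s\) the common request size (a notation introduced here, not part of the question): in the worst case each allocated sequence is a single request of size about \(s\) paired with a gap just under \(s\), so

      \[
        \text{worst-case utilization} \;\approx\; \frac{M s}{M s + M s} \;=\; \frac{1}{2}.
      \]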

    The answers, approximately verbatim, and in no particular order:

    1. False. The worst case utilization for free space by first fit is not approximately 50%. The reason why it is true is because there is sufficient demand to have 100% free space utilization with some perfect allocation scheme within the OS.1 This also gives or takes the much less (< 10%) than the total free space size. 100% - (< 10%)2 would not approximate to 50% in this case.

      1: That assumption just makes sure demand doesn't run out; there are always enough requests to complete whatever argument is necessary.

      2: What does this mean? 100% is the total amount of space, while < 10% is the amount of a request.

    2. True, assuming requests are about the same size means that there should be close to even distribution.3 Of course they can be allocated in a way that is not helpful, but even at the worst you will find a next fit, or first fit that's not much bigger than what you already have allocated.4 Therefore you will always have 50% or higher utilization.5 [diagram] By allowing for bigger or smaller file size variations it […]6

      3: Of what over what?

      4: Why is that?

      5: Why is that?

      6: … only opens the room for more file utilization or means these are bigger files meaning more utilization.

    3. False. Based on those assumptions, worst-case for first fit would be much lower.7 If all allocations were 9.99% of total free space and there was high demand for 100% utilization of free space, then just to subsequent allocations would already be better than 50% with some[?] perfect allocation scheme something will have had to one horribly wrong to achieve only 50% utilization.8

      7: You mean utilization would be much lower, right?

      8: Why would that be the case? And could something that horrible actually happen?

    4. True. First-fit allocation finds the first space big enough to fit the block and places it there. If all block sizes are virtually the same size, the disk would be split up symmetrically, making disk arm movement minimal.9 Therefore, the worst possible case for this utilization is 50%.10 There would also be very little unused space on the disk.

      9: Is this how disk blocks are allocated?

      10: Why? Because of disk arm movement? How does that work?

    5. Since all request are approximately the same size that means some one just slightly bigger than others with demand for 100% utilization these allocations will be a tightly packed as possible.11 When the smaller processes are done they will not be able to be replaced by the larger ones. If the smaller ones finish in an alternating pattern and no small process can replace them then will only be about 50% utilization.

      11: How tight would that be?

    6. False, there is no way that you could have all of the free space used up.12 Some disk fragments are too small to store data. You probably will not allocate the same amount of data with the size of the disk sector.13 Data is in many different sizes and will take up random parts of the disk.14

      12: Why not? What prevents it?

      13: What do disk sectors have to do with anything?

      14: The problem doesn't apply to disks because disks mostly don't use contiguous-space allocation.

    7. True, even assuming the system allocates requests in decreasing order from 9% to 5% (Full of 9% to half full of 5%, allowing for maximum external fragmentation),15 and is then hit with requests for nothing but 9% allocation there will still be 10 5% allocations in the free space keeping the utilization at exactly 50%.16

      15: Why would that assumption be necessary?

      16: Why is that?

    8. True. Given enough time, with many requests receiving and releasing space, eventually there will be small gaps between available allocations.17 Worst case would be every other allocation is of size 1, leaving only 50% utilization of free space. Also if the first request was for 50% of available space18 and all subsequent requests were equal or larger, only 50% would be utilized, even with a normal distribution those extremes are possible.

      17: Right, but those are small gaps between larger allocated blocks? Is the ratio 50-50?

      18: Is that the assumed allocation distribution?

    9. False - assuming both conditions are true, the worst-case utilization would be higher than 50%.19

      19: Why?

    10. A good idea would be to have location transparency logical objects are location agnostic. Because remote file caches can be in primary store.
    11. False. First off, assuming that each request is approximately the same size, then you can assume that each request can more or less replace each other requests in thinking about size since they are the same. [1] The worse case utilization would be about 75-80% since all requests are the same size20 they can all fit in each other places.21 first fit means first available space. With this there won't be much overlap22 or lost […]23

      20: The same size, or more or less the same size?

      21: What does that mean? And why does that lead to 75-80%?

      22: What's overlapping what?

      23: … free space. This is why worst-case utilization will be higher than 50%. [1] [diagram]

    12. True. By assuming that the requests are of similar size and are small, there is less chance of ending up with too many spaces that cannot fit a request.24 For example, if the space is 100 units and there are 5 requests all of size less than 10 units, even if they are placed en to end in order that they are received, it is likely that 10 of the request will fit being that25 […]26

      24: Why is that? Isn't the problem relative rather than absolute?

      25: Ok, but suppose there were 50 requests.

      26: … in a space of 100 units and all request being of a size less than 10, the sum of their space will be less than 100 units. Prior to completing requests and moving free space to fit new requests the system would be utilizing roughly 66% and a drop below 50% would be unlikely. A best fit scheme may cause much closer to 100% utilization, but under these conditions first fit should remain over 50% utilization.

    13. True. First-fit is the simplest algorithm.27 This scheme will result in memory filling up with tiny useless holes. First-fit generates the largest holes on the average.28

      27: How is simplicity related to utilization?

      28: Why?

    14. True. The problem with the first fit is that it searches[?] for the first biggest hole of memory. In many cases it can be too big to what may be needed. Using best fit it allows the OS to chose the most fitted size for the storage.29 In case where the requested size are similar[?], then it would not be a big problem such as 50% as mentioned[?].30

      29: But doesn't that leave smaller holes?

      30: Why not?

    15. The more free space that is utilized the better. As for 50% being the worst-case utilization is true. If free-space utilization is lower than 50% more allocation requests can come in.31 Being at 50% not all the space is utilized, and not enough is utilized to accept a lot of requests.32 Having only 50% utilized can be expensive and useless because not all the space is filled but it is still taking up space that […]33

      31: It seems reasonable that requests granted is tied to free-space utilization, but how are requests made tied to free-space utilization?

      32: How can under-utilization lead to not accepting requests?

      33: … can probably used elsewhere.

    16. It is false that the worst-case utilization is with each request being under 20% of total free space34 and there being enough demand for 100% utilization given the right algorithm that means there is a high number of requests as each request is under 10% you could fit any request until you have 90% utilization35 meaning worst case is at utilization is at least 90%. This is well above 50%.

      34: Is that part of the problem given by the question?

      35: Why is that?

    17. [ unanswered ]
  5. A colleague of yours discovers that a simple, "immediately grant the maximum request" style of resource allocation is as effective as the Banker's algorithm at avoiding deadlock over some resource. Your colleague modifies the operating system accordingly, and is dismayed to discover that, while the system still doesn't deadlock, some of its operating characteristics are worse than they were in the old, unmodified system using the Banker's algorithm. Name one of these characteristics, and explain why it differs between the modified and unmodified versions of the system.


    The Banker's algorithm allocates resources in response to explicit requests; the modified algorithm always allocates the maximum, whether needed or not. Assuming processes don't always use their resource maximums, the modified algorithm leads to lower resource utilization than does the Banker's algorithm. Reduced resource utilization leads to reduced process throughput as the modified algorithm makes processes wait for resources that would have been allocated to them by the Banker's algorithm.
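
    A small numeric sketch of the difference, in Python (the unit counts and process names are invented; this shows only the allocation outcome, not the Banker's safety check itself):

      TOTAL = 10   # resource units of the contested resource

      # Each process declares a maximum claim but currently uses only part of it.
      procs = {"P1": {"max": 6, "in_use": 2},
               "P2": {"max": 6, "in_use": 2}}

      # Banker's style: grant what is actually requested, provided the state stays
      # safe.  Both grants here are safe (6 units stay free, enough to let either
      # process finish), so both processes run at once.
      bankers_held = sum(p["in_use"] for p in procs.values())    # 4 units, all in use

      # Grant-the-maximum immediately: P1 gets its full claim of 6 up front.
      grant_max_held = procs["P1"]["max"]                        # 6 held, only 2 in use
      left_for_p2 = TOTAL - grant_max_held                       # 4 < P2's claim of 6,
                                                                 # so P2 must wait

      print("Banker's:  held", bankers_held, "used", bankers_held)               # 4 4
      print("Grant-max: held", grant_max_held, "used", procs["P1"]["in_use"],
            "left for P2", left_for_p2)                                           # 6 2 4
      # Lower utilization of the resource and a blocked P2: lower throughput.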

    The answers, approximately verbatim, and in no particular order:

    1. One of the characteristics of the system1 is that you are granting the maximum request of only granting a certain amount of the maximum which is the Banker's algorithm,2 because the Banker's algorithm assumes that a process won't need all of the resources. Using maximum resources will slow down the system.3

      1: Which system? There are two of them: the old one and the new one.

      2: What?

      3: There's a lot of ways to slow down a system; how would this way do it?

    2. With the Banker's algorithm in use there is no starvation[?] of processes, as long as there are enough resources to run the processes. The algorithm will allow to run. With the modified version immediately granting the max request every time will cause starvation of some processes.4 Say there is a process that requires 5 resources and one that requires 2. The one needing 5 will be started first.5 Halfway through another process is added […]6

      4: Why? Is that true when the waiting processes are organized in a queue?

      5: By which system: the new or the old? Why would this system perform that allocation first?

      6: … to the queue needing 4. Since this is the largest of the requests it will be next and the one needing 2 will be starved if this continues to happen.

    3. One of the characteristics in this case would be the effects of the Banker's algorithm avoiding deadlock over some other resources. The operating characteristics are more than they were when it was old7 because the modified and unmodified solutions do not give the immediate grants to the maximum request style of resource allocation.8 This makes the Banker's algorithm uneffective,9 but perhaps effects it in a negative way because it came out worse. The characteristics change[?] […]10

      7: What does this mean? When what was old? The old system had the Banker's algorithm.

      8: Doesn't the new system do that?

      9: Uneffective at what? And why is it uneffective?

      10: … would be ineffective.

    4. [ unanswered ]
    5. One characteristic that will significantly decrease is the operating system's throughput. If all required resources are being allocated at once that means that there are some that are being held onto to use after others.11 These resources could be used to get other processes started and make them finish faster than if they had to wait for the process holding them finish entirely before getting a hold of them.

      11: Is this referring to resources or processes? And what's doing the holding, the requester or the granter?

    6. The old system would run more smoothly[?] (faster) because the Banker's algorithm is designed to allow as many processes as possible to run based on resource supply and needs. Bulk allocation reduces deadlock,12 but also slows down the system. The new implementation may free up too many processes,13 which leaves other processes waiting. This takes too much time, and therefore makes the system slower than with the Banker's algorithm.

      12: Reduces or eliminates?

      13: How might it free up too many processes?

    7. The resource allocation ensures that the system doesn't deadlock, but this method of resource allocation doesn't allow for multiple processes.14 It allows the larger processes15 to complete but leaves the smaller processes waiting if other large processes come along.16 This will leave smaller processes waiting, possibly for forever, if there are larger processes constantly being called by the system. this makes the OS hang while larger processes compete and smaller ones wait.

      14: Why not? If there are 10 units and two processes each need at most 2 units, couldn't the modified system satisfy both processes at once?

      15: Larger in what dimensions? Resource requirements?

      16: Why? What mechanism brings this about?

    8. [ unanswered]
    9. Assuming my colleague is utilizing a cycle detection algorithm to prevent deadlock in his maximum first request algorithm, they have most likely found that while the system may remain in a safe state and in perfect conditions should even be more efficient than the Banker's algorithm;17 the system may not be as efficient under less than perfect conditions.18 In an event where they cannot allocate all their resources to more than one processes, the cycle […]19

      17: By what measure of efficiency?

      18: What are perfect conditions?

      19: … detection algorithm will know this and not[?] allow the maximum request to proceed until it frees its resources. this can result in system hangs, and does not provide the flexibility to complete multiple passes based on the available resources, maximum demand of each process, allocation to each process, and need like the Banker's algorithm does.

    10. One of the system's operating characteristics that differs in both versions of the system would be throughput. It differs because in the old system processes could grow up to their maximum allowance of resources and there was always enough to accommodate the largest possible request. In the new system all resources can immediately be lent out, leaving nothing to be used by smaller processes since there is less to go around, throughput degrades.20

      20: But what discriminates against smaller processes?

    11. It violates the [?] [?] [?] of the operating system . Using the maximum request style OS can get stuck[?] in a unsafe[?] state in a case where there are not enough resources to finish or process21 & there is a process that can be finished[?] with the currently available resources. Banker's algorithm insures that before a process starts OS has maximum resources that may be needed by a process.22

      21: If the allocation is successful and it's the maximum needed, how could the process fail to complete because it didn't have enough resources?

      22: Why can't that be done for the new scheme too?

    12. An operating characteristic that is worse with the new allocation than with the old is efficiency.23 With resources being allocated to the maximum request, resources are not available at a rate allowed by the Banker's algorithm.24

      23: Efficiency in what sense? More efficient to implement? To execute? To reason about correctness?

      24: Why not?

    13. Utilization25 will certainly suffer if each request is granted its maximum amount of resources immediately. The system will be able to allocate too much new requests before using all its available resources. The Banker's algorithm always keeps around resources on hand to satisfy some maximum request.26 But if all resources are allocated immediately the system must constantly wait on a process to finish before allowing a waiting one to execute.

      25: Utilization of what?

      26: But doesn't the modification do that too?

    14. Granting a maximum request or resources can cause deadlock when every resource request is granted to requests done at the same time.27 (Dinning philosopher problem).28 The Banker's algorithm is a better solution. 1) No process clams more than total resources. 2) No process holds more than it needs. 3) Sum is no more than total resources available.

      27: How? The request is either granted or denied. If granted, the process runs; if not, it waits. How does that lead to deadlock?

      28: Is that why the dining philosopher's problem may deadlock? Or is it more about coordination?

    15. Process wait time will be impacted. Process wait time is how long a process has to wait for the resources it needs. In the unmodified system, total resources 10, process 1 needs 3 and process 2 needs 5. These two processes could run concurrently because there are enough resources. In the new model, process 2 needs to wait until process 1 completes.29 This is added unnecessary latency, and turns the OS into sequential like.

      29: Why? After allocation, process 1 leaves 10 - 3 = 7 units, which is more than enough to handle process 2.

    16. What differs between the modified and unmodified versions of the system is that the maximum request is immediately granted. Therefor since there is not enough resources for another process they are not issued because this would put[?] system into an unsafe state.30 This implementation does prevent deadlock, but since it immediately grants the maximum request it can only service one process at a time,31 which does not give any process resources until the existing one […]32

      30: How does that affect operating characteristics?

      31: Why is that?

      32: … has returned the resources.

    17. [ unanswered ]

This page last modified on 2012 May 2.