This test has six questions. The test runs the entire class period, starting at 7:25 and ending at 9:15. When you finish the test, you may leave.
Always justify answers unless the question explicitly states it's not necessary. Some questions require reasonable assumptions in order to be answered. In such cases the answer should clearly state the assumptions made.

Define each of the following terms:

• Translation-lookaside buffer
• Deadlock prevention
• Effective access time
• Working set
• Banker's Algorithm
• Thrashing
• Translation-lookaside buffer: A cache of (page number, page frame) pairs used to short-circuit page-table look-ups.
• Deadlock prevention: Techniques that arrange resource management so that deadlock cannot occur.
• Effective access time: The expected time to reference primary store, taking page faults into account: (1 − ρ)·ma + ρ·f, where ρ is the page-fault rate, ma the primary-store access time, and f the page-fault service time.
• Working set: The set of pages accessed by a process over some time period.
• Banker's Algorithm: An algorithm for deadlock avoidance based on making sure it's always possible to satisfy some process's allocation request.
• Thrashing: A feedback loop in which a process pages out pages it will almost immediately need to page back in.
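As a concrete sketch of the effective-access-time formula (the constants below are invented for illustration, not taken from the test):

    /* Effective access time: a minimal sketch. The constants are
       hypothetical, chosen only to make the arithmetic visible. */
    #include <stdio.h>

    int main(void) {
        double ma  = 100e-9;  /* primary-store access time: 100 ns (assumed) */
        double f   = 8e-3;    /* page-fault service time: 8 ms (assumed)     */
        double rho = 1e-6;    /* page-fault rate: one fault per million refs */

        /* EAT = (1 - rho) * ma + rho * f */
        double eat = (1.0 - rho) * ma + rho * f;
        printf("effective access time = %.1f ns\n", eat * 1e9);
        return 0;
    }

Even a one-in-a-million fault rate adds 8 ns to every reference, which is why the fault rate dominates effective access time.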
The answers, approximately verbatim, and in no particular order:
• Translation-lookaside buffer: [no answer]
• Deadlock prevention: Adding concurrency[1] which doesn't allow two threads to attempt simultaneous access of one or a set of resources
• Effective access time: [no answer]
• Working set: [no answer]
• Banker's Algorithm: [no answer]
• Thrashing: [no answer]
1: No, nothing to do with concurrency.
• Translation-lookaside buffer: [no answer]
• Deadlock prevention: Preventing deadlocks using different techniques such as turn-taking[2] or locality[3] so that resources aren't being accessed at the same time.[4]
• Effective access time: Access/write/read times in ms from a drive/memory.
• Working set: Whatever resources[5] are available[6] for allocation.
• Banker's Algorithm: [no answer]
• Thrashing: [no answer]
2: Turn-taking is part of a method for providing mutual exclusion.
3: How does locality help with deadlock prevention?
4: Nothing to do with concurrency.
5: No, just page frames.
6: Needed, not available.
• Translation-lookaside buffer: A register with parallel search capabilities that is used to store commonly used[7] pages[8] in order to improve access times of pages.
• Deadlock prevention: The act of preventing a program from getting into a state where no progress can be made due to resource availability.
• Effective access time: The average access time for each instruction, calculated with register rate, page-fault rate, register access time, and memory access time: (1 − ρ)·ma + ρ·f.
• Working set: An idea used in memory systems stating that programs use a general locality at a single point in time.[9] A working set is the pages in this locality.
• Banker's Algorithm: [no answer]
• Thrashing: A problem that occurs when a process does not receive a large enough slice[?] for pages in memory. Pages are swapped out but are needed near immediately, causing a high fault rate.
7: Most recently referenced, not commonly used.
8: Page numbers; even small pages are too big to fit in a TLB.
9: An interval of time, the working-set window.
• Translation-lookaside buffer: A way to look up pages[10] in cache, stored on the CPU.
• Deadlock prevention: Trying to prevent two threads competing for similar resources from blocking when the resource(s) are unavailable. Prevention of cycle allocation requests.[11]
• Effective access time: The expected value of primary store access time (page faults/total store references).[12]
• Working set: [no answer]
• Banker's Algorithm: An algorithm for implementing mutual exclusion.[13]
• Thrashing: Playing a sick solo on an electric guitar.
10: Pages, or page frames?
11: And...?
12: This is the page-fault rate.
13: Nothing to do with concurrency.
• Translation-lookaside buffer: [no answer]
• Deadlock prevention: Utilizing techniques at an OS level to prevent a process from stalling for an infinite amount of time due to blocked resource allocation.
• Effective access time:[14] (1 − ρ)·s + ρ·f; s is the time to access primary store, f page-fault handling time.
• Working set: The set of storage that is currently in execution.[15]
• Banker's Algorithm: Algorithm that prevents uncoordinated read and write to shared storage.[16] Fixes the ATM problem.
• Thrashing: [no answer]
14: This is an example, not a definition.
15: So the utilization is the working set?
16: Nothing to do with concurrency.
• Translation-lookaside buffer: An expensive piece of hardware used to locate and store pages[17] using keys and values.
• Deadlock prevention: A technique used to control resources to prevent deadlock.[18]
• Effective access time: When address binding is done during code execution instead of compile-time or load-time.[19]
• Working set: [no answer]
• Banker's Algorithm: [no answer]
• Thrashing: When there is more page swapping going on than actual code execution.
17: Store pages how?
18: How?
19: Nothing to do with binding times.
Locality provides two benefits for demand paging. First, references to particular addresses are correlated, so that sequences of address references are circumscribed by a relatively small range of the address space (spatial locality). Second, the small address ranges accessed by a process are relatively stable over short-term process execution and change slowly over the longer term (temporal locality).
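For illustration (this example is not part of the answer key), the two loops below touch the same data but differ sharply in locality; the row-major walk references consecutive addresses and reuses a small set of pages, while the column-major walk strides across many pages per pass:

    /* Locality illustration: row-major traversal follows the array's
       memory layout (spatial locality) and revisits the same few pages
       (temporal locality); column-major traversal jumps N ints between
       consecutive references. N is an arbitrary example size. */
    #include <stddef.h>

    #define N 1024

    long sum_row_major(int a[N][N]) {        /* good locality */
        long s = 0;
        for (size_t i = 0; i < N; i++)
            for (size_t j = 0; j < N; j++)
                s += a[i][j];                /* consecutive addresses */
        return s;
    }

    long sum_column_major(int a[N][N]) {     /* poor spatial locality */
        long s = 0;
        for (size_t j = 0; j < N; j++)
            for (size_t i = 0; i < N; i++)
                s += a[i][j];                /* stride of N ints */
        return s;
    }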
The answers, approximately verbatim, and in no particular order:
1: Which page files and code? What is the relation between these page files and code and the method at 107? And what are page files?
2: Sure, but which others?
3: Pages don't get loaded into the CPU.
4: Not only is it much faster, it's otherwise impossible.
5: This reads like a chicken and egg problem: to efficiently demand load process information into primary store, the process information must be in primary store. But to get process information into primary store, it must be demand loaded. Maybe the out is in “efficiently”: the first demand load is inefficient, and the others are not?
The more popular a page is (the more accesses), the more local that page should be kept to the CPU, like cache storage, so that the page look-up will be faster.[6]
6: True, but that's the wrong kind of locality.
7: Whole pages? Wouldn't that be too big?
8: How else would other pages be loaded in a demand-paged system?
9: Section of what?
Explain how the MMU translates the logical address facefeed₁₆ into the physical address baddad₁₆. Assume the system uses an inverted page table; you'll have to make other assumptions too.
The CPU issues the logical address facefeed₁₆ to the MMU. The MMU splits the address into the page number facefeed₁₆/2⁶ = 3eb3bfb₁₆ and the page offset facefeed₁₆ mod 2⁶ = 2d₁₆. If the page number 3eb3bfb₁₆ is in the translation-lookaside buffer (TLB), the TLB provides the frame number f to the MMU, which forms the physical address f·2⁶ + 2d₁₆.
If the page number is not in the TLB, the MMU searches the inverted page-table entries for the page number. If found, the MMU uses the index at which the page number was found as the page-frame number f and computes the physical address as in the previous paragraph. If the page number is not in the inverted page table, the MMU issues a page fault. Once the page lands in primary store in page frame f, the MMU computes the physical address as in the previous paragraph and makes the faulting process ready for execution.
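A minimal sketch of this translation in C, under the answer's assumptions (2⁶-byte pages, a linear search of the inverted table). The table size and contents are invented for illustration; a real inverted page table is usually hashed and carries a valid bit, both omitted here:

    #include <stdint.h>
    #include <stdio.h>

    #define OFFSET_BITS 6
    #define NFRAMES     0x40000     /* assumed number of page frames */

    /* inverted[f] holds the page number resident in frame f */
    static uint32_t inverted[NFRAMES];

    int translate(uint32_t logical, uint32_t *physical) {
        uint32_t page   = logical >> OFFSET_BITS;             /* facefeed/2^6   */
        uint32_t offset = logical & ((1u << OFFSET_BITS) - 1);/* facefeed mod 2^6 */
        for (uint32_t f = 0; f < NFRAMES; f++)
            if (inverted[f] == page) {                        /* index = frame  */
                *physical = (f << OFFSET_BITS) | offset;      /* f*2^6 + offset */
                return 0;
            }
        return -1;                  /* page fault: page not in primary store */
    }

    int main(void) {
        uint32_t phys;
        /* place page 3eb3bfb in the frame that makes the answer come out */
        inverted[0xbaddadu >> OFFSET_BITS] = 0xfacefeedu >> OFFSET_BITS;
        if (translate(0xfacefeedu, &phys) == 0)
            printf("%x -> %x\n", (unsigned)0xfacefeedu, (unsigned)phys);
        return 0;
    }

Run, this prints facefeed -> baddad; the linear search is what makes inverted tables slow without a TLB or hash in front of them.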
The answers, approximately verbatim, and in no particular order:
Assuming there is a cache table,[1] the cache table would be checked first to see if the page exists before going out to primary store. If the page is not found then the MMU (memory management unit) translates the address facefeed₁₆ to baddad₁₆[2] after calculating in the base register and limit[?] register (assuming this is relocatable code). The MMU then retrieves the data at the address baddad₁₆.
1: What is a cache table?
2: How does the translation go?
3: Linking? Is this the right stage to be linking, even for dynamic link-loading?
4: What information?
5: How does a store access, which may be a read, insert information into the physical address space?
Deadlock detection can't be handled piecemeal. Deadlock can be prevented locally, resource by resource (although every resource must be dealt with), but deadlock detection is a global property: if the deadlocked processes are split among detection procedures, no individual procedure will have enough information to detect the deadlock.
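To make the global-property point concrete, here is a sketch (not part of the answer key; the four-process wait-for graph is invented) of detection as a cycle search over the whole wait-for graph. A procedure holding only some rows of the matrix could never see the cycle:

    #include <stdbool.h>
    #include <stdio.h>

    #define NPROC 4

    /* waits[i][j] is true when process i waits for a resource held by j */
    static bool waits[NPROC][NPROC];

    static bool dfs(int p, bool on_path[], bool done[]) {
        if (on_path[p]) return true;     /* back edge: a cycle, so deadlock */
        if (done[p])    return false;
        on_path[p] = true;
        for (int q = 0; q < NPROC; q++)
            if (waits[p][q] && dfs(q, on_path, done))
                return true;
        on_path[p] = false;
        done[p] = true;
        return false;
    }

    static bool deadlocked(void) {
        bool on_path[NPROC] = {false}, done[NPROC] = {false};
        for (int p = 0; p < NPROC; p++)
            if (dfs(p, on_path, done)) return true;
        return false;
    }

    int main(void) {
        /* 0 waits for 1, 1 for 2, 2 for 3, 3 for 0: a four-process cycle */
        waits[0][1] = waits[1][2] = waits[2][3] = waits[3][0] = true;
        printf("%s\n", deadlocked() ? "deadlock" : "no deadlock");
        return 0;
    }

Split the processes two and two between detectors and each sees only a path, never a cycle, which is the point of the paragraph above.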
The answers, approximately verbatim, and in no particular order:
1: Does “the resource itself” mean the individual resources making up the subset? Or does “the resource itself” mean the subset? Why bother defining subsets? They don't get mentioned again in the answer (that I can tell).
2: It, it, it. What “it”? The hundreds of threads?
3: “it and it” What could that possibly mean?
4: Why would a deadlock detection algorithm be interested in USB printers if the printers can't deadlock?
5: Ok, but what about sequential resources?
6: Ok, but what about detection?
7: Right, but unlike deadlock prevention, which only requires that one of the properties be missing, deadlock detection requires that all four properties be present.
8: If the process continually tries to allocate a resource, how can it be deadlocked?
9: Ok, but what about deadlock detection?
10: Ok, but why is that necessary? What happens when different deadlock detection techniques are used on different resource subsets? Why?
Class A: modified, not accessed
Class B: modified, accessed
Class C: not modified, not accessed
Class D: not modified, accessed

1) Order these classes from most desirable to least desirable in terms of paging performance. Justify your ordering.

2) Given your ordering, estimate the relative paging-performance value of adjacent classes on your list. That is, if your ordering is A, B, C, D, explain how much more valuable pages in class A are than pages in class B with respect to paging performance, and so on. This question requires you to assign a reasonable measure of paging-performance value to the page classes.
The ordering is C, D, A, B. Pages in class C don't have to be paged out and are unlikely to be paged back in. Pages in class D don't have to be paged out, but may have to be paged back in. Pages in class A have to be paged out, but may not have to be paged back in. Pages in class B have to be paged out and may have to be paged back in.
Define page-class value to be inversely proportional to the paging time for pages in the class; the larger the paging time, the less valuable the class. Assume the accessed bit is an accurate predictor of future access. Assume paging out and paging in take the same amount of time t. Class C pages incur no paging time and so are infinitely valuable. Class D and Class A pages have the same paging time t. Class B pages have paging time 2t.
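The valuation can be tabulated; this small sketch (not part of the answer key; the time unit t is symbolic) just totals the transfers each class incurs:

    /* Paging time per class in units of t (one page transfer). Class C's
       zero cost is what makes it "infinitely valuable" under the
       inverse-proportionality definition above. */
    #include <stdio.h>

    int main(void) {
        struct { const char *cls; int out, in; } classes[] = {
            { "C: not modified, not accessed", 0, 0 }, /* no page-out, no page-in */
            { "D: not modified, accessed",     0, 1 }, /* may page back in        */
            { "A: modified, not accessed",     1, 0 }, /* must page out           */
            { "B: modified, accessed",         1, 1 }, /* both transfers          */
        };
        for (int i = 0; i < 4; i++)
            printf("%-32s paging time = %d t\n",
                   classes[i].cls, classes[i].out + classes[i].in);
        return 0;
    }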
The answers, approximately verbatim, and in no particular order:
If C takes 100 ms to replace,[1] D will take 150 ms due to having to be added in the future. A will take 30 ms due to read and write, and B will take 350 ms due to being written more often.[2]
1: Why should a class C page need any time to be replaced?
2: How many times does a class B page need to be written? Once out to the disk, and once back into primary store. What else is there?
Assuming most desirable to replace to least desirable to replace. C and A classes are both not accessed, therefore it would be a good bet that no process would be accessing a page in that class anytime soon. Not modified is more desirable to replace because it adds an extra assurance that this page has not been accessed or modified.[3][4]
3: And also that it doesn't have to be paged out to save the changes.
4: Where are the relative valuations?
If the pages are not modified, then the page table doesn't need to be changed when returning that page(s).[5][6]
5: Why not? Doesn't the page table always need to be modified when changing the relation between a page and a page frame?
6: Where are the relative valuations?
This ordering was chosen since the modification takes longer than accessing.[8] Obviously in the case where the class[9] has been modified and accessed is the ideal situation.[10] And from there the classes are ordered using the above logic.[11] Class A and B are much more valuable than D and C,[12] and D and C are not much different from each other in terms of value.

7: These are ordered the opposite of most desirable to least desirable, right?
8: What is being modified and accessed? Pages? Page-table entries? Bit flags?
9: Does “the class” mean “a page in the class”?
10: Why is a candidate sacrifice page that has been modified and accessed ideal? Why isn't a candidate sacrifice page that hasn't been either modified or accessed more ideal?
11: Which logic?
12: By what measure?
A page that hasn't been used or accessed is the quickest one to use first since it's like a fresh blank page.
Then an unmodified but accessed page requires only to obtain access but the page itself is still unused.[13] Then if a page is modified but not accessed then you are free to grab the page but what is on it must be wiped to be used.[14] Then a modified and accessed page would require the most work of releasing [?] the page and then also copying and clearing out its contents.[15]

13: How can the page be unused if it's been accessed? And what is obtaining access to what?
14: What happens to the stuff that gets wiped? Is it gone forever? And isn't it true for any page class that the previous contents of the page frame must be wiped for the page frame to be reused?
15: And the relative page-class values?
A three-level page table splits logical addresses into four parts:

(root-table index, intermediate-table index, leaf-table index, page offset)

Because 2⁶⁴ = (2¹⁶)⁴, each component could be 16 bits, but that would give 64K-byte pages, which is a little big. Alternatively, distributing three offset bits to the indices provides 2¹³ = 8K-byte pages and 2¹⁷ = 128K-entry index tables.
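A sketch of that split in C (the example address is arbitrary; only the bit widths come from the answer above):

    /* Split a 64-bit logical address into three 17-bit indices and a
       13-bit offset: 3*17 + 13 = 64, matching the model answer. */
    #include <stdint.h>
    #include <stdio.h>

    #define OFFSET_BITS 13
    #define INDEX_BITS  17
    #define MASK(n) ((1ull << (n)) - 1)

    int main(void) {
        uint64_t va = 0xfacefeedfacefeedull;  /* arbitrary example address */
        uint64_t offset = va & MASK(OFFSET_BITS);
        uint64_t leaf   = (va >> OFFSET_BITS) & MASK(INDEX_BITS);
        uint64_t mid    = (va >> (OFFSET_BITS + INDEX_BITS)) & MASK(INDEX_BITS);
        uint64_t root   =  va >> (OFFSET_BITS + 2 * INDEX_BITS);
        printf("root=%llx mid=%llx leaf=%llx offset=%llx\n",
               (unsigned long long)root, (unsigned long long)mid,
               (unsigned long long)leaf, (unsigned long long)offset);
        return 0;
    }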
The answers, approximately verbatim, and in no particular order: