Lecture Notes for Operating Systems
Virtual Memory and Paging, 6 November 2000
- large program in small memory
- overlays - programmer defined fragments
- virtual memory
- virtual addresses - completely decouple physical and process addresses
- virtual memory fragments - pages
- virtual addresses - page #, page offset
- paging
- pages
- physical memory is divided into page frames
- memory management unit - virtual to real address mapping
- page tables - from virtual address to page frame
- virtual memory address structure - page number, page offset
- issues
- page table size - the trade-off
- translation speed - as fast as possible
- context switch overhead - as low as possible
- single level page tables
- registers - small and expensive but fast
- main store - big and cheap but slow
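A minimal sketch of a single-level lookup as just described, assuming 8-kbyte pages in a 32-bit address space and a flat table held in main store; the names (frame_of, NO_FRAME, translate) are illustrative, not from any particular system.

    #include <stdint.h>

    #define OFFSET_BITS 13                          /* 8-kbyte pages                 */
    #define PAGE_SIZE   (1u << OFFSET_BITS)
    #define NUM_PAGES   (1u << (32 - OFFSET_BITS))  /* 2^19 pages in a 32-bit space  */
    #define NO_FRAME    0xFFFFFFFFu                 /* marks an absent page          */

    /* page frame number for each virtual page (would be initialised to NO_FRAME) */
    static uint32_t frame_of[NUM_PAGES];

    /* what the mmu does: split the address, look up the frame, glue it back on */
    uint32_t translate(uint32_t vaddr)
    {
        uint32_t page   = vaddr >> OFFSET_BITS;      /* page number */
        uint32_t offset = vaddr & (PAGE_SIZE - 1);   /* page offset */

        if (frame_of[page] == NO_FRAME)
            return NO_FRAME;   /* page fault: the os brings the page in and retries */

        return (frame_of[page] << OFFSET_BITS) | offset;
    }

A flat table this size (2^19 entries per process) is what motivates the multi-level scheme below.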
- multi-level page tables - 2^19 8-kbyte pages in a 32-bit address space
- page table entries - page frame number, present-absent bit,
protection bits, modified-referenced bits, cache bit, pinned bit
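A sketch of a two-level walk over entries like those just listed, assuming the 32-bit address is split 10/9/13 (top-level index, second-level index, 8-kbyte page offset); the field widths and names are illustrative only.

    #include <stdint.h>
    #include <stddef.h>

    /* one page table entry; field widths are illustrative */
    typedef struct {
        uint32_t frame      : 19;  /* page frame number                   */
        uint32_t present    : 1;   /* present/absent bit                  */
        uint32_t protection : 3;   /* read/write/execute                  */
        uint32_t modified   : 1;   /* set on a write (dirty)              */
        uint32_t referenced : 1;   /* set on any access                   */
        uint32_t no_cache   : 1;   /* cache bit, for device pages         */
        uint32_t pinned     : 1;   /* page may not be evicted             */
    } pte_t;

    #define OFFSET_BITS 13         /* 8-kbyte pages                       */
    #define L2_BITS     9          /* 512 entries per second-level table  */
    #define L1_BITS     10         /* 1024 entries in the top-level table */

    /* top-level table; NULL means no second-level table exists yet */
    static pte_t *top_level[1u << L1_BITS];

    uint32_t translate_2level(uint32_t vaddr)
    {
        uint32_t l1     = vaddr >> (OFFSET_BITS + L2_BITS);
        uint32_t l2     = (vaddr >> OFFSET_BITS) & ((1u << L2_BITS) - 1);
        uint32_t offset = vaddr & ((1u << OFFSET_BITS) - 1);

        pte_t *table = top_level[l1];
        if (table == NULL || !table[l2].present)
            return 0;              /* page fault: os maps the page, then retries */

        table[l2].referenced = 1;  /* hardware sets this on each access          */
        return ((uint32_t)table[l2].frame << OFFSET_BITS) | offset;
    }

Only the second-level tables actually in use need to exist, which is what saves space over the flat 2^19-entry table.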
- translation look-aside buffer - speeding up the mmu
- caching page table to page frame mappings
- hardware or software
- prefetching - process stack + code
- example
- 50 ns tlb lookup, 750 ns memory access
- tlb hit: 50 ns tlb + 750 ns memory access
- tlb miss: 50 ns tlb + 750 ns page table + 750 ns memory access
- 8-16 entry tlbs give 80-90% hit ratios
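Working the example through with the 90% hit ratio quoted above (and assuming a single-level table, so a miss costs one extra memory reference), the effective access time is

    0.9 * (50 + 750) + 0.1 * (50 + 750 + 750) = 720 + 155 = 875 ns

so on average the tlb keeps translation close to the cost of a single memory access.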
- inverted page tables
- 64-bit addresses - with 64-kbyte pages, 2^48 = 281*10^12 pages
- page frame -> (proc, page no) mappings
- 2^28-byte (256-mbyte) main store -> 2^12 64-kbyte page frames
- tlb to speed-up mapping
- hashing addresses on tlb failure
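A sketch of the hashed lookup done on a tlb miss with an inverted table, assuming 2^12 page frames as in the example above and one hash chain per bucket; hash_ipt, ipt and the chain layout are illustrative.

    #include <stdint.h>

    #define NUM_FRAMES (1u << 12)          /* one entry per page frame       */
    #define HASH_SIZE  NUM_FRAMES

    typedef struct {
        uint32_t pid;                      /* owning process                 */
        uint64_t page;                     /* virtual page number            */
        int      next;                     /* next frame in the chain, or -1 */
        int      valid;
    } ipte_t;

    static ipte_t ipt[NUM_FRAMES];         /* indexed by page frame number   */
    static int    hash_head[HASH_SIZE];    /* chain heads (initialised to -1) */

    static unsigned hash_ipt(uint32_t pid, uint64_t page)
    {
        return (unsigned)((page ^ pid) % HASH_SIZE);
    }

    /* on a tlb miss, find the frame holding (pid, page); -1 means page fault */
    int lookup_frame(uint32_t pid, uint64_t page)
    {
        for (int f = hash_head[hash_ipt(pid, page)]; f != -1; f = ipt[f].next)
            if (ipt[f].valid && ipt[f].pid == pid && ipt[f].page == page)
                return f;                  /* the frame number is the table index */
        return -1;
    }

The table grows with physical memory (one entry per frame) rather than with the virtual address space, which is the whole point.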
- page replacement - when good addresses go bad
- page faults
- random page selection - yank any in-store page; not very efficient
- optimal page replacement - furthest use in the future
- not recently used
- modified and referenced bits
- periodically clear the reference bits to find old, unreferenced pages
- toss not-referenced pages before referenced ones, and not-modified pages before modified ones
- hardware implementation - still need to clear reference bits
- software - interrupts and protection
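A sketch of the nru selection just described, assuming the referenced bit is cleared periodically and the modified bit only when the page is written back; pages fall into four classes and the victim comes from the lowest non-empty class. Names and sizes are illustrative.

    #include <stdint.h>

    #define NUM_FRAMES 4096

    struct frame {
        uint8_t referenced;   /* cleared periodically by the clock interrupt */
        uint8_t modified;     /* cleared only when the page is written back  */
    };

    static struct frame frames[NUM_FRAMES];

    /* class 0: not referenced, not modified ... class 3: referenced and modified */
    int nru_victim(void)
    {
        int victim = -1, best = 4;
        for (int f = 0; f < NUM_FRAMES; f++) {
            int class = 2 * frames[f].referenced + frames[f].modified;
            if (class < best) {
                best = class;
                victim = f;
                if (class == 0)
                    break;          /* cannot do better than class 0 */
            }
        }
        return victim;
    }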
- fifo - oldest out next; no reference or modify checking
- second chance - oldest unreferenced out
- clock page replacement - second chance or fifo as a ring
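A sketch of the clock formulation: the frames form a ring and a hand sweeps around it, clearing referenced bits (the second chance) until it finds an unreferenced page to evict. The names are illustrative.

    #define NUM_FRAMES 4096

    static unsigned char referenced[NUM_FRAMES];  /* r bit per frame */
    static int hand = 0;                          /* the clock hand  */

    int clock_victim(void)
    {
        for (;;) {
            int f = hand;
            hand = (hand + 1) % NUM_FRAMES;
            if (referenced[f] == 0)
                return f;             /* old and unreferenced: evict it      */
            referenced[f] = 0;        /* referenced: give it a second chance */
        }
    }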
- lru - the past as prologue
- page queue ordered by access time
- counters
- matrix
- lru simulations
- not frequently used - add the reference bit to a counter at each tick; doesn't forget old references
- aging - shift the counter right and insert the reference bit at the left (see the sketch below)
- intra-interval granularity - there isn't any
- finite resolution - usually large enough
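A sketch of the aging simulation: at each clock tick every counter is shifted right one bit and the page's reference bit is or-ed in at the most-significant end, so recent references outweigh old ones and old references eventually fade to zero. Counter width and names are illustrative.

    #include <stdint.h>

    #define NUM_FRAMES 4096

    static uint8_t  referenced[NUM_FRAMES];  /* r bit, set by hardware on access  */
    static uint32_t age[NUM_FRAMES];         /* larger value = more recently used */

    /* run once per clock interrupt */
    void age_pages(void)
    {
        for (int f = 0; f < NUM_FRAMES; f++) {
            age[f] = (age[f] >> 1) | ((uint32_t)referenced[f] << 31);
            referenced[f] = 0;               /* start a fresh interval */
        }
    }

    /* victim = the frame with the smallest counter */
    int aging_victim(void)
    {
        int victim = 0;
        for (int f = 1; f < NUM_FRAMES; f++)
            if (age[f] < age[victim])
                victim = f;
        return victim;
    }

The 32-bit counter gives no ordering within a tick and remembers nothing older than 32 ticks, which is the finite resolution noted above.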
- design issues
- working set model
- demand paging
- locality of reference
- working set
- thrashing
- prefetch
- using counters to compute the working set
- adding the working set to paging algorithms - wsclock
- local vs. global page allocations
- fractional vs. weighted total allocations
- managing by page-fault frequency
- swapping is still useful to handle global thrashing
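A sketch of the counter-based working-set test mentioned above, assuming each frame records the owning process's virtual time at its last reference; a page whose age exceeds the window tau is outside the working set and is an eviction candidate (wsclock combines this test with the circular scan of the clock algorithm). Names and the value of tau are illustrative.

    #include <stdint.h>

    #define NUM_FRAMES 4096
    #define TAU        100u      /* working-set window, in ticks of virtual time */

    static uint8_t  referenced[NUM_FRAMES];
    static uint64_t last_use[NUM_FRAMES];    /* virtual time of last reference */

    /* called on a page fault, at the faulting process's current virtual time */
    int ws_victim(uint64_t now)
    {
        int victim = -1;
        for (int f = 0; f < NUM_FRAMES; f++) {
            if (referenced[f]) {             /* used this interval: in the working set */
                referenced[f] = 0;
                last_use[f]   = now;
            } else if (now - last_use[f] > TAU) {
                victim = f;                  /* outside the working set               */
            }
        }
        return victim;                       /* -1: everything is in the working set  */
    }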
- page size
- set by hardware or software - alpha 8, 16, 32, 64kb pages
- internal fragmentation and small segments argue for small pages
- smaller page tables and lower paging overhead argue for large pages
- page table overhead on context switches
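One standard back-of-the-envelope way to quantify the trade-off: assume an average process of size s bytes, page size p, and e bytes per page table entry. The per-process overhead is roughly

    overhead = s*e/p  (page table)  +  p/2  (internal fragmentation in the last page)

and setting the derivative -s*e/p^2 + 1/2 to zero gives the optimum

    p = sqrt(2*s*e)

so, for example, s = 1 mbyte and e = 8 bytes give an optimum of about 4 kbytes.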
- the memory-management interface
- sharing pages
- pinning pages
- mapping files
- distributed shared memory
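As a concrete instance of the file-mapping part of this interface, a sketch using the posix mmap call, which maps a file's pages into the caller's address space and lets ordinary page faults pull them in on demand; the file name is illustrative.

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("notes.txt", O_RDONLY);        /* illustrative file name */
        if (fd < 0)
            return 1;

        struct stat st;
        if (fstat(fd, &st) < 0)
            return 1;

        /* map the whole file read-only and shared; no data is copied yet */
        char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
            return 1;

        /* touching the mapping faults the file's pages in one at a time */
        fwrite(p, 1, st.st_size, stdout);

        munmap(p, st.st_size);
        close(fd);
        return 0;
    }

With MAP_SHARED, two processes mapping the same file share the same page frames, which is one way the sharing of pages above is realised.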