Lecture Notes for Operating Systems

Virtual Memory and Paging, 18 July 2002


  1. large program in small memory

    1. overlays - programmer defined fragments

  2. virtual memory

    1. virtual addresses - completely decouple physical and process addresses

    2. virtual memory fragments - pages

    3. virtual addresses - page #, page offset

  3. paging

    1. pages

      1. physical memory is divided into page frames

    2. memory management unit - virtual to real address mapping

      1. page tables - from virtual address to page frame

        1. virtual memory address structure - page number, page offset

        2. issues

          1. page table size - the trade-off

          2. translation speed - as fast as possible

          3. context switch overhead - as low as possible

        3. single level page tables

          1. registers - small and expensive but fast

          2. main store - big and cheap but slow

        4. multi-level page tables - 2^19 8k pages in a 32-bit address space

        5. page table entries - page frame number, present-absent bit, protection bits, modified-referenced bits, cache bit, pinned bit
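
A minimal sketch of the page-number/page-offset translation above, assuming the 32-bit address and 8 kbyte page figures from the multi-level item; the dict-based page table and the "present"/"frame" field names are illustrative, not from the notes:

```python
# Single-level translation sketch: 32-bit virtual addresses, 8 kbyte pages,
# so 13 offset bits and 2^19 possible page numbers.  The dict page table
# and its field names are illustrative assumptions.

OFFSET_BITS = 13                      # 8 kbyte pages
OFFSET_MASK = (1 << OFFSET_BITS) - 1

def translate(vaddr, page_table):
    page_no = vaddr >> OFFSET_BITS    # high 19 bits index the page table
    offset = vaddr & OFFSET_MASK      # low 13 bits pass through unchanged
    entry = page_table.get(page_no)
    if entry is None or not entry["present"]:
        raise LookupError("page fault on page %d" % page_no)
    return (entry["frame"] << OFFSET_BITS) | offset

page_table = {5: {"frame": 2, "present": True}}
paddr = translate((5 << 13) | 100, page_table)    # page 5 maps to frame 2
```

An absent entry raises here where real hardware would trap to the operating system's page-fault handler.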

    3. translation look-aside buffer - speeding up the mmu

      1. caching recent page-to-page-frame mappings

      2. hardware or software

      3. prefetching - process stack + code

      4. example

        1. 50 ns tlb lookup, 750 ns memory access

        2. tlb hit: 50 tlb + 750 memory access = 800 ns

        3. tlb miss: 50 tlb + 750 page table + 750 memory access = 1550 ns

      5. 8-16 entry tlbs give 80-90% hit ratios
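
The example above works out to an effective access time; a small calculation, assuming the 50/750 figures are nanoseconds and that a miss costs one extra memory reference for the page-table lookup:

```python
# Effective access time for the tlb example above, assuming the figures
# are nanoseconds and a miss adds one page-table reference in memory.

def effective_access_ns(hit_ratio, tlb=50, mem=750):
    hit_cost = tlb + mem              # 800 ns: tlb, then the access itself
    miss_cost = tlb + mem + mem       # 1550 ns: tlb, page table, access
    return hit_ratio * hit_cost + (1 - hit_ratio) * miss_cost

eat = effective_access_ns(0.9)        # ~875 ns at a 90% hit ratio
```

At the 80-90% hit ratios quoted above, the average access stays within about 15-20% of the raw memory time.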

    4. inverted page tables

      1. 64-bit addresses - 2^16-byte (64 kbyte) page frames, 2^48 ≈ 281*10^12 pages

      2. page frame -> (proc, page no) mappings

        1. 2^28-byte (256 mbyte) main store -> 2^12 page frames of 2^16 bytes (64 kbyte)

      3. tlb to speed-up mapping

      4. hashing addresses on tlb failure
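
The frame -> (proc, page no) mapping above can be sketched with a dict standing in for the hash table consulted on tlb failure; the class and field names are illustrative, not from the notes:

```python
# Inverted page table sketch: one entry per page frame, plus a hash from
# (process id, page number) back to a frame, consulted when the tlb misses.
# A dict stands in for the hash table with collision chains.

class InvertedPageTable:
    def __init__(self, n_frames):
        self.frames = [None] * n_frames   # frame -> (pid, page) or None
        self.by_page = {}                 # the "hash": (pid, page) -> frame

    def map(self, pid, page, frame):
        self.frames[frame] = (pid, page)
        self.by_page[(pid, page)] = frame

    def lookup(self, pid, page):
        # called on tlb failure; None means page fault
        return self.by_page.get((pid, page))

ipt = InvertedPageTable(n_frames=1 << 12)   # e.g. 2^12 frames
ipt.map(pid=1, page=7, frame=42)
```

The table size scales with physical frames rather than with the 2^48-page virtual space, which is the whole point.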

  4. page replacement - when good addresses go bad

    1. page faults

    2. random page selection - yank any in-store page; not very efficient

    3. optimal page replacement - furthest use in the future

    4. not recently used

      1. modified-reference bits -

        1. periodically clear reference bits to find old, unreferenced pages

        2. toss not referenced before referenced pages, and not modified before modified pages

      2. hardware implementation - still need to clear reference bits

      3. software - interrupts and protection

    5. fifo - oldest out next; no reference or modify checking

    6. second chance - oldest unreferenced out

    7. clock page replacement - second chance or fifo as a ring
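
The ring formulation above can be sketched as follows: the hand sweeps the frames, and a set reference bit buys the page a second chance while a clear bit marks the victim. A list plus an index stands in for the ring; names are illustrative:

```python
# Clock sketch: sweep a hand over the frames' reference bits, clearing set
# bits (second chance) until a clear bit is found; that frame is evicted.

def clock_evict(ref_bits, hand):
    """ref_bits: one R bit per frame.  Returns (victim frame, new hand)."""
    n = len(ref_bits)
    while ref_bits[hand]:
        ref_bits[hand] = False        # second chance: clear and move on
        hand = (hand + 1) % n
    return hand, (hand + 1) % n

ref_bits = [True, False, True]                  # frames 0..2
victim, hand = clock_evict(ref_bits, hand=0)    # clears frame 0, evicts frame 1
```

If every bit is set, the hand clears them all on one lap and evicts the frame it started from, which is exactly fifo.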

    8. lru - the past as prologue

      1. page queue ordered by access time

      2. counters

      3. matrix

      4. lru simulations

        1. not frequently used - add the read bit to a counter; doesn't forget

        2. aging - shift counters right each tick, the read bit entering at the left; old references fade

        3. intra-interval granularity - there isn't any

        4. finite resolution - usually large enough
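
The aging simulation in code, using the usual formulation in which counters shift right each tick and the read bit enters at the high-order end so recent references outweigh old ones; the 8-bit counter width is an assumed choice:

```python
# Aging sketch: per-page counters shifted right one bit per clock tick,
# with that tick's read bit inserted at the top.  Page names, the dict
# representation, and the 8-bit width are illustrative.

BITS = 8

def tick(counters, referenced):
    """counters: dict page -> int; referenced: pages read this tick."""
    for page in counters:
        r = 1 if page in referenced else 0
        counters[page] = (counters[page] >> 1) | (r << (BITS - 1))

def victim(counters):
    return min(counters, key=counters.get)   # smallest counter ~ coldest page

counters = {"a": 0, "b": 0, "c": 0}
tick(counters, {"a", "b"})    # a, b lead with a 1 bit
tick(counters, {"a"})         # a: 11000000, b: 01000000, c: 00000000
```

This also shows the two limits noted above: references within one tick are indistinguishable, and history older than 8 ticks is forgotten entirely.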

  5. design issues

    1. working set model

      1. demand paging

      2. locality of reference

      3. working set

      4. thrashing

      5. prefetch

      6. using counters to compute the working set

      7. adding the working set to paging algorithms - wsclock
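
Computing the working set from per-page time-of-last-use counters (item 6 above): at time t, the working set is every page referenced within the last tau ticks. The window tau and the timestamps are illustrative:

```python
# Working-set sketch: last_use records each page's time of last reference
# (the "counter"); membership is a simple window test against tau.

def working_set(last_use, t, tau):
    """last_use: dict page -> time of last reference."""
    return {p for p, when in last_use.items() if t - when <= tau}

last_use = {"a": 10, "b": 7, "c": 2}
ws = working_set(last_use, t=10, tau=4)   # {"a", "b"}: c has aged out
```

A wsclock-style replacer evicts pages that fail this test as its hand passes them, rather than scanning every page on each fault.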

    2. local vs. global page allocations

      1. fractional vs. weighted total allocations

      2. managing by page-fault frequency

      3. swapping is still useful to handle global thrashing

    3. page size

      1. set by hardware or software - alpha 8, 16, 32, 64kb pages

      2. internal fragmentation and small segments argue for small pages

      3. small page tables and paging overhead argue for large pages

      4. page table overhead on context switches
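
The small-page/large-page tension above has a classic back-of-the-envelope model (an added textbook formula, not from the notes): for process size s, page-table entry size e, and page size p, overhead is s*e/p for the table plus p/2 for internal fragmentation in the last page, minimized at p = sqrt(2*s*e):

```python
# Page-size trade-off model: table overhead falls and fragmentation rises
# as p grows; the minimum of s*e/p + p/2 is at p = sqrt(2*s*e).
import math

def overhead(p, s, e):
    return s * e / p + p / 2

def optimal_page_size(s, e):
    return math.sqrt(2 * s * e)

best = optimal_page_size(s=1 << 20, e=8)   # 4 kbyte pages for a 1 mbyte process
```

The optimum grows only with the square root of process size, which is why page sizes have crept up slowly even as memories have not.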

    4. the memory-management interface

      1. sharing pages

      2. pinning pages

      3. mapping files

      4. distributed shared memory


This page last modified on 8 August 2002.