Memory Management: From Absolute Addresses to Demand Paging

Daniel Sanchez
Computer Science and Artificial Intelligence Laboratory
M.I.T.

http://www.csg.csail.mit.edu/6.823

L07-2 Multilevel Caches

• A memory cannot be both large and fast
• Add a level of cache to reduce the miss penalty
  – Each level can have longer latency than the level above
  – So, increase the size of the cache at each level

CPU → L1 → L2 → DRAM

Metrics:
– Local miss rate = misses in this cache / accesses to this cache
– Global miss rate = misses in this cache / CPU memory accesses
– Misses per instruction = misses in this cache / number of instructions
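A small worked example may help; the event counts below are made up purely for illustration (a minimal Python sketch):

```python
# Hypothetical counts for a two-level hierarchy (illustrative numbers only).
instructions = 1_000_000
cpu_accesses = 1_300_000    # loads/stores issued by the CPU
l1_misses    = 65_000       # these become accesses to the L2
l2_misses    = 13_000       # these go on to DRAM

l2_local_miss_rate  = l2_misses / l1_misses     # misses in L2 / accesses to L2  = 0.20
l2_global_miss_rate = l2_misses / cpu_accesses  # misses in L2 / CPU accesses    = 0.01
l2_mpki = l2_misses / instructions * 1000       # misses per 1000 instructions   = 13.0

print(l2_local_miss_rate, l2_global_miss_rate, l2_mpki)
```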


L07-3 Inclusion Policy

• Inclusive multilevel caches:
  – Inner cache data must also be in the outer cache
  – External accesses need only check the outer cache
  – Most common case

• Non-inclusive multilevel caches:
  – Inner cache may hold data not in the outer cache
  – On a miss, both the inner and the outer level store a copy of the data

• Exclusive multilevel caches:
  – Inner cache data is not in the outer cache
  – Swap lines between inner/outer caches on a miss
  – Used in the AMD Athlon with a 64 KB primary and 256 KB secondary cache

Pros & cons of each type?


L07-4 Write Policy

• Write-through: propagate writes to the next level in the hierarchy
  – Keeps the next-level cache / memory updated
  – Simple, high-bandwidth

• Write-back: writes are not propagated to the next level until the block is replaced
  – Contents of the next-level cache / memory can be stale
  – Complex, lower bandwidth
  – Most common case
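The difference can be sketched for a single cached block; the block and next-level structures below are my own illustration, not from the lecture:

```python
# Contrast write-through and write-back for one cached block with a dirty bit.
class Block:
    def __init__(self):
        self.data = 0
        self.dirty = False

def store(block, value, next_level, write_back):
    block.data = value
    if write_back:
        block.dirty = True               # next level becomes stale until eviction
    else:
        next_level['data'] = value       # write-through: propagate immediately

def evict(block, next_level, write_back):
    if write_back and block.dirty:
        next_level['data'] = block.data  # propagate only when the block is replaced
        block.dirty = False

blk, mem = Block(), {'data': 0}
store(blk, 42, mem, write_back=True)     # mem['data'] is still 0 (stale)
evict(blk, mem, write_back=True)         # now mem['data'] == 42
print(mem['data'])
```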


L07-5 Typical Memory Hierarchies


L07-6 Memory Management

• The Fifties
  – Absolute addresses
  – Dynamic address translation

• The Sixties
  – Atlas’ demand paging
  – Paged memory systems and TLBs

• Modern Systems


L07-7 Names for Memory Locations

[Figure: machine language address → (ISA) → virtual address → (address mapping) → physical address → Physical Memory (DRAM)]

• Machine language address – as specified in machine code
• Virtual address – the ISA specifies the translation of a machine code address into the virtual address of a program variable (sometimes called an effective address)
• Physical address – the operating system specifies the mapping of a virtual address into the name of a physical memory location


L07-8 Absolute Addresses

EDSAC, early ’50s: virtual address = physical memory address

• Only one program ran at a time, with unrestricted access to the entire machine (RAM + I/O devices)
• Addresses in a program depended upon where the program was to be loaded in memory
• But it was more convenient for programmers to write location-independent subroutines

How could location independence be achieved?
The linker and/or loader modify the addresses of subroutines and callers when building the program memory image.


L07-9 Multiprogramming

Motivation: in the early machines, I/O operations were slow and each word transferred involved the CPU

Higher throughput if the CPU and I/O of two or more programs were overlapped. How? Multiprogramming

Location-independent programs: ease of programming and storage management ⇒ need for a base register

Protection: independent programs should not affect each other inadvertently ⇒ need for a bound register

[Figure: prog1 and prog2 loaded at different locations in physical memory]


L07-10 Simple Base and Bound Translation

[Figure: the effective address is compared against the Bound Register (segment length) ⇒ bounds violation?; the effective address is added to the Base Register to form the physical address of the current segment in main memory]

Base and bound registers are visible/accessible only when the processor is running in supervisor mode.
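A rough sketch of this translation; the function and exception names are my own labels:

```python
# Base-and-bound translation: check against the bound, then relocate by the base.
class BoundsViolation(Exception):
    pass

def translate(effective_addr, base_reg, bound_reg):
    """Return the physical address, or raise on a bounds violation."""
    if effective_addr >= bound_reg:      # bound register holds the segment length
        raise BoundsViolation(hex(effective_addr))
    return base_reg + effective_addr     # relocate by the base register

# Example: a 16 KB segment loaded at physical address 0x20000.
print(hex(translate(0x1234, base_reg=0x20000, bound_reg=0x4000)))   # 0x21234
```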


L07-11 Separate Areas for Program and Data

[Figure: data accesses are checked against a Data Bound Register and relocated by a Data Base Register; instruction fetches (from the program counter) are checked against a Program Bound Register and relocated by a Program Base Register; both map into main memory]

What is an advantage of this separation? (Scheme used on all Cray vector supercomputers prior to X1, 2002)


L07-12 Memory Fragmentation

Users 4 & 5 Users 2 & 5 free arrive leave OS OS OS Space Space Space 16K user 1 16K user 1 user 1 16K user 2 24K user 2 24K 24K user 4 16K 24K user 4 16K 8K 8K 32K user 3 32K user 3 user 3 32K

24K user 5 24K 24K

As users come and go, the storage is “fragmented”. Therefore, at some stage programs have to be moved around to compact the storage.


L07-13 Paged Memory Systems

• A processor-generated address can be interpreted as a pair ⟨page number, offset⟩
• A page table contains the physical address of the base of each page

[Figure: the pages of User-1’s address space map through User-1’s page table to scattered physical pages]

Page tables make it possible to store the pages of a program non-contiguously.
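A minimal sketch of this kind of translation, assuming 4 KB pages and a dictionary standing in for the page table:

```python
# Paged address translation: split the address into (page number, offset),
# then look up the physical base of that page in the page table.
PAGE_SIZE = 4096

def translate(vaddr, page_table):
    page_number = vaddr // PAGE_SIZE
    offset      = vaddr %  PAGE_SIZE
    page_base   = page_table[page_number]   # physical base address of the page
    return page_base + offset

# Pages of one program stored non-contiguously in physical memory:
page_table = {0: 0x3000, 1: 0x0000, 2: 0x7000}
print(hex(translate(0x1234, page_table)))    # page 1, offset 0x234 -> 0x234
```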

L07-14 Private Address Space per User

[Figure: Users 1, 2, and 3 each have their own page table; the same virtual address VA1 in each user maps through that user’s page table to a different physical page, alongside OS pages and free pages in physical memory]

• Each user has a page table
• The page table contains an entry for each user page


L07-15 Where Should Page Tables Reside?

• Space required by the page tables (PTs) is proportional to the address space, number of users, ...
  ⇒ Space requirement is large: too expensive to keep in registers

• Idea: keep the PT of the current user in special registers
  – May not be feasible for large page tables
  – Increases the cost of a context swap

• Idea: keep PTs in main memory
  – Needs one reference to retrieve the page base address and another to access the data word
  ⇒ doubles the number of memory references!


L07-16 Page Tables in Physical Memory

[Figure: User 1’s and User 2’s page tables reside in physical memory alongside their data pages; the virtual address VA1 of each user is translated through that user’s in-memory page table]

Idea: cache the address translations of frequently used pages -- TLBs


L07-17 A Problem in the Early Sixties

• There were many applications whose data could not fit in main memory, e.g., payroll
  – Paged memory systems reduced fragmentation but still required the whole program to be resident in main memory

• Programmers moved the data back and forth from the secondary store by overlaying it repeatedly on the primary store

tricky programming!


L07-18 Manual Overlays

• Assume an instruction can address all the storage on the drum
• Method 1: the programmer keeps track of addresses in main memory and initiates an I/O transfer when required
• Method 2: automatic initiation of I/O transfers by software address translation (Brooker’s interpretive coding, 1960)

[Figure: Ferranti Mercury, 1956: a central store of 40k bits of main memory backed by a 640k-bit drum]

Problems?
– Method 1: difficult, error-prone
– Method 2: inefficient


L07-19 Demand Paging in Atlas (1962)

“A page from secondary storage is brought into the primary storage whenever it is (implicitly) demanded by the processor.” (Tom Kilburn)

[Figure: primary (central) memory of 32 pages, 512 words/page, backed by secondary (drum) memory of 32 × 6 pages]

Primary memory acts as a cache for secondary memory; the user sees 32 × 6 × 512 words of storage.


L07-20 Hardware Organization of Atlas

[Figure: Atlas hardware organization. The effective address goes through initial address decoding. 16 ROM pages of system code (not swapped, 0.4–1 μsec); 2 subsidiary pages of system data (not swapped, 1.4 μsec); main core store of 32 pages of 512 48-bit words (1.4 μsec), with one Page Address Register (PAR) per page frame (PARs 0–31); drum (4) of 192 pages; 8 tape decks at 88 μsec/word]

Compare the effective page address against all 32 PARs:
– match ⇒ normal access
– no match ⇒ page fault: save the state of the partially executed instruction


L07-21 Atlas Demand Paging Scheme

• On a page fault:
  – An input transfer into a free page is initiated
  – The Page Address Register (PAR) is updated
  – If no free page is left, a page is selected to be replaced (based on usage)
  – The replaced page is written to the drum
    • To minimize the effect of drum latency, the first empty page on the drum was selected
  – The page table is updated to point to the new location of the page on the drum
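The steps can be sketched roughly as below; the data structures and the victim-selection rule are simplifications of my own, not Atlas’s actual mechanism:

```python
# Demand-paging fault handler following the steps listed above (illustrative).
def handle_page_fault(vpn, free_frames, frame_map, pars, page_table, drum):
    """Bring virtual page 'vpn' into a page frame, evicting a page if necessary."""
    if not free_frames:
        victim_frame = min(frame_map)               # stand-in for a usage-based choice
        victim_vpn = frame_map.pop(victim_frame)
        drum_slot = drum.index(None)                # first empty page on the drum
        drum[drum_slot] = victim_vpn                # write the replaced page to the drum
        page_table[victim_vpn] = ('drum', drum_slot)
        free_frames.append(victim_frame)
    frame = free_frames.pop()                       # input transfer into a free page
    pars[frame] = vpn                               # update the Page Address Register
    frame_map[frame] = vpn
    page_table[vpn] = ('memory', frame)

# Tiny usage example: 2 page frames, both initially free.
pars, frame_map, free_frames = {}, {}, [0, 1]
page_table, drum = {}, [None] * 4
for v in (5, 6, 7):
    handle_page_fault(v, free_frames, frame_map, pars, page_table, drum)
print(page_table)
```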


L07-22 Caching vs. Demand Paging

[Figure: caching places a cache between the CPU and primary memory; demand paging places secondary memory behind primary memory]

Caching                               Demand paging
cache entry                           page frame
cache block (~32 bytes)               page (~4 KB)
cache miss rate (1% to 20%)           page miss rate (< 0.001%)
cache hit (~1 cycle)                  page hit (~100 cycles)
cache miss (~100 cycles)              page miss (~5M cycles)
a miss is handled in hardware         a miss is handled mostly in software


L07-23 Modern Virtual Memory Systems

Illusion of a large, private, uniform store

• Protection & Privacy: several users, each with their private address space and one or more shared address spaces (page table ≡ name space)

• Demand Paging: provides the ability to run programs larger than the primary memory (primary memory backed by a swapping store)

• Hides differences in machine configurations

The price is address translation on each memory reference (VA → mapping/TLB → PA)


L07-24 Linear Page Table

• A Page Table Entry (PTE) contains:
  – A bit to indicate if the page exists
  – PPN (physical page number) for a memory-resident page
  – DPN (disk page number) for a page on the disk
  – Status bits for protection and usage
• The OS sets the Page Table Base Register whenever the active user changes

[Figure: a linear page table indexed by VPN, starting at the PT Base Register; the virtual address is (VPN, offset), each PTE holds a PPN or a DPN, and resident entries point to data pages holding the data word at the given offset]
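A rough sketch of a linear page-table lookup along these lines; the PTE field names are illustrative, not from the lecture:

```python
# Linear page table: the VPN indexes an array of PTEs; a PTE gives a PPN (resident)
# or a DPN (on disk), plus validity status.
PAGE_BITS = 12                          # 4 KB pages

class PTE:
    def __init__(self, valid, resident, ppn=None, dpn=None):
        self.valid, self.resident, self.ppn, self.dpn = valid, resident, ppn, dpn

def translate(vaddr, page_table):
    vpn, offset = vaddr >> PAGE_BITS, vaddr & ((1 << PAGE_BITS) - 1)
    pte = page_table[vpn]               # indexed from the Page Table Base Register
    if not pte.valid:
        raise RuntimeError("nonexistent page")
    if not pte.resident:
        raise RuntimeError(f"page fault: fetch disk page {pte.dpn}")
    return (pte.ppn << PAGE_BITS) | offset

page_table = [PTE(True, True, ppn=0x7), PTE(True, False, dpn=42)]
print(hex(translate(0x0123, page_table)))   # 0x7123
```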

L07-25 Size of Linear Page Table

With 32-bit addresses, 4 KB pages & 4-byte PTEs:
⇒ 2^20 PTEs, i.e., 4 MB of page table per user
⇒ 4 GB of swap space needed to back up the full virtual address space

Larger pages?
• Internal fragmentation (not all memory in a page is used)
• Larger page-fault penalty (more time to read from disk)

What about a 64-bit virtual address space???
• Even 1 MB pages would require 2^44 8-byte PTEs (2^47 bytes ≈ 128 TB!)
What is the “saving grace”?
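The arithmetic can be checked quickly (a small Python sketch):

```python
# Page-table sizes for the two cases above.
pte = 4
ptes_32bit = 2**32 // 2**12              # 32-bit VA, 4 KB pages -> 2**20 PTEs
print(ptes_32bit * pte / 2**20)          # 4.0 MB of page table per user

pte64 = 8
ptes_64bit = 2**64 // 2**20              # 64-bit VA, 1 MB pages -> 2**44 PTEs
print(ptes_64bit * pte64 / 2**40)        # 128.0 TB of page table
```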


L07-26 Hierarchical Page Table

Virtual address: 10-bit L1 index p1 (bits 31–22) | 10-bit L2 index p2 (bits 21–12) | offset (bits 11–0)

[Figure: a processor register holds the root of the current page table; p1 indexes the Level 1 page table, whose entry points to a Level 2 page table; p2 indexes the Level 2 table, whose PTE points to a data page. A page (or page-table page) may be in primary memory, in secondary memory, or nonexistent.]
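A minimal sketch of the two-level walk, using the 10/10/12-bit split above and Python lists standing in for page-table pages:

```python
# Two-level hierarchical page-table walk (illustrative; None marks nonexistent entries).
def walk(vaddr, root):
    p1     = (vaddr >> 22) & 0x3FF       # L1 index
    p2     = (vaddr >> 12) & 0x3FF       # L2 index
    offset =  vaddr        & 0xFFF
    l2_table = root[p1]                  # L1 entry points to a Level 2 page table
    if l2_table is None:
        raise RuntimeError("nonexistent Level 2 table")
    ppn = l2_table[p2]                   # L2 PTE holds the physical page number
    if ppn is None:
        raise RuntimeError("nonexistent page")
    return (ppn << 12) | offset

# Sparse structure: only the Level 2 tables that are actually needed exist.
root = [None] * 1024
root[0] = [None] * 1024
root[0][1] = 0x5                         # VPN 0x001 -> PPN 0x5
print(hex(walk(0x1ABC, root)))           # 0x5abc
```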

L07-27 Address Translation & Protection

[Figure: the virtual address is (VPN, offset). Each access goes through a protection check (using kernel/user mode and read/write permissions) and address translation; the result is either an exception or a physical address (PPN, offset)]

• Every instruction and data access needs address translation and protection checks

A good VM design needs to be fast (~ one cycle) and space efficient


L07-28 Translation Lookaside Buffers

Address translation is very expensive! In a two-level page table, each reference becomes several memory accesses.

Solution: cache translations in a TLB
– TLB hit ⇒ single-cycle translation
– TLB miss ⇒ page-table walk to refill

[Figure: the virtual address (VPN, offset) is matched against TLB entries of the form V R W D | tag | PPN; on a hit, the physical address is (PPN, offset)]
(VPN = virtual page number, PPN = physical page number)
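A rough sketch of a TLB in front of the page table; the unbounded dictionary TLB (no capacity limit or replacement) is a simplification of my own:

```python
# TLB lookup before the page-table walk (illustrative).
PAGE_BITS = 12

def tlb_translate(vaddr, tlb, page_table):
    vpn, offset = vaddr >> PAGE_BITS, vaddr & ((1 << PAGE_BITS) - 1)
    if vpn in tlb:                        # TLB hit: single-cycle translation
        ppn = tlb[vpn]
    else:                                 # TLB miss: walk the page table to refill
        ppn = page_table[vpn]
        tlb[vpn] = ppn                    # a real TLB would also evict an entry
    return (ppn << PAGE_BITS) | offset

tlb, page_table = {}, {0: 0x40, 1: 0x23}
print(hex(tlb_translate(0x1ABC, tlb, page_table)))   # miss, then refill: 0x23abc
print(hex(tlb_translate(0x1DEF, tlb, page_table)))   # hit in the TLB:    0x23def
```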


L07-29 TLB Designs

• Typically 32–128 entries, usually fully associative
  – Each entry maps a large page, hence less spatial locality across pages ⇒ more likely that two entries conflict
  – Sometimes larger TLBs (256–512 entries) are 4–8 way set-associative
• Random or FIFO replacement policy
• No process information in TLB?
• TLB Reach: size of the largest virtual address space that can be simultaneously mapped by the TLB

Example: 64 TLB entries, 4KB pages, one page per entry

TLB Reach = 64 entries × 4 KB = 256 KB (if contiguous)
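As a quick check of the reach calculation (the 2 MB page size in the second line is a hypothetical configuration, not from the slide):

```python
# TLB reach = number of entries * page size mapped by each entry.
def tlb_reach(entries, page_size):
    return entries * page_size

print(tlb_reach(64, 4 * 2**10) // 2**10)   # 256 KB, the example above
print(tlb_reach(64, 2 * 2**20) // 2**20)   # 128 MB with hypothetical 2 MB pages
```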


L07-30 Variable-Sized Page Support

Virtual address: 10-bit L1 index p1 (bits 31–22) | 10-bit L2 index p2 (bits 21–12) | offset (bits 11–0)

[Figure: the same two-level structure as before, except that a Level 1 entry may map a large page in primary memory directly instead of pointing to a Level 2 page table; other pages may be in primary memory, in secondary memory, or nonexistent]

L07-31 Variable-Size Page TLB

Some systems support multiple page sizes.

[Figure: the virtual address (VPN, offset) is matched against TLB entries of the form V R W D | Tag | PPN | L, where the L bit records the page size; on a hit, the L bit determines how the VPN/offset split is applied to form the physical address (PPN, offset)]


L07-32 Handling a TLB Miss

Software (MIPS, Alpha):
A TLB miss causes an exception, and the operating system walks the page tables and reloads the TLB. A privileged “untranslated” addressing mode is used for the walk.

Hardware (SPARC v8, x86, PowerPC):
A memory management unit (MMU) walks the page tables and reloads the TLB.

If a missing (data or page-table) page is encountered during the TLB reload, the MMU gives up and signals a page-fault exception for the original instruction.


L07-33 Hierarchical Page Table Walk: SPARC v8

Virtual address: Index 1 (bits 31–24) | Index 2 (bits 23–18) | Index 3 (bits 17–12) | Offset (bits 11–0)

[Figure: the Context Table Register and Context Register select a root pointer in the context table; Index 1 selects a page-table pointer (PTP) in the L1 table, Index 2 a PTP in the L2 table, and Index 3 the PTE in the L3 table]

Physical address: PPN | Offset (bits 11–0)

MMU does this table walk in hardware on a TLB miss
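A rough sketch of the three-level walk, assuming the 8/6/6-bit index split implied by the bit positions in the figure:

```python
# Three-level walk through context table, L1, L2, and L3 tables (illustrative).
def sparc_walk(vaddr, context_table, context):
    i1 = (vaddr >> 24) & 0xFF            # Index 1
    i2 = (vaddr >> 18) & 0x3F            # Index 2
    i3 = (vaddr >> 12) & 0x3F            # Index 3
    offset = vaddr & 0xFFF
    l1 = context_table[context]          # root pointer from the context table
    l2 = l1[i1]                          # PTP in the L1 table
    l3 = l2[i2]                          # PTP in the L2 table
    ppn = l3[i3]                         # PTE in the L3 table
    return (ppn << 12) | offset

l3 = {0x01: 0x9}
l2 = {0x00: l3}
l1 = {0x00: l2}
print(hex(sparc_walk(0x1FFF, {7: l1}, context=7)))   # 0x9fff
```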


L07-34 Address Translation: Putting It All Together

[Flowchart: the virtual address goes to a TLB lookup (hardware). On a hit, a protection check (hardware) follows: if the access is permitted, the physical address goes to the cache; if denied, a protection fault (SEGFAULT) is raised. On a miss, a page-table walk (hardware or software) follows: if the page is in memory, the TLB is updated and the access is retried; if not, a page fault is taken and the OS loads the page. One step carries the slide’s annotation “Where?”]
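A minimal end-to-end sketch combining the pieces; all names and the PTE layout are illustrative:

```python
# TLB lookup, page-table walk on a miss, protection check, and the two fault types.
PAGE_BITS = 12

class PageFault(Exception): pass         # OS loads the page, then the access retries
class ProtectionFault(Exception): pass   # SEGFAULT

def access(vaddr, write, tlb, page_table):
    vpn, offset = vaddr >> PAGE_BITS, vaddr & ((1 << PAGE_BITS) - 1)
    entry = tlb.get(vpn)
    if entry is None:                     # TLB miss: walk the page table
        entry = page_table.get(vpn)
        if entry is None or not entry['resident']:
            raise PageFault(vpn)
        tlb[vpn] = entry                  # update the TLB
    if write and not entry['writable']:   # protection check
        raise ProtectionFault(vpn)
    return (entry['ppn'] << PAGE_BITS) | offset   # physical address, to the cache

tlb = {}
page_table = {1: {'resident': True, 'writable': False, 'ppn': 0x2A}}
print(hex(access(0x1004, write=False, tlb=tlb, page_table=page_table)))  # 0x2a004
```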


Next lecture:

Modern Virtual Memory Systems

http://www.csg.csail.mit.edu/6.823