Lecture-14 (Memory Hierarchy) CS422-Spring 2018

Biswa@CSE-IITK

The Ideal World

Instruction Supply
- Zero-cycle latency
- Infinite capacity
- Zero cost
- Perfect control flow

Pipeline (instruction execution)
- No pipeline stalls
- Perfect data flow (reg/memory dependencies)
- Zero-cycle interconnect (operand communication)
- Enough functional units
- Zero latency compute

Data Supply
- Zero-cycle latency
- Infinite capacity
- Infinite bandwidth
- Zero cost

World of Memory Hierarchy. But Why?


• Semiconductor memory began to be competitive in early 1970s
• Intel formed to exploit market for semiconductor memory
• Early semiconductor memory was Static RAM (SRAM). SRAM cell internals similar to a latch (cross-coupled inverters).

• First commercial Dynamic RAM (DRAM) was Intel 1103
• 1Kbit of storage on a single chip
• Charge on a capacitor used to hold value

Semiconductor memory quickly replaced core memory in the '70s.

One-transistor DRAM (1-T DRAM Cell)

[Figure: 1-T DRAM cell. The word line drives the access transistor; the bit line connects through it to the storage capacitor (FET gate, trench, or stack), whose other plate sits at VREF]

DRAM Architecture

[Figure: DRAM array. An N+M-bit address splits into an N-bit row address and an M-bit column address; the row decoder drives one of 2^N word lines, and the column decoder plus sense amplifiers select among the 2^M bit lines to deliver one data bit D]

▪ Bits stored in 2-dimensional arrays on chip
▪ Modern chips have around 4-8 logical banks on each chip
▪ Each logical bank physically implemented as many smaller arrays
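To make the row/column split concrete, here is a minimal sketch of dividing an (N+M)-bit address between the row decoder and the column decoder; the bit counts, macro names, and the 24-bit address are illustrative assumptions, not values from the slides.

#include <stdint.h>
#include <stdio.h>

/* Split a hypothetical 24-bit DRAM address into a row address (high N bits,
 * fed to the row decoder) and a column address (low M bits, fed to the column
 * decoder and sense amplifiers). The bit counts are example values. */
#define ROW_BITS 14   /* N: 2^14 rows    */
#define COL_BITS 10   /* M: 2^10 columns */

int main(void)
{
    uint32_t addr = 0x00ABCDEF & ((1u << (ROW_BITS + COL_BITS)) - 1);
    uint32_t row  = addr >> COL_BITS;               /* selects one of 2^N word lines   */
    uint32_t col  = addr & ((1u << COL_BITS) - 1);  /* selects among the 2^M bit lines */
    printf("addr=0x%06x -> row=%u col=%u\n", addr, row, col);
    return 0;
}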

DRAM
• Dynamic memory
• Capacitor charge state indicates stored value
• Whether the capacitor is charged or discharged indicates storage of 1 or 0
• 1 capacitor
• 1 access transistor

• Capacitor leaks
• DRAM cell loses charge over time
• DRAM cell needs to be refreshed

SRAM
• Static random access memory
• Two cross-coupled inverters store a single bit
• Feedback path enables the stored value to persist in the “cell”
• 4 transistors for storage
• 2 transistors for access

[Figure: 6-T SRAM cell with row select line, bitline and _bitline]

DRAM vs SRAM
• DRAM
  • Slower access (capacitor)
  • Higher density (1T-1C cell)
  • Lower cost
  • Requires refresh (power, performance, circuitry)

• SRAM
  • Faster access (no capacitor)
  • Lower density (6T cell)
  • Higher cost
  • No need for refresh

The Problem?
• Bigger is slower
  • SRAM, 512 Bytes, sub-nanosec
  • SRAM, KByte~MByte, ~nanosec
  • DRAM, Gigabyte, ~50 nanosec
  • Hard Disk, Terabyte, ~10 millisec

• Faster is more expensive (dollars and chip area)
  • SRAM, < 10$ per Megabyte
  • DRAM, < 1$ per Megabyte
  • Hard Disk, < 1$ per Gigabyte
  • These sample values scale with time

• Other technologies have their place as well
  • Flash memory, PC-RAM, MRAM, RRAM (not mature yet)

Why Memory Hierarchy?
• We want both fast and large

• But we cannot achieve both with a single level of memory

• Idea: Have multiple levels of storage (progressively bigger and slower as the levels are farther from the processor) and ensure most of the data the processor needs is kept in the fast(er) level(s)

Memory Wall Problem

[Plot: processor-memory performance gap, 1980-2000. Processor (µProc) performance grows ~60%/year while DRAM performance grows ~7%/year, so the processor-memory performance gap grows by ~50%/yr]

A four-issue 4GHz superscalar accessing 100ns DRAM could execute 1600 instructions during the time for one memory access! (4 instructions/cycle × 4 cycles/ns × 100 ns = 1600.)

Size Affects Latency

[Figure: a CPU next to a small memory vs. a CPU next to a big memory]

▪ Signals have further to travel
▪ Fan out to more locations

Motivates 3D stacking

Memory Hierarchy

[Figure: CPU backed by a small, fast memory (RF, SRAM), which is in turn backed by a big, slow memory (DRAM); the small, fast memory holds frequently used data]

• capacity: Register << SRAM << DRAM
• latency: Register << SRAM << DRAM
• bandwidth: on-chip >> off-chip

On a data access:
if data ∈ fast memory → low-latency access (SRAM)
if data ∉ fast memory → high-latency access (DRAM)


Access Patterns

[Plot: memory address (one dot per access) vs. time]

Donald J. Hatfield, Jeanette Gerald: Program Restructuring for Virtual Memory. IBM Systems Journal 10(3): 168-192 (1971)

Examples

[Figure: address vs. time for typical access patterns: instruction fetches looping over n iterations, stack accesses around subroutine call and return, and data accesses to arguments and scalars]


• Temporal Locality: If a location is referenced it is likely to be referenced again in the near future.

• Spatial Locality: If a location is referenced it is likely that locations near it will be referenced in the near future.
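A small illustration of both kinds of locality in code (this example is not from the slides):

#include <stdio.h>

int main(void)
{
    int a[1024];
    int sum = 0;                 /* 'sum' is touched every iteration: temporal locality */

    for (int i = 0; i < 1024; i++)
        a[i] = i;
    for (int i = 0; i < 1024; i++)
        sum += a[i];             /* a[0], a[1], ... are adjacent in memory: spatial locality */

    printf("%d\n", sum);
    return 0;
}

The reused scalar benefits from temporal locality; walking the array in address order benefits from spatial locality, so every byte of a fetched cache block gets used.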

Again

[Plot: memory address (one dot per access) vs. time, annotated with regions of temporal locality and spatial locality]

Inside a Cache

[Figure: the processor sends an address to the CACHE and exchanges data with it; the cache in turn sends addresses to and exchanges data with Main Memory. Cache entries hold copies of main memory locations (e.g., locations 100 and 101), each stored as an address tag alongside a line of data bytes]

Intel i7

▪ Private L1 and L2 – L2 is 256KB each, 10-cycle latency
▪ 8MB shared L3, ~40-cycle latency

Cache Events

Look at the processor address and search the cache tags for a match. Then either:

Found in cache (a.k.a. HIT): return a copy of the data from the cache.

Not in cache (a.k.a. MISS): read the block of data from main memory, wait ..., then return the data to the processor and update the cache.

Q: Which line do we replace?
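A toy model of this hit/miss flow follows; the sizes and names are illustrative assumptions, a direct-mapped organization is assumed to keep it short, and this is a sketch rather than the course's simulator.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define MEM_BYTES  4096
#define BLOCK_SIZE 16
#define NUM_LINES  8

static uint8_t main_memory[MEM_BYTES];

struct line { bool valid; uint32_t tag; uint8_t data[BLOCK_SIZE]; };
static struct line cache[NUM_LINES];

static uint8_t cache_access(uint32_t addr, bool *hit)
{
    uint32_t block  = addr / BLOCK_SIZE;
    uint32_t index  = block % NUM_LINES;    /* which line's tag to search      */
    uint32_t tag    = block / NUM_LINES;    /* value the stored tag must match */
    uint32_t offset = addr % BLOCK_SIZE;
    struct line *l  = &cache[index];

    *hit = l->valid && l->tag == tag;       /* HIT: return copy of data from cache */
    if (!*hit) {                            /* MISS: read block from main memory,  */
        memcpy(l->data, &main_memory[block * BLOCK_SIZE], BLOCK_SIZE);
        l->valid = true;                    /* ...update the cache...              */
        l->tag   = tag;
    }
    return l->data[offset];                 /* ...and return data to the processor */
}

int main(void)
{
    bool hit;
    for (uint32_t i = 0; i < MEM_BYTES; i++) main_memory[i] = (uint8_t)i;
    cache_access(100, &hit); printf("addr 100: %s\n", hit ? "HIT" : "MISS");
    cache_access(101, &hit); printf("addr 101: %s\n", hit ? "HIT" : "MISS");  /* same block -> HIT */
    return 0;
}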

Placement Policy

[Figure: 32 memory blocks (block numbers 0-31) mapping into an 8-block cache, shown fully associative, (2-way) set associative with set numbers 0-3, and direct mapped]

Where can memory block 12 be placed?
- Fully associative: anywhere in the cache
- (2-way) set associative: anywhere in set 0 (12 mod 4)
- Direct mapped: only into block 4 (12 mod 8)
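A tiny sketch of the same modulo arithmetic for memory block 12, assuming the 8-block cache of the figure (values are illustrative, not from the slides):

#include <stdio.h>

/* Where may memory block 12 be placed in an 8-block cache? */
int main(void)
{
    int block = 12;
    int cache_blocks = 8;

    int dm_block = block % cache_blocks;        /* direct mapped: 8 sets of 1 block -> block 4 */
    int sa_set   = block % (cache_blocks / 2);  /* 2-way set associative: 4 sets    -> set 0   */
    /* fully associative: a single set holding all 8 blocks -> anywhere */

    printf("direct mapped -> block %d, 2-way -> set %d, fully associative -> any block\n",
           dm_block, sa_set);
    return 0;
}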

Direct Mapped

[Figure: direct-mapped cache. The block address splits into Tag (t bits), Index (k bits), and block Offset (b bits); the index selects one of 2^k lines, each holding a valid bit (V), a tag, and a data block; the stored tag is compared (=) with the address tag to produce HIT, and the offset selects the data word or byte]

In reality, the tag store is placed separately.
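A sketch of the t/k/b address split; the b and k values below are illustrative assumptions, not the lecture's parameters:

#include <stdint.h>
#include <stdio.h>

/* Illustrative t/k/b split for a direct-mapped cache with 2^k lines of 2^b bytes. */
#define B_BITS 6   /* b: 64-byte blocks */
#define K_BITS 7   /* k: 128 lines      */

int main(void)
{
    uint32_t addr   = 0x1234ABCD;
    uint32_t offset =  addr & ((1u << B_BITS) - 1);             /* selects word/byte in block */
    uint32_t index  = (addr >> B_BITS) & ((1u << K_BITS) - 1);  /* selects one of 2^k lines   */
    uint32_t tag    =  addr >> (B_BITS + K_BITS);               /* compared with stored tag   */

    /* HIT iff line[index].valid && line[index].tag == tag. */
    printf("offset=%u index=%u tag=0x%x\n", offset, index, tag);
    return 0;
}

With these assumed values (b = 6, k = 7), the cache holds 128 lines of 64 bytes, i.e. 8KB of data.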

High bits or Low bits

[Figure: the same direct-mapped cache, but with the Index (k bits) taken from the high-order address bits and the Tag (t bits) from the bits below it; the 2^k lines, tag comparison, HIT signal, and word/byte select are unchanged]

Set-Associative

[Figure: 2-way set-associative cache. The address splits into Tag (t bits), Index (k bits), and block Offset (b bits); the index selects one set, the stored tags of both ways are compared with the address tag in parallel, and a match in either way raises HIT and steers the selected data word or byte]
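A sketch of the per-set tag search; the sizes are illustrative assumptions, and where the hardware compares all ways in parallel, the loop below is just for clarity:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define NUM_SETS 64   /* illustrative sizes */
#define NUM_WAYS 2

struct way { bool valid; uint32_t tag; };
static struct way sets[NUM_SETS][NUM_WAYS];

/* Search every way of the selected set for the address tag.
 * Returns the matching way on a HIT, or -1 on a MISS. */
int lookup(uint32_t set_index, uint32_t tag)
{
    for (int w = 0; w < NUM_WAYS; w++)
        if (sets[set_index][w].valid && sets[set_index][w].tag == tag)
            return w;
    return -1;
}

int main(void)
{
    sets[5][1].valid = true;
    sets[5][1].tag   = 0x42;
    printf("lookup(5, 0x42) -> way %d\n", lookup(5, 0x42));  /* HIT in way 1 */
    printf("lookup(5, 0x43) -> way %d\n", lookup(5, 0x43));  /* MISS -> -1   */
    return 0;
}

Setting NUM_WAYS to 1 gives a direct-mapped cache; a single set containing every line gives a fully-associative cache.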

Fully-associative

[Figure: fully-associative cache. The address carries only a Tag (t bits) and a block Offset (b bits), with no index; the address tag is compared with the stored tag of every line in parallel, any match raises HIT, and the offset selects the data word or byte]

What’s in Tag Store?
• Valid bit
• Tag
• Replacement policy bits

• Dirty bit?
  • Write back vs. write through caches
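One possible layout of a tag-store entry is sketched below; the field widths are illustrative assumptions, not a specification. The dirty bit is only needed for a write-back cache, since a write-through cache keeps memory up to date on every store.

#include <stdint.h>
#include <stdio.h>

/* Illustrative tag-store entry for one line of a write-back cache. */
struct tag_entry {
    uint32_t tag   : 20;  /* high-order address bits of the cached block           */
    uint32_t valid : 1;   /* line holds real data                                  */
    uint32_t dirty : 1;   /* block modified; must be written back on eviction      */
    uint32_t repl  : 2;   /* replacement policy bits, e.g. LRU rank in a 4-way set */
};

int main(void)
{
    printf("tag-store entry occupies %zu bytes\n", sizeof(struct tag_entry));
    return 0;
}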
