
CSCI 4717/5717 Computer Architecture
Topic: Cache Memory
Reading: Stallings, Chapter 4

Characteristics of Memory – "Location"
• Inside CPU – temporary memory or registers
• Inside processor – L1 cache
• Motherboard – main memory and L2 cache
• Main memory – DRAM and L3 cache
• External – such as disk, tape, and networked memory devices

CSCI 4717 – Cache Memory – Page 1 of 81 CSCI 4717 – Computer Architecture Cache Memory – Page 2 of 81

Characteristics of Memory – "Capacity – Word Size"
• The natural unit of data organization for a processor
• A 32-bit processor has a 32-bit word
• Typically based on processor's data width (i.e., the width of an integer or an instruction)
• Varying widths can be obtained by putting memory chips in parallel with same address lines

Characteristics of Memory – "Capacity – Addressable Units"
• Varies based on the system's ability to allow addressing at byte level, etc.
• Typically smallest location which can be uniquely addressed
• At motherboard level, this is the word
• On disks, it is a cluster
• Addressable units (N) equals 2 raised to the power of the number of bits in the address bus
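The last bullet is a one-line computation; a minimal C sketch (the function name is ours, not from the slides):

```c
#include <stdint.h>

/* Addressable units: N = 2^(number of bits in the address bus). */
uint64_t addressable_units(int address_bus_bits)
{
    return (uint64_t)1 << address_bus_bits;
}

/* e.g. the 24-bit address bus used later in these notes
 * gives 2^24 = 16M addressable units. */
```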


Characteristics of Memory – "Unit of Transfer"
• The number of bits read out of or written into memory at a time
• Internal – Usually governed by data bus width, i.e., a word
• External – Usually a block which is much larger than a word

Characteristics of Memory – "Access Method"
• Based on the hardware implementation of the storage device
• Four types:
  – Sequential
  – Direct
  – Random
  – Associative


Sequential Access Method
• Start at the beginning and read through in order
• Access time depends on location of data and previous location
• Example: tape

Direct Access Method
• Individual blocks have unique address
• Access is by jumping to vicinity then performing a sequential search
• Access time depends on location of data within "block" and previous location
• Example: hard disk


Random Access Method
• Individual addresses identify locations exactly
• Access time is consistent across all locations and is independent of previous access
• Example: RAM

Associative Access Method
• Addressing information must be stored with data in a general data location
• A specific data element is located by comparing desired address with address portion of stored elements
• Access time is independent of location or previous access
• Example: cache


Performance – Access Time
• Time between "requesting" data and getting it
• RAM
  – Time between putting address on bus and getting data
  – It's predictable
• Other types (Sequential, Direct, Associative)
  – Time it takes to position the read-write mechanism at the desired location
  – Not predictable

Performance – Memory Cycle Time
• Primarily a RAM phenomenon
• Adds "recovery" time to the cycle, allowing transients to dissipate so that the next access is reliable
• Cycle time = access time + recovery time


Performance – Transfer Rate
• Rate at which data can be moved
• RAM – Predictable; equals 1/(cycle time)
• Non-RAM – Not predictable; equals:

  TN = TA + (N/R)

  where
  TN = Average time to read or write N bits
  TA = Average access time
  N = Number of bits
  R = Transfer rate in bits per second

Physical Types
• Semiconductor – RAM
• Magnetic – Disk & Tape
• Optical – CD & DVD
• Others
  – Bubble (old) – memory that made a "bubble" of charge in an opposite direction to that of the thin magnetic material on which it was mounted
  – Hologram (new) – much like the hologram on your credit card, laser beams are used to store computer-generated data in three dimensions (10 times faster with 12 times the density)
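As a quick check of the transfer-time formula, a short C helper (a sketch; the name and units are ours):

```c
/* T_N = T_A + N/R: average time to read or write N bits from a
 * non-random-access device, where ta_sec is the average access
 * time in seconds and r_bps is the transfer rate in bits/second. */
double transfer_time(double ta_sec, double n_bits, double r_bps)
{
    return ta_sec + n_bits / r_bps;
}

/* e.g. 10 ms access time, 4096 bits at 1,000,000 bits/s:
 * 0.010 + 0.004096 = 0.014096 s */
```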


Physical Characteristics
• Decay
  – Power loss
  – Degradation over time
• Volatility – RAM vs. Flash
• Erasable – RAM vs. ROM
• Power consumption – More specific to laptops, PDAs, and embedded systems

Organization
• Physical arrangement of bits into words
• Not always obvious
• Non-sequential arrangements may be due to speed or reliability benefits, e.g. interleaved memory


Memory Hierarchy
• Trade-offs among three key characteristics
  – Amount – Software will ALWAYS fill available memory
  – Speed – Memory should be able to keep up with the processor
  – Cost – Whatever the market will bear
• Balance these three characteristics with a memory hierarchy
• Analogy – refrigerator & cupboard (fast access – lowest variety), freezer & pantry (slower access – better variety), grocery store (slowest access – greatest variety)

Memory Hierarchy (continued)
• Implementation – Going down the hierarchy has the following results:
  – Decreasing cost per bit (cheaper)
  – Increasing capacity (larger)
  – Increasing access time (slower)
  – KEY – Decreasing frequency of access of the memory by the processor


Memory Hierarchy (continued)
• The mechanics of creating memory directly affect the first three characteristics of the hierarchy:
  – Decreasing cost per bit
  – Increasing capacity
  – Increasing access time
• The fourth characteristic is met because of a principle known as locality of reference

Mechanics of Memory Technology
[Figure: memory technology trade-offs] Source: Null, Linda and Lobur, Julia (2003). Computer Organization and Architecture (p. 236). Sudbury, MA: Jones and Bartlett Publishers.


Locality of Reference
• Due to the nature of programming, instructions and data tend to cluster together (loops, subroutines, and data structures)
  – Over a long period of time, clusters will change
  – Over a short period, clusters will tend to be the same

In-Class Exercise
• In groups, examine the following code. Identify how many times the processor "touches" each piece of data and each line of code:

    int values[8] = {9, 34, 23, 67, 23, 7, 3, 65};
    int count;
    int sum = 0;
    for (count = 0; count < 8; count++)
        sum += values[count];

• For better results, try the same exercise using the assembly language version found at: http://faculty.etsu.edu/tarnoff/ntes4717/week_03/assy.pdf


Breaking Memory into Levels
• Assume a hypothetical system has two levels of memory
  – Level 2 should contain all instructions and data
  – Level 1 doesn't have room for everything, so when a new cluster is required, the cluster it replaces must be sent back to level 2
• These principles can be applied to much more than just two levels
• If performance is based on amount of memory rather than speed, lower levels can be used to simulate larger sizes for higher levels, e.g., virtual memory

Examples
• If 95% of the memory accesses are found in the faster level, then the average access time might be:

  (0.95)(0.01 µs) + (0.05)(0.01 µs + 0.1 µs) = 0.0095 + 0.0055 = 0.015 µs
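The average-access-time arithmetic above generalizes to any hit ratio. A C sketch, assuming (as the slide's numbers imply) that a miss pays both the level-1 and level-2 access times:

```c
/* Average access time for a two-level memory:
 *   T_avg = H*T1 + (1 - H)*(T1 + T2)
 * where H is the hit ratio and T1, T2 are the access times of the
 * faster and slower level (any consistent time unit). */
double avg_access_time(double hit_ratio, double t1, double t2)
{
    return hit_ratio * t1 + (1.0 - hit_ratio) * (t1 + t2);
}

/* The slide's case: H = 0.95, T1 = 0.01 us, T2 = 0.1 us
 * gives 0.0095 + 0.0055 = 0.015 us. */
```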


Performance of a Simple Two-Level Memory (Figure 4.2)
[Figure 4.2: average access time versus hit ratio for a two-level memory]

Hierarchy List
• Registers – volatile
• L1 Cache – volatile
• L2 Cache – volatile
• CDRAM (main memory) cache – volatile
• Main memory – volatile
• Disk cache – volatile
• Disk – non-volatile
• Optical – non-volatile
• Tape – non-volatile


Cache
• What is it? A cache is a small amount of fast memory
• Cache sits between normal main memory and CPU, or it may be located on the CPU chip or module

Cache (continued)
• What makes small fast?
  – Simpler decoding logic
  – More expensive SRAM technology
  – Close proximity to processor


Cache Operation – Overview
• CPU requests contents of memory location
• Check cache for this data
• If present, get from cache (fast)
• If not present, one of two things happens:
  – Read required block from main memory to cache, then deliver from cache to CPU (cache physically between CPU and bus)
  – Read required block from main memory to cache and simultaneously deliver to CPU (CPU and cache both receive data from the same data bus buffer)

Going Deeper with Principle of Locality
• Cache "misses" are unavoidable, i.e., every piece of data and code must be loaded at least once
• What does a processor do during a miss? It waits for the data to be loaded.
• Power consumption varies linearly with clock speed and with the square of the voltage
• Adjusting clock speed and voltage of the processor therefore has the potential to produce cubic power reductions (http://www.visc.vt.edu/~mhsiao/papers/pacs00ch.pdf)
• Identify places in the in-class exercise where this might happen


Cache Structure
• Cache includes tags to identify the address of the block of main memory contained in a line of the cache
• Each word in main memory has a unique n-bit address
• There are M = 2^n/K blocks of K words in main memory
• Cache contains C lines of K words each, plus a tag uniquely identifying the block of K words

Cache Structure (continued)
[Figure: cache organization – line numbers 0 through C−1, each line holding a tag and a block; block length is K words]


Memory Divided into Blocks
[Figure: main memory addresses 0 through 2^n − 1, each location one word length wide, grouped into blocks of K words]

Cache Design
• Size
• Mapping Function
• Replacement Algorithm
• Write Policy
• Block Size
• Number of Caches


Cache Size
• Cost – More cache is expensive
• Speed – More cache is faster (up to a point)
  – Larger decoding circuits slow up a cache
  – An algorithm is needed for mapping main memory addresses to lines in the cache. This takes more time than just a direct RAM access.

Typical Cache Organization
[Figure: typical cache organization, with the cache between the processor and the system bus]


Mapping Functions
• A mapping function is the method used to locate a memory address within a cache
• It is used when copying a block from main memory to the cache, and it is used again when trying to retrieve data from the cache
• There are three kinds of mapping functions:
  – Direct
  – Associative
  – Set Associative

Cache Example
These notes use an example of a cache to illustrate each of the mapping functions. The characteristics of the cache used are:
• Size: 64 kByte
• Block size: 4 bytes – i.e., the cache has 16k (2^14) lines of 4 bytes
• Address bus: 24-bit – i.e., 16M bytes of main memory divided into 4M 4-byte blocks


Direct Mapping Traits
• Each block of main memory maps to only one cache line – i.e., if a block is in cache, it will always be found in the same place
• Line number is calculated using the following function:

  i = j modulo m

  where
  i = cache line number
  j = main memory block number
  m = number of lines in the cache

Direct Mapping Address Structure
• Each main memory address can be divided into three fields
• Least significant w bits identify unique word within a block
• Remaining s bits specify which block in memory. These are divided into two fields:
  – Least significant r bits of these s bits identify which line in the cache
  – Most significant s−r bits uniquely identify the block within a line of the cache

  | Tag (s−r bits) | Bits identifying row in cache (r bits) | Bits identifying word offset into block (w bits) |
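The field split can be expressed directly with shifts and masks. A C sketch (function and type names are ours); with the 64 kByte example cache, w = 2 and r = 14:

```c
#include <stdint.h>

/* Split a main-memory address into tag / line / word fields for a
 * direct-mapped cache: w = word bits per block, r = line bits,
 * remaining high bits are the tag (the slide's s-r / r / w layout). */
typedef struct { uint32_t tag, line, word; } fields_t;

fields_t split_address(uint32_t addr, int w, int r)
{
    fields_t f;
    f.word = addr & ((1u << w) - 1);         /* lowest w bits */
    f.line = (addr >> w) & ((1u << r) - 1);  /* next r bits   */
    f.tag  = addr >> (w + r);                /* remaining s-r */
    return f;
}
```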


Direct Mapping Address Structure Example

  | Tag s−r = 8 | Line or slot r = 14 | Word w = 2 |

• 24 bit address
• 2 bit word identifier (4 byte block)
• 22 bit block identifier
• 8 bit tag (= 22 − 14)
• 14 bit slot or line
• No two blocks in the same line have the same tag
• Check contents of cache by finding line and comparing tag

Direct Mapping Address Structure (continued)
• Why are the r bits used to identify which line in cache?
• More likely to have unique r bits than s−r bits, based on principle of locality of reference


Direct Mapping Cache Organization
[Figure: direct mapping cache organization – the line field selects a cache line; the stored tag is compared with the address's tag field]

Direct Mapping Cache Line Table

  Cache line | Main memory blocks held
  0          | 0, m, 2m, 3m, …, 2^s − m
  1          | 1, m+1, 2m+1, …, 2^s − m + 1
  …          | …
  m−1        | m−1, 2m−1, 3m−1, …, 2^s − 1


Direct Mapping Examples
What cache line number will the following addresses be stored to, and what will the minimum address and the maximum address of each block they are in be, if we have a cache with 4K lines of 16 words to a block in a 256 Meg memory space (28-bit address)?

  | Tag s−r = 12 | Line or slot r = 12 | Word w = 4 |

a.) 9ABCDEF₁₆
b.) 1234567₁₆

More Direct Mapping Examples
Assume that a portion of the tags in the cache in our example looks like the table below. Which of the following addresses are contained in the cache?

a.) 438EE8₁₆  b.) F18EFF₁₆  c.) 6B8EF3₁₆  d.) AD8EF3₁₆

  Tag (binary) | Line number (binary) | Addresses within block: 00, 01, 10, 11
  0101 0011    | 1000 1110 1110 10
  1110 1101    | 1000 1110 1110 11
  1010 1101    | 1000 1110 1111 00
  0110 1011    | 1000 1110 1111 01
  1011 0101    | 1000 1110 1111 10
  1111 0001    | 1000 1110 1111 11
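For the 28-bit question above (4K lines, 16-word blocks, so w = 4 and r = 12), the arithmetic can be sketched in C (helper names are ours):

```c
#include <stdint.h>

/* Direct-mapped example with w = 4 word bits and r = 12 line bits:
 * the cache line is bits 15..4 of the address, and a block spans
 * the 16 addresses sharing all but the lowest 4 bits. */
uint32_t cache_line(uint32_t addr) { return (addr >> 4) & 0xFFF; }
uint32_t block_min(uint32_t addr)  { return addr & ~0xFu; }
uint32_t block_max(uint32_t addr)  { return addr | 0xFu; }
```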


Direct Mapping Summary
• Address length = (s + w) bits
• Number of addressable units = 2^(s+w) words or bytes
• Block size = line width = 2^w words or bytes
• Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
• Number of lines in cache = m = 2^r
• Size of tag = (s − r) bits

Direct Mapping Pros & Cons
• Simple
• Inexpensive
• Fixed location for given block – If a program accesses 2 blocks that map to the same line repeatedly, cache misses are very high (thrashing)


Associative Mapping Traits
• A main memory block can load into any line of cache
• Memory address is interpreted as:
  – Least significant w bits = word position within block
  – Most significant s bits = tag used to identify which block is stored in a particular line of cache
• Every line's tag must be examined for a match
• Cache searching gets expensive and slower

Associative Mapping Address Structure Example

  | Tag – s bits (22 in example) | Word – w bits (2 in example) |

• 22 bit tag stored with each 32 bit block of data
• Compare tag field with tag entry in cache to check for hit
• Least significant 2 bits of address identify which of the four 8 bit words is required from 32 bit data block


Fully Associative Cache Organization
[Figure: fully associative cache organization – the incoming tag is compared with every line's tag simultaneously]

Fully Associative Mapping Example
Assume that a portion of the tags in the cache in our example looks like the table below. Which of the following addresses are contained in the cache?

a.) 438EE8₁₆  b.) F18EFF₁₆  c.) 6B8EF3₁₆  d.) AD8EF3₁₆

  Tag (binary)              | Addresses within block: 00, 01, 10, 11
  0101 0011 1000 1110 1110 10
  1110 1101 1100 1001 1011 01
  1010 1101 1000 1110 1111 00
  0110 1011 1000 1110 1111 11
  1011 0101 0101 1001 0010 00
  1111 0001 1000 1110 1111 11


Associative Mapping Summary
• Address length = (s + w) bits
• Number of addressable units = 2^(s+w) words or bytes
• Block size = line size = 2^w words or bytes
• Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
• Number of lines in cache = undetermined
• Size of tag = s bits

Set Associative Mapping Traits
• Address length is (s + w) bits
• Cache is divided into a number of sets, v = 2^d
• k blocks/lines can be contained within each set
• k lines in a cache is called a k-way set associative mapping
• Number of lines in a cache = v·k = k·2^d
• Size of tag = (s − d) bits


Set Associative Mapping Traits (continued)
• Hybrid of Direct and Associative
  – If k = 1, this is basically direct mapping
  – If v = 1, this is associative mapping
• Each set contains a number of lines, basically the total number of lines divided by the number of sets
• A given block maps to any line within its specified set – e.g., Block B can be in any line of set i
• 2 lines per set is the most common organization
  – Called 2-way set associative mapping
  – A given block can be in one of 2 lines in only one specific set
  – Significant improvement over direct mapping

K-Way Set Associative Cache Organization
[Figure: k-way set associative cache organization – the set field selects a set and the tag is compared with the k tags in that set]


How Does This Affect Our Example?
• Let's go to two-way set associative mapping
• Divides the 16K lines into 8K sets
• This requires a 13 bit set number
• With 2 word bits, this leaves 9 bits for the tag
• Blocks beginning with the addresses 000000₁₆, 008000₁₆, 010000₁₆, 018000₁₆, 020000₁₆, 028000₁₆, etc. map to the same set, Set 0
• Blocks beginning with the addresses 000004₁₆, 008004₁₆, 010004₁₆, 018004₁₆, 020004₁₆, 028004₁₆, etc. map to the same set, Set 1

Set Associative Mapping Address Structure

  | Tag 9 bits | Set 13 bits | Word 2 bits |

• Note that there is one more bit in the tag than for this same example using direct mapping; therefore, it is 2-way set associative
• Use set field to determine cache set to look in
• Compare tag field to see if we have a hit
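The set/tag split for this two-way version (13 set bits, 2 word bits, 9 tag bits in a 24-bit address) can be sketched as (function names are ours):

```c
#include <stdint.h>

/* Two-way set associative version of the 64 kByte example:
 * 16K lines -> 8K sets, so 13 set bits above 2 word bits,
 * leaving a 9-bit tag at the top of the 24-bit address. */
uint32_t sa_set(uint32_t addr) { return (addr >> 2) & 0x1FFF; }
uint32_t sa_tag(uint32_t addr) { return addr >> 15; }

/* e.g. addresses 000000, 008000, 010000 (hex) all land in set 0,
 * distinguished only by their tags. */
```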


Set Associative Mapping Example
For each of the following addresses, answer the following questions based on a 2-way set associative cache with 4K lines, each line containing 16 words, with main memory of size 256 Meg (28-bit address):
• What cache set number will the block be stored to?
• What will their tag be?
• What will the minimum address and the maximum address of each block they are in be?

1. 9ABCDEF₁₆
2. 1234567₁₆

  | Tag (s−d) = 13 | Set d = 11 | Word w = 4 |

Set Associative Mapping Summary
• Address length = (s + w) bits
• Number of addressable units = 2^(s+w) words or bytes
• Block size = line size = 2^w words or bytes
• Number of blocks in main memory = 2^(s+w) / 2^w = 2^s
• Number of lines in set = k
• Number of sets = v = 2^d
• Number of lines in cache = kv = k·2^d
• Size of tag = (s − d) bits


Associative & Set Associative Replacement Algorithms
• There must be a method for selecting which line in the cache is going to be replaced when there's no room for a new line
• Hardware implemented algorithm (speed)
• Direct mapping
  – There is no need for a replacement algorithm with direct mapping
  – Each block only maps to one line
  – Replace that line
• Least Recently Used (LRU)
  – Replace the block that hasn't been touched in the longest period of time
  – Two-way set associative simply uses a USE bit. When one block is referenced, its USE bit is set while its partner in the set is cleared
• First In First Out (FIFO) – replace the block that has been in cache longest
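The two-way USE-bit scheme is small enough to sketch in full. A minimal illustration of one set (type and function names are ours, not hardware terminology):

```c
/* USE-bit LRU for one two-way set: referencing a line sets its
 * use bit and clears its partner's; on a miss, the line whose use
 * bit is clear is the replacement victim. */
typedef struct { int use[2]; } twoway_set_t;

void touch(twoway_set_t *s, int line)
{
    s->use[line] = 1;
    s->use[1 - line] = 0;
}

int victim(const twoway_set_t *s)
{
    return s->use[0] ? 1 : 0;  /* replace the not-recently-used line */
}
```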


Associative & Set Associative Replacement Algorithms (continued)
• Least Frequently Used (LFU) – replace the block which has had the fewest hits
• Random – only slightly lower performance than the use-based algorithms LRU, FIFO, and LFU

Writing to Cache
• Must not overwrite a cache block unless main memory is up to date
• Two main problems:
  – If cache is written to, main memory is invalid, or if main memory is written to, cache is invalid – can occur if I/O can address main memory directly
  – Multiple CPUs may have individual caches; once one cache is written to, all caches are invalid


Write Through
• All writes go to main memory as well as cache
• Multiple CPUs can monitor main memory traffic to keep local (to CPU) cache up to date
• Lots of traffic
• Slows down writes
• Research shows that 15% of memory references are writes

Write Back
• Updates initially made in cache only
• Update bit for cache slot is set when update occurs
• If block is to be replaced, write to main memory only if update bit is set
• Other caches get out of sync
• I/O must access main memory through cache
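The write-back update bit behaves like a latch that is only consulted at eviction time. A minimal C sketch (names are ours; real hardware does this per line, in parallel):

```c
/* Write-back policy sketch: a write only sets the line's update
 * ("dirty") bit; main memory is written only when a dirty line is
 * evicted. evict() returns 1 if a write-back was required. */
typedef struct { int dirty; } wb_line_t;

void write_word(wb_line_t *l) { l->dirty = 1; }

int evict(wb_line_t *l)
{
    int must_write_back = l->dirty;
    l->dirty = 0;   /* line is clean again after replacement */
    return must_write_back;
}
```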


Problems with Multiple Processors/Multiple Caches
• Even if a write through policy is used, other processors may have invalid data in their caches
• In other words, if a processor updates its cache and updates main memory, a second processor may have been using the same data in its own cache, which is now invalid

Solutions to Prevent Problems with Multiprocessor/Cache Systems
• Bus watching with write through – each cache monitors the bus to see if data it contains is being written to main memory by another processor. All processors must be using the write through policy
• Hardware transparency – additional hardware watches all caches, and upon seeing an update to any processor's cache, it updates main memory AND all of the caches
• Noncacheable memory – any shared memory (identified with a chip select) may not be cached


Line Size
• There is a relationship between line size (i.e., the number of words in a line in the cache) and hit ratios
• As the line size (block size) goes up, the hit ratio could go up due to more words available to the principle of locality of reference
• As block size increases, however, the number of blocks goes down, and the hit ratio will begin to go back down after a while
• Lastly, as the block size increases, the chance of a hit to a word farther from the initially referenced word goes down

Multi-Level Caches
• Increases in transistor densities have allowed for caches to be placed inside the processor chip
• Internal caches have very short wires (within the chip itself) and are therefore quite fast, even faster than any zero wait-state memory accesses outside of the chip
• This means that a super fast internal cache (level 1) can be inside of the chip while an external cache (level 2) can provide access faster than to main memory


Unified versus Split Caches
• Split into two caches – one for instructions, one for data
• Disadvantages
  – Questionable, as a unified cache balances data and instructions merely with hit rate
  – Hardware is simpler with a unified cache
• Advantage
  – What a split cache is really doing is providing one cache for the instruction decoder and one for the execution unit
  – This supports pipelined architectures

Intel x86 Caches
• 80386 – no on chip cache
• 80486 – 8k using 16 byte lines and four-way set associative organization (main memory had 32 address lines – 4 Gig)
• Pentium (all versions)
  – Two on chip L1 caches
  – Data & instructions


Pentium 4 (Figure 4.13)
[Figure 4.13: Pentium 4 block diagram]

Pentium 4 L1 and L2 Caches
• L1 cache
  – 8k bytes
  – 64 byte lines
  – Four way set associative
• L2 cache
  – Feeding both L1 caches
  – 256k
  – 128 byte lines
  – 8 way set associative


Pentium 4 Operation – Core Processor
• Fetch/Decode Unit
  – Fetches instructions from L2 cache
  – Decodes into micro-ops
  – Stores micro-ops in L1 cache
• Out of order execution logic
  – Schedules micro-ops
  – Based on data dependence and resources
  – May speculatively execute

Pentium 4 Operation – Core Processor (continued)
• Execution units
  – Execute micro-ops
  – Data from L1 cache
  – Results in registers
• Memory subsystem – L2 cache and system bus


Pentium 4 Design Reasoning
• Decodes instructions into RISC-like micro-ops before L1 cache
• Micro-ops fixed length – Superscalar pipelining and scheduling
• Pentium instructions long & complex
• Performance improved by separating decoding from scheduling & pipelining (More later – Chapter 14)

Pentium 4 Design Reasoning (continued)
• Data cache is write back – Can be configured to write through
• L1 cache controlled by 2 bits in a control register
  – CD = cache disable
  – NW = not write through
  – 2 instructions to invalidate (flush) cache and write back then invalidate


PowerPC G4 (Figure 4.14)
[Figure 4.14: PowerPC G4 block diagram]

PowerPC Cache Organization
• 601 – single 32 kB, 8 way set associative
• 603 – 16 kB (2 x 8 kB), two way set associative
• 604 – 32 kB
• 610 – 64 kB
• G3 & G4
  – 64 kB L1 cache – 8 way set associative
  – 256k, 512k or 1M L2 cache – two way set associative


Comparison of Cache Sizes (Table 4.3)

