Computer Architecture 1DT016: Cache
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
-
Computer Organization and Architecture: Designing for Performance, Ninth Edition
Computer Organization and Architecture: Designing for Performance, Ninth Edition. William Stallings. Pearson Education, Inc., publishing as Prentice Hall. Copyright © 2013, 2010, 2006. All rights reserved. Manufactured in the United States of America.
[Front matter: editorial, production, and marketing credits, plus figure permissions (Figure 2.14: The Computer Language Company, Inc.; Figure 17.10: Buyya, High-Performance Cluster Computing, Pearson Education; Figure 17.11: Ethernet Alliance).] -
Overview of 15-740
Memory Hierarchy, 15-740 Spring '18, Nathan Beckmann.
Topics: memories, caches.
Reading: "Improving Direct-Mapped Cache Performance by the Addition of a Small Fully-Associative Cache and Prefetch Buffers", a famous paper that won a "Test of Time Award". Its author, Norm Jouppi (DEC, Hewlett-Packard, Google TPUs), is a winner of the Eckert-Mauchly Award, "the Nobel Prize of Computer Architecture".
[Diagram: early computer system: processor, memory-I/O bus, memory, and I/O controllers for disks, display, and network.]
Recall the processor-memory gap. Ideally we want a large, fast memory, but technology doesn't let us have this. Key observation: locality. All data are not equal; some data are accessed more often than others. The architect's solution: caches and a memory hierarchy.
[Diagram: modern computer system: the same organization with a cache between the processor and the memory-I/O bus.]
Technological tradeoffs in accessing data. Physical size affects latency: in a big memory, signals have further to travel and must fan out to more locations. Why is bigger slower?
- Physics slows us down. Racing the speed of light: take a recent Intel chip (Haswell-E 8C); how far can a signal go in one clock cycle at 3 GHz? (3.0x10^8 m/s) / (3x10^9 cycles/s) = 0.1 m/cycle. For comparison, the Haswell-E 8C die is about 19 mm = 0.019 m across, so the speed of light doesn't directly limit clock speed, but it is in the ballpark (see the sketch after this excerpt).
- Capacitance: long wires have more capacitance, so either more powerful (bigger) transistors are required or the signal is slower; propagation delay grows with capacitance, and going "off chip" adds an order of magnitude more capacitance.
Single-transistor (1T) DRAM cell (one bit): the word line turns on an access transistor that connects the bit line to a storage capacitor (a FET gate, trench, or stacked structure; e.g., a TiN top electrode at VREF over a Ta2O5 dielectric on a poly-W bottom electrode).
Modern "3D" DRAM structure [Samsung, sub-70nm DRAM, 2004]. DRAM architecture: bit lines, columns...
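As a sanity check on the slide's arithmetic, here is a minimal C sketch computing how far a signal can travel per clock cycle. The constants are the excerpt's own; the die-widths-per-cycle ratio at the end is an added illustration, not from the slides:

```c
#include <stdio.h>

int main(void) {
    /* Constants from the excerpt's back-of-the-envelope calculation. */
    const double c_light = 3.0e8;  /* speed of light, m/s            */
    const double f_clk   = 3.0e9;  /* clock frequency, Hz (3 GHz)    */
    const double die     = 0.019;  /* Haswell-E 8C die width, ~19 mm */

    double per_cycle = c_light / f_clk;  /* metres a signal can cover per cycle */
    printf("distance per cycle: %.3f m\n", per_cycle);        /* 0.100 m */
    printf("die widths per cycle: %.1f\n", per_cycle / die);  /* ~5.3    */
    return 0;
}
```

Light crosses the die only about five times per cycle, which is why the slide says the speed of light is "in the ballpark" of limiting clock speed.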
-
Chapter 7 Memory Hierarchy
Chapter 7: Memory Hierarchy.
Outline: memory hierarchy, the basics of caches, measuring and improving cache performance (a small AMAT sketch follows this excerpt), virtual memory, and a common framework for memory hierarchy.
Technology trends, capacity vs. speed (latency):
  Logic: capacity 4x in 1.5 years; speed 4x in 3 years.
  DRAM:  capacity 4x in 3 years;   speed 2x in 10 years.
  Disk:  capacity 4x in 3 years;   speed 2x in 10 years.
DRAM generations:
  Year  Size    Cycle time
  1980  64 Kb   250 ns
  1983  256 Kb  220 ns
  1986  1 Mb    190 ns
  1989  4 Mb    165 ns
  1992  16 Mb   145 ns
  1995  64 Mb   120 ns
Over this period capacity grew 1000:1 while cycle time improved only about 2:1.
[Figure: processor-memory latency gap, 1980-2000: processor performance grows ~60%/yr (2X/1.5 yr, Moore's Law), DRAM only ~9%/yr (2X/10 yrs), so the processor-memory performance gap grows about 50% per year.]
Solution: memory hierarchy, the illusion of a large, fast, cheap memory. Fact: large memories are slow, and fast memories are small. How to achieve the illusion: hierarchy and parallelism. In an expanded view of the memory system, the processor (control and datapath) sits at one end of a chain of memories; speed runs from fastest to slowest, size from smallest to biggest, and cost from highest to lowest.
Memory hierarchy principle: at any given time, data is copied between only two adjacent levels.
- Upper level (closer to the processor): smaller, faster, uses more expensive technology.
- Lower level (further from the processor): bigger, slower, uses less expensive technology.
Block: the basic unit of information transfer, i.e. the minimum unit of information that can either be present or not present in a level of the hierarchy.
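The "measuring and improving cache performance" topic is conventionally built around average memory access time, AMAT = hit time + miss rate × miss penalty. A minimal C sketch with made-up numbers (none of these figures come from the chapter):

```c
#include <stdio.h>

int main(void) {
    /* Illustrative numbers only; not taken from the chapter. */
    double hit_time     = 1.0;    /* cycles for a cache hit               */
    double miss_rate    = 0.05;   /* fraction of accesses that miss       */
    double miss_penalty = 100.0;  /* cycles to fetch the block from DRAM  */

    /* AMAT = hit time + miss rate * miss penalty */
    double amat = hit_time + miss_rate * miss_penalty;
    printf("AMAT = %.1f cycles\n", amat);  /* 6.0 cycles */
    return 0;
}
```

Even a 5% miss rate dominates the average here, which is the motivation for the chapter's miss-rate and miss-penalty optimizations.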
-
ARM Processor Architecture
ARM Processor Architecture. Jin-Fu Li, Department of Electrical Engineering, National Central University. Adopted from National Chiao-Tung University (IP Core Design, SOC Consortium course material).
Outline: ARM processor core, memory hierarchy, software development, summary.
3-stage pipeline ARM organization:
- Register bank: 2 read ports and 1 write port, with access to any register, plus 1 additional read port and 1 additional write port for r15 (the PC).
- Barrel shifter: shifts or rotates an operand by any number of bits.
- ALU.
- Address register and incrementer.
- Data registers: hold data passing to and from memory (the data-out and data-in registers).
- Instruction decoder and control.
The 3-stage pipeline:
- Fetch: the instruction is fetched from memory and placed in the instruction pipeline.
- Decode: the instruction is decoded and the datapath control signals are prepared for the next cycle.
- Execute: the register bank is read, an operand is shifted, and the ALU result is generated and written back into the destination register.
At any time slice, three different instructions may occupy each of these stages, so the hardware in each stage has to be capable of independent operation (illustrated in the sketch below). When the processor is executing data-processing instructions...
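To make the time-slice point concrete, here is a toy C sketch (not ARM's actual control logic; the instruction strings are invented) showing three instructions overlapping in the fetch, decode, and execute stages:

```c
#include <stdio.h>

/* Toy illustration of 3-stage pipelining: instruction i is fetched in
 * cycle i, decoded in cycle i+1, and executed in cycle i+2, so from
 * cycle 2 onward three instructions are in flight at once. */
int main(void) {
    const char *instr[] = {"ADD r0,r1,r2", "SUB r3,r0,r4", "MOV r5,r3"};
    const int n = 3;

    for (int cycle = 0; cycle < n + 2; cycle++) {
        printf("cycle %d:", cycle);
        if (cycle < n)                   printf("  fetch: %-13s",  instr[cycle]);
        if (cycle >= 1 && cycle - 1 < n) printf("  decode: %-13s", instr[cycle - 1]);
        if (cycle >= 2 && cycle - 2 < n) printf("  execute: %s",   instr[cycle - 2]);
        printf("\n");
    }
    return 0;
}
```

At cycle 2 the printout shows all three stages busy, which is exactly the overlap the slide describes.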
-
Advanced Computer Architecture
Advanced Computer Architecture (EECC722, Shaaban, Lec #1, Fall 2012).
Course goal: understanding the important emerging design techniques, machine structures, technology factors, and evaluation methods that will determine the form of high-performance programmable processors and computing systems in the 21st century.
Important factors:
- Driving force: applications with diverse and increased computational demands, even in mainstream computing (multimedia etc.).
- Techniques must be developed to overcome the major limitations of current computing systems to meet such demands: instruction-level parallelism (ILP) limitations, memory latency, and I/O performance; increased branch penalty and other stalls in deeply pipelined CPUs; and general-purpose processors as the only homogeneous system computing resource.
- Increased density of VLSI logic (more than three billion transistors in 2011) is the enabling technology for many possible solutions: it enables more advanced architectural enhancements; chip-level thread-level parallelism via simultaneous multithreading (SMT) and chip multiprocessors (CMPs, a.k.a. multi-core processors); and a high level of chip-level system integration, the system-on-chip (SOC) approach.
Course topics include:
- Overcoming inherent ILP and clock-scaling limitations by exploiting thread-level parallelism (TLP): support for simultaneous multithreading (SMT), as in the Alpha EV8, Intel P4 Xeon and Core i7 (a.k.a. Hyper-Threading), and IBM Power5; and chip multiprocessors (CMPs), e.g. the Hydra project, an example CMP with hardware data/thread-level speculation (TLS) support, and IBM Power4, 5, 6 ....
- Instruction fetch bandwidth / memory latency reduction: conventional and block-based trace caches (Intel P4).
- Advanced dynamic branch prediction techniques (see the sketch below).
- Towards micro-heterogeneous computing systems: vector processing and Vector Intelligent RAM (VIRAM); digital signal processing (DSP) and media processors; graphics processing units (GPUs); reconfigurable computing and processors.
- Virtual memory design/implementation issues.
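The branch-prediction topic above typically starts from the classic 2-bit saturating-counter predictor as the baseline that advanced techniques refine. A minimal C sketch of that baseline (the branch-outcome trace is invented for illustration):

```c
#include <stdio.h>

/* Classic 2-bit saturating counter: states 0-1 predict not-taken,
 * states 2-3 predict taken; the counter saturates at 0 and 3, so a
 * single mispredict does not flip a strongly biased prediction. */
typedef struct { int counter; } Predictor;

int predict(const Predictor *p) { return p->counter >= 2; }

void update(Predictor *p, int taken) {
    if (taken  && p->counter < 3) p->counter++;
    if (!taken && p->counter > 0) p->counter--;
}

int main(void) {
    Predictor p = { .counter = 2 };             /* start weakly taken    */
    int outcomes[] = {1, 1, 0, 1, 1, 1, 0, 0};  /* invented branch trace */
    int correct = 0;

    for (int i = 0; i < 8; i++) {
        correct += (predict(&p) == outcomes[i]);
        update(&p, outcomes[i]);
    }
    printf("correct predictions: %d/8\n", correct);
    return 0;
}
```

The advanced techniques the course lists (correlating and hybrid predictors, for example) improve on this baseline by indexing such counters with branch history rather than one counter per branch.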