Design Options for Small Scale Shared Memory Multiprocessors


DESIGN OPTIONS FOR SMALL SCALE SHARED MEMORY MULTIPROCESSORS

by Luiz André Barroso

A Dissertation Presented to the FACULTY OF THE GRADUATE SCHOOL, UNIVERSITY OF SOUTHERN CALIFORNIA, in Partial Fulfillment of the Requirements for the Degree DOCTOR OF PHILOSOPHY (Computer Engineering)

December 1996
Copyright 1996 Luiz André Barroso

to Jacqueline Chame

Acknowledgments

During my stay at USC I have had the privilege of interacting with a number of people who have, in many different and significant ways, helped me along the way. Out of this large group I would like to mention just a few names. Koray Oner, Krishnan Ramamurthy, Weihua Mao, Barton Sano and Fong Pong have been friends and colleagues in the everyday grind. Koray, Jaeheon Jeong and I spent way too many sleepless nights together building and debugging the RPM multiprocessor. Thanks to their work ethic, talent and self-motivation we were able to get it done. I am also thankful for the support of my thesis committee throughout the years. Although separated from them by thousands of miles, my family has been very much present all along, and I cannot thank them enough for their love and support. The Nobrega Chame family has been no less loving and supportive. My friends, PC and Beto, have also been in my heart and thoughts despite the distance. I am indebted to the people at Digital Equipment Western Research Laboratory for offering me a job in a very special place. Thanks in particular to Joel Bartlett, Kourosh Gharachorloo and Marco Annaratone for reminding me that I had a thesis to finish when I was immersed in a lot of other fun stuff. Jacqueline Chame is the main reason why I have survived it.

Table of Contents

CHAPTER 1: INTRODUCTION
  1.1 Motivations
  1.2 Summary of Research Contributions
  1.3 Prior Related Work and Background
    1.3.1 Multiprocessor Interconnect Architectures
      1.3.1.1 Uniform vs. Non-Uniform Memory Access Architectures
      1.3.1.2 Limits on Bus Performance
      1.3.1.3 Point-to-Point Links
      1.3.1.4 Ring Networks
      1.3.1.5 Crossbar Networks
      1.3.1.6 Other Networks
      1.3.1.7 Cluster-based Architectures
    1.3.2 Cache Coherence Protocols
      1.3.2.1 Snooping
      1.3.2.2 Centralized Directories
      1.3.2.3 Distributed Directories
    1.3.3 Reducing and Tolerating Memory Latencies
      1.3.3.1 Prefetching
      1.3.3.2 Relaxed Consistency Models
      1.3.3.3 Multithreading
      1.3.3.4 Hardware Support for Synchronization
    1.3.4 Performance Evaluation Methodologies

CHAPTER 2: CACHE COHERENCE IN RING BASED MULTIPROCESSORS
  2.1 Ring Architectures
    2.1.1 Token-Passing Ring
    2.1.2 Register Insertion Ring
    2.1.3 Slotted Ring
    2.1.4 Packaging and Electrical Considerations
  2.2 Dividing the Ring into Message Slots
  2.3 Cache Coherence Protocols for a Slotted Ring Multiprocessor
    2.3.1 Centralized Directory Protocols
    2.3.2 Distributed Directory Protocols
    2.3.3 Snooping Protocols
  2.4 Summary

CHAPTER 3: PERFORMANCE EVALUATION METHODOLOGY
  3.1 Trace-driven Simulations
  3.2 A Hybrid Analytical Methodology
    3.2.1 Analytic Models for Ring-based Protocols
  3.3 Program-driven Simulations
  3.4 Benchmarks

CHAPTER 4: PERFORMANCE OF UNIDIRECTIONAL RING MULTIPROCESSORS
  4.1 Snooping vs. Centralized Directory Protocols
  4.2 Distributed Directory Protocols
  4.3 Effect of Cache Block Size

CHAPTER 5: PERFORMANCE OF BIDIRECTIONAL RING MULTIPROCESSORS
  5.1 Bidirectional Rings and Evaluation Assumptions
  5.2 Simulation of Unidirectional and Bidirectional Rings
  5.3 Discussion
  5.4 Summary

CHAPTER 6: PERFORMANCE OF NUMA BUS MULTIPROCESSORS
  6.1 A High-Performance NUMA Bus Architecture
  6.2 A NUMA Bus Snooping Protocol
  6.3 Packet- vs. Circuit-Switched Buses
  6.4 Performance Evaluation of a Packet-Switched NUMA Bus
  6.5 Potential of Software Prefetching
  6.6 Summary

CHAPTER 7: PERFORMANCE OF CROSSBAR MULTIPROCESSORS
  7.1 A NUMA Crossbar-based Multiprocessor Architecture
    7.1.1 Cache Coherence Protocols for Crossbar-connected Multiprocessors
    7.1.2 Simulation Results for Ring, Bus and Crossbar-based Systems
  7.2 Summary

CHAPTER 8: HARDWARE SUPPORT FOR LOCKING OPERATIONS
  8.1 Atomic Operations
  8.2 Test&Set Primitives in Write-Invalidate Protocols
  8.3 Queue On Lock Bit (QOLB)
  8.4 Hardware Support for Locking on Snooping Slotted Rings
  8.5 Performance Impact of Hardware Locking Mechanisms
  8.6 Summary

CHAPTER 9: THE IMPACT OF RELAXED MEMORY CONSISTENCY MODELS
  9.1 Introduction
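Chapter 8 studies test&set primitives under write-invalidate coherence, where a naive test&set spin loop forces the lock's cache line to ping-pong between caches. A common software mitigation is test-and-test&set: spin on an ordinary load (the line stays shared and generates no coherence traffic) and attempt the atomic operation only when the lock looks free. The sketch below is a minimal, hypothetical illustration in portable C11 atomics, not code from the dissertation:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical lock type for illustration only. */
typedef struct { atomic_bool held; } spinlock_t;

static void spin_lock(spinlock_t *l) {
    for (;;) {
        /* Spin on a plain load first: the line can stay in the Shared
           state, so waiting processors generate no invalidations. */
        while (atomic_load_explicit(&l->held, memory_order_relaxed))
            ;
        /* Only then attempt the atomic test&set, which acquires the
           line exclusively and invalidates other cached copies. */
        if (!atomic_exchange_explicit(&l->held, true, memory_order_acquire))
            return; /* exchange returned false: we took the lock */
    }
}

static void spin_unlock(spinlock_t *l) {
    atomic_store_explicit(&l->held, false, memory_order_release);
}
```

Even with this refinement, a lock release still triggers a burst of invalidations and retries among all waiters, which is the contention behavior that hardware schemes such as QOLB (Section 8.3) are designed to avoid.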