Design Options for Small Scale Shared Memory Multiprocessors
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
- Cache Coherence in More Detail
  Lecture slides for ECE 259 / CPS 221 Advanced Computer Architecture II (Parallel Computer Architecture), "Shared Memory MPs: Coherence & Snooping," copyright 2006 Daniel J. Sorin, Duke University; the slides are derived from work by Sarita Adve (Illinois), Babak Falsafi (CMU), Mark Hill (Wisconsin), Alvy Lebeck (Duke), Steve Reinhardt (Michigan), and J. P. Singh (Princeton). Hardware shared memory takes multiple microprocessors and implements a memory system with a single global physical address space (usually), with communication-assist hardware doing the "magic" of cache coherence. The design goals are to minimize memory latency (using co-location and caches) and to maximize memory bandwidth (using parallelism and caches). The slides compare the main memory-system organizations, namely (a) shared cache, (b) bus-based shared memory, (c) dancehall, and (d) distributed memory, and then cover the motivation for cache-coherent shared memory, snooping cache coherence (Chapter 5: basic systems and design tradeoffs), implementing snooping systems (Chapter 6), and advanced snooping systems.
- Introduction and Cache Coherency
  Lecture slides for EEC 581 Computer Architecture, "Multiprocessor and Memory Coherence," Department of Electrical Engineering and Computer Science, Cleveland State University. The slides review the memory hierarchy of a multiprocessor (shared cache, bus-based shared memory, fully-connected shared memory in the dancehall style, and distributed shared memory) and then introduce the cache coherency problem: the cache level closest to the processor is private, so multiple copies of a cache line can be present across different processor nodes, and local updates lead to an incoherent state. The problem exhibits itself in both write-through and writeback caches; a bus makes writes globally visible, whereas a point-to-point interconnect makes them visible only to the communicating processor nodes. Worked examples with writeback and write-through caches show one processor updating X from -100 to 505 while stale copies of -100 remain visible to the other processors. Coherence is then defined: a multiprocessor is coherent if the results of any execution of a program can be reconstructed by a hypothetical serial order. This implies write propagation (writes are visible to other processors) and write serialization (all writes to the same location are seen in the same order by all processors; extending this to all locations is called write atomicity); for example, if a read on P1 observes w1 followed by w2, all reads by the other processors Pi will see them in the same order. A closing "sounds easy?" example traces two writes, A=1 and B=2, becoming visible to processors P0 through P3 at different times. A plain-software analogy of the stale-copy problem is sketched below.
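The stale-copy scenario described in the entry above can be mimicked in ordinary software. This is only a rough analogy under invented names (the ToyCache struct and its fields are not from the slides); real coherence is enforced by hardware, but the sketch shows why an update that is not propagated leaves other readers with a stale value.

```cpp
// A minimal software analogy of the stale-copy problem described above.
// Real coherence is enforced by hardware; here each "cache" is just a struct
// holding a private copy of memory location X. All names are illustrative.
#include <cstdio>

struct ToyCache {
    int  copy_of_x;   // privately cached value of X
    bool valid;       // does this cache hold a copy at all?
};

int main() {
    int memory_x = -100;                       // main-memory value of X
    ToyCache p0{memory_x, true};               // P0 has read X (copy = -100)
    ToyCache p1{memory_x, true};               // P1 has read X (copy = -100)

    // P0 writes X = 505 into its own cache (writeback: memory not updated yet).
    p0.copy_of_x = 505;

    // Without write propagation, P1 still reads the stale value.
    std::printf("P1 reads X = %d (stale)\n", p1.valid ? p1.copy_of_x : memory_x);

    // Write propagation, invalidate style: P1's copy is discarded, so its next
    // read misses and must fetch the up-to-date value.
    p1.valid = false;
    memory_x = p0.copy_of_x;                   // writeback of the dirty line
    std::printf("P1 re-reads X = %d (after invalidation)\n",
                p1.valid ? p1.copy_of_x : memory_x);
    return 0;
}
```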
- Verification of Hierarchical Cache Coherence Protocols for Futuristic Processors
  Ph.D. dissertation by Xiaofang Chen, School of Computing, The University of Utah, December 2008; the supervisory committee was chaired by Ganesh L. Gopalakrishnan, with Steven M. German, Ching-Tsun Chou, John B. Carter, and Rajeev Balasubramonian. From the abstract: multicore architectures are considered inevitable, given that sequential processing hardware has hit various limits. Unfortunately, the memory system of multicore processors is a huge bottleneck, as distant memory accesses cost thousands of cycles. To combat this problem, one must design aggressively optimized cache coherence protocols.
- Verification of Hierarchical Cache Coherence Protocols for Future Processors
  Ph.D. dissertation by Xiaofang Chen, School of Computing, The University of Utah, May 2008, with the same supervisory committee as the preceding entry. From the abstract: the advancement of technology promises to make chip multiprocessors, or multicores, ubiquitous. With multicores there naturally exists a memory hierarchy across which caches have to be kept coherent. Currently, large hierarchical cache coherence protocols are verified either at the high (specification) level or at the low (RTL implementation) level. A toy example of the kind of exhaustive state exploration used in protocol verification is sketched below.
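The dissertations above verify industrial-scale hierarchical protocols; the following is only a toy illustration, under invented names, of the kind of exhaustive state exploration such verification relies on: every reachable configuration of a two-cache, single-line, MSI-style protocol is enumerated and the single-writer invariant is checked in each one.

```cpp
// A toy illustration of exhaustive (model-checking style) protocol verification,
// far simpler than the hierarchical protocols the dissertation targets.
// Two caches share one line under a simplified MSI protocol; we enumerate every
// reachable configuration and check the single-writer invariant.
#include <cstdio>
#include <queue>
#include <set>
#include <utility>

enum State { I, S, M };
using Config = std::pair<State, State>;   // (cache0 state, cache1 state)

// Apply a processor read or write on cache `who` and return the new configuration.
Config step(Config c, int who, bool isWrite) {
    State self  = (who == 0) ? c.first  : c.second;
    State other = (who == 0) ? c.second : c.first;
    if (isWrite) {                 // write: obtain exclusive ownership, invalidate peer
        self = M;
        other = I;
    } else {                       // read: obtain a shared copy, downgrade a dirty peer
        if (self == I)  self  = S;
        if (other == M) other = S; // peer writes back and keeps a shared copy
    }
    return (who == 0) ? Config{self, other} : Config{other, self};
}

int main() {
    std::set<Config> seen;
    std::queue<Config> work;
    work.push({I, I});
    seen.insert({I, I});
    bool ok = true;

    while (!work.empty()) {
        Config c = work.front(); work.pop();
        // Invariant: if either cache is Modified, the other must be Invalid.
        if ((c.first == M && c.second != I) || (c.second == M && c.first != I))
            ok = false;
        for (int who = 0; who < 2; ++who)
            for (bool isWrite : {false, true}) {
                Config n = step(c, who, isWrite);
                if (seen.insert(n).second) work.push(n);
            }
    }
    std::printf("explored %zu states, invariant %s\n",
                seen.size(), ok ? "holds" : "violated");
    return 0;
}
```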
- Parallel System Architectures 2017 — Lab Assignment 1: Cache Coherency
  Lab handout by Jun Xiao and Simon Polstra, Institute of Informatics, Computer Systems Architecture (Dr. Andy Pimentel), dated September 4, 2017. In this assignment students use SystemC to build a simulator of a Level-1 data cache and various implementations of a cache coherency protocol, and evaluate their performance. The simulator is driven by provided trace files; background on cache coherence and memory is given in Appendices A and B. The framework can be downloaded from https://staff.fnwi.uva.nl/s.polstra/psa2017/ and contains all documentation, the helper library (psa.h and psa.cpp) with supporting functions for managing trace files and statistics, the trace files, and a Makefile that compiles the contents of each directory under src/ as a separate target, including the tutorial code and a piece of example code for Assignment 1. Detailed documentation for the helper functions and classes is provided in Appendix D, and the Assignment 1 example code exercises them all. Although simulations with up to eight processors with caches must eventually be built, it is easier to start with a single-CPU organization; a generic trace-driven skeleton (not the course framework) is sketched below.
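The assignment's psa helper library and trace format are not reproduced here. The sketch below is a generic, self-contained C++ skeleton of the trace-driven structure such a simulator typically has; the one-access-per-line trace format ("R 0x1a2b" / "W 0x1a2b"), the class name, and the cache parameters are all assumptions, not the course's actual interface.

```cpp
// Generic sketch of a trace-driven single-CPU L1 data-cache simulator.
// Assumed trace format: one access per line, e.g. "R 0x1a2b" or "W 0x1a2b".
// This is NOT the psa framework API; names and parameters are illustrative.
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <sstream>
#include <string>
#include <vector>

class L1DataCache {
public:
    L1DataCache(std::size_t numSets, std::size_t lineBytes)
        : sets_(numSets), lineBytes_(lineBytes), tags_(numSets, 0), valid_(numSets, false) {}

    // Returns true on a hit; a miss simply fills the (direct-mapped) set.
    bool access(std::uint64_t addr, bool isWrite) {
        std::uint64_t block = addr / lineBytes_;
        std::size_t   set   = block % sets_;
        bool hit = valid_[set] && tags_[set] == block;
        if (!hit) { valid_[set] = true; tags_[set] = block; }
        (isWrite ? writes_ : reads_)++;
        hits_ += hit;
        return hit;
    }

    void report() const {
        std::size_t total = reads_ + writes_;
        std::cout << "accesses=" << total << " hits=" << hits_
                  << " hitrate=" << (total ? double(hits_) / total : 0.0) << "\n";
    }

private:
    std::size_t sets_, lineBytes_;
    std::vector<std::uint64_t> tags_;
    std::vector<bool> valid_;
    std::size_t reads_ = 0, writes_ = 0, hits_ = 0;
};

int main() {
    L1DataCache l1(512, 32);            // 512 sets x 32-byte lines = 16 KiB, direct-mapped
    std::string line;
    while (std::getline(std::cin, line)) {   // read the trace from standard input
        std::istringstream in(line);
        char op; std::uint64_t addr;
        if (in >> op >> std::hex >> addr)
            l1.access(addr, op == 'W');
    }
    l1.report();
    return 0;
}
```

Extending this skeleton toward the assignment mostly means instantiating several such caches, attaching them to a shared interconnect model, and adding the coherence state per line.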
- Cache Coherence Protocols
  Presentation for Computer Architecture (EECC551) by Sundararaman Nakshatra. The overview covers multiple-processor systems, defined as computer systems in which two or more processors work simultaneously and share the same hard disk, memory, and other devices, and their advantages: reduced cost, since the processors share resources such as the power supply and motherboard; increased reliability, since the failure of one processor does not affect the others, though it slows the machine down, provided there is no master/slave arrangement; and increased throughput. It then surveys multiprocessor hardware types based on memory organization (distributed, shared, and distributed shared memory), the need for cache ("a computer memory with very short access time used for storage of frequently used instructions or data," per webster.com) together with its functions and advantages, the problem caches create in a multiprocessor system, the cache coherence problem assuming writeback caches and its solution, the bus-snooping write-invalidate protocol for both write-through and write-back caches along with its problems, a comparison of write update versus write invalidate, some other cache coherence protocols, enhancements to them, and references. A minimal write-invalidate sketch for write-through caches follows this entry.
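As a companion to the entry above, here is a minimal sketch of a write-invalidate protocol for write-through caches: a line is either Valid or Invalid, a processor write goes through to memory and broadcasts an invalidation on the bus, and every other cache that snoops the invalidation drops its copy. The Bus and Cache classes and their method names are invented for illustration.

```cpp
// Minimal write-invalidate protocol sketch for write-through caches (Valid/Invalid states).
// A single shared "bus" is modeled as a broadcast to every other cache.
#include <cstdint>
#include <cstdio>
#include <unordered_set>
#include <vector>

class Bus;   // forward declaration

class Cache {
public:
    explicit Cache(int id) : id_(id) {}
    void connect(Bus* bus) { bus_ = bus; }

    int  read(std::uint64_t addr, const std::vector<int>& memory);
    void write(std::uint64_t addr, int value, std::vector<int>& memory);

    // Snoop an invalidation broadcast by another cache.
    void snoopInvalidate(std::uint64_t addr) { validLines_.erase(addr); }

private:
    int id_;
    Bus* bus_ = nullptr;
    std::unordered_set<std::uint64_t> validLines_;  // addresses currently Valid
};

class Bus {
public:
    void attach(Cache* c) { caches_.push_back(c); }
    void broadcastInvalidate(std::uint64_t addr, Cache* sender) {
        for (Cache* c : caches_)
            if (c != sender) c->snoopInvalidate(addr);
    }
private:
    std::vector<Cache*> caches_;
};

int Cache::read(std::uint64_t addr, const std::vector<int>& memory) {
    if (!validLines_.count(addr)) {          // miss: fetch from memory, mark Valid
        validLines_.insert(addr);
        std::printf("cache %d: read miss on %llu\n", id_, (unsigned long long)addr);
    }
    return memory[addr];                     // write-through, so memory is up to date
}

void Cache::write(std::uint64_t addr, int value, std::vector<int>& memory) {
    memory[addr] = value;                    // write through to memory
    validLines_.insert(addr);                // keep (or allocate) our own copy
    bus_->broadcastInvalidate(addr, this);   // invalidate every other copy
}

int main() {
    std::vector<int> memory(16, -100);
    Bus bus;
    Cache c0(0), c1(1);
    c0.connect(&bus); c1.connect(&bus);
    bus.attach(&c0);  bus.attach(&c1);

    c1.read(3, memory);                      // c1 caches address 3 (value -100)
    c0.write(3, 505, memory);                // c0 writes: memory updated, c1 invalidated
    std::printf("c1 now reads %d\n", c1.read(3, memory));  // miss, then 505
    return 0;
}
```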
- B4M35PAP Advanced Computer Architectures: Multiprocessor Systems and the Memory Coherence Problem
  Lecture slides from the Czech Technical University in Prague, Faculty of Electrical Engineering (slide authors Michal Štepanovský, updated by Pavel Píša). Content: what cache memory is, what SMP is, other multiprocessor organizations such as UMA and NUMA, consistency and coherence, the coherence protocol, and an explanation of the states of cache lines. The slides note a change of meaning: a multiprocessor system is a system with multiple processors, but today "processor" can refer to a package or piece of silicon with multiple cores. From the software point of view there are shared memory systems (a single operating system image; OpenMP is the standard, and MPI can be used as well), whose advantage is easier programming and data sharing, and distributed memory systems (communication and data exchange by message passing, a unique OS instance on each node, network protocols, RPC, MPI; sometimes labeled NoRMA, No Remote Memory Access), whose advantages are less required hardware and easier scaling, often sped up by Remote Direct Memory Access (RDMA) or Remote Memory Access (RMA), for example over InfiniBand. From the hardware point of view, shared memory systems have a single common physical address space (SMP, UMA), while distributed memory systems physically spread memory across nodes, with the address space either private to a node (cluster) or global (NUMA), and often a more or less hybrid organization. An SMP (symmetric multiprocessor) connects identical processors to a central common memory, which they all access in the same way: by the same addresses, ports, and instructions. A small OpenMP example of the shared-memory programming model follows below.
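To make the shared-memory programming model mentioned in the entry above concrete, here is a small, self-contained OpenMP example (not taken from the slides): all threads operate on the same array in a single address space, and the reduction clause combines their partial sums.

```cpp
// Tiny shared-memory example using OpenMP: one process, one address space,
// several threads operating on the same data. (Illustrative; not from the slides.)
// Compile with, e.g.:  g++ -fopenmp openmp_sum.cpp -o openmp_sum
#include <cstdio>
#include <vector>
#include <omp.h>

int main() {
    const int n = 1'000'000;
    std::vector<double> a(n);

    // Every thread writes a disjoint part of the shared array.
    #pragma omp parallel for
    for (int i = 0; i < n; ++i)
        a[i] = 0.5 * i;

    // Every thread reads the shared array; the reduction combines partial sums.
    double sum = 0.0;
    #pragma omp parallel for reduction(+ : sum)
    for (int i = 0; i < n; ++i)
        sum += a[i];

    std::printf("threads=%d sum=%.1f\n", omp_get_max_threads(), sum);
    return 0;
}
```

In a distributed-memory system the same computation would instead be split across processes that exchange explicit messages (for example with MPI), since no common address space exists.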
- MESI Cache Coherence Protocol
  An entry on the MESI (Modified, Exclusive, Shared, Invalid) protocol. The extracted preview text is badly garbled; the fragments that remain touch on CPU cores writing to their copies of a cache line, the MESI states and the transitions between them (drawn as labeled arcs in the original figures), store buffers and the writeback of dirty cache lines, the bus bandwidth that snooping transactions consume, and keeping memory consistent when data moves between caches or is accessed by a DMA controller. A sketch of the MESI state machine, reconstructed from the standard description of the protocol rather than from this preview, follows below.
- Snooping Protocols
  Lecture slides by Mahdi Nazm Bojnordi, Assistant Professor, School of Computing, University of Utah, for CS/ECE 7810: Advanced Computer Architecture. The slides also announce an upcoming deadline, Feb. 10th, for a one-page project proposal explaining the project subject, objectives, tools and simulators to be used, and possible methodologies for evaluation. The lecture covers coherence basics, update versus invalidate, a simple protocol, the Illinois protocol, the MESI protocol, the MOESI optimization, and implementation issues. In the shared memory model, parallel programs communicate through the shared memory system; for example, a write from P1 (Mem[A] = 1) is followed by a read from P2 (print Mem[A]) to the same location A, and the problem is what happens if Mem[A] was cached by P1 or P2 (writable versus read-only data). A cache coherence protocol guarantees that all processors see a consistent value for the same memory location by providing write propagation, which sends updates to other caches, and write serialization, which provides a consistent global order seen by all processors; a global point of serialization is needed for ordering store instructions. Bus snooping [Goodman'83] relies on a broadcast infrastructure among caches: every cache monitors (snoops) the traffic to keep the states of its cache blocks up to date, since all communication can be seen by all. Directory-based schemes are the more scalable solution. Write propagation can be implemented with an invalidate signal, which keeps a single copy of the data after a write, or with an update message, which updates all of the other copies; a minimal sketch contrasting the two follows below.
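To illustrate the invalidate-versus-update choice described above, here is a minimal software sketch with invented names: on a write, the invalidate policy drops every other copy so that later readers must re-fetch the line, while the update policy pushes the new value into every copy that already exists.

```cpp
// Minimal contrast between the two write-propagation policies named above.
// Each "cache" is just an optional cached value of one location X; names invented.
#include <cstdio>
#include <optional>
#include <vector>

struct Copy { std::optional<int> x; };   // one cache's copy of location X

// Write-invalidate: the writer keeps the only copy, every other copy is dropped.
void writeInvalidate(std::vector<Copy>& caches, int writer, int value) {
    for (int i = 0; i < (int)caches.size(); ++i)
        if (i != writer) caches[i].x.reset();          // snooped invalidation
    caches[writer].x = value;
}

// Write-update: the new value is broadcast into every existing copy.
void writeUpdate(std::vector<Copy>& caches, int writer, int value) {
    caches[writer].x = value;
    for (int i = 0; i < (int)caches.size(); ++i)
        if (i != writer && caches[i].x) caches[i].x = value;   // snooped update
}

void dump(const char* label, const std::vector<Copy>& caches) {
    std::printf("%s:", label);
    for (const Copy& c : caches)
        c.x ? std::printf(" %d", *c.x) : std::printf(" -");
    std::printf("\n");
}

int main() {
    std::vector<Copy> a(3, Copy{1}), b(3, Copy{1});   // all caches hold Mem[A] = 1
    writeInvalidate(a, 0, 42);   // others must miss and re-fetch on their next read
    writeUpdate(b, 0, 42);       // others keep a copy that is already up to date
    dump("invalidate", a);       // prints: 42 - -
    dump("update    ", b);       // prints: 42 42 42
    return 0;
}
```

Invalidation tends to use less bus bandwidth when data migrates from writer to writer, while updates help tight producer-consumer sharing; this tradeoff is exactly the "update vs. invalidate" topic listed in the lecture outline.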