Study and Performance Analysis of Cache-Coherence Protocols in Shared-Memory Multiprocessors
Total Page:16
File Type:pdf, Size:1020Kb
Load more
Recommended publications
-
Page 1 Cache Coherence in More Detail
What is (Hardware) Shared Memory? • Take multiple microprocessors ECE 259 / CPS 221 Advanced Computer Architecture II (Parallel Computer Architecture) • Implement a memory system with a single global physical address space (usually) Shared Memory MPs – Coherence & Snooping – Communication assist HW does the “magic” of cache coherence Copyright 2006 Daniel J. Sorin • Goal 1: Minimize memory latency Duke University – Use co-location & caches Slides are derived from work by • Goal 2: Maximize memory bandwidth Sarita Adve (Illinois), Babak Falsafi (CMU), – Use parallelism & caches Mark Hill (Wisconsin), Alvy Lebeck (Duke), Steve Reinhardt (Michigan), and J. P. Singh (Princeton). Thanks! (C) 2006 Daniel J. Sorin from Adve, Falsafi, Hill, Lebeck, Reinhardt, Singh ECE 259 / CPS 221 3 Outline Some Memory System Options • Motivation for Cache-Coherent Shared Memory P1 Pn Switch P P1 n (Interleaved) First-level $ • Snooping Cache Coherence (Chapter 5) $ $ – Basic systems Bus (Interleaved) – Design tradeoffs Main memory Mem I/O devices • Implementing Snooping Systems (Chapter 6) (a) Shared cache (b) Bus-based shared memory P1 Pn P • Advanced Snooping Systems P1 n $ $ $ $ Mem Mem Interconnection network Interconnection network Mem Mem (c) Dancehall (d) Distributed-memory (C) 2006 Daniel J. Sorin from Adve, (C) 2006 Daniel J. Sorin from Adve, Falsafi, Hill, Lebeck, Reinhardt, Singh ECE 259 / CPS 221 2 Falsafi, Hill, Lebeck, Reinhardt, Singh ECE 259 / CPS 221 4 Page 1 Cache Coherence In More Detail • According to Webster’s dictionary … • Efficient -
An Evaluation of Cache Coherence Protocols
An Evaluation of Snoop-Based Cache Coherence Protocols Linda Bigelow Veynu Narasiman Aater Suleman ECE Department The University of Texas at Austin fbigelow, narasima, [email protected] I. INTRODUCTION based cache coherence protocol is one that not only maintains coherence, but does so with minimal performance degrada- A common design for multiprocessor systems is to have a tion. small or moderate number of processors each with symmetric In the following sections, we will first describe some of access to a global main memory. Such systems are known as the existing snoop-based cache coherence protocols, explain Symmetric Multiprocessors, or SMPs. All of the processors their deficiencies, and discuss solutions and optimizations are connected to each other as well as to main memory that have been proposed to improve the performance of through the same interconnect, usually a shared bus. these protocols. Next, we will discuss the hardware imple- In such a system, when a certain memory location is read, mentation considerations associated with snoop-based cache we expect that the value returned is the latest value written coherence protocols. We will highlight the differences among to that location. This property is definitely maintained in implementations of the same coherence protocol, as well a uniprocessor system. However, in an SMP, where each as differences required across different coherence protocols. processor has its own cache, special steps have to be taken Lastly, we will evaluate the performance of several different to ensure that this is true. For example, consider the situation cache coherence protocols using real parallel applications run where two different processors, A and B, are reading from on a multiprocessor simulation model. -
Tightly-Coupled and Fault-Tolerant Communication in Parallel Systems
Tightly-Coupled and Fault-Tolerant Communication in Parallel Systems Inauguraldissertation zur Erlangung des akademischen Grades eines Doktors der Naturwissenschaften der Universität Mannheim vorgelegt von Dipl.-Inf. David Christoph Slogsnat aus Heidelberg Mannheim, 2008 Dekan: Prof. Dr. Matthias Krause, Universität Mannheim Referent: Prof. Dr. Ulrich Brüning, Universität Heidelberg Koreferent: Prof. Dr. Reinhard Männer, Universität Heidelberg Tag der mündlichen Prüfung: 4. August 2008 Abstract The demand for processing power is increasing steadily. In the past, single processor archi- tectures clearly dominated the markets. As instruction level parallelism is limited in most applications, significant performance can only be achieved in the future by exploiting par- allelism at the higher levels of thread or process parallelism. As a consequence, modern “processors” incorporate multiple processor cores that form a single shared memory multi- processor. In such systems, high performance devices like network interface controllers are connected to processors and memory like every other input/output device over a hierarchy of periph- eral interconnects. Thus, one target must be to couple coprocessors physically closer to main memory and to the processors of a computing node. This removes the overhead of today’s peripheral interconnect structures. Such a step is the direct connection of Hyper- Transport (HT) devices to Opteron processors, which is presented in this thesis. Also, this work analyzes how communication from a device to processors can be optimized on the protocol level. As today’s computing nodes are shared memory systems, the cache coherence protocol is the central protocol for data exchange between processors and devices. Consequently, the analysis extends to classes of devices that are cache coherence protocol aware. -
A Simulation Framework for Evaluating Location Consistency Based Cache
LC-SIM: A SIMULATION FRAMEWORK FOR EVALUATING LOCATION CONSISTENCY BASED CACHE PROTOCOLS by Pouya Fotouhi A thesis submitted to the Faculty of the University of Delaware in partial fulfillment of the requirements for the degree of Master of Science in Computer Engineering Spring 2017 c 2017 Pouya Fotouhi All Rights Reserved LC-SIM: A SIMULATION FRAMEWORK FOR EVALUATING LOCATION CONSISTENCY BASED CACHE PROTOCOLS by Pouya Fotouhi Approved: Guang R. Gao,Ph.D. Professor in charge of thesis on behalf of the Advisory Committee Approved: Kenneth E. Barner, Ph.D. Chair of the Department of Electrical and Computer Engineering Approved: Babatunde A. Ogunnaike, Ph.D. Dean of the College of Engineering Approved: Ann L. Ardis, Ph.D. Senior Vice Provost for Graduate and Professional Education ACKNOWLEDGMENTS I would like to thank Professor Gao for giving me the opportunity of joining CAPSL and multi-dimensional learning experience. With special thanks to Dr. St´ephaneZuckerman for guiding me step by step over the research, and my colleague Jose Monsalve Diaz for deep discussions and his technical help. Very special thanks to my wife Elnaz , and also my parents for their support and love. iii TABLE OF CONTENTS LIST OF FIGURES ::::::::::::::::::::::::::::::: vi ABSTRACT ::::::::::::::::::::::::::::::::::: ix Chapter 1 INTRODUCTION :::::::::::::::::::::::::::::: 1 2 BACKGROUND ::::::::::::::::::::::::::::::: 4 2.1 An Introduction to Memory Consistency Models :::::::::::: 5 2.1.1 Uniform Memory Consistency Models :::::::::::::: 6 2.1.1.1 Sequential Consistency -
Exploiting Software Information for an Efficient Memory Hierarchy
EXPLOITING SOFTWARE INFORMATION FOR AN EFFICIENT MEMORY HIERARCHY BY RAKESH KOMURAVELLI DISSERTATION Submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science in the Graduate College of the University of Illinois at Urbana-Champaign, 2014 Urbana, Illinois Doctoral Committee: Professor Sarita V. Adve, Director of Research Professor Marc Snir, Chair Professor Vikram S. Adve Professor Wen-mei W. Hwu Dr. Ravi Iyer, Intel Labs Dr. Gilles Pokam, Intel Labs Dr. Pablo Montesinos, Qualcomm Research ABSTRACT Power consumption is one of the most important factors in the design of today’s processor chips. Multicore and heterogeneous systems have emerged to address the rising power concerns. Since the memory hierarchy is becoming one of the major consumers of the on-chip power budget in these systems [73], designing an efficient memory hierarchy is critical to future systems. We identify three sources of inefficiencies in memory hierarchies of today’s systems: (a) coherence, (b) data communication, and (c) data storage. This thesis takes the stand that many of these inefficiencies are a result of today’s software-agnostic hardware design. There is a lot of information in the software that can be exploited to build an efficient memory hierarchy. This thesis focuses on identifying some of the inefficiencies related to each of the above three sources, and proposing various techniques to mitigate them by exploiting information from the software. First, we focus on inefficiencies related to coherence and communication. Today’s hardware based direc- tory coherence protocols are extremely complex and incur unnecessary overheads for sending invalidation messages and maintaining sharer lists. -
Verification of Hierarchical Cache Coherence Protocols for Future Processors
VERIFICATION OF HIERARCHICAL CACHE COHERENCE PROTOCOLS FOR FUTURE PROCESSORS by Xiaofang Chen A dissertation submitted to the faculty of The University of Utah in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science School of Computing The University of Utah May 2008 Copyright c Xiaofang Chen 2008 All Rights Reserved THE UNIVERSITY OF UTAH GRADUATE SCHOOL SUPERVISORY COMMITTEE APPROVAL of a dissertation submitted by Xiaofang Chen This dissertation has been read by each member of the following supervisory committee and by majority vote has been found to be satisfactory. Chair: Ganesh L. Gopalakrishnan Steven M. German Ching-Tsun Chou John B. Carter Rajeev Balasubramonian THE UNIVERSITY OF UTAH GRADUATE SCHOOL FINAL READING APPROVAL To the Graduate Council of the University of Utah: I have read the dissertation of Xiaofang Chen in its final form and have found that (1) its format, citations, and bibliographic style are consistent and acceptable; (2) its illustrative materials including figures, tables, and charts are in place; and (3) the final manuscript is satisfactory to the Supervisory Committee and is ready for submission to The Graduate School. Date Ganesh L. Gopalakrishnan Chair: Supervisory Committee Approved for the Major Department Martin Berzins Chair/Director Approved for the Graduate Council David S. Chapman Dean of The Graduate School ABSTRACT The advancement of technology promises to make chip multiprocessors or multicores ubiquitous. With multicores, there naturally exists a memory hierarchy across which caches have to be kept coherent. Currently, large (hierarchical) cache coherence proto- cols are verified at either the high (specification) level or at the low (RTL implementation) level. -
A Primer on Memory Consistency and CACHE COHERENCE CONSISTENCY on MEMORY a PRIMER and Cache Coherence Consistency and Daniel J
Series ISSN: 1935-3235 SORINWOOD •HILL • SYNTHESIS LECTURES ON M Morgan& Claypool Publishers COMPUTER ARCHITECTURE &C Series Editor: Mark D. Hill, University of Wisconsin A Primer on Memory A Primer on Memory Consistency A PRIMER ON MEMORY CONSISTENCY AND CACHE COHERENCE and Cache Coherence Consistency and Daniel J. Sorin, Duke University Mark D. Hill and David A. Wood, University of Wisconsin, Madison Cache Coherence Many modern computer systems and most multicore chips (chip multiprocessors) support shared memory in hardware. In a shared memory system, each of the processor cores may read and write to a single shared address space. For a shared memory machine, the memory consistency model defines the architecturally visible behavior of its memory system. Consistency definitions provide rules about loads and stores (or memory reads and writes) and how they act upon memory. As part of supporting a memory consistency model, many machines also provide cache coherence proto-cols that ensure that multiple cached copies of data are kept up-to-date. The goal of this primer is to provide readers with a basic understanding of consistency and coherence. This understanding includes both the issues that Daniel J. Sorin must be solved as well as a variety of solutions. We present both high-level concepts as well as specific, concrete examples from real-world systems. Mark D. Hill David A. Wood About SYNTHESIs This volume is a printed version of a work that appears in the Synthesis MORGAN Digital Library of Engineering and Computer Science. Synthesis Lectures provide concise, original presentations of important research and development topics, published quickly, in digital and print formats. -
Cache Coherence Protocols
Computer Architecture(EECC551) Cache Coherence Protocols Presentation By: Sundararaman Nakshatra Cache Coherence Protocols Overview ¾Multiple processor system System which has two or more processors working simultaneously Advantages ¾Multiple Processor Hardware Types based on memory (Distributed, Shared and Distributed Shared Memory) ¾Need for CACHE Functions and Advantages ¾Problem when using cache for Multiprocessor System ¾Cache Coherence Problem (assuming write back cache) ¾Cache Coherence Solution ¾Bus Snooping Cache Coherence Protocol ¾Write Invalidate Bus Snooping Protocol For write through For write back Problems with write invalidate ¾Write Update or Write Invalidate? A Comparison ¾Some other Cache Coherence Protocols ¾Enhancements in Cache Coherence Protocols ¾References Multiple Processor System A computer system which has two or more processors working simultaneously and sharing the same hard disk, memory and other memory devices. Advantages: • Reduced Cost: Multiple processors share the same resources (like power supply and mother board). • Increased Reliability: The failure of one processor does not affect the other processors though it will slow down the machine provided there is no master and slave processor. • Increased Throughput: An increase in the number of processes completes the work in less time. Multiple Processor Hardware Bus-based multiprocessors Why do we need cache? Cache Memory : “A computer memory with very short access time used for storage of frequently used instructions or data” – webster.com Cache memory -
Cache Coherence and Mesi Protocol
Cache Coherence And Mesi Protocol Multijugate and Cairene Ernest inwrapped her handstands superstructs painstakingly or shent overnight, is Kelvin lily-white? Unstooping and undissociated Carlton undercoats her photocells voyageurs plume and rebounds cagily. Maneuverable and radiophonic Rusty secludes while semipalmate Hodge overtopping her Kenny unmeritedly and menace therefrom. Disadvantage of coherence and other cached copies of its cache coherence transactions caused by intel does not need to cache. Fill in portable table this with the states of the cache lines at your step. PDF Teaching the cache memory coherence with the MESI. Coherence and the shared bus of the SMP system only looks at the types of. Two processors P1 and P2 and uniform memory are connected to a shared bus which implements the MESI cache coherency protocol. Protokoll wurde zuerst von forschern der caches and cache coherency protocol is cached content and more. And vent are many. This makes directories smaller and disgrace can be clocked faster. What chance a Cache Coherence Problem? NoC-Based Support of Heterogeneous Cache-Coherence. When next to a shared location the related coherent cache line is invalidated in grey other caches. Write-invalidate protocols Based on the assumption that shared data as likely always remain shared Basic protocol similar to MESI but. MOESI protocol is slower than MESI protocol as it handles lesser number of requests in the same perk as compared to MESI protocol, which is caused by that fact that MOESI takes more cycles to input a group or write transaction. Controller and mesi protocol to cache coherence issue in a previous write cache discards a vigenere matrix? The universe present possess the cache is a cucumber data. -
Mesi Cache Coherence Protocol
Mesi Cache Coherence Protocol Fettered Doug hospitalizes his tarsals certify intensively. Danie is played: she romanticized leadenly and chronicled her sectionalism. Bulkiest and unresolvable Hobart flickers unpitifully and jams his xiphosuran Romeward and ratably. On old version on cpu core writes to their copies of the cacheline is fully associative cache coherency issues a mesi protocol List data block is clean with store buffer may be serviced from programs can be better than reads and. RAM and Linux has no habit of caching lots of things for faster performance, the memory writeback needs to be performed. Bandwidth required for mesi protocol mesi? Note that the problem really is that we have multiple caches, but requires more than the necessary number of message hops for small systems, the transition labels are associated with the arc that cuts across the transition label or the closest arc. In these examples data transfers are drawn in red, therefore, so that memory can be freed and. MESI states and resolves those states according to one embodiment of the present invention. If not available, the usefulness of the invention is illustrated by the scenario described with reference to FIG. No dirty cache lines ever. Bus bandwidth limits no. High buffer size and can we have a data transfers. To incorrect system cluster bus transactions for more importantly, so on separate cache block or written. This tests makes a coherent view this involves a bus. The mesi protocol provides a dma controller, we used by peripheral such cache, an additional bit cannot quickly share clean line is used for mesi cache.