A Simulation Framework for Evaluating Location Consistency Based Cache
Recommended publications
Directed Test Generation for Validation of Cache Coherence Protocols, by Yangdi Lyu, Xiaoke Qin, Mingsong Chen, and Prabhat Mishra
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems. Abstract: Computing systems utilize multi-core processors with complex cache coherence protocols to meet the increasing need for performance and energy improvement. It is a major challenge to verify the correctness of a cache coherence protocol, since the number of reachable states grows exponentially with the number of cores. In this paper, we propose an efficient test generation technique, which can be used to achieve full state and transition coverage in simulation-based verification for a wide variety of cache coherence protocols. Based on effective analysis of the state space structure, our method can generate more efficient test sequences (50% shorter) on-the-fly compared with tests generated by breadth-first search, reducing the number of required tests by half. Since all possible behaviors of the cache blocks in a system with n cores can be defined by a global finite state machine (FSM), the entire state space is the product of n cache-block-level FSMs. Although the FSM of each cache controller is easy to understand, the product FSM for modern cache coherence protocols usually has an obscure structure that is hard to analyze. Clearly, it is inefficient to use breadth-first search (BFS) on this product FSM to achieve full state or transition coverage, because a large number of transitions may be unnecessarily repeated if they are on the shortest path to many other states.
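The product-FSM argument in this abstract is easier to see on a toy model. The sketch below is an illustration, not the paper's protocol or tool: it assumes a hypothetical two-core, single-block, write-invalidate MSI model and uses BFS to enumerate the global product states and transitions a coverage-driven test generator would have to hit.

```python
from collections import deque

# Toy product-FSM sketch (assumed MSI model, two cores, one cache block).
CORES = 2
EVENTS = [(op, core) for op in ("load", "store") for core in range(CORES)]

def step(state, event):
    """Global next state after `core` performs `op` on the block."""
    op, core = event
    s = list(state)
    if op == "load":
        if s[core] == "I":                       # miss triggers a bus read
            s = ["S" if (i != core and x == "M") else x for i, x in enumerate(s)]
            s[core] = "S"
        # read hits in S or M need no bus transaction
    else:                                        # store: invalidate other copies
        s = ["I"] * CORES
        s[core] = "M"
    return tuple(s)

def bfs(start=("I",) * CORES):
    """Enumerate reachable global states and all outgoing transitions."""
    seen, frontier, transitions = {start}, deque([start]), []
    while frontier:
        cur = frontier.popleft()
        for event in EVENTS:
            nxt = step(cur, event)
            transitions.append((cur, event, nxt))
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen, transitions

states, transitions = bfs()
print(f"{len(states)} reachable states out of {3 ** CORES} naive product states")
print(f"{len(transitions)} transitions to cover")
```

Even this toy model shows the effect the abstract describes: a BFS-based generator replays the shortest event sequence from reset for every target state or transition, so shared prefixes are re-executed many times; the paper's on-the-fly generation aims to cut exactly that repetition.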
Cache Coherence in More Detail
ECE 259 / CPS 221 Advanced Computer Architecture II (Parallel Computer Architecture): Shared Memory MPs – Coherence & Snooping. Copyright 2006 Daniel J. Sorin, Duke University; slides derived from work by Sarita Adve (Illinois), Babak Falsafi (CMU), Mark Hill (Wisconsin), Alvy Lebeck (Duke), Steve Reinhardt (Michigan), and J. P. Singh (Princeton). What is (hardware) shared memory? • Take multiple microprocessors • Implement a memory system with a single global physical address space (usually); communication-assist hardware does the "magic" of cache coherence • Goal 1: minimize memory latency, using co-location and caches • Goal 2: maximize memory bandwidth, using parallelism and caches. Outline: • Motivation for cache-coherent shared memory • Snooping cache coherence (Chapter 5): basic systems, design tradeoffs • Implementing snooping systems (Chapter 6) • Advanced snooping systems. Some memory system options (figure showing four organizations): (a) shared cache, (b) bus-based shared memory, (c) dancehall, (d) distributed memory. Cache coherence in more detail: according to Webster's dictionary …
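The deck's motivation for coherent shared memory can be reduced to a few lines. The sketch below is an illustration, not material from the slides; the two-entry cache list and the load/store helpers are assumed names. Without coordination, a write by one core leaves a stale copy in the other core's cache, and a write-invalidate step restores a coherent view.

```python
# Minimal sketch of the cache coherence problem (assumed two-core setup).
memory = {"X": 0}
cache = [{}, {}]                      # one private cache per core

def load(core, addr):
    if addr not in cache[core]:       # miss: fill from memory
        cache[core][addr] = memory[addr]
    return cache[core][addr]

def store_no_coherence(core, addr, value):
    cache[core][addr] = value         # write-back style, no invalidation

load(0, "X"); load(1, "X")            # both cores now cache X == 0
store_no_coherence(0, "X", 42)
print(load(1, "X"))                   # prints 0: core 1 reads stale data

def store_invalidate(core, addr, value):
    for other in range(len(cache)):   # invalidate every other copy first
        if other != core:
            cache[other].pop(addr, None)
    cache[core][addr] = value
    memory[addr] = value              # write-through, for simplicity

store_invalidate(0, "X", 99)
print(load(1, "X"))                   # prints 99 after the forced re-fetch
```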
Tightly-Coupled and Fault-Tolerant Communication in Parallel Systems
Inaugural dissertation for the academic degree of Doctor of Natural Sciences at the Universität Mannheim, submitted by Dipl.-Inf. David Christoph Slogsnat of Heidelberg, Mannheim, 2008. Dean: Prof. Dr. Matthias Krause, Universität Mannheim. Referee: Prof. Dr. Ulrich Brüning, Universität Heidelberg. Co-referee: Prof. Dr. Reinhard Männer, Universität Heidelberg. Date of the oral examination: 4 August 2008. Abstract: The demand for processing power is increasing steadily. In the past, single-processor architectures clearly dominated the markets. As instruction-level parallelism is limited in most applications, significant performance gains can in the future only be achieved by exploiting parallelism at the higher levels of thread or process parallelism. As a consequence, modern "processors" incorporate multiple processor cores that form a single shared-memory multiprocessor. In such systems, high-performance devices like network interface controllers are connected to processors and memory like every other input/output device, over a hierarchy of peripheral interconnects. Thus, one target must be to couple coprocessors physically closer to main memory and to the processors of a computing node, removing the overhead of today's peripheral interconnect structures. Such a step, the direct connection of HyperTransport (HT) devices to Opteron processors, is presented in this thesis. This work also analyzes how communication from a device to processors can be optimized at the protocol level. As today's computing nodes are shared-memory systems, the cache coherence protocol is the central protocol for data exchange between processors and devices. Consequently, the analysis extends to classes of devices that are aware of the cache coherence protocol.
Verification of Hierarchical Cache Coherence Protocols for Future Processors
A dissertation submitted to the faculty of The University of Utah in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Computer Science, School of Computing, The University of Utah, May 2008, by Xiaofang Chen. Supervisory committee: Ganesh L. Gopalakrishnan (chair), Steven M. German, Ching-Tsun Chou, John B. Carter, and Rajeev Balasubramonian. Abstract: The advancement of technology promises to make chip multiprocessors, or multicores, ubiquitous. With multicores, there naturally exists a memory hierarchy across which caches have to be kept coherent. Currently, large (hierarchical) cache coherence protocols are verified at either the high (specification) level or the low (RTL implementation) level.
Study and Performance Analysis of Cache-Coherence Protocols in Shared-Memory Multiprocessors
Dissertation presented by Anthony Gégo for obtaining the Master's degree in Electrical Engineering; supervisor: Jean-Didier Legat; readers: Olivier Bonaventure, Ludovic Moreau, Guillaume Maudoux; academic year 2015-2016. Abstract: Cache coherence is one of the main challenges to tackle when designing a shared-memory multiprocessor system. Incoherence may happen when multiple actors in a system work on the same pieces of data without any coordination. This coordination is brought by the coherence protocol: a set of finite state machines that manage the caches and memory and keep the coherence invariants true. This master's thesis aims at introducing cache coherence in detail and providing a high-level performance analysis of some state-of-the-art protocols. First, shared-memory multiprocessors are briefly introduced. Then, a substantial bibliographical summary of cache coherence protocol design is proposed. Afterwards, gem5, an architectural simulator, and the way coherence protocols are designed in it are introduced. A simulation framework adapted to the problem is then designed to run on the simulator. Eventually, several coherence protocols and their associated memory hierarchies are simulated and analysed to highlight the performance impact of more refined protocol designs and their behaviour under qualitative and quantitative changes to the hierarchy.
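In the spirit of such a simulation framework, a drastically simplified driver might look like the sketch below. This is not gem5 or its API; the MSIBlock class, the random workload, and the statistics are illustrative assumptions only, but they show the kind of per-protocol counts (misses, invalidations) a framework like this compares across protocols.

```python
import random

class MSIBlock:
    """One cache block under an assumed write-invalidate MSI protocol."""
    def __init__(self, n_cores):
        self.state = ["I"] * n_cores
        self.stats = {"misses": 0, "invalidations": 0}

    def load(self, core):
        if self.state[core] == "I":
            self.stats["misses"] += 1
            self.state = ["S" if (i != core and s == "M") else s
                          for i, s in enumerate(self.state)]  # downgrade the M copy
            self.state[core] = "S"

    def store(self, core):
        if self.state[core] != "M":
            self.stats["misses"] += 1            # count S->M upgrades as misses, for simplicity
        for i, s in enumerate(self.state):
            if i != core and s != "I":
                self.stats["invalidations"] += 1
                self.state[i] = "I"
        self.state[core] = "M"

    def check_swmr(self):
        # single-writer / multiple-reader coherence invariant
        assert self.state.count("M") <= 1
        if "M" in self.state:
            assert all(s in ("M", "I") for s in self.state)

def run(n_cores=4, n_ops=10_000, write_ratio=0.3, seed=0):
    random.seed(seed)
    block = MSIBlock(n_cores)
    for _ in range(n_ops):
        core = random.randrange(n_cores)
        block.store(core) if random.random() < write_ratio else block.load(core)
        block.check_swmr()
    return block.stats

print(run())
```

Swapping MSIBlock for a MESI or MOESI model and replaying the same trace gives the kind of protocol comparison the thesis carries out at much higher fidelity inside gem5.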
Models for Addressing Cache Coherence Problems 9 INTERNATIONAL JOURNAL of NATURAL and APPLIED SCIENCES (IJNAS), VOL
International Journal of Natural and Applied Sciences (IJNAS), Vol. 12, Special Edition (2019), pp. 9-14. Models for addressing cache coherence problems in multilevel caches: a review paper, by I. I. Arikpo and A. T. Okoro. Abstract: The computational tasks of the 21st century require systems with high processing speed. This can be achieved by having, among other system specifications, a high-capacity microprocessor and a large cache in a single-processor system. The cache prevents round-trips to RAM for every memory access. Nevertheless, dependence on a single cache for both data and instructions limits the performance of the system regardless of the capacity of the processor and the cache size. Multilevel cache is a technological advancement that has made it possible for a multi-core processor system to have more than one cache. The introduction of multilevel cache technology no doubt speeds up system operations and enhances performance, but it is not without limitations: cache coherence issues are common problems inherent in this design, and different protocols have been designed to ameliorate them. The aim of this paper is to discuss the efficiency of these protocols in addressing cache coherence problems in multilevel cache technologies. Introduction: Computer hardware consists of many components, including the memory unit, which is primarily used for information storage. With the ever-increasing rate of processor speed, an equivalent increase in cache memory is necessary to ensure that data is timely available …
A Primer on Memory Consistency and Cache Coherence
Synthesis Lectures on Computer Architecture, Morgan & Claypool Publishers; series editor: Mark D. Hill, University of Wisconsin. A Primer on Memory Consistency and Cache Coherence, by Daniel J. Sorin (Duke University), Mark D. Hill and David A. Wood (University of Wisconsin, Madison). Many modern computer systems and most multicore chips (chip multiprocessors) support shared memory in hardware. In a shared memory system, each of the processor cores may read and write to a single shared address space. For a shared memory machine, the memory consistency model defines the architecturally visible behavior of its memory system. Consistency definitions provide rules about loads and stores (or memory reads and writes) and how they act upon memory. As part of supporting a memory consistency model, many machines also provide cache coherence protocols that ensure that multiple cached copies of data are kept up-to-date. The goal of this primer is to provide readers with a basic understanding of consistency and coherence. This understanding includes both the issues that must be solved as well as a variety of solutions. We present both high-level concepts as well as specific, concrete examples from real-world systems.
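The idea of a consistency model as "rules about loads and stores" is commonly illustrated with the store-buffering litmus test. The sketch below is an illustration, not an excerpt from the book; the thread and variable names are assumed. It enumerates every interleaving allowed by sequential consistency for two two-instruction threads and shows that the outcome r1 = r2 = 0 never appears, even though machines with weaker models (and store buffers) can produce it.

```python
from itertools import permutations

# Store-buffering litmus test: thread t0 runs {x=1; r1=y}, thread t1 runs {y=1; r2=x}.
# Under sequential consistency every execution is an interleaving of the four
# operations that preserves each thread's program order.
ops = [("t0", "store", "x"), ("t0", "load", "y"),
       ("t1", "store", "y"), ("t1", "load", "x")]

def program_order_ok(schedule):
    """Each thread's operations must appear in their original order."""
    for t in ("t0", "t1"):
        if [o for o in schedule if o[0] == t] != [o for o in ops if o[0] == t]:
            return False
    return True

outcomes = set()
for schedule in permutations(ops):
    if not program_order_ok(schedule):
        continue
    mem, regs = {"x": 0, "y": 0}, {}
    for thread, kind, addr in schedule:
        if kind == "store":
            mem[addr] = 1
        else:
            regs[thread] = mem[addr]
    outcomes.add((regs["t0"], regs["t1"]))          # (r1, r2)

print(sorted(outcomes))   # [(0, 1), (1, 0), (1, 1)] -- (0, 0) is not SC-reachable
```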
Cache Coherence Protocols
Computer Architecture (EECC551): Cache Coherence Protocols. Presentation by Sundararaman Nakshatra. Overview: • Multiple processor systems (systems with two or more processors working simultaneously) and their advantages • Multiple processor hardware types based on memory (distributed, shared, and distributed shared memory) • The need for a cache, its functions and advantages • The problem when using caches in a multiprocessor system • The cache coherence problem (assuming write-back caches) • Cache coherence solutions • Bus snooping cache coherence protocols • Write-invalidate bus snooping protocols, for write-through and for write-back caches, and problems with write invalidate • Write update or write invalidate? A comparison • Some other cache coherence protocols • Enhancements in cache coherence protocols • References. Multiple processor system: a computer system which has two or more processors working simultaneously and sharing the same hard disk, memory and other devices. Advantages: • Reduced cost: multiple processors share the same resources (such as the power supply and motherboard). • Increased reliability: the failure of one processor does not affect the others, though it will slow the machine down, provided there is no master/slave arrangement. • Increased throughput: an increase in the number of processors completes the work in less time. Multiple processor hardware: bus-based multiprocessors. Why do we need a cache? Cache memory: "a computer memory with very short access time used for storage of frequently used instructions or data" (webster.com).
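To make the write-invalidate behaviour listed in this overview concrete, here is a small sketch of MESI transitions for a single cache line. It is a simplified illustration, not material from the presentation: the state names are standard, but the request and bus-transaction labels (BusRd, BusRdX, BusUpgr) are just conventional names, and the snoop side is summarised rather than modelled in full.

```python
# MESI states: M = Modified, E = Exclusive, S = Shared, I = Invalid.

PROCESSOR_TRANSITIONS = {
    # (current state, processor request): (next state, bus transaction issued)
    ("I", "read"):  ("S", "BusRd"),      # or E if no other cache holds a copy
    ("I", "write"): ("M", "BusRdX"),     # read-for-ownership invalidates other copies
    ("S", "read"):  ("S", None),         # read hit
    ("S", "write"): ("M", "BusUpgr"),    # must invalidate the other sharers first
    ("E", "read"):  ("E", None),
    ("E", "write"): ("M", None),         # silent upgrade: no other copies exist
    ("M", "read"):  ("M", None),
    ("M", "write"): ("M", None),
}

SNOOP_TRANSITIONS = {
    # (current state, observed bus transaction): next state
    ("M", "BusRd"):   "S",   # flush the dirty line, keep a shared copy
    ("M", "BusRdX"):  "I",   # flush the dirty line, then invalidate
    ("E", "BusRd"):   "S",
    ("E", "BusRdX"):  "I",
    ("S", "BusRd"):   "S",
    ("S", "BusRdX"):  "I",
    ("S", "BusUpgr"): "I",
}

def cpu_access(state, request):
    """Look up the processor-side transition for one cache line."""
    return PROCESSOR_TRANSITIONS[(state, request)]

# Example: a core holding the line in S writes to it.
print(cpu_access("S", "write"))   # ('M', 'BusUpgr')
```

The write-invalidate idea shows up in the ("S", "write") entry: before the writer can move to Modified, its BusUpgr transaction forces every other sharer into Invalid.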
Cache Coherence and the MESI Protocol
What is a cache coherence problem? Multiple caches may hold copies of the same data, and a write to one copy that is not propagated to, or does not invalidate, the others leaves the system with inconsistent values. Write-invalidate protocols are based on the assumption that shared data is likely to remain shared; the basic protocol is similar to MESI. When writing to a shared location, the corresponding cache line is invalidated in all other caches. A standard teaching exercise (compare "Teaching the cache memory coherence with the MESI protocol"): two processors P1 and P2 and a uniform memory are connected to a shared bus which implements the MESI cache coherency protocol; fill in the table with the states of the cache lines at each step. The MOESI protocol is described as slower than the MESI protocol because it handles fewer requests in the same time, taking more cycles to complete a read or write transaction. Keeping directories smaller also allows them to be clocked faster.