Multicore Semantics and Programming
Mark Batty


Multicore Semantics and Programming
Mark Batty
University of Cambridge
January 2013 – p.1

These Lectures
Semantics of concurrency in multiprocessors and programming languages.
Establish a solid basis for thinking about relaxed-memory executions, linking to usage, microarchitecture, experiment, and semantics.
x86, POWER/ARM, C/C++11
Today: x86 – p.2

Inventing a Usable Abstraction
Have to be:
- Unambiguous
- Sound w.r.t. experimentally observable behaviour
- Easy to understand
- Consistent with what we know of vendors' intentions
- Consistent with expert-programmer reasoning – p.3

Inventing a Usable Abstraction
Key facts:
- Store buffering (with forwarding) is observable
- IRIW is not observable, and is forbidden by the recent docs
- Various other reorderings are not observable and are forbidden
These suggest that x86 is, in practice, like SPARC TSO. – p.4

x86-TSO Abstract Machine
[Figure: each hardware thread has its own FIFO write buffer; the buffers drain into a single shared memory, guarded by a global lock.] – p.5

SB, on x86
Thread 0                          Thread 1
MOV [x] ← 1    (write x=1)        MOV [y] ← 1    (write y=1)
MOV EAX ← [y]  (read y)           MOV EBX ← [x]  (read x)

Initially x=0 and y=0 in shared memory and both write buffers are empty. The slides animate one execution of the abstract machine, one transition per step:
1. t0:W x=1: thread 0's write is enqueued in its write buffer as (x,1)
2. t1:W y=1: thread 1's write is enqueued in its write buffer as (y,1)
3. t0:R y=0: thread 0 reads y; (y,1) is still in thread 1's buffer, so it reads 0 from shared memory
4. t1:R x=0: thread 1 reads x; (x,1) is still in thread 0's buffer, so it reads 0 from shared memory
5. t0:τ x=1: thread 0's buffered write is dequeued into shared memory, giving x=1
6. t1:τ y=1: thread 1's buffered write is dequeued into shared memory, giving y=1
Final state: x=1 and y=1 in shared memory, with EAX=0 and EBX=0, an outcome no interleaved (SC) execution allows. – p.6
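Before turning to the formal definitions, here is a small executable rendering of the machine just traced. It is our own illustration, not part of the lecture and not the full x86-TSO model: it hard-codes the two threads and two locations of the SB test, and models only the pieces the slide uses, namely a per-thread FIFO write buffer, store forwarding on reads, and an internal (τ) step that commits the oldest buffered write to shared memory.

```ocaml
(* A minimal sketch of the write-buffer machine above, assuming only the two
   threads and two locations of the SB test.  All names are ours. *)

type addr = X | Y

type state = {
  mutable mem : (addr * int) list;     (* shared memory, as an association list *)
  buf : (addr * int) list array;       (* per-thread FIFO write buffer, oldest write first *)
}

(* a write is enqueued at the tail of thread t's buffer, not sent to memory *)
let write s t a v = s.buf.(t) <- s.buf.(t) @ [ (a, v) ]

(* a read by thread t: the newest buffered write to a wins (store forwarding),
   otherwise the value comes from shared memory *)
let read s t a =
  match List.rev (List.filter (fun (a', _) -> a' = a) s.buf.(t)) with
  | (_, v) :: _ -> v
  | [] -> List.assoc a s.mem

(* internal tau step: dequeue thread t's oldest buffered write into memory *)
let flush s t =
  match s.buf.(t) with
  | (a, v) :: rest ->
      s.buf.(t) <- rest;
      s.mem <- (a, v) :: List.remove_assoc a s.mem
  | [] -> ()

let () =
  let s = { mem = [ (X, 0); (Y, 0) ]; buf = [| []; [] |] } in
  (* the interleaving traced on the slide: both writes buffered, both reads,
     then both buffers drained *)
  write s 0 X 1;                       (* t0: MOV [x] <- 1 *)
  write s 1 Y 1;                       (* t1: MOV [y] <- 1 *)
  let eax = read s 0 Y in              (* t0: MOV EAX <- [y], returns 0 *)
  let ebx = read s 1 X in              (* t1: MOV EBX <- [x], returns 0 *)
  flush s 0;
  flush s 1;
  Printf.printf "EAX=%d EBX=%d  x=%d y=%d\n"
    eax ebx (List.assoc X s.mem) (List.assoc Y s.mem)
```

Running it prints EAX=0 EBX=0 x=1 y=1, the final state of the trace above.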
How to formally define this?
Separate the instruction semantics and the memory model.
Define the memory model in two (provably equivalent) styles:
- an abstract machine (or operational model)
- an axiomatic model
Put the instruction semantics and abstract machine in parallel, exchanging read and write messages (and lock/unlock messages). – p.7

Let's Pretend...
...that we live in an SC world.
[Figure: threads 1..n issuing writes (W) and reads (R) directly against a single shared memory.]
Multiple threads acting on a sequentially consistent (SC) shared memory. – p.8

A Tiny Language
location, x, m    address
integer, n        integer
thread id, t      thread id

expression, e ::=
    n             integer literal
  | x             read from address x
  | x = e         write value of e to address x
  | e; e          sequential composition
  | e + e         plus

process, p ::=
    t:e           thread
  | p | p         parallel composition – p.9

A Tiny Language
That was just the syntax — how can we be precise about the permitted behaviours of programs? – p.10

Defining an SC Semantics: expressions
Labelled transitions e --l--> e' are read "e does l to become e'", where a label l is a read R x=n, a write W x=n, or an internal step τ.

READ             x --R x=n--> n
WRITE            x = n --W x=n--> n
WRITE CONTEXT    if e --l--> e'    then  x = e --l--> x = e'
SEQ              n; e --τ--> e
SEQ CONTEXT      if e1 --l--> e1'  then  e1; e2 --l--> e1'; e2
PLUS             if n = n1 + n2    then  n1 + n2 --τ--> n
PLUS CONTEXT 1   if e1 --l--> e1'  then  e1 + e2 --l--> e1' + e2
PLUS CONTEXT 2   if e2 --l--> e2'  then  n1 + e2 --l--> n1 + e2' – p.11

Example: SC Expression Trace
(x = y); x  --R y=7-->  (x = 7); x  --W x=7-->  7; x  --τ-->  x  --R x=9-->  9

The first step is derived with READ, WRITE CONTEXT and SEQ CONTEXT:
  READ            y --R y=7--> 7
  WRITE CONTEXT   x = y --R y=7--> x = 7
  SEQ CONTEXT     (x = y); x --R y=7--> (x = 7); x
The second step uses WRITE and SEQ CONTEXT: (x = 7); x --W x=7--> 7; x. The third is SEQ: 7; x --τ--> x, and the last is READ: x --R x=9--> 9. (The expression rules alone do not constrain the value a read returns; the memory will, below.) – p.12

Defining an SC Semantics: lifting to processes
Labelled transitions p --t:l--> p' are read "p does t:l to become p'".

THREAD              if e --l--> e'      then  t:e --t:l--> t:e'
PAR CONTEXT LEFT    if p1 --t:l--> p1'  then  p1 | p2 --t:l--> p1' | p2
PAR CONTEXT RIGHT   if p2 --t:l--> p2'  then  p1 | p2 --t:l--> p1 | p2'
This gives free interleaving of the threads. – p.13

Defining an SC Semantics: SC memory
Take an SC memory M to be a function from addresses to integers. Define its behaviour as a labelled transition system (LTS): the least set of (memory, label, memory) triples M --t:l--> M', read "M does t:l to become M'", satisfying these rules.

MREAD    if M(x) = n  then  M --t:R x=n--> M
MWRITE   M --t:W x=n--> M ⊕ (x ↦ n)     (M updated so that x maps to n) – p.14

Defining an SC Semantics: whole-system states
A system state ⟨p, M⟩ is a pair of a process and a memory, with transitions s --t:l--> s' read "s does t:l to become s'".

SSYNC    if p --t:l--> p' and M --t:l--> M'  then  ⟨p, M⟩ --t:l--> ⟨p', M'⟩
TAU      if p --t:τ--> p'                    then  ⟨p, M⟩ --t:τ--> ⟨p', M⟩
SSYNC synchronises the process and the memory on reads and writes; TAU lets threads do internal transitions on their own. – p.15

Example: SC Interleaving
All threads can read and write the shared memory. Threads execute asynchronously – the semantics allows any interleaving of the thread transitions. Here there are two:

⟨t1:x = 1 | t2:x = 2, {x ↦ 0}⟩
  t1:W x=1 first:  ⟨t1:1 | t2:x = 2, {x ↦ 1}⟩  then t2:W x=2 gives  ⟨t1:1 | t2:2, {x ↦ 2}⟩
  t2:W x=2 first:  ⟨t1:x = 1 | t2:2, {x ↦ 2}⟩  then t1:W x=1 gives  ⟨t1:1 | t2:2, {x ↦ 1}⟩

But each interleaving has a linear order of reads and writes to the memory. – p.16
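The SC semantics above translates almost directly into code. The following OCaml sketch is our own rendering with our own names; as a simplification it looks up the memory at the point a thread reads, rather than synchronising a read label against a separate memory transition as SSYNC does, which gives the same behaviour for SC. step_expr implements the expression rules, step_mem the MREAD/MWRITE rules, and step_sys returns one candidate transition per runnable thread, which is exactly the free interleaving of the process rules.

```ocaml
(* Sketch of the tiny language and its SC semantics; names are ours. *)

type label =
  | Read of string * int              (* R x=n *)
  | Write of string * int             (* W x=n *)
  | Tau                               (* internal step *)

type expr =
  | Int of int                        (* n      *)
  | Var of string                     (* x      *)
  | Assign of string * expr           (* x = e  *)
  | Seq of expr * expr                (* e; e   *)
  | Plus of expr * expr               (* e + e  *)

(* One step of an expression (they reduce deterministically, left to right).
   The memory is consulted here so that a read gets its value. *)
let rec step_expr mem e =
  match e with
  | Int _ -> None
  | Var x -> let n = List.assoc x mem in Some (Read (x, n), Int n)        (* READ *)
  | Assign (x, Int n) -> Some (Write (x, n), Int n)                       (* WRITE *)
  | Assign (x, e1) ->                                                     (* WRITE CONTEXT *)
      (match step_expr mem e1 with
       | Some (l, e1') -> Some (l, Assign (x, e1'))
       | None -> None)
  | Seq (Int _, e2) -> Some (Tau, e2)                                     (* SEQ *)
  | Seq (e1, e2) ->                                                       (* SEQ CONTEXT *)
      (match step_expr mem e1 with
       | Some (l, e1') -> Some (l, Seq (e1', e2))
       | None -> None)
  | Plus (Int n1, Int n2) -> Some (Tau, Int (n1 + n2))                    (* PLUS *)
  | Plus (Int n1, e2) ->                                                  (* PLUS CONTEXT 2 *)
      (match step_expr mem e2 with
       | Some (l, e2') -> Some (l, Plus (Int n1, e2'))
       | None -> None)
  | Plus (e1, e2) ->                                                      (* PLUS CONTEXT 1 *)
      (match step_expr mem e1 with
       | Some (l, e1') -> Some (l, Plus (e1', e2))
       | None -> None)

(* MREAD leaves the memory unchanged; MWRITE updates it (M ⊕ (x ↦ n)). *)
let step_mem mem = function
  | Read _ | Tau -> mem
  | Write (x, n) -> (x, n) :: List.remove_assoc x mem

(* A system state <p, M>: a list of named threads and a memory. *)
type system = (string * expr) list * (string * int) list

(* All enabled transitions of a system state: one per thread that can step. *)
let step_sys ((threads, mem) : system) : (string * label * system) list =
  List.filter_map
    (fun (t, e) ->
       match step_expr mem e with
       | Some (l, e') ->
           let threads' =
             List.map (fun (t', e0) -> if t' = t then (t', e') else (t', e0)) threads
           in
           Some (t, l, (threads', step_mem mem l))
       | None -> None)
    threads

let () =
  (* t1:x = 1 | t2:x = 2 with x initially 0: two enabled first transitions *)
  let sys = ([ ("t1", Assign ("x", Int 1)); ("t2", Assign ("x", Int 2)) ], [ ("x", 0) ]) in
  List.iter
    (fun (t, _, (_, mem')) ->
       Printf.printf "first step by %s leaves x = %d\n" t (List.assoc "x" mem'))
    (step_sys sys)
```

Repeatedly choosing among the transitions returned by step_sys generates exactly the interleavings pictured on the next slide.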
Combinatorial Explosion
The behaviour of  t1:x = x + 1 | t2:x = x + 7  for the initial store {x ↦ 0}:

[Figure: the full transition diagram of this system. Each assignment contributes a read (r), an internal addition (+) and a write (w), and the diagram interleaves them in every order; the reachable final stores are {x ↦ 1}, {x ↦ 7} and {x ↦ 8}.]

NB: the labels +, w and r in this picture are just informal hints as to how those transitions were derived. – p.17

Morals
For free interleaving, the number of system states scales as n^t, where n is the number of states per thread and t is the number of threads.
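To see the counting concretely, here is a small self-contained sketch (ours, not the lecture's) that enumerates every interleaving of the example above at the granularity that matters for the final value of x: each assignment is split into a read of x followed by a write of the value read plus the thread's constant.

```ocaml
(* Enumerate all interleavings of t1: x = x + 1 and t2: x = x + 7 from x = 0
   and collect the reachable final values of x.  Names are ours. *)

(* what a thread still has to do *)
type tstate =
  | NeedRead                   (* has not yet read x *)
  | NeedWrite of int           (* has read x, remembers the value it saw *)
  | Finished

(* run x ts: from store x and thread states ts (each paired with its constant),
   return the final value of x for every interleaving, with duplicates *)
let rec run x ts =
  if List.for_all (fun (_, st) -> st = Finished) ts then [ x ]
  else
    List.concat
      (List.mapi
         (fun i (k, st) ->
            let set st' = List.mapi (fun j t -> if i = j then (k, st') else t) ts in
            match st with
            | NeedRead -> run x (set (NeedWrite x))          (* read x      *)
            | NeedWrite v -> run (v + k) (set Finished)      (* write v + k *)
            | Finished -> [])
         ts)

let () =
  let finals = run 0 [ (1, NeedRead); (7, NeedRead) ] in
  let distinct = List.sort_uniq compare finals in
  Printf.printf "%d interleavings, final x in {%s}\n"
    (List.length finals)
    (String.concat ", " (List.map string_of_int distinct))
```

It prints 6 interleavings with final x in {1, 7, 8}; adding threads, or accesses per thread, makes the number of interleavings, and with it the number of system states, grow as the Morals slide describes.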