Simplified Vector-Thread Architectures for Flexible and Efficient Data-Parallel Accelerators



Simplified Vector-Thread Architectures for Flexible and Efficient Data-Parallel Accelerators

by Christopher Francis Batten

B.S. in Electrical Engineering, University of Virginia, May 1999
M.Phil. in Engineering, University of Cambridge, August 2000

Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Electrical Engineering and Computer Science at the Massachusetts Institute of Technology, February 2010.

© Massachusetts Institute of Technology 2010. All rights reserved.

Author: Department of Electrical Engineering and Computer Science, January 29, 2010
Certified by: Krste Asanović, Associate Professor, University of California, Berkeley, Thesis Supervisor
Accepted by: Terry P. Orlando, Chairman, Department Committee on Graduate Students, Electrical Engineering and Computer Science

Abstract

This thesis explores a new approach to building data-parallel accelerators that is based on simplifying the instruction set, microarchitecture, and programming methodology for a vector-thread architecture. The thesis begins by categorizing regular and irregular data-level parallelism (DLP), before presenting several architectural design patterns for data-parallel accelerators, including the multiple-instruction multiple-data (MIMD) pattern, the vector single-instruction multiple-data (vector-SIMD) pattern, the single-instruction multiple-thread (SIMT) pattern, and the vector-thread (VT) pattern. Our recently proposed VT pattern includes many control threads that each manage their own array of microthreads. The control thread uses vector memory instructions to efficiently move data and vector fetch instructions to broadcast scalar instructions to all microthreads. These vector mechanisms are complemented by the ability of each microthread to direct its own control flow.

In this thesis, I introduce various techniques for building simplified instances of the VT pattern. I propose unifying the VT control-thread and microthread scalar instruction sets to simplify the microarchitecture and programming methodology. I propose a new single-lane VT microarchitecture based on minimal changes to the vector-SIMD pattern. Single-lane cores are simpler to implement than multi-lane cores and can achieve similar energy efficiency. This new microarchitecture uses control processor embedding to mitigate the area overhead of single-lane cores, and uses vector fragments to handle both regular and irregular DLP more efficiently than previous VT architectures. I also propose an explicitly data-parallel VT programming methodology that is based on a slightly modified scalar compiler. This methodology is easier to use than assembly programming, yet simpler to implement than an automatically vectorizing compiler.

To evaluate these ideas, we have begun implementing the Maven data-parallel accelerator. This thesis compares a simplified Maven VT core to MIMD, vector-SIMD, and SIMT cores.
We have implemented these cores with an ASIC methodology, and I use the resulting gate-level models to evaluate the area, performance, and energy of several compiled microbenchmarks. This work is the first detailed quantitative comparison of the VT pattern to other patterns. My results suggest that future data-parallel accelerators based on simplified VT architectures should be able to combine the energy efficiency of vector-SIMD accelerators with the flexibility of MIMD accelerators.

Thesis Supervisor: Krste Asanović
Title: Associate Professor, University of California, Berkeley

Acknowledgements

I would like to first thank my research advisor, Krste Asanović, who has been a true mentor, passionate teacher, inspirational leader, valued colleague, and professional role model throughout my time at MIT and U.C. Berkeley. This thesis would not have been possible without Krste's constant stream of ideas, encyclopedic knowledge of technical minutiae, and patient yet unwavering support. I would also like to thank the rest of my thesis committee, Christopher Terman and Arvind, for their helpful feedback as I worked to crystallize the key themes of my research.

Thanks to the other members of the Scale team at MIT for helping to create the vector-thread architectural design pattern. Particular thanks to Ronny Krashinsky, who led the Scale team and taught me more than he will probably ever know. Working with Ronny on the Scale project was, without doubt, the highlight of my time in graduate school. Thanks to Mark Hampton for having the courage to build a compiler for a brand new architecture. Thanks to the many others who made both large and small contributions to the Scale project, including Steve Gerding, Jared Casper, Albert Ma, Asif Khan, Jaime Quinonez, Brian Pharris, Jeff Cohen, and Ken Barr.

Thanks to the other members of the Maven team at U.C. Berkeley for accepting me like a real Berkeley student and helping to rethink vector threading. Particular thanks to Yunsup Lee for his tremendous help in implementing the Maven architecture. Thanks to both Yunsup and Rimas Avizienis for working incredibly hard on the Maven RTL and helping to generate such detailed results. Thanks to the rest of the Maven team, including Chris Celio, Alex Bishara, and Richard Xia. This thesis would still be an idea on a piece of green graph paper without all of their hard work, creativity, and dedication. Thanks also to Hidetaka Aoki for so many wonderful discussions about vector architectures. Section 1.4 discusses in more detail how the members of the Maven team contributed to this thesis.

Thanks to the members of the nanophotonic systems team at MIT and U.C. Berkeley for allowing me to explore a whole new area of research that had nothing to do with vector processors. Thanks to Vladimir Stojanović for his willingness to answer even the simplest questions about nanophotonic technology, and for being a great professional role model. Thanks to Ajay Joshi, Scott Beamer, and Yong-Jin Kwon for working with me to figure out what to do with this interesting new technology.

Thanks to my fellow graduate students at both MIT and Berkeley for making every extended intellectual debate, every late-night hacking session, and every conference trip such a wonderful experience.
Particular thanks to Dave Wentzlaff, Ken Barr, Ronny Krashinsky, Edwin Olson, Mike Zhang, Jessica Tseng, Albert Ma, Mark Hampton, Seongmoo Heo, Steve Gerding, Jae Lee, Nirav Dave, Michael Pellauer, Michal Karczmarek, Bill Thies, Michael Taylor, Niko Loening, and David Liben-Nowell. Thanks to Rose Liu and Heidi Pan for supporting me as we journeyed from one coast to the other. Thanks to Mary McDavitt for being an amazing help throughout my time at MIT and even while I was in California.

Thanks to my parents, Arthur and Ann Marie, for always supporting me from my very first experiment with Paramecium to my very last experiment with vector threading. Thanks to my brother, Mark, for helping me to realize that life is about working hard but also playing hard. Thanks to my wife, Laura, for her unending patience, support, and love through all my ups and downs. Finally, thanks to my daughter, Fiona, for helping to put everything into perspective.

Contents

1 Introduction
  1.1 Transition to Multicore & Manycore General-Purpose Processors
  1.2 Emergence of Programmable Data-Parallel Accelerators
  1.3 Leveraging Vector-Threading in Data-Parallel Accelerators
  1.4 Collaboration, Previous Publications, and Funding
2 Architectural Design Patterns for Data-Parallel Accelerators
  2.1 Regular and Irregular Data-Level Parallelism
  2.2 Overall Structure of Data-Parallel Accelerators
  2.3 MIMD Architectural Design Pattern
  2.4 Vector-SIMD Architectural Design Pattern
  2.5 Subword-SIMD Architectural Design Pattern
  2.6 SIMT Architectural Design Pattern
  2.7 VT Architectural Design Pattern
  2.8 Comparison of Architectural Design Patterns
  2.9 Example Data-Parallel Accelerators
3 Maven: A Flexible and Efficient Data-Parallel Accelerator
  3.1 Unified VT Instruction Set Architecture
  3.2 Single-Lane VT Microarchitecture Based on Vector-SIMD Pattern
  3.3 Explicitly Data-Parallel VT Programming Methodology
4 Maven Instruction Set Architecture
  4.1 Instruction Set Overview
  4.2 Challenges in a Unified VT Instruction Set
  4.3 Vector Configuration Instructions
  4.4 Vector Memory Instructions
  4.5 Calling Conventions
  4.6 Extensions to Support Other Architectural Design Patterns
  4.7 Future Research Directions
  4.8 Related Work
5 Maven Microarchitecture
  5.1 Microarchitecture Overview
  5.2 Control Processor Embedding
  5.3 Vector Fragment Merging
  5.4 Vector Fragment Interleaving
  5.5 Vector Fragment Compression
  5.6 Leveraging Maven VT Cores in a Full Data-Parallel Accelerator
  5.7 Extensions to Support Other Architectural Design Patterns
  5.8 Future Research Directions
  5.9 Related Work
6 Maven Programming Methodology
  6.1 Programming Methodology Overview
  6.2 VT Compiler
  6.3 VT Application
Recommended publications
  • Hardware Realization of an FPGA Processor – Operating System Call Offload and Experiences
    Hindborg, A. E., Schleuniger, P., Jensen, N. B., & Karlsson, S. (2014). Hardware Realization of an FPGA Processor – Operating System Call Offload and Experiences. In A. Morawiec & J. Hinderscheit (Eds.), Proceedings of the 2014 Conference on Design and Architectures for Signal and Image Processing (DASIP). IEEE.

    Andreas Erik Hindborg, Pascal Schleuniger, Nicklas Bo Jensen, Sven Karlsson — DTU Compute, Technical University of Denmark.

    Abstract — Field-programmable gate arrays, FPGAs, are attractive implementation platforms for low-volume signal and [...] speedup of up to 64% over a Xilinx MicroBlaze based baseline system.
  • Performance and Energy Efficient Network-on-Chip Architectures
    Linköping Studies in Science and Technology, Dissertation No. 1130. Performance and Energy Efficient Network-on-Chip Architectures. Sriram R. Vangal, Electronic Devices, Department of Electrical Engineering, Linköping University, SE-581 83 Linköping, Sweden, 2007. ISBN 978-91-85895-91-5, ISSN 0345-7524.

    Cover image: a chip microphotograph of the industry's first programmable 80-tile teraFLOPS processor, implemented in a 65-nm eight-metal CMOS technology.

    Abstract — The scaling of MOS transistors into the nanometer regime opens the possibility for creating large Network-on-Chip (NoC) architectures containing hundreds of integrated processing elements with on-chip communication. NoC architectures, with structured on-chip networks, are emerging as a scalable and modular solution to global communications within large systems-on-chip. NoCs mitigate the emerging wire-delay problem and address the need for substantial interconnect bandwidth by replacing today's shared buses with packet-switched router networks. With on-chip communication consuming a significant portion of the chip power and area budgets, there is a compelling need for compact, low-power routers. While applications dictate the choice of the compute core, the advent of multimedia applications, such as three-dimensional (3D) graphics and signal processing, places stronger demands for self-contained, low-latency floating-point processors with increased throughput.
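    As a side note on how such packet-switched on-chip networks steer traffic, the sketch below shows dimension-ordered (XY) routing on a 2D mesh. This is a generic textbook technique, not a detail taken from the dissertation; the port names and mesh coordinates are assumptions for illustration.

```c
#include <stdio.h>

/* Generic illustration of dimension-ordered (XY) routing, a common choice
 * for packet-switched 2D-mesh NoCs. The mesh topology and port names are
 * assumptions for this sketch, not the dissertation's router design. */
typedef enum { PORT_LOCAL, PORT_EAST, PORT_WEST, PORT_NORTH, PORT_SOUTH } Port;

/* Route first along X until the column matches, then along Y. */
Port xy_route(int cur_x, int cur_y, int dst_x, int dst_y) {
    if (dst_x > cur_x) return PORT_EAST;
    if (dst_x < cur_x) return PORT_WEST;
    if (dst_y > cur_y) return PORT_NORTH;
    if (dst_y < cur_y) return PORT_SOUTH;
    return PORT_LOCAL;   /* packet has arrived at its destination tile */
}

int main(void) {
    /* Hop-by-hop path from tile (0,0) to tile (2,1) in a small mesh. */
    int x = 0, y = 0;
    Port p;
    while ((p = xy_route(x, y, 2, 1)) != PORT_LOCAL) {
        if (p == PORT_EAST) x++; else if (p == PORT_WEST) x--;
        else if (p == PORT_NORTH) y++; else y--;
        printf("hop to (%d,%d)\n", x, y);
    }
    return 0;
}
```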
  • Computer Architecture: Dataflow (Part I)
    Computer Architecture: Dataflow (Part I). Prof. Onur Mutlu, Carnegie Mellon University.

    A note on this lecture: these slides are from 18-742 Fall 2012, Parallel Computer Architecture, Lecture 22: Dataflow I. Video: http://www.youtube.com/watch?v=D2uue7izU2c&list=PL5PHm2jkkXmh4cDkC3s1VBB7-njlgiG5d&index=19

    Some required dataflow readings. Dataflow at the ISA level: Dennis and Misunas, "A Preliminary Architecture for a Basic Data Flow Processor," ISCA 1974; Arvind and Nikhil, "Executing a Program on the MIT Tagged-Token Dataflow Architecture," IEEE TC 1990. Restricted dataflow: Patt et al., "HPS, a New Microarchitecture: Rationale and Introduction," MICRO 1985; Patt et al., "Critical Issues Regarding HPS, a High Performance Microarchitecture," MICRO 1985.

    Other related recommended readings. Dataflow: Gurd et al., "The Manchester Prototype Dataflow Computer," CACM 1985; Lee and Hurson, "Dataflow Architectures and Multithreading," IEEE Computer 1994. Restricted dataflow: Sankaralingam et al., "Exploiting ILP, TLP and DLP with the Polymorphous TRIPS Architecture," ISCA 2003; Burger et al., "Scaling to the End of Silicon with EDGE Architectures," IEEE Computer 2004.

    Today: start dataflow.

    Data flow readings, Data Flow (I): Dennis and Misunas, "A Preliminary Architecture for a Basic Data Flow Processor," ISCA 1974; Treleaven et al., "Data-Driven and Demand-Driven Computer Architecture," ACM Computing Surveys 1982; Veen, "Dataflow Machine Architecture," ACM Computing Surveys 1986; Gurd et al., "The Manchester Prototype Dataflow Computer," CACM 1985; Arvind and Nikhil, "Executing a Program on the MIT Tagged-Token Dataflow Architecture," IEEE TC 1990.
  • ECE 590: Digital Systems Design Using Hardware Description Language (VHDL) — Systolic Implementation of Faddeev's Algorithm in VHDL
    Project Report — ECE 590: Digital Systems Design using Hardware Description Language (VHDL). Systolic Implementation of Faddeev's Algorithm in VHDL. Final Project. Tejas Tapsale. PSU ID: 973524088.

    Introduction: In this project we implement Nash's systolic implementation and Chuang and He's systolic implementation of Faddeev's algorithm. These two implementations have their own advantages and drawbacks. In this project report we first cover Nash's implementation in detail and then Chuang and He's implementation. The report is organized as follows:
    1. A detailed look at what a systolic architecture is, how it can be used for matrix multiplication, and its advantages and disadvantages.
    2. Gaussian elimination for matrix computation and its properties.
    3. Faddeev's algorithm and how it is used.
    4. Systolic arrays for matrix triangularization.
    5. Nash's implementation in detail and its VHDL coding.
    6. Advantages and disadvantages of Nash's systolic implementation.
    7. Chuang and He's implementation in detail and its VHDL coding.
    8. Difficulties faced in this project.
    9. Conclusion.
    10. VHDL code for Nash's implementation.
    11. VHDL code for Chuang and He's implementation.
    12. Simulation results.
    13. References.
    14. PowerPoint presentation.

    1. Systolic Architecture: A systolic array is composed of matrix-like rows of data processing units called cells. Data processing units (DPUs) are similar to central processing units (CPUs), except for the usual lack of a program counter, since operation is transport-triggered, i.e., by the arrival of a data object.
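    To make the cell-to-cell data movement concrete, here is a minimal software model (not part of the project's VHDL) of a one-dimensional systolic array computing a matrix-vector product: each cell holds one element of x, and partial sums march one cell to the right per beat.

```c
#include <stdio.h>

#define N 4

/* Minimal software model of a 1-D systolic array computing y = A*x.
 * Cell j holds x[j]; partial sums flow left to right, one cell per beat.
 * Row i enters the array at beat i and leaves cell N-1 at beat i+N-1. */
void systolic_matvec(const int A[N][N], const int x[N], int y[N]) {
    int pipe[N] = {0};                       /* value latched at the output of each cell */
    for (int t = 0; t <= 2 * (N - 1); t++) {
        for (int j = N - 1; j >= 0; j--) {   /* right-to-left so reads see last beat's values */
            int i = t - j;                   /* row currently passing through cell j */
            if (i < 0 || i >= N) continue;
            int in = (j == 0) ? 0 : pipe[j - 1];
            pipe[j] = in + A[i][j] * x[j];
            if (j == N - 1) y[i] = pipe[j];  /* completed dot product exits the array */
        }
    }
}

int main(void) {
    int A[N][N], x[N], y[N];
    for (int i = 0; i < N; i++) {
        x[i] = i + 1;
        for (int j = 0; j < N; j++) A[i][j] = i + j;
    }
    systolic_matvec(A, x, y);
    for (int i = 0; i < N; i++) printf("y[%d] = %d\n", i, y[i]);
    return 0;
}
```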
  • On the Efficiency of Register File Versus Broadcast Interconnect for Collective Communications in Data-Parallel Hardware Accelerators
    On the Efficiency of Register File versus Broadcast Interconnect for Collective Communications in Data-Parallel Hardware Accelerators. Ardavan Pedram, Andreas Gerstlauer (Department of Electrical and Computer Engineering, The University of Texas at Austin); Robert A. van de Geijn (Department of Computer Science, The University of Texas at Austin).

    Abstract — Reducing power consumption and increasing efficiency is a key concern for many applications. How to design highly efficient computing elements while maintaining enough flexibility within a domain of applications is a fundamental question. In this paper, we present how broadcast buses can eliminate the use of power-hungry multi-ported register files in the context of data-parallel hardware accelerators for linear algebra operations. We demonstrate an algorithm/architecture co-design for the mapping of different collective communication operations, which are crucial for achieving performance and efficiency in most linear algebra routines, such as GEMM, SYRK and matrix transposition. We compare a broadcast-bus based architecture with conventional SIMD, 2D-SIMD and flat register file for these operations in terms of area and energy efficiency.

    [...] on broadcast communication among a 2D array of PEs. In this paper, we focus on the LAC's data-parallel broadcast interconnect and on showing how representative collective communication operations can be efficiently mapped onto this architecture. Such collective communications are a core component of many matrix or other data-intensive operations that often demand matrix manipulations. We compare our design with typical SIMD cores with equivalent data parallelism and with L1 and L2 caches that amount to an equivalent aggregate storage space. To do so, we examine efficiency and performance of the cores for data movement and data manipulation in both GEneral matrix- [...]
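    A rough sketch of why a broadcast bus suits these collectives: in the rank-1 updates at the heart of GEMM, one operand can be driven on the bus once per step while every PE reads only its own local registers. The model below illustrates that idea under those assumptions; it is not the paper's LAC design.

```c
#include <stdio.h>

#define N 4   /* matrix dimension; also the number of PEs in this toy model */

/* Per-PE local state: one column of B and one column of C live in PE j's
 * local registers, so each multiply-accumulate reads only local storage. */
typedef struct {
    int b_col[N];   /* B[0..N-1][j] */
    int c_col[N];   /* C[0..N-1][j] */
} PE;

/* One rank-1 update step: the value a = A[i][k] is driven on the broadcast
 * bus once, and every PE consumes it in the same step. */
static void broadcast_step(PE pe[N], int i, int k, int a) {
    for (int j = 0; j < N; j++)               /* "in parallel" across PEs */
        pe[j].c_col[i] += a * pe[j].b_col[k];
}

int main(void) {
    static PE pe[N];                          /* zero-initialized C columns */
    int A[N][N];
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++) {
            A[i][j] = i + 1;
            pe[j].b_col[i] = j + 1;           /* B[i][j] = j + 1 */
        }
    for (int k = 0; k < N; k++)
        for (int i = 0; i < N; i++)
            broadcast_step(pe, i, k, A[i][k]);
    printf("C[0][0] = %d\n", pe[0].c_col[0]); /* sum over k of A[0][k]*B[k][0] */
    return 0;
}
```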
  • Bringing Virtualization to the x86 Architecture with the Original VMware Workstation
    Bringing Virtualization to the x86 Architecture with the Original VMware Workstation. Edouard Bugnion (Stanford University), Scott Devine (VMware Inc.), Mendel Rosenblum (Stanford University), Jeremy Sugerman (Talaria Technologies, Inc.), Edward Y. Wang (Cumulus Networks, Inc.).

    This article describes the historical context, technical challenges, and main implementation techniques used by VMware Workstation to bring virtualization to the x86 architecture in 1999. Although virtual machine monitors (VMMs) had been around for decades, they were traditionally designed as part of monolithic, single-vendor architectures with explicit support for virtualization. In contrast, the x86 architecture lacked virtualization support, and the industry around it had disaggregated into an ecosystem, with different vendors controlling the computers, CPUs, peripherals, operating systems, and applications, none of them asking for virtualization. We chose to build our solution independently of these vendors.

    As a result, VMware Workstation had to deal with new challenges associated with (i) the lack of virtualization support in the x86 architecture, (ii) the daunting complexity of the architecture itself, (iii) the need to support a broad combination of peripherals, and (iv) the need to offer a simple user experience within existing environments. These new challenges led us to a novel combination of well-known virtualization techniques, techniques from other domains, and new techniques.

    VMware Workstation combined a hosted architecture with a VMM. The hosted architecture enabled a simple user experience and offered broad hardware compatibility. Rather than exposing I/O diversity to the virtual machines, VMware Workstation also relied on software emulation of I/O devices. The VMM combined a trap-and-emulate direct execution engine with a system-level dynamic binary translator to efficiently virtualize the x86 architecture and support most commodity operating systems.
  • Computer Architectures
    Parallel (High-Performance) Computer Architectures. Tarek El-Ghazawi, Department of Electrical and Computer Engineering, The George Washington University.

    Introduction to Parallel Computing Systems — outline: definitions and conceptual classifications (parallel processing, MPPs, and related terms; Flynn's classification of computer architectures); operational models for parallel computers; interconnection networks; MPP performance.

    Definitions and conceptual classification. What is parallel processing? A form of data processing which emphasizes the exploration and exploitation of inherent parallelism in the underlying problem. Other related terms: massively parallel processors; heterogeneous processing (in the 1990s, heterogeneous workstations from different processor vendors; now, accelerators such as GPUs, FPGAs, Intel's Xeon Phi, ...); grid computing; cloud computing.

    Why massively parallel processors (MPPs)? They increase processing speed and memory, allowing studies of problems with higher resolutions or bigger sizes, and they provide a low-cost alternative to using expensive processor and memory technologies (as in traditional vector machines).

    Stored program computer: the IAS machine was the first electronic computer developed, under [...]
  • Systolic Computing on GPUs for Productive Performance
    Systolic Computing on GPUs for Productive Performance. Hongbo Rong (Intel), Xiaochen Hao (Intel, Peking University), Yun Liang (Peking University), Lidong Xu (Intel), Hong H. Jiang (Intel), Pradeep Dubey (Intel).

    Abstract — We propose a language and compiler to productively build high-performance software systolic arrays that run on GPUs. Based on a rigorous mathematical foundation (uniform recurrence equations and space-time transform), our language has a high abstraction level and covers a wide range of applications. A programmer specifies a projection of a dataflow compute onto a linear systolic array, while leaving the detailed implementation of the projection to a compiler; the compiler implements the specified projection and maps the linear systolic array to the SIMD execution units and vector registers of GPUs. In this way, both productivity and performance are [...]

    [...] an SIMT (single-instruction multiple-threads) programming interface, and rely on an underlying compiler to transparently map a warp of threads to SIMD execution units. If data need to be exchanged among threads in the same warp, programmers have to write explicit shuffle instructions [19]. This paper proposes a new programming style that programs GPUs as building software systolic arrays. Systolic arrays have been extensively studied since 1978 [15], and shown an abundance of practice-oriented applications, mainly in fields dominated by iterative procedures [32], e.g. image and signal processing, matrix arithmetic, non-numeric applications, relational database [7, 8, 10, 11, 16, 17, 30], and so on.
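    The space-time transform mentioned above can be illustrated with a toy projection of a 2-D iteration space onto a linear array: an allocation function picks the PE and a schedule function picks the time step, and no two iteration points may collide. The mapping below (p = j, t = i + j) is a standard textbook choice, not the paper's language or compiler.

```c
#include <stdio.h>

#define N 3
#define M 3

/* Toy illustration (not the paper's language): a space-time transform for a
 * 2-D iteration space, projecting along the i axis. Each point (i, j) is
 * assigned a processing element p = j and a time step t = i + j. Points that
 * share a PE get distinct times, so the projection yields a valid schedule
 * for a linear systolic array of M cells. */
int main(void) {
    int busy[M][N + M] = {{0}};            /* busy[p][t] marks an occupied slot */
    for (int i = 0; i < N; i++)
        for (int j = 0; j < M; j++) {
            int p = j;                     /* allocation (space) */
            int t = i + j;                 /* schedule (time)    */
            if (busy[p][t]) printf("conflict at PE %d, time %d\n", p, t);
            busy[p][t] = 1;
            printf("iteration (%d,%d) -> PE %d at time %d\n", i, j, p, t);
        }
    return 0;
}
```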
  • Static Fault Attack on Hardware DES Registers
    Static Fault Attack on Hardware DES Registers. Philippe Loubet-Moundi, Francis Olivier, and David Vigilant. Gemalto, Security Labs, France. {philippe.loubet-moundi,david.vigilant,francis.olivier}@gemalto.com, http://www.gemalto.com

    Abstract. In the late nineties, Eli Biham and Adi Shamir published the first paper on Differential Fault Analysis on symmetric key algorithms. More specifically, they introduced a fault model where a key bit located in non-volatile memory is forced to 0/1 with a fault injection. In their scenario the fault was permanent, and could lead the attacker to full key recovery with low complexity. In this paper, another fault model is considered: forcing a key bit to 0/1 in the register of a hardware block implementing the Data Encryption Standard. Due to the specific location of the fault, the key modification is not permanent in the life of the embedded device, and this leads to a powerful safe-error-like attack. This paper reports a practical validation of the fault model on two actual circuits, and discusses limitations and efficient countermeasures against this threat.

    Keywords: hardware DES, fault attacks, safe-error, register attacks.

    1 Introduction. Fault attacks against embedded systems exploit the disturbed execution of a given program to infer sensitive data, or to execute unauthorized parts of this program. Usually a distinction is made between permanent faults and transient faults. A permanent fault alters the behavior of the device from the moment it is injected, and even if the device is powered off [8]. On the other hand, a transient fault has an effect limited in time; only a few cycles of the process are affected.
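    The safe-error reasoning behind the attack can be sketched in a few lines: force one key-register bit to 0 before an encryption, and if the ciphertext is unchanged, that bit was already 0. The code below is a toy model with a stand-in cipher rather than DES, and the "fault" is simulated by simply clearing a bit of the key passed to the cipher.

```c
#include <stdio.h>
#include <stdint.h>

/* Toy model of the safe-error principle: the cipher below is a stand-in
 * permutation, not DES, and the fault injection is modelled by clearing one
 * bit of the key before encrypting. */
static uint64_t toy_encrypt(uint64_t key, uint64_t plaintext) {
    uint64_t state = plaintext;
    for (int round = 0; round < 16; round++)
        state = ((state << 7) | (state >> 57)) ^ key ^ (uint64_t)round;
    return state;
}

int main(void) {
    const uint64_t secret_key = 0x0123456789ABCDEFULL;   /* unknown to the attacker */
    const uint64_t pt = 0xDEADBEEFCAFEF00DULL;
    uint64_t reference = toy_encrypt(secret_key, pt);     /* fault-free run */

    uint64_t recovered = 0;
    for (int bit = 0; bit < 64; bit++) {
        uint64_t faulted_key = secret_key & ~(1ULL << bit); /* force bit to 0 */
        uint64_t faulted = toy_encrypt(faulted_key, pt);
        if (faulted != reference)               /* output changed => bit was 1 */
            recovered |= 1ULL << bit;
        /* output unchanged ("safe error")      => bit was already 0 */
    }
    printf("recovered == secret: %s\n", recovered == secret_key ? "yes" : "no");
    return 0;
}
```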
  • Configurable Fine-Grain Protection for Multicore Processor Virtualization
    Configurable Fine-Grain Protection for Multicore Processor Virtualization. David Wentzlaff (Princeton University), Christopher J. Jackson (Tilera Corp.), Patrick Griffin (Google Inc.), and Anant Agarwal (Tilera Corp.).

    [Figure 1. With CFP, system software can dynamically set the privilege level needed to access each fine-grain processor resource. The example resources shown are TLB access, the DMA engine, the "user" network, and the I/O network; key: 0 – user code, 1 – OS, 2 – hypervisor, 3 – hypervisor debugger.]

    Abstract. Multicore architectures, with their abundant on-chip resources, are effectively collections of systems-on-a-chip. The protection system for these architectures must support multiple concurrently executing operating systems (OSes) with different needs, and manage and protect the hardware's novel communication mechanisms and hardware features. Traditional protection systems are insufficient; they protect supervisor from user code, but typically do not protect one system from another, and only support fixed assignment of resources to protection levels. In this paper, we propose an alternative to traditional protection systems which we call configurable fine-grain protection (CFP). CFP enables the dynamic assignment of in-core resources to protection levels. We investigate how CFP enables different system software stacks to utilize the same configurable [...]

    [...] multicore systems, a protection system must both temporally protect and spatially isolate access to resources. Spatial isolation is the need to isolate different system software stacks concurrently executing on spatially disparate cores in a multicore system. Spatial isolation is especially important now that multicore systems have directly accessible networks connecting cores to other cores and cores to I/O devices.
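    A rough software model of the idea in Figure 1: each fine-grain resource carries a minimum privilege level that more-privileged software can reassign at run time. The encoding and functions below are invented for illustration and are not the paper's hardware interface.

```c
#include <stdio.h>
#include <stdbool.h>

/* Sketch of configurable fine-grain protection: every resource has a
 * minimum privilege level, stored in a table that privileged software
 * may rewrite dynamically. Resource names follow Figure 1; the encoding
 * and policy details are assumptions for this sketch. */
enum { LVL_USER = 0, LVL_OS = 1, LVL_HYPERVISOR = 2, LVL_HV_DEBUGGER = 3 };
enum { RES_TLB_ACCESS, RES_DMA_ENGINE, RES_USER_NETWORK, RES_IO_NETWORK, RES_COUNT };

static int min_level[RES_COUNT];   /* per-resource protection level, set dynamically */

/* Only software at or above a resource's current level may touch it. */
static bool may_access(int caller_level, int resource) {
    return caller_level >= min_level[resource];
}

/* Reconfiguration itself is privileged: a caller may only change a
 * resource's level if it dominates both the old and the new level. */
static bool set_level(int caller_level, int resource, int new_level) {
    if (caller_level < min_level[resource] || caller_level < new_level)
        return false;
    min_level[resource] = new_level;
    return true;
}

int main(void) {
    min_level[RES_TLB_ACCESS]   = LVL_HYPERVISOR;
    min_level[RES_DMA_ENGINE]   = LVL_HYPERVISOR;
    min_level[RES_USER_NETWORK] = LVL_USER;
    min_level[RES_IO_NETWORK]   = LVL_OS;

    /* The hypervisor delegates the DMA engine to a guest OS at run time. */
    if (set_level(LVL_HYPERVISOR, RES_DMA_ENGINE, LVL_OS))
        printf("DMA engine delegated to the OS\n");
    printf("OS may program DMA: %s\n",   may_access(LVL_OS,   RES_DMA_ENGINE) ? "yes" : "no");
    printf("user may program DMA: %s\n", may_access(LVL_USER, RES_DMA_ENGINE) ? "yes" : "no");
    return 0;
}
```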
  • Validating Register Allocation and Spilling
    Validating Register Allocation and Spilling. Silvain Rideau (École Normale Supérieure, 45 rue d'Ulm, 75005 Paris, France) and Xavier Leroy (INRIA Paris-Rocquencourt, BP 105, 78153 Le Chesnay, France).

    Abstract. Following the translation validation approach to high-assurance compilation, we describe a new algorithm for validating a posteriori the results of a run of register allocation. The algorithm is based on backward dataflow inference of equations between variables, registers and stack locations, and can cope with sophisticated forms of spilling and live range splitting, as well as many architectural irregularities such as overlapping registers. The soundness of the algorithm was mechanically proved using the Coq proof assistant.

    1 Introduction. To generate fast and compact machine code, it is crucial to make effective use of the limited number of registers provided by hardware architectures. Register allocation and its accompanying code transformations (spilling, reloading, coalescing, live range splitting, rematerialization, etc.) therefore play a prominent role in optimizing compilers. As in the case of any advanced compiler pass, mistakes sometimes happen in the design or implementation of register allocators, possibly causing incorrect machine code to be generated from a correct source program. Such compiler-introduced bugs are uncommon but especially difficult to exhibit and track down. In the context of safety-critical software, they can also invalidate all the safety guarantees obtained by formal verification of the source code, which is a growing concern in the formal methods world. There exist two major approaches to rule out incorrect compilations. Compiler verification proves, once and for all, the correctness of a compiler or compilation pass, preferably using mechanical assistance (proof assistants) to conduct the proof.
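    As a much-simplified illustration of what such a validator checks, the sketch below runs a tiny source block and its allocated counterpart forward over symbolic value numbers and verifies that operands and live-out variables sit in the locations the allocator claims. The paper's actual algorithm infers equations backward and handles far more than this straight-line toy.

```c
#include <stdio.h>

/* Much-simplified illustration of translation validation for register
 * allocation on one straight-line block. The paper's algorithm infers
 * equations between variables and locations backward; this sketch instead
 * runs the source block (over variables) and the allocated block (over
 * registers and a stack slot) forward on symbolic value numbers, and checks
 * that every location the allocated code reads or finally holds agrees with
 * the corresponding source variable. The block and assignment are invented. */

enum { A, B, C, NVARS };        /* source variables                 */
enum { R0, R1, S0, NLOCS };     /* machine registers + 1 stack slot */

static int var_val[NVARS], loc_val[NLOCS], errors;

static void expect(int loc, int var) {          /* operand / live-out check */
    if (loc_val[loc] != var_val[var]) errors++;
}

int main(void) {
    int v = 1;
    /* Source:  a := in0; b := in1; c := a + b   (a and c live out)       */
    /* Target:  r0 := in0; r1 := in1; s0 := r0; r1 := r0 + r1; r0 := s0   */
    var_val[A] = v;        loc_val[R0] = v;  v++;
    var_val[B] = v;        loc_val[R1] = v;  v++;
                           loc_val[S0] = loc_val[R0];          /* spill a  */
    expect(R0, A); expect(R1, B);                              /* operands */
    var_val[C] = v;        loc_val[R1] = v;  v++;              /* the add  */
                           loc_val[R0] = loc_val[S0];          /* reload a */

    /* Allocator's claim for the live-out variables: a in r0, c in r1.     */
    expect(R0, A); expect(R1, C);
    printf("allocation %s\n", errors == 0 ? "validated" : "rejected");
    return 0;
}
```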
  • V810 Family™ 32-Bit Microprocessor
    V810 Family™ 32-Bit Microprocessor Architecture. V805™, V810™, V820™, V821™. Document No. U10082EJ1V0UM00 (1st edition). Date published: October 1995. © 1995. Printed in Japan.

    Notes for CMOS devices:

    1. Precaution against ESD for semiconductors. A strong electric field, when exposed to a MOS device, can cause destruction of the gate oxide and ultimately degrade the device operation. Steps must be taken to stop generation of static electricity as much as possible, and quickly dissipate it once it has occurred. Environmental control must be adequate; when it is dry, a humidifier should be used. It is recommended to avoid using insulators that easily build static electricity. Semiconductor devices must be stored and transported in an anti-static container, static shielding bag or conductive material. All test and measurement tools, including the work bench and floor, should be grounded. The operator should be grounded using a wrist strap. Semiconductor devices must not be touched with bare hands. Similar precautions need to be taken for PW boards with semiconductor devices on them.

    2. Handling of unused input pins for CMOS. Leaving CMOS device inputs unconnected can cause malfunction. If no connection is provided to the input pins, it is possible that an internal input level may be generated due to noise, etc., hence causing malfunction. CMOS devices behave differently than bipolar or NMOS devices. Input levels of CMOS devices must be fixed high or low by using pull-up or pull-down circuitry. Each unused pin should be connected to VDD or GND with a resistor, if it is considered to have a possibility of being an output pin.