Efficient Enclave Communication Through Shared Memory: A Case Study of Intel SGX Enabled Open vSwitch


DEGREE PROJECT IN COMPUTER SCIENCE AND ENGINEERING, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2019

JAKOB SVENNINGSSON
[email protected]
Master in Computer Science
Date: December 12, 2019
Supervisor: Nicolae Paladi / Panos Papadimitratos
Examiner: Mathias Ekstedt
School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology
Host company: RISE
Swedish title: Effektiv kommunikation genom delat minne - en fallstudie av Open vSwitch med SGX-stöd

Abstract

Open vSwitch is a virtual network switch commonly used to forward network packets between virtual machines. The switch routes network packets based on a set of flow rules stored in its flow tables. Open vSwitch does not provide confidentiality or integrity protection of its flow tables; therefore, an attacker can exploit software vulnerabilities in Open vSwitch to gain access to the host machine and observe or modify installed flow rules. Medina [1] brought integrity and confidentiality guarantees to the flow tables of Open vSwitch, even in the presence of untrusted privileged software, by confining them inside an Intel SGX enclave. However, using an enclave to protect the flow tables significantly reduces the performance of Open vSwitch. This thesis investigates how, and to what extent, the performance overhead introduced by Intel SGX in Open vSwitch can be reduced.

The method consisted of the development of a general-purpose communication library for Intel SGX enclaves and of two optimized SGX-enabled Open vSwitch prototypes. The library enables efficient communication between the enclave and the untrusted application through shared-memory-based techniques. Integrating the communication library in Open vSwitch, combined with other optimization techniques, resulted in two optimized prototypes that were evaluated on a set of common Open vSwitch use cases. The results of this thesis show that it is possible to reduce the overhead introduced by Intel SGX in Open vSwitch by several orders of magnitude, depending on the use case and optimization technique, without compromising its security guarantees.

Sammanfattning (Swedish abstract)

Open vSwitch is a virtual network switch commonly used to forward traffic between virtual machines. The switch forwards traffic based on a set of flow rules stored in its flow tables. Open vSwitch does not guarantee the integrity or confidentiality of its flow tables; an attacker can therefore exploit vulnerabilities in Open vSwitch to gain access to the host machine and observe or modify flow rules. A previous study provided integrity and confidentiality guarantees for the flow tables of Open vSwitch, even in the presence of untrusted privileged software, by placing the flow tables inside an Intel SGX enclave [1]. Using an enclave to protect the flow tables, however, significantly degrades the performance of Open vSwitch. This thesis investigates how and to what extent the performance degradation introduced by Intel SGX can be minimized in Open vSwitch.
The method of the thesis consisted of the development of a communication library for Intel SGX enclaves and of two optimized Open vSwitch prototypes with SGX support. The developed library enables efficient communication between an enclave and the untrusted application through communication techniques based on shared memory. Combining the communication library's optimization features with other optimization techniques resulted in two optimized SGX-enabled Open vSwitch prototypes, which were evaluated on a set of use cases. The results of this thesis show that it is possible to reduce the performance degradation caused by Intel SGX in Open vSwitch by several orders of magnitude, depending on the use case and optimization technique, without compromising its security guarantees.

Contents

1 Introduction
  1.1 Research Question
  1.2 Scope
  1.3 Disposition
2 Background
  2.1 Trusted Execution Environment
  2.2 Intel Software Guard Extensions
    2.2.1 Enclave Entry and Exit
    2.2.2 Attestation
  2.3 Inter Process Communication (IPC) with Shared Memory
  2.4 Memoization
  2.5 Software Defined Networking
  2.6 Virtual Switch
  2.7 Open vSwitch
  2.8 Previous Work
3 Methodology
  3.1 Prestudy
  3.2 Design
  3.3 Implementation
  3.4 Evaluation
    3.4.1 Measuring Methodology
    3.4.2 Experimental Settings
  3.5 Benchmarks
    3.5.1 HotCall Bundler
    3.5.2 Open vSwitch
4 SGX Performance Analysis in Open vSwitch
  4.1 Use Cases
  4.2 Overhead Generated by Enclave Transitions
    4.2.1 Estimating the Cost of Enclave Transitions
    4.2.2 The Cost of a Single Enclave Transition
    4.2.3 Estimate of Total Enclave Transition Overhead
  4.3 Open vSwitch Enclave Access Pattern
5 HotCall Bundler Library
  5.1 Functional Requirements
  5.2 Architecture
  5.3 Switchless Enclave Function Calls
    5.3.1 Translation Functions
  5.4 Execution Graphs
    5.4.1 Iterator
    5.4.2 If
    5.4.3 For
    5.4.4 While
    5.4.5 Construction of Execution Graphs
  5.5 Enclave Function Memoization
    5.5.1 Limitations
  5.6 Library API
  5.7 Integration in Intel SGX Application
6 Open vSwitch Prototypes
  6.1 Modifications of OFTinSGX
    6.1.1 Prototype Bundle & Refactored
    6.1.2 Prototype Bundle
    6.1.3 Prototype Refactor
  6.2 Modifications of Open vSwitch
    6.2.1 Prototype Bundle & Refactor
    6.2.2 Prototype Bundle
    6.2.3 Prototype Refactor
7 Results
  7.1 HotCall Bundler Library
    7.1.1 Enclave Transition Time
    7.1.2 Execution Graphs
    7.1.3 Enclave Function Memoization
  7.2 Open vSwitch Prototypes
    7.2.1 Add Flow Rule
    7.2.2 Delete Flow Rule
    7.2.3 Modify Flow Rule
    7.2.4 Evict Flow Rule
8 Discussion
  8.1 Evaluation of HotCall Bundler
    8.1.1 Enclave Transition Time
    8.1.2 Execution Graphs
    8.1.3 Enclave Function Memoization
  8.2 Evaluation of Open vSwitch Prototypes
  8.3 Security Analysis
  8.4 Trusted Computing Base
  8.5 Method Critique and Possible Improvements
9 Conclusions
Bibliography
A Estimating ECall Overhead
B HotCall Bundler User API
  B.1 Calling a Switchless Enclave Function
  B.2 Merging Enclave Functions with Execution Graphs
  B.3 For Loop
  B.4 For Each
  B.5 Map
  B.6 If
  B.7 While Loop
  B.8 Enclave Function Memoization
C Results of Open vSwitch Prototypes

List of Acronyms

SDN     Software Defined Networking
TEE     Trusted Execution Environment
OVS     Open vSwitch
SGX     Software Guard Extensions
FIFO    First-In-First-Out
LRU     Least Recently Used
CDF     Cumulative Distribution Function
API     Application Programming Interface
EPC     Enclave Page Cache
PRM     Processor Reserved Memory
NIC     Network Interface Card
VNIC    Virtual Network Interface Card
OVSDB   Open vSwitch Database
SDK     Software Development Kit
RDTSCP  Read Time-Stamp Counter and Processor ID Instruction
LLC     Last Level Cache
DRAM    Dynamic Random-Access Memory
L2      Level 2

Chapter 1 Introduction

Software applications today often handle confidential and integrity-sensitive data [2]. Deploying applications to public cloud platforms is increasing in popularity, which has raised concerns about the confidentiality and integrity of sensitive data stored on these platforms [3]. Hardware-based trusted execution environments (TEE), such as Intel SGX [4], provide integrity and confidentiality guarantees for user data even in the presence of untrusted privileged software [5]. Intel SGX enables the deployment of applications handling sensitive data with increased confidence on cloud platforms where the host machine or other tenants are potentially malicious [6].

Open vSwitch is a virtual network switch that is purpose-built for virtualized environments. Virtual switches are commonly used to forward network packets between virtual machines and are a critical piece of cloud platform infrastructure, since they provide network isolation among tenants' virtual machines [7]. Open vSwitch does not provide confidentiality or integrity protection of its flow tables; therefore, an attacker can exploit software vulnerabilities in Open vSwitch to gain access to the host machine and its memory [8]. Access to host memory allows an attacker to observe or modify installed flow rules, which are security-sensitive assets of Open vSwitch. Observing the flow rules of Open vSwitch allows an attacker to learn about the network topology, and the ability to modify flow rules enables an attacker to reroute traffic, which can be used to evade a firewall or intrusion detection system [9].

Medina [1] presented a security-enhanced Open vSwitch where its flow tables are confined within an Intel SGX enclave; however, the security guarantees provided by Intel SGX do not come for free [10]. The performance overhead associated with Intel SGX is well documented [11]. Open vSwitch with Intel SGX support is significantly slower than its non-SGX counterpart and is hence less likely to be adopted in a production environment. The aim of this thesis is to investigate how and to what extent the performance overhead in SGX-enabled Open vSwitch can be reduced while still maintaining the security properties provided by Intel SGX.

1.1 Research Question

The main goal of this thesis is to optimize the performance of SGX-enabled Open vSwitch without compromising the provided security guarantees.
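The central mechanism explored in the following chapters is replacing costly enclave transitions with calls passed through memory shared by the untrusted application and a worker thread that stays resident inside the enclave. The sketch below is a minimal illustration of that idea, assuming a single polling worker and C11 atomics; all identifiers and the dispatch logic are hypothetical and do not reflect the HotCall Bundler API, and the Intel SGX SDK plumbing (enclave creation and the initial ECALL that starts the worker) is omitted.

```c
/* Minimal sketch of a shared-memory "switchless" enclave call.
 * Illustrative only; not the HotCall Bundler API. */
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    atomic_bool pending;   /* set by the untrusted caller, cleared by the enclave */
    atomic_bool shutdown;  /* tells the enclave worker to stop polling            */
    int         func_id;   /* which trusted function to execute                   */
    void       *args;      /* argument buffer in untrusted memory                 */
} hotcall_t;

/* Trusted side: application-specific dispatch (stub for illustration). */
static void dispatch_trusted_function(int func_id, void *args)
{
    (void)func_id;
    (void)args;
    /* e.g. switch (func_id) { case ADD_FLOW: ...; } */
}

/* Trusted side: worker loop entered once through a regular ECALL;
 * afterwards no enclave transition is needed per call. */
void enclave_worker_loop(hotcall_t *hc)
{
    while (!atomic_load_explicit(&hc->shutdown, memory_order_acquire)) {
        if (atomic_load_explicit(&hc->pending, memory_order_acquire)) {
            dispatch_trusted_function(hc->func_id, hc->args);
            atomic_store_explicit(&hc->pending, false, memory_order_release);
        }
    }
}

/* Untrusted side: issue a call without an ECALL/OCALL transition. */
void hotcall_request(hotcall_t *hc, int func_id, void *args)
{
    hc->func_id = func_id;
    hc->args    = args;
    atomic_store_explicit(&hc->pending, true, memory_order_release);
    while (atomic_load_explicit(&hc->pending, memory_order_acquire))
        ;  /* busy-wait; a real implementation would pause or back off */
}
```

The untrusted side pays the cost of a busy-wait instead of an enclave transition; quantifying that trade-off for Open vSwitch is what the remaining chapters address.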