A Study on SIMD Architecture


Gürkan Solmaz, Rouhollah Rahmatizadeh and Mohammad Ahmadian
Department of Electrical Engineering and Computer Science
University of Central Florida
Email: {gsolmaz, rrahmati, [email protected]}

Abstract— Single instruction, multiple data (SIMD) architectures became popular with the increasing demand for data-streaming applications such as real-time games and video processing. Since modern processors in desktop computers support SIMD instructions with various implementations, we may use these machines to optimize applications in which we process multiple data with single instructions. In this project, we study the use of SIMD architectures and learn their effects on the performance of specific applications. We choose the matrix multiplication and Advanced Encryption Standard (AES) encryption algorithms and modify them to exploit the use of SIMD instructions. The performance improvements obtained with the SIMD instructions are analyzed and validated by the experimental study.

I. INTRODUCTION

Performance in a computer system is defined by the amount of useful work accomplished by the computer system compared to the time and the resources used. There are several aspects to improving the performance of computer systems, and researchers from several areas, ranging from algorithm, compiler, and OS designers to hardware designers, are striving to achieve higher performance.

Nowadays, most CPU designs contain at least some vector processing instructions, typically referred to as SIMD, which typically operate on a few vector elements per clock cycle in a pipeline. These vector processors run multiple mathematical operations on multiple data elements simultaneously. Thus, they affect the performance equation. From the Iron Law, we know that the execution time of a program is calculated by the formula T = IC · CPI · CT, where IC is the number of instructions (instruction count), CPI is the number of cycles per instruction and CT is the cycle time of the processor; performance is the inverse of this time. The use of SIMD architecture changes the IC and CPI values of a program.

With the new enhancements in processor architectures, current modern processors have started supporting 256-bit vector implementations. Moreover, the interest in SIMD architectures within the computer architecture research community is increasing. In the near future, new powerful machines are expected to make new high-performance multiple-data applications available with the enhanced SIMD architectures. In this study, we target the SIMD architecture and its effects on the performance of some designed cases. Then, we analyze the performance of our approach by evaluating the speed-up values of the programs using the SIMD architecture. One of the problems we face is the configuration of compilers to emit vector instructions in the binary executable file. The other challenge is designing case studies for calculating the speed-up. For this reason, the matrix multiplication and AES encryption algorithms are chosen as the best candidates, since they require several linear binary operations on multiple data. For the implementation part of this research, we implement them on a single machine with minimum load as the test platform for measuring the performance.

As a group of graduate students taking the CDA 5106 course, we try to get involved in this challenge by doing research on the SIMD CPU architecture. In this project, we worked on almost all phases together as a team, having regular meetings before and after the milestones. To sum up, Gürkan worked on the implementation and documentation, prepared the final report, and did research on the history and related studies. Rouhollah worked on the implementation of the optimized algorithms and the documentation phases; he also conducted the experiments. Mohammad proposed the main idea and worked on the proposal, the documentation for the benchmarks, and the analysis of the results.

The rest of the paper is organized as follows. Section II briefly summarizes the history of SIMD architectures and the related work. We describe SIMD and some architectures in Section III. We provide a detailed description of our benchmarks in Section IV. The results of the experiments are presented in Section V. We finally conclude in Section VI.
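
To make the effect on the IC term concrete, the following sketch contrasts a scalar loop with a hand-vectorized version written with 128-bit SSE intrinsics, the kind of transformation applied to the benchmarks in this project. It is only an illustrative sketch; the function names and the use of unaligned loads are choices made here for illustration, not details taken from the paper's implementation.

    #include <xmmintrin.h>   /* SSE intrinsics */

    /* Scalar version: one load-load-add-store sequence per element. */
    void add_scalar(const float *a, const float *b, float *c, int n) {
        for (int i = 0; i < n; i++)
            c[i] = a[i] + b[i];
    }

    /* SSE version: each instruction works on four packed floats, so the
       dynamic instruction count of the loop body drops roughly four-fold. */
    void add_sse(const float *a, const float *b, float *c, int n) {
        int i = 0;
        for (; i + 4 <= n; i += 4) {
            __m128 va = _mm_loadu_ps(&a[i]);            /* load 4 floats */
            __m128 vb = _mm_loadu_ps(&b[i]);
            _mm_storeu_ps(&c[i], _mm_add_ps(va, vb));   /* 4 additions at once */
        }
        for (; i < n; i++)                              /* remaining elements */
            c[i] = a[i] + b[i];
    }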

II. RELATED WORK

Let us briefly discuss the history of SIMD architectures and the related work in the literature. The first use of SIMD instructions was in the early 1970s [1]. As an example, they were used in the CDC Star-100 and TI ASC machines, which could do the same operation on a bunch of data. This architecture especially became common when Cray Inc. used it in their supercomputers. However, the vector processing used in these machines is nowadays considered different from SIMD machines. Thinking Machines CM-1 and CM-2, which are considered massively parallel processing-style supercomputers [2], started a new era in using SIMD for processing data in parallel. However, current research focuses on using SIMD instructions in desktop computers. A lot of tasks desktop computers do these days, such as video processing and real-time gaming, need to do the same operation on a bunch of data. So, companies tried to use this architecture in desktops. As one of the earliest attempts, Sun Microsystems introduced SIMD integer instructions in the VIS (visual instruction set) extensions of the UltraSPARC I microprocessor in 1995.

MIPS introduced MDMX (MIPS Digital Media eXtension). Intel made SIMD widely used by introducing the MMX extensions to the x86 architecture in 1996. Then Motorola introduced the AltiVec system in its PowerPC processors, which was also used in IBM's POWER systems. This prompted Intel's response, the introduction of SSE. These days, SSE and its extensions are used more than the others.

There are also various studies in the literature conducted by research groups or companies which focus on hardware. Holzer-Graf et al. [3] studied efficient vector implementations of AES-based designs; in that work, three different vector implementations are analyzed and their performance is compared. The use of chip multiprocessing and the Cell Broadband Engine is described by Gschwind [4].

III. SIMD ARCHITECTURE

To understand the improvements being made on SIMD architectures, let us first start with the older SIMD architectures. Earlier versions of SIMD architectures were proposed for supercomputers [1], which have a number of processing units (elements) and a control unit. In these machines, the processing units (PUs) are the pieces which perform the computation, while the control unit controls this array of processing elements (PEs). The single control unit is generally responsible for reading the instructions, decoding the instructions and sending control signals to the PEs. Data are supplied to the PEs by a memory. In this architecture, the number of data paths from the memory to the PEs is equal to the number of PEs. These supercomputers also have specific interconnection networks, which provide flexible, high-performance data movement to and from the PEs. They also have an I/O system, which differs from one machine to another.

Fig. 1. Processor array in a supercomputer. Taken from [5].

Fig. 2. The relationship between processor array and the host computer. Taken from [5].

Figure 1 illustrates the processor array architecture in supercomputers, including the memory modules, interconnection network, PEs, control unit and the I/O system. In some supercomputers, the processing elements are controlled by a host computer, which is illustrated in Figure 2. Supercomputers categorized as multiple instruction, multiple data (MIMD) in Flynn's taxonomy became popular, and this reduced the interest in SIMD machines for some period of time.

Enhancements in desktop computers led to powerful machines which are strong enough to handle applications such as video processing. Therefore, SIMD architectures again became popular in the 1990s. SIMD architectures exploit a property of the data stream called "data parallelism". SIMD computing is also known as vector processing, considering the rows of data coming to the processor as vectors of data. It is almost impossible to have applications which are purely parallel, so the pure use of SIMD computing is not possible. Hence, in applications of SIMD computing, the programs are written for single instruction, single data (SISD) machines and they include SIMD instructions. The proportion of the sequential part and the SIMD part in the program determines the maximum speed-up according to Amdahl's law.

Fig. 3. Data processing with SISD vs. SIMD. Taken from [6].

Figure 3 shows the data exploitation by a SIMD machine which processes 3 vectors at the same time, compared to an SISD machine which is able to process one row of data. The vector length in a SIMD processor determines the number of elements of a given data type it can hold. For instance, a 128-bit vector implementation in a processor allows us to do four-way single-precision floating-point operations.
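
As a short worked example of this Amdahl's law bound (with assumed numbers chosen for illustration, not measurements from this study): if a fraction p of the execution can use the SIMD instructions and that fraction is sped up by a factor s, the overall speed-up is limited to

    Speed-up_max = 1 / ((1 - p) + p / s)

For instance, with p = 0.8 and four-way SIMD (s = 4), the bound is 1 / (0.2 + 0.8 / 4) = 2.5, so even a perfectly vectorized kernel cannot speed the whole program up by more than 2.5 times when 20% of the work stays sequential.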

We may categorize SIMD operations by their types. One obvious operation type is intra-element arithmetic and non-arithmetic operations. Addition, multiplication, and subtraction are example arithmetic operations, while AND, XOR, and OR are examples of non-arithmetic operations. Figure 4 illustrates the intra-element operations with two source vectors VA and VB and a destination vector VT. Each of these vectors contains four 32-bit elements. This means we can have operations on two vectors, each of which holds four integers or floating-point values.

Fig. 4. Intra-element arithmetic and non-arithmetic operations. Taken from [6].

Fig. 5. Inter-element operations between the elements of a vector. Taken from [6].

They had a floating point unit along with the SIMD units in their processors. They also had a switch mechanism in their processors, which allows the processor to change its mode.

Fig. 6. AltiVec architecture with 4 distinct registers. Taken from [6].

Fig. 7. Vector permutation with AltiVec architecture. Taken from [6].
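
As a final illustration of the non-arithmetic intra-element operations listed above, and of why the AES benchmark maps well onto them, the following sketch XORs a 16-byte AES state with a round key using SSE2 integer intrinsics. It is a hypothetical fragment with invented function and variable names, not code from the implementation described in this paper.

    #include <emmintrin.h>   /* SSE2 integer intrinsics */

    /* AddRoundKey-style step: XOR a 16-byte state with a 16-byte round key
       using one 128-bit non-arithmetic (logical) vector operation.
       _mm_and_si128 and _mm_or_si128 follow the same pattern for AND and OR. */
    void xor_block_128(unsigned char state[16], const unsigned char key[16]) {
        __m128i s = _mm_loadu_si128((const __m128i *)state);
        __m128i k = _mm_loadu_si128((const __m128i *)key);
        s = _mm_xor_si128(s, k);                 /* 128-bit wide XOR */
        _mm_storeu_si128((__m128i *)state, s);   /* write result back */
    }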