Speedup Stacks: Identifying Scaling Bottlenecks in Multi-Threaded Applications


Speedup Stacks: Identifying Scaling Bottlenecks in Multi-Threaded Applications

Stijn Eyerman, Kristof Du Bois, Lieven Eeckhout
ELIS Department, Ghent University, Belgium

Abstract

Multi-threaded workloads typically show sublinear speedup on multi-core hardware, i.e., the achieved speedup is not proportional to the number of cores and threads. Sublinear scaling may have multiple causes, such as poorly scalable synchronization leading to spinning and/or yielding, and interference in shared resources such as the last-level cache (LLC) as well as the main memory subsystem. It is vital for programmers and processor designers to understand scaling bottlenecks in existing and emerging workloads in order to optimize application performance and design future hardware.

In this paper, we propose the speedup stack, which quantifies the impact of the various scaling delimiters on multi-threaded application speedup in a single stack. We describe a mechanism for computing speedup stacks on a multi-core processor, and we find speedup stacks to be accurate within 5.1% on average for sixteen-threaded applications. We present several use cases: we discuss how speedup stacks can be used to identify scaling bottlenecks, classify benchmarks, optimize performance, and understand LLC performance.

[Figure 1. Speedup as a function of the number of cores (1, 2, 4, 8 and 16 threads) for blackscholes, facesim (both PARSEC) and cholesky (SPLASH-2).]

1 Introduction

Power efficiency and diminishing returns in improving single-core performance have driven the computer industry towards multi-core processors. Current general-purpose processors employ a limited number of cores, typically in the range of 4 to 8; see for example Intel Nehalem, Intel Westmere, IBM POWER7 and AMD Bulldozer. The number of cores is expected to increase in the coming years, given the continuous transistor density improvements predicted by Moore's law, as exemplified by Intel's Many Integrated Core architecture with more than 50 cores on a chip.

A major challenge with increasing core counts is the ability to program multi-core and many-core systems. Although parallel programming has been a challenge in the scientific computing community for many years, the recent advent of multi-core and many-core processors has made parallel programming inevitable for mainstream computing. One of the key requirements for efficient programming is having the appropriate tools to analyze parallel performance. In particular, a software developer needs analysis tools to identify performance scaling bottlenecks, not only on current hardware but also on future hardware with many more cores than are available today; likewise, computer architects need analysis tools to understand the behavioral characteristics of existing and future workloads in order to design and optimize future hardware.

Speedup curves, which report speedup as a function of the number of cores, as exemplified in Figure 1, are often used to understand the scaling behavior of an application. Although a speedup curve gives a high-level view of application scaling behavior, it does not provide any insight into why an application does or does not scale. There are many possible causes of poor scaling behavior, such as synchronization, as well as interference in both shared on-chip resources (e.g., the last-level cache) and off-chip resources (e.g., main memory). Unfortunately, a speedup curve provides no clue whatsoever as to why an application exhibits poor scaling behavior.
In this paper, we propose the speedup stack, a novel representation that provides insight into an application's scaling behavior on multi-core hardware. The height of the speedup stack is defined as N, with N the number of cores or threads. The different components in a speedup stack define the actual speedup plus a number of performance delimiters: the last-level cache (LLC) and memory interference components represent both positive and negative interference in the LLC and main memory; the spinning component denotes time spent spinning on lock and barrier variables; the yield component denotes the performance deficiency due to yielding on barriers and highly contended lock variables; additional components are due to cache coherency, work imbalance and parallelization overhead. Figure 2 shows an example speedup stack. The intuition is that the scaling delimiters, such as negative LLC and memory interference, spinning and yielding, and their relative contributions, are immediately clear from the speedup stack. Optimizing the largest scaling delimiters is likely to yield the largest speedup; hence, a speedup stack is an intuitive and useful tool for both software and hardware optimizations.

[Figure 2. Illustrative speedup stack: a stacked bar of height N (the maximum theoretical speedup) with, from bottom to top, the base speedup and positive interference (together the actual speedup), followed by the scaling delimiters: net negative interference, parallelization overhead, cache coherency, yielding, spinning and imbalance.]

Next to introducing the concept of the speedup stack, this paper also describes a method for computing speedup stacks from a single run, either in simulation software or on real hardware; the cost of a hardware implementation is limited to 1.1KB per core, or 18KB for a 16-core CMP. Our experimental results demonstrate the accuracy of the approach: we achieve an average absolute error of 5.1% on 16-core processors across a set of SPLASH-2, PARSEC and Rodinia benchmarks. Finally, we describe several applications of speedup stacks apart from the obvious one of analyzing performance scaling bottlenecks: we use speedup stacks to classify benchmarks based on their scaling bottlenecks, we identify optimization opportunities, and we analyze LLC performance.

In summary, we make the following two major contributions in this paper:

• We introduce the speedup stack, a novel representation that quantifies the impact of various scaling bottlenecks in multi-threaded workloads on multi-core hardware. The concept of a speedup stack is broadly applicable to multiprocessor, chip-multiprocessor and various forms of multi-threading architectures.

• We extend a previously proposed per-thread cycle accounting architecture [7] for computing speedup stacks on chip-multiprocessors. The extensions include support for quantifying the impact of positive interference in shared caches on multi-threaded application performance, along with support for quantifying the impact of spinning and yielding. The overall accounting architecture can be implemented in hardware.

2 Speedup stack

For explaining the key concept of a speedup stack, we refer to Figure 3. To simplify the discussion we focus on the parallelizable part of a program: Amdahl's law already explains the impact of the sequential part on parallel performance, hence we do not consider it further in the remainder of the paper. If of interest, the sequential part can easily be included in the speedup stack.

We define Ts as the execution time of (the parallelizable part of) a program under single-threaded execution. The execution time of the same program during multi-threaded execution will (most likely) be shorter, say Tp. We now break up the execution time of a thread during multi-threaded execution into various cycle components; note that the total execution time is identical for all threads under this breakup. The idealized multi-threaded execution time, assuming perfect parallelization, equals Ts/N, with N the number of threads or cores. Note that we use the terms 'thread' and 'core' interchangeably, as we assume chip-multiprocessors in this paper; however, the concept of a speedup stack can also be applied to shared-memory multiprocessors (SMP) as well as simultaneous multi-threading (SMT) and other forms of multi-threading.

Obviously, the idealized multi-threaded execution time Ts/N is not achieved in practice; hence, multi-threaded execution time is typically larger, for a number of reasons. Parallelizing an application incurs overhead in the form of additional instructions being executed to communicate data between threads, recompute data, etc. This is referred to as parallelization overhead in the speedup stack. Other overhead factors include spinning (active spinning on lock and barrier variables), yielding (the operating system scheduling out threads that are waiting on barriers or highly contended locks), and imbalance (threads waiting for other threads to finish their execution). Finally, there are interference effects in the memory hierarchy, both positive and negative, in both the LLC and the memory subsystem, as well as performance penalties due to cache coherency. Positive interference obviously offsets negative interference; in a rare case, positive interference could even lead to superlinear speedups, in case it outweighs all overhead components.
[Figure 3. Breaking up per-thread performance for computing speedup stacks: (a) single-threaded execution, with single-threaded execution time Ts; (b) multi-threaded execution, with per-thread execution time Tp broken up into the idealized execution time Ts/N, parallelization overhead, spinning, yielding, imbalance, cache coherency, negative interference and positive interference.]

We estimate the single-threaded execution time contributed by thread i as

\[
\hat{T}_i = T_p - \sum_j O_{i,j} + P_i ,
\]

with \(O_{i,j}\) the number of cycles thread i loses to overhead component j (spinning, yielding, imbalance, cache coherency, negative interference and parallelization overhead), and \(P_i\) the positive interference for thread i. Given the estimated single-threaded execution time, we can now estimate the achieved speedup \(\hat{S}\):

\[
\hat{S} = \frac{\hat{T}_s}{T_p} = \frac{\sum_{i=1}^{N} \hat{T}_i}{T_p} = \frac{\sum_{i=1}^{N} \left( T_p - \sum_j O_{i,j} + P_i \right)}{T_p} . \quad (3)
\]

We now reformulate the above formula to:

\[
\hat{S} = N - \frac{\sum_{i=1}^{N} \sum_j O_{i,j}}{T_p} + \frac{\sum_{i=1}^{N} P_i}{T_p} . \quad (4)
\]

This formula immediately leads to the speedup stack, by showing the different terms of the formula in a stacked bar. The height of the bar equals the maximum achievable speedup, namely N. The various terms denote the aggregate overhead components across all threads, along with the aggregate positive interference component. In summary, the speedup stack consists of the base speedup plus a number of components; see also Figure 2.
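To make equation (4) concrete, the following is a minimal sketch in C of how measured per-thread cycle components would be turned into the components of a speedup stack. The thread count, the component list and all cycle numbers are hypothetical values chosen for illustration; in the paper, these counts come from the per-thread cycle accounting architecture.

#include <stdio.h>

#define N_THREADS 4 /* N: number of threads (= cores) */
/* Overhead components j: spinning, yielding, imbalance,
   cache coherency, negative interference, parallelization. */
#define N_OVERHEADS 6

int main(void) {
    double Tp = 1.0e9; /* per-thread multi-threaded execution time, in cycles */
    /* O[i][j]: cycles thread i loses to overhead component j
       (hypothetical measurements, for illustration only). */
    double O[N_THREADS][N_OVERHEADS] = {
        {2e7, 1e7, 5e7, 1e7, 8e7, 3e7},
        {1e7, 2e7, 4e7, 1e7, 9e7, 3e7},
        {3e7, 1e7, 2e7, 2e7, 7e7, 3e7},
        {2e7, 1e7, 6e7, 1e7, 8e7, 3e7},
    };
    /* P[i]: cycles thread i gains through positive interference. */
    double P[N_THREADS] = {1e7, 2e7, 1e7, 1e7};
    const char *name[N_OVERHEADS] = {
        "spinning", "yielding", "imbalance",
        "cache coherency", "negative interference", "parallelization"
    };

    /* Aggregate each component across all threads, as in equation (4). */
    double component[N_OVERHEADS] = {0.0}, positive = 0.0;
    for (int i = 0; i < N_THREADS; i++) {
        for (int j = 0; j < N_OVERHEADS; j++)
            component[j] += O[i][j] / Tp;
        positive += P[i] / Tp;
    }

    double speedup = (double)N_THREADS + positive; /* N plus positive term */
    printf("maximum speedup: %d\n", N_THREADS);
    for (int j = 0; j < N_OVERHEADS; j++) {
        printf("  - %-22s %.3f\n", name[j], component[j]);
        speedup -= component[j]; /* subtract each aggregate overhead term */
    }
    printf("  + positive interference  %.3f\n", positive);
    printf("estimated speedup: %.2f\n", speedup);
    return 0;
}

Plotting N as the bar height and stacking the printed components on top of the estimated speedup reproduces the layout of Figure 2.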
Recommended publications
  • Balanced Multithreading: Increasing Throughput Via a Low Cost Multithreading Hierarchy
Balanced Multithreading: Increasing Throughput via a Low Cost Multithreading Hierarchy. Eric Tune, Rakesh Kumar, Dean M. Tullsen, Brad Calder. Computer Science and Engineering Department, University of California at San Diego. {etune,rakumar,tullsen,calder}@cs.ucsd.edu

Abstract: A simultaneous multithreading (SMT) processor can issue instructions from several threads every cycle, allowing it to effectively hide various instruction latencies; this effect increases with the number of simultaneous contexts supported. However, each added context on an SMT processor incurs a cost in complexity, which may lead to an increase in pipeline length or a decrease in the maximum clock rate. This paper presents new designs for multithreaded processors which combine a conservative SMT implementation with a coarse-grained multithreading capability. By presenting more virtual contexts to the operating system and user than are supported ...

The flexibility of SMT comes at a cost. The register file and rename tables must be enlarged to accommodate the architectural registers of the additional threads. This in turn can increase the clock cycle time and/or the depth of the pipeline. Coarse-grained multithreading (CGMT) [1, 21, 26] is a more restrictive model where the processor can only execute instructions from one thread at a time, but where it can switch to a new thread after a short delay. This makes CGMT suited for hiding longer delays. Soon, general-purpose microprocessors will be experiencing delays to main memory of 500 or more cycles. This means that a context switch in response to a memory access can take tens of cycles and still provide a considerable performance benefit.
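The last claim can be made concrete with simple arithmetic; assuming, purely for illustration, a 30-cycle cost to switch a thread out and another 30 cycles to switch it back in (the excerpt only says "tens of cycles"), a 500-cycle memory access still leaves most of its latency hidden:

\[
\text{cycles hidden per miss} = 500 - (30 + 30) = 440 .
\]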
  • Amdahl's Law, Threading, OpenMP
Computer Science 61C, Spring 2018, Wawrzynek and Weaver: Amdahl's Law, Threading, OpenMP.

Big Idea: Amdahl's (Heartbreaking) Law. The speedup due to an enhancement E is:

Speedup w/ E = Exec time w/o E / Exec time w/ E

Suppose that enhancement E accelerates a fraction F (F < 1) of the task by a factor S (S > 1), and the remainder of the task is unaffected:

Execution Time w/ E = Execution Time w/o E × [(1 − F) + F/S]
Speedup w/ E = 1 / [(1 − F) + F/S]

Here (1 − F) is the non-sped-up part and F/S the sped-up part. Example: the execution time of half of the program can be accelerated by a factor of 2. What is the overall program speedup? 1 / (0.5 + 0.5/2) = 1 / 0.75 = 1.33.

Example #1: Consider an enhancement which runs 20 times faster but which is only usable 25% of the time: Speedup w/ E = 1 / (0.75 + 0.25/20) = 1.31. What if it is usable only 15% of the time? Speedup w/ E = 1 / (0.85 + 0.15/20) = 1.17. Amdahl's Law tells us that to achieve linear speedup with 100 processors, none of the original computation can be scalar! To get a speedup of 90 from 100 processors, the percentage of the original program that could be scalar would have to be 0.1% or less: Speedup w/ E = 1 / (0.001 + 0.999/100) = 90.99.

If the portion of the program that can be parallelized is small, then the speedup is limited: the non-parallel portion limits the performance.

Strong and Weak Scaling: getting good speedup on a parallel processor while keeping the problem size fixed is harder than getting good speedup by increasing the size of the problem.
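A minimal C sketch of the law as stated above, reproducing the slides' worked examples (all values are taken from the excerpt):

#include <stdio.h>

/* Amdahl's law: speedup = 1 / ((1 - F) + F/S), where F is the fraction
   of the task that is enhanced and S is the speedup of that fraction. */
static double amdahl(double F, double S) {
    return 1.0 / ((1.0 - F) + F / S);
}

int main(void) {
    printf("%.2f\n", amdahl(0.50, 2.0));    /* 1.33: half the program, 2x faster  */
    printf("%.2f\n", amdahl(0.25, 20.0));   /* 1.31: 20x enhancement, usable 25%  */
    printf("%.2f\n", amdahl(0.15, 20.0));   /* 1.17: same enhancement, usable 15% */
    printf("%.2f\n", amdahl(0.999, 100.0)); /* 90.99: 99.9% parallel, 100 cores   */
    return 0;
}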
  • Parallel Generation of Image Layers Constructed by Edge Detection
International Journal of Applied Engineering Research, ISSN 0973-4562, Volume 13, Number 10 (2018), pp. 7323-7332. © Research India Publications. http://www.ripublication.com

Speed Up Improvement of Parallel Image Layers Generation Constructed by Edge Detection Using Message Passing Interface. Alaa Ismail Elnashar, Faculty of Science, Computer Science Department, Minia University, Egypt; Associate Professor, Department of Computer Science, College of Computers and Information Technology, Taif University, Saudi Arabia.

Abstract: Several image processing techniques require intensive computations that consume CPU time to achieve their results. Accelerating these techniques is a desired goal. Parallelization can be used to save the execution time of such techniques. In this paper, we propose a parallel technique that employs both point-to-point and collective communication patterns to improve the speedup of two parallel edge detection techniques that generate multiple image layers using the Message Passing Interface, MPI. In addition, the performance of the proposed technique is compared with that of the two concerned ones. Keywords: Image Processing, Image Segmentation, Parallel ...

A. Elnashar [29] introduced two parallel techniques, NASHT1 and NASHT2, that apply edge detection to produce a set of layers for an input image. The proposed techniques generate an arbitrary number of image layers in a single parallel run instead of generating a unique layer as in the traditional case; this helps in selecting the layers with high-quality edges among the generated ones. Each presented parallel technique has two versions, based on the point-to-point communication "Send" and collective communication "Bcast" functions. The presented techniques achieved notable relative speedup compared with that of the sequential ones. In this paper, we introduce a speedup improvement for both NASHT1 and NASHT2.
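The excerpt distinguishes point-to-point "Send" and collective "Bcast" variants. As a rough illustration of the difference (not the authors' actual code; the buffer size is made up and the edge-detection kernel itself is omitted), the two distribution patterns look like this in standard MPI:

#include <mpi.h>
#include <string.h>

#define IMG_SIZE (512 * 512) /* hypothetical image buffer size */

int main(int argc, char **argv) {
    int rank, size;
    static unsigned char image[IMG_SIZE];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0)
        memset(image, 128, IMG_SIZE); /* stand-in for loading the input image */

    /* Point-to-point variant: the root sends the image to each worker. */
    if (rank == 0) {
        for (int dest = 1; dest < size; dest++)
            MPI_Send(image, IMG_SIZE, MPI_UNSIGNED_CHAR, dest, 0, MPI_COMM_WORLD);
    } else {
        MPI_Recv(image, IMG_SIZE, MPI_UNSIGNED_CHAR, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
    }

    /* Collective variant: one broadcast call; MPI libraries typically
       implement it as a tree, which scales better with process count. */
    MPI_Bcast(image, IMG_SIZE, MPI_UNSIGNED_CHAR, 0, MPI_COMM_WORLD);

    /* ... each rank would now compute its edge-detected layer ... */

    MPI_Finalize();
    return 0;
}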
  • CS650 Computer Architecture Lecture 10 Introduction to Multiprocessors
NJIT Computer Science Dept, CS650 Computer Architecture. Lecture 10: Introduction to Multiprocessors and PC Clustering. Andrew Sohn, Computer Science Department, New Jersey Institute of Technology. 12/7/2003.

Key Issues. Run PhotoShop on 1 PC or N PCs. Programmability: how to program a bunch of PCs viewed as a single logical machine. Performance scalability, i.e., speedup: run PhotoShop on 1 PC (forget the specs of this PC), then run PhotoShop on N PCs. Will it run faster on N PCs? Speedup = ?

Types of Multiprocessors. Key: data and instruction streams.
• Single Instruction Single Data (SISD): Intel processors, AMD processors.
• Single Instruction Multiple Data (SIMD): array processors; the Pentium MMX feature.
• Multiple Instruction Single Data (MISD): systolic arrays; special-purpose machines.
• Multiple Instruction Multiple Data (MIMD): typical multiprocessors (Sun, SGI, Cray, ...).
• Single Program Multiple Data (SPMD): a programming model.

[Diagrams: a shared-memory multiprocessor, with processors connected through an interconnection network to main memory, storage and I/O; and a distributed-memory multiprocessor, with processor/memory/I/O nodes connected by an interconnection network.]
  • High-Performance Message Passing Over Generic Ethernet Hardware with Open-MX Brice Goglin
High-Performance Message Passing over Generic Ethernet Hardware with Open-MX. Brice Goglin, INRIA Bordeaux - Sud-Ouest, LaBRI, 351 cours de la Libération, F-33405 Talence, France. Parallel Computing, Elsevier, 2011, 37 (2), pp. 85-100. doi:10.1016/j.parco.2010.11.001. HAL Id: inria-00533058, https://hal.inria.fr/inria-00533058, submitted on 5 Nov 2010.

Abstract: In the last decade, cluster computing has become the most popular high-performance computing architecture. Although numerous technological innovations have been proposed to improve the interconnection of nodes, many clusters still rely on commodity Ethernet hardware to implement message passing within parallel applications. We present Open-MX, an open-source message passing stack over generic Ethernet. It offers the same abilities as the specialized Myrinet Express stack, without requiring dedicated support from the networking hardware. Open-MX works transparently in the most popular MPI implementations through its MX interface compatibility.
  • An Investigation of Symmetric Multi-Threading Parallelism for Scientific Applications
An Investigation of Symmetric Multi-Threading Parallelism for Scientific Applications: Installation and Performance Evaluation of ASCI sPPM. Frank Castaneda, [email protected]; Nikola Vouk, [email protected]. North Carolina State University, CSC 591c, Spring 2003. May 2, 2003. Dr. Frank Mueller.

Introduction. Our project is to investigate the use of Symmetric Multi-Threading (SMT), or "Hyper-Threading" in Intel parlance, applied to coarse-grain parallelism in large-scale distributed scientific applications. The processors provide the capability to run two streams of instructions simultaneously, to fully utilize all available functional units in the CPU. We are investigating the speedup available when running two threads on a single processor that use different functional units. The idea we propose is to utilize the hyper-thread for asynchronous communication activity, to improve coarse-grain parallelism. The hypothesis is that there will be little contention for similar processor functional units when splitting the communication work from the computational work, thus allowing better parallelism and better exploiting the hyper-threading technology. We believe that with minimal changes to the 2.5 Linux kernel we can achieve a 25-50% speedup, depending on the amount of communication, by utilizing a hyper-thread-aware scheduler and a custom communications API.

Experiment Setup. Software setup: custom Linux kernel 2.5.68 with Red Hat distribution; modification of the kernel scheduler to run processes together; custom test code. Hardware setup: IBM xSeries 335 single/dual processor, 2.0 GHz Xeon. In order to test our code we focus on the following scenarios: a serial execution of the communication / computation sections (executing in serial is used to establish the expected back-to-back run-time).
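The structure they describe, one hardware thread computing while its sibling handles communication, can be sketched with plain POSIX threads. This is only an illustration of the idea, not the authors' custom kernel scheduler or communications API, and the communication loop is a stub:

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

static atomic_int done = 0;

/* Runs on the sibling hyper-thread: make communication progress
   while the main thread computes. The "communication" is a stub. */
static void *comm_thread(void *arg) {
    (void)arg;
    while (!atomic_load(&done)) {
        /* poll the network, progress asynchronous sends/receives ... */
        usleep(1000); /* stand-in for communication work */
    }
    return NULL;
}

int main(void) {
    pthread_t comm;
    pthread_create(&comm, NULL, comm_thread, NULL);

    /* Floating-point work keeps the main context's functional units
       busy while the comm thread mostly polls and sleeps. */
    double sum = 0.0;
    for (long i = 1; i <= 100000000L; i++)
        sum += 1.0 / (double)i;

    atomic_store(&done, 1);
    pthread_join(comm, NULL);
    printf("sum = %f\n", sum);
    return 0;
}

Compile with "cc -pthread"; pinning the two threads to sibling hardware contexts of one core would be done separately, e.g. with taskset.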
  • Computer Systems Architecture
CS 352H: Computer Systems Architecture. Topic 14: Multicores, Multiprocessors, and Clusters. University of Texas at Austin, Fall 2009, Don Fussell.

Introduction. Goal: connecting multiple computers to get higher performance. Multiprocessors: scalability, availability, power efficiency. Job-level (process-level) parallelism: high throughput for independent jobs. Parallel processing program: a single program run on multiple processors. Multicore microprocessors: chips with multiple processors (cores).

Hardware and Software. Hardware can be serial (e.g., Pentium 4) or parallel (e.g., quad-core Xeon e5345); software can be sequential (e.g., matrix multiplication) or concurrent (e.g., an operating system). Sequential or concurrent software can run on serial or parallel hardware. Challenge: making effective use of parallel hardware.

What We've Already Covered. §2.11: Parallelism and Instructions (synchronization). §3.6: Parallelism and Computer Arithmetic (associativity). §4.10: Parallelism and Advanced Instruction-Level Parallelism. §5.8: Parallelism and Memory Hierarchies (cache coherence). §6.9: Parallelism and I/O: Redundant Arrays of Inexpensive Disks.

Parallel Programming. Parallel software is the problem. We need to get significant performance improvement; otherwise, just use a faster uniprocessor.
  • Computer Architecture: Parallel Processing Basics
Computer Architecture: Parallel Processing Basics. Onur Mutlu & Seth Copen Goldstein, Carnegie Mellon University, 9/9/13.

Today: What is parallel processing? Why? Kinds of parallel processing: multiprocessing and multithreading. Measuring success: speedup, Amdahl's Law. Bottlenecks to parallelism.

Concurrent systems span embedded-physical systems (sensor networks, Claytronics), geographically distributed systems (the power grid, the Internet), cloud computing (EC2, Tashi), and parallel systems. These classes differ roughly as follows:

Property               Physical  Geographical  Cloud   Parallel
Geophysical location   +++       ++            ---     ---
Relative location      +++       +++           +       -
Faults                 ++++      +++           ++++    --
Number of processors   +++       +++           +       -
Network structure      varies    varies        fixed   fixed
Network connectivity   ---       ---           +       +

Concurrent system challenge: programming. The old joke: how long does it take to write a parallel program? One graduate student year. Parallel programming, again?? Increased demand (multicore), increased scale (cloud), improved compute/communicate ratios, and a change in application focus (irregular, recursive data structures).

Why parallel computers? Parallelism: doing multiple things at a time. Things: instructions, ...
  • Your Speedup Is a Megaflop!
KOLUMNE, CHIMIA 47 (1993) Nr. 1/2 (Januar/Februar), pp. 22-23. COMPUTATIONAL CHEMISTRY COLUMN. Column editors: Prof. Dr. J. Weber, University of Geneva; Prof. Dr. H. Huber, University of Basel; Dr. H. P. Weber, Sandoz AG, Basel. © Neue Schweizerische Chemische Gesellschaft. ISSN 0009-4293.

Your Speedup is a Megaflop!

'I just bought a new personal computer, the LAST, with a speed of 50 MHz, whereas your old MADOS has only a 25 MHz frequency.' 'You are joking, the LAST makes only 2 Mflops, whereas mine makes 25 Mips.' 'Never mind, your old scalar processors will not do it for long, I just read about a supercomputer of the Teraflop generation with a speedup of over 100!' Not far from reality, perhaps virtual ...

Another common measure is the millions of instructions performed per second (Mips). Again this is not a very useful measure for number crunching, as different processors need a different number of instructions to perform a useful calculation, e.g. a multiplication. For example, workstations usually have RISC processors (reduced instruction set computer), which on average need slightly more instructions to perform floating point operations than the 'classical' mainframe processors.

... the second instruction on the first number at the same time as the first worker performs the first instruction on the second number, etc. For the first 20 steps no product is leaving your assembly line (no result is leaving your vector processor), but after that you get a product (result) in each step, i.e. after a total of 120 steps your loop is processed! You gain a factor of about 17, i.e. if your scalar processor performs 1 Mflops, your vector processor yields 17 Mflops, which would roughly correspond to the performance advertised by the computer manufacturer. However, any scientific program has instructions other than loops which do not vectorize and, therefore, the efficiency is in practice much lower! Depending on your problem and on ...
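The factor of about 17 is plain pipeline arithmetic. Assuming the column's setup, a loop over 100 numbers and a 20-step assembly line (both implied by the excerpt's "120 steps"), it follows as:

\[
\text{scalar: } 100 \times 20 = 2000 \text{ steps}, \qquad
\text{vector: } \underbrace{20}_{\text{pipeline fill}} + \underbrace{100}_{\text{one result per step}} = 120 \text{ steps},
\]
\[
\text{speedup} = \frac{2000}{120} \approx 16.7 \approx 17 .
\]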
  • Chapter 4 Data-Level Parallelism in Vector, SIMD, and GPU Architectures
Computer Architecture: A Quantitative Approach, Fifth Edition. Chapter 4: Data-Level Parallelism in Vector, SIMD, and GPU Architectures. Copyright © 2012, Elsevier Inc. All rights reserved.

Contents: 1. SIMD architecture. 2. Vector architecture optimizations: multiple lanes, vector length registers, vector mask registers, memory banks, stride, scatter-gather. 3. Programming vector architectures. 4. SIMD extensions for media apps. 5. GPUs (graphical processing units). 6. Fermi architecture innovations. 7. Examples of loop-level parallelism. 8. Fallacies.

Classes of Computers: Flynn's Taxonomy. SISD: single instruction stream, single data stream. SIMD: single instruction stream, multiple data streams. New: SIMT, single instruction multiple threads (for GPUs). MISD: multiple instruction streams, single data stream (no commercial implementation). MIMD: multiple instruction streams, multiple data streams (tightly-coupled MIMD, loosely-coupled MIMD).

Introduction: advantages of SIMD architectures. 1. Can exploit significant data-level parallelism for matrix-oriented scientific computing and for media-oriented image and sound processing. 2. More energy efficient than MIMD: only one instruction needs to be fetched per multiple data operations, rather than one instruction per data operation, which makes SIMD attractive for personal mobile devices. 3. Allows programmers to continue thinking sequentially. SIMD/MIMD comparison: the potential speedup for SIMD is twice that from MIMD! x86 processors are expected to gain two additional cores per chip per year, while SIMD width doubles every four years.

SIMD parallelism, SIMD architectures: A. vector architectures; B. SIMD extensions for mobile systems and multimedia applications; C. ...
  • Vector Processors
VECTOR PROCESSORS. Computer Science Department, CS 566, Fall 2012. Eman Aldakheel, Ganesh Chandrasekaran, Prof. Ajay Kshemkalyani.

Outline: what vector processors are; vector processing and parallel processing; basic vector architecture; vector instructions; vector performance; advantages; disadvantages; applications; conclusion.

Vector Processors. A processor can operate on an entire vector in one instruction; the work is done automatically in parallel (simultaneously). The operands to the instructions are complete vectors instead of single elements, which reduces the fetch and decode bandwidth. Data parallelism. Tasks usually consist of large active data sets, poor locality, and long run times.

Vector Processors (cont'd). Each result is independent of previous results, enabling a long pipeline (the compiler ensures there are no dependencies) and a high clock rate. Vector instructions access memory with a known pattern, which reduces branches and branch problems in pipelines. A single vector instruction implies lots of work. Example:

for (i = 0; i < n; i++)
    c[i] = a[i] + b[i];

Vector Processors (cont'd). The scalar loop and its vectorized equivalent:

// C code
for (i = 0; i < 16; i++)
    b[i] += a[i];

// Vectorized code
set    vl, 16         // set the vector length to 16
vload  vr0, b         // load 16 elements of b
vload  vr1, a         // load 16 elements of a
vadd   vr0, vr0, vr1  // 16 independent additions: b[i] += a[i]
vstore vr0, b         // store the result back to b

Each vector instruction holds many units of independent operations: the additions b[0]+=a[0] through b[15]+=a[15] execute in pipelined fashion on 1 vector lane, or simultaneously across 16 vector lanes.
  • SIMD the Good, the Bad and the Ugly
SIMD: The Good, the Bad and the Ugly. Viktor Leis.

Outline: 1. Why does SIMD exist? 2. How to use SIMD? 3. When does SIMD help?

1 Why Does SIMD Exist? Hardware trends: for decades, single-threaded performance doubled every 18-22 months; during the last decade, single-threaded performance has been stagnating; clock rates stopped growing due to heat (the end of Dennard scaling); the number of transistors is still growing, and is spent on large caches, many cores (thread parallelism), and SIMD (data parallelism).

[Figure: number of cores per CPU, 2000 to 2020, growing from 1 (NetBurst Foster, Kentsfield, Nehalem, Sandy Bridge, Haswell, Broadwell, Skylake SP generations) to 32 cores (AMD Naples) and 64 cores (AMD Rome). Figures: Skylake CPU; Skylake core.]

x86 SIMD history: 1997: MMX, 64-bit (Pentium 1). 1999: SSE, 128-bit (Pentium 3). 2011: AVX, 256-bit float (Sandy Bridge). 2013: AVX2, 256-bit int (Haswell). 2017: AVX-512, 512-bit (Skylake Server). Instructions were usually introduced by Intel; AMD typically followed quickly (except for AVX-512).

Single Instruction Multiple Data (SIMD): data parallelism exposed by the CPU's instruction set. A CPU register holds multiple fixed-size values (e.g., 4×32-bit); instructions (e.g., addition) are performed on two such registers, executing, e.g., 4 additions in one instruction:

(42, 1, 7) + (31, 6, 2) = (73, 7, 9)

Registers: SSE: 16×128 bit (XMM0...XMM15). AVX: 16×256 bit (YMM0...YMM15). AVX-512: 32×512 bit (ZMM0...ZMM31). XMM0 is the lower half ...
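The lane-wise addition shown above corresponds to a single SSE2 instruction. A minimal intrinsics sketch (the fourth lane is simply padded with zero, since an XMM register holds four 32-bit integers):

#include <emmintrin.h> /* SSE2 intrinsics */
#include <stdio.h>

int main(void) {
    /* Four 32-bit integers per 128-bit XMM register;
       _mm_set_epi32 takes elements from the highest lane down. */
    __m128i a = _mm_set_epi32(0, 42, 1, 7);
    __m128i b = _mm_set_epi32(0, 31, 6, 2);
    __m128i c = _mm_add_epi32(a, b); /* four additions, one instruction */

    int out[4];
    _mm_storeu_si128((__m128i *)out, c);
    printf("%d %d %d %d\n", out[0], out[1], out[2], out[3]); /* 9 7 73 0 */
    return 0;
}

Any SSE2-capable compiler accepts this as-is on x86-64; the four additions retire as one vadd-style instruction.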