Intel® Xeon Phi™ Coprocessor High-Performance Programming

Total Pages: 16

File Type: pdf, Size: 1020 KB

Intel® Xeon Phi™ Coprocessor High-Performance Programming
Jeffers • Reinders

AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD • PARIS • SAN DIEGO • SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO
Morgan Kaufmann is an imprint of Elsevier

Contents
Foreword
Preface
Acknowledgements

CHAPTER 1 Introduction 1
  Trend: more parallelism 1
  Why Intel® Xeon Phi™ coprocessors are needed 2
  Platforms with coprocessors 5
  The first Intel® Xeon Phi™ coprocessor 6
  Keeping the "Ninja Gap" under control 9
  Transforming-and-tuning double advantage 10
  When to use an Intel® Xeon Phi™ coprocessor 11
  Maximizing performance on processors first 11
  Why scaling past one hundred threads is so important 12
  Maximizing parallel program performance 15
  Measuring readiness for highly parallel execution 15
  What about GPUs? 16
  Beyond the ease of porting to increased performance 16
  Transformation for performance 17
  Hyper-threading versus multithreading 17
  Coprocessor major usage model: MPI versus offload 18
  Compiler and programming models 19
  Cache optimizations 20
  Examples, then details 21
  For more information 21

CHAPTER 2 High Performance Closed Track Test Drive! 23
  Looking under the hood: coprocessor specifications 24
  Starting the car: communicating with the coprocessor 26
  Taking it out easy: running our first code 28
  Starting to accelerate: running more than one thread 32
  Pedal to the metal: hitting full speed using all cores 37
  Easing in to the first curve: accessing memory bandwidth 49
  High speed take curve: maximizing memory bandwidth 55
  Back to the pit: a summary 57

CHAPTER 3 A Friendly Country Road Race 59
  Preparing for our country road trip: chapter focus 59
  Getting a feel for the road: the 9-point stencil algorithm 60
  At the starting line: the baseline 9-point stencil implementation 61
  Rough road ahead: running the baseline stencil code 68
  Cobblestone street ride: vectors but not yet scaling 70
  Open road all-out race: vectors plus scaling 72
  Some grease and wrenches!: a bit of tuning 75
  Adjusting the "Alignment" 76
  Using streaming stores 77
  Using huge 2-MB memory pages 79
  Summary 81
  For more information 81

CHAPTER 4 Driving Around Town: Optimizing A Real-World Code Example 83
  Choosing the direction: the basic diffusion calculation 84
  Turn ahead: accounting for boundary effects 84
  Finding a wide boulevard: scaling the code 91
  Thunder road: ensuring vectorization 93
  Peeling out: peeling code from the inner loop 97
  Trying higher octane fuel: improving speed using data locality and tiling 100
  High speed driver certificate: summary of our high speed tour 105

CHAPTER 5 Lots of Data (Vectors) 107
  Why vectorize? 107
  How to vectorize 108
  Five approaches to achieving vectorization 108
  Six step vectorization methodology 110
  Step 1. Measure baseline release build performance 111
  Step 2. Determine hotspots using Intel® VTune™ Amplifier XE 111
  Step 3. Determine loop candidates using Intel compiler vec-report 111
  Step 4. Get advice using the Intel Compiler GAP report and toolkit resources 112
  Step 5. Implement GAP advice and other suggestions (such as using elemental functions and/or array notations) 112
  Step 6. Repeat! 112
  Streaming through caches: data layout, alignment, prefetching, and so on 112
  Why data layout affects vectorization performance 113
  Data alignment 114
  Prefetching 116
  Streaming stores 121
  Compiler tips 123
  Avoid manual loop unrolling 123
  Requirements for a loop to vectorize (Intel® Compiler) 124
  Importance of inlining, interference with simple profiling 126
  Compiler options 126
  Memory disambiguation inside vector-loops 127
  Compiler directives 128
  SIMD directives 129
  The VECTOR and NOVECTOR directives 134
  The IVDEP directive 135
  Random number function vectorization 137
  Utilizing full vectors, -opt-assume-safe-padding 138
  Option -opt-assume-safe-padding 142
  Data alignment to assist vectorization 142
  Tradeoffs in array notations due to vector lengths 146
  Use array sections to encourage vectorization 150
  Fortran array sections 150
  Cilk Plus array sections and elemental functions 152
  Look at what the compiler created: assembly code inspection 156
  How to find the assembly code 157
  Quick inspection of assembly code 158
  Numerical result variations with vectorization 163
  Summary 163
  For more information 163

CHAPTER 6 Lots of Tasks (not Threads) 165
  OpenMP, Fortran 2008, Intel® TBB, Intel® Cilk™ Plus, Intel® MKL 166
  Task creation needs to happen on the coprocessor 166
  Importance of thread pools 168
  OpenMP 168
  Parallel processing model 168
  Directives 169
  Significant controls over OpenMP 169
  Nesting 170
  Fortran 2008 171
  DO CONCURRENT 171
  DO CONCURRENT and DATA RACES 171
  DO CONCURRENT definition 172
  DO CONCURRENT vs. FORALL 173
  DO CONCURRENT vs. OpenMP "Parallel" 173
  Intel® TBB 174
  History 175
  Using TBB 177
  parallel_for 177
  blocked_range 177
  Partitioners 178
  parallel_reduce 179
  parallel_invoke 180
  Notes on C++11 180
  TBB summary 181
  Cilk Plus 181
  History 183
  Borrowing components from TBB 183
  Loaning components to TBB 184
  Keyword spelling 184
  cilk_for 184
  cilk_spawn and cilk_sync 185
  Reducers (Hyperobjects) 187
  Array notation and elemental functions 187
  Cilk Plus summary 187
  Summary 187
  For more information 188

CHAPTER 7 Offload 189
  Two offload models 190
  Choosing offload vs. native execution 191
  Non-shared memory model: using offload pragmas/directives 191
  Shared virtual memory model: using offload with shared VM 191
  Intel® Math Kernel Library (Intel® MKL) automatic offload 192
  Language extensions for offload 192
  Compiler options and environment variables for offload 193
  Sharing environment variables for offload 195
  Offloading to multiple coprocessors 195
  Using pragma/directive offload 195
  Placing variables and functions on the coprocessor 198
  Managing memory allocation for pointer variables 200
  Optimization for time: another reason to persist allocations 206
  Target-specific code using a pragma in C/C++ 206
  Target-specific code using a directive in Fortran 209
  Code that should not be built for processor-only execution 209
  Predefined macros for Intel® MIC architecture 211
  Fortran arrays 211
  Allocating memory for parts of C/C++ arrays 212
  Allocating memory for parts of Fortran arrays 213
  Moving data from one variable to another 214
  Restrictions on offloaded code using a pragma 215
  Using offload with shared virtual memory 217
  Using shared memory and shared variables 217
  About shared functions 219
  Shared memory management functions 219
  Synchronous and asynchronous function execution: _cilk_offload 219
  Sharing variables and functions: _cilk_shared 220
  Rules for using _cilk_shared and _cilk_offload 222
  Synchronization between the processor and the target 222
  Writing target-specific code with _cilk_offload 223
  Restrictions on offloaded code using shared virtual memory 224
  Persistent data when using shared virtual memory 225
  C++ declarations of persistent data with shared virtual memory 227
  About asynchronous computation 228
  About asynchronous data transfer 229
  Asynchronous data transfer from the processor to the coprocessor 229
  Applying the target attribute to multiple declarations 230
  Vec-report option used with offloads 235
  Measuring timing and data in offload regions 236
  _Offload_report 236
  Using libraries in offloaded code 237
  About creating offload libraries with xiar and xild 237
  Performing file I/O on the coprocessor 238
  Logging stdout and stderr from offloaded code 240
  Summary 241
  For more information 241

CHAPTER 8 Coprocessor Architecture 243
  The Intel® Xeon Phi™ coprocessor family 244
  Coprocessor card design 245
  Intel® Xeon Phi™ coprocessor silicon overview 246
  Individual coprocessor core architecture 247
  Instruction and multithread processing 249
  Cache organization and memory access considerations 251
  Prefetching 252
  Vector processing unit architecture 253
  Vector instructions 254
  Coprocessor PCIe system interface and DMA 257
  DMA capabilities 258
  Coprocessor power management capabilities 260
  Reliability, availability, and serviceability (RAS) 263
  Machine check architecture (MCA) 264
  Coprocessor system management controller (SMC) 265
  Sensors 265
  Thermal design power monitoring and control 266
  Fan speed control 266
  Potential application impact 266
  Benchmarks 267
  Summary 267
  For more information 267

CHAPTER 9 Coprocessor System Software 269
  Coprocessor software architecture overview 269
  Symmetry 271
  Ring levels: user and kernel 271
  Coprocessor programming models and options 271
  Breadth and depth 273
  Coprocessor MPI programming models 274
  Coprocessor software architecture components 276
  Development tools and application layer 276
  Intel® Manycore Platform Software Stack 277
  MYO: mine yours ours 277
  COI: coprocessor offload infrastructure 278
  SCIF: symmetric communications interface 278
  Virtual networking (NetDev), TCP/IP, and sockets 278
  Coprocessor system management 279
  Coprocessor components for MPI applications 282
  Linux support for Intel® Xeon Phi™ coprocessors 287
  Tuning memory allocation performance 288
  Controlling the number of 2-MB pages 288
  Monitoring the number of 2-MB pages on the coprocessor 288
  A sample method for allocating 2-MB pages 289
  Summary 290
  For more information 291

CHAPTER 10 Linux on the Coprocessor 293
  Coprocessor Linux baseline 293
  Introduction to coprocessor Linux bootstrap and configuration 294
  Default coprocessor Linux configuration 295
  Step 1: Ensure root access 296
  Step 2: Generate the default configuration 296
  Step 3: Change configuration 296
  Step 4: Start the Intel® MPSS service 296
  Changing coprocessor configuration 297
  Configurable components 297
  Configuration files 298
  Configuring boot parameters 299
  Coprocessor root file system 300
  The micctrl utility 305
  Coprocessor state control 305
  Booting coprocessors 306
  Shutting down coprocessors 306
  Rebooting the coprocessors 306
  Resetting coprocessors 307
  Coprocessor configuration initialization and propagation 308
  Helper functions for configuration parameters 309
  Other file system helper functions 311
  Adding software 312
  Adding files to the root file system 313
  Example: Adding a new global file set 314
  Coprocessor Linux boot process 315
  Booting the coprocessor 315
  Coprocessors in a Linux cluster 318
  Intel® Cluster Ready 319
  How Intel® Cluster Checker works 319
  Intel® Cluster Checker support for coprocessors 320
  Summary 322
  For more information 323

CHAPTER 11 Math Library 325
  Intel Math Kernel Library overview 326
  Intel MKL differences on the coprocessor 327
  Intel MKL and Intel compiler 327
  Coprocessor support overview 327
  Control functions for automatic offload 328
  Examples of how to set the environment variables 330
  Using the coprocessor in native mode 330
  Tips for using native mode 332
  Using automatic offload mode 332
  How to enable automatic offload 333
  Examples of using control work division 333
  Tips for effective use of automatic offload 333
  Some tips for effective use of Intel MKL with or without offload 336
  Using compiler-assisted offload 337
  Tips for using compiler-assisted offload 338
  Precision choices and variations 339
  Fast transcendentals and mathematics 339
  Understanding the potential for floating-point arithmetic variations 339
  Summary 342
  For more information 342

CHAPTER 12 MPI 343
  MPI overview 343
  Using MPI on Intel® Xeon Phi™ coprocessors 345
  Heterogeneity (and why it matters) 345
  Prerequisites (batteries not included) 348
  Offload from an MPI rank 349
  Hello world 350
  Trapezoidal rule 350
  Using MPI natively on the coprocessor 354
  Hello world (again) 354
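The running example of Chapters 3 and 4 is a 9-point stencil sweep that the book progressively vectorizes, scales, aligns, and tiles. As a rough illustration of what such a kernel looks like, here is a hypothetical plain-C sketch (not the book's actual code): every interior point of an n x n grid becomes a weighted sum of itself and its eight neighbors.

```c
#include <stddef.h>

/* Hypothetical 9-point stencil sweep: interior point (i, j) gets weight wc
 * on itself and weight wn on each of its eight neighbors. The grid is an
 * n x n array in row-major order; border points are left untouched. */
void stencil_9pt(const double *in, double *out, long n, double wc, double wn)
{
    for (long i = 1; i < n - 1; i++) {
        for (long j = 1; j < n - 1; j++) {
            double sum = 0.0;
            for (long di = -1; di <= 1; di++)
                for (long dj = -1; dj <= 1; dj++)
                    sum += (di == 0 && dj == 0 ? wc : wn)
                         * in[(i + di) * n + (j + dj)];
            out[i * n + j] = sum;
        }
    }
}
```

With wc + 8*wn = 1 this acts as a smoothing pass, so a constant field passes through unchanged; the triple-nested loop over neighbors is exactly the kind of code the vectorization and tiling chapters transform.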
Recommended publications
  • Using Intel® Math Kernel Library and Intel® Integrated Performance Primitives in the Microsoft* .NET* Framework
    Using Intel® Math Kernel Library and Intel® Integrated Performance Primitives in the Microsoft* .NET* Framework. Document Number: 323195-001US. INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL® PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR. Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information. The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications.
  • Intel(R) Math Kernel Library for Linux* OS User's Guide
    Intel® Math Kernel Library for Linux* OS User's Guide. MKL 10.3 - Linux* OS. Document Number: 315930-012US. Legal Information: INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL(R) PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE, TO ANY INTELLECTUAL PROPERTY RIGHTS IS GRANTED BY THIS DOCUMENT. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY, RELATING TO SALE AND/OR USE OF INTEL PRODUCTS INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT OR OTHER INTELLECTUAL PROPERTY RIGHT. UNLESS OTHERWISE AGREED IN WRITING BY INTEL, THE INTEL PRODUCTS ARE NOT DESIGNED NOR INTENDED FOR ANY APPLICATION IN WHICH THE FAILURE OF THE INTEL PRODUCT COULD CREATE A SITUATION WHERE PERSONAL INJURY OR DEATH MAY OCCUR. Intel may make changes to specifications and product descriptions at any time, without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined." Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The information here is subject to change without notice. Do not finalize a design with this information. The products described in this document may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request. Contact your local Intel sales office or your distributor to obtain the latest specifications and before placing your product order.
  • Intel® Math Kernel Library for Windows* OS User's Guide
    Intel® Math Kernel Library for Windows* OS User's Guide Intel® MKL - Windows* OS Document Number: 315930-027US Legal Information Contents Contents Legal Information................................................................................7 Introducing the Intel® Math Kernel Library...........................................9 Getting Help and Support...................................................................11 Notational Conventions......................................................................13 Chapter 1: Overview Document Overview.................................................................................15 What's New.............................................................................................15 Related Information.................................................................................15 Chapter 2: Getting Started Checking Your Installation.........................................................................17 Setting Environment Variables ..................................................................17 Compiler Support.....................................................................................19 Using Code Examples...............................................................................19 What You Need to Know Before You Begin Using the Intel® Math Kernel Library...............................................................................................19 Chapter 3: Structure of the Intel® Math Kernel Library Architecture Support................................................................................23
  • 0 BLIS: a Modern Alternative to the BLAS
    0 BLIS: A Modern Alternative to the BLAS FIELD G. VAN ZEE and ROBERT A. VAN DE GEIJN, The University of Texas at Austin We propose the portable BLAS-like Interface Software (BLIS) framework which addresses a number of shortcomings in both the original BLAS interface and present-day BLAS implementations. The framework allows developers to rapidly instantiate high-performance BLAS-like libraries on existing and new architectures with relatively little effort. The key to this achievement is the observation that virtually all computation within level-2 and level-3 BLAS operations may be expressed in terms of very simple kernels. Higher-level framework code is generalized so that it can be reused and/or re-parameterized for different operations (as well as different architectures) with little to no modification. Inserting high-performance kernels into the framework facilitates the immediate optimization of any and all BLAS-like operations which are cast in terms of these kernels, and thus the framework acts as a productivity multiplier. Users of BLAS-dependent applications are supported through a straightforward compatibility layer, though calling sequences must be updated for those who wish to access new functionality. Experimental performance of level-2 and level-3 operations is observed to be competitive with two mature open source libraries (OpenBLAS and ATLAS) as well as an established commercial product (Intel MKL). Categories and Subject Descriptors: G.4 [Mathematical Software]: Efficiency General Terms: Algorithms, Performance Additional Key Words and Phrases: linear algebra, libraries, high-performance, matrix, BLAS ACM Reference Format: ACM Trans. Math. Softw. 0, 0, Article 0 (0000), 31 pages.
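The abstract's central observation, that level-2 and level-3 operations can be expressed in terms of very simple kernels, can be illustrated with a deliberately naive sketch. The kernel, names, and structure below are hypothetical and far simpler than BLIS's real micro-kernels and blocking; they only show the layering idea.

```c
#include <stddef.h>

/* One trivial kernel: y += alpha * x (an "axpy"). */
static void axpy(size_t n, double alpha, const double *x, double *y)
{
    for (size_t i = 0; i < n; i++)
        y[i] += alpha * x[i];
}

/* Level-2 (matrix-vector): y += A * x, with A column-major, m x n.
 * Each column of A contributes one axpy call. */
void gemv_acc(size_t m, size_t n, const double *A, const double *x, double *y)
{
    for (size_t j = 0; j < n; j++)
        axpy(m, x[j], A + j * m, y);
}

/* Level-3 (matrix-matrix): C += A * B, all column-major, A m x k, B k x n.
 * Each column of C is one matrix-vector product. */
void gemm_acc(size_t m, size_t k, size_t n,
              const double *A, const double *B, double *C)
{
    for (size_t j = 0; j < n; j++)
        gemv_acc(m, k, A, B + j * k, C + j * m);
}
```

In a real framework only the innermost kernel is replaced with an architecture-specific optimized version, while the generalized outer layers stay untouched; that separation is what the paper means by a "productivity multiplier".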
  • Intel® Parallel Studio Xe 2017 Runtime
    Intel® Parallel Studio XE 2017 Runtime Release Notes 26 July 2016 Contents 1 Introduction ................................................................................................................................................... 1 1.1 What Every User Should Know About This Release ..................................................................... 1 2 Product Contents ......................................................................................................................................... 2 3 System Requirements ................................................................................................................................ 3 3.1 Processor Requirements........................................................................................................................... 3 3.2 Disk Space Requirements ......................................................................................................................... 3 3.3 Operating System Requirements .......................................................................................................... 3 3.4 Memory Requirements .............................................................................................................................. 3 3.5 Additional Software Requirements ...................................................................................................... 3 4 Issues and Limitations ..............................................................................................................................
  • 0 BLIS: a Framework for Rapidly Instantiating BLAS Functionality
    0 BLIS: A Framework for Rapidly Instantiating BLAS Functionality FIELD G. VAN ZEE and ROBERT A. VAN DE GEIJN, The University of Texas at Austin The BLAS-like Library Instantiation Software (BLIS) framework is a new infrastructure for rapidly instantiating Basic Linear Algebra Subprograms (BLAS) functionality. Its fundamental innovation is that virtually all computation within level-2 (matrix-vector) and level-3 (matrix-matrix) BLAS operations can be expressed and optimized in terms of very simple kernels. While others have had similar insights, BLIS reduces the necessary kernels to what we believe is the simplest set that still supports the high performance that the computational science community demands. Higher-level framework code is generalized and implemented in ISO C99 so that it can be reused and/or re-parameterized for different operations (and different architectures) with little to no modification. Inserting high-performance kernels into the framework facilitates the immediate optimization of any BLAS-like operations which are cast in terms of these kernels, and thus the framework acts as a productivity multiplier. Users of BLAS-dependent applications are given a choice of using the traditional Fortran-77 BLAS interface, a generalized C interface, or any other higher level interface that builds upon this latter API. Preliminary performance of level-2 and level-3 operations is observed to be competitive with two mature open source libraries (OpenBLAS and ATLAS) as well as an established commercial product (Intel MKL). Categories and Subject Descriptors: G.4 [Mathematical Software]: Efficiency General Terms: Algorithms, Performance Additional Key Words and Phrases: linear algebra, libraries, high-performance, matrix, BLAS ACM Reference Format: ACM Trans.
  • Accelerating Spark ML Applications Date Published: 2020-01-16 Date Modified
    Best Practices Accelerating Spark ML Applications Date published: 2020-01-16 Date modified: https://docs.cloudera.com/ Legal Notice © Cloudera Inc. 2021. All rights reserved. The documentation is and contains Cloudera proprietary information protected by copyright and other intellectual property rights. No license under copyright or any other intellectual property right is granted herein. Copyright information for Cloudera software may be found within the documentation accompanying each component in a particular release. Cloudera software includes software from various open source or other third party projects, and may be released under the Apache Software License 2.0 (“ASLv2”), the Affero General Public License version 3 (AGPLv3), or other license terms. Other software included may be released under the terms of alternative open source licenses. Please review the license and notice files accompanying the software for additional licensing information. Please visit the Cloudera software product page for more information on Cloudera software. For more information on Cloudera support services, please visit either the Support or Sales page. Feel free to contact us directly to discuss your specific needs. Cloudera reserves the right to change any products at any time, and without notice. Cloudera assumes no responsibility nor liability arising from the use of products, except as expressly agreed to in writing by Cloudera. Cloudera, Cloudera Altus, HUE, Impala, Cloudera Impala, and other Cloudera marks are registered or unregistered trademarks in the United States and other countries. All other trademarks are the property of their respective owners. Disclaimer: EXCEPT AS EXPRESSLY PROVIDED IN A WRITTEN AGREEMENT WITH CLOUDERA, CLOUDERA DOES NOT MAKE NOR GIVE ANY REPRESENTATION, WARRANTY, NOR COVENANT OF ANY KIND, WHETHER EXPRESS OR IMPLIED, IN CONNECTION WITH CLOUDERA TECHNOLOGY OR RELATED SUPPORT PROVIDED IN CONNECTION THEREWITH.
  • Intel Threading Building Blocks
    Praise for Intel Threading Building Blocks “The Age of Serial Computing is over. With the advent of multi-core processors, parallel- computing technology that was once relegated to universities and research labs is now emerging as mainstream. Intel Threading Building Blocks updates and greatly expands the ‘work-stealing’ technology pioneered by the MIT Cilk system of 15 years ago, providing a modern industrial-strength C++ library for concurrent programming. “Not only does this book offer an excellent introduction to the library, it furnishes novices and experts alike with a clear and accessible discussion of the complexities of concurrency.” — Charles E. Leiserson, MIT Computer Science and Artificial Intelligence Laboratory “We used to say make it right, then make it fast. We can’t do that anymore. TBB lets us design for correctness and speed up front for Maya. This book shows you how to extract the most benefit from using TBB in your code.” — Martin Watt, Senior Software Engineer, Autodesk “TBB promises to change how parallel programming is done in C++. This book will be extremely useful to any C++ programmer. With this book, James achieves two important goals: • Presents an excellent introduction to parallel programming, illustrating the most com- mon parallel programming patterns and the forces governing their use. • Documents the Threading Building Blocks C++ library—a library that provides generic algorithms for these patterns. “TBB incorporates many of the best ideas that researchers in object-oriented parallel computing developed in the last two decades.” — Marc Snir, Head of the Computer Science Department, University of Illinois at Urbana-Champaign “This book was my first introduction to Intel Threading Building Blocks.
  • Intel® Math Kernel Library 10.1 for Windows*, Linux*, and Mac OS* X
    Intel® Math Kernel Library 10.1 for Windows*, Linux*, and Mac OS* X. Product Brief: The Flagship for High-Performance Computing. Intel® Math Kernel Library (Intel® MKL) is a library of highly optimized, extensively threaded math routines for science, engineering, and financial applications that require maximum performance. Availability: Intel® C++ Compiler Professional Editions (Windows, Linux, Mac OS X); Intel® Fortran Compiler Professional Editions (Windows, Linux, Mac OS X); Intel® Cluster Toolkit Compiler Edition (Windows, Linux); Intel® Math Kernel Library 10.1 (Windows, Linux, Mac OS X). Functionality: Linear Algebra (BLAS and LAPACK; ScaLAPACK; Sparse Solvers); Fast Fourier Transforms; Vector Math Library; Vector Random Number Generators. "By adopting the Intel MKL DGEMM libraries, our standard benchmarks timing improved between 43 percent and 71 percent…" — Matt Dunbar, Software Developer, ABAQUS, Inc. [Chart: DGEMM threaded performance in GFlops, Intel® MKL 10.1 vs. ATLAS 3.8.0 at 1 and 8 threads, on an Intel® Xeon® quad-core processor E5472 3.0 GHz, 8 MB L2 cache, 16 GB memory, Red Hat 5 Server; matrix sizes M=20000, N=4000, K=64, ..., 512.] Features and Benefits: outstanding performance; multicore and multiprocessor ready; extensive parallelism and scaling. Vector Random Number Generators: Intel MKL Vector Statistical Library (VSL) is a collection of 9 random number generators and 22 probability distributions that deliver significant performance improvements in physics, chemistry, and financial analysis.
  • The BLAS API of BLASFEO: Optimizing Performance for Small Matrices
    The BLAS API of BLASFEO: optimizing performance for small matrices Gianluca Frison1, Tommaso Sartor1, Andrea Zanelli1, Moritz Diehl1,2 University of Freiburg, 1 Department of Microsystems Engineering (IMTEK), 2 Department of Mathematics email: [email protected] February 5, 2020 This research was supported by the German Federal Ministry for Economic Affairs and Energy (BMWi) via eco4wind (0324125B) and DyConPV (0324166B), and by DFG via Research Unit FOR 2401. Abstract BLASFEO is a dense linear algebra library providing high-performance implementations of BLAS- and LAPACK-like routines for use in embedded optimization and other applications targeting relatively small matrices. BLASFEO defines an API which uses a packed matrix format as its native format. This format is analogous to the internal memory buffers of optimized BLAS, but it is exposed to the user and it removes the packing cost from the routine call. For matrices fitting in cache, BLASFEO outperforms optimized BLAS implementations, both open-source and proprietary. This paper investigates the addition of a standard BLAS API to the BLASFEO framework, and proposes an implementation switching between two or more algorithms optimized for different matrix sizes. Thanks to the modular assembly framework in BLASFEO, tailored linear algebra kernels with mixed column- and panel-major arguments are easily developed. This BLAS API has lower performance than the BLASFEO API, but it nonetheless outperforms optimized BLAS and especially LAPACK libraries for matrices fitting in cache. Therefore, it can boost a wide range of applications, where standard BLAS and LAPACK libraries are employed and the matrix size is moderate. In particular, this paper investigates the benefits in scientific programming languages such as Octave, SciPy and Julia.
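The panel-major packed format the abstract describes can be made concrete with a toy index calculation. The panel height ps and the helper names below are hypothetical, and BLASFEO's actual layout has further details (alignment, padding of the trailing panel) not modeled here; the sketch assumes ps divides the row count.

```c
#include <stddef.h>

/* Standard column-major: element (i, j) of an m x n matrix. */
size_t colmajor_idx(size_t i, size_t j, size_t m)
{
    return i + j * m;
}

/* Toy "panel-major" layout: rows are grouped into panels of ps consecutive
 * rows; panels are stored one after another, and within a panel the ps
 * entries of each column sit contiguously. A micro-kernel streaming down a
 * column then reads its ps operands from consecutive memory, which is the
 * point of exposing the packed format to the caller. */
size_t panelmajor_idx(size_t i, size_t j, size_t n, size_t ps)
{
    size_t panel = i / ps;       /* which block of ps rows        */
    size_t row   = i % ps;       /* row inside that panel         */
    return panel * ps * n        /* skip all earlier full panels  */
         + j * ps                /* skip earlier columns in panel */
         + row;                  /* position inside the column    */
}
```

For example, with ps = 4 and n = 3 columns, element (5, 2) lives at offset 1*4*3 + 2*4 + 1 = 21, whereas column-major would scatter the four rows a micro-kernel wants across strides of m.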
  • Intel Libraries
    Agenda: Intel® Math Kernel Library • Intel® Integrated Performance Primitives • Intel® Data Analytics Acceleration Library. Intel IPP Overview. Optimization Notice. Copyright © 2014, Intel Corporation. All rights reserved. *Other names and brands may be claimed as the property of others. Powered by: Energy • Science & Engineering • Financial Analytics • Signal Processing • Digital Content Creation • Research • Design. Intel® Math Kernel Library (Intel® MKL): speeds math processing in scientific, engineering and financial applications; provides functionality for dense and sparse linear algebra (BLAS, LAPACK, PARDISO), FFTs, vector math, summary statistics and more; gives scientific programmers and domain scientists interfaces to de-facto standard APIs from C++, Fortran, C#, Python and more; supports the Linux*, Windows* and OS X* operating systems. It unleashes the performance of the Intel® Core, Intel® Xeon and Intel® Xeon Phi™ product families; is optimized for single-core vectorization and cache utilization; is coupled with automatic OpenMP*-based parallelism for multi-core, manycore and coprocessors; scales to PetaFlop (10^15 floating-point operations/second) clusters and beyond; is included in Intel® Parallel Studio XE and Intel® System Studio suites; and extracts great performance with minimal effort. Used on the world's fastest supercomputers (http://www.top500.org).
  • Intel(R) Math Kernel Library for the Windows* OS User's Guide
    Intel® Math Kernel Library for the Windows* OS User's Guide. August 2008. Document Number: 315930-006US. World Wide Web: http://developer.intel.com
    Version information:
    -001 (January 2007): Original issue. Documents Intel® Math Kernel Library (Intel® MKL) 9.1 beta release.
    -002 (June 2007): Documents Intel® MKL 9.1 gold release. Document restructured. More aspects of ILP64 interface discussed. Section "Selecting Between Static and Dynamic Linking" added to chapter 5; section "Changing the Number of Processors for Threading at Run Time" and details on redefining memory functions added to chapter 6; section "Calling LAPACK, BLAS, and CBLAS Routines from C Language Environments" added to chapter 7. Cluster content is organized into one separate chapter 9 "Working with Intel® Math Kernel Library Cluster Software" and restructured, appropriate links added.
    -003 (September 2007): Documents Intel® MKL 10.0 Beta release. Layered design model has been described in chapter 3 and the content of the entire book adjusted to the model. New Intel MKL threading controls have been described in chapter 6. The User's Guide for Intel MKL merged with the one for Intel MKL Cluster Edition to reflect consolidation of the respective products.
    -004 (October 2007): Documents Intel® MKL 10.0 Gold release. Intel® Compatibility OpenMP* runtime compiler library (libiomp) has been described.
    -005 (May 2008): Documents Intel® MKL 10.1 beta release. Information on dummy libraries in Table "High-level directory structure" has been further detailed. Information on the Intel MKL configuration file removed. Instructions on creation/configuring of a project running an Intel MKL example in the Microsoft Visual Studio* IDE have been added to chapter 4.