Research Paper: Creating a Compiler Optimized Inlineable Implementation of Intel SVML SIMD Intrinsics


Academia Journal of Scientific Research 6(6): 262-271, June 2018
DOI: 10.15413/ajsr.2018.0118
ISSN 2315-7712
©2018 Academia Publishing

Promyk Zamojski* and Raafat Elfouly
Mathematics and Computer Science Department, Rhode Island College, RI
*Corresponding author. E-mail: [email protected]

Accepted 23rd May, 2018

ABSTRACT

Single Instruction Multiple Data (SIMD) provides data-parallel execution through dedicated SIMD instructions and registers. Most mainstream computer architectures have been enhanced to provide data parallelism through SIMD extensions. These advances in parallel hardware have not been accompanied by the necessary software libraries granting programmers a set of functions that work across all major compilers and add a 4x, 8x or 16x performance increase compared to standard SISD. Intel's SVML library offers SIMD implementations of mathematical and scientific functions but has never been ported to work outside of Intel's own compiler. An open-source inlineable implementation would increase performance and portability. This paper illustrates the development of an alternative, compiler-neutral implementation of Intel's SVML library.

Keywords: Data Parallelism, SIMD Intrinsics, SVML, C++ Math Library, Computer Simulation, Scientific Mathematics Library.

INTRODUCTION

Data parallelism occurs when one or more operations are repeatedly applied to several values. Programming languages generally express computation as sequential operations on scalar values. Single Instruction Multiple Data (SIMD) compiler intrinsics allow such operations to be grouped together, improving the ratio of floating-point operations per transistor and reducing overall power usage on modern SIMD-capable CPUs. SIMD has limitations, because functions with many statements cannot easily be vectorized. These limitations make SIMD intrinsic programming almost a new programming paradigm within languages like C or C++.

In most modern compilers the optimizer can generally outperform programmer tricks at generating optimal output. In the case of SIMD, the compiler is smart enough to transform loops over vectors/arrays into optimized SIMD operations. The idea is simple: instead of computing one value of an array at a time, the same operation is computed on as many values as fit into a SIMD register. All data must be subject to the same processing. With the addition of compiler pragmas such as Intel® Cilk™ Plus (https://www.cilkplus.org/), a programmer is able to force the compiler to work harder on specific parts of the code to optimize data parallelism. However, this does not always succeed, since the compiler cannot re-order every scalar operation into parallel operations. It is possible to do conditional processing within SIMD, but it requires some tricks, since the compiler does not understand how to derive it from traditional scalar code.

In the cases where automatic vectorization is not an option, there needs to be a library of functions a programmer can target. Such a library should be portable enough that code written against it can be compiled by the main C/C++ compilers that support Intel intrinsics, such as GCC, CLANG and MSVC. Several vendors have specified an API for intrinsics, such as the Intel® short vector math library (SVML) (https://software.intel.com/sites/landingpage/IntrinsicsGuide/#techs=SVML), a software standard provided by the Intel® C++ Compiler (https://software.intel.com/en-us/node/524289).
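As a concrete illustration of programming with SIMD intrinsics rather than relying on automatic vectorization, the following minimal sketch (our own, not taken from the paper; the function names are placeholders) contrasts a scalar loop with a hand-vectorized SSE equivalent:

#include <immintrin.h>

// Scalar (SISD): one addition per loop iteration.
void add_scalar(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}

// SSE (SIMD): four packed additions per instruction.
// Assumes n is a multiple of 4; a scalar remainder loop would be needed otherwise.
void add_sse(const float *a, const float *b, float *out, int n)
{
    for (int i = 0; i < n; i += 4) {
        __m128 va = _mm_loadu_ps(a + i);            // load 4 floats
        __m128 vb = _mm_loadu_ps(b + i);
        _mm_storeu_ps(out + i, _mm_add_ps(va, vb)); // 4 adds at once
    }
}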
SVML functions can be broken down into five (5) main categories: Distribution, Division, Exponentiation, Round and Trigonometry, with subcategories of __m128, __m128d, __m128i, __m256, __m256d, __m256i, __m512, __m512d and __m512i corresponding to the vector data types. When implementing SVML functions, the functions can be approximated using Taylor series expansions in combination with bitwise manipulation of the IEEE 754 floating-point representation (Malossi et al., 2015; Nyland, 2004; Kretz, 2015). Reference scalar implementations from the Cephes library (http://www.netlib.org/cephes) can be translated into SIMD versions. The output of the replacement functions can then be tested and compared against the output of the SVML library provided with the Intel® C++ Compiler (https://software.intel.com/en-us/node/524289).

Figure 1: SIMD vs. SISD branching problem.

BITWISE SELECTION

Because SIMD functions operate entirely bitwise, sequential functionality must be reconsidered, and some inherently branching algorithms must be reworked into bitwise form. This includes cases where an algorithm jumps from one point of the stack-frame to another in order to compute a single value, as is common in SISD algorithms (Figure 1).

Bitwise selection occurs when a vector bitmask is used to select values within a vector. A vector bitmask is produced when a SIMD comparison operation compares two (2) vectors, yielding a mask of all 1s for true or all 0s for false in each lane. Each mask corresponds to the value held in the vector position whose size matches the data type's byte alignment (Figures 2 to 5).

It is important to understand bitwise selection in SSE SIMD operations, since it takes the place of jumping/branching on a condition in most arithmetic operations. These bitwise operations are fundamental to the design and analysis of SIMD-optimized algorithms. Since the same operation is applied across the whole SIMD vector, code segments must be broken up by condition and their results placed into the correct vector lanes without resorting to scalar operations. Note that under the hood the SSE4.1 blendv bitwise selection performs a bitwise OR of (~mask AND first_input_value) with (mask AND second_input_value). This operation is completely bitwise in nature and depends on the all-1s/all-0s vector representation of TRUE and FALSE to pass values through within each vector position of the SIMD register (Figure 6).

Once a programmer understands that bitwise selection masks are used in place of conditional statements, porting an existing C/C++ library becomes easier. The programmer still has to account for the inefficiency of computing all possible branches and corner cases, which are later merged into the resulting SIMD vector. With this in mind, unnecessary corner cases and redundancy within an algorithm should be evaluated as critical or non-critical to computing the final result (Figure 7).
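Figure 6 below reproduces the paper's integer version of this OR/ANDNOT/AND decomposition. As a hedged sketch of the same idea for packed floats, a blendv substitute for pre-SSE4.1 targets might be written as follows (select_ps is a hypothetical helper name, not an SSE intrinsic):

#include <immintrin.h>

// Hypothetical helper (a sketch, not part of any SSE header): emulates the
// SSE4.1 blendvps selection using only SSE bitwise operations.
// Where a mask bit is 0 the result bit comes from x; where it is 1, from y.
static inline __m128 select_ps(__m128 x, __m128 y, __m128 mask)
{
    return _mm_or_ps(_mm_andnot_ps(mask, x), _mm_and_ps(mask, y));
}

Fed with the all-1s/all-0s masks produced by comparison intrinsics such as _mm_cmpge_ps, it behaves like the _mm_blendv_ps calls in Figure 2.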
__m128 _mm_radian_asin_ps(__m128 a)
{
    __m128 y, n, p;
    p = _mm_cmpge_ps(a, __m128_1);       // if (a >= 1) generates mask 0xffffffff -OR- 0x0
    n = _mm_cmple_ps(a, __m128_minus_1); // if (a <= -1) generates mask 0xffffffff -OR- 0x0
    // 0xffffffff is TRUE
    // 0x0 is FALSE
    y = _mm_asin_ps(a);
    y = _mm_blendv_ps(y, __m128_PIo2F, p);       // replace value for >= 1 in var y based on 0xffffffff mask
    y = _mm_blendv_ps(y, __m128_minus_PIo2F, n); // replace value for <= -1 in var y based on 0xffffffff mask
    return y;
}

Figure 2: SSE SIMD branching example.

_mm_radian_asin_ps(__m128):
    push     rsi
    movaps   xmm9, xmm0
    movups   xmm8, XMMWORD PTR __m128_1[rip]
    cmpleps  xmm8, xmm0
    cmpleps  xmm9, XMMWORD PTR __m128_minus_1[rip]
    call     __svml_asinf4
    movaps   xmm1, xmm0
    movaps   xmm0, xmm8
    blendvps xmm1, XMMWORD PTR __m128_PIo2F[rip], xmm0
    movaps   xmm0, xmm9
    blendvps xmm1, XMMWORD PTR __m128_minus_PIo2F[rip], xmm0
    movaps   xmm0, xmm1
    pop      rcx
    ret

Figure 3: SSE SIMD branching assembly language output (Intel C++ Compiler).

float radian_asin(float a)
{
    if (a >= 1) { return PIo2F; }
    else if (a <= -1) { return -PIo2F; }
    else { return asin(a); }
}

Figure 4: Standard branching example.

radian_asin(float):
    push   rsi
    comiss xmm0, DWORD PTR .L_2il0floatpacket.3[rip]
    jae    ..B1.6
    movss  xmm1, DWORD PTR .L_2il0floatpacket.1[rip]
    comiss xmm1, xmm0
    jb     ..B1.4
    movss  xmm0, DWORD PTR .L_2il0floatpacket.2[rip]
    pop    rcx
    ret
..B1.4:
    call   asinf
    pop    rcx
    ret
..B1.6:
    movss  xmm0, DWORD PTR .L_2il0floatpacket.0[rip]
    pop    rcx
    ret

Figure 5: Standard branching example assembly language output (Intel C++ Compiler).

__m128i _mm_blendv_si128(__m128i x, __m128i y, __m128i mask)
{
    // Replace bit in x with bit in y when matching bit in mask is set:
    return _mm_or_si128(_mm_andnot_si128(mask, x), _mm_and_si128(mask, y));
}

Figure 6: Bitwise selection example code.

__m128 _mm_fmadd_ps(__m128 a, __m128 b, __m128 c)            // Latency ~6
__m128 _mm_add_ps(_mm_mul_ps(__m128 a, __m128 b), __m128 c)  // Latency ~10

Figure 7: FMA vs. SSE CPU cycle latency.

FMA INSTRUCTION SET

The FMA instruction set is an extension to the 128-bit and 256-bit Streaming SIMD Extensions instructions in the Intel-compatible microprocessor instruction set, added to perform fused multiply-add (FMA) (https://software.intel.com/sites/default/files/managed/39/c5/325462-sdm-vol-1-2abcd-3abcd.pdf). There are two variants. FMA4 is supported in AMD processors starting with the Bulldozer (2011) architecture and was realized in hardware before FMA3 [6]. Ultimately FMA4 [9] was dropped in favor of the superior Intel FMA3 hardware implementation; FMA3 has been supported in AMD processors since 2014. The goal of FMA is to remove CPU cycles by combining multiplication and addition into one instruction, reducing the total execution time of an algorithm (Figure 7).
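To make the latency gap of Figure 7 concrete, the following sketch (our own illustration with placeholder coefficients c0..c3; it assumes FMA3 support, e.g. compiling with -mfma) evaluates a Horner-form cubic polynomial, the shape that Taylor/Cephes-style approximations reduce to, first with FMA3 and then with plain SSE multiply and add:

#include <immintrin.h>

// Horner evaluation of c3*x^3 + c2*x^2 + c1*x + c0 with FMA3:
// each step is a single fused multiply-add instruction.
static inline __m128 poly3_fma(__m128 x, __m128 c3, __m128 c2,
                               __m128 c1, __m128 c0)
{
    __m128 r = _mm_fmadd_ps(c3, x, c2); // r = c3*x + c2
    r = _mm_fmadd_ps(r, x, c1);         // r = r*x + c1
    return _mm_fmadd_ps(r, x, c0);      // r = r*x + c0
}

// The same polynomial without FMA: twice the instructions on the
// dependent multiply-add chain, hence the higher latency in Figure 7.
static inline __m128 poly3_sse(__m128 x, __m128 c3, __m128 c2,
                               __m128 c1, __m128 c0)
{
    __m128 r = _mm_add_ps(_mm_mul_ps(c3, x), c2);
    r = _mm_add_ps(_mm_mul_ps(r, x), c1);
    return _mm_add_ps(_mm_mul_ps(r, x), c0);
}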