Tesla vs. Xeon Phi vs. Radeon: A Compiler Writer's Perspective


Brent Leback, Douglas Miles, and Michael Wolfe
The Portland Group (PGI)

ABSTRACT: Today, most CPU+Accelerator systems incorporate NVIDIA GPUs. Intel Xeon Phi and the continued evolution of AMD Radeon GPUs make it likely we will soon see, and want to program, a wider variety of CPU+Accelerator systems. PGI already supports NVIDIA GPUs, and is working to add support for Xeon Phi and AMD Radeon. Here we explore the features common to all three types of accelerators, those unique to each, and the implications for programming models and performance portability from a compiler writer's and application writer's perspective.

KEYWORDS: Compiler, Accelerator, Multicore

1. Introduction

Today's high performance systems are trending towards using highly parallel accelerators to meet performance goals and power and price limits. The most popular compute accelerators today are NVIDIA GPUs. Intel Xeon Phi coprocessors and AMD Radeon GPUs are competing for that same market, meaning we will soon be programming and tuning for a wider variety of host + accelerator systems.

We want to avoid writing a different program for each type of accelerator. There are at least three current options for writing a single program that targets multiple accelerator types. One is to use a library, which works really well if the library contains all the primitives your application needs. Solutions built on class libraries with managed data structures are really another method to implement libraries, and again work well if the primitives suit your application. The potential downside is that you depend on the library implementer to support each of your targets now and in the future.

A second option is to dive into a low-level, target-independent solution such as OpenCL. There are significant upfront costs to refactor your application, as well as additional and continuing costs to tune for each accelerator. OpenCL, in particular, is quite low-level, and requires several levels of tuning to achieve high performance for each device.

A third option is to use a high-level, target-independent programming model, such as OpenMP or OpenACC directives. OpenMP has had great success for its target domain, shared-memory multiprocessor and multicore systems. OpenACC has shown some initial success for a variety of applications [1, 2, 14, 15, 16, 17]. It is just now being targeted for multiple accelerator types, and our early experience is that it can be used to achieve high performance for a single program across a range of accelerator systems.
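As a concrete illustration of this third, directive-based option, the short C sketch below offloads a SAXPY-style loop with OpenACC. It is only a sketch: the routine, the array sizes and the compiler invocation mentioned in the comments are illustrative assumptions, not code from the paper.

```c
/* Minimal OpenACC sketch (illustrative only).  A directive-based
 * compiler such as pgcc with -acc can map the annotated loop onto an
 * attached accelerator; the same source also compiles as plain C. */
#include <stdio.h>
#include <stdlib.h>

void saxpy(int n, float a, const float *restrict x, float *restrict y)
{
    /* Data clauses describe host/device data movement; the loop itself
     * is written once, independent of the accelerator target. */
    #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    int n = 1 << 20;
    float *x = malloc(n * sizeof *x);
    float *y = malloc(n * sizeof *y);
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy(n, 3.0f, x, y);
    printf("y[0] = %f\n", y[0]);   /* expect 5.000000 */

    free(x);
    free(y);
    return 0;
}
```

Because the parallelism is expressed through directives rather than target-specific code, the same source can in principle be rebuilt for different accelerator targets, which is the division of labor the next paragraphs describe.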
Using directives for parallel and accelerator programming divides the programming challenges between the application writer and the compiler. Defining an abstract view of the target architecture used by the compiler allows a program to be optimized for very different targets, and allows programmers to understand how to express a program that can be optimized across accelerators.

PGI has been delivering directive-based Accelerator compilers since 2009. The current compilers implement the OpenACC v1.0 specification on 64-bit and 32-bit Linux, OS X and Windows operating systems, targeting NVIDIA Tesla, Fermi and Kepler GPUs. PGI has demonstrated initial functionality on AMD Radeon GPUs and APUs, and Intel Xeon Phi co-processors. The PGI compilers are designed to allow a program to be compiled for a single accelerator target, or to be compiled into a single binary that will run on any of multiple accelerator targets depending on which is available at runtime. They are also designed to generate binaries that utilize multiple accelerators at runtime, or even multiple accelerators from different vendors. To make all these cases truly useful, the program must be written in a way that allows the compiler to optimize for different architectures that have common themes but fundamentally different parallelism structures.

2. Accelerator Architectures Today

Here we present important architectural aspects of today's multi-core CPUs and the three most common accelerators designed to be coupled to those CPUs for technical computing: the NVIDIA Kepler GPU [11], Intel Xeon Phi [13] and AMD Radeon GPU [12]. There are other accelerators in current use, such as FPGAs for bioinformatics and data analytics, and other possible accelerator vendors, such as DSP units from Texas Instruments and Tilera, but these have not yet achieved a significant presence in the technical computing market.

2.1 Common Multi-core CPU

For comparison, we present an architectural summary of a common multi-core CPU. An Intel Sandy Bridge processor has up to 8 cores. Each core stores the state of up to 2 threads. Each core can issue up to 5 instructions in a single clock cycle; the 5 instructions issued in a single cycle can be a mix from the two different threads. This is Intel's Hyperthreading, which is an implementation of Simultaneous Multithreading. In addition to scalar instructions, each core has SSE and AVX SIMD registers and instructions. An AVX instruction has 256-bit operands, meaning it can process 8 single-precision floating point operations or 4 double-precision floating point operations with a single instruction. Each core has a 32KB level-1 data cache and a 32KB level-1 instruction cache, and a 256KB L2 cache. There is an L3 cache shared across all cores which ranges up to 20MB. The CPU clock rate ranges from 2.3GHz to 4.0GHz (in Turbo mode). The core is deeply pipelined, about 14 stages.

The control unit for a modern CPU can issue instructions even beyond the point where an instruction stalls. For instance, if a memory load instruction misses in the L1 and L2 caches, its result won't be ready for tens or hundreds of cycles until the operand is fetched from main memory. If a subsequent add instruction, say, needs the result of that load instruction, the control unit can still issue the add instruction, which will wait at a reservation station until the result from the load becomes available. Meanwhile, instructions beyond that add can be issued and can even execute while the memory load is being processed. Thus, instructions are executed out-of-order relative to their appearance in the program.

The deep, two- or three-level cache hierarchy reduces the average latency for memory operations. If a memory data load instruction hits in the L1 cache, the result will be ready in 4-5 clock cycles. If it misses in the L1 cache but hits in the L2 cache, the result will take 12 cycles. An L3 cache hit will take 35-45 clock cycles, while missing completely in the cache may take 250 or more cycles. There is additional overhead to manage multicore cache coherence for the L1 and L2 caches.

GPU marketing literature often uses the term "GPU core" or equivalent. This is essentially a single precision (32-bit) ALU or equivalent. By this definition, an 8-core Sandy Bridge with AVX instructions has 64 GPU core-equivalents.
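To make the 256-bit SIMD width quoted above concrete, here is a small self-contained C sketch using AVX intrinsics; it is an illustration added here, not code from the paper, and a vectorizing compiler would normally generate equivalent instructions from an ordinary loop.

```c
/* One 256-bit AVX instruction operates on 8 packed single-precision
 * values (or 4 double-precision values).  Illustrative example only;
 * compile with AVX enabled, e.g. gcc -mavx. */
#include <immintrin.h>
#include <stdio.h>

int main(void)
{
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {8, 7, 6, 5, 4, 3, 2, 1};
    float c[8];

    __m256 va = _mm256_loadu_ps(a);     /* load 8 floats */
    __m256 vb = _mm256_loadu_ps(b);     /* load 8 floats */
    __m256 vc = _mm256_add_ps(va, vb);  /* 8 additions in one instruction */
    _mm256_storeu_ps(c, vc);

    for (int i = 0; i < 8; ++i)
        printf("%g ", c[i]);            /* prints 9 eight times */
    printf("\n");
    return 0;
}
```

By the "GPU core" accounting described above, each such 8-wide single-precision AVX unit is what contributes 8 core-equivalents per CPU core.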
2.2 NVIDIA Kepler GPU

An NVIDIA Kepler GPU has up to 15 SMX units, where each SMX unit corresponds roughly to a highly parallel CPU core. The SMX units execute warp-instructions, where each warp-instruction is roughly a 32-wide SIMD instruction, with predication. The warp is a fundamental unit for NVIDIA GPUs; all operations are processed as warps. The CUDA programming model uses CUDA threads, which are implemented as predicated, smart lanes in a SIMD instruction stream. In this section, we use the term thread to mean a sequence of warp-instructions, not a CUDA thread. The SMX units can store the state of up to 64 threads. Each SMX unit can issue instructions from up to four threads in a single cycle, and can issue one or two instructions from each thread. There are obvious structural and data hazards that would limit the actual number of instructions issued in a single cycle. Instructions from a single thread are issued in-order. When a thread stalls while waiting for a result from memory or other slow operation, the SMX control unit will select instructions from some other active thread. The SMX clock is about 1.1 GHz. The SMX is also pipelined, with 8 stages.

Each execution engine of the Kepler SMX has an array of execution units, including 192 single precision GPU cores (for a total of 2880 GPU cores on chip), plus additional units for double precision, memory load/store and transcendental operations. The execution units are organized into several SIMD functional units, with a hardware SIMD width of 16. This means that it takes two cycles to initiate a 32-wide warp instruction on the 16-wide hardware SIMD function unit.

There is a two-level data cache hierarchy. The L1 data cache is 64KB per SMX unit. The L1 data cache is only used for thread-private data. The L1 data cache is actually split between a normal associative data cache and a scratchpad memory, with 25%, 50% or 75% of the 64KB selectively used as scratchpad memory. There is an additional 48KB read-only L1 cache, which is accessed through special instructions. The 1.5MB L2 data cache is shared across all SMX units. The caches are small, relative to the large number of SIMD operations running on all 15 SMX units. The device main memory ranges up to 6GB in current units and 12GB in the near future.

2.3 AMD Radeon GPU

An AMD Radeon GPU has up to 32 compute units, where each compute unit corresponds roughly to a simple CPU core with attached vector units.

2.5 Common Accelerator Architecture Themes

There are several common themes across the Tesla,