Analysis of Performance Gap Between OpenACC and the Native Approach on P100 GPU and SW26010: A Case Study with GTC-P


Stephen Wang†1, James Lin†1, William Tang†2, Stephane Ethier†2, Bei Wang†2, Simon See†1,3
†1 Shanghai Jiao Tong University, Center for HPC
†2 Princeton University, Institute for Computational Science & Engineering (PICSciE) and Princeton Plasma Physics Laboratory (PPPL)
†3 NVIDIA Corporation
GTC 2018, San Jose, USA, March 27, 2018

Background
• Sunway TaihuLight is currently the No. 1 supercomputer on the TOP500 list, and Summit at ORNL will be the next leap in leadership-class supercomputers. → We want to maintain a single code base across different supercomputers.
• Real-world applications written with OpenACC can achieve portability across NVIDIA GPUs and the Sunway processor; the GTC-P code is our case study. → We set out to analyze the performance gap between the OpenACC version and the native programming approach on the two architectures.

GTC-P: Gyrokinetic Toroidal Code - Princeton
• Developed by Princeton to accelerate progress in highly scalable plasma-turbulence particle-in-cell (PIC) HPC codes.
• A modern "co-design" version of the comprehensive original GTC code, focused on using computer-science performance modeling to improve the basic PIC operations and deliver simulations at extreme scale, with unprecedented resolution and speed, on a variety of architectures worldwide.
• Target systems include present-day multi-petaflop supercomputers such as Tianhe-2, Titan, Sequoia, and Mira, which feature GPUs, multicore CPUs, and many-core processors.
• KEY REFERENCE: W. Tang, B. Wang, S. Ethier, G. Kwasniewski, T. Hoefler, et al., "Extreme Scale Plasma Turbulence Simulations on Top Supercomputers Worldwide", Supercomputing (SC16), Salt Lake City, Utah, USA, 2016.

The case study of GTC-P code with OpenACC: kernels
• Charge: particle-to-grid interpolation (SCATTER)
• Smooth/Poisson/Field: grid work (local stencil)
• Push: grid-to-particle interpolation (GATHER); update of particle positions and velocities
• Shift: exchange of particles among processes in a distributed-memory environment

The case study of GTC-P code with OpenACC: challenges and methodology
• Challenges: (a) memory-bound kernels; (b) data hazards; (c) random memory access
• Methodology: (a) reduce the pressure on memory bandwidth; (b) use atomic operations, or duplication and reduction; (c) take full advantage of local memory

The performance of atomic operations on P100 and SW26010

NVIDIA GPU (P100)        CUDA    OpenACC
Elapsed time (s)          5.9      6.0

CUDA supports global atomics in a coalesced way by transposing data in shared memory.

Sunway SW26010     OpenACC code on 64 CPEs   Serial code on 1 MPE
Elapsed time (s)          2360.5                    4.7

The OpenACC code is roughly 504x slower than the serial MPE code, which is unacceptable. Atomic operations on SW26010 are implemented with a lock-and-unlock mechanism.

Performance evaluation on NVIDIA P100
• The native double-precision atomicAdd instruction is used on P100, instead of the compare-and-swap loop built from atomicCAS that was required on K80.
• The performance gap of GTC-P between CUDA and OpenACC is narrowed by this hardware upgrade.

Implementation of the OpenACC version on SW26010
• A duplication-and-reduction algorithm is used instead of atomic operations; it is implemented with the help of the global variable acc_thread_id (see the sketch below).
• The tile directive is used to coalesce data accesses into DMA requests between main memory and the CPEs and to fill the 64 KB LDM.
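The two scatter strategies just described can be made concrete with a short C/OpenACC sketch. This is not the actual GTC-P source: the array names (cell, w, rho, rho_copies), the problem sizes, and the way a private copy is selected are illustrative assumptions; in particular, the Sunway OpenACC build keys the copy off the global acc_thread_id, while the sketch simply uses the index of an outer gang loop.

```c
/* Minimal sketch (not the actual GTC-P source) of the two charge-deposition
 * strategies: atomic scatter vs. duplication and reduction.                */

#define NP      (1 << 20)   /* particles (illustrative size)                */
#define NG      (1 << 16)   /* grid points (illustrative size)              */
#define NCOPIES 64          /* e.g. one private grid copy per CPE/thread    */

/* Strategy 1: atomic scatter.  Correct everywhere, but it relies on fast
 * hardware atomics: cheap on P100, lock-based and very slow on SW26010.    */
void charge_atomic(const int *cell, const double *w, double *rho)
{
    #pragma acc parallel loop copyin(cell[0:NP], w[0:NP]) copy(rho[0:NG])
    for (int p = 0; p < NP; p++) {
        #pragma acc atomic update
        rho[cell[p]] += w[p];
    }
}

/* Strategy 2: duplication and reduction.  Each gang accumulates into its own
 * private copy of the grid and the copies are summed afterwards, so no
 * atomics are needed.  The real code selects the copy with the Sunway
 * OpenACC thread id (acc_thread_id); here the gang index c plays that role. */
void charge_dup_reduce(const int *cell, const double *w,
                       double *rho, double *rho_copies /* NCOPIES * NG */)
{
    #pragma acc data copyin(cell[0:NP], w[0:NP]) copy(rho[0:NG]) \
                     create(rho_copies[0:NCOPIES * NG])
    {
        /* zero the private copies on the device */
        #pragma acc parallel loop
        for (int i = 0; i < NCOPIES * NG; i++)
            rho_copies[i] = 0.0;

        /* scatter: one gang owns one copy, so its updates never race */
        #pragma acc parallel loop gang
        for (int c = 0; c < NCOPIES; c++) {
            int lo = c * (NP / NCOPIES);
            int hi = lo + NP / NCOPIES;
            #pragma acc loop seq
            for (int p = lo; p < hi; p++)
                rho_copies[c * NG + cell[p]] += w[p];
        }

        /* reduction: collapse the private copies into the shared grid */
        #pragma acc parallel loop
        for (int g = 0; g < NG; g++) {
            double sum = 0.0;
            #pragma acc loop seq
            for (int c = 0; c < NCOPIES; c++)
                sum += rho_copies[c * NG + g];
            rho[g] += sum;
        }
    }
}
```

The trade-off matches the slides: the atomic version is the natural choice on P100, while the duplicated version avoids the lock-based atomics on SW26010 at the cost of extra memory and a final reduction pass.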
Performance evaluation of the OpenACC version on SW26010
[Figure: stacked elapsed time (seconds, 0-2500, lower is better) per kernel (Charge, Push, Poisson, Field, Smooth, Shift) for Sequential (MPE), OpenACC (CPE), +Tile, +SPM library, and +w/o atomics; the annotations mark the baseline and speedups of 1.1X and 2.5X.]
• The performance is acceptable after removing the atomic operations on SW26010.
• Taking full advantage of the DMA bandwidth is the key factor for the memory-bound kernels.
• The Charge kernel is the hotspot of the OpenACC version.

Register-level communication on SW26010
• The low-latency register communication (RLC) mechanism within the CPE cluster is the key factor for data locality.

The RLC optimization for the charge kernel on SW26010
• The charge kernel has an irregular memory access pattern.
• The index values are preconditioned on the MPE and then transferred to the first column of the CPE cluster.
• The irregular accesses are served on the remaining CPEs through row communication.

The async optimization for the charge kernel on SW26010
• The irregular memory accesses handled by RLC on the CPE cluster and the remaining portion, which does not fit in the SPM, run simultaneously (a generic OpenACC sketch of this overlap pattern is given after the references).
• The overlap is tuned manually.

Performance tuning of the charge kernel on SW26010
[Figure: tuning results for the charge kernel (74%).]
• Finally, the native approach achieved around a 4X speedup over the OpenACC version on the SW26010 processor.

How does the OpenACC version of GTC-P scale on real supercomputers? (Early results)

Experiment results of the scaling evaluation on the GPU cluster at SJTU
[Figure: weak-scaling results on the SJTU GPU cluster.]

Experiment results of the scaling evaluation on the Titan supercomputer
• One K20X GPU per node
• "Gemini" interconnect
• Strong scaling is still to be done

Experiment results of the scaling evaluation on the Sunway TaihuLight supercomputer

Summary
• The case study demonstrated the portability of OpenACC across GPUs and the Chinese home-grown many-core processor, although the algorithm had to be refactored on SW26010 compared with the GPU.
• The performance gap between the OpenACC and CUDA versions of GTC-P on the NVIDIA P100 is narrowed by the hardware upgrade.
• The experiments showed that the performance gap on SW26010 cannot be ignored, owing to the lack of an efficient general-purpose software cache on the CPE cluster; we designed specific register-level communication to address the problem.

References
• Yueming Wei, Yichao Wang, Linjin Cai, William Tang, Bei Wang, Stephane Ethier, Simon See, and James Lin. "Performance and Portability Studies with OpenACC Accelerated Version of GTC-P." The 17th International Conference on Parallel and Distributed Computing, Applications and Technologies (PDCAT), Guangzhou, China, December 16-18, 2016.
• Yichao Wang, James Lin, Linjin Cai, William Tang, Stephane Ethier, Bei Wang, Simon See, and Satoshi Matsuoka. "Porting and Optimizing GTC-P on TaihuLight Supercomputer with Sunway OpenACC." Journal of Computer Research and Development, 2018, 55(4).
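The asynchronous overlap used for the charge kernel on SW26010 is built on register-level communication and hand-tuned DMA, which plain OpenACC cannot express. The sketch promised above only illustrates the underlying overlap pattern in generic C/OpenACC: particle work is split into blocks and issued on two alternating async queues, so the copy-back of one block can overlap the computation of the next. All names, sizes, and the placeholder physics are assumptions, not the GTC-P code.

```c
/* Hedged sketch of the overlap pattern only; not the SW26010 implementation. */

#define NP   (1 << 20)   /* particles (illustrative)   */
#define NG   1024        /* grid points (illustrative) */
#define NBLK 8           /* pipeline blocks            */
#define BLK  (NP / NBLK)

void push_pipelined(const double *efield, double *pos, double *vel)
{
    #pragma acc data copyin(efield[0:NG], pos[0:NP], vel[0:NP])
    {
        for (int b = 0; b < NBLK; b++) {
            const int lo = b * BLK;
            const int q  = b % 2;          /* alternate between two queues */

            /* grid-to-particle gather plus position/velocity update
             * (placeholder physics) for one block, issued asynchronously  */
            #pragma acc parallel loop async(q) \
                    present(efield[0:NG], pos[lo:BLK], vel[lo:BLK])
            for (int p = lo; p < lo + BLK; p++) {
                vel[p] += 0.5 * efield[p % NG];
                pos[p] += vel[p];
            }

            /* copy this block's results back while the other queue works  */
            #pragma acc update self(pos[lo:BLK], vel[lo:BLK]) async(q)
        }
        #pragma acc wait   /* both queues finish before the data region ends */
    }
}
```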