Trace-Based Performance Analysis for Hardware Accelerators
Total Pages: 16
File Type: PDF, Size: 1,020 KB
Recommended publications
Parallel Rendering on Hybrid Multi-GPU Clusters
Eurographics Symposium on Parallel Graphics and Visualization (2012), H. Childs and T. Kuhlen (Editors). Parallel Rendering on Hybrid Multi-GPU Clusters. S. Eilemann†1, A. Bilgili1, M. Abdellah1, J. Hernando2, M. Makhinya3, R. Pajarola3, F. Schürmann1 (1Blue Brain Project, EPFL; 2CeSViMa, UPM; 3Visualization and MultiMedia Lab, University of Zürich).
Abstract: Achieving efficient scalable parallel rendering for interactive visualization applications on medium-sized graphics clusters remains a challenging problem. Framerates of up to 60 Hz require a carefully designed and fine-tuned parallel rendering implementation that fits all required operations into the 16 ms time budget available for each rendered frame. Furthermore, modern commodity hardware increasingly embraces a NUMA architecture, where multiple processor sockets each have their locally attached memory and where auxiliary devices such as GPUs and network interfaces are directly attached to one of the processors. Such so-called fat NUMA processing and graphics nodes are increasingly used to build cost-effective hybrid shared/distributed-memory visualization clusters. In this paper we present a thorough analysis of the asynchronous parallelization of the rendering stages, and we derive and implement important optimizations to achieve highly interactive framerates on such hybrid multi-GPU clusters. We use both a benchmark program and a real-world scientific application used to visualize, navigate and interact with simulations of cortical neuron circuit models.
Categories and Subject Descriptors (according to ACM CCS): I.3.2 [Computer Graphics]: Graphics Systems—Distributed Graphics; I.3.m [Computer Graphics]: Miscellaneous—Parallel Rendering.
1. Introduction. Hybrid GPU clusters are often built from so-called fat render nodes using a NUMA architecture with multiple CPUs and GPUs per machine. The decomposition of parallel rendering systems across mul…
CUDA Particle-Based Fluid Simulation
CUDA Particle-based Fluid Simulation. Simon Green, NVIDIA Corporation, 2008.
Overview: fluid simulation techniques, CUDA particle simulation, spatial subdivision techniques, rendering methods, future work.
Fluid Simulation Techniques, various approaches: grid-based (Eulerian): stable fluids, particle level set; particle-based (Lagrangian): SPH (smoothed particle hydrodynamics), MPS (Moving-Particle Semi-Implicit); height field: FFT (Tessendorf), wave propagation, e.g. Kass and Miller.
CUDA N-Body Demo: computes gravitational attraction between n bodies; computes all n² interactions; uses shared memory to reduce memory bandwidth. 16K bodies @ 44 FPS × 16K² interactions/frame × 20 FLOPS/interaction ≈ 240 GFLOP/s on a GeForce 8800 GTX.
Particle-based Fluid Simulation. Advantages: conservation of mass is trivial; easy to track the free surface; only performs computation where necessary; not necessarily constrained to a finite grid; easy to parallelize. Disadvantages: hard to extract a smooth surface from particles; requires a large number of particles for realistic results.
Particle Fluid Simulation Papers: "Particle-Based Fluid Simulation for Interactive Applications", M. Müller, 2003 (3,000 particles, 5 fps); "Particle-based Viscoelastic Fluid Simulation", Clavet et al., 2005 (1,000 particles, 10 fps; 20,000 particles, 2 s/frame).
CUDA SDK Particles Demo: particles with simple collisions; uses a uniform grid based on sorting; uses the fast CUDA radix sort. Current performance: >100 fps for 65K interacting…
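As a rough illustration of the uniform-grid technique the SDK particles demo is built on, the sketch below assigns each particle the hash of the cell it falls in and then sorts the particle indices by that hash, so the particles of each cell become contiguous and neighbor search only needs to scan the surrounding cells. All names (cellIndex, computeHashes, gridRes, and so on) are illustrative and not taken from the actual SDK sample.

```cuda
// Minimal sketch, assuming a cubic grid of gridRes^3 cells of size cellSize.
#include <cuda_runtime.h>
#include <thrust/device_ptr.h>
#include <thrust/sort.h>

__device__ unsigned int cellIndex(float3 p, float cellSize, int gridRes) {
    int x = (int)floorf(p.x / cellSize);
    int y = (int)floorf(p.y / cellSize);
    int z = (int)floorf(p.z / cellSize);
    // Clamp to the grid and linearize (x + y*res + z*res*res).
    x = min(max(x, 0), gridRes - 1);
    y = min(max(y, 0), gridRes - 1);
    z = min(max(z, 0), gridRes - 1);
    return (unsigned int)(x + y * gridRes + z * gridRes * gridRes);
}

__global__ void computeHashes(const float3 *pos, unsigned int *hash,
                              unsigned int *index, int n,
                              float cellSize, int gridRes) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    hash[i]  = cellIndex(pos[i], cellSize, gridRes);
    index[i] = i;                       // remember the original particle id
}

// Host side: sort particle indices by cell hash so each cell's particles are
// contiguous; neighbor search then only scans the 27 surrounding cells.
void sortParticlesByCell(unsigned int *d_hash, unsigned int *d_index, int n) {
    thrust::device_ptr<unsigned int> keys(d_hash), vals(d_index);
    thrust::sort_by_key(keys, keys + n, vals);
}
```

The actual demo additionally records, for each cell, where its run of sorted particles starts and ends, but the hash-and-sort step above is the core of the spatial subdivision.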
NVIDIA Physx SDK EULA
NVIDIA CORPORATION. NVIDIA® PHYSX® SDK END USER LICENSE AGREEMENT.
Welcome to the new world of reality gaming brought to you by PhysX® acceleration from NVIDIA®. NVIDIA Corporation ("NVIDIA") is willing to license the PHYSX SDK and the accompanying documentation to you only on the condition that you accept all the terms in this License Agreement ("Agreement").
IMPORTANT: READ THE FOLLOWING TERMS AND CONDITIONS BEFORE USING THE ACCOMPANYING NVIDIA PHYSX SDK. IF YOU DO NOT AGREE TO THE TERMS OF THIS AGREEMENT, NVIDIA IS NOT WILLING TO LICENSE THE PHYSX SDK TO YOU. IF YOU DO NOT AGREE TO THESE TERMS, YOU SHALL DESTROY THIS ENTIRE PRODUCT AND PROVIDE EMAIL VERIFICATION TO [email protected] OF DELETION OF ALL COPIES OF THE ENTIRE PRODUCT. NVIDIA MAY MODIFY THE TERMS OF THIS AGREEMENT FROM TIME TO TIME. ANY USE OF THE PHYSX SDK WILL BE SUBJECT TO SUCH UPDATED TERMS. A CURRENT VERSION OF THIS AGREEMENT IS POSTED ON NVIDIA'S DEVELOPER WEBSITE: www.developer.nvidia.com/object/physx_eula.html
1. Definitions. "Licensed Platforms" means the following:
- Any PC or Apple Mac computer with a NVIDIA CUDA-enabled processor executing NVIDIA PhysX;
- Any PC or Apple Mac computer running NVIDIA PhysX software executing on the primary central processing unit of the PC only;
- Any PC utilizing an AGEIA PhysX processor executing NVIDIA PhysX code;
- Microsoft XBOX 360;
- Nintendo Wii; and/or
- Sony Playstation 3
"Physics Application" means a software application designed for use and fully compatible with the PhysX SDK and/or NVIDIA Graphics processor products, including but not limited to, a video game, visual simulation, movie, or other product…
Parallel Texture-Based Vector Field Visualization on Curved Surfaces Using GPU Cluster Computers
Eurographics Symposium on Parallel Graphics and Visualization (2006), Alan Heirich, Bruno Raffin, and Luis Paulo dos Santos (Editors). Parallel Texture-Based Vector Field Visualization on Curved Surfaces Using GPU Cluster Computers. S. Bachthaler1, M. Strengert1, D. Weiskopf2, and T. Ertl1 (1Institute of Visualization and Interactive Systems, University of Stuttgart, Germany; 2School of Computing Science, Simon Fraser University, Canada).
Abstract: We adopt a technique for texture-based visualization of flow fields on curved surfaces for parallel computation on a GPU cluster. The underlying LIC method relies on image-space calculations and allows the user to visualize a full 3D vector field on arbitrary and changing hypersurfaces. By using parallelization, both the visualization speed and the maximum data set size are scaled with the number of cluster nodes. A sort-first strategy with image-space decomposition is employed to distribute the workload for the LIC computation, while a sort-last approach with an object-space partitioning of the vector field is used to increase the total amount of available GPU memory. We specifically address issues for parallel GPU-based vector field visualization, such as reduced locality of memory accesses caused by particle tracing, dynamic load balancing for changing camera parameters, and the combination of image-space and object-space decomposition in a hybrid approach. Performance measurements document the behavior of our implementation on a GPU cluster with AMD Opteron CPUs, NVIDIA GeForce 6800 Ultra GPUs, and Infiniband network connection.
Categories and Subject Descriptors (according to ACM CCS): I.3.3 [Computer Graphics]: Viewing algorithms; I.3.3 [Three-Dimensional Graphics and Realism]: Color, shading, shadowing, and texture.
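To make the sort-first part of this scheme concrete, here is a minimal sketch, not the authors' code, of an image-space decomposition in which each cluster node renders one horizontal band of the frame and the band heights are rebalanced from the previous frame's per-node timings; the type and function names (Band, assignBands, frameTimes) are hypothetical.

```c
// Sort-first image-space decomposition with simple dynamic load balancing:
// a node that was faster last frame gets a proportionally larger band.
#include <stddef.h>

typedef struct { int y0, y1; } Band;          /* scanline range [y0, y1) */

/* frameTimes[i]: measured time node i needed for its band last frame (>0). */
void assignBands(const float *frameTimes, size_t numNodes,
                 int imageHeight, Band *bands) {
    float totalSpeed = 0.0f;
    for (size_t i = 0; i < numNodes; ++i)
        totalSpeed += 1.0f / frameTimes[i];

    int y = 0;
    for (size_t i = 0; i < numNodes; ++i) {
        float share = (1.0f / frameTimes[i]) / totalSpeed;
        int rows = (i + 1 == numNodes)
                     ? imageHeight - y                 /* absorb rounding */
                     : (int)(share * imageHeight + 0.5f);
        bands[i].y0 = y;
        bands[i].y1 = y + rows;
        y += rows;
    }
}
```

The sort-last, object-space half of the hybrid approach would instead split the vector field volume across nodes and composite the partial results, which is what lets the aggregate GPU memory grow with the cluster size.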
Pathscale ENZO GTC12 S0631 – Programming Heterogeneous Many-Cores Using Directives C
PathScale ENZO. GTC12 S0631 – Programming Heterogeneous Many-Cores Using Directives. C. Bergström, May 14th, 2012.
ENZO Overview & Goals. Speed the transition to GPU and many-core systems: simplify the task of migrating software written in C, C++ and Fortran; uses the OpenHMPP standard (easy migration); CAPS HMPP compatible. Performance and HPC focused: fully exploits NVIDIA GPU features; generates native instructions optimized for NVIDIA GPUs.
Project Schedule. ENZO production release June 2012: OpenHMPP 2.5, C, C++ and Fortran. Next ENZO production release October 2012: more tools and better support for libraries; x86_64 OpenHMPP task parallelism (similar to OMP3 tasks); more optimizations (IPA / CG2 / textures); OpenHMPP 3.0; CUDA 4.x; Kepler.
Project Status. OpenHMPP 2.5: running CAPS C and Fortran labs; PathScale-written HMPP test suite; customer code. New C++ compiler: Perennial C++VS and CVSA regression free; corner-case compile-time issues; corner-case runtime issues. Ongoing effort: performance tuning and benchmarking; compiler robustness; nightly compiler builds to address issues.
Performance. NVIDIA Tesla 2050 ("Lab2"), SGEMM with ENZO: /opt/enzo/bin/pathcc -hmpp …
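For readers unfamiliar with the directive model ENZO compiles, the hedged sketch below shows the typical OpenHMPP codelet/callsite pattern on a small matrix multiply. The clause spelling follows published OpenHMPP examples as best recalled and should be checked against the ENZO/CAPS documentation for a given release; the function mxm is purely illustrative.

```c
// A "codelet" is a pure function the compiler can offload to the GPU; a
// "callsite" marks where the offloaded version should be invoked.
#include <stdio.h>

#define N 256

#pragma hmpp mxm codelet, target=CUDA, args[c].io=inout
void mxm(int n, float a[N][N], float b[N][N], float c[N][N])
{
    for (int i = 0; i < n; ++i)          /* plain loop nest: the compiler   */
        for (int j = 0; j < n; ++j) {    /* maps it onto the GPU grid       */
            float acc = 0.0f;
            for (int k = 0; k < n; ++k)
                acc += a[i][k] * b[k][j];
            c[i][j] = acc;
        }
}

static float A[N][N], B[N][N], C[N][N];

int main(void)
{
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j) { A[i][j] = 1.0f; B[i][j] = 2.0f; }

#pragma hmpp mxm callsite
    mxm(N, A, B, C);

    printf("C[0][0] = %f\n", C[0][0]);   /* expect 2*N = 512 */
    return 0;
}
```

Because the directives are plain pragmas, the same source still builds and runs serially when compiled without HMPP support, which is the single-source property these toolchains emphasize.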
Evolution of the Graphical Processing Unit
University of Nevada, Reno. Evolution of the Graphical Processing Unit. A professional paper submitted in partial fulfillment of the requirements for the degree of Master of Science with a major in Computer Science, by Thomas Scott Crow. Dr. Frederick C. Harris, Jr., Advisor. December 2004.
Dedication. To my wife Windee, thank you for all of your patience, intelligence and love.
Acknowledgements. I would like to thank my advisor Dr. Harris for his patience and the help he has provided me. The field of Computer Science needs more individuals like Dr. Harris. I would like to thank Dr. Mensing for unknowingly giving me an excellent model of what a Man can be and for his confidence in my work. I am very grateful to Dr. Egbert and Dr. Mensing for agreeing to be committee members and for their valuable time. Thank you jeffs.
Abstract. In this paper we discuss some major contributions to the field of computer graphics that have led to the implementation of the modern graphical processing unit. We also compare the performance of matrix-matrix multiplication on the GPU to the same computation on the CPU. Although the CPU performs better in this comparison, modern GPUs have a lot of potential since their rate of growth far exceeds that of the CPU. The history of the rate of growth of the GPU shows that the transistor count doubles every 6 months, whereas that of the CPU doubles only every 18 months. There is currently much research going on regarding general-purpose computing on GPUs, and although there has been moderate success, there are several issues that keep the commodity GPU from expanding out from pure graphics computing, with limited cache bandwidth being one…
Multiprocessing Contents
Contents: 1 Multiprocessing (1.1 Pre-history; 1.2 Key topics: 1.2.1 Processor symmetry, 1.2.2 Instruction and data streams, 1.2.3 Processor coupling, 1.2.4 Multiprocessor Communication Architecture; 1.3 Flynn's taxonomy: 1.3.1 SISD multiprocessing, 1.3.2 SIMD multiprocessing, 1.3.3 MISD multiprocessing, 1.3.4 MIMD multiprocessing; 1.4 See also; 1.5 References). 2 Computer multitasking (2.1 Multiprogramming; 2.2 Cooperative multitasking; 2.3 Preemptive multitasking; 2.4 Real time; 2.5 Multithreading; 2.6 Memory protection; 2.7 Memory swapping; 2.8 Programming; 2.9 See also; 2.10 References…)
Locality-Aware Automatic Parallelization for GPGPU with Openhmpp Directives
Locality-Aware Automatic Parallelization for GPGPU with OpenHMPP Directives. José M. Andión, Manuel Arenaz, François Bodin, Gabriel Rodríguez and Juan Touriño. 7th International Symposium on High-Level Parallel Programming and Applications (HLPP 2014), July 3-4, 2014, Amsterdam, Netherlands.
Outline: motivation: general-purpose computation with GPUs; GPGPU with CUDA & OpenHMPP; the KIR: an IR for the detection of parallelism; locality-aware generation of efficient GPGPU code; case studies: CONV3D & SGEMM; performance evaluation; conclusions & future work.
[Figure: growth of single-processor performance over time, plotting Intel, AMD, IBM, HP, and Digital/DEC Alpha systems and annotated with growth-rate trend lines of 52%/year and 22%/year; the axis label is truncated in the source ("Performance (vs. …").]
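The locality issues this paper targets, coalesced global memory accesses and data reuse through shared memory, can be illustrated with a generic hand-written tiled SGEMM kernel. This is not the code produced by the authors' OpenHMPP-based approach; TILE, sgemm_tiled, and the assumption that n is a multiple of TILE are choices made only for this sketch.

```cuda
// C = A * B for square n x n matrices in row-major layout.
#define TILE 16

__global__ void sgemm_tiled(const float *A, const float *B, float *C, int n)
{
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;   // output row this thread owns
    int col = blockIdx.x * TILE + threadIdx.x;   // output column
    float acc = 0.0f;

    for (int t = 0; t < n / TILE; ++t) {
        // Adjacent threads (threadIdx.x) read adjacent addresses: coalesced.
        As[threadIdx.y][threadIdx.x] = A[row * n + (t * TILE + threadIdx.x)];
        Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * n + col];
        __syncthreads();

        for (int k = 0; k < TILE; ++k)           // each staged value reused TILE times
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }
    C[row * n + col] = acc;
}
```

A matching launch would use dim3 block(TILE, TILE) and dim3 grid(n/TILE, n/TILE); the point of a locality-aware code generator is to reach this kind of access pattern automatically from annotated sequential loops.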
Nvidia Corporation 2016 Annual Report
2017 NVIDIA Corporation Annual Review; Notice of Annual Meeting; Proxy Statement; Form 10-K.
"The age of the GPU is upon us." (The Next Platform) A decade ago, we set out to transform the GPU into a powerful computing platform—a specialized tool for the da Vincis and Einsteins of our time. GPU computing has since opened a floodgate of innovation. From gaming and VR to AI and self-driving cars, we're at the center of the most promising trends in our lifetime. The world has taken notice. Jensen Huang, CEO and Founder, NVIDIA.
"NVIDIA GeForce has moved from graphics card to gaming platform." (Forbes) The PC drives the growth of computer gaming, the largest entertainment industry in the world. The mass popularity of eSports, the evolution of gaming into a social medium, and the advance of new technologies like 4K, HDR, and VR will fuel its further growth. Today, 200 million gamers play on GeForce, the world's largest gaming platform. Our breakthrough NVIDIA Pascal architecture delighted gamers, and we built on its foundation with new capabilities, like NVIDIA Ansel, the most advanced in-game photography system ever built. To serve the 1 billion new or infrequent gamers whose PCs are not ready for today's games, we brought the GeForce NOW game streaming service to Macs and PCs. [Image: Mass Effect: Andromeda. Courtesy of Electronic Arts.]
"The best Android TV device just got better." (Engadget) [Image: NVIDIA SHIELD TV, controller, and remote.] NVIDIA SHIELD is taking NVIDIA gaming and AI into living rooms around the world. The most advanced streamer now boasts 1,000 games, has the largest open catalog of media in 4K, and can serve as the brain of the AI home…
Literature Review and Implementation Overview: High Performance Computing with Graphics Processing Units for Classroom and Research Use
Literature Review and Implementation Overview: High Performance Computing with Graphics Processing Units for Classroom and Research Use. A Preprint. Nathan George, Department of Data Sciences, Regis University, Denver, CO 80221. [email protected]. May 18, 2020. arXiv:2005.07598v1 [cs.DC] 13 May 2020.
Abstract. In this report, I discuss the history and current state of GPU HPC systems. Although high-power GPUs have only existed a short time, they have found rapid adoption in deep learning applications. I also discuss an implementation of a commodity-hardware NVIDIA GPU HPC cluster for deep learning research and academic teaching use.
Keywords: GPU · Neural Networks · Deep Learning · HPC
1 Introduction, Background, and GPU HPC History. High performance computing (HPC) is typically characterized by large amounts of memory and processing power. HPC, sometimes also called supercomputing, has been around since the 1960s with the introduction of the CDC STAR-100, and continues to push the limits of computing power and capabilities for large-scale problems [1, 2]. However, the use of graphics processing units (GPUs) in HPC supercomputers only started in the mid to late 2000s [3, 4]. Although graphics processing chips have been around since the 1970s, GPUs were not widely used for computations until the 2000s. During the early 2000s, GPU clusters began to appear for HPC applications. Most of these clusters were designed to run large calculations requiring vast computing power, and many clusters are still designed for that purpose [5]. GPUs have been increasingly used for computations due to their commodification, following Moore's Law (demonstrated in Figure 1), and usage in specific applications like neural networks…
How to Write Code That Will Survive
Programming Heterogeneous Many-cores Using Directives: HMPP - OpenACC. F. Bodin, CAPS CTO, 2012 (www.caps-entreprise.com).
Introduction. Programming many-core systems faces the following dilemma: achieve "portable" performance across multiple forms of parallelism cohabiting (multiple devices, e.g. GPUs, with their own address space; multiple threads inside a device; vector/SIMD parallelism inside a thread; massive parallelism, with tens of thousands of threads needed), under the constraint of keeping a unique version of the code, preferably mono-language (reduces maintenance cost, preserves code assets, is less sensitive to fast-moving hardware targets, and lets codes last several generations of hardware architecture). For legacy codes, a directive-based approach may be an alternative, and may benefit from auto-tuning techniques.
Profile of a Legacy Application. Written in C/C++/Fortran; a mix of user code and library calls, typically of the form while(many){ ... mylib1(A,B); ... myuserfunc1(B,A); ... mylib2(A,B); ... myuserfunc2(B,A); ... }. Hotspots may or may not be parallel; the lifetime is in tens of years; the code cannot be fully re-written; migration can be risky and mandatory.
Overview of the Presentation. Many-core architectures: definition and forecast; why usual parallel programming techniques won't work per se. Directive-based programming: OpenACC sets of directives; HMPP directives; library integration issues. Toward a portable infrastructure for auto-tuning: current auto-tuning directives in HMPP 3.0; CodeletFinder for offline auto-tuning; toward a standard auto-tuning interface.
Many-Core Architectures: Heterogeneous Many-Cores. Many general-purpose cores coupled with a massively parallel accelerator (HWA); the CPU and HWA are linked with a PCIx bus; data/stream/vector parallelism is to be exploited by the HWA, e.g. …
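As a minimal, self-contained example of the directive style this tutorial advocates, the sketch below annotates a SAXPY hotspot with standard OpenACC clauses. It is not taken from the CAPS slides, and saxpy simply stands in for a real legacy hotspot.

```c
// The hotspot loop is annotated; host/accelerator data movement is expressed
// with copyin/copy clauses, and the compiler generates the offloaded version.
#include <stdio.h>

#define N (1 << 20)

void saxpy(int n, float a, const float *x, float *y)
{
#pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
    for (int i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}

int main(void)
{
    static float x[N], y[N];
    for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    saxpy(N, 3.0f, x, y);               /* y[i] = 3*1 + 2 = 5 everywhere */

    printf("y[0] = %f, y[N-1] = %f\n", y[0], y[N - 1]);
    return 0;
}
```

If the pragma is ignored, the same source compiles and runs serially, which is exactly the single-version-of-the-code property the slides argue directive-based migration preserves.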
AMD Pcie® ADD-IN BOARD
NVIDIA GT610 PCIe® for passive ADD-IN BOARD. Datasheet AEGX-N3A1-01FSS1.
Contents: 1. Feature; 2. Functional Overview (2.1 GPU Block Diagram, 2.2 GPU, 2.3 Memory Interface, 2.4 Features and Technologies, 2.5 Display Support, 2.6 Digital Audio, 2.7 Video); 3. Pin Assignment and Description (3.1 DVI-I Connector Pinout, 3.2 HDMI 1.4a Connector Pinout, 3.3 VGA Connector Pinout, 3.4 VGA Header Pinout)…