NVIDIA GPUs in Earth System Modeling

Total Pages: 16

File Type: PDF, Size: 1020 KB

NVIDIA GPUs in Earth System Modelling
Thomas Bradley, NVIDIA Corporation, 2011

Agenda: GPU Developments for CWO
  • Motivation for GPUs in CWO
  • Parallelisation Considerations
  • GPU Technology Roadmap

MOTIVATION FOR GPUS IN CWO

NVIDIA GPUs Power 3 of the Top 5 Supercomputers
  • #2 Tianhe-1A: 7,168 Tesla GPUs, 2.5 PFLOPS
  • #4 Nebulae: 4,650 Tesla GPUs, 1.2 PFLOPS
  • #5 Tsubame 2.0: 4,224 Tesla GPUs, 1.194 PFLOPS
  "We not only created the world's fastest computer, but also implemented a heterogeneous computing architecture incorporating CPU and GPU, this is a new innovation." (Premier Wen Jiabao, public comments acknowledging Tianhe-1A)

GPU Systems: More Power Efficient (for HPL)
  [Chart: HPL performance (Gigaflops) and power (Megawatts) for Tianhe-1A, Jaguar, Nebulae, Tsubame and LBNL, comparing GPU+CPU supercomputers with CPU-only supercomputers.]

Comparison with the Top Supercomputer, K (Japan)
  K Computer (custom SPARC processors) vs. Tsubame 2.0 (Intel CPUs + NVIDIA Tesla GPUs):
  • Performance: 8.1 PetaFlops vs. 1.2 PetaFlops
  • Processors: 68,500 CPUs vs. 2K CPUs + 4K GPUs
  • Racks: 672 vs. 44 (2.3x better flops/rack for Tsubame)
  • Power: 10 Megawatts vs. 1.4 Megawatts (1.06x better flops/watt)
  • Cost: $700 Million vs. $40 Million (2.6x better $/flop)

Real Science on GPUs: ASUCA NWP on Tsubame
  • Tsubame 2.0, Tokyo Institute of Technology: 1.19 Petaflops, 4,224 Tesla M2050 GPUs
  • ASUCA on 3,990 Tesla M2050s: 145.0 Tflops single precision, 76.1 Tflops double precision
  • Simulation run on Tsubame 2.0, the TiTech supercomputer

CWO Performance: Full GPU Approach
  • Physics only: WRF, COSMO (1)
  • Dynamics only: COSMO (2), ICON, NIM (sample physics), CAM, HOMME, HIRLAM
  • Full GPU approach: ASUCA, GEOS-5, GRAPE

NVIDIA Features GPUs at Conferences
  Supercomputing 2010 | Nov 2010 | New Orleans, LA
  • COSMO: GPU Considerations for Next Generation Weather Simulations. Thomas Schulthess, Swiss National Supercomputing Centre (CSCS)
  • ASUCA: Full GPU Implementation of Weather Prediction Code on TSUBAME Supercomputer. Takayuki Aoki, GSIC of Tokyo Institute of Technology (TiTech)
  • NIM: Using GPUs to Run Next-Generation Weather Models. Mark Govett, National Oceanic and Atmospheric Administration (NOAA)
  • BoF: GPUs and Numerical Weather Prediction (organized by CSCS and NVIDIA). Featured organizations: TiTech (ASUCA), NASA (GEOS-5), NOAA (NIM), Cray, PGI
  NVIDIA GPU Technology Conference | Sep 2010 | San Jose, CA
  • ASUCA: Full GPU Implementation of Weather Prediction Code on TSUBAME Supercomputer. Takayuki Aoki, GSIC of Tokyo Institute of Technology (TiTech)
  • NIM: Using GPUs to Run Next-Generation Weather Models. Mark Govett, National Oceanic and Atmospheric Administration (NOAA)
  • MITgcm: Designing a Geoscience Accelerator Library Accessible from High Level Languages. Chris Hill, Massachusetts Institute of Technology (MIT)

PARALLELISATION CONSIDERATIONS

GPU Considerations for CWO Codes
  • Initial efforts are mostly implicit linear solvers on the GPU
  • If the linear solver is ~50% of profile time, only a 2x speed-up is possible
  • More of the application must be moved to the GPU for additional benefit
  • Explicit schemes: no linear algebra, no solver, operations on a stencil
  • Most codes are parallel and scale across multiple CPU cores
  • Multi-core CPUs can contribute to parallel matrix assembly, among other tasks
  • Most codes use a domain decomposition parallel method
    - Fits the GPU model very well and preserves the costly MPI investment
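To make the "only 2x" bound above explicit (a standard Amdahl's-law estimate, not taken from the slides): if a fraction p of the profile time is spent in the linear solver and only that part is accelerated by a factor s, the overall speed-up is

    speedup = 1 / ((1 - p) + p / s)

With p = 0.5, even an arbitrarily large s gives at most 1 / 0.5 = 2, which is why moving only the solver to the GPU caps the gain at about 2x.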
Options for Parallel Programming of GPUs (approach: examples)
  • Applications: MATLAB, Mathematica, LabVIEW
  • Libraries: FFT, BLAS, SPARSE, RNG, IMSL, CUSP, etc.
  • Directives: PGI Accelerator, HMPP, Cray, F2C-Acc
  • Wrappers: PyCUDA, CUDA.NET, jCUDA
  • Languages: CUDA C/C++, PGI CUDA Fortran, GPU.net
  • APIs: CUDA C/C++, OpenCL

Most Implementations Focus on the Dynamical Core
  [Diagram: the application code is split; the dynamics is parallelized with CUDA and runs on the GPU, while the rest of the code stays on the CPU.]

Sparse Iterative Solvers for Dynamics
  • Sparse matrix-vector multiply (SpMV) & BLAS1
  • Memory-bound
  • The GPU can deliver good SpMV performance: ~10-20 Gflops for unstructured matrices in double precision
  • The best sparse matrix data structure on the GPU is different from the CPU; explore for your specific case
  • A massively parallel preconditioner is key
  • Lectures: Jon Cohen at the IMA Workshop, "Thinking parallel: sparse iterative solvers with CUDA"; Nathan Bell (4 parts) at PASI, "Iterative methods for sparse linear systems on GPU"

Typical Sparse Matrix Formats
  [Figure: example sparsity patterns of structured and unstructured sparse matrices.]

Hybrid Sparse Matrix Format for GPUs
  • ELL handles the typical entries
  • COO handles the exceptional entries; implemented with a segmented reduction
  • There is some overhead in matrix format conversion, which can be hidden if the solver runs O(100) iterations

SpMV Performance for Unstructured Matrices
  [Chart: SpMV throughput in GFLOP/s (0-18) for the COO, CSR (scalar), CSR (vector) and HYB formats. Flops = 2*nz/t, BW = (2*sizeof(double) + sizeof(int))*nz/t.]

GPU TECHNOLOGY

Soul of NVIDIA's GPU Roadmap
  • Increase performance per watt
  • Make parallel programming easier
  • Run more of the application on the GPU

Tesla CUDA GPU Roadmap
  [Chart: DP GFLOPS per watt by year: T10 (2008), Fermi (2010), Kepler (2012), Maxwell (2014).]

NVIDIA Announced "Project Denver", Jan 2011
  "It's true folks, NVIDIA's building a CPU! Madness! The future just got a lot more exciting."
  http://www.engadget.com/2011/01/05/nvidia-announces-project-denver-arm-cpu-for-the-desktop/

Tegra Roadmap
  [Chart: relative performance by year on a log scale: Tegra 2 (2010), Kal-El (2011), Wayne (2012), Logan (2013), Stark (2014); Kal-El is marked at roughly 5x, with a Core 2 Duo reference level shown.]

CUDA 4.0: Big Leap in Usability
  • CUDA 1.0: program GPUs using C, libraries
  • CUDA 2.0: debugging, profiling, double precision
  • CUDA 3.0: Fermi, new libraries, big performance boost
  • CUDA 4.0: parallel programming made easy
  [Chart axes: ease of use vs. performance.]

NVIDIA GPUDirect™: Eliminating CPU Overhead
  Accelerated communication with network and storage devices (supported since CUDA 3.1):
  • Direct access to CUDA memory for 3rd-party devices
  • Eliminates unnecessary memory copies & CPU overhead
  • Supported by Mellanox and QLogic
  Peer-to-peer communication between GPUs (new in CUDA 4.0):
  • Peer-to-peer memory access, transfers & synchronization
  • Less code, higher programmer productivity
  http://developer.nvidia.com/gpudirect
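Not from the slides: a minimal C sketch of the CUDA 4.0 peer-to-peer path described above, assuming two GPUs in the same system; the buffer names and the size N are illustrative only.

    #include <cuda_runtime.h>
    #include <stdio.h>

    int main(void) {
        const size_t N = 1 << 20;              /* 1M floats, illustrative size */
        float *buf0 = NULL, *buf1 = NULL;

        /* Allocate a buffer on each device. */
        cudaSetDevice(0);
        cudaMalloc(&buf0, N * sizeof(float));
        cudaSetDevice(1);
        cudaMalloc(&buf1, N * sizeof(float));

        /* Check whether device 1 can access device 0's memory directly. */
        int canAccess = 0;
        cudaDeviceCanAccessPeer(&canAccess, 1, 0);
        if (canAccess) {
            cudaSetDevice(1);
            cudaDeviceEnablePeerAccess(0, 0);  /* map device 0's memory into device 1 */
        }

        /* Copy device 0 -> device 1; uses the P2P path when enabled,
           otherwise CUDA stages the copy through host memory. */
        cudaMemcpyPeer(buf1, 1, buf0, 0, N * sizeof(float));
        cudaDeviceSynchronize();

        printf("peer access %s\n", canAccess ? "enabled" : "not available");

        cudaFree(buf1);
        cudaSetDevice(0);
        cudaFree(buf0);
        return 0;
    }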
MPI Integration of NVIDIA GPUDirect™
  MPI libraries with support for NVIDIA GPUDirect and Unified Virtual Addressing (UVA) enable:
  • MPI transfer primitives that copy data directly to/from GPU memory
  • The MPI library can differentiate between device memory and host memory without any hints from the user
  • Programmer productivity: less application code for data transfers

  Code without MPI integration:
    At the sender:
      cudaMemcpy(s_buf, s_device, size, cudaMemcpyDeviceToHost);
      MPI_Send(s_buf, size, MPI_CHAR, 1, 1, MPI_COMM_WORLD);
    At the receiver:
      MPI_Recv(r_buf, size, MPI_CHAR, 0, 1, MPI_COMM_WORLD, &status);
      cudaMemcpy(r_device, r_buf, size, cudaMemcpyHostToDevice);

  Code with MPI integration:
    At the sender:
      MPI_Send(s_device, size, …);
    At the receiver:
      MPI_Recv(r_device, size, …);

Open MPI
  • Transfer data directly to/from CUDA device memory via MPI calls
  • The code is currently available in the Open MPI trunk (contributed by NVIDIA): http://www.open-mpi.org/nightly/trunk
  • More details in the Open MPI FAQ:
    - Features: http://www.open-mpi.org/faq/?category=running#mpi-cuda-support
    - Build instructions: http://www.open-mpi.org/faq/?category=building#build-cuda

MVAPICH2-GPU
  • Upcoming MVAPICH2 support for GPU-GPU communication, with memory detection and overlap of CUDA copies and RDMA transfers
  • With GPUDirect: 45% improvement compared to Memcpy+Send (4 MB); 24% compared to MemcpyAsync+Isend (4 MB); 45% compared to Memcpy+Put
  • Without GPUDirect: 38% improvement compared to Memcpy+Send (4 MB); 33% compared to MemcpyAsync+Isend (4 MB); 39% compared to Memcpy+Put; similar improvements for Get operations
  • Major improvement in programming
  • Measurements from: H. Wang, S. Potluri, M. Luo, A. Singh, S. Sur and D. K. Panda, "MVAPICH2-GPU: Optimized GPU to GPU Communication for InfiniBand Clusters", Int'l Supercomputing Conference 2011 (ISC), Hamburg. http://mvapich.cse.ohio-state.edu/

GPUDirect: Further Information
  • http://developer.nvidia.com/gpudirect
  • More details, including supported configurations, instructions and system design guidelines
  • Also talk to NVIDIA Solution Architects

QUESTIONS
Recommended publications
  • Linpack Evaluation on a Supercomputer with Heterogeneous Accelerators
    Linpack Evaluation on a Supercomputer with Heterogeneous Accelerators. Toshio Endo (Graduate School of Information Science and Engineering, Tokyo Institute of Technology), Akira Nukada (Global Scientific Information and Computing Center, Tokyo Institute of Technology), Satoshi Matsuoka (Global Scientific Information and Computing Center, Tokyo Institute of Technology / National Institute of Informatics), Naoya Maruyama (Global Scientific Information and Computing Center, Tokyo Institute of Technology). Tokyo, Japan.
    Abstract: We report Linpack benchmark results on the TSUBAME supercomputer, a large-scale heterogeneous system equipped with NVIDIA Tesla GPUs and ClearSpeed SIMD accelerators. With all of its 10,480 Opteron cores, 640 Xeon cores, 648 ClearSpeed accelerators and 624 NVIDIA Tesla GPUs, we have achieved 87.01 TFlops, which is the third-highest record for a heterogeneous system in the world. This paper describes the careful tuning and load balancing method required to achieve this performance. On the other hand, since the peak speed is 163 TFlops, the efficiency is 53%, which is lower than other systems.
    ... Roadrunner or other systems described above, it includes two types of accelerators. This is due to incremental upgrade of the system, which has been the case in commodity CPU clusters; they may have processors with different speeds as a result of incremental upgrade. In this paper, we present a Linpack implementation and evaluation results on TSUBAME with 10,480 Opteron cores, 624 Tesla GPUs and 648 ClearSpeed accelerators. In the evaluation, we also used a ...
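    A quick check of the figures quoted in this abstract (the arithmetic is ours, not the authors'): HPL efficiency = achieved / peak = 87.01 TFlops / 163 TFlops ≈ 0.53, i.e. the 53% cited above.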
  • Tsubame 2.5 Towards 3.0 and Beyond to Exascale
    Being Very Green with Tsubame 2.5 towards 3.0 and beyond to Exascale. Satoshi Matsuoka, Professor, Global Scientific Information and Computing (GSIC) Center, Tokyo Institute of Technology; ACM Fellow / SC13 Tech Program Chair. NVIDIA Theater Presentation, 2013/11/19, Denver, Colorado.
    TSUBAME2.0, Nov. 1, 2010: "The Greenest Production Supercomputer in the World". [Diagram: TSUBAME 2.0 new development, from system to node level: >600 TB/s memory BW, 220 Tbps network bisection BW, 1.4 MW max; >12 TB/s memory BW, 35 kW max; >1.6 TB/s memory BW; >400 GB/s memory BW, 80 Gbps network BW, ~1 kW max; 32 nm and 40 nm process technology.]
    [Chart: peak performance (GFLOPS) and memory bandwidth (GByte/s) of GPU vs. CPU.] A 5-6x socket-to-socket advantage in both compute and memory bandwidth, at the same power (200 W GPU vs. 200 W CPU + memory + NW + …).
    TSUBAME2.0 compute node (thin node, productized as the HP ProLiant SL390s; HP SL390G7, developed for TSUBAME 2.0): 1.6 Tflops, 400 GB/s memory BW, 80 Gbps network (Infiniband QDR x2), ~1 kW max. GPU: NVIDIA Fermi M2050 x3, 515 GFlops and 3 GByte memory per GPU. CPU: Intel Westmere-EP 2.93 GHz x2 (12 cores/node). Multiple I/O chips, 72 PCI-e lanes (16 x 4 + 4 x 2) for 3 GPUs + 2 IB QDR. Memory: 54 or 96 GB DDR3-1333. SSD: 60 GB x2 or 120 GB x2. Total performance: 2.4 PFlops; memory: ~100 TB; SSD: ~200 TB.
    2010: TSUBAME2.0 as No. 1 in Japan: 2.4 Petaflops total, #4 on the Top500 (Nov.), more than all other Japanese centers on the Top500 combined (2.3 PetaFlops).
  • GPUs: The Hype, The Reality, and The Future
    Uppsala Programming for Multicore Architectures Research Center. GPUs: The Hype, The Reality, and The Future. David Black-Schaffer, Assistant Professor, Department of Information Technology, Uppsala University. 25/11/2011.
    Today: 1. The hype. 2. What makes a GPU a GPU? 3. Why are GPUs scaling so well? 4. What are the problems? 5. What's the future?
    THE HYPE. [Slide: How good are GPUs? 100x, 3x.]
    Real-world software. Press release, 10 Nov 2011: "NVIDIA today announced that four leading applications… have added support for multiple GPU acceleration, enabling them to cut simulation times from days to hours." GROMACS: 2-3x overall (implicit solvers 10x, PME simulations 1x). LAMMPS: 2-8x for double precision, up to 15x for mixed. QMCPACK: 3x. 2x is AWESOME! Most research claims 5-10%.
    GPUs for linear algebra: 5 CPUs = 75% of 1 GPU (StarPU for MAGMA).
    [Chart: GPUs by the numbers (peak and TDP): watts, GFLOPS and bandwidth normalized to an Intel 3960X; Intel 32 nm vs. 40 nm is 36% smaller per transistor.]
  • Accelerating Applications with Pattern-Specific Optimizations on Accelerators and Coprocessors
    Accelerating Applications with Pattern-specific Optimizations on Accelerators and Coprocessors. Dissertation presented in partial fulfillment of the requirements for the degree Doctor of Philosophy in the Graduate School of The Ohio State University. By Linchuan Chen, B.S., M.S. Graduate Program in Computer Science and Engineering, The Ohio State University, 2015. Dissertation Committee: Dr. Gagan Agrawal (Advisor), Dr. P. Sadayappan, Dr. Feng Qin. © Copyright by Linchuan Chen, 2015.
    Abstract: Because of the bottleneck in the increase of clock frequency, multi-cores emerged as a way of improving the overall performance of CPUs. In the recent decade, many-cores have begun to play a more and more important role in scientific computing. The highly cost-effective nature of many-cores makes them extremely suitable for data-intensive computations. Specifically, many-cores come in the form of GPUs (e.g., NVIDIA or AMD GPUs) and, more recently, coprocessors (Intel MIC). Even though these highly parallel architectures offer a significant amount of computation power, it is very hard to program them, and harder still to fully exploit their computation power. Combining the power of multi-cores and many-cores, i.e., making use of the heterogeneous cores, is extremely complicated. Our efforts have been devoted to optimizing important sets of applications on such parallel systems. We address this issue from the perspective of communication patterns. Scientific applications can be classified based on their properties (communication patterns), which were specified in the Berkeley Dwarfs many years ago. By investigating the characteristics of each class, we are able to derive efficient execution strategies across different levels of parallelism.
  • NVIDIA Corp NVDA (XNAS)
    Morningstar Equity Analyst Report | Report as of 14 Sep 2020, 04:02 UTC | Page 1 of 14. NVIDIA Corp NVDA (XNAS).
    Summary: Morningstar Rating Q; Last Price 486.58 USD (11 Sep 2020); Fair Value Estimate 250.00 USD (20 Aug 2020); Price/Fair Value 1.95; Trailing Dividend Yield 0.13%; Forward Dividend Yield 0.13%; Market Cap 300.22 Bil; Industry: Semiconductors; Stewardship: Exemplary.
    Morningstar Pillars (Analyst / Quantitative): Economic Moat: Narrow / Wide; Valuation: Q Overvalued; Uncertainty: Very High / High; Financial Health: – / Moderate.
    Quantitative Valuation (NVDA, USA; current / 5-yr avg / sector / country): Price/Quant Fair Value 1.67 / 1.43 / 0.77 / 0.83; Price/Earnings 89.3 / 37.0 / 21.4 / 20.1.
    "Nvidia to Buy ARM in $40 Billion Deal with Eyes Set on Data Center Dominance; Maintain FVE." Source: Morningstar Equity Research.
    Business Strategy and Outlook (Abhinav Davuluri, CFA, Analyst, 19 August 2020): Nvidia is the leading designer of graphics processing units that enhance the visual experience on computing platforms. The firm's chips are used in a variety of end markets, including high-end PCs for gaming, data centers, ...
    Analyst Note (Abhinav Davuluri, CFA, Analyst, 13 September 2020): On Sept. 13, Nvidia announced it would acquire ARM from the SoftBank Group in a transaction valued at $40 billion. ... could limit Nvidia's future growth.
    Important Disclosure: The conduct of Morningstar's analysts is governed by the Code of Ethics/Code of Conduct Policy, Personal Security Trading Policy (or an equivalent), and Investment Research Policy. For information regarding conflicts of interest, please visit http://global.morningstar.com/equitydisclosures
  • (Intel® OPA) for Tsubame 3
    CASE STUDY: High Performance Computing (HPC) with Intel® Omni-Path Architecture. Tokyo Institute of Technology Chooses Intel® Omni-Path Architecture for Tsubame 3. Price/performance, thermal stability, and adaptive routing are key features for enabling #1 on the Green500 list.
    Challenge: How do you make a good thing better? Professor Satoshi Matsuoka of the Tokyo Institute of Technology (Tokyo Tech) has been designing and building high-performance computing (HPC) clusters for 20 years. Among the systems he and his team at Tokyo Tech have architected, Tsubame 1 (2006) and Tsubame 2 (2010) have shown him the importance of heterogeneous HPC systems for scientific research, analytics, and artificial intelligence (AI). Tsubame 2, built on Intel® Xeon® processors and Nvidia* GPUs with InfiniBand* QDR, was Japan's first peta-scale HPC production system; it achieved #4 on the Top500, was the #1 Green500 production supercomputer, and was the fastest supercomputer in Japan at the time. For Matsuoka, the next-generation machine needed to take all the goodness of Tsubame 2, enhance it with new technologies to not only advance all the current and latest generations of simulation codes, but also drive the latest application targets (which included deep learning/machine learning, AI, and very big data analytics) and make it more efficient than its predecessor.
    Tsubame at a glance:
    • Tsubame 3, the second-generation large, production cluster based on heterogeneous computing at Tokyo Institute of Technology (Tokyo Tech); #61 on the June 2017 Top500 list and #1 on the June 2017 Green500 list
    • The system is based upon HPE Apollo* 8600 blades, which are smaller than a 1U server, ...
  • Massively Parallel Computation Using Graphics Processors with Application to Optimal Experimentation in Dynamic Control
    Munich Personal RePEc Archive. Massively parallel computation using graphics processors with application to optimal experimentation in dynamic control. Morozov, Sergei and Mathur, Sudhanshu (Morgan Stanley). 10 August 2009. Online at https://mpra.ub.uni-muenchen.de/30298/. MPRA Paper No. 30298, posted 03 May 2011 17:05 UTC.
    MASSIVELY PARALLEL COMPUTATION USING GRAPHICS PROCESSORS WITH APPLICATION TO OPTIMAL EXPERIMENTATION IN DYNAMIC CONTROL. SERGEI MOROZOV AND SUDHANSHU MATHUR.
    Abstract: The rapid growth in the performance of graphics hardware, coupled with recent improvements in its programmability, has led to its adoption in many non-graphics applications, including a wide variety of scientific computing fields. At the same time, a number of important dynamic optimal policy problems in economics are athirst for computing power to help overcome the dual curses of complexity and dimensionality. We investigate whether computational economics may benefit from these new tools on a case study of an imperfect-information dynamic programming problem with a learning and experimentation trade-off, that is, a choice between controlling the policy target and learning system parameters. Specifically, we use a model of active learning and control of a linear autoregression with an unknown slope that has appeared in a variety of macroeconomic policy and other contexts. The endogeneity of posterior beliefs makes the problem difficult in that the value function need not be convex and the policy function need not be continuous. This complication makes the problem a suitable target for massively-parallel computation using graphics processors (GPUs). Our findings are cautiously optimistic in that the new tools let us easily achieve a factor of 15 performance gain relative to an implementation targeting single-core processors.
  • TSUBAME---A Year Later
    TSUBAME --- A Year Later. Satoshi Matsuoka, Professor/Dr.Sci., Global Scientific Information and Computing Center, Tokyo Inst. Technology & NAREGI Project, National Inst. Informatics. EuroPVM/MPI, Paris, France, Oct. 2, 2007.
    Topics for today: Intro; Upgrades and other new stuff; New programs; The Top 500 and acceleration; Towards TSUBAME 2.0.
    [Diagram: The TSUBAME production "Supercomputing Grid Cluster", Spring 2006-2010 ("Fastest Supercomputer in Asia", 29th): Voltaire ISR9288 Infiniband 10 Gbps (DDR next ver.), ~1310+50 ports, ~13.5 Terabits/s (3 Tbits bisection); Sun Galaxy 4 nodes (dual-core Opteron, 8-socket), 10,480 cores / 655 nodes, 21.4 Terabytes, 50.4 TeraFlops, 10 Gbps + external network; OS Linux (SuSE 9, 10); unified IB network; NAREGI Grid MW; NEC SX-8i (for porting); storage 1.5 PB: 1.0 Petabyte (Sun "Thumper", 48-disk trays of 500 GB drives) and 0.1 Petabyte (NEC iStore); Lustre FS, NFS, CIFS, WebDAV (over IP); 50 GB/s aggregate I/O BW; ClearSpeed CSX600 SIMD accelerator, 360 boards, 70 GB/s, 35 TeraFlops (current).]
    Titech TSUBAME: ~76 racks, 350 m2 floor area, 1.2 MW (peak). Local Infiniband switch (288 ports); currently 2 GB/s per node, easily scalable to 8 GB/s per node; cooling towers (~32 units); ~500 TB out of 1.1 PB.
    TSUBAME assembled like an iPod… NEC: main integrator, storage, operations; SUN: Galaxy compute nodes, storage, Solaris; AMD: Opteron CPU (Fab36); Voltaire: Infiniband network; ClearSpeed: CSX600 accelerator; CFS: parallel FS; Novell: SuSE 9/10; NAREGI: Grid MW; Titech GSIC: us (vendors spanning the UK, Germany, the USA, Israel and Japan). The racks were ready ...
  • Tokyo Tech's TSUBAME 3.0 and AIST's AAIC Ranked 1st and 3rd on the Green500
    PRESS RELEASE. Sources: Tokyo Institute of Technology; National Institute of Advanced Industrial Science and Technology. For immediate release: June 21, 2017.
    Subject line: Tokyo Tech's TSUBAME 3.0 and AIST's AAIC ranked 1st and 3rd on the Green500 List.
    Highlights:
    ► Tokyo Tech's next-generation supercomputer TSUBAME 3.0 ranks 1st on the Green500 list (the ranking of the most energy-efficient supercomputers).
    ► AIST's AI Cloud, AAIC, ranks 3rd on the Green500 list, and 1st among air-cooled systems.
    ► These achievements were made possible through collaboration between Tokyo Tech and AIST via the AIST-Tokyo Tech Real World Big-Data Computation Open Innovation Laboratory (RWBC-OIL).
    The supercomputers at Tokyo Institute of Technology (Tokyo Tech) and the National Institute of Advanced Industrial Science and Technology (AIST) have been ranked 1st and 3rd, respectively, on the Green500 List, which ranks supercomputers worldwide in order of their energy efficiency. The rankings were announced on June 19 (German time) at the international conference ISC HIGH PERFORMANCE 2017 (ISC 2017) in Frankfurt, Germany. These achievements were made possible through our collaboration at the AIST-Tokyo Tech Real World Big-Data Computation Open Innovation Laboratory (RWBC-OIL), which was established on February 20th this year and is headed by Director Satoshi Matsuoka.
    [Photo: At the award ceremony. Fourth from left, left to right: Professor Satoshi Matsuoka, Specially Appointed Associate Professor Akira Nukada.]
    The TSUBAME 3.0 supercomputer of the Global Scientific Information and Computing Center (GSIC) at Tokyo Tech will commence operation in August 2017; it can achieve 14.110 GFLOPS per watt. It has been ranked 1st on the Green500 List of June 2017, making it Japan's first supercomputer to top the list.
  • World's Greenest Petaflop Supercomputers Built with NVIDIA Tesla GPUs
    World's Greenest Petaflop Supercomputers Built With NVIDIA Tesla GPUs: GPU Supercomputers Deliver World-Leading Performance and Efficiency in Latest Green500 List.
    (Related videos: leaders in GPU supercomputing talk about their Green500 systems; the Tianhe-1A supercomputer at the National Supercomputer Center in Tianjin; Tsubame 2.0 from Tokyo Institute of Technology; Tokyo Tech talks about their Tsubame 2.0 supercomputer, Parts 1 and 2.)
    NEW ORLEANS, LA--(Marketwire - November 18, 2010) - SC10 -- The "Green500" list of the world's most energy-efficient supercomputers was released today, revealing that the only petaflop system in the top 10 is powered by NVIDIA® Tesla™ GPUs. The system was Tsubame 2.0 from Tokyo Institute of Technology (Tokyo Tech), which was ranked number two. "The rise of GPU supercomputers on the Green500 signifies that heterogeneous systems, built with both GPUs and CPUs, deliver the highest performance and unprecedented energy efficiency," said Wu-chun Feng, founder of the Green500 and associate professor of Computer Science at Virginia Tech. GPUs have quickly become the enabling technology behind the world's top supercomputers. They contain hundreds of parallel processor cores capable of dividing up large computational workloads and processing them simultaneously. This significantly increases overall system efficiency as measured by performance per watt. "Top500" supercomputers based on heterogeneous architectures are, on average, almost three times more power-efficient than non-heterogeneous systems. Three other Tesla GPU-based systems made the top 10: the National Center for Supercomputing Applications (NCSA) and the Georgia Institute of Technology in the U.S. and the National Institute for Environmental Studies in Japan secured 3rd, 9th and 10th, respectively.
  • Parallelization Schemes & GPU Acceleration
    Parallelization schemes & GPU Acceleration. Erik Lindahl / Szilárd Páll. GROMACS USA workshop, September 13, 2013.
    Outline: MD algorithmic overview; Motivation: why parallelize/accelerate MD?; Acceleration on CPUs; Parallelization schemes; Heterogeneous/GPU acceleration.
    Important technical terms. Hardware: CPU core (physical, logical, "Bulldozer" module); node: compute machine (!= mdrun "node"); accelerators: GPUs, Intel MIC. Algorithmic/parallelization: accelerated code, compute kernels (CPU SIMD: SSE, AVX; GPU SIMT: CUDA); multi-threading: OpenMP (thread-MPI); SPMD, MPMD: MPI.
    Molecular dynamics: algorithm overview. Neighbor search/DD step: every 10-50 iterations. MD step: NS, DD; Bonded F; Non-bonded F; PME; Integration; Constraints (~milliseconds).
    Why parallelize/accelerate? Need for speed: MD is computationally demanding, but a fast* simulation needs a short time-step and strong scaling. Goal: making the MD step as short as possible; currently at peak ~100s of microseconds.
    Motivation: hardware evolution. Hardware is increasingly parallel on multiple levels and heterogeneous (accelerators): SIMD & SIMT; multicore, NUMA (x2); network: topology, bandwidth, latency (x100-1000s); accelerator: PCI-E, topology; memory & cache.
    Motivation: need to keep up! Multiple levels of hardware parallelism → need to address each level with suitable ...
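    Not part of the workshop slides: a minimal, generic C sketch of the two parallelization levels named above (SPMD with MPI across ranks, OpenMP multi-threading within each rank). It assumes an MPI library and an OpenMP-capable compiler and uses no GROMACS code.

      #include <mpi.h>
      #include <omp.h>
      #include <stdio.h>

      int main(int argc, char **argv) {
          /* SPMD: every rank runs the same program on its own domain. */
          MPI_Init(&argc, &argv);
          int rank, nranks;
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);
          MPI_Comm_size(MPI_COMM_WORLD, &nranks);

          /* Multi-threading within the rank (OpenMP). */
          #pragma omp parallel
          {
              printf("rank %d of %d, thread %d of %d\n",
                     rank, nranks, omp_get_thread_num(), omp_get_num_threads());
          }

          MPI_Finalize();
          return 0;
      }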
  • The GPU Computing Revolution
    The GPU Computing Revolution: From Multi-Core CPUs to Many-Core Graphics Processors. A Knowledge Transfer Report from the London Mathematical Society and the Knowledge Transfer Network for Industrial Mathematics. By Simon McIntosh-Smith. Copyright © 2011 by Simon McIntosh-Smith. September 2011. Edited by Robert Leese and Tom Melham. London Mathematical Society, De Morgan House, 57–58 Russell Square, London WC1B 4HS. KTN for Industrial Mathematics, Surrey Technology Centre, Surrey Research Park, Guildford GU2 7YG.
    Front cover image credits: top left: Umberto Shtanzman / Shutterstock.com; top right: godrick / Shutterstock.com; bottom left: Double Negative Visual Effects; bottom right: University of Bristol; background: Serg64 / Shutterstock.com.
    Contents: Executive Summary (p. 3); From Multi-Core to Many-Core: Background and Development (p. 4); Success Stories (p. 7); GPUs in Depth (p. 11); Current Challenges (p. 18); Next Steps (p. 19); Appendix 1: Active Researchers and Practitioner Groups (p. 21); Appendix 2: Software Applications Available on GPUs (p. 23); References (p. 24).
    AUTHOR: Simon McIntosh-Smith is head of the Microelectronics Research Group at the University of Bristol and chair of the Many-Core and Reconfigurable Supercomputing Conference (MRSC), Europe's largest conference dedicated to the use of massively parallel computer architectures. Prior to joining the university he spent fifteen years in industry, where he designed massively parallel hardware and software at companies such as Inmos, STMicroelectronics and Pixelfusion, before co-founding ClearSpeed as Vice-President of Architecture and Applications.