Titan: a New Leadership Computer for Science
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
ORNL Debuts Titan Supercomputer
Reporter: Retiree Newsletter, December 2012/January 2013

SCIENCE: ORNL debuts Titan supercomputer

ORNL has completed the installation of Titan, a supercomputer capable of churning through more than 20,000 trillion calculations each second (20 petaflops) by employing a family of processors called graphics processing units, first created for computer gaming. Titan will be 10 times more powerful than ORNL's last world-leading system, Jaguar, while overcoming power and space limitations inherent in the previous generation of high-performance computers. Titan, which is supported by the DOE, will provide unprecedented computing power for research in energy, climate change, efficient engines, materials and other disciplines, and pave the way for a wide range of achievements in science and technology.

[Image caption: ORNL is now home to Titan, the world's most powerful supercomputer for open science, with a theoretical peak performance exceeding 20 petaflops (quadrillion calculations per second). Image: Jason Richards]

[Pull quote: "Titan will provide unprecedented computing power for research in energy, climate change, materials and other disciplines to enable scientific leadership."]

The Cray XK7 system contains 18,688 nodes, with each holding a 16-core AMD Opteron 6274 processor and an NVIDIA Tesla K20 graphics processing unit (GPU) accelerator. Titan also has more than 700 terabytes of memory. The combination of central processing units, the traditional foundation of high-performance computers, and more recent GPUs will allow Titan to occupy the same space as its Jaguar predecessor while using only marginally more electricity. "One challenge in supercomputers today is power consumption," said Jeff Nichols, associate laboratory director for computing and computational sciences.

[In this issue: ORNL debuts Titan supercomputer; Betty Matthews loves to travel; Service anniversaries; Benefits]
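As a quick sanity check on the figures in this excerpt, the C sketch below tallies Titan's aggregate CPU core count and memory from the node count given here; the per-node memory of 32 + 6 GB is taken from the Cray XK7 summary further down this page, and the decimal GB-to-TB conversion is my assumption, not something stated in the newsletter.

```c
#include <stdio.h>

int main(void) {
    /* Node count and 16-core Opteron are quoted in the newsletter excerpt;
       the 32 + 6 GB per-node figure comes from the Cray XK7 summary later
       on this page. Decimal GB -> TB conversion is an assumption. */
    const long nodes = 18688;
    const int cpu_cores_per_node = 16;        /* 16-core AMD Opteron 6274 */
    const double node_mem_gb = 32.0 + 6.0;    /* host DRAM + GPU memory */

    long cpu_cores = nodes * cpu_cores_per_node;
    double total_mem_tb = nodes * node_mem_gb / 1000.0;

    printf("Total CPU cores: %ld\n", cpu_cores);         /* 299,008 */
    printf("Total memory:    %.0f TB\n", total_mem_tb);  /* ~710 TB */
    return 0;
}
```

The result lines up with the "more than 700 terabytes" quoted above and the 710 TB figure on the XK7 slide later in this listing.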
Safety and Security Challenge
SAFETY AND SECURITY CHALLENGE: TOP SUPERCOMPUTERS IN THE WORLD, FEATURING TWO OF DOE'S!!

Summary: The U.S. Department of Energy (DOE) plays a very special role in keeping you safe. DOE has two supercomputers in the top ten supercomputers in the whole world. Titan is the name of the supercomputer at the Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee. Sequoia is the name of the supercomputer at Lawrence Livermore National Laboratory (LLNL) in Livermore, California. How do supercomputers keep us safe, and what places them among the top ten in the world?

In fields where scientists deal with issues from disaster relief to the electric grid, simulations provide real-time situational awareness to inform decisions. DOE supercomputers have helped the Federal Bureau of Investigation find criminals and the Department of Defense assess terrorist threats. Currently, ORNL is building a computing infrastructure to help the Centers for Medicare and Medicaid Services combat fraud. An important focus lab-wide is managing the tsunamis of data generated by supercomputers and facilities like ORNL's Spallation Neutron Source. In terms of national security, ORNL plays an important role in national and global security due to its expertise in advanced materials, nuclear science, supercomputing and other scientific specialties. Discovery and innovation in these areas are essential for protecting US citizens and advancing national and global security priorities.

[Image caption: Titan Supercomputer at Oak Ridge National Laboratory]

Background: ORNL is using computing to tackle national challenges such as safe nuclear energy systems and running simulations for lower costs for vehicle ... [Image caption: Lawrence Livermore's Sequoia ranked No. ...]
Lessons Learned in Deploying the World's Largest Scale Lustre File System
Lessons Learned in Deploying the World's Largest Scale Lustre File System
Galen M. Shipman, David A. Dillow, Sarp Oral, Feiyi Wang, Douglas Fuller, Jason Hill, Zhe Zhang
Oak Ridge Leadership Computing Facility, Oak Ridge National Laboratory, Oak Ridge, TN 37831, USA
{gshipman, dillowda, oralhs, fwang2, fullerdj, hilljj, [email protected]

Abstract: The Spider system at the Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) is the world's largest scale Lustre parallel file system. Envisioned as a shared parallel file system capable of delivering both the bandwidth and capacity requirements of the OLCF's diverse computational environment, the project had a number of ambitious goals. To support the workloads of the OLCF's diverse computational platforms, the aggregate performance and storage capacity of Spider exceed those of our previously deployed systems by factors of 6x (240 GB/sec) and 17x (10 Petabytes), respectively. Furthermore, Spider supports over 26,000 clients concurrently accessing the file system, which exceeds our previously deployed systems by nearly 4x.

1 Introduction: The Oak Ridge Leadership Computing Facility (OLCF) at Oak Ridge National Laboratory (ORNL) hosts the world's most powerful supercomputer, Jaguar [2, 14, 7], a 2.332 Petaflop/s Cray XT5 [5]. OLCF also hosts an array of other computational resources such as a 263 Teraflop/s Cray XT4 [1], visualization, and application development platforms. Each of these systems requires a reliable, high-performance and scalable file system for data storage. Parallel file systems on leadership-class systems have traditionally been tightly coupled to single simulation platforms. This approach had resulted in the deployment of a dedicated file system for each computational ...
Musings (Rik Farrow, Opinion)
Musings
Rik Farrow, Opinion
Rik is the editor of ;login:. [email protected]

While preparing this issue of ;login:, I found myself falling down a rabbit hole, like Alice in Wonderland. And when I hit bottom, all I could do was look around and puzzle about what I discovered there. My adventures started with a casual comment, made by an ex-Cray Research employee, about the design of current supercomputers. He told me that today's supercomputers cannot perform some of the tasks that they are designed for, and used weather forecasting as his example. I was stunned. Could this be true? Or was I just being dragged down some fictional rabbit hole? I decided to learn more about supercomputer history.

Supercomputers

It is humbling to learn about the early history of computer design. Things we take for granted, such as pipelining instructions and vector processing, were important inventions in the 1970s. The first supercomputers were built from discrete components, that is, transistors soldered to circuit boards, and had clock speeds in the tens of nanoseconds. To put that in real terms, the Control Data Corporation's (CDC) 7600 had a clock cycle of 27.5 ns, or in today's terms, 36.4 MHz. This was CDC's second supercomputer (the 6600 was first), but included instruction pipelining, an invention of Seymour Cray. The CDC 7600 peaked at 36 MFLOPS, but generally got 10 MFLOPS with carefully tuned code. The other cool thing about the CDC 7600 was that it broke down at least once a day.
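The column's conversion from the CDC 7600's 27.5 ns clock cycle to roughly 36.4 MHz is simple arithmetic; the C sketch below reproduces it, along with the quoted peak figure under the assumed simplification of one floating-point result per cycle (that assumption is mine, not the column's).

```c
#include <stdio.h>

int main(void) {
    /* CDC 7600 figures quoted in the column above. */
    const double cycle_ns = 27.5;            /* clock cycle time in ns */
    double freq_mhz = 1000.0 / cycle_ns;     /* frequency in MHz = 1000 / period in ns */

    /* Assuming one floating-point result per cycle (a simplification),
       peak MFLOPS equals the clock rate in MHz. */
    printf("Clock frequency: %.1f MHz\n", freq_mhz);     /* ~36.4 MHz */
    printf("Assumed peak:    %.1f MFLOPS\n", freq_mhz);  /* ~36 MFLOPS quoted */
    return 0;
}
```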
Jaguar Supercomputer
Jaguar Supercomputer
Jake Baskin '10, Jānis Lībeks '10
Jan 27, 2010

What is it? Currently the fastest supercomputer in the world, at up to 2.33 PFLOPS, located at Oak Ridge National Laboratory (ORNL). A leader in "petascale scientific supercomputing", it runs massively parallel simulations, modeling climate, supernovas, volcanoes, and cellulose. [Image: http://www.nccs.gov/wp-content/themes/nightfall/img/jaguarXT5/gallery/jaguar-1.jpg]

Overview: processor specifics, network architecture, programming models, NCCS networking, the Spider file system, and scalability.

The guts: 84 XT4 and 200 XT5 cabinets. The XT5 partition has 18,688 compute nodes and 256 service and I/O nodes; the XT4 partition has 7,832 compute nodes and 116 service and I/O nodes.

Compute nodes (XT5): 2 Opteron 2435 Istanbul (6-core) processors per node; 64 KB L1 instruction cache and 64 KB L1 data cache per core; 512 KB L2 cache per core; 6 MB L3 cache per processor (shared); 8 GB of DDR2-800 RAM directly attached to each processor by an integrated memory controller. (Source: http://www.cray.com/Assets/PDF/products/xt/CrayXT5Brochure.pdf)

How are they organized? A 3-D torus topology; the XT5 and XT4 segments are connected by an InfiniBand DDR network with 889 GB/sec bisectional bandwidth. (Source: http://www.cray.com/Assets/PDF/products/xt/CrayXT5Brochure.pdf)

Programming models: Jaguar supports MPI (Message Passing Interface), OpenMP (Open Multi-Processing), SHMEM (SHared MEMory access library), and PGAS (Partitioned Global Address Space).

NCCS networking: Jaguar usually performs computations on large datasets, which have to be transferred to ORNL. Jaguar is connected to ESnet (Energy Sciences Network, scientific institutions) and Internet2 (higher-education institutions). ORNL owns its own optical network that allows 10 Gb/s to various locations around the US.
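The programming-models slide above lists MPI and OpenMP among the supported options. As a minimal, illustrative sketch (not taken from the slides), the following C program shows the usual hybrid pattern on such systems: a few MPI ranks per node with OpenMP threads inside each rank. Compiler wrappers and job-launch commands vary by machine and are not shown.

```c
/* A minimal hybrid MPI + OpenMP "hello" sketch in C (illustrative only).
   Build with an MPI compiler wrapper, e.g.: mpicc -fopenmp hello.c -o hello
   (exact wrappers and launch commands differ from system to system). */
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's rank */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of ranks */

    /* Each MPI rank spawns a team of OpenMP threads for node-local work. */
    #pragma omp parallel
    {
        printf("MPI rank %d of %d, OpenMP thread %d of %d\n",
               rank, size, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```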
EVGA GeForce GTX TITAN X Superclocked
EVGA GeForce GTX TITAN X Superclocked
Part Number: 12G-P4-2992-KR

The EVGA GeForce GTX TITAN X combines the technologies and performance of the new NVIDIA Maxwell architecture in the fastest and most advanced graphics card on the planet. This incredible GPU delivers unrivaled graphics, acoustic, thermal and power-efficient performance. The most demanding enthusiast can now experience extreme resolutions up to 4K and beyond. Enjoy hyper-realistic, real-time lighting with advanced NVIDIA VXGI, as well as NVIDIA G-SYNC display technology for smooth, tear-free gaming. Plus, you get DSR technology that delivers a brilliant 4K experience, even on a 1080p display.

SPECIFICATIONS: Base Clock: 1127 MHz; Boost Clock: 1216 MHz; Memory Clock: 7010 MHz effective; CUDA Cores: 3072; Bus Type: PCI-E 3.0; Memory Detail: 12288 MB GDDR5; Memory Bit Width: 384 bit; Memory Speed: 0.28 ns; Memory Bandwidth: 336.5 GB/s.

DIMENSIONS: Height: 4.376 in (111.15 mm); Length: 10.5 in (266.7 mm); Width: dual slot.

KEY FEATURES: NVIDIA Dynamic Super Resolution technology; NVIDIA MFAA technology; NVIDIA GameWorks technology; NVIDIA GameStream technology; NVIDIA G-SYNC ready; Microsoft DirectX 12; NVIDIA GPU Boost 2.0; NVIDIA Adaptive Vertical Sync; NVIDIA Surround technology; NVIDIA SLI ready; NVIDIA CUDA technology; OpenGL 4.4 support; OpenCL support; HDMI 2.0, DisplayPort 1.2 and dual-link DVI; PCI Express 3.0.

RESOLUTION & REFRESH: Max monitors supported: 4; 240 Hz max refresh rate; Max analog: 2048x1536; Max digital: 4096x2160.

REQUIREMENTS: 600 watt or greater power supply.**** PCI Express, PCI Express 2.0 or PCI Express 3.0 compliant motherboard with one graphics slot. An available 6-pin PCI-E power connector and an available 8-pin PCI-E power connector. Windows 8 32/64-bit, Windows 7 32/64-bit, Windows Vista 32/64-bit.

**Support for HDMI includes GPU-accelerated Blu-ray 3D support (Blu-ray 3D playback requires the purchase of a compatible software player from CyberLink, ArcSoft, Corel, or Sonic), x.v.Color, HDMI Deep Color, and 7.1 digital surround sound.
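The memory bandwidth entry in the specification follows from the effective memory clock and the bus width listed above. The short C sketch below reproduces the figure; the formula and the decimal-GB convention are my assumptions about how the vendor number was derived, not something stated on the spec sheet.

```c
#include <stdio.h>

int main(void) {
    /* Values from the specification above. */
    const double effective_clock_mhz = 7010.0;  /* GDDR5 effective data rate */
    const double bus_width_bits = 384.0;

    /* transfers/s * bytes moved per transfer across the full bus, in decimal GB/s */
    double gbytes_per_s = effective_clock_mhz * 1e6 * (bus_width_bits / 8.0) / 1e9;

    printf("Theoretical memory bandwidth: %.1f GB/s\n", gbytes_per_s);  /* ~336.5 */
    return 0;
}
```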
A Comprehensive Performance Comparison of Dedicated and Embedded GPU Systems
DUJE (Dicle University Journal of Engineering) 11:3 (2020), pages 1011-1020. Research article.
A Comprehensive Performance Comparison of Dedicated and Embedded GPU Systems
Adnan Ozsoy, Department of Computer Engineering, Hacettepe University, [email protected] (ORCID: https://orcid.org/0000-0002-0302-3721)

Article history: received 26 May 2020; received in revised form 29 June 2020; accepted 9 July 2020; available online 30 September 2020. Keywords: NVIDIA Jetson, embedded GPGPU, CUDA.

Abstract: General-purpose usage of graphics processing units (GPGPU) is becoming increasingly important as graphics processing units (GPUs) get more powerful and more widely used in performance-oriented computing. GPGPUs are mainstream performance hardware in workstation and cluster environments, and their behavior in such setups has been analyzed extensively. Recently NVIDIA, the leading hardware and software vendor in GPGPU computing, started to produce more energy-efficient embedded GPGPU systems, the Jetson series GPUs, to make GPGPU computing more applicable in domains where energy and space are limited. Although the architecture of the GPUs in Jetson systems is the same as in traditional dedicated desktop graphics cards, the interaction between the GPU and the other components of the system, such as main memory, the central processing unit (CPU), and the hard disk, is very different from traditional desktop solutions. To fully understand the capabilities of the Jetson series embedded solutions, in this paper we run several applications from many different domains and compare the performance characteristics of these applications on both embedded and dedicated desktop GPUs. After analyzing the collected data, we have identified certain application domains and program behaviors in which the Jetson series can deliver performance comparable to dedicated GPU performance.
Power Efficiency and Performance with ORNL's Cray XK7 Titan
Power Efficiency and Performance with ORNL's Cray XK7 Titan
Work by Jim Rogers; presented by Arthur Bland, Director of Operations, National Center for Computational Sciences, Oak Ridge National Laboratory

ORNL's "Titan" hybrid system: Cray XK7 with AMD Opteron and NVIDIA Tesla processors.

SYSTEM SPECIFICATIONS: peak performance of 27.1 PF (24.5 GPU + 2.6 CPU); 18,688 compute nodes, each with a 16-core AMD Opteron CPU, an NVIDIA Tesla "K20x" GPU, and 32 + 6 GB memory; 512 service and I/O nodes; 200 cabinets occupying 4,352 ft² (404 m²); 710 TB total system memory; Cray Gemini 3D torus interconnect; 8.9 MW peak power.

Sample run: HPL. [Chart: instantaneous power consumption (kW) and cumulative energy (kW-hours) versus run-time duration (hh:mm:ss), from 0:00:53 to 0:54:53. Cumulative energy: 7,545.56 kW-hr; RmaxPower = 8,296.53 kW; instantaneous measurements: 8.93 MW, 21.42 PF, 2,397.14 MF/Watt.]

Revised metering capabilities for HPC systems at ORNL: each of the seven (7) 2.5 MVA or 3.0 MVA transformers designated for HPC use is now metered by a separate Schneider Electric CM4000 PowerLogic circuit monitor, a highly accurate power quality monitor ("F" position within the EEHPCWG Aspect 4 Category, Power). [Figure labels: Cabinet/rack, Chassis/crate, blade, CPU, Conv.]
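The efficiency number on the HPL slide is sustained performance divided by instantaneous power. The C sketch below redoes that arithmetic from the displayed values; the small difference from the quoted 2,397.14 MF/Watt presumably comes from rounding in the displayed inputs.

```c
#include <stdio.h>

int main(void) {
    /* Instantaneous values shown on the HPL slide above. */
    const double perf_pflops = 21.42;   /* sustained HPL performance */
    const double power_mw    = 8.93;    /* instantaneous power draw */

    /* MFLOPS per watt = (PF * 1e9 MFLOPS) / (MW * 1e6 W) */
    double mflops_per_watt = (perf_pflops * 1e9) / (power_mw * 1e6);

    printf("Efficiency: %.2f MFLOPS/Watt\n", mflops_per_watt);  /* ~2398.7 */
    return 0;
}
```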
Harnessing the Power of Titan with the Uintah Computational Framework
Alan Humphrey, Qingyu Meng, Brad Peterson, Martin Berzins
Scientific Computing and Imaging Institute, University of Utah

Outline: I. Uintah Framework: Overview; II. Extending Uintah to Leverage GPUs; III. Target Application: DOE NNSA PSAAP II Multidisciplinary Simulation Center; IV. A Developing GPU-based Radiation Model; V. Summary and Questions.

Central theme: shielding developers from complexities inherent in heterogeneous systems like Titan (DOE, 20 petaflops, 18,688 GPUs) and Keeneland (NSF, Keeneland Computing Facility, 792 GPUs).

Thanks to: John Schmidt, Todd Harman, Jeremy Thornock, J. Davison de St. Germain; Justin Luitjens and Steve Parker, NVIDIA; DOE for funding the CSAFE project from 1997-2010; DOE NETL, DOE NNSA, INCITE, ALCC; NSF for funding via SDCI and PetaApps; XSEDE; Oak Ridge Leadership Computing Facility for access to Titan; DOE NNSA PSAAP II (March 2014).

Uintah is a parallel, adaptive multi-physics framework for fluid-structure interaction problems with patch-based AMR: a particle system and a mesh-based fluid solve. Applications include plume fires, chemical/gas mixing, foam, explosions, angiogenesis, compaction, sandstone compaction, industrial flares, shaped charges, and MD (multiscale materials design).

Uintah uses patch-based domain decomposition and an asynchronous, task-based paradigm: a task is serial code on a generic "patch", and each task specifies its desired halo region, giving a clear separation of user code from parallelism and the runtime. The Uintah infrastructure provides automatic MPI message generation, load balancing, and more. Strong scaling results are shown for a fluid-structure interaction problem using the MPMICE algorithm with AMR on ALCF Mira and OLCF Titan.
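The slides describe Uintah tasks as serial code over a patch that declares the halo (ghost) region it needs, with MPI exchanges generated automatically by the runtime. The C sketch below is a purely hypothetical illustration of that idea; the type and function names are invented for this listing and are not the Uintah API.

```c
/* Purely hypothetical sketch of "a task is serial code on a patch that
   declares its halo region"; names are invented and this is NOT the
   Uintah API. */
#include <stdio.h>

typedef struct {
    int lo[3], hi[3];  /* index range of cells owned by this patch */
    int halo;          /* ghost-cell layers needed from neighboring patches */
    double *data;      /* field values, including halo cells */
} Patch;

/* A "task": plain serial code over one patch. The runtime would exchange
   the halo (e.g. via automatically generated MPI messages) before running
   the task, and schedule many such tasks asynchronously. */
static void stencil_task(Patch *p) {
    (void)p;  /* a real task would apply a stencil to p->data here */
}

int main(void) {
    Patch p = { {0, 0, 0}, {15, 15, 15}, 1, NULL };
    stencil_task(&p);
    printf("ran one task on a 16x16x16 patch with halo = %d\n", p.halo);
    return 0;
}
```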
Digital Signal Processor Supercomputing
Digital Signal Processor Supercomputing
ENCM 515: Individual Report
Prepared by Steven Rahn
Submitted: November 29, 2013

Abstract: Analyzing the history of supercomputers: how the industry arrived at its current form and what technological advancements enabled the movement. Explore the applications of digital signal processors in high-performance computing and whether or not tests have already taken place. Examine whether digital signal processors have a future with supercomputers.

Table of Contents (excerpt): Introduction (What is a supercomputer?; What are supercomputers used for?); History of Supercomputing (The Past, 1964-1985; The Present ...)
AI Chips: What They Are and Why They Matter
APRIL 2020
AI Chips: What They Are and Why They Matter. An AI Chips Reference.
AUTHORS: Saif M. Khan, Alexander Mann

Table of Contents: Introduction and Summary; The Laws of Chip Innovation (Transistor Shrinkage: Moore's Law; Efficiency and Speed Improvements; Increasing Transistor Density Unlocks Improved Designs for Efficiency and Speed; Transistor Design is Reaching Fundamental Size Limits); The Slowing of Moore's Law and the Decline of General-Purpose Chips (The Economies of Scale of General-Purpose Chips; Costs are Increasing Faster than the Semiconductor Market; The Semiconductor Industry's Growth Rate is Unlikely to Increase); Chip Improvements as Moore's Law Slows (Transistor Improvements Continue, but are Slowing; Improved Transistor Density Enables Specialization); The AI Chip Zoo (AI Chip Types; AI Chip Benchmarks); The Value of State-of-the-Art AI Chips (The Efficiency of State-of-the-Art AI Chips Translates into Cost-Effectiveness; Compute-Intensive AI Algorithms are Bottlenecked by Chip Costs and Speed); U.S. and Chinese AI Chips and Implications for National Competitiveness; Appendix A: Basics of Semiconductors and Chips; Appendix B: How AI Chips Work (Parallel Computing; Low-Precision Computing; Memory Optimization; Domain-Specific Languages); Appendix C: AI Chip Benchmarking Studies; Appendix D: Chip Economics Model (Chip Transistor Density, Design Costs, and Energy Costs; Foundry, Assembly, Test and Packaging Costs); Acknowledgments.

Introduction and Summary: Artificial intelligence will play an important role in national and international security in the years to come.
Efficient Object Storage Journaling in a Distributed Parallel File System
Efficient Object Storage Journaling in a Distributed Parallel File System
Presented by Sarp Oral
Sarp Oral, Feiyi Wang, David Dillow, Galen Shipman, Ross Miller, and Oleg Drokin
FAST'10, Feb 25, 2010

A demanding computational environment:
Jaguar XT5: 18,688 nodes, 224,256 cores, 300+ TB memory, 2.3 PFlops
Jaguar XT4: 7,832 nodes, 31,328 cores, 63 TB memory, 263 TFlops
Frost (SGI Ice): 128-node institutional cluster
Smoky: 80-node software development cluster
Lens: 30-node visualization and analysis cluster

Spider: the fastest Lustre file system in the world, with demonstrated bandwidth of 240 GB/s on the center-wide file system, and the largest scale Lustre file system in the world, with demonstrated stability and concurrent mounts on major OLCF systems (Jaguar XT5, Jaguar XT4, Opteron development cluster Smoky, visualization cluster Lens). Over 26,000 clients mount the file system and perform I/O. General availability on Jaguar XT5, Lens, Smoky, and GridFTP servers. Cutting-edge resiliency at scale, with resiliency features demonstrated on Jaguar XT5 (DM Multipath, Lustre router failover).

Designed to support peak performance: [Chart: read and write bandwidth (GB/s) over January 2010, showing max data rates (hourly) on half of the available storage controllers.]

Motivations for a center-wide file system: building dedicated file systems for platforms does not scale ...