Also in This Issue: Roadrunner—On the Road to Trinity; Big Data, Fast Data; Killing Killer Asteroids; Strategic Deterrence in the 21st Century
Recommended publications
ORNL Debuts Titan Supercomputer
Reporter retiree newsletter, December 2012/January 2013. SCIENCE: ORNL debuts Titan supercomputer.

ORNL has completed the installation of Titan, a supercomputer capable of churning through more than 20,000 trillion calculations each second—or 20 petaflops—by employing a family of processors called graphics processing units, first created for computer gaming. Titan will be 10 times more powerful than ORNL's last world-leading system, Jaguar, while overcoming power and space limitations inherent in the previous generation of high-performance computers. Titan, which is supported by the DOE, will provide unprecedented computing power for research in energy, climate change, efficient engines, materials and other disciplines and pave the way for a wide range of achievements in science and technology.

Image caption: ORNL is now home to Titan, the world's most powerful supercomputer for open science, with a theoretical peak performance exceeding 20 petaflops (quadrillion calculations per second). (Image: Jason Richards)

Pull quote: "Titan will provide unprecedented computing power for research in energy, climate change, materials and other disciplines to enable scientific leadership."

The Cray XK7 system contains 18,688 nodes, with each holding a 16-core AMD Opteron 6274 processor and an NVIDIA Tesla K20 graphics processing unit (GPU) accelerator. Titan also has more than 700 terabytes of memory. The combination of central processing units, the traditional foundation of high-performance computers, and more recent GPUs will allow Titan to occupy the same space as its Jaguar predecessor while using only marginally more electricity. "One challenge in supercomputers today is power consumption," said Jeff Nichols, associate laboratory director for computing and computational sciences.

In this issue: ORNL debuts Titan supercomputer (p. 1); Betty Matthews loves to travel (p. 2); Service anniversaries (p. 3); Benefits (p. 4).
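As a quick sanity check on the figures quoted in this excerpt, here is a minimal sketch; the Jaguar peak of 2.3 petaflops is taken from the OLCF-3 presentation excerpted later on this page, not from the newsletter itself:

```python
# Convert "20,000 trillion calculations per second" into petaflops and
# compare against the claim that Titan is ~10x its predecessor, Jaguar.

calcs_per_second = 20_000 * 1e12        # 20,000 trillion operations per second
petaflops = calcs_per_second / 1e15     # 1 petaflop = 1e15 operations per second
print(f"Titan: {petaflops:.0f} PF")     # -> 20 PF

jaguar_peak_pf = 2.3                    # Jaguar peak, from the OLCF-3 slides below
print(f"Speed-up over Jaguar: {petaflops / jaguar_peak_pf:.1f}x")  # ~8.7x, i.e. roughly 10x
```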
Sandia National Laboratories 2017 HPC Annual Report
Sandia National Laboratories, 2017 High Performance Computing Annual Report. The report is dedicated to John Noe and Dino Pavlakos. Credits: Editor, Yasmin Dennig; Contributing Writers, Megan Davidson and Mattie Hensley; Contributing Editor, Laura Sowko; Design, Stacey Long.

Building a foundational framework in high performance computing: Sandia National Laboratories has a long history of significant contributions to the high performance computing community and industry. Our innovative computer architectures allowed the United States to become the first to break the teraflop barrier—propelling us to the international spotlight. Our advanced simulation and modeling capabilities have been integral in high consequence US operations such as Operation Burnt Frost. Strong partnerships with industry leaders, such as Cray, Inc. and Goodyear, have enabled them to leverage our high performance computing capabilities to gain a tremendous competitive edge in the marketplace.

As part of our continuing commitment to provide modern computing infrastructure and systems in support of Sandia's missions, we made a major investment in expanding Building 725 to serve as the new home of high performance computer (HPC) systems at Sandia. Work is expected to be completed in 2018 and will result in a modern facility of approximately 15,000 square feet of computer center space. The facility will be ready to house the newest National Nuclear Security Administration/Advanced Simulation and Computing (NNSA/ASC) prototype platform being acquired by Sandia, with delivery in late 2019 or early 2020. This new system will enable continuing advances by Sandia science and engineering staff in the areas of operating system R&D, operation cost effectiveness (power and innovative cooling technologies), user environment, and application code performance.
Safety and Security Challenge
SAFETY AND SECURITY CHALLENGE: TOP SUPERCOMPUTERS IN THE WORLD - FEATURING TWO OF DOE'S!!

Summary: The U.S. Department of Energy (DOE) plays a very special role in keeping you safe. DOE has two supercomputers in the top ten supercomputers in the whole world. Titan is the name of the supercomputer at the Oak Ridge National Laboratory (ORNL) in Oak Ridge, Tennessee. Sequoia is the name of the supercomputer at Lawrence Livermore National Laboratory (LLNL) in Livermore, California. How do supercomputers keep us safe, and what puts them in the Top Ten in the world?

In fields where scientists deal with issues from disaster relief to the electric grid, simulations provide real-time situational awareness to inform decisions. DOE supercomputers have helped the Federal Bureau of Investigation find criminals, and the Department of Defense assess terrorist threats. Currently, ORNL is building a computing infrastructure to help the Centers for Medicare and Medicaid Services combat fraud. An important focus lab-wide is managing the tsunamis of data generated by supercomputers and facilities like ORNL's Spallation Neutron Source. In terms of national security, ORNL plays an important role in national and global security due to its expertise in advanced materials, nuclear science, supercomputing and other scientific specialties. Discovery and innovation in these areas are essential for protecting US citizens and advancing national and global security priorities.

Image caption: Titan Supercomputer at Oak Ridge National Laboratory.

Background: ORNL is using computing to tackle national challenges such as safe nuclear energy systems and running simulations for lower costs for vehicle ...

Image caption: Lawrence Livermore's Sequoia ranked No. ...
Titan: A New Leadership Computer for Science
Titan: A New Leadership Computer for Science. Presented to the DOE Advanced Scientific Computing Advisory Committee (ASCAC), November 1, 2011, by Arthur S. Bland, OLCF Project Director.

Office of Science statement of mission need: increase the computational resources of the Leadership Computing Facilities by 20-40 petaflops. The INCITE program is oversubscribed, programmatic requirements for leadership computing continue to grow, and the upgrade is needed to avoid an unacceptable gap between the needs of the science programs and the available resources. Approved by Raymond Orbach, January 9, 2009; the OLCF-3 project comes out of this requirement. [Chart: INCITE is 2.5 to 3.5 times oversubscribed, 2007-2012.]

What is OLCF-3: the next phase of the Leadership Computing Facility program at ORNL; an upgrade of Jaguar from 2.3 petaflops (peak) today to between 10 and 20 PF by the end of 2012, with operations in 2013; built with Cray's newest XK6 compute blades. When completed, the new system will be called Titan.

Cray XK6 compute node characteristics: AMD Opteron 6200 "Interlagos" 16-core processor @ 2.2 GHz; Tesla M2090 "Fermi" @ 665 GF with 6 GB GDDR5 memory; 32 GB of 1600 MHz DDR3 host memory; Gemini high-speed interconnect; upgradeable to NVIDIA's next-generation "Kepler" processor in 2012. Four compute nodes per XK6 blade, 24 blades per rack.

ORNL's "Titan" system: upgrade of the existing Jaguar Cray XT5; Cray Linux Environment ...
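The per-node numbers on the XK6 slide are enough to reconstruct roughly how the slides' 10-20 PF target is reached. A minimal sketch follows; the GPU figure comes from the slide, while the CPU FLOPs-per-cycle value is an assumption of mine, not a number from the presentation:

```python
# Rough peak-performance arithmetic for the Cray XK6 node described above.
# ASSUMPTION: 4 double-precision FLOPs per core per cycle for the Opteron;
# this figure is not stated in the slides.

gpu_gf_per_node = 665.0                 # Tesla M2090, from the slide
cpu_gf_per_node = 16 * 2.2 * 4          # 16 cores x 2.2 GHz x 4 FLOPs/cycle = 140.8 GF (assumed)
node_gf = gpu_gf_per_node + cpu_gf_per_node

nodes_per_rack = 4 * 24                 # 4 nodes per blade x 24 blades per rack, from the slide
rack_tf = node_gf * nodes_per_rack / 1000.0
print(f"~{node_gf:.0f} GF per node, ~{rack_tf:.1f} TF per rack")

# With 18,688 such nodes (the Titan node count quoted elsewhere on this page),
# the machine-level peak lands inside the 10-20 PF range the slides target.
print(f"~{node_gf * 18688 / 1e6:.1f} PF across 18,688 nodes")
```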
Head-On Impact Deflection of NEAs: A Case Study for 99942 Apophis
Planetary Defense Conference 2007, Washington D.C., USA, 05-08 March 2007. (c) 2007 by the authors. Head-On Impact Deflection of NEAs: A Case Study for 99942 Apophis. Bernd Dachwald and Ralph Kahle, German Aerospace Center (DLR), 82234 Wessling, Germany; Bong Wie, Arizona State University, Tempe, AZ 85287, USA.

Near-Earth asteroid (NEA) 99942 Apophis provides a typical example for the evolution of asteroid orbits that lead to Earth impacts after a close Earth encounter that results in a resonant return. Apophis will have a close Earth encounter in 2029, with potential very close subsequent Earth encounters (or even an impact) in 2036 or later, depending on whether it passes through one of several less than 1 km-sized gravitational keyholes during its 2029 encounter. A pre-2029 kinetic impact is a very favorable option to nudge the asteroid out of a keyhole. The highest impact velocity, and thus deflection, can be achieved from a trajectory that is retrograde to Apophis' orbit. With a chemical or electric propulsion system, however, many gravity assists and thus a long time are required to achieve this. We show in this paper that the solar sail might be the better propulsion system for such a mission: a solar sail Kinetic Energy Impactor (KEI) spacecraft could impact Apophis from a retrograde trajectory with a very high relative velocity (75-80 km/s) during one of its perihelion passages. The spacecraft consists of a 160 m × 160 m, 168 kg solar sail assembly and a 150 kg impactor. Although conventional spacecraft can also achieve the required minimum deflection of 1 km for this approx. ...
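The abstract gives the spacecraft mass and relative velocity but not the rest of the momentum-balance arithmetic. The sketch below fills it in under loudly labeled assumptions that are not from the paper: an Apophis mass of roughly 4e10 kg, a momentum-enhancement factor of 1, a 10-year lead time, and a crude drift estimate of delta-v times time that ignores orbital dynamics. It is an order-of-magnitude illustration only.

```python
# Order-of-magnitude check of the kinetic-impactor numbers in the abstract.
# ASSUMPTIONS (not from the paper): asteroid mass ~4e10 kg, momentum
# enhancement beta = 1, 10-year lead time, and drift ~ delta_v * time
# (a crude estimate that ignores how orbital mechanics amplifies drift).

m_spacecraft = 168.0 + 150.0        # kg: sail assembly + impactor, from the abstract
v_rel        = 77.5e3               # m/s: midpoint of the 75-80 km/s quoted
m_apophis    = 4.0e10               # kg: assumed, order of magnitude only
beta         = 1.0                  # assumed momentum enhancement factor
lead_time    = 10 * 365.25 * 86400  # s: assumed 10 years before the keyhole passage

delta_v = beta * m_spacecraft * v_rel / m_apophis   # velocity change imparted to Apophis
drift   = delta_v * lead_time                       # crude along-track displacement

print(f"delta_v ~ {delta_v * 1000:.2f} mm/s, drift ~ {drift / 1000:.0f} km")
# ~0.6 mm/s and ~200 km: comfortably above the ~1 km minimum deflection the
# abstract says is needed to miss a sub-kilometre keyhole, which is why such
# a small impactor works at all at these relative velocities.
```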
EVGA GeForce GTX TITAN X Superclocked
EVGA GeForce GTX TITAN X Superclocked, Part Number: 12G-P4-2992-KR. The EVGA GeForce GTX TITAN X combines the technologies and performance of the new NVIDIA Maxwell architecture in the fastest and most advanced graphics card on the planet. This incredible GPU delivers unrivaled graphics, acoustic, thermal and power-efficient performance. The most demanding enthusiast can now experience extreme resolutions up to 4K and beyond. Enjoy hyper-realistic, real-time lighting with advanced NVIDIA VXGI, as well as NVIDIA G-SYNC display technology for smooth, tear-free gaming. Plus, you get DSR technology that delivers a brilliant 4K experience, even on a 1080p display.

Specifications: Base Clock 1127 MHz; Boost Clock 1216 MHz; Memory Clock 7010 MHz effective; CUDA Cores 3072; Bus Type PCI-E 3.0; Memory Detail 12288 MB GDDR5; Memory Bit Width 384 bit; Memory Speed 0.28 ns; Memory Bandwidth 336.5 GB/s.

Key features: NVIDIA Dynamic Super Resolution, MFAA, GameWorks, GameStream, G-SYNC Ready, Microsoft DirectX 12, GPU Boost 2.0, Adaptive Vertical Sync, Surround, SLI Ready, CUDA, OpenGL 4.4 support, OpenCL support, HDMI 2.0, DisplayPort 1.2 and dual-link DVI, PCI Express 3.0.

Resolution and refresh: max monitors supported 4; 240 Hz max refresh rate; max analog 2048x1536; max digital 4096x2160.

Requirements: 600 Watt or greater power supply; PCI Express, PCI Express 2.0 or PCI Express 3.0 compliant motherboard with one graphics slot; an available 6-pin PCI-E power connector and an available 8-pin PCI-E power connector; Windows 8 32/64-bit, Windows 7 32/64-bit, Windows Vista 32/64-bit.

Dimensions: height 4.376 in (111.15 mm); length 10.5 in (266.7 mm); width dual slot.

Note: Support for HDMI includes GPU-accelerated Blu-ray 3D support (Blu-ray 3D playback requires the purchase of a compatible software player from CyberLink, ArcSoft, Corel, or Sonic), x.v.Color, HDMI Deep Color, and 7.1 digital surround sound.
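As a quick check that the listed specifications are self-consistent, a minimal sketch using only numbers from the spec table above:

```python
# The quoted 336.5 GB/s memory bandwidth follows directly from the memory
# clock and bus width: effective transfer rate x bus width (in bytes).

effective_transfers_per_s = 7010e6   # 7010 MHz effective GDDR5 rate, from the spec
bus_width_bits = 384                 # memory bit width, from the spec

bandwidth_bytes = effective_transfers_per_s * bus_width_bits / 8
print(f"{bandwidth_bytes / 1e9:.1f} GB/s")   # -> 336.5 GB/s, matching the spec sheet
```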
The Artisanal Nuke, 2014
The Artisanal Nuke. Mary C. Dixon, US Air Force Center for Unconventional Weapons Studies, 125 Chennault Circle, Maxwell Air Force Base, Alabama 36112-6427. July 2014.

Disclaimer: The opinions, conclusions, and recommendations expressed or implied in this publication are those of the author and do not necessarily reflect the views of the Air University, Air Force, or Department of Defense.

Table of contents: Disclaimer; Table of Contents; About the Author; Acknowledgements; Abstract; 1. Introduction; 2. Background; 3. Stockpile Stewardship; 4. Opposition & Problems; 5. Milestones & Accomplishments ...
A Comprehensive Performance Comparison of Dedicated and Embedded GPU Systems
DUJE (Dicle University Journal of Engineering) 11:3 (2020), pages 1011-1020. Research article: A Comprehensive Performance Comparison of Dedicated and Embedded GPU Systems. Adnan Ozsoy, Department of Computer Engineering, Hacettepe University, [email protected] (ORCID: https://orcid.org/0000-0002-0302-3721).

Article history: received 26 May 2020; received in revised form 29 June 2020; accepted 9 July 2020; available online 30 September 2020. Keywords: NVIDIA Jetson, embedded GPGPU, CUDA.

Abstract: General purpose usage of graphics processing units (GPGPU) is becoming increasingly important as graphics processing units (GPUs) get more powerful and see wider use in performance-oriented computing. GPGPUs are mainstream performance hardware in workstation and cluster environments, and their behavior in such setups is highly analyzed. Recently, NVIDIA, the leading hardware and software vendor in GPGPU computing, started to produce more energy-efficient embedded GPGPU systems, the Jetson series GPUs, to make GPGPU computing more applicable in domains where energy and space are limited. Although the architecture of the GPUs in Jetson systems is the same as in traditional dedicated desktop graphics cards, the interaction between the GPU and the other components of the system, such as main memory, central processing unit (CPU), and hard disk, is a lot different than in traditional desktop solutions. To fully understand the capabilities of the Jetson series embedded solutions, in this paper we run several applications from many different domains and compare the performance characteristics of these applications on both embedded and dedicated desktop GPUs. After analyzing the collected data, we have identified certain application domains and program behaviors in which the Jetson series can deliver performance comparable to dedicated GPU performance. ...
Power Efficiency and Performance with ORNL's Cray XK7 Titan
Power Efficiency and Performance with ORNL's Cray XK7 Titan. Work by Jim Rogers; presented by Arthur Bland, Director of Operations, National Center for Computational Sciences, Oak Ridge National Laboratory.

ORNL's "Titan" hybrid system: Cray XK7 with AMD Opteron and NVIDIA Tesla processors. System specifications: peak performance of 27.1 PF (24.5 PF GPU + 2.6 PF CPU); 18,688 compute nodes, each with a 16-core AMD Opteron CPU, an NVIDIA Tesla "K20x" GPU, and 32 + 6 GB of memory; 512 service and I/O nodes; 200 cabinets occupying 4,352 sq ft (404 sq m); 710 TB total system memory; Cray Gemini 3D torus interconnect; 8.9 MW peak power.

Sample run: HPL consumption. [Chart: instantaneous power (MW) and cumulative energy (kW-hours) versus run time duration (hh:mm:ss), over a roughly 55-minute run.] Cumulative energy 7,545.56 kW-hr; RmaxPower = 8,296.53 kW; instantaneous measurements of 8.93 MW at 21.42 PF, or 2,397.14 MF/Watt.

Revised metering capabilities for HPC systems at ORNL: each of the seven (7) 2.5 MVA or 3.0 MVA transformers designated for HPC use is now metered by a separate Schneider Electric CM4000 PowerLogic Circuit Monitor, a highly accurate power quality monitor ("F" position within the EEHPCWG Aspect 4 Category Power). [Figure: measurement levels include cabinet/rack, chassis/crate, blade, CPU, and power conversion.]
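The quoted efficiency figure can be reproduced directly from the instantaneous measurements in the excerpt. A minimal check, using only the numbers above and assuming the roughly 55-minute run length shown on the chart's time axis:

```python
# Reproduce the ~2,397 MF/W figure from the instantaneous HPL measurements:
# power efficiency = sustained performance / power draw.

performance_pf = 21.42      # PF, instantaneous HPL performance
power_mw       = 8.93       # MW, instantaneous power draw

megaflops = performance_pf * 1e9    # 1 PF = 1e9 MF
watts     = power_mw * 1e6          # 1 MW = 1e6 W
print(f"{megaflops / watts:.2f} MF/W")   # ~2398.66 MF/W, matching the ~2,397 MF/W on the slide

# The cumulative-energy curve is consistent too: 7,545.56 kW-hr over an
# assumed ~55-minute run implies an average draw of roughly 8.2 MW.
print(f"{7545.56 / (55 / 60):.0f} kW average")   # ~8232 kW, close to RmaxPower = 8,296.53 kW
```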
Harnessing the Power of Titan with the Uintah Computational Framework
Alan Humphrey, Qingyu Meng, Brad Peterson, Martin Berzins, Scientific Computing and Imaging Institute, University of Utah.

Outline: I. Uintah Framework – Overview; II. Extending Uintah to Leverage GPUs; III. Target Application – DOE NNSA PSAAP II Multidisciplinary Simulation Center; IV. A Developing GPU-based Radiation Model; V. Summary and Questions. Central theme: shielding developers from complexities inherent in heterogeneous systems like Titan and Keeneland.

Thanks to: John Schmidt, Todd Harman, Jeremy Thornock, J. Davison de St. Germain; Justin Luitjens and Steve Parker, NVIDIA; DOE for funding the CSAFE project from 1997-2010; DOE NETL, DOE NNSA, INCITE, ALCC; NSF for funding via SDCI and PetaApps, XSEDE; the Keeneland Computing Facility; the Oak Ridge Leadership Computing Facility for access to Titan; DOE NNSA PSAAP II (March 2014). [DOE Titan: 20 petaflops, 18,688 GPUs; NSF Keeneland: 792 GPUs.]

Uintah is a parallel, adaptive multi-physics framework for fluid-structure interaction problems, with patch-based AMR, a particle system and a mesh-based fluid solve. Applications include plume fires, chemical/gas mixing, foam compaction, explosions, angiogenesis, sandstone compaction, industrial flares, shaped charges, and MD – multiscale materials design.

Uintah uses patch-based domain decomposition and an asynchronous task-based paradigm: a task is serial code on a generic "patch", each task specifies its desired halo region, and there is a clear separation of user code from parallelism and the runtime. Strong scaling is shown for a fluid-structure interaction problem using the MPMICE algorithm with AMR (on ALCF Mira and OLCF Titan). The Uintah infrastructure provides automatic MPI message generation, load balancing, ...
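To make the patch/halo idea in the excerpt concrete, here is a deliberately simplified sketch of the pattern described: user code is written as a serial task over one patch, the task declares how much halo it needs, and a runtime (not shown) would use that declaration to generate the required neighbor exchanges. This is an illustration of the general idea only, not Uintah's actual API.

```python
# Simplified illustration of patch-based decomposition with declared halos.
# A "task" is plain serial code over one patch; it declares the halo width
# it needs, and a runtime would turn that into MPI messages automatically.
# (Conceptual sketch only; not the Uintah API.)

import numpy as np

def split_into_patches(field, patches_per_side):
    """Split a square 2D field into equally sized square patches."""
    n = field.shape[0] // patches_per_side
    return {(i, j): field[i * n:(i + 1) * n, j * n:(j + 1) * n]
            for i in range(patches_per_side) for j in range(patches_per_side)}

def heat_step_task(patch_with_halo, dt=0.1):
    """Serial user task: one explicit diffusion step on a patch (needs halo = 1)."""
    p = patch_with_halo
    return p[1:-1, 1:-1] + dt * (p[:-2, 1:-1] + p[2:, 1:-1] +
                                 p[1:-1, :-2] + p[1:-1, 2:] - 4 * p[1:-1, 1:-1])

# Metadata the runtime would consume: how many ghost cells the task needs
# from neighboring patches.
heat_step_task.requires_halo = 1

field = np.random.rand(64, 64)
patches = split_into_patches(field, patches_per_side=4)

# Here the halo is filled by padding; in the real framework it would come
# from neighboring patches, possibly on other MPI ranks.
padded = np.pad(patches[(0, 0)], heat_step_task.requires_halo, mode="edge")
updated = heat_step_task(padded)

print(f"{len(patches)} patches; each task requests halo width "
      f"{heat_step_task.requires_halo}; updated patch shape {updated.shape}")
```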
Technical Issues in Keeping the Nuclear Stockpile Safe, Secure, and Reliable
Technical Issues in Keeping the Nuclear Stockpile Safe, Secure, and Reliable. Marvin L. Adams, Texas A&M University; Sidney D. Drell, Stanford University.

Abstract: The United States has maintained a safe, secure, and reliable nuclear stockpile for 16 years without relying on underground nuclear explosive tests. We argue that a key ingredient of this success so far has been the expertise of the scientists and engineers at the national labs with the responsibility for the nuclear arsenal and infrastructure. Furthermore, for the foreseeable future this cadre will remain the critical asset in sustaining the nuclear enterprise on which the nation will rely, independent of the size of the stockpile or the details of its purpose, and in adapting to new requirements generated by new findings and changes in policies. Expert personnel are the foundation of a lasting and responsive deterrent. Thus far, the United States has maintained a safe, secure, and reliable nuclear stockpile by adhering closely to the designs of existing "legacy" weapons. It remains to be determined whether this is the best course to continue, as opposed to introducing new designs (as envisaged in the original RRW program), re-using previously designed and tested components, or maintaining an evolving combination of new, re-use, and legacy designs. We argue the need for a stewardship program that explores a broad spectrum of options, includes strong peer review in its assessment of options, and provides the necessary flexibility to adjust to different force postures depending on evolving strategic developments. U.S. decisions and actions about nuclear weapons can be expected to affect the nuclear policy choices of other nations – non-nuclear as well as nuclear – on whose cooperation we rely in efforts to reduce the global nuclear danger.
A Comparison of the Current Top Supercomputers
A Comparison of the Current Top Supercomputers. Russell Martin, Xavier Gallardo.

Top500.org: started in 1993 to maintain statistics on the top 500 performing supercomputers in the world; started by Hans Meuer and updated twice annually; uses LINPACK as its main benchmark.

November 2012 list: 1. Titan, Cray, USA; 2. Sequoia, IBM, USA; 3. K Computer, Fujitsu, Japan; 4. Mira, IBM, USA; 5. JUQueen, IBM, Germany; 6. SuperMUC, IBM, Germany. [Chart: performance trends, http://top500.org/statistics/perfdevel/]

Trends: 84.8% of systems use 6+ processor cores; 76% Intel, 12% AMD Opteron, 10.6% IBM Power; the most common interconnects are InfiniBand and Gigabit Ethernet; 251 systems are in the USA, 123 in Asia, 105 in Europe.

Titan, the current #1 computer: Oak Ridge National Laboratory, Oak Ridge, Tennessee; operational October 29, 2012; scientific research; AMD Opteron / NVIDIA Tesla; 18,688 nodes, 560,640 cores; Cray Gemini interconnect; theoretical 27 PFLOPS.

Sequoia, ranked #2: located at the Lawrence Livermore National Laboratory in Livermore, CA; designed for "uncertainty quantification studies"; fully classified work starting April 17, 2013. Specs: 1,572,864 cores; 98,304 nodes; 1,572,864 GB RAM; custom 5D torus interconnect; max performance 16,324.8 TFLOPS; theoretical max 20,132.7 TFLOPS; power consumption 7,890 kW. IBM BlueGene/Q architecture: started in 1999 for protein folding; custom SoC layout with 16 PPC cores per chip, each at 1.6 GHz and 0.8 V; built for OpenMP and POSIX programs; automatic SIMD exploitation.

K Computer, #3 on the Top 500 (#1 in June 2011): RIKEN Advanced Institute for Computational ...
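Using only the Sequoia numbers quoted above, a short check of its HPL efficiency and power efficiency (a minimal sketch; the Top500 rankings are based on exactly this Rmax figure):

```python
# Derive Sequoia's HPL efficiency and power efficiency from the listed specs.

rmax_tflops  = 16324.8    # max LINPACK performance, TFLOPS
rpeak_tflops = 20132.7    # theoretical peak, TFLOPS
power_kw     = 7890.0     # power consumption, kW

hpl_efficiency  = rmax_tflops / rpeak_tflops
mflops_per_watt = (rmax_tflops * 1e6) / (power_kw * 1e3)

print(f"HPL efficiency: {hpl_efficiency:.1%}")          # ~81.1% of theoretical peak
print(f"Power efficiency: {mflops_per_watt:.0f} MF/W")  # ~2069 MF/W, the same ballpark as
                                                        # Titan's ~2,400 MF/W quoted above
```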