ASC Platforms Impact on the USA HPC Industry

Performance Measures x.x, x.x, and x.x
High Performance Computing at Livermore: Petascale and Beyond
Presentation to LSD, May 2, 2008, Livermore, CA
Dr. Mark K. Seager, ASC Platforms Lead at LLNL
UCRL-PRES-403698
This work performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.

Talk Overview
• Current computing environments at LLNL: systems and simulation environment
• Peta- and exascale programmatic drivers: weapons physics for known unknowns; uncertainty quantification for the existing stockpile
• Sequoia target architecture
• Comments on the evolutionary changes required for a petascale programming model

Our platform strategy is to straddle multiple technology curves to appropriately balance risk and cost/performance benefit
Any given technology curve is ultimately limited by Moore's Law. Three complementary curves:
1. Vendor-integrated SMP clusters (IBM SP, HP SC): the production environment for "must have" deliverables now; delivers to today's stockpile's demanding needs.
2. Commodity Linux clusters: a "near production", riskier environment, with capability systems for risk-tolerant programs and capacity systems for risk-averse programs; delivers the transition for the next generation.
3. Advanced architectures (IBM BG/L, Cell): a research environment leading the transition to petaflop systems; delivers an affordable path to petaFLOP/s. Are there other paths to a breakthrough regime by 2006-8?
[Figure: performance versus time, with White, Q, MCR, Thunderbird, and Purple placed on these three curves (plus a fading mainframe curve, "RIP"), and markers for "Today" and FY05.]

NNSA has a sterling record of delivering production computers, setting the standard for national supercomputing user facilities
• ASC White [2001]: the first shared tri-laboratory resource came in 23% over required peak.
• ASC Purple [2005]: the first NNSA User Facility came in $50M under budget and is routinely 90+% utilized.
[Figure: Purple utilization by month, November 2010 through December 2011, broken down into used, down, and idle time.]

Linux Clusters (Curve 2) leveraged strategic tri-lab institutional investments to create huge efficiency for the institution and programs with a scalable COTS and open-source software approach
• LLNL institutional investments filled gaps in Linux cluster technology:
  - Early Lustre adoption leading to an integrated simulation environment
  - SLURM and Linux distro development for manageability
  - Thunder was #2 on the 23rd TOP500 list
• Real-world synergy:
  - LLNL Peloton scalable-unit (SU) procurement for three institutional clusters
  - Enabled >50% TCO improvement for ASC
• Peloton is not a computer: it is a procurement and a way of thinking about aggressively achieving TCO efficiency. One "Peloton" SU is roughly 5.5 TFLOPS; Minos is 6 SUs, 33 TF.
[Figure: one scalable-unit building block with 138 4-socket, dual-core compute nodes (1,104 CPUs), a QsNet (288-port Elan3) or 4x InfiniBand (144-port IBA) interconnect, a login/master node, 4 gateway nodes, a remote partition server, 100BaseT control and 1/10 GbE management networks, and a federated Ethernet switch.]
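As a quick consistency check on the scalable-unit numbers quoted above (a worked example added here, not from the original slides):

\[
138 \times 4 \times 2 = 1104 \ \text{cores per SU}, \qquad 6 \ \text{SUs} \times 5.5\ \text{TF/s} \approx 33\ \text{TF/s (Minos)}.
\]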
America's dominance in simulation today is the result of NNSA's investments and its astute technical choices
• Today's top 12 supercomputers depended on NNSA investments.
• 37% of the Top 500 systems today depended on ASC technology.
• BG/L is #1 for the seventh consecutive TOP500 list.
• Four Gordon Bell prizes in three years.
• The world's first low-power supercomputer reduces the energy footprint by 75%.
[Figure: delivered speed (TF/s) versus position on the TOP500 list for the top 12 systems, marking Power (IBM) sites, Blue Gene (IBM) sites, Red Storm (Cray) sites, and other ASC technology investments, with Blue Gene/L (LLNL, Curve 3), Red Storm (SNL), and ASC Purple (LLNL, Curve 1) called out.]

Livermore currently has the world's best computing infrastructure, by a wide margin.

Open Computing Facility (OCF) simulation environment is based on Lustre
Clusters on a federated Ethernet core network: ALC (9 TFLOPS), Atlas (44 TFLOPS), Thunder (23 TFLOPS), the IGS test system (95 TB), Zeus (11 TFLOPS), Prism (1 TFLOP), Yana (3 TFLOPS), and BGL-Dev (6 TFLOPS), for a total of roughly 100 TFLOPS, sharing the lscratcha and lscratchb Lustre scratch file systems (1.2 PB and 744 TB, about 2 PB in all).

Secure Computing Facility (SCF) simulation environment replicates many aspects of the OCF
Total: 665 TFLOPS and 3.5 PB. BGL (596 TFLOPS), Minos (33 TFLOPS), Rhea (22 TFLOPS), Gauss (2 TFLOPS), Lilac (9 TFLOPS), and Hopi (3 TFLOPS) connect over a federated Ethernet core network to the lscratch1 (1 PB) and lscratch3 (2.5 PB) Lustre file systems, with link bandwidths in the diagram ranging from 2.5 to 75 GB/s.

Predictive simulation roadmap through exascale
DP goals, by era: develop and demonstrate the capability to certify aging weapons with codes calibrated to past UGTs; certify LEPs and RRWs (near neighbors to the test base); assess and certify without requiring reliance on UGTs.
ASC goals, by era: create 3D integrated design codes and calibrate codes with AGEX and UGT data; validated predictive capabilities, ad hoc factors replaced by physics-based models, and a transition to quantified 3D predictive capability; first-principles/atomistic calculations of key physics and design with new materials or components in months.
Principal uncertainties: boost, energy balance, initial conditions for boost, secondary performance. End-state milestone: the technical need for future nuclear tests is eliminated.
[Figure: timeline from 1996 through 2026 spanning terascale, petascale, and exascale systems, with computing power growing from Purple (100 TF), BG/L (360, then 590 TF), and Red Storm (124 TF) through Roadrunner and Dawn to Sequoia (14-20 PF), 150 PF, and 1 EF, with uncertainty quantification highlighted across the petascale era.]

Predicting stockpile performance drives five separate classes of petascale calculations
1. Quantifying uncertainty (for all classes of simulation)
2. Identifying and modeling missing physics (e.g., boost)
3. Improving accuracy in material property data
4. Improving models for known physical processes
5. Improving the performance of complex models and algorithms in macro-scale simulation codes
Each of these mission drivers requires petascale computing.

Sequoia is the key integrating tool for Stockpile Stewardship in the 2011-2015 time frame
• Sequoia will provide credible UQ for 3D stockpile certification: 4,400 runs at 10 TF/s per run in one month on a 5 PF/s system (versus 2D UQ on roughly 10% of Purple today).
• Execution of the Integrated Boost Program (IBP) requires Sequoia; IBP cannot be done on Purple.
• IBP is necessary to achieve certification with predictive UQ.
[Figure: progress plotted against UQ capacity (10X, 100X) and capability-run resolution (standard, high, ultra), from a baseline of simulation uncertainties through a first 3D look at initial conditions for boost, bounding the impact of 3D engineering features, high-resolution discovery runs for IBP, RRW primary certification, and exposing true physics uncertainty, with exascale systems around 2020.]

Sequoia target architecture leverages LLNL success with five generations of ASC platforms
• The Sequoia peta-scale architecture smoothly integrates into the LLNL environment.
• The architecture is suitable for weapons science and the Integrated Design Codes (IDC): 24x Purple for design codes in support of certification, and 20-50x for weapons science requirements.
• Risk reduction is woven into every aspect of Sequoia: associated R&D contracts accelerate technology development, and targets and off-ramps allow innovation with shared risk.
• ASC HQ supports a $250M total project cost covering Sequoia, the initial-delivery (ID) system, and R&D.

Moore's Law: each year we get more processors rather than faster ones
Historically, single-stream performance was boosted via more complex chips, first via one big feature, then via lots of smaller features. Now the industry delivers more cores per chip. The free lunch is over for today's sequential apps and many concurrent apps (expect some regressions). We need killer apps with lots of latent parallelism, and a generational advance beyond object orientation is necessary to get above the "threads+locks" programming model. (From Herb Sutter, <[email protected]>.)
[Figure: Intel CPU trends (sources: Intel, Wikipedia, K. Olukotun), from the 386 and Pentium through Montecito.]

How many cores are you coding for?
Microprocessor parallelism will increase exponentially (2x every 1.5 years) in the next decade. (From Herb Sutter, <[email protected]>.)
[Figure: hardware parallelism (out-of-order cores, in-order cores, in-order threads) versus year, growing from 1-8 in 2005 ("you are here") to 256-512 by 2013 ("how do we get to here?").]

DRAM component density is doubling every 3 years (source: IBM).

Memory power projection (source: IBM)
[Figure: power per quad-rank DIMM in watts versus year, 2000 through 2018, for DDR, DDR2, DDR3, and DDR4, reaching roughly 25 W.]
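Taken together, the two trends above have a direct consequence for memory balance. Under the simplifying assumption, added here rather than stated on the slides, that the number of DRAM components per node stays fixed while core counts grow at 2x per 1.5 years and DRAM density grows at 2x per 3 years:

\[
\frac{\text{memory}}{\text{core}} \;\propto\; \frac{2^{t/3}}{2^{t/1.5}} \;=\; 2^{-t/3},
\]

that is, the bytes available per core would halve roughly every three years, which is one reason the programming model must absorb many cores per MPI task rather than one task per core.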
Application programming model requirements
• MPI parallelism at the top level, with static allocation of MPI tasks to nodes and to sets of cores+threads.
• Effectively absorb multiple cores+threads within an MPI task.
• Support multiple languages: C, C++, and Fortran 2003.
• Allow different physics packages to express node concurrency in different ways.

Unified nested node concurrency
1) Pthreads are born with MAIN.
2) Only Thread0 calls functions to nest parallelism.
3) Pthreads-based MAIN calls OpenMP-based Funct1.
4) OpenMP-based Funct1 calls TM/SE-based Funct2.
5) Funct2 returns to OpenMP-based Funct1.
6) Funct1 returns to Pthreads-based MAIN.
[Figure: a timeline of one MPI task from MPI_INIT to MPI_FINALIZE, with MPI calls issued only from Thread0 while worker threads Thread1-Thread3 join Pthreads, OpenMP, and TM/SE (transactional memory / speculative execution) regions as control passes from MAIN to Funct1 to Funct2 and back.]
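To make the requirements above concrete, here is a minimal hybrid MPI+OpenMP sketch in C. It is not code from the presentation: it simply illustrates one MPI task absorbing its node's cores with an OpenMP region while only the master thread ("Thread0") makes MPI calls, i.e., the MPI_THREAD_FUNNELED model. The loop body and problem size are placeholders.

```c
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int provided, rank, nranks;

    /* MPI parallelism at the top level: request a threading level in which
       only the thread that initialized MPI ever makes MPI calls. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    double local_sum = 0.0, global_sum = 0.0;
    const int n = 1000000;            /* placeholder local problem size */

    /* Node concurrency expressed inside the MPI task with OpenMP; a
       different physics package could use Pthreads or TM/SE here instead. */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < n; i++)
        local_sum += 1.0 / (1.0 + i + rank * (double)n);

    /* Back on the master thread: only "Thread0" touches MPI. */
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0,
               MPI_COMM_WORLD);

    if (rank == 0)
        printf("MPI tasks: %d, threads per task: %d, sum = %.6f\n",
               nranks, omp_get_max_threads(), global_sum);

    MPI_Finalize();
    return 0;
}
```

Built with an MPI compiler wrapper and OpenMP enabled (for example, mpicc with -fopenmp), each MPI rank would then be statically assigned to a node or to a set of cores by the resource manager, matching the static-allocation requirement listed above.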