High Performance Computing Workshop

Session VII

HPC Resources and Training Opportunities at the NNSA/ASC National Labs

Blaise Barney, Lawrence Livermore National Laboratory

Overview

• Who Are the NNSA/ASC National Labs?

• Leadership in High Performance Computing

• HPC Science at the Labs

• Overview of HPC Platforms at LLNL, LANL and Sandia

• HPC Training Opportunities at the NNSA/ASC Labs

• ASC Academic Alliance Program

• Future Platforms

Who Are the NNSA/ASC National Labs?

• NNSA: the Department of Energy's National Nuclear Security Administration
  • Broad range of responsibilities including nuclear security, non-proliferation, naval reactors and defense programs. $24B budget
  • National security in a post-Cold War era is an essential mission component.
• ASC: the Advanced Simulation and Computing Program within NNSA
  • Established in 1995 to support NNSA's shift in emphasis from test-based confidence to simulation-based confidence. $611M budget
  • Under ASC, computer simulation capabilities are developed to analyze, predict and certify the performance, safety, and reliability of the nation's nuclear stockpile.
  • Multi-disciplinary simulation science and implementation of the world's most powerful computing resources.

Who Are the NNSA/ASC National Labs?

• The Department of Energy funds a wide range of national laboratories and related facilities:
  • Ames
  • Argonne
  • Brookhaven
  • Fermi National Accelerator
  • Idaho
  • Lawrence Berkeley
  • Lawrence Livermore
  • Los Alamos
  • National Energy Technology
  • National Renewable Energy
  • Oak Ridge
  • Princeton Plasma Physics
  • Sandia
  • Savannah River
  • Stanford Linear Accelerator
  • Thomas Jefferson National Accelerator Facility
• Three of these national laboratories are designated as NNSA defense labs:
  • Lawrence Livermore
  • Los Alamos
  • Sandia
• The ASC program integrates the work of the NNSA defense labs to facilitate NNSA national security goals and mission.
• The ASC program also funds and integrates the work of academic Alliance research centers at 8 U.S. universities (discussed later): Caltech, U. Chicago, U. Illinois, U. Michigan, Purdue, Stanford, U. Texas, U. Utah

Who Are the NNSA/ASC National Labs? (aka Tri-labs)

• Lawrence Livermore National Laboratory (LLNL)
  • Located in Livermore, CA; established in 1952
  • 7,800+ employees and contractors
  • Managed by Lawrence Livermore National Security, LLC (LLNS); budget ~$1.6 billion

• Los Alamos National Laboratory (LANL)
  • Located in Los Alamos, NM; origins in the Manhattan Project (1943)
  • 9,500+ employees and contractors
  • Managed by Los Alamos National Security, LLC (LANS); budget ~$2.2 billion

• Sandia National Laboratories (SNL)
  • Albuquerque, NM (main) and Livermore, CA, with test facilities in Nevada and Hawaii; established 1949
  • 8,400+ employees and contractors
  • Managed by Lockheed Martin Corporation; budget ~$2.3 billion

Leadership in High Performance Computing

The NNSA/ASC Labs have a long HPC history – even before the term "HPC" and the NNSA/ASC were invented. For example - at LLNL:

Leadership in High Performance Computing

At LANL….

Leadership in High Performance Computing

At Sandia...

• 1987 - 1024-node nCUBE 10
• 1990 - two 1024-node nCUBE-2s
• 1993 - Sandia fielded the first Intel Paragon: 1,850 nodes / 3,900 cpus; reached #1 on the Top500
• 1997-2006 - ASCI Red (Intel): world's first Tflop system; upgraded to 3 Tflops, 4,510 nodes / 9,298 cpus; fastest computer on the TOP500 list for 3 years
• 1997 - Cplant (DEC/Compaq) with Myrinet network; achieved 997 Gflops in Nov 2003
• 2006 - current - Thunderbird (Dell): 65.4 Tflops Linux cluster; 4,480 nodes / 8,960 cpus; #6 on the Jun 2006 TOP500 list
• 2006 - current - Red Storm (Cray): 284 Tflops theoretical peak; 12,960 nodes / 38,400 cpus; #9 on the Nov 2008 TOP500 list

[Photos: nCUBE, Paragon, ASCI Red, Cplant, Red Storm]

Leadership in High Performance Computing

The NNSA/ASC Labs have dominated the Top500 list (top500.org) for most of the 16 years since the list was created in 1993.

Ranked #1 position 22 times (as of 6/09)

Leadership in High Performance Computing

• Award-winning science and technology
  • Gordon Bell Prize recognizing outstanding achievements in HPC - won 16 times since the prize began in 1987
  • Many R&D 100 Awards
  • Very long lists of other awards from government, industry and scientific organizations:
    www.sandia.gov/news/corp/awards/
    www.llnl.gov/llnl/sciencetech/awards.jsp
    www.lanl.gov/science/awards/

• The science in the Labs' state-of-the-art simulations is recognized in scientific publications around the world

Leadership in High Performance Computing

• The NNSA/ASC Labs have been instrumental in revitalizing the U.S. HPC industry by working closely with vendors such as Cray and IBM.

From "Getting Up to Speed: The Future of Supercomputing", Susan L. Graham, Marc Snir, and Cynthia A. Patterson, Editors, Committee on the Future of Supercomputing, National Research Council, 2004:

Supercomputers play a significant and growing role in a variety of areas important to the nation. They are used to address challenging science and technology problems. In recent years, however, progress in supercomputing in the United States has slowed. The development of the Earth Simulator by Japan showed that the United States could lose its competitive advantage and, more importantly, the national competence needed to achieve national goals. In the wake of this development, the Department of Energy asked the NRC to assess the state of U.S. supercomputing capabilities and relevant R&D. Subsequently, the Senate directed DOE in S. Rpt. 107-220 to ask the NRC to evaluate the Advanced Simulation and Computing program of the National Nuclear Security Administration at DOE in light of the development of the Earth Simulator. This report provides an assessment of the current status of supercomputing in the United States including a review of current demand and technology, infrastructure and institutions, and international activities.

Leadership in High Performance Computing

Things have changed… largely due to DOE/NNSA/ASC investments in HPC.

HPC Science at the Labs

• There is a very broad range of science being explored at the Labs, much of it depending upon HPC:
  • Physics - applied, nuclear, particle & accelerator, condensed matter, high pressure, fusion, photonics
  • Atmosphere, Earth, Environment and Energy
  • Biosciences and Biotechnology
  • Engineering - defense technologies, laser systems (NIF), mechanical…
  • Chemistry
  • Materials
  • Microelectronics
  • Pulsed power
  • Computer & Information Science, Mathematics

• Diverse community of scientists, researchers and collaborations - Tri-lab, other labs, universities, international…

HPC Science at the Labs: A Few Examples

Concurrently using 196,608 processors in a single run, these high-fidelity simulations of a three-dimensional laser beam interacting with a target are critical to achieving fusion ignition at the National Ignition Facility. BlueGene/L is refining the design of the National Ignition Facility, scheduled to achieve fusion ignition in 2010. Obtaining controlled laboratory fusion is the holy grail of national energy independence.

HPC Science at the Labs: A Few Examples

University of Texas: Center for Predictive Engineering and Computational Sciences (PECOS). The goal of the PECOS Center is development of advanced computational methods for predictive simulation of multiscale, multiphysics phenomena applied to the problem of reentry of vehicles into the atmosphere.

Overview of HPC Platforms at the Tri-labs

HPC @ LLNL: BG/L and BG/P

• 1999 origins in IBM exploring a novel new architecture: massively scalable parallelism with low power consumption and a small footprint.

• Targeted for protein folding research, but later became mainstream; BG systems now found internationally and rank prominently on the Top500 list

• Collaboration with LLNL resulted in the world’s fastest computer from 11/04 to 6/08. LLNL's BG/L system: 596 Tflops; 106,496 nodes; 212,992 cpus

• BG/L "System on a Chip" PowerPC design
  • 32-bit architecture
  • dual-core processor @ 700 MHz
  • 512 MB - 1 GB memory/node
  • double floating-point unit ("double hummer") per cpu

HPC @ LLNL: BG/L and BG/P

• 2 nodes/compute card; 16 cards/board; 32 boards/rack (1,024 nodes)
• Compute nodes use an extremely lightweight custom kernel
• I/O nodes run a customized Linux kernel

• Multiple networks:
  • 3D torus - 6 nearest-neighbor topology for point-to-point MPI (175 MB/s x 6 links x 2 ways)
  • Global tree - MPI collectives (350 MB/s x 3 links x 2 ways)
  • MPI barrier network
  • Gigabit Ethernet for I/O
  • Service network
(A minimal MPI sketch contrasting point-to-point and collective traffic follows below.)
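To make the mapping between MPI calls and the BG/L networks above concrete, here is a minimal, hedged C/MPI sketch (not from the slides): the MPI_Sendrecv exchange is the kind of point-to-point traffic carried on the 3D torus, while MPI_Allreduce is the kind of collective the global tree network accelerates. The neighbor choice and buffer sizes are arbitrary illustrations.

```c
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Point-to-point exchange with neighboring ranks: on BG/L this
       traffic is routed over the 3D torus network. */
    double halo_out = (double)rank, halo_in = 0.0;
    int right = (rank + 1) % size;
    int left  = (rank - 1 + size) % size;
    MPI_Sendrecv(&halo_out, 1, MPI_DOUBLE, right, 0,
                 &halo_in,  1, MPI_DOUBLE, left,  0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* Collective reduction: on BG/L this is the kind of operation the
       global tree (and barrier) networks are designed to accelerate. */
    double local = halo_in, global_sum = 0.0;
    MPI_Allreduce(&local, &global_sum, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum across %d ranks = %g\n", size, global_sum);

    MPI_Finalize();
    return 0;
}
```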

HPC @ LLNL: BG/L and BG/P

• BG/P design is very similar to its BG/L predecessor - some notable improvements:
  • 32-bit architecture
  • quad-core processor @ 850 MHz
  • up to 4 GB memory/node
  • bandwidth of internal networks more than doubled
  • maximum size now scales to over 1 million cpus and 3.56 petaflops

• Stay tuned: success has led to widespread adoption of this architecture and further development in "BG/X" systems.

HPC @ LLNL: ASC Purple

• Debuted in 2005 as #3 on the Top500 list, and fulfilled ASC's 10 year goal of a 100 Tflop system.

• LLNL's ASC Purple system:
  • built on IBM's Power5 P5-575 node (8 cpus)
  • 1,532 nodes; 12,288 cpus
  • 32 GB memory/node
  • high-speed internal switch network
  • AIX operating system

• IBM Power5 architecture
  • 64-bit architecture
  • dual-core processor @ 1.9 GHz
  • multiple chips combined to form modules
  • up to 64 cpus/node by combining modules
(A minimal hybrid MPI + OpenMP sketch for these SMP nodes follows below.)
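Purple's 8-way SMP nodes are the kind of system where the hybrid MPI + OpenMP model (one MPI task per node, OpenMP threads across the node's cpus) is commonly used. The following is a minimal, generic sketch of that pattern, not code from the slides or from any Purple application; the array size and loop are arbitrary.

```c
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    static double a[N];
    double local_sum = 0.0;

    /* OpenMP threads span the cpus of one SMP node (e.g., 8 on a
       P5-575 node); each MPI task owns one node's share of the data. */
    #pragma omp parallel for reduction(+:local_sum)
    for (int i = 0; i < N; i++) {
        a[i] = (double)(i + rank);
        local_sum += a[i];
    }

    /* MPI combines the per-node results across the cluster. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("threads/node = %d, global sum = %g\n",
               omp_get_max_threads(), global_sum);

    MPI_Finalize();
    return 0;
}
```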

• P5-575 node is specially designed for scientific use – one core is shut off, providing better memory-cpu bandwidth; L2/L3 cache dedicated to single cpu.

HPC @ LLNL: ASC Purple

• ASC Purple configuration
  • majority of nodes dedicated to production parallel batch jobs
  • subset of nodes dedicated as parallel I/O servers or login nodes
  • high-speed switch connects all nodes
  • large GPFS parallel file system (2.8 petabytes)
  • GigE external network to HPSS mass storage and other LC clusters

• uP is a smaller version of Purple on the unclassified network - ~100 nodes / 800 cpus

HPC @ LANL: Linux AMD Clusters

• LANL is almost an all-AMD Opteron (64-bit x86) "capacity" Linux shop (i.e., throughput engines), with a few exceptions

• Over 32,000 Opteron cores @ 2.0-2.6 GHz spread across eight clusters

• 1,916 Intel Xeon @ 2.4 GHz

• Usually 4 GB memory/core

• Primarily Myrinet, migrating toward Infiniband interconnect

• Range in size from 32 - 3300 nodes

HPC @ LANL: Roadrunner

• LANL Roadrunner cluster, current #1 on the Top500 list
  • After a long search for a partner, LANL signed a contract with IBM on 9/8/06. Three-phase delivery with the final system delivered in 2008.
  • First machine to break the petaflop barrier - 1.026 PF Linpack on 5/26/08. Currently rated at 1.105 PF Linpack.
  • Hybrid architecture combining dual-core Opterons with IBM Cell processors

How much is a petaflop? It would take the entire population of the earth (about six billion) each working a handheld calculator at the rate of one calculation per second, more than 46 years to do what Roadrunner can do in one day.

HPC @ LANL: Roadrunner

• Hybrid architecture - each Roadrunner "TriBlade" node is comprised of:
  • LS21 blade: dual-socket, dual-core, 1.8 GHz AMD Opteron (2210 HE) processors
  • QS22 blades (2): total of four IBM PowerXCell 8i 3.2 GHz Cell processors
  • Expansion blade: connects the LS21, the QS22s and ConnectX Infiniband 4X DDR
• Node design points:
  • one Cell chip per Opteron core
  • ~400 GF/s double-precision & ~800 GF/s single-precision (Cells total)
  • 16 GB Cell memory & 16 GB Opteron memory
  • Opterons manage standard processing, such as filesystem I/O; Cell processors handle mathematical and CPU-intensive tasks

HPC @ LANL: Roadrunner

• Key component: the Cell Broadband Engine, developed by Sony-Toshiba-IBM and used in the Sony PlayStation 3

• The Cell processor is an (8+1)-way heterogeneous parallel processor: one PowerPC Processor Element (PPE) plus eight Synergistic Processing Elements (SPEs)

• 8 Synergistic Processing Elements (SPEs), each with:
  • SXU = 128-bit vector engine
  • LS = 256 kB local store memory
  • SMF = Direct Memory Access engine (25.6 GB/s each)
• EIB = on-chip interconnect
• SPE code runs as POSIX threads on the host (see the sketch below)

• Current Cell performance:
  • 204.8 GF/s single precision
  • 102.4 GF/s double precision
  • 4-8 GB DDR memory @ 25.6 GB/s

• PowerPC PPE runs Linux OS
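The "SPE code runs as POSIX threads" point can be illustrated with a minimal host-side (PPE) sketch using the libspe2 API from the IBM Cell SDK. This is a generic illustration, not Roadrunner code: it assumes an SPE program has already been built with the SPU toolchain and embedded under the hypothetical handle compute_kernel_spu, and it launches just one SPE; a real code would start one thread per SPE.

```c
/* Host (PPE) side: run one SPE program in its own POSIX thread.
   Sketch only; assumes the Cell SDK's libspe2 and a pre-built SPU
   program embedded as the (hypothetical) handle 'compute_kernel_spu'. */
#include <libspe2.h>
#include <pthread.h>
#include <stdio.h>

extern spe_program_handle_t compute_kernel_spu;  /* hypothetical SPU binary */

static void *spe_thread(void *arg)
{
    spe_context_ptr_t ctx = (spe_context_ptr_t)arg;
    unsigned int entry = SPE_DEFAULT_ENTRY;

    /* Blocks in this pthread until the SPE program stops. */
    if (spe_context_run(ctx, &entry, 0, NULL, NULL, NULL) < 0)
        perror("spe_context_run");
    return NULL;
}

int main(void)
{
    spe_context_ptr_t ctx = spe_context_create(0, NULL);
    spe_program_load(ctx, &compute_kernel_spu);

    pthread_t tid;
    pthread_create(&tid, NULL, spe_thread, ctx);   /* one thread per SPE */
    pthread_join(tid, NULL);

    spe_context_destroy(ctx);
    return 0;
}
```

On a Cell host this would be linked against libspe2 and pthreads (e.g., gcc host.c -lspe2 -lpthread), with the SPU-side kernel compiled and embedded separately.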

HPC @ LANL: Roadrunner

• Roadrunner is Cell-accelerated, not a cluster of Cells. Built from "Connected Units" (CUs):
  • CU = 180 compute nodes w/ Cells + 12 I/O nodes + IB networking
  • 17 CUs = 3,060 total compute nodes
  • 6,120 dual-core Opterons (AMD 2210 HE) = 44 Tflops
  • 12,240 Cell chips = 1.2 Pflops

HPC @ LANL: Roadrunner

Roadrunner Summary
• Cluster of 17 Connected Units (CUs)
  • 3,060 compute + 204 I/O nodes
  • 12,240 compute + 864 I/O 1.8 GHz AMD Opteron (2210 HE) processors
  • 12,240 IBM PowerXCell 8i 3.2 GHz Cell processors
  • 44 Tflops (+4.5 I/O) Opteron peak
  • 1.22 Pflops Cell peak
  • 1 PF sustained Linpack
• InfiniBand 4x DDR fabric
  • 2-stage fat-tree; all-optical cables
  • full bi-section BW within each CU: 384 GB/s (bi-directional)
  • half bi-section BW among CUs: 3.45 TB/s (bi-directional)
  • non-disruptive expansion to 24 CUs
• 104 TB aggregate memory
  • 52 TB Opteron
  • 52 TB Cell
• 216 GB/s sustained file system I/O over 216x2 10G Ethernets to the Panasas parallel file system
• Roadrunner TriBlades are completely diskless and run from RAM disks, with NFS & Panasas only to the LS21 blades
• Fedora Linux (RHEL possible)
• SDK for Multicore Acceleration: Cell compilers, libraries, tools
• xCAT cluster management
• System-wide GigE network
• 3.9 MW power; 0.35 GF/Watt
• Area: 296 racks; 5,500 ft2

HPC @ LANL: Roadrunner

• Programming: Three types of processors work together

• Parallel computing on the Cell
  • data partitioning & work-queue pipelining
  • process management & synchronization

• Remote communication to/from the Cell
  • data communication & synchronization
  • process management & synchronization
  • computationally intense offload

• MPI remains the foundation (a minimal host-side offload sketch follows below)
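To show how MPI remains the foundation while the Cells take the computationally intense work, here is a minimal, generic C sketch of the host-side pattern, not Roadrunner source code: the Opteron-side MPI ranks partition and exchange data as usual, and the heavy kernel is delegated to a hypothetical routine, cell_offload_kernel(), standing in for work that would actually be launched on the Cell blades via the Cell SDK.

```c
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define CHUNK 4096

/* Hypothetical stand-in for work offloaded to the node's Cell blades
   (e.g., launched through libspe2); here it simply runs on the host. */
static void cell_offload_kernel(double *buf, int n)
{
    for (int i = 0; i < n; i++)
        buf[i] = buf[i] * buf[i];   /* placeholder for the intense math */
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Opteron side: MPI owns data partitioning and communication. */
    double *chunk = malloc(CHUNK * sizeof(double));
    for (int i = 0; i < CHUNK; i++)
        chunk[i] = rank + i * 1e-6;

    /* Offload the compute-intensive step, then resume MPI work. */
    cell_offload_kernel(chunk, CHUNK);

    double local = chunk[0], total = 0.0;
    MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("%d ranks, reduced value = %g\n", size, total);

    free(chunk);
    MPI_Finalize();
    return 0;
}
```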

HPC @ Sandia: Thunderbird

• Comprised of 4,480 Dell nodes
  • 2 cpus/node = 8,960 total processors
  • 3.6 GHz Intel EM64T processors
  • 6 GB RAM/node
• 65.4 Tflops peak performance
• Linux operating system
• Infiniband high-speed message-passing fabric
• Ranked #5 on the Top500 in 11/05

HPC @ Sandia: Red Storm

• Comprised of 12,960 Cray nodes with a total of 38,400 processors
  • 6,240 quad-core, 2.2 GHz AMD Opteron nodes
  • 6,720 dual-core, 2.4 GHz AMD Opteron nodes
• 75 terabytes of DDR memory
• 284 Tflops peak performance; ranked #2 on the Top500 list in 11/06
• Lightweight kernel operating system (Catamount)
• Aggregate system memory bandwidth of 83 TB/s
• High-speed, high-bandwidth, 3D mesh-based Cray interconnect with minimum sustained aggregate bandwidth of 120 TB/s
• High-performance I/O subsystem: minimum sustained file system bandwidth of 100 GB/s to 1,159 TB of parallel disk storage, and sustained external network bandwidth of 50 GB/s

TLCC: Tri-lab Opteron / Infiniband Clusters

• Design based upon a common NNSA/ASC three-lab procurement: Tri-lab Linux Capacity Clusters (TLCC)
  • "Scalable Unit" (SU) building block of 144 nodes connected by twelve 24-port 4X DDR Infiniband switches
  • Each SU = approx. 20 Tflops; $1.2M (see the peak arithmetic below)
  • Multiple SUs then connected by 288-port Infiniband switches to make larger systems - 288, 576, 1,152 nodes, etc. [Diagram: 1,152-node interconnect]
• AMD Socket F Opteron nodes
  • Quad-core, quad-socket nodes (16 cpus) @ 2.2 GHz
  • 32 GB memory/node
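As a rough check on the "approx. 20 Tflops per SU" figure, the peak arithmetic works out as follows, assuming (not stated on the slide) 4 double-precision floating-point operations per clock per quad-core Opteron core:

```latex
144\,\text{nodes} \times 16\,\tfrac{\text{cores}}{\text{node}} \times 2.2\,\tfrac{\text{Gcycles}}{\text{s}} \times 4\,\tfrac{\text{flop}}{\text{cycle}} \approx 20.3\,\text{Tflops per SU}
```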

• Software stack
  • Customized Red Hat Linux OS (Chaos)
  • Moab batch scheduler
  • MPI, compilers, parallel file systems

TLCC: Tri-lab Opteron / Infiniband Clusters

Where    Cluster     SU    Nodes   CPUs    Tflops
Sandia   Unity       2     288     4608    40.6
Sandia   Glory       2     288     4608    40.6
Sandia   Whitney     2     288     4608    40.6
LLNL     Juno        8     1152    18432   162.2
LLNL     Hera        6     864     13824   121.7
LLNL     Eos         2     288     4608    40.6
LANL     Lobo        2     288     4608    40.6
LANL     Hurricane   2     288     4608    40.6
LANL     Turing      0.5   72      576     10.2

• LLNL also has six similar clusters (dual-core instead of quad-core) ranging in size from 80 - 1152 nodes and 3.1 - 44.2 Tflops.

• More info: • https://asc.llnl.gov/publications/sc2007-tlcc.pdf

HPC Training Opportunities at the NNSA/ASC Labs

• HPC workshops, including hands-on sessions, conducted at all three labs
  • Available to lab employees, collaborators, university Alliances, student interns, and HPC users from any other location.

• Example workshop topics:
  • Introduction to parallel programming
  • Architecture-specific training for new platforms - ASC Purple, IBM BG/L & BG/P, Cray Red Storm, IBM Roadrunner, AMD Opteron ...
  • Parallel performance analysis tools
  • Linux clusters
  • Parallel debuggers
  • Compilers
  • OpenMP
  • Message Passing Interface (MPI)
  • POSIX Threads
  • Batch schedulers
  • Getting-started topics for new users

HPC Training Opportunities at the NNSA/ASC Labs

• Expert software training conducted at the labs by invited vendors and developers: • Cray, IBM, Intel, Totalview Technologies, ParaTools ...

• Remote training conducted at user's site • Customized to meet user needs • Possible to combine instructors from all 3 labs and/or vendors

• Collaborations with HPC training at other labs

• Online tutorials and workshop materials • computing.llnl.gov/tutorials • LANL and Sandia materials available for authenticated users

• Access Grid workshops/seminars available in some cases

HPC Training Opportunities at the NNSA/ASC Labs

• Frequent seminars on a broad range of HPC-related research topics. For example, Apr-May '09 seminars hosted by LLNL's Computation directorate:
  • Profiling and Incremental Profiling of OpenMP Applications
  • IBM High Performance Computing Toolkit
  • Efficient Nonparametric Density Estimation for Randomly Perturbed Elliptic Problems
  • Non-intrusive Detection of Faults in High Throughput Distributed Systems
  • Petascale Direct Numerical Simulation of Turbulent Combustion
  • Model Reduction for Uncertainty Quantification in Large-Scale Complex Systems
  • The MDA Digital Simulation Architecture and the Technologies that Support It
  • The Missile Defense Agency, its Mission and Challenges for Simulation
  • Spade: Faceted Metadata Search for File Systems
  • Practical UQ Methods and Methodologies for Large Scale Multi-Physics Models

HPC Training Opportunities at the NNSA/ASC Labs

• Summer intern, high school, undergrad, graduate, and post-doc opportunities (including much more than just HPC):
  • https://postdocs.llnl.gov/
  • https://www.llnl.gov/llnl/internships/
  • jobs.llnl.gov
  • http://www.lanl.gov/education/
  • http://www.hr.lanl.gov/FindJob/
  • http://www.lanl.gov/source/science/postdocs/
  • http://www.sandia.gov/employment/special-prog/
  • http://www.sandia.gov/employment/

The ASC Academic Alliance Program

• The NNSA/ASC Labs have a long history of university collaborations. In fact, LLNL and LANL were both managed by the University of California until recently.

• Since 1997, ASC has been funding selected university centers in long-term relationships through its Academic Strategic Alliance Program (ASAP):
  • California Institute of Technology: Center for Simulating the Dynamic Response of Materials
  • University of Chicago: Center for Astrophysical Thermonuclear Flashes
  • University of Illinois, Urbana-Champaign: Center for Simulation of Advanced Rockets
  • Stanford University: Center for Integrated Turbulence Simulations
  • University of Utah: Center for Simulation of Accidental Fires and Explosions

• Funding level of ~$5 million/year per Alliance

The ASC Academic Alliance Program

• Focus on developing integrated, multidisciplinary scientific applications of national importance and in support of ASC goals.

• Promoting collaborative interactions with Lab researchers and use of HPC resources at each Lab

• In 2008, ASC initiated the follow-on academic alliance program, with a strong emphasis on prediction and verification, called the Predictive Science Academic Alliance Program (PSAAP):
  • California Institute of Technology: Center for the Predictive Modeling and Simulation of High-Energy Density Dynamic Response of Materials
  • University of Michigan: Center for Radiative Shock Hydrodynamics
  • Purdue University: Center for Prediction of Reliability, Integrity and Survivability of Microsystems
  • Stanford University: Center for Predictive Simulations of Multi-Physics Flow Phenomena with Application to Integrated Hypersonic Systems
  • University of Texas: Center for Predictive Engineering and Computational Sciences

• Funding level of $17 million per Alliance over 5 years

HPC Compute Resources Available To The Alliances

Site   System      Arch           Nodes    CPU/node   Memory/node   Tflops
LANL   Lobo        AMD Opteron    288      16         32 GB         40.6
LANL   Cerrillos   Opteron+Cell   360      4+4        16+16 GB      152
LLNL   Hera        AMD Opteron    864      16         16 GB         121.7
LLNL   UBGL        IBM BG/L       43,008   2          512 MB        240.8

Future Platforms: Zia at LANL/Sandia

• Zia
  • Next-generation capability platform, i.e., large-scale apps across the whole platform
  • Petascale production capability
  • Collaboration between Sandia and Los Alamos: the "Alliance for Computing at Extreme Scale" (ACES)
  • Funding profile established with NNSA
  • Competitive procurement to be issued in FY09 for availability in 2010
  • RFP will specify minimum peak, aggregate memory bandwidth and interconnect bandwidth
  • 2 GB memory per core (minimum)
  • Subject to change

Future Platforms: Sequoia at LLNL

• RFP released 7/16/08 and closed 8/21/08. Contract awarded to IBM on 2/3/09.

• ~$215 million award over a 2008 - 2015 time span

• 20 Petaflop system with 0.5 petaflop early delivery BG/P system

• Will be based on future IBM BlueGene technology and use 1.6 million IBM processors and 1.6 petabytes of memory, housed in 96 refrigerator-sized racks.

• Sequoia will deploy a state of the art switching infrastructure that will take advantage of advanced fiber optics at all levels.

Future Platforms: Sequoia at LLNL

• ASC Sequoia is the next-generation production platform for NNSA Stockpile Stewardship
  • Will be a 2D ultra-res and 3D high-res Quantification of Uncertainty engine
  • 3D science capability for known unknowns and unknown unknowns
  • ~20 Pflops target
  • Lightweight kernel on compute nodes
  • Linux/Unix on I/O nodes
  • Generally available in 2010/2011

References and More Information

• NNSA: nnsa.energy.gov/
• ASC: www.sandia.gov/NNSA/ASC/
• LANL: www.lanl.gov, computing.lanl.gov
• LLNL: www.llnl.gov, computing.llnl.gov
• SNL: www.sandia.gov, hpc.sandia.gov
• ASC Alliance Program: www.sandia.gov/NNSA/ASC/univ/univ.html

LLNL Review & Release: LLNL-PRES-406636
