ANL-12/22

Argonne Leadership Computing Facility

2011 annual report Shaping Future Supercomputing Argonne Leadership Computing Facility

2011 alcf annual report www.alcf.anl.gov

Contents Overview...... 2

Mira...... 4

Science Highlights...... 8

Computing Resources...... 26

2011 ALCF Publications...... 28

2012 INCITE Projects...... 39

On the cover This image shows the charge density difference of a carbon monoxide molecule adsorbed (adhered to the surface) of a 309-atom gold naoparticle. Gold, oxygen, and carbon atoms are colored gold, red, and white, respectively. The blue translucent isosurface depicts charge density depletion, and the green translucent isosurface depicts charge density accumulation. Image credit: “Finite Size Effects in Chemical Bonding: From Small Clusters to Solids,” Catalysis Letters, Vol. 141, No. 8, June 2011, pp. 1067-1071. Image rendered by Joe Insley, Argonne National Laboratory. hand-in-hand withthemto ensure that ALCF resources provided maximum performance. These projects that usedapproximately 1.2billioncore-hours on Intrepid. Dedicated ALCF staff worked build ontoday’s achievements. In2011,scientists conducted research onadiverse range of key Intrepid, theALCF’s BlueGene/Psystem—and tomorrow’s strides inscientific supercomputing will However, equallycompelling challenges have beenmet successfully onMira’s predecessor— performance to produce unequaledscientific breakthroughs. will alsopresent researchers andALCF staff withthechallenges ofmaximizing its capabilities and provide asmallfootprint andultra-low power consumption. Withallthesebenefits, the system offer anopensource and standards-based programming environment. Itwill achieve faster times to solutions,and create more accurate models.Itwill calculations persecond, allowing scientists to tacklelarger problems, and challenges. TheBG/Qiscapable ofcarrying out10quadrillion Like any new arrival, thesupercomputer promises to deliver bothmarvels once themachineisfullyupandrunning. means viewfinder. The words reference thediscoveries that willenable remarkable, extraordinary The name“Mira” comes from theLatin words for even before itwas installed!) intheworld. (Ittopped supercomputing’s Green 500list than Intrepid, theBlueGene/P, ourcurrent machine,andthegreenest the 10-petaflops IBMBlueGene/Q system that willbe20times faster With great anticipation, we’ve been preparing forthearrival ofMira, Shaping theFutureofSupercomputingwithMira Argonne NationalLaboratory Computing, Environment, andLife Sciences and DeputyAssociate Laboratory Director Argonne LeadershipComputing Facility Division Director Michael E.Papka addressed withthesestrides insupercomputing! most intractable problems insuchcritical areas ashealth,energy, climate, andphysics can be will calculate quintillions offloating point operations per second.Thinkhowquicklythe world’s Miracomputing, willbringuscloser to ourvision for thefuture: exascale speed,where computers Full production onMira is expected in2013.Besidesbeinga star performer inleadership-class use ontheBlueGene/Qarchitecture. tools for users willbegiven accessto thesystem to complete theprocess ofportingthe tools for selected through theEarly ScienceProgram. Inaddition, key developers ofessential computational delivered 2012.Onceinstalled, inearlyJanuary thefirst researchers to access themwillbethose technology inthedata center. Two testing anddevelopment racks—named Cetus and Vesta—were To supportthesciencethat willbe runonMira, theALCF team installed 35petabytes ofstorage collaborative efforts resulted in breathtaking scientific resultsand pavedthe way for Mira.

. It’s alsosimilarto theItalian word

wonderful, surprising, mirino , which Shaping FutureSupercomputing

1 2011 alcf annual report 2 Overview 2011 alcf annual report ALCF Evolution – – – – – 2011 – – – – 2010 – – – 2009 – – – – 2008 – – – – – 2007 – – – 2006 – 2005 – 2004 – – – – – – – – – – – – – – – – – – – – – – – – – – Delivered 1.2billioncore-hours ofscience predecessor, Eureka Prepared for Tukey, whichwillbetwiceasfast asits Planned facility enhancements completed for Mira Signed contract for delivery ofMira to theALCF in2012 Supported 30INCITE, 7ALCC, and 16ESP projects of sciencesince theALCF’s inception Delivered more than2billioncore-hours to runonthe10-petaflops Mira Selected 16Early ScienceProgram (ESP) projects Blue Gene/Q supercomputer Signed contract for Mira, the next-generation Supported 35INCITE projects and10ALCC projects Delivered more than1millioncore-hours ofscience Supported 28INCITE projects Brought the 557-teraflops Intrepid into fullproduction Installed Eureka, providing aquantum leapinvisualization for openscience Intrepid namedworld’s fastest supercomputer Dedicated the ALCF inApril Supported 20INCITE projects Accepted 100-teraflops BlueGene/P Installed 100-teraflops BlueGene/P Held Next-Generation BlueGene workshop Continued development projects Supported 9INCITE projects Signed contract for Intrepid of Blue Gene/L Continued code development and evaluation Supported 6INCITE projects Installed 5-teraflops Blue Gene/L for evaluation Formed theBlue Gene ConsortiumwithIBM

for Computing Class Leadership- ALCF: OVERVIEW from theuseofALCF resources. performance ofapplications and to maximize benefits and assistance to supportuser projects to achieve top operation in2006withits team providing expertise groundbreaking scienceandengineering.Itbegan a world-class computing capability dedicated to provides thecomputational science community with by theU.S. Department ofEnergy (DOE).TheALCF one oftwo leadership computing facilities supported The Argonne Leadership Computing Facility (ALCF) is Science Pioneering science community. in partnership withthecomputational providing world-leading computing facilities breakthroughs for humanityby designingand scientific discoveries andengineering The ALCF’s missionisto accelerate major Mission andVision the universe. change, andefforts to unravel theoriginsof energy, studies oftheeffects ofglobalclimate endeavors asexplorations into renewable were devoted to suchworthy research In 2011,theALCF’s 1.2billioncore-hours Enabling Science transformational science. researchers worldwide for rapidly conducting provide billionsmore core-hours eachyear to will beinfullproduction at theALCF. Itwill at 10quadrillioncalculations persecond— supercomputer capable ofrunningprograms second. By 2013,Mira—an IBMBlueGene/Q of more than500trillioncalculations per Argonne’s IBMBlueGene/P, iscapable fastest computers for openscience, Intrepid, energy technologies. Oneoftheworld’s natural ecosystems, andsophisticated new materials, advanced energy networks, to study intricate chemical processes, exotic and engineers usemodelingandsimulation increasingly important asmore scientists High-performance computing isbecoming Discovery Accelerating Transformational approaches andthelargest-scale systems. for thenation that require innovative frontiers ofscience by solving key problems computational center for extending the The ALCF strives to bethe forefront Shaping FutureSupercomputing

3 2011 alcf annual report Overview 2011 alcf annual report 4 point operationspersecond. is capable ofcarrying out and apeakperformance of10petaflops. The system For atotal of768Kcores, 768terabytes ofRAM,      system, willconsist of: Mira, theALCF’s next-generation BlueGene/Q PETAFLOPS POWER MIRA PROVIDES power by eliminating the extra cooling step. Inaddition, pipe cold water directly alongsidethechips, whichsaves Blue Gene/Ppredecessor at theALCF. Coppertubeswill will befive timesmore energy efficient than Intrepid, its need to get more energy efficient tobe practical. Mira As computers get faster andmore powerful, they also Mira willaccomplish inonesecond! take themmore thantwo weeks to dothework that one calculation persecond, around theclock,it would into perspective: If every singleperson onearthsolved quadrillion calculations persecond. To putthisspeed 10-petaflops machineis capable of out10 carrying scientific supercomputing.marvel, Anengineeringthe being installed at theALCF, willusherinanew era of Mira, thenew petascale IBMBlueGene/Qsystem SUPERCOMPUTING NEW ERAOFSCIENTIFIC M      240 GB/s, 35PBstorage 384 I/Onodes 16 GBRAMpernode 1.6 GHz16-core processor and 1,024 nodesperrack 48 racks ira USHERSINA 10 quadrillion floating-

To ensure that scienceapplications will be prepared to distances. saves theenergy lost whentransporting data across long chips, whichspeedscommunication between cores and reduces thedistance that data hasto travel between the Mira fitsmore cores onto asinglechip.Thisarrangement millions ofprocessors withinthe decade. systems for openscienceequippedwithhundreds of a milestone intheongoing effort to develop exascale remarkable next-generation supercomputer represents Mira isexpected to beinfullproduction in2013.The safe, andreliable sources ofenergy. modeling ofbiological organisms, andthedesignofnew, advanced materials, exploration oftheuniverse, computational workload, includingsimulations of of scientific fields representative of Mira’s projected These Early ScienceProgram projects cover arange available for Early Scienceusers. become familiar onIntrepid, are ported to Mira and tools, debuggers andlibraries, withwhichthey have a performance project to ensure that theperformance been working withthecommunity andvendors through ALCF staff to helpdevelop solutions. ALCF staff hasalso root causes ofany system instabilities and work with this shakedown phase,users assist inidentifying the and accessto substantial computational time.During a significant head start for adapting tothe new machine hardware. Thisperiodprovides theteams’ projects with port andtunetheircodes onBlueGene/Qprototype with 16research teams from across thenation to run onMira assoonpossible,ALCF staff is working

Shaping FutureSupercomputing

5 2011 alcf annual report Mira Ushers in a New Era of Scientific Supercomputing

Biological Sciences GETTING A NAMD—The Engine for Large-Scale Classical MD Simulations of Biomolecular Systems Based on a Polarizable Force Field JUMP START PI: Benoit Roux, The University of Chicago 80 Million Hours Multiscale Molecular Simulations at the Petascale WITH EARLY PI: Gregory Voth, The University of Chicago 150 Million Hours

SCIENCE Chemistry The ALCF’s Early Science Program aims to prepare High-Accuracy Predictions of the Bulk Properties key applications for the architecture and scale of of Water Mira and to solidify libraries and infrastructure PI: Mark Gordon, Iowa State University that will pave the way for other future production 150 Million Hours applications. Two billion core-hours have been Accurate Numerical Simulations of Chemical allocated to 16 Early Science projects on Mira. Phenomena Involved in Energy Production and Storage with MADNESS and MPQC The projects, in addition to promising delivery of PI: Robert Harrison, Oak Ridge National Laboratory exciting new science, are all based on state-of-the- 150 Million Hours art, petascale, parallel applications. The project High-Speed Combustion and Detonation (HSCD) teams, in collaboration with ALCF staff and IBM, PI: Alexei Khokhlov, The University of Chicago have undertaken intensive efforts to adapt their 150 Million Hours software to take advantage of Mira’s Blue Gene/Q architecture, which, in a number of ways, is a Earth Science precursor to future high-performance-computing Climate-Weather Modeling Studies Using a Prototype Global Cloud-System Resolving Model architectures. PI: Venkatramani Balaji, Geophysical Fluid Dynamics Laboratory 150 Million Hours

Engineering Direct Numerical Simulation of Autoignition in a Jet in a Cross-Flow PI: Christos Frouzakis, Swiss Federal Institute of Technology 150 Million Hours Petascale, Adaptive CFD PI: Kenneth Jansen, University of Colorado—Boulder 150 Million Hours Petascale Direct Numerical Simulations of Turbulent Channel Flow PI: Robert Moser, University of Texas 60 Million Hours 2011 alcf annual report 6 Shaping Future Supercomputing

Geophysics Ab-initio Reaction Calculations for Carbon-12 Using Multi-Scale Dynamic Rupture Models PI: Steven . Pieper, Argonne National Laboratory to Improve Ground Motion Estimates 110 Million Hours PI: Thomas Jordan, University of Southern California Global Simulation of Plasma Microturbulence 150 Million Hours at the Petascale and Beyond PI: William Tang, Princeton Plasma Physics Laboratory Materials Science 50 Million Hours Materials Design and Discovery: Catalysis and Energy Storage PI: Larry Curtiss, Argonne National Laboratory 50 Million Hours

Physics Cosmic Structure Probes of the Dark Universe PI: Salman Habib, Los Alamos National Laboratory 150 Million Hours Petascale Simulations of Turbulent Nuclear Combustion PI: Don Lamb, The University of Chicago 150 Million Hours Lattice Quantum Chromodynamics PI: Paul Mackenzie, Fermilab 150 Million Hours 2011 alcf annual report 7 8 Science Highlights 2011 alcf annual report SCIENCE HIGHLIGHTS SCIENCE HIGHLIGHTS SCIENCE HIGHLIGHTS outreach to users, DOE,andthebroader community. to existing and potential ALCF users. The teamalso provides marketing and The for system software are addressed. works smoothlytogether; andI/Operformance issues,bugfixes, and requests architectures andscale ofALCF resources; theentire system software stack reliably andoptimally; system tools are matched to theunique system The data exploration, batch visualization, and production visualization. methods for high-performance, post-processing oflarge datasets, interactive The applications andthe techniques used to implement thosealgorithms. on theBlueGenesystem by assessingandimproving thealgorithms usedby The domains. scientists to maximize andaccelerate research intheirspecificscientific The science ontheBlueGenesystem inkey ways. Skilled experts at theALCF enableresearchers to conduct breakthrough again topped theGreen500 list. 500 list for thesecond consecutive year. Thenext-generation BG/Q prototype Bell Prize finalist. Intrepid, theALCF’s BG/P system, ranked No.1ontheGraph Research onmultiscale brain bloodflowsimulations wasnameda Gordon leadership-class resources was honored withseveral prestigious awards. In 2011,theALCF’s commitment to providing outstanding science and expertise in computation science. unattainable, due to theALCF’s world-class supercomputing resources and partnering withALCF staff have reached research milestones previously energy, climate, materials, physics, andotherscientific realms.Users science that solves someof the most difficultchallenges inbiology, chemistry, The Argonne Leadership Computing Facility (ALCF) enablestransformative User Services andOutreachTeamUser Services Operations Team Data andVisualization Analytics Performance Engineering Team Catalyst Team matches project PIswithexperienced computational ensures that system hardware andsoftware work facilitates theeffective useof applications Team lendsexpertise in tools and offers frontline andsupport services Shaping FutureSupercomputing

9 2011 alcf annual report 10 Director ofScience |ArgonneLeadership Computing Facility Paul Messina sampling. Director’s Discretionary. Ihope you willenjoy reading this projects from allthree award categories: INCITE,ALCC, and to represent every major application domainandincludes pursue themwithskillanddedication. Thissubset was selected on ALCF resources have compelling goals, andtheresearchers was painful to feature sofew. Theprojects that winallocations In thisreport, we describe22projects that usedIntrepid. It computing experiences that cause career swerves. in theircareer. We aimto continue to provide leadership-level in the1980sandthat theexperience caused amajorchange their first exposure to occurred at Argonne scientists. It is rewarding to learnfrom eminent scientists that software, andatraining ground for computational and computer resources for scientific advances, testbeds for methodologies and 24 processors! Many otherparallel computers ensued,providing systems ofdiverse architectures, oneofwhichhadasmany as 1984 the first system was installed, followed shortly by four other programming models,andtools for parallel architectures. In years ago, Argonne scientists began to investigate algorithms, back at Argonne’s activitiesinparallel computing.Thirty As we lookforward to Mira, we are alsodrawn to looking groundbreaking computational scienceonMira. experience runningonBG/Qhardware bodewell for pursuing engineering results. Suchscalability advances andthe early and 32K-node partitionsand produced important scientific and system, were by longproduction jobs that ran on8K-, 16K-, under 50%ofthecore-hours usedonIntrepid, theALCF’s BG/P the samenumberofBG/Pnodes.Furthermore, in2011just on BG/Qandachieve substantial performance increases over Blue Gene/P(BG/P)systems require nomodifications torun BG/Q configurations has confirmed that codesrunningonIBM However, earlyexperience inportingapplications tosmall and 786 terabytes ofmemory, may bedaunting at first. 2012. Itspowerful configuration, with48Knodes, 768K cores, the The next majorcomputational resource intheALCF—Mira, Pursuing GroundbreakingScience Message Director’s Science Science Highlights 2011 alcf annual report 10-petaflops arrive in IBMBlueGene/Q(BG/Q)—will

DOE’s INCITEprogram. TheDOEASCRLeadership intensive, large-scale research projects through are awarded to researchers withcomputationally Approximately 60%percent ofALCF resources (DOE) OfficeofScienceallocation programs. research primarilythrough Department ofEnergy supercomputer at theALCF, isavailable for Access to Intrepid, theIBMBlueGene/P better future. scientific leadership, and prepare the nation fora them solve complex challenges, advance America’s federal, state, andmunicipalagencies—to help industry, andnational laboratories—as well as works closelywithresearchers from academia, Argonne Leadership Computing Facility (ALCF) staff Array ofUsers,Challenges ALCF Awards BenefitA Wide Science Highlights allocation oftime. Early ScienceProgram projects alsotap into this prepare smallerprojects for afuture INCITE award. initiative (about10%of resources) designed to Director’s Discretionary program, asmaller also applyfor timethrough thelocally managed DOE’s energy mission.Argonne researchers may of theALCF’s resources for projects ofinterest to Computing Challenge (ALCC) program allocates 30% Early Science Program(ESP) allocations attheALCF. Percentage ofINCITE,ALCC, Director’s Discretionary, and INCITE 30% 10% ALCC 60% (ESP included) Discretionary

Science Highlights Shaping Future Supercomputing

Innovative and Novel Computational Early Science Program (ESP) Impact on Theory and Experiment Allocations through the Early Science Program (ESP) provide researchers with preproduction hours (between (INCITE) Program system installation and full production) on the ALCF’s ALCF resources are available to researchers as part next-generation, 10-petaflops IBM Blue Gene system. of the U.S. Department of Energy’s INCITE program. This early science period provides projects with a Established in 2003, the program encompasses high- significant head start for adapting to the new machine end computing resources at Argonne and Oak Ridge and access to substantial computational time. During this national laboratories. The INCITE program specifically shakedown period, users assist in identifying the root seeks computationally intensive, large-scale research causes of any system instabilities, and work with ALCF projects with the potential to significantly advance staff to help develop solutions. Two billion core-hours key areas in science and engineering. The program are allocated through ESP. encourages proposals from universities, other research institutions, and industry. Projects are awarded an INCITE allocation based on a peer review for scientific merit Discretionary Projects and computational readiness. The program continues Discretionary allocations are “start up” awards made to to expand, with current research applications in areas potential future INCITE projects so that they can achieve such as chemistry, combustion, astrophysics, genetics, computational readiness. Projects must demonstrate materials science, and turbulence. a need for leadership-class resources. Awards may be made year round to industry, academia, laboratories and others, and are usually between three and six months ASCR Leadership Computing in duration. The size of the award varies based on the Challenge Program (ALCC) application and its readiness/ability to scale; awards Open to scientists from the research community in are generally from the low tens of thousands to the low academia and industry, the ALCC program allocates millions of hours. resources to projects with an emphasis on high-risk, high-payoff simulations in areas directly related to the Department’s energy mission, national emergencies, or for broadening the community of researchers capable of using leadership computing resources. eport r 2011 alcf annual 11 12 2011 alcf annual report Biological Sciences Science Highlights such blood-related diseasesasbrain aneurysms, sickle-cellanemia,andcerebral malaria. picture of blood flow, andthe greatest hope for thedevelopment ofnew lifesaving treatments for which was namedaGordon BellPrize finalist. Multiscale models provide doctors withamore realistic Discretionary project, inwhichtheruns were conducted for Karniadakis’ Gordon Bellsubmission, visualizations ofthehuge amount ofdata generated. The research is an extension ofanearlierALCF vasculature. Theteam collaborated withtheALCF’s Mike Papka andJoeInsley on creating multiscale The simulations useMRIdata ofactualpatients for generating the geometry andmeshesofthe models that show theinterconnected workings ofmultiplescales ofthebrain’s blood vessels. team ledby George Karniadakis from Brown University isusingALCF resources to create multiscale multiple scales ofblood vessel networks work withinthebrain, bothaloneand together. 
A research To treat diseasesinvolving disruptions ofbloodflow tothe brain, doctors must first understand how PI: George Karniadakis ([email protected]) |Brown University INCITE Allocation: 50MillionHours of BrainBloodFlow Multiscale ModelsOfferMoreRealisticPicture computational designofantiviral proteins is feasible. on influenza The hemagglutinin. resultssuggest that de novo to designhigh-affinity binders tothe conserved stem region protein-protein interactions denovo andusedthemethod the team devised acomputational method for designing In anothereffort addressing thechallenges ofinfluenza, andtorsionside chainpacking, spaceminimization. optimization by structure combinatorial rebuilding, series ofstarting modelsare used to guideenergy from molecularreplacement solutions for eachofa approach inwhichelectron densitymaps generated from NMRspectroscopy data. They alsoinvented an limits ofprotein size that can bestructurally solved Magnetic Resonance (NMR)data that pushesthe new approach for computational analysis ofNuclear calculation anddesign.The researchers developed a several exciting breakthroughs inprotein structure Baker from theUniversity ofWashington hasachieved savings. Incollaboration withother researchers, David predictions of structure can yieldsignificant cost andtime novel anduseful functions. Faster andmore accurate the insights to necessary designnew moleculeswith function andinteractions ofbiomoleculesand provides Protein structure prediction is key to understanding the PI: David Baker ([email protected]) |University ofWashington INCITE Allocation: 30MillionHours Breakthroughs inProteinStructureCalculationandDesign

Shaping Future Supercomputing

Chemistry Performing the Largest Unstructured Large-Eddy Simulation of a Real, Full Combustion Chamber INCITE Allocation: 10 Million Hours PI: Thierry Poinsot ([email protected]) | CERFACS Researchers from CERFACS (the European Centre for Research and Advanced Training in Scientific Computation) have performed top-of-the-line quality simulations on highly complex cases in their goal towards the fully numerical modeling of an actual combustor. Led by Thierry Poinsot, their research is focused on Large Eddy Simulation (LES) of gas turbine engines with the inclusion of liquid phase phenomena. CERFACS has performed simulations and validation of two-phase flow experiments. In parallel, taking advantage of the ALCF leadership-class supercomputer, the researchers have performed the largest unstructured LES done to date of a commercial, full combustion chamber (330 million elements) on more than 16K cores. This simulation contributes to the validation of the LES approach when dealing with combustion instabilities. In these cases, the effects of mesh refinement are a highly critical point. A second mesh independency validation was performed, but this time it used a simpler, two-phase-flow single burner with three levels of refinement (4-, 8-, and 16-million elements). Research results are featured in a paper by Pierre Wolf, et al., “Using LES to Study Reacting Flows and Instabilities in Annular Combustion Chambers,” Flow Turbulence & Combustion, Springer Science & Business Media, published online in September 2011.

Potential Energy Surfaces for Simulating Complex Chemical Processes INCITE Allocation: 15 Million Hours PI: Don Truhlar ([email protected]) | University of Minnesota Large-scale electronic structure theory can provide potential energy surfaces and force fields for simulating complex chemical processes important for technology and biological chemistry. During the past year, University of Minnesota researchers carried out calculations to study structural and electronic factors in metallofullerenes and semimetal-fullerenes; the goal is to achieve a deeper understanding of the interactions that are important to their functional use as molecular electronic device components. From both a fundamental, theoretical point of view and a practical one, it is essential to find the energy minima and saddle points of low-lying electronic states and to map the topography of the seams of conical intersections in these fascinating systems.

ALCF staff worked with the project team to improve the performance of the key algorithm, which is the multi-configurational self-consistent field (MCSCF) method in the GAMESS code. This improvement enabled the researchers to perform large-scale calculations very efficiently. For example, a state-averaged complete active space self-consistent field (SA-CASSCF) single-point energy calculation of the Ca@C60 system (the endohedral complex consisting of a single Ca atom inside a “buckyball” fullerene) running on 8,192 cores on Intrepid, the Blue Gene/P supercomputer at the ALCF, takes only ~40 minutes. This time is twice as fast as the same calculation using the standard version of the GAMESS code. This difference is significant when one considers that typically researchers 2011 alcf annual report need to run hundreds of such calculations. The researchers expect an even greater improvement in the performance for bigger problems. 13 14 2011 alcf annual report Simulations ofDeflagration-to-Detonation Transition Science Highlights computer scienceand to produce advances in leading theExMproject National Laboratory is Wilde from Argonne task applications. Michael the demandsofmany- does nottypically scale to (SPMD) computations program multipledata today’s mainstream, single software designed for is challenging.System extreme-scale computers concurrent, interacting tasks. Runningsuch“many-task” applications efficiently, reliably, andeasilyon Exascale computers willenableanddemandnew problem-solving methods that involve many |ArgonnePI: MichaelWilde([email protected]) National Laboratory Director’s Discretionary Allocation: 3MillionHours Many-Task Applications forExtreme-Scale, ExM: SystemSupport Computer Science PI: Alexei Khokhlov ([email protected]) |TheUniversity ofChicago INCITE Allocation: 18MillionHours in ReactiveGases other industrial applications. friendly, cleanfuelthat hasthepotential to reduce thenation’s dependenceon Hydrogen isthemost abundant element intheuniverse. Itisanenvironmentally foreign oil,improve theenvironment, andboost oureconomy. However, hydrogen fuel presents asafety challenge. Thefuelisvery energetic andprone to accidents. In certain conditions, a hydrogen-oxygen mixture can react violently anddetonate. The process oftransition from slow burning to adetonation is called a deflagration- to-detonation transition or DDT. Predicting DDTin various combustion settings remains anoutstandingcombustion problem. theory Led by Alexei Khokhlov help make hydrogen aviablefuelalternative for powering vehicles and with TheUniversity ofChicago, theHighSpeedCombustion and computing comple to safely study how hydrogen burns.Ultimately, thisknowledge may DDT. Theseextremely detailed computer modelsallow researchers principles, reactive flow Navier-Stokes fluiddynamicsimulations of Detonation (HSCD)project usesALCF resources to perform first- C Ex

ompute node application Many-task tr T ask graph exec eme utor -scale x

Vi Global persisten Graph ex Graph ex r tual datastor storage ecutor ecutor

e t Ultra-fast task distribution Graph ex

ecutor Shaping Future Supercomputing

usable middleware that enable the efficient and reliable use of exascale computers for new classes of applications. The project is accelerating access to exascale computers for important existing applications and facilitating the broader use of large-scale parallel computing by new application communities for which it is currently out of reach. ExM project goals aim to achieve the technical advances required to execute many-task applications efficiently, reliably, and easily on petascale and exascale facilities. ExM researchers are developing middleware that will enable new problem solving methods and application classes on these extreme-scale systems. Researchers are evaluating ExM tools on three science applications—earthquake simulation, image processing, and protein/RNA interaction.

Earth Science How Can More Intricate Climate Models Help Curb Global Warming? INCITE Allocation: 40 Million Hours PI: Warren Washington ([email protected]) | National Center for Atmospheric Research (NCAR)

Global warming increases the occurrence of droughts, heat waves, wildfires, and floods. By improving the understanding of global warming’s impact, society can optimally address climate adaptation considerations. Advanced computation allows researchers Warren Washington (NCAR), Mark Taylor (Sandia National Laboratories), Joe Insley and Robert Jacob (Argonne National Laboratory) and Andy Bauer (Kitware) to develop more complex and intricate climate models. The vital information these improved models provide will help guide environmental policy. Using ALCF resources, the Climate Science Computational End Station (CCES) is advancing climate science through both aggressive model development activity and an extensive suite of climate simulations to correctly simulate the global carbon cycle and its feedback to the climate system, including its variability and modulation by ocean and land ecosystems. Researchers are testing a new, highly scalable method for solving the fluid dynamics of the atmosphere for use in future climate simulations. This method, called HOMME, is included in the CAM-SE model and has run with a resolution as high as ⅛th of a degree of latitude on more than 80,000 cores. Next, researchers will use CAM-SE to perform standard climate model benchmark simulations for comparisons with other models. They will also test new versions of the Community Earth System Model on the ALCF’s Blue Gene/P. 2011 alcf annual report 15 16 2011 alcf annual report Simulating Regional Climate at Convection-Permitting Simulating RegionalClimateatConvection-Permitting Science Highlights subchannel codes. provide validation data for lower-cost reactor designsimulations basedon reduced-order modelsor span abroad range ofmodelingscales. Animportant role for thecompute-intensive simulations is to coolant flow) innext-generation sodium-and gas-cooled nuclear reactors. The project simulations a low carbon footprint. Thisresearch isfocused onanalysis ofthermal-hydraulics (heat transfer and Advanced nuclearreactors are akey technology for providing power at areasonable cost andwith PI: Paul |Argonne Fischer([email protected]) National Laboratory INCITE Allocation: 25MillionHours HydraulicModeling Advanced ReactorThermal Energy Technologies broaden thecommunity ofresearchers capable ofusingleadership computing resources. change. Thiswilladvance scientists’ understanding of Earth’s climate for national emergencies and towards theoverarching goal to better quantify high-impact weather underclimate variabilityand to bethestandard for thenext generation of regional climate models represents anessential step and challenges ofsimulating regional climate at sufficient resolution to resolve individual Nested Regional Climate Model(NRCM) ontheALCF’s BlueGene/Pto demonstrate theopportunities Researchers at theNational Center for Atmospheric Research (NCAR)andtheALCF are usingthe PI: Greg Holland([email protected]) |National Center for Atmospheric Research Director’s Discretionary Allocation: 13MillionHours Resolution thunderstorm circulations. Supported by theNational Science Foundation and the WillisResearch Network, thework provides athorough test ofconvection-

North America. Theexperience gained at aresolution that isset permitting resolutiononclimate timescales withan emphasis onhigh-impactweather andclimate. Analysis will focus onphenomenathat have highsensitivity

to modelresolution, including water-snowpack assessments for themid-andwestern U.S., wind energy assessments, andhigh-impact events suchaswinter storms and hurricanes. Researchers are simulating both therecord-breaking 2005 hurricane seasonandwinter storms of2005/2006on Atlantic Basin andmost of covering theentire North grid spacingfor aregion Gene/P system, using4km Intrepid, theALCF Blue Shaping Future Supercomputing

During a worldwide validation effort for nuclear simulation codes, the Nuclear Energy Agency/ Organization for Economic Co-operation and Development (NEA/OECD) conducted a blind-benchmark study in 2010. Participants submitted computational results for velocity and temperature distributions in a T-junction (where streams of differing temperatures merge with limited mixing) that are important to understanding thermal-mechanical fatigue in light water reactors. Results were compared against experiments conducted for the benchmark. Staff worked with the project team from Argonne National Laboratory and Lawrence Berkeley National Laboratory to enable the Nek5000 code to scale up to 163,840 cores on Intrepid, the IBM Blue Gene/P (BG/P) system at the ALCF. This was leveraged in a workshop using the Juelich BG/P, where 19% of peak was realized on 262,000 cores. Runs from this project ranked first of 29 in temperature and sixth in velocity.

Large-Eddy Simulation for Green Energy and Propulsion Systems INCITE Allocation: 20 Million Hours PI: Umesh Paliath ([email protected]) | GE Global Research An understanding of the complex turbulent mixing noise sources for wind turbine airfoils and jet exhaust nozzles is critical to delivering the next generation of “green,” low-noise wind turbines and jet engines. Scientists at GE Global Research are developing and proving hi-fidelity direct- from-first-principles predictions of noise to characterize these hard-to-measure acoustic sources. A scalable, compressible Large Eddy Simulation (LES)–based Computational Aeroacoustics (CAA) solver is being used to study free-shear layer noise from jet exhaust nozzles and boundary layer noise sources from airfoils. GE’s LES strategy pushes application/validation to realistic conditions and scale, addresses fundamental physics and source characterization challenges, and extends capability to handle complex system interactions.

Powered by scalability improvements at the ALCF, earlier INCITE work demonstrated how this first-principles-based LES capability can transform product development. The research under way for jet engines will demonstrate (a) a numerical jet-noise rig that provides faster, cheaper, and additional details of the turbulent flow-fields than possible via physical experimentation, helping to explain how various chevron designs affect noise; and (b) the ability to capture propulsion- airframe integration effects such as the effect of pylon and jet-flap interaction. For wind turbines, simulations are in progress to demonstrate LES readiness in guiding low-noise design, validation of the complex scaling of turbulent self-noise, and the use of effective wall-models to enable large-span blade computations. The latter are needed to drive noise-reduction concepts beyond low-noise airfoil design. 2011 alcf annual report 17 18 2011 alcf annual report Understanding the Ultimate Battery Chemistry: Chemistry: Understanding theUltimateBattery Science Highlights experiments. the turbulent energy cascade. Themean andunsteady wall pressures alsocompare well with a point inthewake: theinertial range offrequencies with exponent -5/3isadecade long, revealing of thesimulations intheturbulent region isillustrated by the Power Spectral Densityof velocity at notorious for affecting CFD convergence; furthermore, the acoustic energy isminute. Theaccuracy fundamental problem oflow-Mach-number aero-acoustic computations. low Machnumbers Very are or free-slip conditions boundary inthelateral direction. Theobjective is to gaininsight into the grids (upto 60millioncells)andwidedomains(upto 16cylinderdiameters), witheither periodic resolving CFDapproaches to massively separated flows. The IBMBlueGene/P at theALCF allows fine bridges, heat exchangers, andbuildings. Thesimulations reflect the state oftheartinturbulence- PI: PhilippeSpalart([email protected]) |Boeing INCITE Allocation: 45MillionHours for Tandem Cylinders Detached-Eddy SimulationsandNoisePredictions Engineering in solvingthedischarge/recharge reactions at theelectrode-electrolyte interface inthefuture. Blue Gene/Psystem at theALCF. Theseresults willprovide useful insights for thedesignofLi/Air cells been obtained by usingthewell-parallelized DensityFunctional code installed onIntrepid, theIBM of thesesystems that are governed by electronic, structural, andthermodynamicproperties have cathode interfaces astheLi/Air cells’mainreaction products. So far, thefundamental understanding the peroxides (Li explored. Thus,at Argonne, thecurrent focus primary isonthebasicstudy oflithium oxides (Li reaction product inaLithium/Air battery, various properties ofLi increase thenumberoftimesbattery canbe cycled, andimprove the power density. Asthe key a highpercentage ofthetheoretical energy density, improve electrical efficiency of recharging, National Laboratory (ANL)is focusing onthisproblem. Currently, themainchallenges are to realize from IBMResearch, Vanderbilt University, OakRidge National Laboratory (ORNL),andArgonne this enormouspotential isa challengingscientificvery problem.Therefore, an interdisciplinary effort Ion battery ofthesame weight, makingpractical widespread useoffullyelectric cars. 
But realizing A rechargeable Lithium/Air battery can potentially store five to tentimesthe energy ofaLithium/ PI: JackWells ([email protected]) |OakRidge National Laboratory INCITE Allocation: 25MillionHours Rechargeable Lithium/Air 2 O 2 ) system bulkcrystals, (e.g., surfaces, andnanoparticles)that resides onthe

are typical inaircraft landing gear, windturbines, solid body, andtheresulting noise. These features separation, theimpingement ofturbulenceona and withNASA experiments. Itinvolves massive computational aero-acoustics (CAA)approaches, among computational fluiddynamics(CFD)and Cylinders, aprimetest case for detailed comparisons of theturbulenceandnoisegenerated by Tandem are conducting athorough numerical investigation Researchers ledby Dr. PhilippeSpalartat Boeing 2 O 2 remain elusive andneededto be

2 O), Shaping Future Supercomputing

A central issue of aero-acoustics for low-Mach-number flows is the accuracy of noise predictions made using only the solid-surface terms in the Ffowcs-Williams-Hawkings (FWH) integral. It is widely assumed that this convenient simplification suggested by Curle is adequate for airframe noise. The researchers’ work on a landing gear, however, failed to confirm this even at a Mach number as low as 0.115; this is also the case at Mach 0.1285, for the higher frequencies. This unexpected finding is being explored, including the effect of numerical resolution and dissipation, first inside the turbulence and then in the radiated noise, via the FWH equation. In addition, the researchers are computing the noise generated by so-called “long span bodies,” extruded from a two-dimensional section. A series of simulations and FWH noise computations carried out with different spanwise sizes of the domain ranging from 1.5D up to 16D (the size of the model in the NASA experiments) confirmed the validity of the noise correction proposed, thus allowing a significant reduction of the span size of the computational domain in LES, and therefore, of the overall cost of the simulations.

Prediction of Supersonic Jet Noise Using Large-Eddy Simulation ALCC Alloction: 60 Million Hours PI: Parviz Moin ([email protected]) | Center for Turbulence Research, Stanford University Noise generated by engine exhaust jets is one of the greatest barriers to the deployment of high- speed commercial aircraft. Very high jet velocities, typical of supersonic aircraft, heighten noise pollution levels in airport communities and accelerate hearing loss for crew on aircraft carrier decks. Modifying the nozzle geometry by adding chevrons (serrated geometric edges) to the nozzle lip is experimentally known to reduce jet noise, but the physical reasons for this are not well understood. Small-scale turbulence and shocks generated by chevrons strongly influence the development of the large turbulent eddies far downstream that control the production of noise. To capture these effects, compressible Large-Eddy Simulation using unstructured grids, which minimizes numerical dissipation and dispersion, is needed. For this purpose, the CharLES computational infrastructure, developed at the Center for Turbulence Research, is being used for these massively parallel simulations at the ALCF.

Using ALCF resources, researchers are working to fully resolve the effects of chevrons and heating on the noise produced in supersonic turbulent flow from a rectangular nozzle. For the first time, they are able to predict the far-field acoustic spectra from such a complex nozzle at all angles, including the broadband shock-associated noise. These simulations and comparisons with companion experiments are revealing new insight into how enhanced shear-layer mixing due to chevrons sustains itself and to address the question of the optimal chevron penetration angle from a jet-noise standpoint. Predictive, validated simulations such as these made possible by leadership computing facilities are critical to the design and viability of next-generation supersonic aircraft. 2011 alcf annual report 19 20 2011 alcf annual report Fusion Science Highlights terms offundamental units.Through theirsimulations to correctly interpret empirical measurements in cost overruns. Computingtheflow allows researchers help ensure itsoptimum performance andeliminate a better understanding of concrete’s flow properties to the flow ofmatter—known asrheology—will provide accurately measured inindustrial settings. This studyof particle densesuspensionslike concrete that can’t be supercomputer are enablingnew insights into how to measure andcontrol flow properties oflarge- Flow simulations ofthousandsirregularly shapedparticlesonthe ALCF’s BlueGene/P PI: WilliamGeorge ([email protected]) |National Institute of Standards and Technology INCITE Allocation: 25MillionHours into Concrete’s FlowProperties ALCF’s BlueGene/PEnablesNewInsights Materials Science the next-generation BlueGene/Qhave yieldedsignificant results. microturbulence-driven transport intokamaks. Efforts to scale the codes to thepetascale power of GTC-P andGTS codes, whichare highlyscalable particle-in-cellgyrokinetic codesused forsimulating they are constructed andoperational. Indealingwiththischallenge, researchers are deploying the simulations even more critical, sinceimprovements inITER-sized devices canonlybe validated after tokamaks are noteven one-third theradial dimensionofITER,makinghigh-fidelity predictive radius increases, to ITER-scale plasmas,expected to beinsensitive to size variations. Present-day from current scale experiments, whichexhibit anunfavorable scaling ofconfinement astheplasma PI: WilliamTang ([email protected]) |Princeton PlasmaPhysics Laboratory Early ScienceProgram Allocation: 50MillionHours at thePetascaleandBeyond Global SimulationofPlasmaMicroturbulence characteristics inmagnetically confined tokamak plasmas that spanthe range requires asystematic analysis of theunderlyingnonlinearturbulence confinement properties inadvanced tokamak systems, like This ITER.

Researchers are studying theinfluence ofplasmasize on extreme scale. this knowledge requires computational efforts atthe is anunderstanding ofturbulent transport losses.Acquiring in thedesignandoperation offuture fusionpower sources fusion, thepower source ofthesun.Ofutmost importance world’s energy needs,there isincreasing interest innuclear As scientists look for alternatives to fossilfuels tomeet the

Shaping Future Supercomputing

on the Blue Gene/P at the ALCF, researchers led by William George from the National Institute of Standards and Technology (NIST) have gained fundamental new insights into the yield stress of dense suspensions. Studies by the group indicate that particle contacts are an important factor in controlling the onset of flow in dense suspensions. Further, such interactions can be strongly influenced by the shape of the aggregates and lead to jamming effects that restrict the easy placement of concrete in forms. The researchers also discovered that for suspensions with a non-Newtonian fluid matrix, the local shear rates between aggregates strongly determine their rheological properties. These results have been validated against physical experiments with excellent agreement. Knowledge gained through this research has technological application in the building, coatings, water-treatment, food- processing, and pharmaceutical industries.

Better Catalytic System Designs through Nanoscale Research INCITE Allocation: 15 Million Hours PI: Jeff Greeley ([email protected]) | Argonne National Laboratory In life, sometimes to get the ball rolling, a little nudge is needed. In a chemical reaction, that nudge often comes in the form of a catalyst. A catalyst is an agent that speeds a chemical reaction along, or causes a reaction that otherwise would not have occurred. Platinum, a common catalyst, is used in catalytic converters to remove toxins from exhaust. Improved emissions control requires an understanding of how catalysts behave at their most fundamental atomic level—the nanoscale. Jeff Greeley at Argonne National Laboratory leads a team, including researchers from the Technical University of Denmark and Stanford University, that uses the supercomputing resources at the ALCF to study catalytic nanoparticles. Calculating catalysis on particles with as few as one thousand atoms takes several days on the world’s fastest . The process is so time intensive and the calculations are so complex, the research would be impossible without a leadership- class system like the ALCF’s Blue Gene/P. With access to the world-class computing resources needed to explore the behavior of catalysts at the nanoscale, Greeley and his team are paving the way for improved catalytic system designs with wide-ranging industrial applications. 2011 alcf annual report 21 22 2011 alcf annual report Reactive MDSimulationofShock-Induced Science Highlights reactions, suchas(π,π’)on reliably compute neutrino- states. UsingGreen’s FunctionMonte Carlo(GFMC),these calculations willallow researchers to response, theone-bodydensitymatrix, andtransition matrix elements between isospin-0and -1 This project aimsto calculate several fundamental properties ofthe PI: Steve Pieper([email protected]) |Argonne National Laboratory Early ScienceProgram Allocation: 110MillionHours Ab InitioReactionCalculationsfor Nuclear Structure simulations are being conducted usingtheINCITEallocation. featured theresearch asthemost important highlight inadecade. Currently, thenanobubblecollapse Department ofEnergy widely usedtheteam’s theCongressional earlierin findings budget, andSciDAC Sulfur stress corrosion cracking, usinganArgonne Director’s Discretionary allocation in2010.The with whichthesesimulations runonIntrepid results from theUSCgroup’s successful work onNickel- PI: Priya Vashishta ([email protected]) |University ofSouthernCalifornia INCITE Allocation: 45MillionHours Cavitation Damage to improve boththesafety andlongevity ofnuclearreactors. Theefficiency this nanobubblecollapse simulation is to understand molecularprocesses because itruns efficiently on163,840 cores, thefull system. The goalof mechanochemistry problem. The1-billion-atom simulation is feasible Blue Gene/Psystem at theALCF, to simulate andunravel thecomplex at theUniversity ofSouthernCalifornia (USC)are usingIntrepid, theIBM surface, Priya Vashishta andhiscolleagues, RajivKalia andAiichiro Nakano, To get amolecular-level understanding ofnanobubblecollapse nearasolid relieve thetensile stresses that cause SCCinthematerial. near asolidsurface, theresult isthecreation ofhigh-pressure areas that nanobubbles form, they create low-pressure regions, butwhenthey collapse the biggest reason thelifetime ofnuclear reactors isshortened. When benefits. Nanobubblesare used to prevent Stress Corrosion Cracking (SCC)— of thesurfaces ofmaterials. 
However, cavitation bubblesalsoprovide they collapse andhitasolid surface, andtherefore cause deterioration within theliquid.Thesecavities, or cavitation bubbles, cause stress when experiences rapid change inpressure that creates low-pressure cavities term degradation innuclearpower plants. Cavitation occurs whenaliquid erosion” ofcooling system components isasignificant mechanism for long- scientists, engineers, andthe general public.Amongmany factors, “cavitation Maintaining thesoundnessofnuclearreactors isamajorconcern for 12 12 C scattering, quasi-elastic electron scattering, andthe resultsof older C.

12 C 12 C nucleus:theimaginary-time

Shaping Future Supercomputing

0.10 Such properties are critical to a real understanding of the physics of nucleonic matter. Electron scattering experiments in the quasi-elastic regime, where the 0.08

) 0.06

dominant process is knocking a single nucleon out of the nucleus, are under way -3 12C (fm for a range of nuclei. The separation into longitudinal and transverse response P Density

ρ 0.04 allows study of the propagation of charges and currents, respectively, in the 0.02 GFMC – AV18+IL7 nucleus. The nontrivial changes from studies of the nucleon and deuteron to Experiment 0.00 larger nuclei require processes beyond one-nucleon knockout. Researchers 0 1 2 3 4 will compute the transition density matrices on a two-dimensional grid of r (fm) 7 the magnitudes of the initial and final positions and will make a partial wave 1+ 6 2 expansion of the angle between the two vectors. These matrices will be used 1– 5 2 as inputs for a variety of reaction calculations. To address Blue Gene/Q-related 3– issues for GFMC, researchers changed the format for some matrices from dense 4 2

(b) R-Matrix LJ to compressed, realizing a universal improvement on Blue Gene architecture. σ 3 Pole location Currently running on a mid-plane of Blue Gene/Q, the code shows one of the 2 highest per-node performance improvements from Blue Gene/P to Blue Gene/Q. 1

0 0 1 2 3 4 5 E (MeV) Physics c.m. Advancing the Understanding of Nuclear Structure INCITE Allocation: 15 Million Hours PI: James Vary ([email protected]) | Iowa State University Researchers from Iowa State University, Oak Ridge and Argonne national laboratories are using complementary techniques, including Green’s Function Monte Carlo, the No Core Shell Model, and Coupled-Cluster methods to perform ab initio calculations of both structural and reaction properties of light- and medium-mass nuclei. The calculations use realistic models of nuclear interactions, including both two- and three-nucleon forces. Their work could advance understanding of the triple- alpha burning reaction, which is essential to life on earth.

They also are exploring the role of the three-nucleon force in substantially heavier nuclei, as well as using Density Functional Theory (DFT) to calculate properties of nuclei across the entire range of nuclear masses. These DFT studies will help predict nuclear properties relevant to nuclear reactions such as neutron-nucleus reaction cross-sections and fission. This new understanding of nuclei has far- reaching implications, impacting the fields of energy and astrophysics. The researchers are conducting their calculations on the IBM Blue Gene/P (BG/P) at the ALCF and the Cray XT at Oak Ridge National Laboratory. The BG/P research team has completed ground-state C calculations—a key milestone. The ground state represents the best converged ab initio calculations of C ever. 2011 alcf annual report 23 24 tLifetime (hour) 10 0 2011 alcf annual report 2 4 6 8 1.0 x10 -5 Area1 (m 1.5 x10 Cosmic StructureProbesoftheDarkUniverse Science Highlights advance important concept work for next-generation “ultimate” storage rings. create complex particle-tracking simulations. Researchers willusea portionoftheir ALCCallocation to extreme computing power oftheBlueGene/P at the ALCF withtheAPS-developed code “elegant” to aperture (to ensure sufficient beamlifetime). To tacklethischallenge, researchers willpairthe provide both sufficient dynamicaperture (to ensure high-injection efficiency)and momentum Scientists at work ontheAPSupgrade are challenged withoptimizing thenonlineardynamics to PI: MichaelBorland([email protected])|Argonne National Laboratory ALCC Allocation: 36MillionHours for theAPSUpgradeandBeyond Direct MultiobjectiveOptimizationofStorageRingLattices the short-range force calculations. the high-resolution modulesiscurrently inprogress; thescaling properties of HACC donotdependon on thecurrent BG/Pacross runningonaBG/Qprototype thefullmachineandisnow system. Porting arriving at Argonne in2012.HACC’s medium-resolution infrastructure hasbeen tested for scalability Code) framework aimedat exploiting emerging supercomputer architectures suchastheIBMBG/Q The simulation program isbasedaround thenew HACC (Hardware/Hybrid Accelerated Cosmology will beanessential component ofDarkUniverse science for years to come. of-magnitude improvement over currently available resources. Thedatabase created from thisproject interpreting next-generation observations. This effort targets an approximately two- to three-orders- simulation suite covering approximately ahundred different cosmologies—anessential resource for volumes representative of state-of-the-art skysurveys. A key aspectoftheproject willbeamajor distribution ofmatter intheUniverse, resolving galaxy-scale mass concentrations over observational Laboratory isusingtheALCF to carry outsomeofthelargest high-resolution simulations ofthe however, remains amystery. Ateam ofresearchers ledby SalmanHabibat LosAlamosNational Dark energy anddarkmatter are thedominant components oftheUniverse. Theirultimate nature, |LosAlamosNationalPI: SalmanHabib([email protected]) Laboratory Early ScienceProgram Allocation: 12MillionHours * m) -5 2.0 x10 -5 100 150 200 Rank 50 scientific research. in animproved source ofhigh-energy, high-brightness, tunablex-rays for Without disruption to current operating modes,theupgrade will result a uniqueposition for enablingtime-resolved sciencewithhard x-rays. systems for dramatically reducing the x-ray pulselength, givingtheAPS compared to standard APSdevices. 
Theupgrade willalsoaccommodate will increase brightness by anorder ofmagnitudefor x-rays above 20keV this world-class resource. Theadditionoflongsuperconducting devices the APSwillreconfigure the facility’s magnets (its“lattice”) toenhance are usedby more than5,000scientists worldwide. Aplannedupgrade to Hemisphere are created by Argonne’s Advanced Photon Source (APS)and The brightest storage ring-generated x-ray beamsintheWestern


Simulating Laser-Plasma Interactions in Targets for the National Ignition Facility and Beyond
INCITE Allocation: 50 Million Hours
PI: Denise Hinkel ([email protected]) | Lawrence Livermore National Laboratory

Lawrence Livermore National Laboratory (LLNL) has been tasked with achieving ignition at the National Ignition Facility (NIF). An important aspect of the ignition campaign involves quantitative prediction of the level of laser backscatter in ignition targets. Mitigating laser backscatter is important because backscatter reduces the amount of input energy available to drive the fusion process. It can also alter implosion symmetry and preheat the ignition capsule through the generation of hot electrons.

Recent experimental results from the National Ignition Campaign at NIF show that backscatter occurs in the laser target where quads of laser beams are overlapped. The goal of these simulations is to quantify how overlapping beam quads impact laser backscatter. The simulations being conducted by LLNL researchers will generate scientific results that will have a major impact on the national ignition campaign—inertial fusion—as well as on the fundamental science of LPI. These state-of-the-art simulations are only possible because of the INCITE award allocated to Intrepid, the Blue Gene/P system at the ALCF.

Computing Resources

ALCF Resources for Breakthrough Science

The ALCF's major computing resources include the IBM Blue Gene/P systems Intrepid, Challenger, and Surveyor. Intrepid has a peak speed of 557 teraflops (TF) and is dedicated to production science. Challenger has a peak performance of 13.9 TF and is used to debug and develop. Surveyor is a nearly 14-TF, single-rack system used for tool and application porting, software testing and optimization, and systems software development. Soon to come: Mira, a powerful 10-petaflops IBM Blue Gene/Q system, will run programs at 10 quadrillion calculations per second.

Intrepid
Named Intrepid, this Blue Gene/P system has a highly scalable torus network, as well as a high-performance collective network that minimizes the bottleneck common in simulations on large, parallel computers. Blue Gene applications use common languages and standards-based MPI communications tools (a brief sketch of such a program follows the list below). Intrepid consists of:
- 40 racks;
- 1,024 nodes per rack;
- 40,960 quad-core compute nodes (163,840 processors);
- 640 I/O nodes;
- 7.6 PB storage, 88 GB/s;
- 80 TB of RAM.
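As a purely illustrative aside, the following is a minimal sketch of the kind of standards-based MPI program referred to above; it is not drawn from any ALCF project, and the data it reduces and the variable names are hypothetical.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        double local, total;

        MPI_Init(&argc, &argv);                 /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of processes */

        local = (double)rank;                   /* stand-in for a per-process result */

        /* combine the per-process values on rank 0 with a collective reduction */
        MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over %d ranks = %f\n", size, total);

        MPI_Finalize();
        return 0;
    }

Because the program relies only on the MPI standard, the same source can be compiled for Blue Gene/P, Blue Gene/Q, or a commodity cluster.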

ALCF’s Surveyor isaBlue Genesystem dedicated to Surveyor Peak performance is13.9TF. nodes (4,096processors) andtwo terabytes ofmemory. andtestdebugging runs.Challenger has1,024quad-core queue. Itisintended for small,short,interactive Challenger is the home for the prod-devel job submission Challenger and eight NVIDIAQuadro FX5600 GPUsintwo S4s. two 2.0 GHzquad-core Xeon servers with32GBRAM visualization. Ithas four compute nodes,eachwith Gadzooks istheALCF’s test anddevelopment system for Gadzooks     solution has Eureka —theALCF’s visualization anddata analytics capability invisualization anddata analysis. Eureka enablesbreakthrough levels ofproductivity and computing system asthebasegraphics buildingblock, processing units(GPUs).By usingtheNVIDIAvisual installations ofNVIDIAQuadro Plex S4 external graphics researchers employ Eureka, oneoftheworld’s largest To facilitate data andvisualization analytics atthe ALCF, Eureka 13.9 teraflops. and two terabytes ofmemory. Peak performance is Surveyor has1,024quad-core nodes(4,096processors) optimization, and systems software development. tool andapplication software porting, testing and     more than3.2TBofRAM. 111 teraflops, 200 Quadro FX5600 GPUs, 100 dualquad-core servers,

Mira
Mira, IBM's next-generation Blue Gene/Q supercomputer, will be delivered to the Argonne Leadership Computing Facility (ALCF) in 2012. Computational scientists and engineers look to the size, speed, memory, and disk storage capacity of this leadership-class system to propel innovation in science and technology. The 10-petaflops supercomputer will feature 48K 16-way compute nodes (768K processors) and 768 terabytes of memory. Like the ALCF's Intrepid, an IBM Blue Gene/P system, Mira will be made available to scientists from industry, academia, and government research facilities around the world.

Tukey
Being installed in 2012, Tukey will be twice as fast as its predecessor, Eureka. It will be based on NVIDIA Fermi GPUs with double-precision floating point, which will make it a powerful, general-purpose analysis and computation engine. The 200 GPUs are rated at an aggregate 103 teraflops in double precision. Tukey will share the Mira network and parallel file system, enabling direct access to data generated by Mira simulations.

Data Storage
The supercomputer's data systems consist of 640 I/O nodes that connect to 16 storage area networks (SANs) controlling 7,680 disk drives with a total capacity of 7.6 petabytes of raw storage and a maximum aggregate transfer speed of 88 gigabytes per second. The ALCF uses two parallel file systems—PVFS and GPFS—to access the storage (a brief illustrative sketch of parallel I/O appears at the end of this section). An HPSS automated tape storage system provides archival storage with 16,000 tapes in two 10,000-slot libraries. The tape drives have built-in hardware compression allowing compression ratios of between 1.25:1 and 2:1, depending on the data, giving an effective capacity of 16–24 petabytes.

Networking
Intrepid connects to other research institutions using a total of 20 Gb/s of public network connectivity. This allows scientists to transfer datasets to and from other institutions over fast research networks such as the Energy Science Network (ESNet) and the Metropolitan Research and Education Network (MREN).
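As a purely illustrative aside, the sketch below shows one common way applications use a parallel file system such as GPFS or PVFS: every MPI rank writes its own block of a single shared file with collective MPI-IO. The file name, block size, and data are hypothetical and not drawn from any ALCF application.

    #include <mpi.h>
    #include <stdlib.h>

    #define N 1024   /* doubles written per process; illustrative only */

    int main(int argc, char **argv)
    {
        int rank;
        MPI_File fh;
        double *buf;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        buf = malloc(N * sizeof(double));
        for (int i = 0; i < N; i++)
            buf[i] = rank + i;               /* stand-in for simulation output */

        /* every rank opens the same file on the parallel file system;
           "checkpoint.dat" is a hypothetical path */
        MPI_File_open(MPI_COMM_WORLD, "checkpoint.dat",
                      MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

        /* collective write: each rank writes its block at a disjoint offset */
        MPI_Offset offset = (MPI_Offset)rank * N * sizeof(double);
        MPI_File_write_at_all(fh, offset, buf, N, MPI_DOUBLE, MPI_STATUS_IGNORE);

        MPI_File_close(&fh);
        free(buf);
        MPI_Finalize();
        return 0;
    }

Collective writes to a single shared file are one typical way large runs exercise the I/O nodes and storage area networks described above.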

2011 ALCF Publications

Researchers who use Argonne Leadership Computing Facility (ALCF) resources are major contributors to numerous publications that document their breakthrough science and engineering. The refereed journal articles and conference proceedings represent research ventures undertaken at the ALCF through programs supported by the U.S. Department of Energy and Argonne National Laboratory. These documented efforts are tangible evidence of the ALCF's major contribution to key advances in a wide range of scientific disciplines, including astrophysics, biological sciences, chemistry, computer science and mathematics, earth sciences, engineering, materials science, nuclear physics, and plasma physics. This list contains 132 publications in descending order of their publication dates. To view their abstracts, visit the ALCF website (http://www.alcf.anl.gov/publications). An asterisk after a name designates an Argonne author.

Madduri, K., Ibrahim, K., Williams, S., Im, E.J., Ethier, S., Shalf, J., Oliker, L., "Gyrokinetic Toroidal Simulations on Leading Multi- and Manycore HPC Systems," SC '11 Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis, No. 23, ACM-New York, December 2011.

Dinan, J., Krishnamoorthy, S., Balaji, P., Hammond, J.R.,* Krishnan, M., Tipparaju, V., Vishnu, A., "Noncollective Communicator Creation in MPI," Recent Advances in the Message Passing Interface, Vol. 6960, SpringerLink, December 2011, pp. 282-291.

Solomonik, E., Demmel, J., "Communication-Optimal Parallel 2.5D Matrix Multiplication and LU Factorization Algorithms," Proceedings of the 17th International Conference on Parallel Processing - Volume Part II, Springer-Verlag Berlin, Heidelberg, December 2011.

Solomonik, E., Bhatele, A., Demmel, J., "Improving Communication Performance in Dense Linear Algebra via Topology Aware Collectives," SC '11 Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis, Seattle, Washington, ACM-New York, December 2011.

Sudarsan, R., Borrill, J., Cantalupo, C., Kisner, T., Madduri, K., Oliker, L., Simon, H., Zheng, Y., "Cosmic Microwave Background Map-Making at the Petascale and Beyond," ICS '11 Proceedings of the International Conference on Supercomputing, Tucson, Arizona, ACM-New York, December 2011, pp. 305-316.

Peterka, T.,* Ross, R.,* Nouanesengsey, B., Lee, T.-Y., Shen, H.-W., Kendall, W., Huang, J., “A Study of Parallel Particle Tracing for Steady-State and Time-Varying Flow Fields,” IPDPS '11 Proceedings of the 2011 IEEE International Parallel & Distributed Processing Symposium, Washington, DC, IEEE Computer Society, December 2011, pp. 580-591.

Peterka, T.,* Ross, R.,* Gyulassy, A., Pascucci, V., Kendall, W., Shen, H.-W., Lee, T.-Y., Chaudhuri, A., "Scalable Parallel Building Blocks for Custom Data Analysis," Large Data Analysis and Visualization (LDAV), 2011 IEEE Symposium, IEEE Xplore, December 2011, pp. 105-112.


Pei, J.C., Kruppa, A.T., Nazarewicz, W., "Quasiparticle Continuum and Resonances in the Hartree-Fock-Bogoliubov Theory," Physical Review C, Vol. 84, No. 2, American Physical Society, December 2011.

Sillero, J., Jiménez, J., Moser, R.D., Malaya, N.P., "Direct Simulation of a Zero-Pressure-Gradient Turbulent Boundary Layer up to Reθ=6650," Journal of Physics: Conference Series, Vol. 318, No. 2, IOP Publishing, December 2011.

Tallent, N., Mellor-Crummey, J., Franco, M., Landrum, R., Adhianto, L., "Scalable Fine-Grained Call Path Tracing," ICS '11 Proceedings of the International Conference on Supercomputing, Tucson, Arizona, ACM-New York, December 2011, pp. 63-74.

Li, X.J., Popel, A.S., Karniadakis, G.E., "Multiscale Modeling of Blood-Plasma Separation in Bifurcations," APS Physics, Vol. 56, No. 18, 64th Annual Meeting of the APS Division of Fluid Dynamics, Baltimore, Maryland, APS, November 2011.

Williams, S., Oliker, L., Carter, J., Shalf, J., "Extracting Ultra-Scale Lattice Boltzmann Performance via Hierarchical and Distributed Auto-Tuning," SC '11 Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis, No. 55, ACM-New York, December 2011.

DePrince, E., Pelton, M., Guest, J.R., Gray, S.K., "Emergence of Excited-State Plasmon Modes in Linear Hydrogen Chains from Time-Dependent Quantum Mechanical Methods," Physical Review Letters, Vol. 107, No. 19, American Physical Society, November 2011.

Meehl, G.A., Washington, W.M., Arblaster, J.M., Hu, A., Teng, H., Tebaldi, C., White III, J.B., Strand Jr., W.G., "Climate System Response to External Forcings and Climate Change Projections in CCSM4," Journal of Climate, AMS Journals Online, American Meteorological Society, December 2011.

Bazavov, A., Petreczky, P., "Study of Chiral and Deconfinement Transition in Lattice QCD with Improved Staggered Actions," Journal of Physics G, Vol. 38, No. 12, November 2011.

Lakshminarasimhan, S., Jenkins, J., Latham, R.,* Arkatkar, I., Gong, Z., Kolla, H., Chen, J., Ku, C.S., Ethier, S.H., Chang, S., Klasky, S., Samatova, N.F., "ISABELA-QA: Query-Driven Analytics with ISABELA-Compressed Extreme-Scale Scientific Data," SC '11 Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis, Seattle, Washington, ACM-New York, November 2011.

Preissl, R., Wichmann, N., Long, B., Shalf, J., Ethier, S., Koniges, A., "Multithreaded Global Address Space Communication Techniques for Gyrokinetic Fusion Applications on Ultra-Scale Platforms," SC '11 Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis, Seattle, Washington, ACM-New York, November 2011.

Mei, C., Sun, Y., Zheng, G., Bohm, E.J., Kale, L.V., Phillips, J.C., Harrison, C., "Enabling and Scaling Biomolecular Simulations of 100 Million Atoms on Petascale Machines with a Multicore-Optimized Message-Driven Runtime," SC '11 Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis, Seattle, Washington, ACM Digital Library, November 2011.

Bhagatwala, A., Lele, S., "Interaction of a Converging Shock Wave with Isotropic Turbulence," 64th Annual Meeting of the APS Division of Fluid Dynamics, Baltimore, Maryland, Bulletin of The American Physical Society, Vol. 56, No. 18, The American Physical Society, November 2011.

Shankar, S., Lele, S., "Numerical Investigation of Turbulence in a Reshocked Richtmyer-Meshkov Unstable Curtain of Dense Gas," 64th Annual Meeting of the APS Division of Fluid Dynamics, Bulletin of The American Physical Society, The American Physical Society, November 2011.

Hammond, J.R.,* Krishnamoorthy, S., Shende, S., Romero, N.A.,* Malony, A.D., "Performance Characterization of Global Address Space Applications: A Case Study with NWChem," Concurrency and Computation: Practice and Experience, Wiley Online Library, November 2011.

Gent, P., Danabasoglu, G., Donner, L.J., Holland, M.M., Hunke, E.C., Jayne, S.R., Lawrence, D.M., Neale, R.B., Rasch, P.J., Vertenstein, M., Worley, P.H., Yang, Z.L., Zhang, M., "The Community Climate System Model 4," Journal of Climate, Vol. 24, No. 19, AMS Journals Online, American Meteorological Society, November 2011, pp. 4973-4991.

Shirokov, A.M., Mazur, A.I., Mazur, E.A., Kulikov, V.A., Maris, P., Vary, J.P., "Light Nuclei in Ab Initio Approach with Realistic Inverse Scattering NN-Interaction," Conference on Nucleus-2010. Methods of Nuclear Physics for Femto- and Nanotechnologies, Vol. 75, No. 4, Springer-Link, November 2011, pp. 463-467.

Smith, C., Elliott, S., Maltrud, M., Erickson, D., Wingenter, O., "Changes in Dimethyl Sulfide Oceanic Distribution Due to Climate Change," Geophysical Research Letters, Vol. 38, No. 11, November 2011.

Meng, J., Morozov, V.,* Kumaran, K.,* Vishwanath, V.,* and Uram, T., "GROPHECY: GPU Performance Projection from CPU Code Skeletons," SC '11 Proceedings of the IEEE/ACM International Conference for High Performance Computing, Networking, Storage and Analysis, Seattle, Washington, November 2011.

Vishwanath, V.,* Hereld, M.,* Morozov, V.,* Papka, M.E.,* "Topology-Aware Data Movement and Staging for I/O Acceleration on Blue Gene/P Supercomputing Systems," SC '11 Proceedings of the IEEE/ACM International Conference for High Performance Computing, Networking, Storage and Analysis, Seattle, Washington, November 2011.

Huan, L., Caswell, B., Karniadakis, G., "Multiscale Modeling of Sickle Anemia Blood Flow by Dissipative Particle Dynamics," 64th Annual Meeting of the APS Division of Fluid Dynamics, American Physical Society, November 2011.

Sillero, J., Borrell, G., Jimenez, J., Moser, R.D., "Hybrid OpenMP-MPI Turbulent Boundary Layer Code Over 32K Cores," EuroMPI '11 Proceedings of the 18th European MPI Users' Group Conference on Recent Advances in the Message Passing Interface, SpringerLink, November 2011, pp. 218-227.

Ranjan, R., Pantano, C., Fischer, P.,* "Direct Simulation of Turbulent Heat Transfer in Swept Flow over a Wire in a Channel," International Journal of Heat and Mass Transfer, Vol. 54, No. 21-22, Elsevier B.V., October 2011, pp. 4636-4654.

Merzari, E., Pointer, W.D., Smith, J.G., Tentner, A., Fischer, P.,* "Numerical Simulation of the Flow in Wire-Wrapped Pin Bundles: Effect of Pin-Wire Contact Modeling," Science Direct, Elsevier B.V., October 2011.

Servat, H., Llort, G., Gimenez, J., Huck, K., Labarta, J., "Unveiling Internal Evolution of Parallel Application Computation Phases," Parallel Processing (ICPP), 2011 International Conference, Taipei City, IEEE Xplore, October 2011.

Miller, P., Li, S., Mei, C., "Asynchronous Collective Output with Non-Dedicated Cores," Cluster Computing (CLUSTER), 2011 IEEE International Conference, Austin, TX, IEEE Xplore, October 2011.

Hergert, H., Bogner, S., Furnstahl, R.J., Kortelainen, M., Maris, P., Stoitsov, M., Vary, J.P., "Testing the Density Matrix Expansion Against Ab Initio Calculations of Trapped Neutron Drops," Phys. Rev. Lett., Vol. 84, No. 4, The American Physical Society, October 2011.

Moreland, K., Oldfield, R., Marion, P., Jourdain, S., Podhorszki, N., Vishwanath, V.,* Fabian, N., Docan, C., Parashar, M., Hereld, M.,* Papka, M.E.,* Klasky, S., "Example of In-Transit Visualization," Proceedings of The 2nd International Workshop on Petascale Data Analytics: Challenges and Opportunities, co-located with the SC '11 IEEE/ACM International Conference for High Performance Computing, Networking, Storage and Analysis, Seattle, Washington, November 2011.

Rasquin, M., Marion, P., Vishwanath, V.,* Matthews, B., Hereld, M.,* Jansen, K., Loy, R.,* Zhou, M., Sahni, O., Fu, J., Liu, N., Carothers, C., Shephard, M., Bauer, A., Papka, M.E.,* Kumaran, K.,* Geveci, B., "Co-Visualization of Full Data and In Situ Data Extracts from Unstructured Grid CFD at 160K Cores," SC '11 Proceedings of the IEEE/ACM International Conference for High Performance Computing, Networking, Storage and Analysis, Seattle, Washington, November 2011.

Moreland, K., Kendall, W., Peterka, T.,* Huang, J., "An Image Compositing Solution at Scale," SC '11 Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis, Seattle, Washington, ACM-New York, October 2011.

Kendall, W., Wang, J., Allen, M., Peterka, T.,* Huang, J., Erickson, D., "Simplified Parallel Domain Traversal," SC '11 Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis, Seattle, Washington, ACM-New York, October 2011.

Kaman, T., Glimm, J., Sharp, D.H., "Uncertainty Quantification for Turbulent Mixing Simulations," International Conference of Numerical Modeling of Space Plasma Flows (Astronum 2010), San Francisco, CA, Astronomical Society of the Pacific, October 2011, p. 21.

Vishwanath, V.,* Hereld, M.,* and Papka, M.E.,* "Simulation-Time Data Analysis and I/O Acceleration on Leadership-Class Systems Using GLEAN," Proceedings of the IEEE Symposium on Large Data Analysis and Visualization (LDAV), Providence, Rhode Island, October 2011.

Maris, P., Vary, J.P., Liu, F., Sosonkina, M., Cockrell, C., Aronnax, M., Mundhe, R., "A Data Management System for Ab Initio Nuclear Physics Applications," HPC '11 Proceedings of the 19th High Performance Computing Symposia, San Diego, CA, ACM-New York, October 2011, pp. 40-47.

Kumar, S., Vishwanath, V.,* Carns, P.,* Summa, B., Scorzelli, G., Pascucci, V., Ross, R.,* Chen, J., Kolla, H., Grout, R., "PIDX: Efficient Parallel I/O for Multi-Resolution Multi-Dimensional Scientific Datasets," Cluster Computing (CLUSTER), 2011 IEEE International Conference, Austin, TX, IEEE Xplore, October 2011, pp. 103-111.

Insley, J.,* Hereld, M.,* Olson, E., Vishwanath, V.,* Norman, M., and Wagner, R., "Interactively Exploring Data over the Wide Area," Proceedings of the IEEE Symposium on Large Data Analysis and Visualization (LDAV), Providence, Rhode Island, October 2011.

Karniadakis, G.E., "Multiscale Modeling of Malaria," SIAM News, Vol. 44, No. 7, Society for Industrial and Applied Mathematics, September 2011.

Grinberg, L., Insley, J.A.,* Papka, M.E.,* "Visualization of Multiscale Simulation Data: Brain Blood Flow," TG '11 Proceedings of the 2011 TeraGrid Conference: Extreme Digital Discovery, Salt Lake City, Utah, ACM-New York, September 2011.

Zhang, C., Spanu, L., Galli, G., "Entropy of Liquid Water from Ab Initio Molecular Dynamics," The Journal of Physical Chemistry Letters, ACS Publications, September 2011.

Alexandrou, C., Gregory, E.B., Korzec, T., Koutsou, G., Negele, J.W., Sato, T., Tsapalis, A., "Δ(1232) Axial Charge and Form Factors from Lattice QCD," Physical Review Letters, Vol. 107, No. 14, American Physical Society, September 2011.

Pedersen Lohne, M., Hagen, G., Hjorth-Jensen, M., Kvaal, S., Pederiva, F., "Ab Initio Computation of the Energies of Circular Quantum Dots," Physical Review B, Vol. 84, No. 11, American Physical Society, September 2011.

Srinivasa, A., Sosonkina, M., Maris, P., Vary, J.P., "Dynamic Adaptations in Ab Initio Nuclear Physics Calculations on Multicore Computer Architectures," Parallel and Distributed Processing Workshops and PhD Forum, 2011 IEEE International Symposium, IEEE Xplore, Shanghai, September 2011, pp. 1332-1339.

Wolf, P., Staffelbach, G., Balakrishnan, R., Roux, A., Poinsot, T., "Using LES to Study Reacting Flows and Instabilities in Annular Combustion Chambers," Flow, Turbulence and Combustion, Vol. 88, No. 1-2, SpringerLink, September 2011, pp. 191-206.

Yuan, Z., Chen, H.P., Wang, W., Nomura, K., Kalia, R.K., Nakano, A., Vashishta, P., "Sulfur-Impurity Induced Amorphization of Nickel," Journal of Applied Physics, Vol. 110, No. 6, American Institute of Physics, September 2011.

Scherer, W.N., Yang, C., Jin, G., Mellor-Crummey, J., Adhianto, L., "Implementation and Performance Evaluation of the HPC Challenge Benchmarks in Coarray 2.0," Parallel & Distributed Processing Symposium, 2011 IEEE International, IEEE Xplore, September 2011, pp. 1089-1100.

Baker, A.H., Gamblin, T., Schulz, M., Yang, U.M., "Challenges of Scaling Algebraic Multigrid Across Modern Multicore Architectures," Parallel & Distributed Processing Symposium, 2011 IEEE International, IEEE Xplore, September 2011, pp. 275-286.

Tang, W., Desai, N.,* Vishwanath, V.,* Buettner, D.,* and Lan, Z., "Job Co-Scheduling on Coupled High-End Computing Systems," Proceedings of the 4th International Workshop on Parallel Programming Models and System Software for High-End Computing (P2S2), Taipei, Taiwan, September 2011.

Kumar, S., Vishwanath, V.,* Carns, P.,* Summa, B., Scorzelli, G., Pacucci, V., Ross, R.,* Chen, J., Kolla, H., and Grout, R., "PIDX: Efficient Parallel I/O for Multi-Resolution Multi-Dimensional Scientific Datasets," Proceedings of the IEEE International Conference on Cluster Computing, Austin, Texas, September 2011.

Vishwanath, V.,* Hereld, M.,* and Papka, M.E.,* "Position Paper on GLEAN," (Invited), U.S. Department of Energy Best Practices Workshop on File Systems and Archives, San Francisco, California, September 2011.

Navratil, P., Roth, R., Quaglioni, S., "Ab Initio Many-Body Calculation of the 7Be(p,γ)8B Radiative Capture," Physics Letters B, Vol. 704, No. 5, Elsevier Ltd., September 2011, pp. 379-383.

Meehl, G.A., Arblaster, J.M., Fasullo, T.J., Hu, A., Trenberth, K.E., "Model-Based Evidence of Deep-Ocean Heat Uptake During Surface-Temperature Hiatus Periods," Nature Climate Change, Vol. 1, Nature Publishing Group, September 2011, pp. 360-364.

Baran, A., Staszczak, A., Nazarewicz, W., "Fission Half-Lives of Fermium Isotopes within Skyrme Hartree-Fock-Bogoliubov Theory," International Journal of Modern Physics E (IJMPE), Vol. 20, No. 2, World Scientific Publishing Co., August 2011.

Kim, S.H., Lamm, M.H., "Reintroducing Explicit Solvent to a Solvent-Free Coarse-Grained Model," Physical Review E, Vol. 84, No. 2, American Physical Society, August 2011.

Taborda, R., Bielak, J., "Large-Scale Earthquake Simulation: Computational Seismology and Complex Engineering Systems," Computing in Science & Engineering, Vol. 13, No. 4, IEEE Xplore, August 2011, pp. 14-27.

Zhang, C., Wu, J., Gygi, F., Galli, G., "Structural and Vibrational Properties of Liquid Water from van der Waals Density Functionals," Journal of Chemical Theory & Computation, ACS Publications, August 2011.

Aktulga, H.M., Yang, C., Ng, E.G., Maris, P., Vary, J.P., "Large-Scale Parallel Null Space Calculation for Nuclear Configuration Interaction," Istanbul, Turkey, IEEE Xplore 2011, August 2011, pp. 176-185.

Ferraris, C.F., Martys, N., "Measurement Science Roadmap for Workability of Cementitious Materials: Workshop Summary Report," The National Institute of Standards and Technology (NIST), August 2011.

Nomura, K., Chen, Y., Kalia, R.K., Nakano, A., Vashishta, P., "Defect Migration and Recombination in Nanoindentation of Silica Glass," Applied Physics Letters, Vol. 99, No. 111906, American Institute of Physics, August 2011.

Staszczak, A., Baran, A., Nazarewicz, W., "Breaking of Axial and Reflection Symmetries in Spontaneous Fission of Fermium Isotopes," International Journal of Modern Physics E (IJMPE), Vol. 20, No. 2, World Scientific Publishing Co., August 2011.

Frenje, J.A., Li, C.K., Seguin, F.H., Casey, D.T., Petrasso, R.D., McNabb, D.P., Navratil, P., Quaglioni, S., Sangster, R.C., Glebov, V.U., Meyerhofer, D.D., "Measurements of the Differential Cross Sections for the Elastic n-3H and n-2H Scattering at 14.1 MeV by Using an Inertial Confinement Fusion Facility," Physical Review Letters, Vol. 107, The American Physical Society, August 2011.

Roth, R., Langhammer, J., Calci, A., Binder, S., Navrátil, P., "Similarity-Transformed Chiral NN+3N Interactions for the Ab Initio Description of 12C and 16O," Physical Review Letters, Vol. 107, American Physical Society, August 2011.

Navrátil, P., Quaglioni, S., Roth, R., "Ab Initio Theory of Light-Ion Reactions," Journal of Physics: Conference Series (Materials & Mechanisms of Superconductivity), Vol. 312, No. 8, IOP Publishing, Washington, DC, August 2011.

Schunck, N., Dobaczewski, J., McDonnell, J., Satula, W., Sheikh, J.A., Staszczak, A., Stoitsov, M., Toivanen, P., "Solution of the Skyrme-Hartree-Fock-Bogolyubov Equations in the Cartesian Deformed Harmonic-Oscillator Basis. (VII) HFODD (v2.49t): A New Version of the Program," Computer Physics Communications, July 2011.

Deal, C., Jin, M., Elliott, S., Hunke, E., Maltrud, M., Jeffery, N., "Large-Scale Modeling of Primary Production and Ice Algal Biomass within Arctic Sea Ice," Journal of Geophysical Research Oceans, Vol. 116, American Geophysical Union, July 2011.

Jensen, O., Hagen, G., Hjorth-Jensen, M., Alex Brown, B., Gade, A., "Quenching of Spectroscopic Factors for Proton Removal in Oxygen Isotopes," Physical Review Letters, Vol. 107, No. 3, The American Physical Society, July 2011.

Dickson, A., Maienschein-Cline, M., Tovo-Dwyer, A., Hammond, J.R.,* Dinner, A.R., "Flow-Dependent Unfolding and Refolding of an RNA by Nonequilibrium Umbrella Sampling," Journal of Chemical Theory & Computation, Vol. 7 (9), American Chemical Society, July 2011, pp. 2710-2720.

Mirin, A.A., Worley, P.H., "Improving the Performance Scalability of the Community Atmosphere Model," International Journal of High Performance Computing Applications, SAGE Publications, July 2011.

Paliath, U., Shen, H., Avancha, R., Shieh, C., "Large Eddy Simulation for Jets from Chevron and Dual Flow Nozzles," AIAA/CEAS, Portland, Oregon, American Institute of Aeronautics and Astronautics, June 2011.

Vishwanath, V.,* Hereld, M.,* Papka, M.,* Hudson, R., Jordan, G., Daley, C., "Towards Simulation-Time Data Analysis and I/O Acceleration of FLASH Astrophysics Simulations on Leadership-Class Systems Using GLEAN," SCIDAC, Denver, Colorado, July 2011.

Insley, J.,* Hereld, M.,* Olson, E., Papka, M.,* Vishwanath, V.,* Norman, M., and Wagner, R., "Interactive Large Data Exploration over the Wide Area," Proceedings of Teragrid 2011, Salt Lake City, Utah, July 2011.

Kleis, J., Greeley, J., Romero, N.A.,* Morozov, V.A.,* Falsig, H., Larsen, A.H., Lu, J., Mortensen, J.J., Dulak, M., Thygesen, K.S., "Finite Size Effects in Chemical Bonding: From Small Clusters to Solids," Springer Link-Catalysis Letters, Springer, June 2011.

Nishtala, R., Zheng, Y., Hargrove, P.H., Yelick, K.A., "Tuning Collective Communication for Partitioned Global Address Space Programming Models," Science Direct/Parallel Computing, Elsevier B.V., June 2011.

Hoefler, T., Snir, M., "Generic Topology Mapping Strategies for Large-Scale Parallel Architectures," 25th ACM International Conference on Supercomputing 2011, Tuscon, Arizona, ACM, June 2011.

Boghosian, B.M., Fazendeiro, L.M., "New Variational Principles for Locating Periodic Orbits of Differential Equations," Philos Transact A Math Phys Eng. Sci., National Center for Biotechnology Information, June 2011.

Muelder, C., Sigovan, C., Ma, K.L., Cope, J., Lang, S., Iskra, K., Beckman, P., Ross, R., "Visual Analysis of I/O System Behavior for High-End Computing," LSAP '11 Proceedings of the Third International Workshop on Large-Scale System and Application Performance, ACM, June 2011, pp. 19-26.

Jin, M., Deal, C., Elliott, S., Hunke, E., Maltrud, M., Jeffery, N., "Investigation of Arctic Sea Ice and Ocean Primary Production for the Period 1992," Science Direct, Elsevier B.V., June 2011.

DiMaio, F., Terwilliger, T.C., Read, R.J., Wlodawer, A., Oberdorfer, G., Wagner, U., Valkov, E., Alon, A., Fass, D., Axelrod, H.L., Das, D., Vorobiev, S.M., Iwa, "Improved Molecular Replacement by Density- and Energy-Guided Protein Structure Optimization," Nature, Vol. 473, NPG-Nature Publishing Group, May 2011, pp. 540-543.

Abe, T., Maris, P., Otsuka, T., Shimizu, N., Utsuno, Y., Vary, J.P., "Benchmark Calculation of No-Core Monte Carlo Shell Model in Light Nuclei," AIP Conference Proceedings, Vol. 1355, No. 73, American Institute of Physics, May 2011.

Fleishman, S.J., Whitehead, T.A., Ekiert, D.C., Dreyfus, C., Corn, J.E., Strauch, E.M., Wilson, I.A., Baker, D., "Computational Design of Proteins Targeting the Conserved Stem Region of Influenza Hemagglutinin," Science, Vol. 332, National Center for Biotechnology Information, May 2011, pp. 816-21.

Que, X., Yu, W., Tipparaju, V., Vetter, J., Wang, B., "Network-Friendly One-Sided Communication through Multinode Cooperation on Petascale Cray XT5 Systems," IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing, Newport Beach, CA, May 2011.

Maris, P., Vary, J.P., Navratil, P., Ormand, W.E., Nam, H., Dean, D.J., "Origin of the Anomalous Long Lifetime of 14C," Phys. Rev. Lett., Vol. 106, No. 20, The American Physical Society, May 2011.

Jansen, G.R., Hagen, G., Papenbrock, T., Hjorth-Jensen, M., "Toward Open-Shell Nuclei with Coupled-Cluster Theory," Physical Review C, Vol. 83, No. 5, May 2011.

Lee, E.-J., Chen, P., Jordan, T.H., Wang, L., "Rapid Full-Wave Centroid Moment Tensor (CMT) Inversion in a Three-Dimensional Earth Structure Model for Earthquakes in Southern California," Geophysical Journal International, John Wiley & Sons, Inc., May 2011.

Kurien, S., Smith, L.M., "Asymptotics of Unit Burger Number Rotating and Stratified Flows for Small Aspect Ratio," Science Direct, Elsevier B.V., June 2011.

Gent, P.R., Danabasoglu, G., Donner, L.J., Holland, M.M., Hunke, E.C., Jayne, S.R., Lawrence, D.M., Neale, R.B., Rasch, P.J., Vertenstein, M., Worley, P.H., Yang, Z.Y., Zhang, M., "NCAR Community Climate System Model Version 4," Journal of Climate, E-View, American Meteorological Society, May 2011.

Carns, P.,* Harms, K.,* Allcock, W.,* Bacon, C.,* Lang, S.,* Latham, R.,* Ross, R.,* "Understanding and Improving Computational Science Storage Access Through Continuous Characterization," 2011 IEEE 27th Symposium on Mass Storage Systems and Technologies, Denver, CO, IEEE Computer Society, May 2011, pp. 1-14.

Yoshii, K., Naik, H., Yu, C., Beckman, P.,* "Extending and Benchmarking the "Big Memory" Implementation on Blue Gene/P Linux," ROSS '11 Proceedings of the 1st International Workshop on Runtime and Operating Systems for Supercomputers, ACM New York, NY, May 2011.

Beane, S.R., Chang, E., Detmold, W., Joo, B., Lin, H.W., Luu, T.C., Orginos, K., Parreño, A., Savage, M.J., Torok, A., Walker-Loud, A., "Evidence for a Bound H Dibaryon from Lattice QCD," Physical Review Letters, Vol. 106, No. 16, American Physical Society, April 2011.

Sgourakis, N.G., Lange, O.F., DiMaio, F., Andre, I., Fitzkee, N.C., Rossi, P., Montelione, G.T., Bax, A., Baker, D., "Determination of the Structures of Symmetric Protein Oligomers from NMR Chemical Shifts and Residual Dipolar Couplings," Journal of the American Chemical Society, Vol. 133, ACS Publications, April 2011, pp. 6288-6298.

Wang, M., Ghan, S., Ovchinnikov, M., Liu, X., Easter, R., Kassianov, E., Qian, Y., Morrison, H., "The Impact of Climate, CO2, Nitrogen Deposition and Land Use Change on Simulated Contemporary Global River Flow," Geophysical Research Letters, Vol. 38, L08704, American Geophysical Union, April 2011.

Wang, M., Ghan, S., Easter, R., Ovtchinnikov, M., Liu, X., Kassianov, E., Qian, Y., Gustafson, W., Larson, V., Schanen, D., Yu, H., Khairoutdinov, M., Morrison, H., "The Multi-Scale Aerosol-Climate Model PNNL-MMF: Model Description and Evaluation," Geoscientific Model Development, Copernicus Publication, April 2011.

Vary, J.P., "Ab Initio Nuclear Theory—Progress and Prospects from Quarks to the Cosmos," Acta Physica Polonica B, Vol. 42, No. 3-4, recognized by the European Physical Society, March 2011.

Wu, X., Taylor, V., "Performance Characteristics of Hybrid MPI/OpenMP Implementations of NAS Parallel Benchmarks SP and BT on Large-Scale Multicore Supercomputers," ACM SIGMETRICS Performance Evaluation Review, Special Issue on the 1st International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computing Systems (PMBS10), Vol. 38, No. 4, ACM Digital Library, March 2011.

Zheng, G., Bhatele, A., Meneses, E., Kale, L.V., "Periodic Hierarchical Load Balancing for Large Supercomputers," The International Journal of High Performance Computing Applications, SAGE Publications, March 2011.

Zhang, C., Donadio, D., Gygi, F., Galli, G., "First Principles Simulations of the Infrared Spectrum of Liquid Water Using Hybrid Density Functionals," Journal of Chemical Theory and Computation, ACS Publications, American Chemical Society, March 2011.

Bhagatwala, A., Lele, S.K., "Interaction of a Taylor Blast Wave With Isotropic Turbulence," Physics of Fluids, Vol. 23, No. 3, American Institute of Physics, March 2011.

Hagos, S., Leung, L.R., "Moist Thermodynamics of the Madden-Julian Oscillation in a Cloud Resolving Simulation," Journal of Climate, Vol. 24, American Meteorological Society (AMS), April 2011.

Aoki, Y., Arthur, R., Blum, T., Boyle, P.A., Brommel, D., et al., "Continuum Limit Physics from 2+1 Flavor Domain Wall QCD," Phys. Rev. D, Vol. 83, No. 7, American Physical Society, April 2011.

Enkovaara, J., Romero, N.A.,* Shende, S., Mortensen, J.J., "GPAW - Massively Parallel Electronic Structure Calculations with Python-Based Software," Science Direct, Vol. 4, Elsevier B.V., April 2011, pp. 17-25.

Hu, Q., Feng, S., Oglesby, R.J., "Variations in North American Summer Precipitation Driven by the Atlantic Multidecadal Oscillation," Journal of Climate, Vol. 24, No. 21, American Meteorological Society, April 2011.

Nikolov, N., Schunck, N., Nazarewicz, W., Bender, M., Pei, J., "Surface Symmetry Energy of Nuclear Energy Density Functionals," Phys. Rev. C, Vol. 83, No. 3, American Physical Society, March 2011.

Kaman, T., Lim, H., Yu, Y., Wang, D., Hu, Y., Kim, J.-D., Li, Y., Wu, L., Glimm, J., Jiao, X., Li, X.-L., Samulyak, R., "A Numerical Method for the Simulation of Turbulent Mixing and its Basis in Mathematical Theory," CRC Press, Spain, March 2011.

Satula, W., Dobaczewski, J., Nazarewicz, W., Rafalski, M., "Microscopic Calculations of Isospin-Breaking Corrections to Superallowed Beta Decay," Phys. Rev. Lett., Vol. 106, No. 13, American Physical Society, March 2011.

Satula, W., Dobaczewski, J., Nazarewicz, W., Rafalski, M., "Isospin Mixing in Nuclei around N≃Z and the Superallowed β-decay," Acta Physica Polonica B, Vol. 42, No. 3-4, recognized by the European Physical Society, March 2011, p. 415.

Leung, L.R., Qian, Y., Duliere, V., Salathe, E.P., Zhang, Y., "ENSO Anomalies over the Western United States: Present and Future Patterns in Regional Climate Simulations," Climatic Change, SpringerLink, February 2011.

Bhatelé, A., Gupta, G.R., Kalé, L.V., Chung, I-Hsin, "Automated Mapping of Regular Communication Graphs on Mesh Interconnects," High Performance Computing (HiPC), 2010 International Conference, Dona Paula, IEEE Xplore, No. 1, February 2011.

Jensen, O., Hagen, G., Hjorth-Jensen, M., Vaagen, J.S., "Closed-Shell Properties of 24O with Ab Initio Coupled-Cluster Theory," Phys. Rev. C, Vol. 83, No. 2, American Physical Society, February 2011.

Shankar, S.K., Kawai, S., Lele, S.K., "Two-Dimensional Viscous Flow Simulation of a Shock Accelerated Heavy Gas Cylinder," Physics of Fluids, Vol. 23, No. 2, American Institute of Physics, February 2011.

Henrich, O., Stratford, K., Cates, M.E., Marenduzzo, D., "Structure of Blue Phase III Cholesteric Liquid Crystals," Phys. Rev. Lett., Vol. 106, No. 10, American Physical Society, March 2011.

Willcock, J., Hoefler, T., Edmonds, N., Lumsdaine, A., "Active Pebbles: Parallel Programming for Data-Driven Applications," 25th ACM International Conference on Supercomputing 2011, Tuscon, AZ, ACM, February 2011.

Saraswat, V., Kambadur, P., Kodali, S., Grove, D., Krishnamoorthy, S., "Lifeline-Based Global Load Balancing," PPoPP '11 Proceedings of the 16th ACM Symposium on Principles and Practice of Parallel Programming, Association for Computing Machinery, February 2011, pp. 201-212.

Bernard, C., DeTar, C., Di Pierro, M., El-Khadra, A.X., Evans, R.T., Freeland, et al., "Tuning Fermilab Heavy Quarks in 2+1 Flavor Lattice QCD with Application to Hyperfine Splittings," Phys. Rev. D, Vol. 83, American Physical Society, February 2011.

Hasenkamp, D., Sim, A., Wehner, M., Wu, K., "Finding Tropical Cyclones on a Cloud Computing Cluster: Using Parallel Virtualization for Large-Scale Climate Simulation Analysis," IEEE Xplore, February 2011.

Roberts, A., Cherry, J., Doscher, R., Elliott, S., Sushama, L., "Exploring the Potential for Arctic System Modeling," Journal of Climate, No. 92, American Meteorological Society, February 2011.

Zheng, G., Gupta, G., Bohm, E., Dooley, I., Kale, L.V., "Simulating Large Scale Parallel Applications Using Statistical Models for Sequential Execution Blocks," IEEE Xplore, Shanghai, January 2011, pp. 221-228.

Song, H., Bass, S., Heinz, U., "Viscous QCD Matter in a Hybrid Hydrodynamic+Boltzmann Approach," Phys. Rev. C, Vol. 83, The American Physical Society, February 2011.

Sayeeda, M., Magib, V., Abraham, J., "Enhancing the Performance of a Parallel Solver for Turbulent Reacting Flow Simulations," International Journal of Computation and Methodology, Vol. 59, No. 3, Taylor & Francis Group, January 2011, pp. 169-189.

Willcock, J., Hoefler, T., Edmonds, N., Lumsdaine, A., "Active Pebbles: A Programming Model for Highly Parallel Fine-Grained Data-Driven Computations," ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, February 2011.

Gahvari, H., Baker, A., Schulz, M., Yang, U., Jordan, K., Gropp, W., "Modeling the Performance of an Algebraic Multigrid Cycle on HPC Platforms," ICS '11 Proceedings of the International Conference on Supercomputing, ACM Digital Library, Association for Computing Machinery, January 2011, pp. 172-181.

Hagos, S., Leung, L.R., Dudhia, J., "Thermodynamics of the Madden-Julian Oscillation in a Regional Model with Constrained Moisture," Journal of the Atmospheric Sciences, American Meteorological Society, January 2011.

Wang, M., Ghan, S., Ovchinnikov, M., Liu, X., Easter, R., Kassianov, E., Qian, Y., Morrison, H., "Aerosol Indirect Effects in a Multi-Scale Aerosol-Climate Model PNNL-MMF," Atmospheric Chemistry and Physics, Copernicus Publication, January 2011, pp. 3399-3459.

Gicquela, L.Y.M., Gourdaina, N., Boussugea, J.F., Deniaua, H., Staffelbach, G., Wolf, P., Poinsot, T., "High Performance Parallel Computing of Flows in Complex Geometries," Science Direct, Vol. 339, No. 2, Elsevier B.V., January 2011, pp. 104-124.

Martys, N.S., George, W.L., Garboczi, E.J., Bullard, J.W., Satterfield, S.G., Terrill, J.E., "Advancing the Materials Science of Concrete with Supercomputers," Concrete International, Vol. 33, No. 1, American Concrete Institute, January 2011.

Elliott, S., "Sensitivity Analysis of an Ocean Carbon Cycle Model in the North Atlantic: An Investigation of Parameters Affecting the Air-Sea CO2 Flux, Primary Production and Export of Detritus," Ocean Science Discussions, Vol. 7, Creative Commons Attribute 3.0 License, January 2011, pp. C685-C691.

Gandolfi, S., Carlson, J., Pieper, S.,* "Cold Neutrons Trapped in External Fields," Physical Review Letters, Vol. 106, No. 1, The American Physical Society, January 2011.

2012 INCITE Projects

In 2012, 31 projects in diverse scientific disciplines were awarded approximately 732 million supercomputer core-hours on the Blue Gene/P at the ALCF through the Department of Energy's INCITE (Innovative and Novel Computational Impact on Theory and Experiment) Program.

Biological Sciences

Protein-Ligand Interaction Simulations and Analysis
PI: T. Andrew Binkowski, Argonne National Laboratory
10 Million Hours

Multiscale Blood Flow Simulations
PI: George Karniadakis, Brown University
50 Million Hours

Chemistry

Towards Breakthroughs in Protein Structure Calculation and Design
PI: David Baker, University of Washington
33 Million Hours

Simulations of Deflagration-to-Detonation Transition in Reactive Gases
PI: Alexei Khokhlov, The University of Chicago
20 Million Hours

Energetic Aspects of CO2 Absorption by Ionic Liquids from Quantum Monte Carlo
PI: William Lester, Jr., UC Berkeley
4 Million Hours

Large-Eddy Simulation of Two-Phase Flow Combustion in Gas Turbines
PI: Thierry Poinsot, European Center for Research and Advanced Training in Scientific Computation
10 Million Hours

Potential Energy Surfaces for Simulating Complex Chemical Processes
PI: Donald Truhlar, University of Minnesota
15 Million Hours

Computer Science

Scalable System Software for Performance and Productivity
PI: Ewing Lusk, Argonne National Laboratory
5 Million Hours

Fault-Oblivious Exascale Computing Environment
PI: Ronald Minnich, Sandia National Laboratories
10 Million Hours

Performance Evaluation and Analysis Consortium End Station
PI: Patrick H. Worley, Oak Ridge National Laboratory
10 Million Hours

Earth Science

CyberShake 3.0: Physics-Based Probabilistic Seismic Hazard Analysis
PI: Thomas Jordan, Southern California Earthquake Center
2 Million Hours

Large-Eddy Simulations of Contrail-to-Cirrus Transition
PI: Roberto Paoli, CERFACS
20 Million Hours

Climate-Science Computational Development Team: The Climate End Station II
PI: Warren Washington, National Center for Atmospheric Research
30 Million Hours

Energy Technologies

Optimization of Complex Energy System Under Uncertainty
PI: Mihai Anitescu, Argonne National Laboratory
10 Million Hours

Advanced Reactor Thermal Hydraulic Modeling
PI: Paul Fischer, Argonne National Laboratory
25 Million Hours

Atomistic Adaptive Ensemble Calculations of Eutectics of Molten Salt Mixtures
PI: Saivenkataraman Jayaraman, Sandia National Laboratories
10 Million Hours

Engineering

Direct Simulation of Fully Resolved Vaporizing Droplets in a Turbulent Flow
PI: Said Elghobashi, University of California—Irvine
20 Million Hours

Stochastic (w*) Convergence for Turbulent Combustion
PI: James Glimm, Stony Brook University
35 Million Hours

Adaptive Detached Eddy Simulation of a Vertical Tail with Active Flow Control
PI: Kenneth Jansen, University of Colorado—Boulder
40 Million Hours

Enabling Green Energy and Propulsion Systems via Direct Noise Computation
PI: Umesh Paliath, GE Global Research
45 Million Hours

Materials Science

Vibrational Spectroscopy of Liquid Mixtures and Solid/Liquid Interfaces
PI: Giulia Galli, University of California—Davis
25 Million Hours

High-Fidelity Simulation of Complex Suspension Flow for Practical Rheometry
PI: William George, National Institute of Standards and Technology
22 Million Hours

Probing the Non-Scalable Nano Regime in Catalytic Nanoparticles with Electronic Structure Calculations
PI: Jeffrey Greeley, Argonne National Laboratory
10 Million Hours

Petascale Simulations of Stress Corrosion Cracking
PI: Priya Vashishta, University of Southern California
45 Million Hours

Multiscale Modeling of Energy Storage Materials
PI: Gregory Voth, The University of Chicago and Argonne National Laboratory
25 Million Hours

Physics

Simulations of Laser-Plasma Interactions in Targets for the National Ignition Facility and Beyond
PI: Denise Hinkel, Lawrence Livermore National Laboratory
63 Million Hours

Toward Exascale Computing of Type Ia and Ib,c Supernovae: Verification and Validation of Current Models
PI: Donald Lamb, The University of Chicago
40 Million Hours

Turbulent Multi-Material Mixing in the Richtmyer-Meshkov Instability
PI: Sanjiva Lele, Stanford University
20 Million Hours

Lattice QCD
PI: Paul Mackenzie, Fermi National Accelerator Laboratory
50 Million Hours

Petascale Simulations of Inhomogeneous Alfven Turbulence in the Solar Wind
PI: Jean Perez, University of New Hampshire
10 Million Hours

Nuclear Structure and Reactions
PI: James Vary, Iowa State University
18 Million Hours

www.alcf.anl.gov The Leadership Computing Facility Division operates the Argonne Leadership Computing Facility — the ALCF — as part of the U.S. Department of Energy’s (DOE) effort to provide leadership-class computing resources to the scientific community.

About Argonne National Laboratory Argonne is a U.S. Department of Energy laboratory managed by UChicago Argonne, LLC under contract DE-AC02-06CH11357. The Laboratory’s main facility is outside Chicago, at 9700 South Cass Avenue, Argonne, Illinois 60439. For information about Argonne and its pioneering science and technology programs, see www.anl.gov.

Availability of This Report This report is available, at no cost, at http://www.osti.gov/bridge. It is also available on paper to the U.S. Department of Energy and its contractors, for a processing fee, from: U.S. Department of Energy Office of Scientific and Technical Information P.O. Box 62 Oak Ridge, TN 37831-0062 phone (865) 576-8401 fax (865) 576-5728 [email protected]

Disclaimer This report was prepared as an account of work sponsored by an agency of the United States Government. Neither the United States Government nor any agency thereof, nor UChicago Argonne, LLC, nor any of their employees or officers, makes any warranty, express or implied, or assumes any legal liability or responsibility for the accuracy, completeness, or usefulness of any information, apparatus, product, or process disclosed, or represents that its use would not infringe privately owned rights. Reference herein to any specific commercial product, process, or service by trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof. The views and opinions of document authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof, Argonne National Laboratory, or UChicago Argonne, LLC.

Argonne Leadership Computing Facility Argonne National Laboratory 9700 South Cass Avenue, Bldg. 240 Argonne, IL 60439-4847

Argonne National Laboratory is a U.S. Department of Energy laboratory managed by UChicago Argonne, LLC.

CONTACT  Argonne Leadership Computing Facility | www.alcf.anl.gov | [email protected] alcf-2011annrep-0412