DEIXIS 2007-2008: The DOE CSGF Annual



DEPARTMENT OF ENERGY COMPUTATIONAL SCIENCE GRADUATE FELLOWSHIP

Funded by: The Office of Science and the National Nuclear Security Administration's Office of Defense Programs

The Krell Institute
1609 Golden Aspen Drive, Suite 101
Ames, IA 50010
(515) 956-3696
www.krellinst.org/csgf


DEIXIS: Department of Energy Computational Science Graduate Fellowship

TABLE OF CONTENTS

2007-2008

Editor: Shelly Olsan
Copy Editor: Ron Winther
Design: julsdesign, inc.
Contributors: Jacob Berkowitz, Alan S. Brown, Victor D. Chase, Karyn Hede, Thomas R. O'Donnell

DEIXIS (ΔΕΙΞΙΣ), transliterated from classical Greek into the Roman alphabet and pronounced dāksis, means a display, mode or process of proof; the process of showing, proving or demonstrating. DEIXIS can also refer to the workings of an individual's keen intellect, or to the means by which such individuals, e.g. DOE CSGF fellows, are identified.

DEIXIS is an annual publication of the Department of Energy Computational Science Graduate Fellowship (DOE CSGF) program. DEIXIS illustrates work done at some multi-program DOE laboratories and highlights the DOE CSGF fellows and alumni. The DOE CSGF is funded by the Office of Science and the National Nuclear Security Administration's Office of Defense Programs.

DEIXIS, The DOE CSGF Annual, is published by the Krell Institute. The Krell Institute administers the Department of Energy Computational Science Graduate Fellowship program for the DOE under contract DE-FG02-97ER25308. The Krell Institute is a non-profit company that works in partnership with national science and technology research and education communities to help them achieve national priorities.

This issue of DEIXIS highlights research supported by the Department of Energy, Office of Science, Office of Advanced Scientific Computing Research. This research represents an important part of the overall efforts by Department of Energy labs that use computing to advance scientific discovery and engineering practice.

For additional information about the Department of Energy Computational Science Graduate Fellowship program, the Krell Institute, or topics covered in this publication, please contact: Editor, DEIXIS, The Krell Institute, 1609 Golden Aspen Drive, Suite 101, Ames, IA 50010, (515) 956-3696, www.krellinst.org/csgf

Copyright 2007 by the Krell Institute. All rights reserved.

PRACTICUM EXPERIENCES
4  The Summer Job That Stretches
4  Ethan Coon: The Dilemma of Scale | Columbia University / Los Alamos National Laboratory
8  Jeff Hammond: Applying Science | University of Chicago / Pacific Northwest National Laboratory
10 Jimena Davis: Math Tools for Life | North Carolina State University / Sandia National Laboratories
12 Stefan Wild: Green Silicon | Cornell University / Argonne National Laboratory

DEPARTMENT OF ENERGY LAB RESEARCH
14 Power Play | Lawrence Berkeley National Laboratory
18 From Flames to Fusion | Sandia National Laboratories
22 Measuring Up | Oak Ridge National Laboratory
26 Modeling an Earth-Shaking Event | Lawrence Livermore National Laboratory
30 Model Mixes Ice, Heat, Water and Salt to View the Ocean's Future | Los Alamos National Laboratory
34 Harnessing Hundreds of Thousands of Processors on the Petascale | Pacific Northwest National Laboratory
38 The New Paradigm of Petascale Computing | Argonne National Laboratory

ALUMNI PROFILES
42 Mary Biddy | BP plc
43 Eric Held | Utah State University
44 Ahmed Ismail | Sandia National Laboratories

45 Howes Scholars
46 Alumni Directory
54 Fellows Directory


FEATURE: PRACTICUM EXPERIENCES
The Summer Job That Stretches

THE DEPARTMENT OF ENERGY Computational Science Graduate Fellowship (DOE CSGF) supports the nation’s brightest science and engineering students, allowing them to concentrate on learning and research. The work of more than 200 DOE CSGF alumni has helped the United States remain competitive in a global economy.

The Dilemma of Scale

ETHAN COON
Columbia University | Los Alamos National Laboratory | Story by Jacob Berkowitz

As every rock climber knows, survival is in the details. You can't think of the 3,000-foot-tall rock face while you're clinging to it. The only thing that matters is the half-inch-wide granite lip onto which your feet are wedged.

It's a perspective born of years of rock climbing that Ethan Coon brought to his summer practicum at Los Alamos National Laboratory in 2006. The DOE CSGF fellow took a page from his climbers' manual to make more physically accurate computer simulations of Earth processes, putting hand-grip-level details into the big-picture view. And it turns out that the attention to detail that keeps climbers safe can help us better predict the consequences of earthquakes and squeeze more precious petroleum out of aging reservoirs.

As a student in applied mathematics at Columbia University, Coon works with those at the forefront of merging computational modeling and earth sciences. That includes his thesis adviser, Marc Spiegelman, an associate professor jointly appointed to Columbia's renowned Lamont-Doherty Earth Observatory and the Department of Applied Physics and Applied Mathematics.

In his doctoral research, Coon is creating a new mathematical model of the two-dimensional geometry of rock faults in order to improve computer models of earthquakes.

"Many standard physics-based computational models, including geological ones, are inherently bad at capturing small-scale effects," Coon says. "Like a postcard view, they can only show the broad strokes, while features smaller than these strokes are blurred out or completely missed."

It's not from lack of trying. Computational scientists continually work at putting the fine scale into the big picture to create more detailed models. But they face a perennial hurdle: the increased time and cost it takes to run these more complex simulations.

"Coon created an algorithm…to simulate the flow of two materials over time — an ability that's critical to simulating the interaction of oil and water in a reservoir."

[Photo caption: Left to right: Jeff Hammond, Stefan Wild, Ethan Coon and Jimena Davis.]

[Figure caption: Solutions for saturation of an oil-reservoir simulation. Water is pumped into an oil reservoir at the upper right corner to flush the reservoir and increase production. The oil-water mixture is pumped out at the lower left corner. On the left is a simulation run on the fine-scale mesh (57 by 217 grid points). On the right is an upscaled solution run on a much coarser mesh (8 by 28 grid points). The upscaled solution takes approximately 1/64th of the computation time of the fine-scale solution while capturing much of the fine-scale detail.]

Sidebar: BESIDES STUDIES AND RESEARCH at their home universities, each DOE CSGF participant must spend at least three months, usually in the summer, working with scientists at one of the Department of Energy's national laboratories. The practicum experience exposes the fellows to research distinctly different from their doctoral projects, stretching them with new challenges. Some students return to their routines with new perspectives that improve their work. Others may take their doctoral studies in entirely new directions. And many decide they want to spend their careers at the national laboratories. Regardless, all of them are changed by their summer experience.


To tackle this challenge, computational scientists turn to approximations: they average out fine-scale details over the big-picture view. In modeling geological formations, this could mean averaging out the tiny layers of sedimentary rock that compose a miles-long and miles-deep oil reservoir. Put into a climber's perspective, it's like averaging the number and locations of rocky hand-holds without knowing exactly where each one is — a situation that can render the model useless as a real-life guide.

"This is one of the big challenges in trying to model real geological systems," says David Moulton, an applied mathematician with the Mathematical Modeling and Analysis Group (or T-7) at Los Alamos and Coon's summer practicum adviser. "We obviously can't resolve everything, but we need to ensure that the approximations we introduce are useful. The key is to create approximations that capture the influence of unresolved features well enough to advance our understanding of the system and inform policy decisions."

For Moulton and Coon this means figuring out how to best extract an increasingly rare and valuable resource — oil — accurately, safely and reliably. In a new well, the oil is under pressure from the overlying rock and spurts out of the ground like a geyser. However, there's still substantial oil left when the geyser stops. How to get it out? Oil companies often pump in water or carbon dioxide to flush out the remaining oil.

"Being able to increase the efficiency of these oil reservoirs has numerous advantages. It means more energy, it makes business sense, and it also minimizes the environmental impact of more drilling," Coon says.

To do this, it's critical to know how the oil and water flow in the rock, and that means understanding and modeling the fine-level details.

At Los Alamos, Moulton leads development of a state-of-the-art computational tool for improved modeling of flows through porous rock structures.

"Our Multilevel Upscaling (MLUPS) approach to modeling flow through heterogeneous media doesn't make the jump to a coarse-scale model directly from the fine-scale one, as do most existing approaches," Moulton says. "Instead it uses a multi-resolution approach, building a hierarchy of models, each one a baby-step up from the previous one. And in this way it is more effective."

When Coon arrived at Los Alamos, MLUPS was developed to the point it could model a single phase, or material, in a static state 15 times faster than the main competing technique. Coon created an algorithm enabling MLUPS to simulate the flow of two materials over time — an ability that's critical to simulating the interaction of oil and water in a reservoir.

In collaboration with Scott MacLachlan, a former Los Alamos summer student and (as of January 2008) an assistant professor at Tufts University, Coon also upgraded MLUPS so it incorporates mass conservation of the fluids, ensuring that the amount of material is the same at the start and end of a model run. While not always critical in initial model development, mass conservation is essential to making technology that can be practically applied.

Coon programmed the additional abilities into MLUPS in Python, a powerful computer language, and then added the components to the existing code, which was developed in the legacy language Fortran. Coon was introduced to Python through a collaboration with staff at Argonne National Laboratory as part of his doctoral research.

The work used real-world data from the Society of Petroleum Engineers' Tenth Comparative Solution Project, a baseline data set that exists as a testbed for developing and testing computational models. Moulton says MLUPS is now at the proof-of-concept, desktop stage for two-phase modeling. He's confident it will demonstrate significant advantages over competing models.

Coon presented the results of his summer research as an invited speaker at the March 2007 meeting of the Society for Industrial and Applied Mathematics (SIAM). "It was a great experience. The other seven speakers in my session were all tenure-track professors, and I received great feedback," he says. While there, Coon made contacts with researchers at the University of Texas at Austin and the University of Bath in England.

The DOE CSGF practicum also was a chance for Coon to see the fine scale in Los Alamos' scientific community. "I really liked the ability to get out and talk with the engineers and scientists at the lab — people outside my immediate field — and to work on a larger, multidisciplinary project," says Coon, who notes that the T-7 group is known for its highly collaborative approach.

While in New Mexico, Coon took time to get what geologists call "ground truth" with rocks. He scaled Sandia Mountain outside Albuquerque and also hiked to the summit of Mount Wheeler, the state's highest peak, near Taos. Both provided stunning vistas of the surrounding deserts and forests.

Similarly, from his scientific perch at Los Alamos, Coon was able to see the grander-scale picture that ties together his work using computational models to understand how the Earth — and things in it — moves.

"Talking with David Moulton about my thesis work really helped because he had a very different perspective on it," Coon says. "In the end I saw that both problems come down to a question of how you incorporate levels of detail that you can't completely resolve in your model."

[Figure caption: Our multilevel upscaling algorithm constructs a self-consistent hierarchy of coarse-scale models for single-phase saturated flow, as well as the corresponding multiscale basis functions, without solving any local or global fine-scale problems. The multiscale basis function for the center of the domain, shown in the figure, was generated using this algorithm. The fine-scale structure is clearly visible in the surface, which accurately represents the influence of this structure on the flow.]

[Figure caption: A geostatistical realization of a strongly heterogeneous permeability field with variation (from light to dark) of 6 orders of magnitude.]

Sidebar: PROGRAM REQUIREMENTS
Students selected for fellowships agree to undertake study and research in computational science. The program of study must provide background in a scientific or engineering discipline, computer science, and applied mathematics. To be considered for the DOE CSGF, students must be U.S. citizens or permanent resident aliens working toward a Ph.D. at a United States university. Students applying for fellowships must be undergraduate seniors or in their first or second year of graduate study. Prior to the third year of the fellowship, fellows must complete a practicum assignment at a Department of Energy laboratory. Currently, over 20% of program alumni work or have worked at a Department of Energy laboratory.

Sidebar: DISCIPLINES PURSUED
The fellows involved in the DOE CSGF study widely varying subjects, but all of them use high-performance computing in their research. Fellows' disciplines include biophysics, chemistry, biochemistry, civil engineering, computer science, aerospace engineering, applied math, physics, bioengineering, aeronautical engineering, chemical engineering, bioinformatics, computational chemistry, and computational mechanics.
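MLUPS itself builds a whole hierarchy of intermediate models and multiscale basis functions, which is far beyond a magazine sketch. Purely as an illustration of the underlying idea of upscaling, and in Python (the language Coon used for his additions), the toy function below averages a fine-scale permeability field onto a much coarser grid; the grid sizes echo the 57-by-217 and 8-by-28 meshes in the figure above, while the geometric-mean averaging rule is a generic textbook choice, not the laboratory's algorithm.

```python
import numpy as np

def upscale_permeability(k_fine, coarse_shape):
    """Average a fine-scale permeability field onto a coarser grid.

    This is a single-jump upscaling (geometric mean per coarse cell),
    shown only to illustrate the idea; MLUPS instead climbs a hierarchy
    of intermediate models rather than jumping straight to coarse scale.
    """
    ny, nx = k_fine.shape
    cy, cx = coarse_shape
    ys = np.linspace(0, ny, cy + 1, dtype=int)
    xs = np.linspace(0, nx, cx + 1, dtype=int)
    k_coarse = np.empty(coarse_shape)
    for j in range(cy):
        for i in range(cx):
            block = k_fine[ys[j]:ys[j + 1], xs[i]:xs[i + 1]]
            # Geometric mean sits between the arithmetic average (flow
            # along layers) and the harmonic average (flow across layers).
            k_coarse[j, i] = np.exp(np.log(block).mean())
    return k_coarse

# Synthetic, strongly heterogeneous field spanning roughly 6 orders of
# magnitude, loosely echoing the geostatistical realization in the figure.
rng = np.random.default_rng(0)
k_fine = 10.0 ** rng.normal(loc=-12.0, scale=2.0, size=(217, 57))
k_coarse = upscale_permeability(k_fine, (28, 8))
print(k_coarse.shape, k_fine.size / k_coarse.size)  # (28, 8), ~55x fewer cells
```

A flow solve on the coarse field is far cheaper, but, exactly as Coon and Moulton describe, naive averaging risks blurring out the fine-scale features that control where oil and water actually move.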
Applying Science

JEFF HAMMOND
University of Chicago | Pacific Northwest National Laboratory | Story by Jacob Berkowitz

Every day, while making the 15-minute drive to his summer practicum at the DOE's Pacific Northwest National Laboratory (PNNL) in southeast Washington State, Jeff Hammond passed the Hanford Site. Formerly used to produce plutonium for America's first nuclear weapons, the DOE's 586-square-mile reservation now is the nation's most contaminated nuclear waste site and home to the world's largest environmental cleanup.

The Hanford Site threw a new light on Hammond's summer internship. The University of Chicago doctoral student and DOE Computational Science Graduate Fellow had wanted to be a scientist since he was a kid growing up in Seattle. He was entranced by the pure intellectual intrigue of math, chemistry and theoretical physics. Later he found a love for big — really big — computers, which he could use to do chemistry without breaking lab glassware. But Hanford was the other face of science, where equations and algorithms meet the pavement.

As fate would have it, during his time at PNNL Hammond boosted the capabilities of DOE's premier computational chemistry tool, NWChem, so that chemists, biologists, nuclear scientists and even astrophysicists can more effectively model not just nuclear wastes, but new medicines and even molecules light-years away.

"In academia, you don't have to think about the big picture all of the time — it's there when you write the grant but day-to-day it doesn't really matter," Hammond says. "But the DOE has a much more applications-driven mission. At PNNL I was inundated by the importance of applications, specifically in biology and nuclear sciences. When you're driving by tanks of nuclear waste each day that's quite the motivation to ensure that those tanks are still solid, decades from now, when your kids drive past."

Hammond, however, almost didn't make it to what he describes as "the best professional experience of my life." The job was to work on NWChem, but there was a problem: The 26-year-old Hammond wasn't familiar with the specific physics theory required for the project. And then there was the fact that he had almost no experience with computer coding for parallel supercomputers.

"No problem," said his adviser, Bert de Jong, a staff scientist in PNNL's Environmental Molecular Sciences Lab. "You can learn."

It was a heady introduction to the world of on-the-job training — one that Hammond relished.

"It was such an amazing experience to be in a work environment where, as opposed to academia, they actually have people whose job it is to help others do stuff they don't know how to do," Hammond says. "The computer people there, along with Bert and (PNNL staff scientist) Karol Kowalski, taught me basically everything I know about compilers, parallel computing, debugging and Fortran."

What he lacked in particulars, Hammond made up for in drive and broad quantum chemistry knowledge. After sadly leaving his new wife behind in Chicago, he settled into 90-hour work weeks fueled by lots of Mountain Dew.

His challenge was significant: Give NWChem the power to do highly accurate spectroscopy — to model the light emitted and absorbed by a molecule.

"Experimentalists study molecules spectroscopically," Hammond says. "But most computer codes can't calculate spectra directly. This is a big problem and makes the models less useful as predictive tools. What I did was give NWChem the ability to calculate the spectra of molecules based on their quantum chemical characteristics."

Hammond spent his first month at PNNL on a steep learning curve — reading scientific papers, deriving equations and talking with Kowalski to develop a solid working understanding of coupled-cluster theory. That's a well-established theoretical physics tool for approximately solving the Schrödinger equation; it enables scientists to predict a molecule's reactivity with other molecules or its response to a jolt from a laser beam.

"The chemistry part of computational chemistry really does come last," Hammond says. "Half the time you're thinking about theoretical physics, another 45 percent of the time is spent getting the code to work correctly, and only then do you spend about five percent of the time actually thinking about chemicals."

"The math that we did can model the interaction of molecules with lasers," Hammond says. "But with a little more work it can also model the molecule's interaction with magnetic fields. This means modeling nuclear magnetic resonance, one of the most important tools biologists use to study the structure of large, complex proteins."

With Kowalski, Hammond developed the specific code to implement the coupled-cluster theory in NWChem and subsequently spent a month debugging it so it could run on supercomputers, including the one located at DOE's national user facility, the Environmental Molecular Sciences Laboratory at PNNL.

When Hammond and others on the NWChem team did get to the chemistry, the results were impressive. They demonstrated the new code's ability to simulate the molecular properties of oxygen dichloride and carbon tetrachloride in solution. This is a crucial capability, because most pollutants and almost all biological molecules exist in solution, usually water.

The researchers also modeled the spectra of a group of complex organic molecules that includes polyaromatic hydrocarbons, or PAHs. They ran the largest calculations of this kind ever performed and were able to determine the spectroscopic properties of these molecules at a level of accuracy not previously possible.

"The massively parallel codes developed by Jeff enable the full integration of the coupled cluster codes with NWChem's existing molecular dynamics module, and this is truly cutting-edge," PNNL's de Jong says. "For the first time ever, this merger will enable us to model the properties of the molecule with the full inclusion of surrounding environment. We believe that this will have a profound effect on the interplay between theory and experiment."

"At the end of the day we're getting closer to giving biologists and nuclear scientists the ability to model things that are dangerous to study firsthand, or simply impossible to do so — for example, the interaction of uranium and other molecules in groundwater," Hammond says.

Hammond's contributions to NWChem will be available to the computational chemistry community in the next release of the software, expected in late 2007. The research resulted in at least four papers, with more to come as an ongoing collaboration expands in scope.

Back at the University of Chicago to continue his thesis work, Hammond says his time at PNNL and driving past Hanford "profoundly changed the way I'm going to do things in the future."

"By using existing methods and software, rather than starting from scratch, I can spend less time deriving equations and debugging code, and more time studying important chemical problems," he says.

In addition to his mostly theoretical thesis research, Hammond also wants to talk to a University of Chicago spectroscopist about PAHs in space. These organic molecules produced by dying stars are plentiful in interstellar space and are thought to be a key source of organic material for the origins of life. Hammond's work with NWChem could help astrobiologists identify PAHs in interstellar space — one more application of his science that could help all of us see the really big picture.

Sidebar: SCOPE OF PROGRAM
Since its inception, the DOE CSGF program has supported over 250 students studying at more than 50 universities throughout the U.S. Currently it supports over 60 students in 18 states. For over 15 years, the DOE CSGF program has encouraged the training of computational scientists by providing financial support to some of the most talented graduate students in the nation.
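The coupled-cluster machinery Hammond helped implement produces, among other things, discrete excitation energies and intensities; comparing them with what experimentalists measure usually means broadening those lines into a spectrum. The sketch below shows only that last, generic step: the energies and oscillator strengths are invented for illustration, and the Lorentzian line shape is a common convention, not the specific response-theory code added to NWChem.

```python
import numpy as np

def broadened_spectrum(energies_ev, strengths, grid_ev, gamma=0.1):
    """Sum Lorentzian lines to mimic an absorption spectrum.

    energies_ev : discrete excitation energies (eV)
    strengths   : corresponding oscillator strengths
    gamma       : half-width at half-maximum of each line (eV)
    """
    spectrum = np.zeros_like(grid_ev)
    for e, f in zip(energies_ev, strengths):
        spectrum += f * (gamma / np.pi) / ((grid_ev - e) ** 2 + gamma ** 2)
    return spectrum

# Hypothetical output of an excited-state calculation (not real data).
energies = np.array([3.9, 4.4, 5.1])      # eV
strengths = np.array([0.02, 0.35, 0.11])  # dimensionless
grid = np.linspace(3.0, 6.0, 601)
sigma = broadened_spectrum(energies, strengths, grid)
print(f"peak absorption near {grid[sigma.argmax()]:.2f} eV")
```

The hard part, as Hammond says, is everything upstream of this step: deriving and debugging the massively parallel coupled-cluster code that produces reliable excitation energies in the first place.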

[Figure caption: Cl2O solvated in CCl4. The oxygen dichloride molecule (center; oxygen is red) was modeled using quantum mechanics, while the solvent molecules were modeled using classical mechanics. This picture represents the system studied in a paper that appeared in the Journal of Physical Chemistry A on June 5, 2007: http://pubs.acs.org/cgi-bin/abstract.cgi/jpcafh/2007/111/i25/abs/jp070553x.html]

"The DOE has a much more applications-driven mission… When you're driving by tanks of nuclear waste each day that's quite the motivation to ensure that those tanks are still solid, decades from now, when your kids drive past."

Math Tools for Life

JIMENA DAVIS
North Carolina State University | Sandia National Laboratories – New Mexico | Story by Jacob Berkowitz

Seated on her apartment patio in Albuquerque, New Mexico, Jimena Davis looked west across the cactus-covered foothills and watched the setting sun splash pink light across the peaks of the Sandia Mountains. The sweet fragrance of desert sage wafted through the air. Compared to her usual surroundings, she might as well have been on a different planet, but she felt right at home.

It was the first time this Department of Energy Computational Science Graduate Fellow had visited or lived in a part of the United States outside the East Coast. The summer practicum at Sandia National Laboratories that brought her to Albuquerque was like the mountains — they pulled her out of herself and made her think creatively, both personally and professionally. As a result, she's using math to contribute to computational biology tools that are enhancing our ability to mimic cellular processes, and thus better understand what makes life tick — including us.

Fished In

As a high school student in Mullins, South Carolina — population 5,000, with an economy rooted in centuries-old tobacco farms — it was clear Davis had a passion and a penchant for math. Her guidance counselors recommended a career in engineering and she agreed. But at Clemson University her freshman applied math professor pulled her aside one day after class and asked: Have you considered majoring in math?

"I was surprised. I didn't know I could major in mathematical science," Davis recalls. "The high school guidance counselors had never mentioned it." Within weeks, Davis was a math major.

During her last semester at Clemson her math path was further paved when, at a workshop sponsored by SAMSI, the Statistical and Applied Mathematical Sciences Institute, she met North Carolina State applied mathematician H.T. Banks. He spoke passionately about the use of applied mathematics on a variety of problems in science and engineering. It was exactly what she was looking for: the ability to make a difference with a subject she loves.

At NC State, Davis was quickly hooked on a long-standing computational biology model that Banks, now her advisor, had been angling for: understanding the population dynamics of mosquitofish.

"I'd never heard of a mosquitofish," laughs Davis, who's now contributed as much to understanding their population biology as almost anyone on Earth.

In rice fields in the United States and other major rice-growing countries, including India, mosquitofish are used as a natural mosquito control tool, reducing the need for pesticides. The challenge for biologists is to understand the mechanisms behind the growth of mosquitofish populations as a way to maximize the fishes' impact on mosquitoes. For example, when seeding a field with fish, what's the optimum fish-size distribution to use?

"What we know is that there isn't a single growth rate to describe the entire mosquitofish population but, rather, a distribution of growth rates. Individuals, and individuals at different sizes, grow at different rates, just as with humans," Davis says. "Estimating the distribution of the growth rates is impossible without computationally efficient approximation methods."

This problem, she notes, is an example of her specialty: inverse mathematical questions. These are ones in which the final solution is known and the challenge is to determine the parameters, or factors, that led to it — in this case the distribution of growth rates among the fish.

With the mosquitofish population model, she's not only improved the approximation methods for the growth rate distribution but also tested their reliability.

"We've been able to go a step further and compute confidence bands for these probability distributions," she says. "This is exciting because these techniques aren't just applicable to this problem, but to a whole range of probability estimation problems."

[Figure caption: Simulated mosquitofish population density data, u(t,x), over time t and size x, generated with a Bi-Gaussian growth rate.]

[Figure caption, top: An example of results obtained from the inverse problem for the estimation of the growth rate distribution using the delta function approximation method with 32 delta functions. The known distribution is Bi-Gaussian and is shown as the solid blue line; the estimated probability distribution is shown in red. Also shown are the confidence bands (lower in green and upper in black) quantifying the uncertainty associated with this estimated distribution. Axes: intrinsic growth rate b versus probability distribution values.]

[Figure caption, bottom: An example of results obtained from the inverse problem for the estimation of the growth rate distribution using the spline-based approximation method with 16 piecewise linear spline elements. The known distribution is Bi-Gaussian and is shown as the solid blue line. The estimated probability distribution is shown in red, while the upper and lower confidence bands are shown in black and green, respectively.]

Sidebar: DOE CSGF HIGHLIGHTS
> Payment of tuition and required fees
> Yearly stipend of $32,400
> A $1,000 yearly academic allowance
> Matching funds of up to $2,500 for a computer workstation purchase
> Opportunity to complete a practicum working with scientists and researchers at a DOE laboratory
> Yearly fellows' conference with opportunities to meet other fellows and academic and government professionals
> Renewable up to four years
For more information: www.krellinst.org/csgf

Bacterium Biology

At Sandia, Davis teamed with her practicum advisor, Elebeoba May, to tackle a more complex inverse biological computation problem — one that took her from fish populations to bacterium biochemistry.

May is leading an ambitious project to develop a large-scale systems biology simulation platform that can computationally model whole-cell, multi-cellular and host-pathogen systems at the molecular level. Called the BioXyce Project, it's based on modeling the flow of proteins and genetic molecules as if they were current flowing through an electrical circuit. This large-scale, parallel computing biocircuit simulation is one of the holy grails of computational biology — to mathematically model the hundreds of interacting genetic and protein pathways that constitute an organism's metabolism.

Davis' task was to improve a component of the system dedicated to computationally modeling the E. coli bacterium's central metabolic system. Once again, as with the mosquitofish work, Davis was thrown into deep waters.

"I had to hit the biochemistry textbooks to get a handle on this project. But I wanted to step outside the box of what I'd been doing at NC State and do something totally different," she says.

While the specifics were completely new, Davis saw that the essence of the problem was her specialty: an inverse problem — the mathematical modeling of biological systems in which there's a significant amount of data uncertainty.

As a first contribution, Davis reduced the level of unknown factors in the model by using a reformulation technique developed at the lab to input known chemical reaction rates derived from an online database. Davis then coupled the reformulated model with DAKOTA, a computational optimization toolkit also developed at Sandia. Initial desktop tests of the revised model show it's effective in parameter estimation using simulated data.

"The E. coli work was completely new to her, but in three short months she was able to significantly contribute to our work," May says. "Using DAKOTA and empirical data, she examined various computational approaches for the estimation of multiple-rate parameters for the E. coli central metabolic system. The results demonstrate the feasibility of this approach in determining reliable model parameters for systems biology applications."

Now completing her thesis at NC State, Davis hopes to move from fish and bacteria to humans.

"I want to work on problems such as HIV-AIDS or cancer research that will really be beneficial to people — that will make a difference in people's lives," she says.

And while she's helping shape the big picture in computational biology, she also wants to go back to high schools to talk about how math can directly shape lives.

"I want to mentor young female students," Davis says. "To tell them 'Yes, you can major in math. Yes, you can be a mathematical scientist.'"

Green Silicon

STEFAN WILD
Cornell University | Argonne National Laboratory | Story by Jacob Berkowitz

Along with being home to Cornell University, Ithaca, New York, is famous for the Moosewood Restaurant, America's best-known vegetarian eatery. The Moosewood's fame is largely based on its spectacular and creative cookbooks, renowned for complex recipes that both taste and look great. The problem is that for busy cooks with 20 minutes to whip up something for a 5:30 family dinner, the recipes are impractical — fun to read, but often impossible to cook.

It's a conundrum Stefan Wild understands well — in a computational science sense — and he's rolling up his sleeves in the computational kitchen to help solve the problem. During a summer practicum at Argonne National Laboratory and in his doctoral research at Cornell, Wild, a Department of Energy Computational Science Graduate Fellow (DOE CSGF), is working to streamline cordon-bleu-complex computer models and make them stove-top fast. The results will have green impacts of another kind: environmental engineers use the computer models he's improving to find the best ways to stop the spread of pollution and to clean polluted groundwater.

Big, Not Fast

Even with advances in computer hardware and modeling software, computational scientists often face a central hurdle: the model bites off more than computers can easily chew. Many models are so detailed, contain so much information, and are so complex that they become technically unwieldy. The mathematical recipes, or algorithms, that make up the model are beautiful, but take too long to run to get timely results within budget.

"We call these computationally expensive models," Wild says. "They often give researchers headaches because the model evaluation expense makes many of their favorite analysis tools ineffective in practice."

Enter the world of model optimization. Model optimizers are applied mathematicians, like Wild, who are passionate about getting the most from cumbersome computational models.

"It's not about making the models less detailed," Wild says. "The original modeler is thinking of replicating some physical phenomenon, not the larger goal of using the model to improve some system. If they were to make the model faster, they'd try and make it simpler. What we're doing is looking for mathematical ways to keep the detail, but boost the speed."

For Wild, optimization is about bringing math to the people — to turn an unusable model into a dinner-table standard.

It's why he sought out his doctoral advisor, Cornell engineering professor Christine Shoemaker. She's a world leader in the application of sophisticated computations to solve environmental problems and, like Wild, takes a "math with a mission" approach. It's also why Wild's DOE CSGF practicum stint at Argonne National Laboratory, outside of Chicago, was so valuable.
The Laboratory for Advanced Numerical Simulations in Argonne's Mathematics and Computer Science Division is a world leader in optimization technologies. The lab's Toolkit for Advanced Optimization (TAO) is a collection of high-quality, high-performance codes, primarily for distributed computing applications, used by hundreds of researchers within DOE, industry and academia in the United States and beyond.

Wild spent the summer of 2006 at Argonne, working with practicum advisor Jorge Moré to create a new code for the optimization toolkit: an algorithm specifically designed for engineers with computationally expensive models.

"The optimization technique that Stefan is developing is different from most," says Moré, an Argonne staff scientist who develops algorithms and software for large-scale optimization problems, such as modeling nuclear energy production. "Most of the optimization techniques that we're currently using require additional information from the user. The technique that Stefan's developing doesn't require additional inputs, and this makes it much simpler and more user-friendly."

Computationally expensive models involve as many as dozens of parameters, or variables. For example, in assessing the best way to clean contaminated groundwater in on-site remediation using wells, pumps and purifying techniques, the parameters include the rate of pumping, the changing levels of contamination, and groundwater movement in response to the pumping.

Some computationally expensive models can involve a hundred or more parameters. The more parameters, the more expensive and complex the model becomes, since each parameter requires its own algorithm, or part of the simulation code, that must interact with all the other parts. This boosts the time it takes to run the model on a computer, often making it impractical to use.

"What we've done," Wild says, "is to build a faster mathematical surrogate that can replace key bottlenecks in expensive models and make them computationally inexpensive to evaluate."

Wild and Moré's optimization algorithm is performing well in a Matlab implementation, and the two are continuing to finesse the code to solve implementation details and make it user-ready for addition to TAO. "It's already beating the competition," Moré says.

Nowhere is the payoff clearer than in the realm of environmental engineering. There are tens of thousands of contaminated groundwater sites in the United States, from EPA-designated Superfund sites to smaller ones, involving pollutants from radioactive wastes to pesticides and petroleum residues. Remediating these sites often costs tens of millions of dollars and involves decades of work. Optimizing the cleanup approach before starting can save years of work and millions of dollars.

For Wild, it's an important step toward bringing applied math to the environmental engineering community. He says one of the most rewarding aspects of the practicum at Argonne was realizing the enormous impact of optimization codes.

"Argonne's Toolkit for Advanced Optimization is a facilitator for important science around the world, and it felt great contributing to this," he says.

When Wild presented his preliminary results at the Society for Industrial and Applied Mathematics (SIAM) meeting on Computer Science and Engineering in September 2006, he was approached by scientists from the United States Geological Survey interested in applying the optimization techniques to their models. It was a heartening response — one that told Wild he had the right recipe for long-term success.

[Figure caption: When constructing interpolation surrogates, one often encounters a battle between the optimization (points suggested by the surrogate) and the numerical conditioning of the resulting interpolation set. The optimizer's goal is to make good progress while respecting numerical stability concerns.]

[Figure caption: Shown is a quadratic approximation (transparent) of a function (opaque) using only 4 function values. The quadratic "surrogate" is easy to evaluate and hence can be efficiently used to suggest a new point (the square) at which the expensive function should be evaluated.]
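The second figure caption describes the heart of the approach: fit a cheap quadratic "surrogate" through a handful of expensive function values, then let the surrogate suggest where to evaluate next. A one-variable sketch of that idea in Python (with a made-up "expensive" function; the real algorithm works in many dimensions and carefully manages the geometry and conditioning of the interpolation set) is:

```python
import numpy as np

def expensive_f(x):
    """Stand-in for a costly simulation output (illustrative only)."""
    return (x - 1.3) ** 2 + 0.2 * np.sin(5 * x)

# A few evaluations of the expensive function (cf. "4 function values").
xs = np.array([0.0, 0.5, 1.0, 2.0])
ys = np.array([expensive_f(x) for x in xs])

# Fit a quadratic surrogate q(x) = a*x^2 + b*x + c by least squares.
a, b, c = np.polyfit(xs, ys, deg=2)

# The surrogate is cheap to minimize; its minimizer suggests where the
# expensive function should be evaluated next (guarding against a fit
# whose vertex is a maximum rather than a minimum).
if a > 0:
    x_next = -b / (2 * a)
else:
    x_next = xs[np.argmin(ys)]
print(f"surrogate suggests evaluating near x = {x_next:.3f}")
print(f"expensive function there: {expensive_f(x_next):.3f}")
```

In practice the surrogate is rebuilt as new points arrive, so the optimizer spends its budget of expensive simulations only where they are most informative.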
DOE LAB RESEARCH
Lawrence Berkeley | Sandia | Oak Ridge | Lawrence Livermore | Los Alamos | Pacific Northwest | Argonne

Power Play
NERSC Makes High-Performance Computing Accessible
By Thomas R. O'Donnell

THE PERSONAL COMPUTERS in Lawrence Pratt’s laboratory weren’t cutting it. His research on the structures and interactions of lithium compounds was hindered because the Pentium 4-type machines he uses at Fisk University in Tennessee couldn’t keep up with the demands of modern computational chemistry.

His work finally kicked into high gear with a grant of 150,000 high-performance computer processor hours — and the help of the National Energy Research Scientific Computing (NERSC) Center, based at the Department of Energy's Lawrence Berkeley National Laboratory (LBNL) in California.

"I use every last minute" of computer time, says Pratt, who got onto the powerful NERSC machines through the DOE's Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program. He's since qualified for another grant of 150,000 processor hours, and "I'm burning through it like crazy, but I'm also publishing a lot of papers — five so far this year."

As INCITE projects go, Pratt's is small. Most INCITE projects, which hold potential for major scientific breakthroughs, are awarded millions of processor hours — but they consume up to 20 percent of the hours available on NERSC's massively parallel computers, says center director Horst Simon. The remaining 80 percent is divided among 300 or so projects, each using tens of thousands to millions of processor hours per year. A small amount of time is set aside for "startups," or researchers who are still preparing their software for massively parallel processing.

Simon would be happy to have more such projects use NERSC computers as a gateway to high-performance computing. In particular, he's working to provide more students — especially DOE Computational Science Graduate Fellowship (DOE CSGF) participants — more opportunities to try their programs. "We want to get students interested in using NERSC or any of the other DOE computational resources, so they have a positive experience and become used to integrating scientific computing into their work," especially after graduation, says Simon, who is leaving his post soon to concentrate on other roles at the Berkeley lab.

The center is a perfect place for a first taste of science on massively parallel computers. For more than 30 years, it's been DOE's main production center for scientific computing, and it hosts some of the department's largest, fastest systems for unclassified research. More than 2,500 users from dozens of universities, private research institutions and DOE laboratories work on around 300 projects each year. Yet users rarely visit the NERSC facility. Most connect to and use center computers via ESnet, DOE's high-speed network, and the Internet. Their work produces mountains of results — for 2006, researchers published more than 1,400 papers related to calculations on NERSC computers.

The NERSC Center's role as a DOE service facility means the projects running on its computers cover virtually every strategic theme pursued by DOE and its Office of Science — "Everything from astrophysics down to nanoscience," Simon says. Fusion energy, materials science, chemistry, climate, genomics, computational biology, applied math and computer science are just some of the disciplines with research on NERSC Center machines. "Our mission is really to be the high-end production resource for the Office of Science, so general purpose and diverse applications have been part of our mission," Simon adds.

[Figure caption: Accelerating a thermonuclear flame to a large fraction of the speed of sound (possibly supersonic) is one of the main difficulties in modeling Type Ia supernova explosions, which likely begin as a nuclear runaway near the center of a carbon-oxygen white dwarf. The outward-propagating flame is unstable, which accelerates it toward the speed of sound. Bell et al. investigated the unstable flame at the transition from the flamelet regime to the distributed-burning regime through detailed, fully resolved simulations. At the low end of the density range, the instability dominated the burning. The image illustrates a carbon mass fraction for a 384 cm wide, 6.67 x 10^6 g cm^-3 C/O flame shown every 1.6 x 10^-3 s until 8.12 x 10^-2 s. The fuel appears red (carbon mass fraction = 0.5), and gravity points toward increasing y. At this low density, the instability dominates over the burning, and a large mixed region develops. Image and caption courtesy of www.nersc.gov]

In-house expertise is part of what makes the NERSC Center popular with researchers. The staff works with computational scientists to tune applications for the best performance, visualize their results and make their research more effective. There's a strong culture focused on assisting users, and the staff is experienced and stable, Simon says. The center stages regular training sessions, encourages user communities to exchange information, and hosts databases of user questions.

"If there's a difficult project that NERSC staff can work with the scientific users on, it often turns into a scientific collaboration," with the staff member listed as a joint author on research, Simon adds.

Pratt, who scaled his mathematical models up from the single processor in his desktop computers to eight processors, says NERSC experts were helpful. Without access to high-performance computing, "There are projects I would not have been able to complete," he says. "We were able to find out a lot about the chemistry through computational mechanisms that would not have been easily obtained by experiment."

Large allocations of NERSC computer time generally are awarded competitively through annual requests for proposals, with DOE program managers making the decisions. Computing experts at NERSC, Argonne National Laboratory, Oak Ridge National Laboratory, and Pacific Northwest National Laboratory judge whether the codes and algorithms described in the proposals are ready to run on massively parallel machines. Peer review panels scrutinize the proposals for their impact on science. "This is sort of self-selecting, because the people who apply know what the reviewers are looking for," Simon says.

In the past, requests often totaled more than 10 times the amount of available time. Now the deployment of ever-more-powerful computers allows NERSC to meet about half of the requests. While NERSC seeks to accommodate projects and investigators with little or no background in parallel processing or high-end computing, those are becoming rare as parallel machines become ubiquitous.

The process and facilities may seem daunting, but it's not hard to get a foot in the door. New users who want to try their codes or develop new ones on NERSC machines can apply online for a startup allocation. Startups must meet the Office of Science mission and require high-performance computing.

NERSC is making it even easier for DOE CSGF fellows to get on its machines. It's allocating 40,000 to 50,000 startup hours to between 50 and 70 projects fellows put forward. The allocations will let students see how well their codes scale in parallel, and enable them to work with NERSC consultants to improve their projects and code performance.

The idea is to get fellows to look beyond the computational resources they have at hand through their major professor or department, says David Skinner, leader of NERSC's Open Software and Programming Group and coordinator for the SciDAC Outreach Center (see sidebar). Some students may have discovered NERSC Center computer resources if their advisor has used them, but the goal is to attract even those who have not worked with the center before.

Fellows can go to a Web page and fill out a survey about their computing needs. About half of respondents through summer 2007 "were people who said 'This is where I'm generally headed, but I'm not there yet,'" Skinner says, but other fellows said they can't get their research done fast enough with their present computer resources. "A couple…said 'My workstation is too slow for the work,'" Skinner says. "For those, my response was to get them onto some high-performance computing facilities," either at NERSC or another national laboratory, such as Oak Ridge or Argonne. The startup allocations should let fellows try out their codes for up to 18 months.

Students, Simon says, often are shy about asking for help getting their codes to run on NERSC Center computers. Many are accustomed to solving their own computer problems and working with university computing centers that often were staffed by their fellow students, Simon adds. They're "quite surprised when they come to NERSC, because we are a full-service organization."

"We don't discriminate against students," Simon says; they're often the people doing the nitty-gritty coding for their major professors' research. DOE CSGF fellows who get access to NERSC can work with some powerful computers:

• Seaborg, an IBM RS/6000 SP with 6,656 processors (6,080 of which are available to run scientific computing applications). The system has a peak performance of 10 teraflops, or 10 trillion operations per second.
• Bassi, an IBM p575 POWER5 system with 976 processors. Bassi's peak performance is 7.4 teraflops and it has 100 terabytes of disk storage.
• Jacquard, a 712-processor Opteron Linux cluster with a peak speed of 3.1 teraflops.
• PDSF, a 700-processor Linux cluster dedicated to high-energy physics research.
• DaVinci, a 32-processor SGI Altix shared memory cluster devoted to visualization, data analysis, and long-running interactive work.

NERSC also has two data storage systems:

• The NERSC Global File System (NGF), with about 150 terabytes (trillion bytes) of user-accessible storage. NGF allows users to create and access a single file from any of the lab's high-performance computing systems.
• The High-Performance Storage System (HPSS), with a theoretical capacity of 22 petabytes (quadrillion bytes) for long-term archival data storage.

In the fall of 2007, NERSC also will bring its latest computer system on line. Dubbed Franklin, the Cray XT4 will have 19,344 compute CPUs, at least two gigabytes of memory per CPU, and a sustained performance of 16 teraflops, as opposed to a theoretical peak speed of more than 100 teraflop/s.

[Figure caption: Named "Franklin" after Benjamin Franklin, America's first scientist, the Cray XT4 will consist of more than 19,000 processor cores when fully installed. It will deliver sustained performance of at least 16 trillion calculations per second, with a theoretical peak speed of more than 100 teraflop/s. Franklin the computer was powered up for the first time on January 17, 2007, thus celebrating its birthday with Benjamin Franklin, who was born January 17, 1706. Image and caption courtesy of www.nersc.gov]

Franklin will increase the number of NERSC computer cycles available for research by a factor of 16, Simon says — and every one of them is needed. "Researchers often ask for many more processor hours than we can actually accommodate. In the last couple of years we had requests that were more than six or seven times what we had available," he adds. Most researchers got only a fraction of the processor hours they wanted. With Franklin, "We expect we will have, for once, enough cycles to keep everybody happy" — but not for long. Demand for computing time is constantly growing. With future upgrades, Franklin could have a theoretical peak of 1 petaflops — one quadrillion calculations per second.

That's why, as soon as Franklin is stabilized and running, NERSC will begin preparing for the next system, called NERSC-6 for now. NERSC-6 is likely to start life as a 1-petaflops-capable machine. It's also likely to have even more processing cores on a single chip, a change that poses challenges for the future NERSC director.

"There's always enough work to do," Simon says. "There's exciting stuff to do as long as computers grow and become more powerful. We never stand still."
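A startup allocation is mostly a chance to find out how a code scales. Purely as an illustration, and assuming mpi4py is available on the target machine (this is not a NERSC-specific script), a fellow could time the same fixed-size problem on different processor counts and compare the results:

```python
from mpi4py import MPI
import numpy as np
import time

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

# Fixed total problem size, split across ranks (strong scaling).
n_total = 8_000_000
n_local = n_total // size

comm.Barrier()
t0 = time.perf_counter()
local = np.random.default_rng(rank).random(n_local)
local_sum = float(np.sum(np.sqrt(local)))            # stand-in for real work
global_sum = comm.allreduce(local_sum, op=MPI.SUM)    # combine partial results
elapsed = comm.allreduce(time.perf_counter() - t0, op=MPI.MAX)

if rank == 0:
    # Rerun with different rank counts (e.g. mpirun -n 1, 2, 4, ...) and
    # compare elapsed times to judge the parallel speedup.
    print(f"{size} ranks: result {global_sum:.3e}, time {elapsed:.3f} s")
```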
COLLABORATORS

Horst Simon was named Associate Laboratory Director for Computing Sciences at Berkeley Lab in 2004. He represents the interests of the lab's scientific computing divisions — the NERSC Center and Computational Research — in the formulation of laboratory policy, and leads the overall direction of the two divisions. He also coordinates constructive interactions within the computing sciences divisions to seek coupling with other scientific programs. Simon joined LBNL in early 1996 as director of the newly formed NERSC Division and was one of the key architects in establishing NERSC at its new location in Berkeley. He also is the founding director of Berkeley Lab's Computational Research Division, which conducts applied research and development in computer science, computational science, and applied mathematics. His research interests are in the development of sparse matrix algorithms, algorithms for large-scale eigenvalue problems, and domain decomposition algorithms for unstructured domains for parallel processing. Simon's recursive spectral bisection algorithm is regarded as a breakthrough in parallel algorithms for unstructured computations, and his algorithm research was honored with the 1988 Gordon Bell Prize for parallel processing research. He also is one of four editors of the twice-yearly "TOP500" list of the world's most powerful computing systems.

David Skinner was the lead high-performance computing (HPC) consultant for the Department of Energy's first six Innovative and Novel Computational Impact on Theory and Experiment (INCITE) projects before starting the SciDAC Outreach Center; the INCITE program has since blossomed into a multi-lab allocation process for large-scale computing. In that work and other projects Skinner often focuses on making scientific applications run fast and scale well. The core areas of his present HPC research include improving application performance, characterizing scientific workloads, and analyzing emerging architectures. Skinner heads the NERSC Center's Open Software and Programming Group, which works to make software deliver for HPC centers and users and promotes software development practices that enhance reliability and performance in the overall HPC process. His group works on a variety of software related to parallel computing applications as well as HPC center infrastructure software for system monitoring, allocation banking, and Web services. Skinner also publishes scientific research in the areas of molecular dynamics, chemical quantum dynamics and kinetics.

GETTING UNSTUCK: SCIDAC OUTREACH CENTER BRINGS SOLUTIONS TO SCIENTISTS

Even though it too has a toll-free number (866-470-5547) and e-mail address ([email protected]) for obtaining assistance, the center that David Skinner oversees isn't quite like the help desks that computer makers run for perplexed users. For one thing, callers to the SciDAC Outreach Center get better service.

The center is part of the second round of the Department of Energy's Scientific Discovery through Advanced Computing (SciDAC-2) program, which applies computational science to projects in a range of disciplines. The center is a clearinghouse to disseminate computational tools and techniques SciDAC researchers develop throughout the program's entire community.

"SciDAC has tried to bring software solutions to scientists and make them easier to use," says Skinner, the outreach center coordinator and leader of the Open Software and Programming Group at the National Energy Research Scientific Computing (NERSC) Center. But "Some of the technologies are quite complicated, and having someone who can assist in deploying a high-performance computing software library or a different methodology" is the outreach center's role.

Skinner and NERSC staff member Andrew Uselton are the only people permanently assigned to the center, but they have access to other NERSC application experts. More often, the center is an intermediary that connects researchers to each other.

Skinner says the most common thing he hears from researchers is "I'm stuck." "That's the most motivating factor to get people to seek out help, is that they're bottlenecked or can't get their research done," Skinner says. Researchers contact the center seeking information on new software or technology they might use. On the other side, researchers also contact the center to offer software solutions.

"We definitely are the matchmaker between producers and requesters of new technology," Skinner says, but he and Uselton don't just sit back and wait for calls. They spread the word, both in person and electronically, about available technologies. They attend meetings to learn what scientists need and to help them learn new software. For example, about 100 researchers attended a series of software tutorials the center staged in conjunction with the annual SciDAC conference in Boston in June 2007.

Electronically, the center provides tools for collaboration, including software testing and evaluation, software repositories, and even FAQs. For example, the center is developing a GForge website for software development activities, including news, downloads, bug reporting, feature tracking and other tools.

The outreach center builds on NERSC's strong track record, Skinner says. "We have a lot of experience in bringing people up to speed on parallel computing resources," he adds, but "The SciDAC Outreach Center has a broader scope than just NERSC." It will connect researchers with any laboratory or computing center that might have the answers or services they need.

Further Reading:
NERSC Center Web page: http://www.nersc.gov/
NERSC Center strategic plan: http://www.nersc.gov/news/reports/LBNL-57582.pdf
NERSC Center newsletter archive: http://www.nersc.gov/news/nerscnews/

Contact: Horst Simon, [email protected] | David Skinner, [email protected]
PRACTICUM COORDINATOR: Daniel Martin ...... [email protected]

DOE LAB RESEARCH
Lawrence Berkeley | Sandia | Oak Ridge | Lawrence Livermore | Los Alamos | Pacific Northwest | Argonne

From Flames to Fusion
Sandia Researchers' Methods Prove Their Versatility
By Thomas R. O'Donnell

WHETHER IT’S THE SUN, THE STARS, or the long-sought viable fusion reactor, they all involve plasmas, and researchers at Sandia and Los Alamos national laboratories want to help understand what happens in them.

They're developing computer algorithms that simulate how ions and electrons move and react in plasmas under the influence of magnetic fields. If they're successful, the work could help physicists better understand the processes fueling the sun and stars, and how to harness that power for energy.

The project builds on nearly 20 years of Sandia's field-leading research in using computer science, applied math and numerical algorithms to devise efficient methods for massively parallel computing. The results include Aztec, one of the first large-scale, parallel, iterative solver libraries, and Chaco, an early tool for balancing the computational workload between parallel processors. A host of complex applications have used those and other codes devised by John Shadid, Ray Tuminaro, Scott Hutchinson, Bruce Hendrickson, Rob Leland and other Sandia researchers.

In one case, Sandia researchers worked with Ford Motor Co. researcher Kevin Elwood to simulate a hydrogen solid oxide fuel cell. The researchers' algorithms and the MPSalsa simulation software helped model flow and reactions in the fuel cell as it transforms hydrogen and oxygen into water and electricity. The researchers also have collaborated with Dow Chemical Co. to optimize a new design for a chemical reactor that efficiently converts ethane gas to liquid ethylene, an important feedstock for plastics.

But Shadid had his own application in mind. He has worked with fellow Sandians Roger Pawlowski, Andy Salinger, Karen Devine, Gary Hennigan and Paul Lin to simulate the intricate physics of coupled fluid flow and complex chemistry. These transport/reaction (or chemically reacting flow) systems have a multitude of applications, including simulating combustion, cardiac cell activity, and even the spread of a biological material released in a busy airport.

Stability Control

Such transport/reaction processes couple complex physical phenomena: fluid flow, energy transfer, radiation effects, chemical species transport and chemical reactions (both in fluids and on surfaces), and more. They also occur across an expansive range of space and time scales.

"This myriad of coupled phenomena produce a highly nonlinear, multiple timescale behavior," Shadid says. It's a huge challenge to find stable, accurate, and efficient numerical methods that simulate these processes and run efficiently on ever-larger computers.

The usual algorithmic approaches have used semi-implicit, operator-splitting, or explicit time-stepping. In explicit time-stepping, the value of each quantity (density, momentum, energy, species concentration, and others) at the new time is calculated based only on the old values. That approach produces a simpler solution method, but it can be unstable, meaning it's more likely to produce nonsense answers. "You have to go at the smallest time-scale of the individual physics to achieve stability, even if it doesn't provide extra accuracy in the simulation," Shadid says.

Shadid and his fellow researchers approach the problem from the other end of the spectrum: fully implicit methods, which represent all the physics consistently at each new time level. "The advantage is you can take time steps that are associated with the physics you're interested in," rather than with stability, Shadid says. The desired accuracy dictates the time steps, and fewer time steps can be calculated without losing accuracy. The disadvantage: each step produces a set of strongly coupled, nonlinear equations with tens of millions or hundreds of millions of unknowns.
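The stability trade-off Shadid describes can be seen in a toy setting. The sketch below is an illustration only, not the Sandia codes: it integrates a single stiff linear equation with forward (explicit) and backward (implicit) Euler using a time step ten times larger than the explicit method's stability limit.

```python
import numpy as np

# Toy stiff problem: du/dt = -k*u, exact solution u(t) = exp(-k*t).
# The fast physics has time scale 1/k; we would like steps much larger than that.
k = 1000.0
dt = 0.01          # 10x larger than the forward-Euler stability limit 2/k = 0.002
nsteps = 100
u_explicit = 1.0
u_implicit = 1.0

for _ in range(nsteps):
    # Forward (explicit) Euler: new value computed only from old values.
    u_explicit = u_explicit + dt * (-k * u_explicit)
    # Backward (implicit) Euler: new value appears on both sides; solve for it.
    u_implicit = u_implicit / (1.0 + k * dt)

print(f"explicit Euler after {nsteps} steps: {u_explicit:.3e}")   # grows without bound
print(f"implicit Euler after {nsteps} steps: {u_implicit:.3e}")   # decays, as the physics does
```

In the real transport/reaction and plasma systems the implicit update is not a one-line division but a huge coupled nonlinear solve at every step, which is why the solver and preconditioner machinery described next matters.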
These equations must be solved simultaneously on thousands of computer processors, and the resulting huge algebraic systems must be attacked intelligently so parallel-processing computers can handle them efficiently. The methods Shadid and his fellow researchers have developed generally do that in two steps: applying a nonlinear solver, then applying iterative solvers with multigrid preconditioners.

Getting a Grid

In general, discretizing PDEs involves distributing points in a grid or mesh throughout the space being simulated, such as a chemical reaction chamber. The more points, the better the resolution and accuracy of the simulation, and the more demanding it is for computers to solve. When the governing partial differential equations (PDEs) are discretized, that is, transformed into algebraic equation systems digital computers can solve, implicit time integration produces a large system of equations at every step.

Multigrid methods use a hierarchy of grids of varying resolutions. Because they use information from a sequence of grids rather than a single grid, these methods scale optimally: the number of iterations is constant and the work grows in direct proportion to the problem size. Collaborators Ray Tuminaro and Marzio Sala (from the Swiss research institution ETH Zurich) have developed fully algebraic methods that accomplish this.
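A minimal sketch of that two-step pattern, an outer nonlinear Newton-type solver wrapped around a preconditioned Krylov iteration, is shown below using SciPy's generic newton_krylov routine on a small steady reaction-diffusion model. The problem, the grid size and the crude diagonal preconditioner are illustrative stand-ins, not the Aztec or Trilinos solvers the article describes.

```python
import numpy as np
from scipy.optimize import newton_krylov
from scipy.sparse.linalg import LinearOperator

# Steady 1D reaction-diffusion model problem: u'' + lam*exp(u) = 0, u(0)=u(1)=0,
# discretized with second-order finite differences on n interior grid points.
n, lam = 200, 1.0
h = 1.0 / (n + 1)

def residual(u):
    """Nonlinear residual F(u); a root of F is the discrete steady state."""
    upad = np.concatenate(([0.0], u, [0.0]))                 # boundary values
    lap = (upad[:-2] - 2.0 * upad[1:-1] + upad[2:]) / h**2   # divided differences
    return lap + lam * np.exp(u)

# A crude "preconditioner": scale by the inverse diagonal of the linearized operator.
# Production codes replace this with multigrid or physics-based preconditioners.
diag = -2.0 / h**2 + lam
M = LinearOperator((n, n), matvec=lambda v: v / diag)

u0 = np.zeros(n)
u = newton_krylov(residual, u0, method="lgmres", inner_M=M, f_tol=1e-8, verbose=True)
print("max |F(u)| at solution:", np.abs(residual(u)).max())
```

The same outer/inner structure carries over to the Sandia problems, except that the inner Krylov solve is preconditioned by multigrid rather than a diagonal scaling.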
The preconditioning step is critical to solving these large systems, Shadid adds. On test problems with 10 million to 250 million unknowns, running on up to 8,000-plus parallel processors, the Sandia multilevel preconditioning techniques proved to be 10 to 100 times faster than one-level methods. The iterations necessary to reach a solution also stayed constant, even as the problem size grew by more than four orders of magnitude. That means the algorithms should scale well on even bigger computers.

The Sandia researchers' nonlinear solvers also are robust, meaning they are unlikely to blow up and produce nonsense, and they allow the group to crack elaborate problems. This includes bifurcation studies, which efficiently identify changes in stability: points where a change in model parameters leads to multiple steady states or oscillatory solutions. For example, the researchers' codes have been used to analyze chemical vapor deposition, which places films of semiconductor materials on wafers for microelectronics. Industry wants to scale up the process to get more chips out of each wafer, but reactor instabilities produce inconsistent results. "Some days you get a good uniform layer. Some days you get a really bad, nonuniform layer," Pawlowski says. Small changes in the system, whether a valve sticks during start-up, a temperature setting is slightly out of adjustment, or someone simply bumped the reactor during the run, can lead to a fundamental change in system performance.

The group's code can determine the point at which parameter variations cause destabilizing effects that destroy uniform, steady-state operation. That information can help improve designs for reactors and other devices. The code calculates the bifurcation point directly, compared to programs that must be run numerous times with different operating parameters. "We can set one run, let it go, and it will directly compute the parameter values of the bifurcation point," Pawlowski says. "You don't have to adjust the parameters by hand and try to hopefully figure out where it changes stability." It's "a powerful tool to design stable operation" of an experiment or process.

Similarly, the group's unique algorithms can help scientists and engineers choose optimal designs. "I could sit at a computer all day and set different parameters and run (a program) out, change the parameters and run it out again and again" to get an optimal answer, Pawlowski says. That may work when only a few parameters are changing, but the problem is tougher when many parameters are involved. "You don't want to simulate for every possible combination of parameters. You want methods that will take you directly to the optimal solution," Pawlowski says. The Sandia codes do that quickly.
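To give a feel for what computing a bifurcation point "directly" means, the toy calculation below locates the fold of a single cubic model equation by solving an augmented system (state plus parameter) with Newton's method instead of sweeping the parameter by hand. It is a schematic stand-in for the idea, not the Sandia library.

```python
import numpy as np

# Model steady-state equation f(u, r) = 0, with f(u, r) = r + u - u**3 and
# control parameter r. At a fold bifurcation both f = 0 and df/du = 0, so the
# bifurcation point can be computed directly by solving the augmented 2x2
# system G(u, r) = (f, df/du) = 0 with Newton's method -- no parameter sweep.

def G(x):
    u, r = x
    return np.array([r + u - u**3, 1.0 - 3.0 * u**2])

def J(x):
    u, _ = x
    return np.array([[1.0 - 3.0 * u**2, 1.0],    # derivatives of f wrt (u, r)
                     [-6.0 * u,         0.0]])   # derivatives of df/du wrt (u, r)

x = np.array([0.5, 0.0])                         # rough guess for (u, r)
for _ in range(20):
    step = np.linalg.solve(J(x), -G(x))
    x = x + step
    if np.linalg.norm(step) < 1e-12:
        break

u_fold, r_fold = x
print(f"fold at u = {u_fold:.6f}, r = {r_fold:.6f}")   # expect u ~ 0.5774, r ~ -0.3849
```

In the multi-million-unknown reactor problems the same idea applies, with the scalar derivative replaced by a Jacobian acting on vectors and the augmented system solved with the preconditioned Newton-Krylov machinery described earlier.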
The group's nonlinear solvers and preconditioner algorithms have been disseminated to the computational science community as part of the Trilinos solver framework, a parallel computing software package assembled at Sandia. Now Shadid and his fellow researchers are extending their methods to plasmas, the stuff of stars and fusion energy. The work could apply to astrophysics and to plasma processing of advanced semiconductor and microelectromechanical systems (MEMS). It also could help simulate plasma behavior in devices like ITER, the international fusion reactor project to be built in southern France; the National Ignition Facility (NIF) at DOE's Lawrence Livermore National Laboratory; or Sandia's own Z machine (see sidebar).

Plasma Application

Sandia researchers Pavel Bochev and Jeff Banks work on the project with Shadid, Pawlowski and Tuminaro. They're collaborating with Luis Chacón of Los Alamos National Laboratory, Dana Knoll of Idaho National Laboratory, and DOE Computational Science Graduate Fellow John Evans of the University of Texas. The project marries methods the Sandia group devised with ones Chacón and Knoll have developed for magnetohydrodynamics (MHD) and extended magnetohydrodynamics (XMHD).

MHD studies how electrically conducting fluids move and are affected by magnetic fields. The physics has similarities with those governing chemical transport and reaction, but with some twists. "You add in electric and magnetic fields with Maxwell's equations," Shadid says. "It makes the solution processes significantly more difficult."

XMHD is trickier still. Standard MHD simulations track just one type of fluid behavior, coupling ions and electrons so they behave as a single fluid. XMHD brings in more complex electron dynamics, Chacón says. "Standard MHD is an approximation of what reality does, and the approximation involves several assumptions," he adds. XMHD makes fewer assumptions, but "The price you pay is that it becomes a much more complicated system" to solve.
Like the Sandia group, Chacón, Knoll and their fellow researchers have focused on multigrid preconditioning methods with applications to MHD simulations. In essence, their physics-based preconditioners determine which coupled physics equations govern the simulation's time scale. "We know that the coupling produces the problems in time scales and so we address those," Chacón says. "That's where the physics-based (approach) comes from. You need to have the insights of what couplings are producing the time scales" and address those directly. The preconditioning technique breaks the complex equation systems into smaller subsystems for multigrid solutions.

The Sandia group has used a similar physics-based preconditioner technique and applied it to low Mach-number fluid flows. In effect, the researchers decouple the equations into subsystems that can be more easily solved by multigrid methods. "We don't decouple the actual iterative solver," Shadid says. "We decouple some of the physics in the preconditioners, which is just an approximation that gives us very fast, very good approximate solution."

The method Chacón's group uses also doesn't assume any particular grid or mesh structure to discretize the governing equations, while the Sandia researchers use unstructured data grids, allowing them to discretize problems with complex geometries. The Los Alamos approach "readily generalizes to unstructured meshes," Chacón adds. "In principle there is a good marriage between the two. There is a direct translation to unstructured meshes."

Shadid agrees: "It's really an excellent collaboration."
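The "decouple some of the physics in the preconditioner" idea can be sketched on a tiny coupled system. In the illustrative example below, a two-block matrix stands in for two coupled sets of physics: the Krylov solver (GMRES) still sees the fully coupled operator, while the preconditioner only inverts the diagonal blocks, one block of "physics" at a time. This shows the structure only; it is not the Sandia or Los Alamos preconditioners.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
n = 100   # unknowns per "physics" block

# Two well-conditioned diagonal blocks (stand-ins for two different physics)
# plus a weak off-diagonal coupling between them.
A11 = sp.diags(2.0 + rng.random(n))
A22 = sp.diags(5.0 + rng.random(n))
C = 0.1 * sp.random(n, n, density=0.05, random_state=0)
A = sp.bmat([[A11, C], [C.T, A22]]).tocsr()
b = rng.random(2 * n)

# Block preconditioner: apply the inverse of each diagonal block separately and
# ignore the coupling. The outer Krylov iteration still solves the fully coupled
# system, so no physics is dropped -- it is only approximated inside the
# preconditioner. In a real plasma code each block solve would itself be a
# multigrid cycle rather than a diagonal division.
inv11 = 1.0 / A11.diagonal()
inv22 = 1.0 / A22.diagonal()

def apply_prec(v):
    out = np.empty_like(v)
    out[:n] = inv11 * v[:n]
    out[n:] = inv22 * v[n:]
    return out

M = spla.LinearOperator(A.shape, matvec=apply_prec)

x, info = spla.gmres(A, b, M=M)
print("gmres converged" if info == 0 else f"gmres info = {info}")
print("residual norm:", np.linalg.norm(b - A @ x))
```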
FEELING Z PINCH

>> There's a homegrown brand of fusion science Sandia and Los Alamos researchers want to help model with their algorithms for extended magnetohydrodynamics. Sandia National Laboratories' New Mexico location houses the Z machine, the world's largest controlled X-ray generator. It's designed to test materials under extreme conditions of radiation and pressure as part of the U.S. nuclear weapons stockpile stewardship program, but it's also being studied as a route to creating the clean energy of nuclear fusion.

Through a system of huge capacitors and wires, and with 20 million amperes of electrical current, the Z machine can generate pulses of 200 trillion watts of power for short periods. The pulse vaporizes an array of metal wires around a housing the size of a spool of thread, turning them into plasma. The powerful magnetic field generated by the current compresses the plasma to about the thickness of a pencil lead. Under compression, the moving ions and electrons suddenly stagnate, releasing energy as X-rays and reaching temperatures of billions of degrees. The current passing through the wires travels vertically, or along the Z axis. Since magnetic fields "pinch" the plasma, the process has been called Z-pinch confinement. Sandia scientists have used the Z machine to fuse tiny amounts of deuterium, producing thermonuclear neutrons, a step toward creating a self-sustaining fusion reaction.

Sandia researcher John Shadid says some of his group's algorithms may be useful in helping model hydromagnetic Rayleigh-Taylor instabilities in the plasma implosion. Such instabilities are an important limiting mechanism for the amount of radiation energy produced by a Z-pinch and are therefore critical to understand and control. The algorithms the groups are developing may find their way into production simulation codes like Sandia's ALEGRA, which is used in modeling the Z machine. "There's a large program already at Sandia in terms of modeling this kind of physics, but there are a number of open issues still, and one of them is handling these complex, interacting, multiple time scales," a focus for his group's research, Shadid says.

Luis Chacón, Shadid's collaborator at Los Alamos National Laboratory, says the powerful machine is ripe for simulations using the group's approach. It's been hypothesized that electron physics plays a role in the Z machine's operation, "and that brings all the tools we're talking about in that regard." Computer models of the Z machine use operator-split time-stepping techniques, which can sometimes introduce instabilities and lead to significant error accumulation. Chacón and Shadid believe more effective implicit coupling could cut errors and increase efficiency, provided the faster time scales electron physics brings to the problem can be managed.

Photo: Time-exposure photograph of Sandia's Z machine firing. Z is the world's most efficient (15 percent) and powerful laboratory X-ray source, producing X-ray powers in excess of 200 trillion watts. (For more information: http://www.sandia.gov/pulsedpower/.) Photo provided by Randy Montoya.

Figure: A contour plot of hydrogen and water concentrations in a 2D-cylindrical solid oxide fuel cell (SOFC). In the MPSalsa transport/reaction simulation, hydrogen fuel reacts with oxygen ions at the anode interface (lower surface) to produce water and electrons for electrical power. Courtesy of J.N. Shadid of Sandia National Labs and K. Ellwood of Ford Research.

COLLABORATORS

John N. Shadid is a distinguished member of the technical staff at Sandia National Laboratories in the Computational Science R&D Group. He received bachelor's and master's degrees in mechanical engineering and a master's degree in mathematics from the University of Wisconsin-Milwaukee and his doctoral degree in mechanical engineering from the University of Minnesota. After graduation he joined Sandia National Laboratories as a senior member of the technical staff in the Parallel Computational Sciences Department and was named a distinguished member in 1999. At Sandia he has been lead principal investigator and co-PI on a number of large-scale computational science projects, including research and development of a parallel implicit transport/reaction simulation code, MPSalsa, and a parallel preconditioned Krylov solver library, Aztec. The Aztec library received an R&D 100 award in 1997, and the MPSalsa simulation code has been honored twice as a Gordon Bell Prize finalist. His current research interests include high-performance computing; parallel algorithm development; numerical solution methods for multiple-time-scale nonlinear coupled PDEs; and the simulation of a wide range of complex transport/reaction systems including, most recently, magnetohydrodynamic (MHD) systems.

Luis Chacón is a member of the technical staff at Los Alamos National Laboratory. He obtained a master's degree in industrial engineering from the Polytechnic University of Madrid in 1994, and master's and doctoral degrees in nuclear engineering from the University of Illinois in 1998 and 2000, respectively. In 2000, he accepted a Director's Postdoctoral Fellowship appointment in T-division at Los Alamos National Laboratory to research the application of Newton-Krylov methods to resistive MHD. During this appointment, he has made important contributions in the context of implicit, nonlinear algorithms for two-dimensional resistive and Hall incompressible MHD.

Roger Pawlowski is a senior member of the technical staff at Sandia National Laboratories. He obtained his doctoral degree in chemical engineering from the State University of New York-Buffalo in 2000. He has contributed to a variety of projects at Sandia, including the development of solvers for large-scale circuit networks, coupled circuit/device solvers, catalytic oxidation reactor design, MEMS reactor design, multi-phase aerosol modeling, combustion, and fundamental studies of stagnation flows. His current interests focus on developing robust finite element discretization techniques for MHD and reacting flow physics and on algorithm development for nonlinear systems, bifurcation analysis, and multi-physics coupling.

Further reading:
J. N. Shadid, A. G. Salinger, R. P. Pawlowski, P. T. Lin, G. L. Hennigan, R. S. Tuminaro, and R. B. Lehoucq. Large-scale Stabilized FE Computational Analysis of Nonlinear Steady State Transport/Reaction Systems, CMAME 195, 1846-1871 (2006).
D. A. Knoll, V. A. Mousseau, L. Chacón, and J. Reisner. Jacobian-free Newton-Krylov Methods for the Accurate Time Integration of Stiff Wave Systems, Journal of Scientific Computing 25, no. 1, 213-230 (2005).

Contact:
John Shadid: [email protected]
Luis Chacón: [email protected]
Roger Pawlowski: [email protected]

PRACTICUM COORDINATOR
Heath Hanshaw ...... [email protected]

Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under Contract DE-AC04-94AL85000.

Measuring Up On the Petascale
By Karyn Hede

"ANYONE CAN BUILD A FAST CPU. The trick is to build a fast system." — Seymour Cray

The words of Seymour Cray, founder of supercomputer maker Cray Inc., will soon be tested: the company is slated to deliver a petascale-class system to Oak Ridge National Laboratory (ORNL).

Figure: The Cray system is a "hybrid" XT3/XT4 (68 XT4 cabinets hooked to 56 XT3 cabinets, for a total of 124 cabinets).

Petascale computing has the potential to eclipse today's fastest computer by a factor of three. But how fast it really runs depends more on the lab than on the manufacturer.

As the Department of Energy's National Leadership Computing Facility (NLCF), Oak Ridge has been asked to take a dramatic step toward a new capability for complex, computationally intense science. "Leadership Class Computing is a combination of world-leading computing power and the policies that allow it to be used to solve very large problems," says Doug Kothe, director of science at the lab's National Center for Computational Sciences (NCCS). Instead of hundreds or thousands of simulations running on the system at any one time, there may be only a handful. The idea is to push the limit on what simulation can tell us about the natural world.

Jaguar, ORNL's current Cray XT4 system, comprises 124 cabinets containing more than 11,700 dual-core processors. The system has achieved 101.7 teraflops (trillion floating point operations per second) on the Linpack benchmark, more than 85 percent of its 119-teraflops theoretical peak speed. It's scheduled to receive new quad-core processors in late 2007, bringing its top speed to greater than 250 teraflops.

The new Cray "Baker" system will make that look like small potatoes. When it arrives in early 2009, it is expected to achieve petascale speed: more than a quadrillion operations per second, 10 times Jaguar's current Linpack rating.
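The two performance figures just quoted are simple arithmetic, spelled out here using only the numbers above:

```python
# Efficiency and speed-up behind the figures in the text.
linpack_tf = 101.7          # measured Linpack performance, teraflops
peak_tf = 119.0             # theoretical peak, teraflops
print(f"Jaguar Linpack efficiency: {linpack_tf / peak_tf:.1%}")               # ~85%

petaflops_in_tf = 1000.0    # 1 petaflops = a quadrillion ops/s = 1,000 teraflops
print(f"Baker target vs. Jaguar today: {petaflops_in_tf / linpack_tf:.1f}x")  # ~10x
```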
But fast computers alone can't guarantee science breakthroughs. It's just as important to allocate large blocks of time on them to attack the most difficult problems. DOE does that through the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program, which provides large computer time grants to just a handful of peer-reviewed projects. The 2007 program made 28 allocations, but only a handful of teams will be selected to have the first crack at Baker.

"It's an interesting and exciting new model, where you are trying to go deep into the science, and that can be done if you only have a few projects," Kothe says.

A third ingredient, besides fast computers and large allocations, also is necessary if these projects, some of the most complex calculations ever attempted, are to succeed. The NCCS works with the science teams through user assistance and scientific computing groups. A 5-million processor-hour INCITE allocation is equivalent to a multi-million dollar grant in terms of the support recipients receive, Kothe explains.

"To do breakthrough science at a user facility, we can't just be a 'cycle shop' and say 'OK, here's your account, good luck,'" Kothe says. "We intimately involve ourselves with each team. We partner with them in ways that can help them by giving them our best and brightest people who are most aligned with their interests and needs."

That includes former DOE Computational Science Graduate Fellowship scholar Richard Mills. Mills worked in user support at Oak Ridge after completing his practicum with Peter Lichtner at Los Alamos National Laboratory. That experience led to a full-fledged collaboration with Los Alamos, and now Mills is a fully supported member of ORNL's computational earth sciences group.

"Richard spun up very quickly through our system," Kothe says. "It's a good example of what's possible for computer scientists who join our group."

Figure: This image shows a snapshot of the simulated time evolution of atmospheric carbon dioxide concentration (the red plumes) originating from the land surface at the beginning of the industrial carbon cycle (around 1900). This CO2 is a product of the net ecosystem exchange: the CO2 flux due to respiration of vegetation and soil microbes (green areas on land) minus that taken up for ecosystem production (orange areas on land). The underlying simulation is one of a number of runs performed for Phase 1 of the Coupled Climate/Carbon Cycle Model Intercomparison Project on the Leadership Computing Facility. Courtesy: Dr. Warren Washington, Principal Investigator.

Bridging Worlds

Climate modeling is one area expected to benefit from the great leap forward petascale computing offers. There is a concerted effort to add the full carbon cycle to climate models, Kothe explains. Previous research has suggested ecosystems respond to increasing atmospheric carbon dioxide by taking up more carbon, but paradoxically also respond to increasing temperature by releasing carbon. Scientists would like to compare these responses in a full climate simulation, but it has been too expensive in terms of computer time and cost to add a fully functional carbon cycle to the models. Instead, models have specified the amount of carbon dioxide in the atmosphere as a fixed input. The advent of petascale computing opens up the possibility of simulating a full carbon cycle, including human-generated carbon, as well as simulating how the land and ocean will respond.

The massive project, called "A scalable and extensible Earth system model for climate change science," includes input from several national laboratories, academic programs and DOE's Atmospheric Science Program and Terrestrial Carbon Program, among others. Its goal is nothing less than transforming an existing global climate model, the Community Climate System Model (CCSM), to create one that fully simulates coupling between the physical, chemical, and biogeochemical processes in the climate system.

John Drake heads the project's group at ORNL's Computer Science and Mathematics Division, and Peter Gent of the National Center for Atmospheric Research is chairman of the group's scientific steering committee. A group at Pacific Northwest National Laboratory contributes the component that calculates the chemistry of aerosols. Lawrence Livermore National Laboratory provides the chemistry component. Groups at Argonne National Laboratory work on coupling all of the components. The ORNL Leadership Computing team is contributing personnel from NCCS's Scientific Computing Group, which works directly with users, their software, and their data to get all the components ready for the jump to petascale computing.

"The people in our Scientific Computing Group have a lot of fingertip knowledge, meaning their hands are on the keyboard a lot in terms of writing code, and so they know the latest practices that can help these teams evolve their codes and algorithms to be more productive and more long-lived," Kothe says.

A background in physics and experience in high-performance computing allows Trey White, a research computer scientist at NCCS, to contribute to several projects. Presently he works on the Parallel Ocean Program (POP), a component developed at Los Alamos National Laboratory for the CCSM. He assists Pat Worley, a computer scientist in the climate group, in streamlining POP and improving its scalability for petascale computing.

Worley, who has a long history of POP performance analysis, found that a sluggish solver was one thing limiting its scalability. The code must find the sum of a value across all the processors, a step called an "all-reduce," which significantly slows the parallel operation. White is looking for a way to reduce the need for all-reduce queries by improving what's called the preconditioner. Meanwhile, Worley is making the entire operation faster by arranging the processes more efficiently.
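The all-reduce White and Worley describe is, in MPI terms, a global reduction that every process must wait for. A minimal mpi4py sketch of the operation (illustrative only, not POP code) looks like this:

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each process owns a piece of a distributed vector (a stand-in for one
# processor's patch of the ocean grid).
local = np.full(1000, rank + 1.0)

# Local partial dot product, then a global sum across ALL processes.
# Every process blocks here until the reduction completes, which is why
# frequent all-reduces limit scalability at large processor counts.
local_sum = np.array([np.dot(local, local)])
global_sum = np.empty(1)
comm.Allreduce(local_sum, global_sum, op=MPI.SUM)

if rank == 0:
    print("global dot product:", global_sum[0])
```

Launch it under mpirun with several processes to see the pattern. Cutting how often the solver needs this global synchronization, for instance through a better preconditioner that lowers iteration counts, is exactly the kind of change that pays off at hundreds of thousands of processors.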
"The Scientific Computing Group is really synergistic, since we are often members of the development teams for the applications and members of the operation team for the center," White says. "We act as advocates and proxies within the center for each project, and we bring detailed knowledge of the center computers, infrastructure, and plans back to the development teams."

"The NCCS is co-located at ORNL with many of DOE's premier science teams who use the computer systems," Kothe says. Several of these teams are already busy developing codes specifically for petascale machines. Two such ORNL projects received INCITE awards in 2007.

Robert Harrison, group leader of the Computational Chemical Sciences Group at Oak Ridge, is working with NCCS in an attempt to stave off program obsolescence by working ahead of the game. He's developing a quantum chemistry program that incorporates petascale capability into the initial design (see sidebar). Likewise, Thomas Schulthess of ORNL's Computer Science and Mathematics Division heads a research team developing DCA++, a new-generation program that will predict the behavior of materials at the nanoscale. The project's goal is ultimately to develop new materials designed on the nanoscale.

Making the Transition to Petascale

When these and other researchers are ready to run their simulations, members of the Scientific Computing Group will be there to serve as liaisons by scheduling time on the machine and smoothing the rough edges of the programs. White says POP could be among a handful of pioneering applications used to track performance when the Baker system first comes on line.

"The Scientific Computing Group exists because of the understanding that we are treading new ground; this isn't just using computers as is typically done, but really pushing the envelope on the scale of parallel computing," White says.

"Many codes have been caught sort of flat-footed in the race to get to the petascale because they have to make a transition from running on a few hundred or thousand processors to running on hundreds of thousands or millions of processors," Harrison says. "Typically what happens with older codes is that programmers take an existing program, parallelize it, and get stuck there, limiting their ability to migrate to even larger machines."

"Setting up queuing systems to be effective is a black art, particularly when you have really large jobs," White says. "Fitting everything in the available space/time holes can be as much a social science as technical. When a project has a big deadline coming up, we try to make sure they've got the resources they need."

Kothe adds: "We've mastered the art of user support, while pushing the envelope of the science that can be done on these machines. That's what makes us unique."

IT'S MADNESS

>> "Software gets slower faster than hardware gets faster." — Niklaus Wirth, 1995

Robert Harrison has a beef with Wirth's Law. He's not saying software evolution hasn't kept up with the hardware in some cases, just that it doesn't have to be a self-fulfilling prophecy. Harrison, group leader of the Computational Chemical Sciences Group at ORNL, is part of a team developing MADNESS (Multi-resolution ADaptive Numerical Evaluation for Scientific Simulation), a next-generation software code intended from the outset to run on petascale computers still under construction.

The idea behind MADNESS is to harness the full capacity of petascale machines to solve problems that require complex quantum mechanical calculations. It is designed to be particularly useful for solving multi-scale problems in not just two or three dimensions, but up to nine dimensions: calculations that would be unthinkable without massively parallel systems and a new application of quantum wave methodology. "Our goal at the outset was to be very open-ended," Harrison says. MADNESS therefore can run on existing systems but should not be limited by ever-expanding hardware capacity.

One of the first applications to use MADNESS will be a simulation of the chemistry of heavy elements. The goal, explains Harrison, is to understand how to separate and reprocess spent nuclear fuel, thereby reducing waste and increasing its useful life. Exploring new fuel separations plants is a strategic goal at DOE, but doing the thousands of required experiments on spent fuel would be a practical impossibility. Realistic simulations of the behavior of uranium, cesium and other radioactive isotopes in a separation scenario could reduce experimental work and increase understanding of the fundamental behavior of this class of elements, says Harrison.

"People say nanoscale is different," says Harrison. "Well, petascale is different. It allows you to harness the power of complexity. We are just at the beginning of learning what it can do."
COLLABORATORS

Doug Kothe is Director of Science for the National Center for Computational Sciences at Oak Ridge National Laboratory (ORNL). He is responsible for guiding the multidisciplinary research teams using the Center's leadership computing systems. Dr. Kothe has more than 20 years of experience in computational science research. His research interests and expertise have centered on developing physical models and numerical algorithms for simulating physical processes in the presence of incompressible and compressible fluid flow. Before joining NCCS, he was deputy program director for Theoretical and Computational Programs in the Advanced Simulation and Computing (ASC) Program at Los Alamos National Laboratory. He joined the technical staff at LANL in 1988 as a member of the Fluid Dynamics Group. Dr. Kothe received his bachelor's degree in chemical engineering from the University of Missouri-Columbia and his master's and doctoral degrees in nuclear engineering from Purdue University. He is author of more than 60 refereed publications and has written more than a half-million lines of source code.

Robert J. Harrison holds a joint appointment with ORNL and the chemistry department of the University of Tennessee, Knoxville. At Oak Ridge, he is leader of the Computational Chemical Sciences Group in the Computer Science and Mathematics Division. He has more than 75 publications in peer-reviewed journals in the areas of theoretical and computational chemistry and high-performance computing. He earned a bachelor's degree in natural science from Cambridge University, England, in 1981, and continued on there to earn a doctoral degree in organic and theoretical chemistry in 1984. He worked as a postdoctoral research fellow at the Quantum Theory Project, University of Florida, and the Daresbury Laboratory, England, before joining the staff of the theoretical chemistry group at Argonne National Laboratory in 1988. In 1992, he moved to the Environmental Molecular Sciences Laboratory of Pacific Northwest National Laboratory, conducting research in theoretical chemistry and leading the development of NWChem, a computational chemistry code for massively parallel computers. He started the joint faculty appointment with UT/ORNL in August 2002. In addition to his research into efficient and accurate calculations on large systems, he has pursued applications in molecular electronics and chemistry at the nanoscale. In 1999, the NWChem team received an R&D Magazine R&D 100 award, and Dr. Harrison received the IEEE Computer Society Sydney Fernbach award in 2002.

Contact:
Doug Kothe: [email protected]
Robert Harrison: [email protected]
Trey White: [email protected]
Richard Mills: [email protected]

PRACTICUM COORDINATOR
Jeff Nichols ...... [email protected]

Modeling an Earth-Shaking Event
By Victor D. Chase

“EARTHQUAKES DON’T KILL PEOPLE,” seismologist Arthur Rodgers says. “Buildings do.” “There are casualties in earthquakes because buildings collapse, freeway sections collapse, and bridges go out,” says Rodgers, a member of an earthquake modeling team at the Department of Energy’s Lawrence Livermore National Laboratory (LLNL). In essence, “We are vulnerable to earthquake damage because we choose to build and live near places where earthquakes occur.”

Figure: The propagation of shear waves (in red) through the three-dimensional model of the greater San Francisco Bay Area, looking southeast toward the Bay Area from the Pacific Ocean. The San Andreas fault surface is shown in gray and the coastline of northern California is shown in black. FIGURE 1A: 22.5 seconds after the start of the earthquake. FIGURE 1B: 30 seconds. FIGURE 1C: 60 seconds.

To help understand and prevent such devastation, a team of computational scientists, applied mathematicians and seismologists at LLNL has created an earthquake simulation model as part of a larger Serpentine Wave Propagation (SWP) project. The project, headed by applied mathematician Anders Petersson, looks at the propagation of waves in nature. Whether they're seismic, electromagnetic, or sound waves, they're all governed by essentially the same mathematical equations.

The team spent several years developing the advanced mathematics and algorithms necessary to run computer models of wave propagation. For its first practical application, the team used the software to model what happened during the most famous and damaging earthquake in U.S. history.

It began at 5:12 a.m. on April 18, 1906. San Francisco Bay area residents were awakened when the San Andreas fault, a 296-mile fissure beneath the Pacific Ocean running a few miles along the California coast and offshore, slipped. The displacement of a few meters along that fault line, where the Pacific and North American tectonic plates meet, was enough to set off one of the most monumental quakes in recorded history. After a 20-second foreshock, the full power of the quake was felt for about one minute. It would have measured 7.9 on the Richter scale, if the scale had existed then.

The shock was felt from Coos Bay, Oregon, to Los Angeles, and as far east as central Nevada. In all, the area of devastation was about 400 miles long and 30 miles on either side of the fault zone. The quake and the resulting four-day San Francisco fire killed about 3,000 people, left 225,000 homeless, and destroyed about 28,000 buildings.

Poorly Understood

Earthquakes were poorly understood and little studied before the San Francisco disaster. The 1906 event put an end to that and marked the beginning of the science of seismology in the U.S., giving rise to a more quantitative approach that applied physics and mathematics to the problem. Shortly after the quake, damage throughout the region was studied and quantified on the Modified Mercalli Intensity Scale. Unlike the Richter scale, the Mercalli scale does not require instrumentation. It rates a witness's impressions and physical damage to structures. Scientists can use Mercalli scale information to backtrack and determine the kind of ground velocities corresponding to the reported destruction.

The resulting report has proven invaluable to earthquake investigators, including the SWP team, which developed its computer simulation of the quake to mark its 100th anniversary.
Conducted under the leadership of the U.S. Geological Survey (USGS), the centennial research also involved scientists from Stanford University, the University of California at Berkeley, and URS Corp., a worldwide engineering firm. DOE's Office of Advanced Scientific Computing Research supported the SWP group's modeling work.

Each participating group created its own model of the quake using its own methods. The results were quite consistent, proving the exercise's value, Rodgers says. The exercise took almost two years to complete because of the complexity involved in creating a computer simulation of an earthquake. The rarity of major earthquakes in the 7.0 to 7.9 magnitude range means there is less empirical data about them.

The findings of the centennial study were presented at the 2006 meeting of the Seismological Society of America, which was held in San Francisco to commemorate the 100th anniversary of the famous quake, and of the society's founding.

To develop their simulations, each of the participating groups began with a USGS-created geological model of the greater San Francisco Bay area. The model characterized rock and soil properties and was developed from years of study of seismic data, drilling, and tomography to a depth of 50 kilometers. The data was crucial to creating a computerized picture of the earthquake in progress, because different types of earth respond dramatically differently to the spreading shockwaves.

To create its model, the SWP team used two LLNL-based supercomputers:

• MCR incorporates 2,300 processors and has a peak performance of about seven teraflops (trillion floating point operations per second).

• Thunder has 4,096 processors and a peak performance of about 21 teraflops.

Doing the Math

One reason creating an earthquake model is difficult is that "a continuous elastic body, such as the earth, has an infinite number of degrees of freedom corresponding to motion at each point in space," says Petersson, the SWP team leader. "So before the motion of a system with an infinite number of degrees of freedom can be calculated in a computer, we need to reduce the number of degrees of freedom to a finite, large number through a process called discretization."

In discretization, "All the derivatives in the partial differential equations are replaced by what are called divided differences, and that converts the original mathematical equation to a set of algebraic equations, and those can be solved in the computer," Petersson says. Without discretization, a computer cannot solve the original equations because they are too complicated, even for a supercomputer. "The computer can only deal with simple operations like addition, subtraction, multiplication and division. The partial differential equation involves very complicated relations between how the solution varies in space and time," says Petersson.

The group used a finite difference discretization method. It broke the area being modeled, known as the computational domain, into a series of equally spaced grid points. "In our calculations we used a grid spacing of 125 meters, which corresponds to something like 2.3 billion grid points," Petersson says. "At each of these grid points you have three degrees of freedom, so you get about 6.8 billion degrees of freedom. You reduce your infinite number of degrees of freedom to 6.8 billion," which "is still fairly large."

The earthquake's motion also is integrated in time, which likewise must be discretized into a manageable number of steps. The group took equal steps in time to simulate the first 300 seconds of earthquake motion. "Each of these steps is 0.01 seconds, so you take about 30,000 time steps in the calculation," Petersson says.
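As a scaled-down illustration of the approach Petersson describes (this sketch is not the SWP software, and the wave speed used is a generic illustrative value), the snippet below advances the one-dimensional wave equation with divided differences on an evenly spaced grid and equal time steps. The comments also spell out the bookkeeping that turns 125-meter spacing into billions of unknowns.

```python
import numpy as np

# 1D wave equation u_tt = c^2 u_xx with fixed ends, using the standard
# second-order divided-difference (leapfrog) scheme. The 3D elastic problem the
# SWP team solves is far richer, but the discretization idea is the same:
# with 125 m spacing over the Bay Area model, roughly 2.3e9 grid points times
# 3 displacement components gives the ~6.8e9 degrees of freedom quoted above,
# marched through 30,000 steps of 0.01 s to cover 300 s of shaking.
c = 3000.0            # shear-wave speed in rock, m/s (illustrative value)
dx = 125.0            # grid spacing, m
nx = 2000             # a 250 km line of grid points
dt = 0.01             # time step, s
nt = 30000            # 300 s of simulated motion

# Stability (CFL) check: this explicit scheme requires c*dt/dx <= 1.
cfl = c * dt / dx
assert cfl <= 1.0, f"unstable: CFL number {cfl} exceeds 1"

x = np.arange(nx) * dx
u_prev = np.exp(-((x - x.mean()) / 2000.0) ** 2)   # initial displacement pulse
u_curr = u_prev.copy()                              # start at rest

for _ in range(nt):
    u_next = np.zeros_like(u_curr)
    u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                    + cfl**2 * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
    u_prev, u_curr = u_curr, u_next

print("CFL number:", cfl, " max displacement at end:", np.abs(u_curr).max())
```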
The Freedom Is in the Details

But there is another factor that complicates the discretization process: stability. "If the method is unstable then perturbations due to round-off errors in the computer will accumulate during simulations, and they can make the computed result completely useless," Petersson says.

That's where the mathematician's expertise comes in. The key is to develop a mathematical theory that guarantees the stability of the process, "so you know before you start your calculation that it is not going to go unstable, or 'blow up,' as we also say," Petersson adds. To do so, "Mathematically, you study how perturbations propagate through such a calculation without actually computing the solution. You can then estimate how large these perturbations can become in a calculation."

This is accomplished the old-fashioned way. "With pen and paper you can analyze the properties of your numerical method. And there are various ways you can modify your finite difference method so there's not just one prescription, there's a lot of freedom in how you do all the details. So you've got to figure out the way to deal with all the details such that you can guarantee that it is stable. And that," Petersson says, "is the challenge."
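The pen-and-paper stability analysis Petersson mentions is typically a von Neumann (Fourier) argument. For the simple explicit scheme sketched above it runs as follows; this is the textbook version, not the SWP team's analysis of their full method.

```latex
% Insert a trial Fourier mode u_j^n = g^n e^{i k x_j} into the divided-difference
% scheme u_j^{n+1} = 2u_j^n - u_j^{n-1} + \lambda^2 (u_{j+1}^n - 2u_j^n + u_{j-1}^n),
% where \lambda = c\,\Delta t / \Delta x. The amplification factor g satisfies
\[
  g^2 - 2\Bigl(1 - 2\lambda^2 \sin^2\tfrac{k\Delta x}{2}\Bigr) g + 1 = 0 .
\]
% The two roots have product 1, so perturbations stay bounded (|g| = 1) exactly
% when the roots are complex conjugates, i.e. when
\[
  \lambda^2 \sin^2\tfrac{k\Delta x}{2} \le 1
  \quad\Longleftrightarrow\quad
  c\,\frac{\Delta t}{\Delta x} \le 1 ,
\]
% which is the CFL condition checked in the code sketch above. If it is violated,
% round-off perturbations grow geometrically and the computation ``blows up.''
```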
Earthquake in a Box

Rodgers describes the SWP modeling effort from a seismologist's perspective. "Think of it as though we built a box that represents the Bay area in three dimensions," he says, "and we put an earthquake in the box, and the earthquake sets the box in motion. By doing so we were able to put a simulated seismic station that measures the ground motions as a function of time at any place, which allowed us to compute the ground motion anywhere."

Modeling the San Francisco earthquake has significance beyond commemorating the event. The ability to create a computer simulation of such a complicated occurrence enables scientists to model other earthquakes before they happen. Architects and civil engineers can use data gathered from those models to design structures that withstand tremors. "Our data set of actual large earthquake shaking is limited," Rodgers says. "So this modeling effort is very important because it allows us to, in the safety of our computer, compute the shaking that would occur if an earthquake were to happen on a specific fault of a certain size within a certain geology."

What DOE's supercomputers and scientists can't do is predict exactly where or when earthquakes will strike. "We know that earthquakes are going to happen," Rodgers says. "The problem is that we have only been looking at earthquakes in detail for about 100 years. The return times of large earthquakes are hundreds, if not thousands of years, so we haven't got a statistical sample to allow us to do meaningful statistics."

Nonetheless, "In the Bay area the most likely next earthquake will be along the Hayward fault," he says. This supposition is based on geological studies indicating there have been 11 earthquakes along the fault at intervals averaging 140 years. The last such quake occurred in 1868, making 2008 the 140th year. Using modeling, the researchers "can put in a hypothetical Hayward fault magnitude 7 earthquake, and see what happens," Rodgers says. Although models cannot tell experts precisely where along the fault the earthquake will start or in what direction it will run, "We can do lots of simulations to look at how the ground motion might vary depending on those types of factors."

From the Soil Up

Such scenarios are valuable. Projections like these are especially needed to avoid accidents at the many nuclear power plants being considered to meet increasing worldwide energy demands. Seismic safety of nuclear power plants is guided by observed, as well as computed, ground motions. The same computer modeling also can be used to simulate potential damage should an earthquake affect nuclear storage facilities, such as the controversial Yucca Mountain site. "Then you would know how to design containers to withstand the possible motions of the Earth," says Petersson.

Now that the SWP team has created a program that models how earthquake waves propagate from the source through rocks and soil to the foundations of buildings, the next logical step will be to follow those waves up from the soil through complicated structures, such as nuclear power plants, airports, and bridges, to learn how they will respond to the shaking of an earthquake. "The foundations of buildings are embedded within soil so they need to be modeled together," Rodgers says. "We would like to be able to model what's called the soil/structure interactions."

Meanwhile, the SWP team recently made some waves of its own by receiving an internal award from the DOE's Energy and Environment Directorate for its work on the 1906 earthquake.

COLLABORATORS

Anders Petersson is an applied mathematician in the Applied Mathematics group in the Center for Applied Scientific Computing (CASC). His research interests lie in the areas of grid generation and numerical solution of partial differential equations. Dr. Petersson earned his doctoral degree in Numerical Analysis from the Royal Institute of Technology in 1991. He joined Lawrence Livermore National Laboratory in 1999.

Arthur J. Rodgers is a physicist and Group Leader of the Seismology Group in the Atmospheric, Earth and Energy Department at Lawrence Livermore National Laboratory. He received his Ph.D. and M.S. in Physics from the University of Colorado. Dr. Rodgers' research interests include computational seismology, earthquake ground motion simulation and nuclear explosion monitoring. He joined Lawrence Livermore National Laboratory in 1997.

Contact:
Anders Petersson: [email protected]
Arthur Rodgers: [email protected]

PRACTICUM COORDINATOR
Tom Epperly ...... [email protected]

Model Mixes Ice, Heat, Water and Salt to View the Ocean's Future
Looking to the Sea for Clues to the Climate
By Victor D. Chase

WITHIN THE NEXT SEVERAL DECADES, ice over the Arctic will completely disappear during the summer. That’s just one of the clear and dramatic predictions to come from models developed by the Climate Ocean and Sea Ice Modeling (COSIM) program at the Department of Energy’s Los Alamos National Laboratory.

Additionally, "The models now predict globally an average surface temperature rise of 2 to 4 degrees Celsius over the next 100 years," says the program's project manager, Philip Jones. And the oceans account for more than 70 percent of the Earth's surface.

Figure: Sea ice thickness (m) in September 1998 from an ocean-ice simulation. Graphics provided by Elizabeth Hunke.

Jones and his group of 12 researchers use the power of supercomputer simulations to study how Earth's oceans are affected by and influence global warming. Their specialty is modeling ocean circulation and how it affects both heat transport and sea ice up to 100 years into the future.

Results from his team and other research lead Jones to conclude that some aspects of climate change could take place more rapidly than previously anticipated. "There are amazing changes going on right now," he says.

The loss of summer Arctic sea ice provides a case in point. Satellite observations since 1978 show that the average annual sea ice shrinkage has been some 2.7 percent per decade, with summertime average losses of 7.4 percent. At some point in the not-too-distant future, the ice will become so thin that one warm summer, or as Jones puts it, "one pulse of warm water," will be enough "to cause a rapid transition to an ice-free state." This will happen, he anticipates, within 30 to 40 years.
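As a back-of-the-envelope companion to those trend numbers, the snippet below simply compounds the observed decadal loss rates forward. It is a crude extrapolation for scale only; COSIM's projections come from full ocean-ice simulations, not curve fits.

```python
# Compound the observed decadal loss rates forward (naive extrapolation).
annual_rate = 0.027      # average annual-mean extent loss per decade (observed)
summer_rate = 0.074      # average summer extent loss per decade (observed)

for decades in (1, 2, 3, 4):
    annual_left = (1 - annual_rate) ** decades
    summer_left = (1 - summer_rate) ** decades
    print(f"after {decades * 10:>2} years: "
          f"{annual_left:5.1%} of annual-mean extent, "
          f"{summer_left:5.1%} of summer extent remain")
```

A trend of this size by itself leaves roughly three-quarters of the summer ice in place after 40 years, which is why the 30-to-40-year prediction rests on thinning: the remaining ice becomes vulnerable to a single warm pulse rather than to gradual shrinkage alone.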
The loss of Arctic sea ice, dramatic as it may be, will not directly raise the sea level, since the ice already displaces water as it floats on the surface. The disappearance of sea ice can, however, indirectly affect sea level since the water temperature will rise, causing thermal expansion. This is significant because the oceans' huge mass makes them act like a heat sink, absorbing 80 percent of the climate's temperature increase.

The oceans also are slow to respond to a rise in global temperature, especially at depths below the top few hundred meters. "What that means is even if we stop putting greenhouse gases into the atmosphere today, we are committed to another half-degree or so of warming just because the oceans haven't caught up," Jones says.

Thermal expansion is expected to increase ocean levels by some 30 centimeters over the next 100 years. That's not a large increase, but it will affect some areas and will have secondary effects on things like storm surges. A much more dramatic rise in sea level would occur if huge land-based glaciers on Greenland and Antarctica melt, a phenomenon that's already being observed.

Glacial Decline

A recent report notes that the decline of glaciers and mountain ice caps in both hemispheres has already led to sea-level increases, and that doesn't include melting from Greenland and the Antarctic. That conclusion is based on the increased flow speed of ice draining from the interior of the ice sheets, with a corresponding increase in ice loss.

Because "Current climate models don't represent ice-sheet changes very well, we are in the process of putting together some new ice-sheet models to look at the sea-level rise issue," Jones says. The goal is "to figure out how rapidly that ice is going to melt, because if Greenland melts you get about six meters of sea level rise, and that is pretty significant."

Though not a cause for complacency, the time frame for such occurrences is on a geologic scale. Because Greenland's ice sheet, for example, is several kilometers thick, "The question is whether a complete melting will take a thousand years or a few hundred years," Jones says.

The Cause Is Us

Analyses of ice core samples dating back eons, combined with modern-day computer simulations, have firmly established the cause of global warming, and it is us. The power of the Cray high-performance computers the COSIM researchers use allows them to turn on or off various factors that may or may not contribute to global warming. That's let them confirm that greenhouse gases generated by burning fossil fuels cause climate changes. "We can turn off the industrial, human-caused carbon dioxide emissions and artificially pretend that humans didn't exist, and see what the natural forcing is," Jones says.

"When you do that, in the early part of the century you can explain some of what we see via natural warming, but anything from about 1975 onward can't be explained by any kind of natural forcing, either solar variability or any other natural process. Unless you include the human contribution to greenhouse gases you can't match the observations," he says. Based on that data, current climate change scenarios show a 1 to 2 percent increase in carbon dioxide per year, or a doubling over the next 100 years.

Figure: Sea surface temperature at 15 m depth from an eddy-resolving ocean simulation. Graphics provided by Mathew Maltrud.

Impact Scenarios

The DOE is interested in climate change because of its impact on the nation's energy portfolio. "The DOE is tasked with determining what are 'safe levels of carbon dioxide' in the atmosphere," says Jones. This information helps determine what will be used to generate power in the future: "Whether it's still going to be coal and oil or whether we need to put more emphasis on other forms of energy," he adds.

The COSIM group receives support from DOE's Office of Biological and Environmental Research and from DOE's Scientific Discovery through Advanced Computing (SciDAC) program, which matches mathematics and computational science with physical scientists in various areas. Climate modeling has been going on at Los Alamos for some 16 years, and Jones has been a part of it almost from the beginning.

Its work is part of the larger Community Climate System Model (CCSM), headquartered at the National Center for Atmospheric Research (NCAR), which is based in Boulder, Colorado, and supported by the National Science Foundation. The two groups jointly develop the model and use it to project climate change scenarios. As part of the CCSM collaboration, the Los Alamos group also feeds data to the Intergovernmental Panel on Climate Change (IPCC), which uses them in its influential climate change reports (see sidebar).

Jones says that since his group's inception, models have gotten good enough "that we feel much more confident in believing in what we are predicting." That's partly due to a collaboration with computational scientists at DOE's Oak Ridge and Lawrence Livermore national laboratories, who help ensure the COSIM team gets optimum performance from the Oak Ridge-based Cray computers they use for modeling.
The 2007 version is the panel’s that data, current climate change salt-laden water sinks, completing the Research (NCAR) which is based in studying how heat is transported from Contribution of Working Group I to the Fourth Assessment Report fourth, and is divided into three sections. CCSM’s contribution is of the Intergovernmental Panel on Climate Change [S. Solomon, scenarios show a 1 to 2 percent circulation and drawing more warm Boulder, Colorado, and supported the inside of a star to the surface. included in the “Physical Science Basis” section, prepared by D. Qin, M. Manning, Z. Chen, M. Marquis, K. B. Averyt, M. Tignor increase in carbon dioxide per water northward, warming the nearby by the National Science Foundation. Though the temperatures are quite Working Group I. The other two sections are “Impacts, Adaptation and H.L. Miller (eds.)]. Cambridge University Press, Cambridge, year, or a doubling over the next land mass. The two groups jointly develop the different, the fluid dynamics are and Vulnerability” and “Mitigation of Climate Change.” United Kingdom and New York, NY, USA, 996 pp. Also available 100 years. model and use it to project climate essentially the same. at: http://ipcc-wg1.ucar.edu/wg1/wg1-report.html. The concern has been that as the change scenarios. In its “Summary for Policymakers,” Working Group I writes that ocean surface warms and ice melts When he joined Los Alamos as a carbon dioxide is the most important greenhouse gas created by W. D. Collins, C. M. Bitz, M. L. Blackmon, G. B. Bonan, C. S. No Ice Age rather than solidifying, the added As part of the CCSM collaboration, post-doctoral fellow, Jones applied human activity: “Global atmospheric concentrations of carbon Bretherton, J. A. Carton, P. Chang, S. C. Doney, J. J. Hack, T. B. fresh water at the surface, having the Los Alamos group also feeds data his knowledge of fluid dynamics to dioxide, methane and nitrous oxide have increased markedly Henderson, J. T. Kiehl, W. G. Large, D. S. McKenna, B. D. Santer, as a result of human activities since 1750 and now far exceed Improved ocean and ice models have less salt content, would not sink. to the Intergovernmental Panel on study fluid motions inside the Earth. and R. D. Smith. The Community Climate System Model: CCSM3, pre-industrial values determined from ice cores spanning many J. Climate, 19, 2122-2143 (2006). let climate scientists put to rest at This could slow the ocean circulation Climate Change (IPCC), which uses “The similarity among these is that thousands of years. The global increases in carbon dioxide least one of the more ominous global and ironically have an ominous them in its influential climate change they all require high-end computation concentration are due primarily to fossil fuel use and land use J. B. Drake, P. W. Jones, and G. Carr Jr. Overview of the warming scenarios, in which changes chilling (rather than warming) reports (see sidebar). and the same fluid equations,” change, while those of methane and nitrous oxide are primarily Software Design of the Community Climate System Model. Int. in the Atlantic Ocean’s currents effect on Europe. he explains. due to agriculture.” Journ. High Perf. Comput. Application, 19, 187-201 (2005). have a dramatic cooling effect on Jones says that since his group’s Northern Europe. 
Such cooling Ironically, climate modeling shows inception, models have gotten good When he isn’t modeling the oceans The summary adds: “Warming of the climate system is unequivocal, Contact as is now evident from observation in global average and ocean Philip Jones brought on a cataclysmic ice age that though melting sea ice does slow enough “that we feel much more and sea ice or spending time with in the popular 2004 movie, The thermohaline circulation, the cooling confident in believing in what we his family, Jones entertains the public temperatures, widespread melting of snow and ice, and rising [email protected] global average sea level.” Day After Tomorrow, which was effect is largely counterbalanced by are predicting.” That’s partly due to with his trombone, which he has replete with scientists rushing global warming. As Jones puts it, a collaboration with computational played since fourth grade. He has Even if greenhouse gases held constant at 2000 levels, the report to model climate changes “The fact that the northern latitudes scientists at DOE’s Oak Ridge and an eclectic repertoire, and performs says, global warming would continue for the next two decades due PRACTICUM COORDINATOR on supercomputers. are warming faster than the rest of Lawrence Livermore national with the Los Alamos Symphony, the primarily to a lag in the response of the oceans. If greenhouse gas Aric Hagberg...... [email protected] the globe tends to overwhelm the laboratories, who help ensure Los Alamos Big Band (which plays emissions continue at or above current levels, continued warming Real-life scientist Jones, however, says fact that you are missing a little bit the COSIM team gets optimum ‘40s and ‘50s dance tunes), and a would “induce many changes in the global climate system during this scenario will not take place, at of ocean circulation. So we are less performance from the Oak Ridge-based brass quartet that plays everything the 21st century that would very likely be larger than those least not due to the driving force, worried about that scenario than we Cray computers they use for modeling. from baroque music to Dixieland jazz. observed during the 20th century.” These changes would likely known as thermohaline circulation, used to be. I guess that’s good news include more frequent heat waves, some heavy precipitation, and more intense tropical cyclones. depicted in the movie. in a sense, although I would not Clearly, Jones is a man for all want people to get complacent.” seasons and temperature climes. It’s certain the effects of greenhouse gas emissions will be felt far into the future, even if humans immediately stopped adding significant amounts to the atmosphere, but that need not spell doom. The warning signs are more than abundant, yet some scientists feel there is still time to act. How much has yet to be modeled.

DOE LAB RESEARCH Lawrence Berkeley | Sandia | Oak Ridge | Lawrence Livermore | Los Alamos | Pacific Northwest | Argonne

Harnessing Hundreds of Thousands of Processors
By Alan S. Brown

WHAT DO ANCIENT CHARIOTS and modern high-performance supercomputers have in common? Both started with a single engine pulling the load. With chariots, that motor was a horse. To go faster, drivers bred larger, stronger horses.

With supercomputers, it was a high-performance processor. To boost performance, engineers created bigger, faster, and more expensive processors.

The big breakthrough in chariots came when drivers harnessed teams of horses. By distributing the load among two, four, or even six horses, chariots could go faster and farther than any one horse could pull them. For this to work, however, drivers had to train their horses to start, stop, and turn together. Otherwise, each animal would bolt in a different direction during the tumult of battle.

Supercomputer architects made a similar breakthrough. They learned to break big calculations into smaller parts that multiple processors could solve simultaneously. Instead of a single processor, computers were composed of many networked processors, pushing calculation speeds into the stratosphere.

Parallel computers also slashed costs, since they replaced expensive custom processors with the same off-the-shelf chips used to run servers and workstations. The world's fastest computer, IBM's BlueGene/L, uses more than 130,000 IBM Power PC processors. A new model unveiled in 2007, the BlueGene/P, will use almost 900,000 processors.

This is where the similarity between chariots and high-performance supercomputers breaks down. The mind easily wraps around the task of training a handful of willful horses. It's simply a matter of training one horse to lead and the others to follow. But how do you yoke together hundreds of thousands of computer processors in a single harness? How do they share data, store information, or recover when one node fails?

"Our goal is to find a practical solution for those problems," says Jarek Nieplocha of the Department of Energy's Pacific Northwest National Laboratory (PNNL) in Richland, Washington.

"In academia, most people do research, write a paper, and then forget about it and move onto another problem," he explains. "In DOE national laboratories we're developing real applications here, and our goal is to create software that has an impact on science. We want to create something that benefits everyone."

Managing Memory

Memory is a key issue in supercomputers. In conventional PCs, each processor has its own local memory. In a parallel supercomputer, however, every processor has its own local memory, but also shares memory with the hundreds or thousands of other processors in the system. In large simulations, where the results of one set of calculations drive another and another and another, processors are constantly reading, writing, and exchanging data in memory.

Most supercomputers share information using an approach called Message Passing Interface (MPI). It's a complicated technique that Nieplocha compares to postal delivery.

"Let's say I want to send data," he begins. "So I put it in an envelope and drop it in the mailbox. The post office comes, picks up my letter, and delivers it to a post office in another city, which puts it in the right mailbox. Then someone goes to the mailbox and picks up the letter."

In computers, "this two-sided process requires handshaking: both the sender and the receiver need to know what's being moved and agree on when to do it. It is laborious, expensive, and difficult to program this type of communication."

Nieplocha's alternative is an approach called Global Arrays. "You don't have to send data because it is visible to every processor without handshaking," he says.

This works because users first define where all their data will be within a matrix, or array. The array itself is a logical object — that is, software sees it as a single entity even though portions of the array (and the data it holds) may reside on hundreds or even thousands of different memory chips. As long as users know which part of the array holds their data, they can access it without complex handshaking and tracking routines.

Shared memory, however, comes at a price. "One-sided sharing is like accessing somebody else's mailbox without involving the post office, so everybody has to know where everything is," Nieplocha says. "That means Global Arrays must maintain indexes to track the physical location of the data, and employ communications techniques that optimize how the data flows between processors."

The payoff, however, is huge. Instead of spending time describing handshaking routines for thousands of memory locations, programmers can access shared memory the same way they would on an individual PC. They don't even need to know the underlying mechanics of memory manipulation.
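The contrast Nieplocha draws can be seen in a few lines of code. The sketch below was written for this article, not drawn from PNNL software: it pairs a standard two-sided MPI send and receive (the "postal" exchange) with a one-sided MPI_Put into memory that another rank has exposed. Global Arrays layers a distributed-array interface on top of one-sided operations of exactly this flavor.

```c
/* mail_vs_shared.c -- illustrative sketch contrasting two-sided message
 * passing with one-sided access.  Build with an MPI C compiler, e.g.
 * "mpicc mail_vs_shared.c", and run on two ranks. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;
    double letter = 0.0, window_buf = 0.0;
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Two-sided ("postal") exchange: both sides must participate. */
    if (rank == 0) {
        letter = 42.0;
        MPI_Send(&letter, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);    /* drop in mailbox */
    } else if (rank == 1) {
        MPI_Recv(&letter, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                                /* pick up letter  */
        printf("rank 1 received %g via send/recv\n", letter);
    }

    /* One-sided access: rank 0 writes directly into memory rank 1 has
     * exposed, with no matching receive -- the style of sharing that
     * Global Arrays builds on. */
    MPI_Win_create(&window_buf, sizeof(double), sizeof(double),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);
    MPI_Win_fence(0, win);
    if (rank == 0) {
        double value = 3.14;
        MPI_Put(&value, 1, MPI_DOUBLE, 1, 0, 1, MPI_DOUBLE, win);
    }
    MPI_Win_fence(0, win);
    if (rank == 1)
        printf("rank 1 sees %g written one-sidedly\n", window_buf);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```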
Nieplocha and colleague Chris Oehmen recently put Global Arrays to work in BLAST, a bioinformatics program. BLAST lets researchers match snippets of newly-sequenced DNA or protein structures with known genetic information. In August 2006, Nieplocha and Oehmen published a paper showing that their new global array-based ScalaBLAST software is the most scalable and highest-performing parallel implementation of the BLAST algorithm.

On a recent benchmark, we demonstrated that the per-processor throughput for analyzing biological sequence data using ScalaBLAST is essentially constant whether one uses 8 or 1,800 processors. This scaling characteristic is due, in part, to efficient memory management enabled by Global Array features for hiding latency and replacing repeated global file I/O with memory operations. This same approach has been used to perform trillions of pairwise homology calculations in a single day. Figure from C. Oehmen, J. Nieplocha, "ScalaBLAST: A scalable implementation of BLAST for high-performance data-intensive bioinformatics analysis." IEEE Transactions on Parallel and Distributed Systems, 2006:17(8), 740-749.

Smarter Memory

With so many processors yoked together in modern supercomputers, the researcher explains, the speed of an individual processor hardly matters. Instead, the most common impediment to high-performance computing is the speed at which the network shuttles data between memory, processors, and hard disk storage.

By doing away with time-consuming handshaking, Global Arrays let data flow smoothly along the network to tens of thousands of thirsty processors. But what if supercomputers could process some types of data without even taking them out of storage?

Storage on computers means hard drives, which act like file cabinets to stockpile information. When computers need the data, they go to the cabinet, open a file, read it, and do something with the data. Then they write the results onto another file in the cabinet.

But what if the filing cabinet were smarter? What if you could write measurements in feet and inches, but pull out the same data in meters and centimeters when you needed metric units?

Better yet, suppose you wanted to study rainfall at certain elevations. Ordinarily, the supercomputer would have to send all the rainfall data to a processor to sort out the information you want. But suppose your hard drive did the sorting for you? This would reduce the flow of unwanted data over the network and also cut the time processors waste on doing menial calculations.

Visionaries first suggested the concept, called active or intelligent disks, 20 years ago. They proposed using small processors to run simple calculations on stored data. Hard drive manufacturers rebelled at the idea. They did not want anyone tinkering with drive electronics, since it might cause unexpected failures.

In 2005, the PNNL team found a way to do those calculations using another part of the computer that's usually better left alone: the kernel. The kernel contains the core functions of the computer operating system. Like brain surgery, tinkering with the kernel is both delicate and costly. "In the long term, such modifications are too complex to be practical," Nieplocha says.

One year later, however, the group developed a way to achieve the same results with software that intercepts requests for data as they move through the network. "Now we don't have to modify the operating system or any system software," Nieplocha explains. "And we can also do some types of processing, such as floating point computation, that you cannot do in the kernel." The group is testing the approach on bioinformatics software. "We still have a few issues, but there are no major research challenges," he concludes.

Soft Landings

In addition to harnessing the power of healthy supercomputers, Nieplocha tries to prepare for the unavoidable failures. While the individual components of any supercomputer — processors, circuit boards, hard drives, power supplies — are highly reliable, the sheer numbers of components used in high-end systems make failure a certainty.

Supercomputer failures carry a high price in time and money. A single simulation may take months of preparation. Researchers may wait weeks for computer time. And some big simulations take days or even weeks to run.

"Most applications running on supercomputers are not fault-tolerant," Nieplocha says. "If a hard drive or memory chip fails, users typically lose their data and have to restart from scratch."

One solution is to save the work periodically. PC word processors and spreadsheets do this automatically by making mirror images of documents. Saving a simulation that sprawls across thousands of different nodes and processors is far more complex. "You have to think about what to save and where and when to save it. The applications are very complex, going through different types of stages, and some points simply cannot be saved," Nieplocha explains.

To resolve these issues, Nieplocha's team uses virtualization, an approach that traces back to mainframe computer days. It involves slipping a layer of software, called a hypervisor, between the hardware and the operating system that controls it. Instead of delivering instructions to the computer, the operating system talks to the hypervisor, which whispers those instructions to the hardware.

This sounds inefficient, but recent advances have made hypervisors much more economical and, for the first time ever, worthwhile, Nieplocha says. "Today, using a modern hypervisor, we will periodically stop each node and map the application state and operating system memory. Our software writes the image to a hard drive, then moves on to the next node. If a node fails, we will sense the problem, retrieve the saved image, and mount it on a healthy node so that the application can continue from the last checkpoint," Nieplocha says.

"People have worked on this problem for some time, but the results were specific to individual computers or basic academic research," he adds. "People describe how they made it work, write a paper and move on to other challenges. We're creating a more practical solution that can manage multiple nodes automatically and reconfigure the system if there is a failure."

Nieplocha's team understands that virtualization degrades computer speed, though they believe their technique makes the slowdown negligible. Given the high cost of failure, they believe most users would prefer a minor delay to a failure that leaves them without critical results for months at a time.
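The save-and-resume pattern itself is simple, even if doing it for an entire node beneath the operating system is not. The toy program below is an illustrative sketch written for this article, not the PNNL hypervisor-based system; it shows only the application-level analogue, and the file name, step count and interval are made up for the example.

```c
/* checkpoint_demo.c -- minimal application-level checkpoint/restart sketch.
 * Illustrative only: the approach described above snapshots a whole node
 * beneath the operating system; this toy just shows the save/resume idea. */
#include <stdio.h>
#include <stdlib.h>

#define CKPT_FILE "state.ckpt"   /* hypothetical checkpoint file name */
#define TOTAL_STEPS 1000000L
#define CKPT_INTERVAL 100000L

int main(void)
{
    long step = 0;
    double state = 0.0;

    /* On start-up, resume from the last checkpoint if one exists. */
    FILE *f = fopen(CKPT_FILE, "rb");
    if (f) {
        if (fread(&step, sizeof step, 1, f) == 1 &&
            fread(&state, sizeof state, 1, f) == 1)
            printf("resuming at step %ld\n", step);
        fclose(f);
    }

    for (; step < TOTAL_STEPS; step++) {
        state += 1e-6 * step;              /* stand-in for real work */

        if (step % CKPT_INTERVAL == 0) {   /* periodically save progress */
            FILE *out = fopen(CKPT_FILE, "wb");
            if (!out) { perror("checkpoint"); exit(1); }
            fwrite(&step, sizeof step, 1, out);
            fwrite(&state, sizeof state, 1, out);
            fclose(out);
        }
    }
    printf("final state: %g\n", state);
    return 0;
}
```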
Fault tolerance was never much of a problem in a chariot, but then, chariots were relatively simple machines. Modern high-performance supercomputers are anything but simple. With their cluster-based architecture, they promise an unprecedented combination of flexibility and power. One day they may help us produce pollution-free fusion energy or develop new genetically based medications. The goal of Jarek Nieplocha and his team is to make that power and flexibility more easily accessible to those who need it. It's just a matter of harnessing all those processors so they pull in the same direction.

JAREK NIEPLOCHA

Jarek Nieplocha is a Laboratory Fellow and Deputy Director of the Computational Sciences and Mathematics Division of the Fundamental and Computational Science Directorate at Pacific Northwest National Laboratory (PNNL). He is also the Chief Scientist for High Performance Computing in the Computational Sciences and Mathematics Division, and he leads the Advanced Computing Technology Laboratory at PNNL. His area of research has been in collective and one-sided communication on modern networks, runtime systems, parallel I/O, scalable programming models, multithreading architectures and component technology for scientific computing. He has led development of Global Arrays, a portable shared memory programming toolkit widely used in scalable computational chemistry applications and other areas, and the ARMCI runtime system for global address space programming models.

ScalaBLAST gives a sizeable performance boost over BLAST, a conventional sequence analysis tool. http://picturethis.pnl.gov/PictureT.nsf/webpages/Search

Further Reading:

S. Benson, M. Krishnan, L. C. McInnes, J. Nieplocha, J. Sarich. Using the GA and TAO toolkits for solving large-scale optimization problems on parallel computers. ACM Trans. Math. Softw. 33(2): 11, 2007.

C. Oehmen and J. Nieplocha. ScalaBLAST: A scalable implementation of BLAST for high-performance data-intensive bioinformatics analysis. IEEE Trans. Parallel Distributed Systems, special issue on high-performance computational biology, Vol. 17, No. 8, 2006.

M. Krishnan, S. J. Bohn, W. E. Cowley, V. L. Crow, and J. Nieplocha. Scalable Visual Analytics of Massive Textual Datasets. Proc. IEEE International Parallel and Distributed Processing Symposium, 2007.

J. Nieplocha, B. Palmer, V. Tipparaju, M. Krishnan, H. Trease, and E. Apra. Advances, Applications and Performance of the Global Arrays Shared Memory Programming Toolkit. International Journal of High Performance Computing and Applications, 20(2), 2006.

D. P. Scarpazza, P. Mullaney, O. Villa, F. Petrini, V. Tipparaju, D. Brown, and J. Nieplocha. Transparent System Level Migration of PGAS Applications using Xen on InfiniBand. Proceedings of the IEEE International Cluster Conference 2007.

Contact:
Jarek Nieplocha
[email protected]

PRACTICUM COORDINATOR
Stephen Elbert ...... [email protected]

STANDARDS MAKE ADVANCED COMPUTING FASTER — AND EASIER

>> Nearly all PC software conforms to standards. This is why software components written using a wide variety of languages — from C++ and Java to Visual Basic and Perl — all run and exchange data with one another on the same PC.

"PC components work with each other all the time," notes Jarek Nieplocha of Pacific Northwest National Laboratory (PNNL). "If you open a Word document with an embedded graph, the underlying component technology understands that the graph is an Excel object. When you click on the graph, it opens Excel. These programs know about each other and work together."

In recent years, the Common Component Architecture (CCA) community has been adopting this technology for scientific supercomputing.

"There are supercomputer chemistry applications that need to perform some sort of optimization, such as a minimum energy function," Nieplocha explains. "These optimization routines are readily available in science libraries, but in the past programmers had to struggle with data structure and code to adapt them to their chemistry program.

"Today, we're seeing common interfaces that let us glue those two programs together so they work like a single application. If someone develops a better routine, they could swap it for the old one in a completely transparent way and it would work," Nieplocha says.

The PNNL team is working with ways to extend Common Component Architecture to specialized hardware accelerators, such as field programmable gate arrays (FPGAs) and graphics processors. While not as flexible as conventional processors, these specialized chips excel at accelerating repetitive calculations.

"We want to hide the complexity of moving data onto the accelerator, so programmers have a simple way to take advantage of its speed," Nieplocha says.
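The "glue" Nieplocha describes is an agreed-upon interface that any conforming component can implement. The sketch below is a schematic illustration in plain C written for this article; it is not the Common Component Architecture API. The hypothetical Optimizer table of function pointers stands in for such an interface: an application that codes against it can swap one optimization routine for another without touching the calling code.

```c
/* component_sketch.c -- schematic illustration of component-style interfaces.
 * Any optimizer that fills in this table of function pointers can be swapped
 * in without changing the caller.  Not the actual CCA API. */
#include <stdio.h>

/* The common interface the application codes against. */
typedef struct {
    const char *name;
    double (*minimize)(double (*f)(double), double x0, int iters);
} Optimizer;

/* One interchangeable implementation: naive fixed-step gradient descent. */
static double descend(double (*f)(double), double x0, int iters)
{
    double x = x0, h = 1e-4, step = 0.1;
    for (int i = 0; i < iters; i++) {
        double grad = (f(x + h) - f(x - h)) / (2.0 * h);  /* finite difference */
        x -= step * grad;
    }
    return x;
}

/* A stand-in "energy function" with its minimum at x = 3. */
static double energy(double x) { return (x - 3.0) * (x - 3.0) + 1.0; }

int main(void)
{
    Optimizer opt = { "fixed-step descent", descend };   /* swappable component */
    double xmin = opt.minimize(energy, 0.0, 1000);
    printf("%s finds minimum near x = %g\n", opt.name, xmin);
    return 0;
}
```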

DOE LAB RESEARCH Lawrence Berkeley | Sandia | Oak Ridge | Lawrence Livermore | Los Alamos | Pacific Northwest | Argonne

The New Paradigm of Petascale Computing
By Karyn Hede

WHAT WOULD HAPPEN if a magnitude 8.0 temblor wrenched the earth beneath a nuclear power plant? Can nano-scale technology be used to build a new generation laser?

From planning for the worst case to designing for the best case, there are some questions that we would like to answer without having to find out the hard way. No one wants to experience a nuclear reactor failure firsthand, or find out that the expensive nanoscale materials they designed won't work at the macroscale. Scientists would like to be able to use computer simulations to predict behaviors of large systems. But simulating a devastating earthquake or a nanostructure assembly has been hampered by the sheer complexity of the problems.

These kinds of multi-scale simulations require tremendous computational power, says Rick Stevens, associate laboratory director for Computing and Life Sciences at the Department of Energy's Argonne National Laboratory. Stevens leads Argonne's advanced computing initiative, targeting the development of petaflop computing systems capable of handling the kinds of computations multi-scale problems require.

The Department of Energy has tapped Argonne to become its second Leadership Computing Facility by early 2008, when the world's first IBM BlueGene/P class system comes on-line. The system, capable of petaflop performance, will be available for open science and engineering applications granted computing time through DOE's Innovative and Novel Computational Impact on Theory and Experiment (INCITE) program.

Stevens says the Leadership Computing program represents more than another leap in computational power. It is, quite simply, a new paradigm for doing science.

"This is different from the way supercomputing centers traditionally are run, where you might have dozens or more users on the system simultaneously," he says. "With the Leadership Computing Systems, there might be only one or two groups using the system at any one time. But what they are doing is very hard. They might be running across the whole machine, which is always interesting, because of the sensitivity to machine 'hiccups,' and these big runs generate terabytes of data so real-time data management is essential. It requires a full-time team working around the clock to manage it all. Then at the end, another team helps the users analyze their data."

The support team at Argonne will form long-term partnerships with the scientific teams as projects go to petascale on BlueGene/P and researchers begin to analyze results. But, equally important, teams of Argonne researchers will work to ensure that when the scientific applications are ready, the accompanying software will be up to the task.

BlueGene/P Short List

Stevens has a short list of applications that are likely to be the first to access the BlueGene/P system. One such application is the earthquake simulations led by Arthur Rodgers, Anders Petersson and David McCallen of Lawrence Livermore National Laboratory (LLNL).

"There's a multistep process to get codes ready for the BlueGene/P," says Stevens. "If they are already running on the BlueGene/L then they are in pretty good shape." He says there are currently about 100 applications that run on BlueGene/L, ranging from astrophysics to chemistry to economics. But some application scientists will need to modify their software to get ready for the BlueGene/P.

"Ideally, one would like to be able to model how an earthquake fault might rupture, how those seismic waves may propagate through the earth, and how they may arrive at your site," says McCallen, LLNL division leader in nonproliferation, homeland and international security. Historically, such predictions have been made based on past earthquakes and then extrapolated to predict future earthquake damage. But these methods have limitations. "You can make better predictions about the ground movements at a given site from physics and what we call 'first principles.'" The problem is that combining data from subsurface geology and earthquake fault models has been prohibitive because there simply has not been enough computational power available to do the simulations.

As the U.S. searches for new sources of energy, there is renewed interest in building new nuclear power plants. But safety concerns about earthquake hazards, particularly in the western U.S., require detailed studies of potential nuclear plant sites.
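Simulating how seismic waves propagate through the earth, as McCallen describes, comes down to applying a wave-equation update at every point of an enormous three-dimensional grid, over and over, for millions of time steps. The toy program below is an illustrative one-dimensional stencil written for this article, with made-up material values; it is not the Livermore code, but it shows the kind of per-point, per-step work that must be distributed across tens of thousands of processors.

```c
/* wave1d.c -- a toy 1-D seismic-wave stencil (illustrative only).
 * Real earthquake codes apply updates like this in 3-D, with realistic
 * earth models, across massive parallel machines. */
#include <stdio.h>
#include <math.h>

#define N 2000                        /* grid points */

int main(void)
{
    static double u_prev[N], u[N], u_next[N];
    const double c = 3000.0;          /* wave speed, m/s (illustrative) */
    const double dx = 10.0;           /* grid spacing, m */
    const double dt = 0.8 * dx / c;   /* time step satisfying the CFL condition */
    const double r2 = (c * dt / dx) * (c * dt / dx);

    /* Initial condition: a Gaussian displacement pulse near the "source". */
    for (int i = 0; i < N; i++) {
        double x = (i - N / 4) * dx;
        u[i] = u_prev[i] = exp(-x * x / (2.0 * 200.0 * 200.0));
    }

    for (int step = 0; step < 4000; step++) {
        for (int i = 1; i < N - 1; i++)       /* second-order explicit stencil */
            u_next[i] = 2.0 * u[i] - u_prev[i]
                      + r2 * (u[i + 1] - 2.0 * u[i] + u[i - 1]);
        u_next[0] = u_next[N - 1] = 0.0;      /* simple rigid boundaries */
        for (int i = 0; i < N; i++) { u_prev[i] = u[i]; u[i] = u_next[i]; }
    }

    printf("displacement at receiver: %g\n", u[3 * N / 4]);
    return 0;
}
```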

"This is an application that demands as much computer crunching and computer power as you can muster up," he says. "It's a heavily three-dimensional computationally intensive problem. Even today, our biggest computers will be taxed by this application. That's a good example of why there is a motivation to use the emerging leadership computing facilities at Argonne."

With the additional computing power, McCallen would like to construct fault models of an area of 50 to 100 kilometers across by 80 kilometers deep and then combine that data with subsurface geologic models to simulate what the ground motion might be at a given site for different earthquake scenarios. To do that, the group uses the Wave Propagation Program (WPP), developed through laboratory-directed research at Lawrence Livermore, a program that is expected to scale up to petascale relatively easily, since it has already proven it runs efficiently on a massively parallel computing architecture.

An Argonne group, led by Bob Hill, is working on next-generation nuclear power plant design, so there is a natural link between the groups, says McCallen.

Similarly, Argonne is teaming with François Gygi at the University of California at Davis and his research team to examine the fundamental physical nature of nanomaterials that gives them unique properties.

One of the biggest challenges in nanoscience, according to Gygi, is learning how to control nanostructure assemblies. The scientists want to understand and control the interacting nanoscale building blocks used to design new materials. The computational challenge, then, is to predict interface properties at the nanoscale.

The application runs on Qbox, a program built on a Message Passing Interface (MPI) framework and calculating first principles molecular dynamics. It is one of a handful of programs that have already demonstrated the ability to take full advantage of Leadership Computing systems.

In 2007, Qbox set the world record for floating-point performance by achieving a sustained performance of 207 teraflops on the LLNL BlueGene/L 65,536-node system. That corresponds to 56.5 percent of the theoretical peak using all 128,000 CPUs. Gygi's team simulated the electronic structure of molybdenum, a high-Z or heavy metal that is important to the National Nuclear Security Administration's (NNSA) Stockpile Stewardship Program. In this case, the study was the first step toward simulating the effects of aging on nuclear materials.

Gygi, in collaboration with Giulia Galli (also at UC Davis), wants to use BlueGene/P to study silicon dots embedded in silicon nitride and carbon nanotubes in solution. The investigation will focus on whether it is possible to build a silicon laser out of dots assembled on a matrix.
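Qbox computes its forces from first-principles electronic structure, which is far beyond a few lines of code, but the outer loop of any molecular dynamics program has the same shape. The sketch below is a deliberately simplified classical analogue written for this article (a single particle on a harmonic "bond," not Qbox): it shows only the integrate-forces-positions cycle that such codes repeat for every atom at every time step.

```c
/* md_step.c -- a toy classical molecular-dynamics loop (velocity Verlet)
 * for a single 1-D harmonic "bond", in reduced units.  Illustrative only:
 * first-principles codes obtain the forces from electronic structure. */
#include <stdio.h>

int main(void)
{
    double x = 1.0, v = 0.0;             /* position and velocity */
    const double m = 1.0, k = 1.0, dt = 0.01;
    double a = -k * x / m;               /* acceleration from current force */

    for (int step = 0; step < 1000; step++) {
        x += v * dt + 0.5 * a * dt * dt; /* update position                 */
        double a_new = -k * x / m;       /* recompute force at new position */
        v += 0.5 * (a + a_new) * dt;     /* update velocity                 */
        a = a_new;
    }
    printf("x = %g, v = %g after 1000 steps\n", x, v);
    return 0;
}
```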
Future Programming

Capitalizing on their strengths in scientific computing, teams of Argonne researchers are already working on next-generation programming. Current linear solvers, a key component of scientific software, are reaching their limit in scalability, says Stevens. As part of DOE's SciDAC-funded project "Towards Optimal Petascale Simulations" (TOPS), Argonne researchers, led by Lois McInnes, are working to relieve the linear solver bottleneck.

Similarly, through SciDAC's Center for Technology for Advanced Scientific Component Software, Argonne scientists are working to expand the software toolbox for scientific discovery to include programs that act as discrete components with unique capabilities, but that also work together seamlessly. The idea is to build a component ecosystem, a community of interacting units that work together and provide feedback to each other in a self-regulating way.

Stevens says that most of these projects are in their beginning phases and there are many places where a DOE CSGF fellow could plug into the organization. "It's an exciting time to be starting a career in scientific computing," Stevens says. "Depending on the career interests of the person, there are opportunities to become a co-developer, a discipline scientist, a numerical analyst, a visualization expert. It's all open right now. The structure of leadership computing means that you can have close collaborations between teams over time. That's really the vision and the promise of this approach."

RICK STEVENS

Rick Stevens is Argonne's Associate Laboratory Director for Computing and Life Sciences. The Computing and Life Sciences directorate is made up of the Biosciences Division, the Leadership Computing Facility, and the Mathematics and Computer Science Division, along with the Computation Institute and the Institute for Genomics and Systems Biology.

Stevens has been at Argonne since 1982, and has served as director of the Mathematics and Computer Science Division and also as Acting Associate Laboratory Director for Physical, Biological and Computing Sciences. He is currently leader of Argonne's Petascale Computing Initiative, Professor of Computer Science and Senior Fellow of the Computation Institute at the University of Chicago, and Professor at the University's Physical Sciences Collegiate Division. From 2000-2004, Stevens served as Director of the National Science Foundation's TeraGrid Project and from 1997-2001 as Chief Architect for the National Computational Science Alliance.

Stevens is interested in the development of innovative tools and techniques that enable computational scientists to solve important large-scale problems effectively on advanced scientific computers. Specifically, his research focuses on three principal areas: advanced collaboration and visualization environments, high-performance computer architectures (including grids) and computational problems in the life sciences, most recently the computational problems arising in systems biology. In addition to his research work, Stevens teaches courses on computer architecture, collaboration technology, virtual reality, parallel computing and computational science.

Contact:
Rick Stevens
[email protected]

PRACTICUM COORDINATOR
Raymond Bair ...... [email protected]

A great challenge to modeling large damaging earthquakes is the fact that it is difficult to know where slip will occur along the fault before the event happens. The figure shows ground motions computed with the WPP code at two points (indicated by triangles) above a hypothetical magnitude 7.0 earthquake for three different earthquake rupture models (star indicates the hypocenter, fixed for all three models). The ground motions (colored for different rupture models) show variability due to variations in the slip along the fault. The LCF will allow us to run these simulations at higher resolution, resulting in more realistic, higher-frequency results, and to sample the range of possible rupture models. Image provided by Arthur Rodgers, Lawrence Livermore National Laboratory.

CHECKING OUT CODE AT ARGONNE'S NUMERICAL LIBRARIES

>> One of the thorniest issues in petascale computing is how to ensure that the software applications running on the machines measure up. Code developers typically don't write new code from scratch when a new machine arrives on the scene, says Rick Stevens, associate laboratory director for Computing and Life Sciences at Argonne National Laboratory.

"We think of these codes as having a family tree," he says. "We've been working on the software problem for parallel computing for over 20 years. We've created scalable numerical libraries that solve a lot of the problems scientists run into in getting these codes to work in a parallel computing environment."

What's developed over the years is a suite of dependable open-source software that is available for scientists to plug into their problem without having to "reinvent the wheel." And because the code developers are in many cases still available for consultation, Argonne's numerical libraries repository has become one of the go-to places when a homegrown solver isn't working.

Former DOE CSGF fellow Allison Baker, computational scientist at Lawrence Livermore National Laboratory, spent a summer at Argonne in 1999 working with an Argonne-developed library, PETSc, a suite of data structures and routines for the scalable, parallel solution of scientific applications modeled by partial differential equations. Her summer learning experience set the stage for a portion of Baker's thesis work at the University of Colorado, where she developed a new linear solver algorithm that reduced data movement through machine memory to gain efficiency.

"My experience with PETSc taught me a lot about the importance of thinking carefully about implementation details when developing software and about writing good code in general," she says. "It also made me realize the importance of having quality software tools available to the scientific community so that researchers can spend more time on their specific area of expertise and less time on code development."
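Plugging a problem into a library like PETSc looks roughly like the sketch below. It is a minimal illustrative example written for this article (a one-dimensional Laplacian stand-in, not Baker's solver), with error checking omitted; exact call signatures, such as that of KSPSetOperators, vary somewhat between PETSc releases.

```c
/* petsc_solve.c -- minimal sketch of handing a linear system to PETSc.
 * Illustrative only; compile against a PETSc installation with its
 * usual makefile machinery. */
#include <petscksp.h>

int main(int argc, char **argv)
{
    Mat A; Vec x, b; KSP ksp;
    PetscInt i, n = 100;

    PetscInitialize(&argc, &argv, NULL, NULL);

    /* Assemble a tridiagonal (1-D Laplacian) test matrix. */
    MatCreate(PETSC_COMM_WORLD, &A);
    MatSetSizes(A, PETSC_DECIDE, PETSC_DECIDE, n, n);
    MatSetFromOptions(A);
    MatSetUp(A);
    for (i = 0; i < n; i++) {
        MatSetValue(A, i, i, 2.0, INSERT_VALUES);
        if (i > 0)     MatSetValue(A, i, i - 1, -1.0, INSERT_VALUES);
        if (i < n - 1) MatSetValue(A, i, i + 1, -1.0, INSERT_VALUES);
    }
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);

    /* Right-hand side and solution vectors. */
    VecCreate(PETSC_COMM_WORLD, &b);
    VecSetSizes(b, PETSC_DECIDE, n);
    VecSetFromOptions(b);
    VecDuplicate(b, &x);
    VecSet(b, 1.0);

    /* Hand the system to a Krylov solver; the method and preconditioner
     * can be chosen at run time, e.g. -ksp_type cg -pc_type jacobi. */
    KSPCreate(PETSC_COMM_WORLD, &ksp);
    KSPSetOperators(ksp, A, A);
    KSPSetFromOptions(ksp);
    KSPSolve(ksp, b, x);

    KSPDestroy(&ksp); VecDestroy(&x); VecDestroy(&b); MatDestroy(&A);
    PetscFinalize();
    return 0;
}
```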

ALUMNI PROFILE
MARY BIDDY

Mary Biddy's assignment sounds like a task from MISSION: IMPOSSIBLE: understand the inner workings of a "black box" — a device that takes a known input, does something mysterious to it, and yields a different output.

The "black box" is third-party software with proprietary modeling algorithms that are hidden. Biddy feeds in data and scans the results.

"If I'm trying to run a simulation and the data is not converging, then I look at the variables and try to figure out why," she says. "But there is always a part of the code that I can't see."

A year ago, Biddy received her doctoral degree in chemical engineering from the University of Wisconsin after researching vegetable-based lubricating oils. Today, she is a senior research engineer in the Aromatics and Acetyls Division of BP plc, one of the world's largest oil companies.

The black boxes are part of mammoth models used to optimize the operation and profitability of nearly 20 aromatic chemical plants worldwide.

It's a complex problem. Each plant operates somewhat different equipment. The cost of utilities fluctuates daily. So does the composition of chemical raw materials, which vary with the type of oil the refinery processes that day. Sometimes equipment breaks down, so plants need to find alternate routes to make the same products. Most dauntingly, BP's optimization model works in real time, adjusting to each new fluctuation in customer orders, prices, chemical inputs, and available equipment.

The model's backbone was supplied by Aspen Technologies, which specializes in oil and chemical simulations. It also includes BP-developed models. Biddy has full access to BP's models, but Aspen's models are a trade secret.

When confronted with simulation problems, Biddy falls back on skills she learned in graduate school. "You can use logic to figure out why you're not getting the right answer," she says, "but you have to do it methodically and try to understand what's important for each component of the model."

Tickling the secrets out of massive models is a far cry from Biddy's original plans. Although she studied chemical engineering at Texas A&M University in Kingsville, she planned to become an environmental lawyer. Then she took an internship at Johnson Polymers, a subsidiary of SC Johnson & Son Inc. "My mentor thought I was naturally curious and would enjoy the research side of engineering," Biddy recalls.

In graduate school, she pursued her environmental interests by looking at ways to make lubricating oils from renewable plants. "I found several ways to improve their physical properties, but the modifications reduced the oils' biodegradability," she says.

Biddy used molecular modeling to predict how modifications changed the physical properties of biolubricants — an approach she might not have taken were it not for her DOE CSGF. "The fellowship required that I take math and computer science courses, such as a class on algorithm development, that would not have been part of the normal curriculum," she explains. "It gave me a more rounded education."

Biddy eventually chose to join BP because of its strong commitment to green technology and the challenging work it offered — even if her mission meant finding the possibilities in a black box.

ALUMNI PROFILE
ERIC HELD

It is easy to imagine Eric Held as a farmer. "I like BEING ALONE with my thoughts," says the Utah State University physicist — one of the few people on campus who does not own a cell phone.

Held often uses the silence to contemplate fusion, which promises to produce clean, emission-free energy by binding hydrogen nuclei. Held's simulations of magnetically confined plasmas may help scientists better understand this complex process.

This is a long way from Held's original plans. His grandfather and uncles were farmers, and his father was director of the South Dakota Farm Bureau. He wanted to farm after high school graduation, and didn't even apply to college until a few weeks before the fall semester. "My parents leaned on me to do it," he recalls.

At South Dakota State University Held discovered the joy of unraveling physics and math problems. "I thought it would be cool to spend all day thinking about a problem," he says. Such musings led him to the DOE CSGF while pursuing a doctoral degree in plasma physics at the University of Wisconsin.

Held honed his computer modeling skills and developed a new perspective on his work during his DOE CSGF practicum at Oak Ridge National Laboratory. "My advisor, Jean-Noel Leboeuf, was a bright, humble guy who thought carefully about what he said and did. I learned how to carry myself as a physicist by watching him," Held says.

The DOE CSGF stipend gave Held the freedom to consider what problem he would choose for his doctoral thesis. He decided to model how heat escapes from magnetically confined plasma as it races around a donut-shaped torus, and he's wrestled with the problem ever since.

"The torus is like a heating duct bent back on itself, and the magnetic field acts like insulation to prevent the heat from escaping through the walls," Held explains. If too much heat escapes the magnetic field, the plasma loses energy and cannot sustain fusion.

Unfortunately, heat travels up to 10 billion times faster along the torus than it does when it escapes. Like a gear that has a ratio of 10 billion to one, it takes 10 billion simulation cycles of the "duct" gear to turn the "escaped heat" gear forward one tick. That extreme difference in scale makes the system very hard to model.

Held uses hybrid models to simplify the calculations. He starts with a relatively simple fluid model to describe the plasma's density, flow, and temperature. He then adds elements of more complex kinetic models, which describe how individual particles interact with one another and with the electromagnetic fields around them. "Imparting kinetic physics to fluid equations captures more of the real physics and enables us to do really big simulations," Held says.

The approach has already achieved at least one payoff. "Heat normally flows from hot to cold, the way a radiator heats up a room," Held explains. "But under some conditions in a magnetic field, heat will actually flow to even hotter areas."

Will those results help create a clean fusion reactor? Perhaps, Held says. Regardless, they promise better insights into a process that has baffled researchers for two generations.

It's nothing like farming, but it's just as rewarding.

ALUMNI PROFILE
AHMED ISMAIL

What do STICKY PROTEINS have to do with safely storing nuclear waste? On the surface, Ahmed Ismail says, they involve considerably different scales of time and size.

A nuclear waste repository "covers tens of square kilometers and its time scale is 10,000 years," he notes. "Protein interactions take place in nanometers and nanoseconds."

What ties them together for Ismail, who works for Sandia National Laboratories, are the complex models of molecular-scale interactions that he uses to predict behavior in both regimes.

While a doctoral student and Department of Energy Computational Science Graduate Fellow at the Massachusetts Institute of Technology, Ismail worked on ways to simplify simulations of molecular interactions so they eat less supercomputing time. "This is a widely researched problem, but the applications have not kept up with the theory," he says.

After his MIT thesis defense — for which the lifelong Red Sox fan prepared while sneaking peeks at Boston's first World Series win since trading Babe Ruth to the Yankees — Ismail moved to Sandia as a post-doctoral researcher.

At Sandia, Ismail used models to explain why proteins sometimes stick to the non-stick polyethylene oxide (PEO) coatings used to prevent fouling in medical devices. "Proteins like to bond to regular surfaces," Ismail explains. "Coat with too little PEO and proteins bond to the exposed metal below. Use too much PEO and its molecules will crowd together to form a regular 'surface' to which proteins can stick.

"Between those two extremes, water slips between the PEO molecules, creating an irregular surface that proteins ignore. I think of this as the 'Goldilocks effect,' because the coating has to be just right to work," Ismail concludes.

After finishing his post-doc, Ismail joined Sandia's Carlsbad Programs Group, which supports the world's only operating nuclear waste repository. DOE's Waste Isolation Pilot Plant (WIPP) was built to safely hold clothing, test tubes, and other radiation-exposed items for 10,000 years.

WIPP stores sealed waste containers in rooms dug into an underground salt deposit. Over time, pressure from thousands of feet of rock above will buckle the rooms and isolate the containers. The salt rock resists chemical infiltration.

In addition to continuing his PEO work, Ismail tries to imagine how worst-case scenarios might affect the waste. "In 10,000 years, people might forget WIPP is there and drill down to a reservoir of briny water several thousand feet below the repository," he says. "We want to know what would happen if the brine infiltrated the repository."

Using models that describe fluid mechanics, geochemistry, and nuclear physics, Ismail studies how radionuclides might react with rock and brine. "If the salt does its job, there's likely to be very little release to the environment," he says.

Yet Ismail continues upgrading Carlsbad's chemical equilibrium model to improve its accuracy, as well as studying sites for the safe storage of more highly radioactive wastes.

Ismail never imagined using his ability to model molecular-scale interactions to understand radioactive containment, but he's pleased about the added application. "We're solving a problem that already exists because we're storing waste all over the place now," he says. "Our group is trying to make sure that centralized storage is not a threat to the environment or to other people. I'm comfortable with that."

HOWES SCHOLARS

THE FREDERICK A. HOWES SCHOLAR in Computational Science award was established in 2001 to honor the late Frederick Anthony Howes, who was a champion for computational science education.

Dr. Kristen Grauman of the University of Texas at Austin and Dr. Jaydeep Bardhan of Argonne National Laboratory have been named the 2007 Frederick A. Howes Scholars in Computational Science.

The Howes Scholar award was established to honor the late Frederick Anthony Howes, who managed the Applied Mathematical Science Program in the U.S. Department of Energy during the 1990s. Dr. Howes was highly respected and admired for his energy, dedication and personal integrity.

One of Howes' responsibilities was to oversee the Department of Energy's Computational Science Graduate Fellowship (DOE CSGF) program. He was extremely committed to this program that supports graduate students in computational science. This program is unique, as it requires candidates to take courses in mathematics, computer science and an applications discipline, such as physics or engineering. The DOE CSGF program currently supports over 60 graduate students and is administered by the Krell Institute.

To honor his memory and his dedication to the Department of Energy's Computational Science Graduate Fellowship program, one DOE CSGF fellow will be chosen each calendar year as a Howes Scholar. But because there were so many outstanding nominees for the award this year, two winners were selected. This award will provide the recipients with a substantial cash award, a Tiffany paperweight, and the distinction of being named a Howes Scholar.

A DOE CSGF fellow is eligible to be named the Howes Scholar if he or she has completed all the requirements for his or her Ph.D. program while being supported by the DOE CSGF program, or having been supported by the DOE CSGF program for the maximum number of allotted years.

Dr. Grauman was a CSGF fellow from 2001-2005. She received her Ph.D. in Computer Science in 2006 from MIT's Computer Science and Artificial Intelligence Laboratory. She is currently an Assistant Professor in the Department of Computer Sciences at the University of Texas at Austin.

A DOE CSGF fellow from 2002-2006, Dr. Bardhan received his Ph.D. in Electrical Engineering and Computer Science, also from MIT, in 2006. He is currently a Wilkinson Fellow at Argonne National Laboratory.

Both award recipients were on hand at the DOE CSGF annual fellows' conference, where they presented their research and received their awards. Daniel Hitchcock from the U.S. Department of Energy's Office of Science presented the awards, and Fred Howes' widow and son, Mary Hall and Michael Howes, were in attendance at the presentation.

For more information on this program, contact the Krell Institute at (515) 956-3696 or email Rachel Huisman at [email protected].

Dr. Grauman presents at the annual Department of Energy Computational Science Graduate Fellows conference.

Daniel Hitchcock, U.S. Department of Energy, presents Jaydeep Bardhan his 2007 Howes Scholar award.

Fred Howes' son and widow, Michael Howes and Mary Hall (back row), pose with 2007 Howes Scholars Jaydeep Bardhan (left) and Kristen Grauman (right) and Daniel Hitchcock from the U.S. Department of Energy, Office of Science (center).

Edward Chao John Costello Laura Dominik Alumni Directory Princeton University University of Arizona Florida Atlantic University Plasma Physics Applied Mathematics Electrical Engineering Fellowship Years: 1992-1995 Fellowship Years: 1998-2002 Fellowship Years: 1993-1997 Current Status: TomoTherapy Current Status: Pratt & Whitney Nathan Crane Jarrod Chapman University of Illinois Michael Driscoll A Devin Balkcom Edwin Blosch University of California – Berkeley Civil Engineering Boston University Carnegie Mellon University University of Florida Computational Biology Fellowship Years: 1999-2002 Bioinformatics & Systems Biology Bree Aldridge Robotics Aerospace Engineering Fellowship Years: 1999-2003 Current Status: Sandia National Fellowship Years: 2002-2006 Massachusetts Institute of Technology Fellowship Years: 2000-2004 Fellowship Years: 1991-1994 Current Status: DOE Joint Laboratories – New Mexico Current Status: Student, Computational Biology Current Status: Faculty, Current Status: CFD-FASTRAN Genome Institute Boston University Fellowship Years: 2002-2006 Dartmouth College Stephen Cronen-Townsend Current Status: Student, Massachusetts Nawaf Bou-Rabee Eric Charlton Cornell University Brian Dumont Institute of Technology Michael Barad California Institute of Technology University of Michigan Computational Materials Physics University of Michigan University of California – Davis Applied & Computational Mathematics Aerospace Engineering Fellowship Years: 1991-1995 Aerospace Engineering Marcelo Alvarez Civil & Environmental Engineering Fellowship Years: 2002-2006 Fellowship Years: 1992-1996 Current Status: Esko-Graphics Fellowship Years: 1994 University of Texas Fellowship Years: 2002-2006 Current Status: Staff, California Current Status: Lockheed Martin Current Status: Airflow Astrophysics Current Status: Staff, Institute of Technology Robert Cruise Sciences Corporation Fellowship Years: 2001-2005 Stanford University Michael Chiu Indiana University Current Status: Staff, Dean Brederson Massachusetts Institute of Technology Physics Amanda W. Duncan Stanford University Jaydeep Bardhan University of Utah Mechanical Engineering Fellowship Years: 1997-2001 University of Illinois Massachusetts Institute of Technology Computer Science Fellowship Years: 1992-1996 Electrical Engineering Asohan Amarasingham Electrical Engineering Fellowship Years: 1996-1998 Current Status: Student, Aron Cummings Fellowship Years: 1991-1995 A Brown University Fellowship Years: 2002-2006 Massachusetts Institute of Arizona State University Current Status: Intel Cognitive Science Current Status: Argonne Paul Bunch Technology – Sloan School Electrical Engineering Fellowship Years: 1998-2002 National Laboratory Purdue University Fellowship Years: 2003-2007 Mary Dunlop Kevin Chu Current Status: Student, B Current Status: Staff, University Chemical Engineering California Institute of Technology C of Jaffna, Sri Lanka Edward Barragy Fellowship Years: 1994-1997 Massachusetts Institute of Technology Arizona State University Mechanical Engineering University of Texas Applied Mathematics Fellowship Years: 2002-2006 Fellowship Years: 2002-2005 Joseph Czyzyk Kristopher Andersen Engineering Mechanics Jeffery Butera Current Status: Student, California C University of California – Davis Fellowship Years: 1991-1993 North Carolina State University Current Status: Vitamin D, Inc. 
Northwestern University Institute of Technology D Physics Current Status: Intel Mathematics Industrial Engineering Fellowship Years: 2001-2005 Fellowship Years: 1993-1997 Kristine Cochran Fellowship Years: 1991-1994 Lewis Jonathan Dursi Current Status: Faculty, William Barry Current Status: Staff, Hampshire College University of Illinois Current Status: Staff, Central Michigan University of Chicago Northern Arizona University Civil Engineering University Research Corporation Astrophysics E Carnegie Mellon University Fellowship Years: 2002-2006 Structural & Computational Engineering Fellowship Years: 1999-2003 Matthew Anderson Fellowship Years: 1994-1998 C Current Status: Student, Current Status: Canadian Institute for University of Texas University of Illinois D Theoretical Astrophysics Physics Paul Bauman Brandoch Calef Fellowship Years: 2000-2004 University of Texas University of California – Berkeley Joshua Coe William Daughton Current Status: Faculty, Computational & Applied Mathematics Applied Mathematics University of Illinois Massachusetts Institute of Technology E Brigham Young University Fellowship Years: 2003-2007 Fellowship Years: 1996-2000 Chemical Physics Plasma Physics Current Status: Student, Current Status: Boeing Fellowship Years: 2001-2002 Fellowship Years: 1992-1996 Ryan Elliott University of Texas Current Status: Los Alamos Current Status: Los Alamos University of Michigan B Patrick Canupp National Laboratory National Laboratory Aerospace Engineering Martin Bazant Stanford University Fellowship Years: 2000-2004 Gregory Davidson Teresa Bailey Harvard University Aerospace Engineering Ken Comer Current Status: Faculty, Texas A&M University Physics Fellowship Years: 1991-1995 North Carolina State University University of Michigan University of Minnesota Nuclear Engineering Fellowship Years: 1992-1996 Current Status: Joe Gibbs Racing Mechanical Engineering Nuclear Engineering Fellowship Years: 2002-2006 Current Status: Faculty, Massachusetts Fellowship Years: 1991-1995 Fellowship Years: 2002-2006 Thomas Epperly Current Status: Student, Texas Institute of Technology Kent Carlson Current Status: Procter & Gamble Current Status: Student, University of Wisconsin A&M University Florida State University University of Michigan Chemical Engineering Bonnie Carpenter Beyer Mechanical Engineering Gavin Conant Fellowship Years: 1991-1995 Allison Baker University of Illinois Fellowship Years: 1991-1995 University of New Mexico Mark DiBattista Current Status: Lawrence Livermore University of Colorado Mechanical Engineering Current Status: Staff, University of Iowa Biology Columbia University National Laboratory Applied Mathematics Fellowship Years: 1991-1995 Fellowship Years: 2000-2004 Computational Fluid Dynamics Fellowship Years: 1999-2003 Current Status: Rockwell Collins Nathan Carstens Current Status: Staff, Trinity College, Fellowship Years: 1992-1994 Annette Evangelisti Current Status: Lawrence Massachusetts Institute of Technology Dublin Ireland University of New Mexico Livermore National Laboratory Mary Biddy Nuclear Engineering John Dolbow Computational Molecular Biology University of Wisconsin Fellowship Years: 2001-2004 William Conley Northwestern University Fellowship Years: 2001-2005 Chemical Engineering Current Status: AREVA Purdue University Theoretical and Applied Mechanics Current Status: Student, Fellowship Years: 2002-2006 Mechanical Engineering Fellowship Years: 1997-1999 University of New Mexico Current Status: British Petroleum Fellowship Years: 2003-2007 Current 
Status: Faculty, Duke University Current Status: Student, Purdue University

F G Larisa Goldmints H Jeffrey Hittinger J Carnegie Mellon University University of Michigan Matthew Fago Kenneth Gage Structural Mechanics Aric Hagberg Aerospace Engineering & Nickolas Jovanovic California Institute of Technology University of Pittsburgh Fellowship Years: 1997-2001 University of Arizona Scientific Computing Yale University Aeronautical Engineering Chemical Engineering Current Status: General Electric & Applied Mathematics Fellowship Years: 1996-2000 Mechanical Engineering Fellowship Years: 2000-2003 Fellowship Years: 1998-2002 Rensselaer Polytechnic Institute Fellowship Years: 1992-1994 Current Status: Lawrence Livermore Fellowship Years: 1992-1994 Current Status: LC Wright Current Status: Student, Current Status: Los Alamos National Laboratory Current Status: Faculty, University University of Pittsburgh William Gooding National Laboratory of Arkansas – Little Rock Michael Falk Purdue University Gordon Hogenson University of California – Santa Barbara Nouvelle Gebhart Chemical Engineering Glenn Hammond University of Washington Physics University of New Mexico Fellowship Years: 1991-1994 University of Illinois Physical Chemistry K Fellowship Years: 1995-1998 Chemistry Environmental Engineering & Science Fellowship Years: 1993-1996 Current Status: Faculty, Fellowship Years: 2001-2003 Kristen Grauman Fellowship Years: 1999-2003 Current Status: Microsoft Yan Karklin University of Michigan Massachusetts Institute of Technology Current Status: Pacific Northwest Carnegie Mellon University Sommer Gentry Computer Science National Lab Daniel Horner Computational Neuroscience Matthew Farthing Massachusetts Institute of Technology Fellowship Years: 2001-2005 University of California – Berkeley Fellowship Years: 2002-2006 University of North Carolina Optimization/Control Theory Current Status: Faculty, Jeffrey Haney Chemistry Current Status: Student, Carnegie Environmental Science & Engineering Fellowship Years: 2001-2005 University of Texas Texas A&M University Fellowship Years: 2000-2004 Mellon University Fellowship Years: 1997-2001 Current Status: Faculty, Physical Oceanography Current Status: Los Alamos Current Status: Faculty, University United States Naval Academy Corey Graves Fellowship Years: 1993-1996 National Laboratory Richard Katz of North Carolina North Carolina State University Current Status: Dynacon, Inc. Columbia University Charles Gerlach Computer Engineering William Humphrey Geodynamics Michael Feldmann Northwestern University Fellowship Years: 1996-1999 Heath Hanshaw University of Illinois Fellowship Years: 2001-2005 California Institute of Technology Mechanical Engineering Current Status: Faculty, North Carolina University of Michigan Physics Current Status: Staff, Computational Chemistry Fellowship Years: 1995-1999 Agricultural & Technical State University Nuclear Engineering Fellowship Years: 1992-1994 University of Cambridge Fellowship Years: 1999-2002 Current Status: Network Fellowship Years: 2001-2005 Current Status: NumeriX LLC Current Status: Walleye Trading Computing Services, Inc. 
Michael Greminger Current Status: Sandia National Benjamin Keen Advisors LLC University of Minnesota Laboratories – New Mexico Jason Hunt F University of Michigan H Timothy Germann Mechanical Engineering University of Michigan Mathematics Krzysztof Fidkowski Harvard University Fellowship Years: 2002-2005 Rellen Hardtke Aerospace Engineering & Fellowship Years: 2000-2004 Massachusetts Institute of Technology Physical Chemistry Current Status: Seagate Technologies University of Wisconsin Scientific Computing Current Status: IDA Center for G Computational Fluid Dynamics Fellowship Years: 1992-1995 Physics Fellowship Years: 1999-2003 Computing Sciences I Fellowship Years: 2003-2007 Current Status: Los Alamos Noel Gres Fellowship Years: 1998-2002 Current Status: General Dynamics – Current Status: Staff, Massachusetts National Laboratory University of Illinois Current Status: Faculty, University Advanced Information Systems Peter Kekenes-Huskey Institute of Technology Electrical Engineering of Wisconsin – River Falls California Institute of Technology Christopher Gesh Fellowship Years: 1999-2001 E. McKay Hyde Computational Chemistry/Biology J Stephen Fink Texas A&M University Owen Hehmeyer California Institute of Technology Fellowship Years: 2004-2007 University of California – San Diego Nuclear Engineering Boyce Griffith Princeton University Applied & Computational Mathematics Current Status: Arete Associates Computer Science Fellowship Years: 1993-1997 New York University – Courant Institute Chemical Engineering Fellowship Years: 1999-2002 Fellowship Years: 1994-1998 Current Status: Pacific Northwest Applied Mathematics Fellowship Years: 2002-2006 Current Status: Goldman Sachs Jeremy Kepner K Current Status: IBM National Laboratory Fellowship Years: 2000-2004 Current Status: ExxonMobil Upstream Princeton University Current Status: Staff, New York University Research Corporation Computational Cosmology Robert Fischer Matthew Giamporcaro I Fellowship Years: 1993-1996 Harvard University Boston University Eric Grimme Eric Held Current Status: Staff, Massachusetts Computer Science Cognitive and Neural Systems University of Illinois Eugene Ingerman Institute of Technology Lincoln Labs Fellowship Years: 1994-1998 Fellowship Years: 1998-2000 Electrical Engineering University of Wisconsin University of California – Berkeley Current Status: Quant Current Status: GCI, Inc. Fellowship Years: 1994-1997 Engineering Physics Applied Mathematics Fellowship Years: 1995-1999 Sven Khatri Current Status: Intel Fellowship Years: 1997-2001 California Institute of Technology Gregory Ford Ahna Girshick Current Status: Faculty, Current Status: General Electric Utah State University Electrical Engineering University of Illinois University of California – Berkeley John Guidi Fellowship Years: 1993-1996 Chemical Engineering Vision Science University of Maryland Ahmed Ismail Judith Hill Current Status: Honeywell, Inc. 
Fellowship Years: 1993-1995 Fellowship Years: 2001-2005 Computer Science Massachusetts Institute of Technology Carnegie Mellon University Current Status: Staff, Fellowship Years: 1994-1997 Chemical Engineering Benjamin Kirk Oliver Fringer New York University Current Status: Math High School Teacher Mechanics, Algorithms & Computing Fellowship Years: 2000-2004 Fellowship Years: 1999-2003 University of Texas Stanford University Current Status: Sandia National Aerospace Engineering Environmental Fluid Mechanics Kevin Glass Current Status: Sandia National Laboratories – New Mexico Laboratories – New Mexico Fellowship Years: 2001-2004 Fellowship Years: 1997-2001 University of Oregon Current Status: NASA Johnson Current Status: Faculty, Computer Science Space Center Stanford University Fellowship Years: 1996-2000 Charles Hindman University of Colorado Current Status: Pacific Northwest Justin Koo National Laboratory Aerospace Engineering Fellowship Years: 1999-2003 University of Michigan Current Status: Air Force Aerospace Engineering Research Laboratory Fellowship Years: 2000-2004 Current Status: Air Force Research Laboratory


Michael Kowalok Tasha (Palmer) Lopez Lisa Mesaros N O James Phillips University of Wisconsin University of California – Los Angeles University of Michigan University of Illinois Medical Physics Chemical Engineering Aerospace Engineering & Heather Netzloff Christopher Oehmen Physics Fellowship Years: 2000-2004 Fellowship Years: 2000-2001 Scientific Computing Iowa State University University of Memphis Fellowship Years: 1995-1999 Current Status: Staff, Virginia Current Status: IBM Fellowship Years: 1991-1995 Physical Chemistry Biomedical Engineering Current Status: Staff, Commonwealth University Current Status: FLUENT, Inc. Fellowship Years: 2000-2004 Fellowship Years: 1999-2003 University of Illinois Christie Lundy Current Status: Ames Laboratory Current Status: Pacific Northwest Yury Krongauz University of Missouri – Rolla Richard Mills National Laboratory Todd Postma Northwestern University Physics College of William and Mary Elijah Newren University of California – Berkeley Theoretical & Applied Mechanics Fellowship Years: 1991-1994 Computer Science University of Utah Nuclear Engineering Fellowship Years: 1993-1996 Current Status: State of Fellowship Years: 2001-2004 Mathematics P Fellowship Years: 1994-1998 Current Status: Black Rock Missouri Employee Current Status: Oak Ridge Fellowship Years: 2001-2005 Current Status: Totality National Laboratory Current Status: Sandia National Steven Parker Laboratories – New Mexico University of Utah Richard Propp L M Julian Mintseris Computer Science University of California – Berkeley Boston University Pauline Ng Fellowship Years: 1994-1997 Mechanical Engineering Eric Lee William Marganski Bioinformatics University of Washington Current Status: Faculty, Fellowship Years: 1993-1996 Rutgers University Boston University Fellowship Years: 2001-2005 Bioengineering University of Utah Current Status: Oracle Mechanical Engineering Biomedical Engineering Current Status: Staff, Harvard Fellowship Years: 2000-2002 Fellowship Years: 1999-2003 Fellowship Years: 1998-2002 Medical School Current Status: Illumina Joel Parriott Current Status: Northrup Grumman Corp. Current Status: Ikonisys, Inc. 
University of Michigan Q Erik Monsen Brian Nguyen Gunney Astronomy & Astrophysics Seung Lee Daniel Martin Stanford University University of Michigan Fellowship Years: 1992-1996 Alejandro Quezada Massachusetts Institute of Technology University of California – Berkeley Aerospace and Astronautical Engineering Aerospace Engineering & Current Status: Office of University of California – Berkeley K Mechanical Engineering Mechanical Engineering Fellowship Years: 1991-1994 Scientific Computing Management and Budget Geophysics Fellowship Years: 2001-2005 Fellowship Years: 1993-1996 Current Status: Staff, Max Planck Fellowship Years: 1993-1996 Fellowship Years: 1997-1998 Current Status: Student, Massachusetts Current Status: Lawrence Berkeley Institute of Economics, Germany Current Status: Lawrence Livermore Ian Parrish Institute of Technology National Laboratory National Laboratory Princeton University Catherine Grasso Quist L Brian Moore Computational Plasma Physics Cornell University N Jack Lemmon Marcus Martin North Carolina State University Diem-Phuong Nguyen Fellowship Years: 2004-2007 Bioinformatics Georgia Institute of Technology University of Minnesota Nuclear Engineering University of Utah Current Status: Staff, University Fellowship Years: 2000-2004 of California – Berkeley M Mechanical Engineering Physical Chemistry Fellowship Years: 1992-1995 Chemical Engineering Current Status: Student, 0 Fellowship Years: 1991-1994 Fellowship Years: 1997-1999 Current Status: Global Nuclear Fuel Fellowship Years: 1999-2003 University of Michigan Current Status: Medtronic, Inc. Current Status: Useful Bias Current Status: Staff, University of Utah Tod Pascal Nathaniel Morgan California Institute of Technology Mary Ann Leung Randall McDermott Georgia Institute of Technology Debra Egle Nielsen Physical Chemistry R P University of Washington University of Utah Mechanical Engineering Colorado State University Fellowship Years: 2003-2007 Theoretical Physical Chemistry Chemical Engineering Fellowship Years: 2002-2005 Civil Engineering Current Status: Student, California Mala Radhakrishnan Fellowship Years: 2001-2005 Fellowship Years: 2001-2005 Current Status: Los Alamos Fellowship Years: 1992-1996 Institute of Technology Massachusetts Institute of Technology Current Status: e-SciTek, LLC Current Status: National Institute of National Laboratory Physical Chemistry Q Standards and Technology (NIST) Joyce Noah-Vanhoucke Virginia Pasour Fellowship Years: 2004-2007 Benjamin Lewis James (Dan) Morrow Stanford University North Carolina State University Current Status: Faculty, Massachusetts Institute of Technology Matthew McGrath Carnegie Mellon University Theoretical Chemistry Biomathematics Wellesley College Computational Biology University of Minnesota Robotics Fellowship Years: 2001-2003 Fellowship Years: 1998-1999 R Fellowship Years: 2002-2006 Physical Chemistry Fellowship Years: 1992-1995 Current Status: Staff, University Current Status: Staff, University Emma Rainey of California – Los Angeles Current Status: Student, Massachusetts Fellowship Years: 2004-2007 Current Status: Sandia National of California – Berkeley California Institute of Technology Current Status: Peace Corps Volunteer, Laboratories – New Mexico Geological and Planetary Sciences Institute of Technology Christina Payne University of Dschang Catherine Norman Fellowship Years: 2003-2006 Lars Liden Sarah Moussa Northwestern University Vanderbilt University Current Status: Arete Associates Boston University Richard McLaughlin University of 
California – Berkeley Applied Mathematics Chemical Engineering Fellowship Years: 2003-2007 Cognitive & Neural Systems Princeton University Machine Learning Fellowship Years: 2000-2004 Current Status: Student, Nathan Rau Fellowship Years: 1994-1998 Applied Mathematics Fellowship Years: 2003-2005 Current Status: Center Fellowship Years: 1991-1994 Current Status: Google for Naval Analysis Vanderbilt University University of Illinois Current Status: Staff, Civil Engineering University of Washington Current Status: Faculty, University of North Carolina Michael Mysinger Gregory Novak Robert (Chris) Penland Fellowship Years: 2000-2001 Stanford University University of California – Santa Cruz Duke University Current Status: Hanson Alex Lindblad Professional Services University of Washington Matthew McNenly Chemical Engineering Theoretical Astrophysics Biomedical Engineering University of Michigan Fellowship Years: 1996-2000 Fellowship Years: 2002-2006 Fellowship Years: 1993-1997 Civil Engineering Aerospace Engineering Current Status: Arqule, Inc. Current Status: Student, University Current Status: Predix Clifton Richardson Fellowship Years: 2002-2006 Fellowship Years: 2001-2005 of California – Santa Cruz Pharmaceuticals, Inc. Cornell University Current Status: Sandia National Current Status: Lawrence Livermore Physics Laboratories – California National Laboratory Fellowship Years: 1991-1995


Christopher Rinderspacher Susanne (Essig) Seefried James Strzelec U Collin Wick Michael Wu University of Georgia Massachusetts Institute of Technology Stanford University University of Minnesota University of California – Berkeley Chemistry Aeronautics/Astronautics Computational Mathematics Obioma Uche Computational Chemistry Computational Neuroscience Fellowship Years: 2001-2005 Fellowship Years: 1997-2002 Fellowship Years: 1992-1994 Princeton University Fellowship Years: 2000-2003 Fellowship Years: 2002-2006 Current Status: Staff, Duke University Materials/Statistical Mechanics Current Status: Faculty, Louisiana Current Status: Student, University Marc Serre Rajeev Surati Fellowship Years: 2002-2006 Tech University of California – Berkeley John Rittner University of North Carolina Massachusetts Institute of Technology Current Status: Sandia National Northwestern University Environmental Science & Engineering Electrical Engineering & Laboratories – California James Wiggs Peter Wyckoff Mechanical Engineering Fellowship Years: 1996-1999 Computer Science University of Washington Massachusetts Institute of Technology Fellowship Years: 1991-1995 Current Status: Faculty, University Fellowship Years: 1995-1997 Physical Chemistry Chemical Engineering Current Status: Chicago Board of North Carolina Current Status: Scalable V Fellowship Years: 1991-1994 Fellowship Years: 1992-1995 Options Exchange Display Technologies Current Status: Novum Current Status: Ohio Jason Sese Anton Van Der Ven Supercomputing Center Courtney Roby Stanford University Laura (Painton) Swiler Massachusetts Institute of Technology Jon Wilkening University of Colorado Computational Materials Science Carnegie Mellon University Materials Science University of California – Berkeley Electrical Engineering Fellowship Years: 2003-2005 Engineering & Public Policy Fellowship Years: 1996-2000 Applied Mathematics Z Fellowship Years: 2002-2003 Current Status: U.S. Patent and Fellowship Years: 1992-1995 Current Status: Faculty, University Fellowship Years: 1997-2001 Current Status: Student, Trademark Office Current Status: Sandia National of Michigan Current Status: Faculty, University Charles Zeeb Stanford University Laboratories – New Mexico of California – Berkeley Colorado State University Elsie Simpson Pierce Rajesh Venkataramani Mechanical Engineering David Ropp University of Illinois Massachusetts Institute of Technology Glenn Williams Fellowship Years: 1993-1997 University of Arizona Nuclear Engineering T Chemical Engineering University of North Carolina Current Status: Los Alamos Applied Mathematics Fellowship Years: 1991-1993 Fellowship Years: 1995-1999 Environmental Science & Engineering National Laboratory Fellowship Years: 1992-1995 Current Status: Lawrence Livermore Shilpa Talwar Current Status: Goldman Sachs Fellowship Years: 1993-1996 R U Current Status: SAIC National Laboratory Stanford University Current Status: Faculty, Scott Zoldi Scientific Computing Stephen Vinay Old Dominion University Duke University Robin Rosenfeld Amoolya Singh Fellowship Years: 1992-1994 Carnegie Mellon University Theoretical & Computational Physics Scripps Research Institute University of California – Berkeley Current Status: Intel C. 
Eric Williford S Chemical Engineering Fellowship Years: 1996-1998 V Biology Computational Biology Fellowship Years: 1998-2000 Florida State University Current Status: Fair Issac Corporation Fellowship Years: 1996-1997 Fellowship Years: 2002-2006 Brian Taylor Current Status: Bettis Laboratory Meteorology Current Status: ActiveSight Current Status: European Molecular University of Illinois Fellowship Years: 1993-1996 T Biology Lab, Heidelberg Germany Engineering Mechanics Current Status: Weather Predict W Mark Rudner Fellowship Years: 2003-2007 W Massachusetts Institute of Technology Melinda Sirman Current Status: Student, Michael Wolf Physics University of Texas University of Illinois Joshua Waterfall University of Illinois Fellowship Years: 2003-2007 Engineering Mechanics Cornell University Scientific Computing/Computer Science Z Mayya Tokman Fellowship Years: 2003-2007 Current Status: Student, Massachusetts Fellowship Years: 1994-1996 Biophysics Institute of Technology California Institute of Technology Current Status: Student, Steven Smith Applied Mathematics Fellowship Years: 2002-2006 University of Illinois North Carolina State University Fellowship Years: 1996-2000 Current Status: Staff, Cornell University S Chemical Engineering Current Status: Faculty, University Matthew Wolinsky Fellowship Years: 1992-1994 of California – Merced Phillip Weeber Duke University David Schmidt Current Status: Invista University of North Carolina Earth Surface Dynamics University of Illinois William Triffo Environmental Science & Engineering Fellowship Years: 2001-2005 Elecctrical Engineering Eric Sorin Rice University Fellowship Years: 1994-1996 Current Status: Staff, University Fellowship Years: 2002-2006 Stanford University Bioengineering Current Status: Chatham Financial of Minnesota Current Status: Epic Systems Chemical Physics Fellowship Years: 2003-2007 Fellowship Years: 2002-2004 Current Status: Student, Rice University Adam Weller Brandon Wood Samuel Schofield Current Status: Faculty, California Princeton University Massachusetts Institute of Technology University of Arizona State University – Long Beach Mario Trujillo Chemical Engineering Computational Materials Science Applied Mathematics University of Illinois Fellowship Years: 2001-2002 Fellowship Years: 2003-2007 Fellowship Years: 2001-2005 Scott Stanley Mechanical Engineering Current Status: Student, Jawaharlal Current Status: Los Alamos University of California – San Diego Fellowship Years: 1997-2000 Gregory Whiffen Nehru Centre for Advanced National Laboratory Mechanical Engineering Current Status: Staff, Pennsylvania Cornell University Scientific Research Fellowship Years: 1994-1998 State University Applied Environmental Systems Engineering Robert Sedgewick Current Status: Hewlett Research Laboratory Fellowship Years: 1991-1995 Lee Worden University of California – Santa Barbara Packard Company Current Status: NASA – Jet Princeton University Physics Propulsion Laboratory Applied Mathematics Fellowship Years: 2000-2003 Samuel Stechmann Fellowship Years: 1998-2002 Current Status: Staff, Carnegie New York University Current Status: Staff, University Mellon University Applied Mathematics of California – Berkeley Fellowship Years: 2003-2007 Current Status: Student, New York University

FELLOWS DIRECTORY

FOURTH YEAR FELLOWS

Erik Allen
Massachusetts Institute of Technology
Chemical Engineering
Advisor: Greg Rutledge
Practicum: Sandia National Laboratories – New Mexico
Contact: [email protected]
Research Synopsis: In the field of molecular simulation, many of the most interesting problems are currently computationally intractable. Despite significant advances in hardware and algorithmic efficiency, the combined requirements of a large number of atoms (10,000 to 1,000,000 or more) and a small integration time step (typically femtoseconds) relegate many of these molecular simulation problems to the category of future work. Coarse graining attempts to address many of the current weaknesses of molecular simulation by seeking to reliably replicate the behavior of an underlying atomistic simulation through a reduced representation. The goal of our present research is to study the fundamental questions of coarse-grained experimental design by comparing multiple design options within a common particle system. The particle system we have selected is surfactants in water. Surfactant molecules are medium-length chain molecules that, in sufficient concentration, will spontaneously assemble into spherical aggregates called micelles. Surfactant solutions are an interesting system in which to study coarse-graining techniques for a number of reasons: properties of interest span a large range of length scales, they are highly sensitive to the interaction model used, and significant experimental data are available as a basis for comparison.

Michael Bybee
University of Illinois
Chemical Engineering
Advisor: Jonathan Higdon
Practicum: Lawrence Livermore National Laboratory
Contact: [email protected]
Research Synopsis: Suspensions of colloidal particles (1 nm to 1 µm) are of great importance in a variety of industrial applications including paints, coatings, foods, drugs, and cosmetics. Additional applications arise in the fabrication and development of novel electronic materials, microscale biosensors, and nanostructured materials. These suspensions exhibit a wide variety of phase transitions and rheological behavior that can be far more complex than for normal fluids. The ability to understand, manipulate, and predict the behavior of these systems is important for designing and optimizing novel materials and manufacturing processes. My research is in the large-scale simulation of colloidal suspensions. More specifically, I am interested in the simulation of phenomena that arise from interparticle forces in the presence of hydrodynamic and Brownian forces. These phenomena include liquid-liquid phase separation, gelation, and crystallization. These phase behaviors in turn have a profound effect on the rheological behavior and material properties of the suspension.

Jimena Davis
North Carolina State University
Applied Mathematics
Advisor: H.T. Banks
Practicum: Sandia National Laboratories – New Mexico
Contact: [email protected]
Research Synopsis: There are two types of measure-dependent problems: individual dynamics and aggregate dynamics. We are currently working on estimating growth rate distributions in size-structured mosquitofish populations, an aggregate dynamics problem. In order to achieve the properties of dispersion and bifurcation exhibited by the mosquitofish data, we use the Sinko-Streifer model as modified in the Growth Rate Distribution (GRD) model of Banks-Botsford-Kappel-Wang. Given data, we would like to estimate the growth rate distributions by considering a least-squares formulation for the inverse problem. We wish to minimize the cost functional over an appropriate collection of probability measures; however, finding an optimal probability distribution is not simple since we are in an infinite-dimensional setting. Also of importance in our work is the ability to talk about the convergence and the continuous dependence of the estimated growth rate distributions on the data, but to do so we will need a notion of distance between two probability measures.

Jeffrey Drocco
Princeton University
Biophysics & Computation
Advisor: David Tank
Practicum: Los Alamos National Laboratory
Contact: [email protected]
Research Synopsis: Studies of the dynamics of mesoscopic systems have benefited greatly from advances in computational methodology. However, recent advances in many-body dynamics have made it possible for condensed matter physics to return the favor, in the form of potential new computer architectures. While at present the size of microelectronics continues to decrease, allowing corresponding improvements in computing power, their size will soon reach a fundamental lower limit determined by quantum mechanics. One alternative to the standard field-effect transistor model is a cellular automata model, which uses the positions of particles in an array of cells to store state information. If cells are lined up next to each other, interparticle repulsion will correlate the positions of the particles and allow information to be transmitted through the array. Any viable computer architecture also requires a method of performing logic operations on the state information at a constant clock speed. This has recently been shown to be possible, numerically, when vortex cellular automata are used in conjunction with an AC current ratchet (Hastings et al., Phys. Rev. Lett., 6/20/2003). Moreover, a complete set of logic gates can be built with different arrangements of the pinning array. The estimated maximum clock speed with this model is 800 MHz using conventional etching techniques, so further theoretical advances will be necessary to make the method practical for supercomputing applications. I plan to use my experience with computational studies of vortex dynamics to work on this problem. In this case, numerical methodology is useful to verify the feasibility of proposed solutions on a many-body scale without adding the unknown factors that accompany experiment.

Jasmine Foo
Brown University
Computational Fluid Dynamics
Advisor: George Karniadakis
Practicum: Lawrence Berkeley National Laboratory
Contact: [email protected]
Research Synopsis: My primary area of focus is the numerical simulation of fluid-structure interactions. To do this, I use a spectral/hp element Navier-Stokes solver coupled with a structural solver on serial and parallel systems. I am working on the development of numerical methods using polynomial chaos for modeling uncertainty in fluids. Currently, I am also studying the motion of a free rigid cylinder in a flow, subject to vortex-induced vibrations (VIV). I am investigating the effect of Reynolds number, mass ratio and damping parameters on the cylinder amplitude and mode of vortex formation. I am implementing a dual-level parallelism in this VIV code to increase the speed of computation. Other current and future projects include applications in biomimetics, such as flow around bat wings during flight and the energy-harvesting eel.

Bonnie Kirkpatrick
University of California – Berkeley
Computer Science
Advisor: Richard Karp
Practicum: Lawrence Livermore National Laboratory
Contact: [email protected]
Research Synopsis: My primary interest in computational biology is algorithms for human genetics, including disease association and haplotype phase resolution. My peripheral interests include regulatory interaction networks, phylogenetics, and biomolecular folding kinetics.
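The least-squares inverse problem in Jimena Davis's synopsis above can be illustrated with a small numerical sketch: discretize the candidate growth rates, treat the aggregate model output as a mixture of per-rate solutions, and fit nonnegative weights that sum to one by projected gradient descent. This is a minimal toy under an assumed linear-in-the-weights mixture with synthetic data; the matrix V, the helper names simplex_project and fit_growth_rate_weights, and all numbers are illustrative and are not the Banks-Botsford-Kappel-Wang implementation.

import numpy as np

def simplex_project(v):
    """Euclidean projection of v onto the probability simplex (sort-based)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def fit_growth_rate_weights(V, d, steps=5000, lr=None):
    """Least-squares fit of simplex weights p: minimize ||V p - d||^2.

    V[:, k] holds the model output for the k-th candidate growth rate
    (one column per node of the discretized distribution); d is the data.
    """
    n = V.shape[1]
    p = np.full(n, 1.0 / n)                          # start from the uniform distribution
    if lr is None:
        lr = 0.5 / np.linalg.norm(V.T @ V, 2)        # step size from the gradient's Lipschitz constant
    for _ in range(steps):
        grad = 2.0 * V.T @ (V @ p - d)               # gradient of the quadratic cost functional
        p = simplex_project(p - lr * grad)           # keep p a valid probability distribution
    return p

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    V = rng.random((40, 8))                          # toy "basis" solutions, one per growth rate
    p_true = simplex_project(rng.random(8))
    d = V @ p_true + 0.01 * rng.standard_normal(40)  # synthetic noisy data
    print("recovered weights:", np.round(fit_growth_rate_weights(V, d), 3))

The simplex projection is what keeps each iterate a legitimate discrete probability distribution over the candidate growth rates, which mirrors the constraint that makes the infinite-dimensional version of the problem delicate.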
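Jasmine Foo's synopsis also mentions polynomial chaos for modeling uncertainty in fluids. As a generic illustration of the non-intrusive version of that idea, and not her spectral/hp VIV solver, the sketch below expands a scalar quantity of interest in probabilists' Hermite polynomials of a single standard-normal input, computes the coefficients by Gauss-Hermite quadrature, and reads off the mean and variance; the toy function and the name pce_mean_variance are assumptions made for the example.

import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval
from math import factorial, sqrt, pi

def pce_mean_variance(f, order=6, nquad=40):
    """Hermite polynomial-chaos surrogate of y = f(xi), xi ~ N(0,1)."""
    x, w = hermegauss(nquad)          # nodes/weights for the weight exp(-x^2/2)
    w = w / sqrt(2.0 * pi)            # normalize so the weights integrate the N(0,1) density
    y = f(x)
    coeffs = []
    for k in range(order + 1):
        basis = np.zeros(k + 1)
        basis[k] = 1.0
        Hk = hermeval(x, basis)       # probabilists' Hermite polynomial He_k at the nodes
        coeffs.append(np.sum(w * y * Hk) / factorial(k))   # <y, He_k> / <He_k, He_k>
    coeffs = np.array(coeffs)
    mean = coeffs[0]
    var = sum(coeffs[k] ** 2 * factorial(k) for k in range(1, order + 1))
    return coeffs, mean, var

if __name__ == "__main__":
    f = lambda xi: np.sin(1.0 + 0.5 * xi)   # stand-in for a quantity of interest from a flow solver
    _, mean, var = pce_mean_variance(f)
    print(f"PCE mean ~ {mean:.6f}, variance ~ {var:.6f}")

With more random inputs the same recipe carries over by using multivariate Hermite polynomials and tensor-product (or sparse) quadrature rules.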

David Potere
Princeton University
Demography/Remote Sensing
Advisor: Burt Singer
Practicum: Oak Ridge National Laboratory
Contact: [email protected]
Research Synopsis: Space-borne imagery, when fused with existing census data, offers a means of reducing the uncertainty in the spatial distribution of global population — a particularly urgent problem within the developing world. The DOE's LandScan project based at ORNL is one example of such an effort. My aim is to use computational geography to improve such global demographic mapping efforts. There are correlations between population distribution and NASA's immense optical satellite remote sensing archive. The NASA archive is global in geographic extent, fine in resolution, covers many spectral bands, has revisit times as frequent as every three days, and stretches back to the early 1980s. Uncovering relationships between this imagery and global population will require a combination of machine learning, image processing, statistics, and data visualization.

Amber Sallerson
University of North Carolina – Chapel Hill
Applied Mathematics
Advisor: Cass Miller
Practicum: Oak Ridge National Laboratory
Contact: [email protected]
Research Synopsis: The problems that I am concerned with are of high complexity, involving multiple phases and species and multiple space and time scales, and are inherently heterogeneous and three-dimensional in nature. The advancement of fundamental models and simulators for such systems requires high-performance computation.

Michael Veilleux
Cornell University
Computational Fracture Mechanics
Advisor: Anthony Ingraffea
Practicum: Sandia National Laboratories – New Mexico
Contact: [email protected]
Research Synopsis: My interests are in optimizing the accuracy of structural integrity prognosis for American military and NASA aerospace vehicles. I want to advance the stochastic capabilities of computational fracture analysis programs for the purposes of developing more accurate, multi-scale structural damage state models of air vehicles. Given stochastic material geometries and properties, finite element analysis programs use extensive computation to produce multiple length-scale fatigue crack growth models in structures. Advanced technological sensors are currently being developed that will give local and global strain outputs from structural components on an aerospace vehicle during or after usage. By further enhancing the capabilities of the finite element fracture analysis programs currently being used at Cornell, I will translate the sensor data into continually updated, multi-scale damage state models of the structural components. If component damage states can be accurately modeled throughout the existence of structures, then the safety, life expectancy, and performance of aerospace vehicles can be greatly enhanced.

Allan Wollaber
University of Michigan
Nuclear Engineering, Fission Concentration
Advisor: Edward Larsen
Practicum: Los Alamos National Laboratory
Contact: [email protected]
Research Synopsis: My field of interest concerns the interaction of radiation with matter, or radiation transport. It is a multifaceted field, encompassing nuclear reactors, radiation cancer therapy, and industrial applications such as material thickness monitoring or oil well logging. Because working directly with radioactive materials can be hazardous or prohibitively expensive, scientists and engineers rely heavily on computational models of radiation transport. My research topic concerns a new technique by which we simulate radiation transport on computers.

Etay Ziv
Columbia University
Computational Biology
Advisor: Chris Wiggins
Practicum: Lawrence Berkeley National Laboratory
Contact: [email protected]
Research Synopsis: Biological networks, such as cellular and genetic networks, are of particular interest because of their potential to elucidate design principles that nature has employed to perform computations while maintaining robustness to noise and adaptability. Our lab studies biological networks using two approaches: statistical graph analyses and stochastic dynamic simulations. In both cases we aim to use computational technologies to discover underlying design principles.

John ZuHone
University of Chicago
Astrophysics
Advisor: Donald Lamb
Practicum: Lawrence Berkeley National Laboratory
Contact: [email protected]
Research Synopsis: Astrophysicists have known for many years that a non-luminous form of matter accounts for most of the mass in the universe. This mass resides in haloes in which the visible structures of the universe (galaxies, groups and clusters of galaxies, etc.) reside. Recent simulations of the formation of cosmological structure have suggested that the density profiles of these haloes are universal and largely independent of the parameters of the halo, such as its mass. It is thought that structure in the universe has formed by the merger of smaller structures to form larger objects, but this does not by itself account for the universal density profile. I am performing a series of N-body simulations using the FLASH astrophysics code which set up controlled mergers of dark matter haloes. In doing so, I hope to isolate particular effects or circumstances which may contribute to the particular form of density profile that cosmological simulations have observed. In addition, the simulations may shed some light on the exact shape of the universal profile, as different simulations have given similar but not equivalent results.
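Allan Wollaber's synopsis describes how scientists and engineers rely on computational models of radiation transport. The sketch below is a standard analog Monte Carlo calculation for neutral particles in a one-dimensional slab, with exponential flight distances, isotropic scattering or absorption at each collision, and tallies of transmission, reflection, and absorption. It is a generic textbook illustration rather than the new technique his synopsis alludes to, and the cross sections and the name slab_transmission are invented for the example.

import numpy as np

def slab_transmission(thickness, sigma_t, scatter_ratio, n_particles=100_000, seed=1):
    """Analog Monte Carlo for a 1-D slab: returns fractions transmitted, reflected, absorbed.

    Particles enter at x = 0 traveling in the +x direction; mu is the direction
    cosine, sigma_t the total macroscopic cross section (1/cm), and
    scatter_ratio = sigma_s / sigma_t the probability a collision scatters.
    """
    rng = np.random.default_rng(seed)
    tallies = {"transmitted": 0, "reflected": 0, "absorbed": 0}
    for _ in range(n_particles):
        x, mu = 0.0, 1.0
        while True:
            x += mu * rng.exponential(1.0 / sigma_t)   # distance to the next collision
            if x >= thickness:
                tallies["transmitted"] += 1
                break
            if x <= 0.0:
                tallies["reflected"] += 1
                break
            if rng.random() < scatter_ratio:
                mu = rng.uniform(-1.0, 1.0)            # isotropic scatter in slab geometry
            else:
                tallies["absorbed"] += 1
                break
    return {k: v / n_particles for k, v in tallies.items()}

if __name__ == "__main__":
    # Example: a slab five mean free paths thick with 30% scattering per collision.
    print(slab_transmission(thickness=5.0, sigma_t=1.0, scatter_ratio=0.3))

Setting scatter_ratio to zero reduces the calculation to pure exponential attenuation, which gives a quick sanity check: the transmitted fraction should approach exp(-sigma_t * thickness).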
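John ZuHone's synopsis rests on N-body simulation of merging dark matter haloes with the FLASH code. Purely as a toy picture of the underlying method, and not FLASH itself, the sketch below advances two crude particle clumps under softened direct-summation gravity with a kick-drift-kick leapfrog integrator; the particle counts, masses, softening length, and initial conditions are invented for the example, in units with G = 1.

import numpy as np

def accelerations(pos, mass, softening=0.05):
    """Softened direct-summation gravitational accelerations (G = 1)."""
    acc = np.zeros_like(pos)
    for i in range(len(mass)):
        d = pos - pos[i]                         # vectors from particle i to all others
        r2 = np.sum(d * d, axis=1) + softening**2
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                          # no self-force
        acc[i] = np.sum((mass * inv_r3)[:, None] * d, axis=0)
    return acc

def leapfrog(pos, vel, mass, dt=0.01, steps=1000):
    """Kick-drift-kick leapfrog; returns final positions and velocities."""
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc
    return pos, vel

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n = 64                                       # particles per clump (toy value)
    # Two crude "haloes": equal-mass particle clumps approaching each other head-on.
    pos = np.vstack([rng.normal([-1.0, 0.0, 0.0], 0.2, (n, 3)),
                     rng.normal([+1.0, 0.0, 0.0], 0.2, (n, 3))])
    vel = np.vstack([np.tile([0.2, 0.0, 0.0], (n, 1)),
                     np.tile([-0.2, 0.0, 0.0], (n, 1))])
    mass = np.full(2 * n, 1.0 / (2 * n))
    pos, vel = leapfrog(pos, vel, mass)
    print("center of mass after the merger:", pos.mean(axis=0))

Production codes replace the O(N^2) force loop with tree or particle-mesh gravity, which is what makes halo-merger simulations at realistic particle counts tractable.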

THIRD YEAR FELLOWS | SECOND YEAR FELLOWS

Joshua Adelman Jeff Hammond David Markowitz Mark Berrill Ashlee Ford David Rogers University of California – Berkeley University of Chicago Princeton University Colorado State University University of Illinois University of Cincinnati Biophysics Theoretical & Computational Neurobiology Electrical &Computer Engineering Chemical Engineering Physical Chemistry Advisor: Computational Chemistry Advisor: Advisor: Advisor: Advisor: George Oster Advisor: David Tank Jorge Rocca Richard Braatz Thomas Beck Practicum: David Mazziotti Practicum: Practicum: Practicum: Practicum: Lawrence Livermore Practicum: Los Alamos Los Alamos Brookhaven National Laboratory Oak Ridge National Laboratory National Laboratory Pacific Northwest National Laboratory National Laboratory Contact: Contact: Contact: National Laboratory Contact: Contact: [email protected] [email protected] [email protected] Contact: [email protected] [email protected] [email protected] Kristi Harris Ariella Sasson Zlatan Aksamija Peter Norgaard Arnab Bhattacharyya University of Maryland – Rutgers University University of Illinois Asegun Henry Princeton University Massachusetts Institute of Technology Baltimore County Computational Biology Electrical Engineering Massachusetts Institute of Technology Computational Plasma Dynamics Computer Science Physics Advisor: Advisor: Mechanical Engineering Advisor: Advisor: Advisor: Wilma Olson Umberto Ravaioli Advisor: Clarence Rowley Madhu Sudan Philip Rous Practicum: Practicum: Gang Chen Practicum: Contact: Practicum: Sandia National Laboratories – Los Alamos Practicum: Lawrence Berkeley [email protected] Sandia National Laboratories – New Mexico National Laboratory Sandia National Laboratories – National Laboratory New Mexico Contact: Contact: New Mexico Contact: Jenelle Bray Contact: [email protected] [email protected] Contact: [email protected] California Institute of Technology [email protected] [email protected] Computational Biophysical Chemistry Michael Sekora Jordan Atlas Natalie Ostroff Advisor: David Ketcheson Princeton University Cornell University Kevin Kohlstedt University of California – San Diego William Goddard III University of Washington Continuum Mechanics, PDE Chemical Engineering Northwestern University Bioengineering Contact: Applied Mathematics Advisor: James Stone Advisor: Bio-Polymer/Soft Matter Computation Advisor: [email protected] Advisor: Michael Shuler Advisor: Jeff Hasty Randall LeVeque Practicum: Practicum: Monica Olvera de la Cruz Practicum: Julianne Chung Practicum: Lawrence Berkeley Los Alamos Practicum: Oak Ridge National Laboratory Emory University Sandia National Laboratories – National Laboratory National Laboratory Lawrence Berkeley Contact: Computational Mathematics New Mexico Contact: Contact: [email protected] National Laboratory [email protected] Advisor: Contact: [email protected] Contact: James Nagy [email protected] [email protected] Christopher Schroeder Practicum: Benjamin Smith Christopher Carey University of California – San Diego Lawrence Berkeley Brian Levine Harvard University University of Wisconsin Miler Lee Physics National Laboratory Cornell University Experimental High Energy Physics Plasma Physics University of Pennsylvania Advisor: Contact: Transportation Systems Advisor: Advisor: Masahiro Morii Genomics & Computational Biology Julius Kuti [email protected] Advisor: Carl Sovinec Practicum: Advisor: Practicum: Linda Nozick Practicum: Junhyong Kim Brookhaven National Laboratory Tal Danino Contact: Lawrence Berkeley Lawrence Berkeley Practicum: 
Contact: University of California – San Diego [email protected] National Laboratory National Laboratory Sandia National Laboratories – [email protected] Bioengineering Contact: Contact: [email protected] New Mexico Advisor: Oaz Nir [email protected] Contact: Stefan Wild Jeff Hasty Massachusetts Institute of Technology [email protected] Cornell University Contact: Computational Biology Benjamin Sonday Ethan Coon Operations Research [email protected] Advisor: Princeton University Columbia University Jeremy Lewi Advisor: Bonnie Berger Applied & Computational Mathematics Applied Mathematics Georgia Institute of Technology Christine A. Shoemaker Jack Deslippe Contact: Advisor: [email protected] Salvatore Torquato Advisor: Neuroengineering Practicum: University of California – Berkeley Marc Spiegelman Contact: Advisor: Argonne National Laboratory Physics Carolyn Phillips [email protected] Practicum: Robert Butera Contact: Advisor: Los Alamos Practicum: [email protected] Steven Louie University of Michigan National Laboratory Lawrence Livermore Contact: Applied Physics Contact: National Laboratory [email protected] Advisor: [email protected] Contact: Sharon Glotzer [email protected] John Evans Contact: University of Texas [email protected] Computational & Applied Mathematics Advisor: Alejandro Rodriguez Thomas Hughes Massachusetts Institute of Technology Practicum: Condensed Matter Theory Sandia National Laboratories – Advisor: New Mexico Steven G. Johnson Contact: Contact: [email protected] [email protected] 58 FIRST YEAR FELLOWS

Matthew Adams
University of Washington
Computational Electromagnetics
Advisor: Vikram Jandhyala
Contact: [email protected]

Paul Loriaux
University of California – San Diego
Computational Biology
Advisor: Alexander Hoffmann
Contact: [email protected]

Paul Sutter
University of Illinois
Cosmology
Advisor: Paul Ricker
Contact: [email protected]

Gregory Crosswhite
University of Washington
Physics
Advisor: Dave Bacon
Contact: [email protected]

James Martin
University of Texas
Computational & Applied Mathematics
Advisor: Omar Ghattas
Contact: [email protected]

Cameron Talischi
University of Illinois
Topology Optimization
Advisor: Glaucio Paulino
Contact: [email protected]

Hal Finkel
Yale University
Physics
Advisor: Richard Easther
Contact: [email protected]

Geoffrey Oxberry
Massachusetts Institute of Technology
Chemical Kinetics/Transport Phenomena
Advisor: William Green
Contact: [email protected]

John Ziegler
California Institute of Technology
Aeronautics
Advisor: Dale Pullin
Contact: [email protected]

Robin Friedman
Massachusetts Institute of Technology
Computational & Systems Biology
Advisor: Christopher Burge
Contact: [email protected]

Troy Perkins
University of California – Davis
Theoretical Ecology
Advisor: Alan Hastings
Contact: [email protected]

Steven Hamilton
Emory University
Computational Mathematics
Advisor: Michele Benzi
Contact: [email protected]

Matthew Reuter
Northwestern University
Theoretical Chemistry
Advisor: Mark Ratner
Contact: [email protected]

Joshua Hykes
North Carolina State University
Nuclear Engineering
Advisor: Dmitriy Anistratov
Contact: [email protected]

Sarah Richardson
Johns Hopkins University School of Medicine
Human Genetics & Molecular Biology
Advisor: Joel Bader
Contact: [email protected]

Milo Lin
California Institute of Technology
Physics
Advisor: Ahmed Zewail
Contact: [email protected]

Danilo Scepanovic
Harvard/Massachusetts Institute of Technology
Signal Processing/Modeling
Advisor: Richard Cohen
Contact: [email protected]
