

NAS OVERVIEW AND HISTORY: A Look at NAS’ 25 Years of Innovation and a Glimpse at the Future

In the mid-1970s, a group of Ames aerospace researchers began to study a highly innovative concept: NASA could transform U.S. aerospace R&D from the costly and time-consuming wind tunnel-based process to simulation-centric design and engineering by executing emerging computational fluid dynamics (CFD) models on supercomputers at least 1,000 times more powerful than those commercially available at the time. In 1976, Ames Center Director Dr. Hans Mark tasked a group led by Dr. F. Ronald Bailey to explore this concept, leading to formation of the Numerical Aerodynamic Simulator (NAS) Projects Office in 1979.

At the same time, a user interface group was formed consisting of CFD leaders from industry, government, and academia to help guide requirements for the NAS concept and provide feedback on evolving computer feasibility studies. At the conclusion of these activities in 1982, NASA management changed the NAS approach from a focus on purchasing a specially developed supercomputer to an on-going Numerical Aerodynamic Simulation Program to provide leading-edge computational capabilities based on an innovative network-centric environment. The NAS Program plan for implementing this new approach was signed on February 8, 1983.

Grand Opening

As the NAS Program got underway, its first supercomputers were installed in Ames’ Central Computing Facility, starting with a Cray X-MP-12 in 1984. However, a key component of the Program plan was to create the NAS facility: a state-of-the-art supercomputing center as well as a multi-disciplinary innovation environment, bringing together CFD experts, computer scientists, visualization specialists, and network and storage engineers under one roof. Groundbreaking for the NAS facility took place on March 14, 1985. In 1986, NAS transitioned from a projects office to a full-fledged division at Ames. In January 1987, NAS staff and equipment were relocated to the new facility, which was dedicated on March 9, 1987. At the grand opening, excitement levels were high about what had been accomplished so far, and about what was yet to come.

Pioneering Achievements

From its earliest years, NAS has achieved many firsts in computing and in enabling NASA’s challenging science and engineering missions, including deployment of


the first Cray-2 in 1985, which attained unprecedented performance on real CFD codes. NAS was also the first facility to put the UNIX operating system on supercomputers, and the first to implement TCP/IP networking in a supercomputing environment, linking to users across the country. In 1984, NAS became the first organization to connect supercomputers and workstations together to distribute computation and visualization (what is now known as the client-server model). In another innovation, NAS developed the first UNIX-based hierarchical mass storage system, NAStore. NAS was the first customer for Silicon Graphics Inc. (SGI), beginning an enduring partnership that remains important today, and that led to the successful NASA-SGI-Intel collaboration on the Columbia supercomputer in 2004.

NAS has also been a leader in visualization, benchmarking standards, and job management. The graphics tools PLOT3D and FAST both had vast user bases and won major software awards from NASA. The NAS Parallel Benchmarks (NPB), which became the industry standard for objective evaluation of parallel computing architectures, are still widely used today. And since its first release in 1994, the Portable Batch System (PBS) has been commercialized and adopted broadly in the supercomputing community.

Throughout its history, NAS has also been a pioneer and committed partner in both high-fidelity modeling tool development and applications. Redesign of the Shuttle main engine, failure of the Challenger O-ring, wing stall and aerodynamic loads for fighter airplanes, thrust loss in vertical take-off jets—all demonstrated the potential of CFD supported by faster supercomputing to help save lives and millions of dollars. Many of the award-winning codes that supported these analyses were also developed at NAS, including INS3D, OVERFLOW, and Cart3D. These codes have continued to evolve through the years, and remain workhorses for NASA missions today.

New Directions

With a wealth of information technology expertise and a visionary spirit, NAS helped pioneer new technologies, including computational nanotechnology and grid computing, starting in the mid-1990s. NAS even set up one of the earliest websites in the world in 1993. In the late 1990s, NASA’s Shuttle flights began to seem routine, and aeronautics research and modeling lost momentum in the Agency, leading to diminished investment in supercomputing capabilities and CFD research. Nevertheless, NAS managed to sustain its international leadership in single-system image supercomputing with funding from an information technology R&D project and NASA’s Earth Science Enterprise. In 1995, NAS changed its name to the Numerical Aerospace Simulation Division, and in 2001, NAS became the NASA Advanced Supercomputing Division—the name it retains today.

Rediscovering the Mission

Starting in 2001, NASA conducted a series of major engineering studies, including the X-37 vehicle atmospheric reentry and the Shuttle’s hydrogen fuel line flowliner cracking, each of which required most of NAS’ computational resources for three to six months, significantly delaying other important simulations. Following the Columbia tragedy in February 2003, the Agency again turned to NAS CFD experts for foam debris transport analysis, further straining NAS computational resources in an effort to determine the physical cause of the accident. The Earth Science Enterprise stepped in once more to support the purchase of Kalpana, NAS’ first SGI Altix system, which enabled the even larger computational modeling effort required to improve the Shuttle design for a successful Return to Flight.

By June 2004, the importance of supercomputing-enabled high-fidelity modeling and simulation for aerospace and other missions was clear. Purchase of the Columbia supercomputer was approved by NASA and Congress in record time, and the system returned supercomputing leadership to the U.S. after two and a half years with the Japanese Earth Simulator having the highest computational performance. More importantly, NASA was able to pursue its new Space Exploration mission and simultaneously conduct leadership research in Earth science, astrophysics, and aerospace modeling. In October 2005, NASA’s long-term commitment to supercomputing at NAS was assured with the formation of the High End Computing project as the first asset in the Strategic Capabilities Assets Program (SCAP). NAS remains widely recognized as the premier supercomputing facility for the Agency, and vital to each of NASA’s mission directorates. In 2007, the Aeronautics Research and Exploration Systems Mission Directorates procured additional computing resources and placed them at NAS to augment their SCAP allocations. NAS facilities and staff are playing key roles in Shuttle operations, designing the Ares and Orion space exploration vehicles, advancing Earth and space science understanding, and conducting aeronautics research in all flight regimes.

Future

While reflecting on NAS’ 25-year legacy is inspiring, current NAS and Agency leadership is carrying on the tradition of the visionaries of a quarter century ago. The newly-installed Hyperwall-2 graphics and data analysis system—currently the world’s highest resolution display at a quarter-billion pixels—will allow supercomputer users to view and explore their computational results in unprecedented detail and speed. Visualization will become more tightly integrated into the traditional environment of computing, storage, and networks to provide a more powerful tool to scientists and engineers. The NAS facility is being significantly upgraded and expanded in preparation for installation of new generations of leadership-class supercomputers over the coming months and years to address NASA’s increasingly challenging modeling and simulation requirements. Emerging innovative architectures and systems will be strategically leveraged to provide the most effective supercomputing platforms and environments. Advanced algorithms and applications will be developed in lockstep to exploit these petascale systems. Partnerships with other NASA centers, government laboratories, the computer industry, and academia are being aggressively developed to continue Ames and the NAS facility’s proud reputation of being the “go-to” place for supercomputing and advanced simulations. Discussions are underway regarding bleeding-edge concepts such as 10-petaflop systems and 1-exabyte data archives at NAS, connected to the science and engineering community via 1 terabit-per-second links, enabling near-real-time aerospace design and deeper understanding of our planet and the universe.

At NAS, we honor our past most sincerely by continually pursuing more impressive achievements in the future. And as these triumphs attract a new generation of innovators and leaders to NAS, soon enough their aspirations and achievements will exceed our own.

TIMELINE (1983–1988)

A visionary team of managers, engineers, and CFD researchers develops the NAS Program plan, which leads to the beginning of a NASA New Start Program in October. One of the primary goals included in the plan is to provide a national computational capability as a necessary element in ensuring continued leadership in CFD and related disciplines.

NAS purchases its first hardware—two DEC VAX 11/750s named Orville and Wilbur, linked by Ethernet and hyperchannel networks. Engineers design a network-centric environment with a UNIX operating system and TCP/IP protocol.

25 Silicon Graphics Inc. (SGI) IRIS 1000 graphics terminals arrive, which soon make a significant impact on post-processing and visualization of CFD results. This first SGI purchase marks the beginning of a successful partnership that remains important today.

The first 4-processor Cray-2 system, Navier (1.95 Gflops peak), is delivered, which later helps NAS achieve its initial performance goal of a sustained 250 Mflops rate—the world’s most powerful supercomputer to date.

Taking advantage of the Cray-2’s large memory capacity and speedy calculations, General Dynamics researchers apply CFD methods to obtain Euler and Navier-Stokes solutions for a complete, realistic F-16 fighter configuration.

The NAS Program officially goes into production phase on July 21 at 5 a.m. with Cray-2 links open to 27 remote computer locations across the country, catching nationwide news media attention.

The 90,000 square-foot NAS facility opens its doors on March 1, 1987, and is dedicated by NASA Administrator James C. Fletcher eight days later.

A second Cray-2, Stokes (same performance characteristics as Navier), arrives, doubling NAS’ computing capacity.


The construction contract for the NAS facility is awarded on March 14, marking the groundbreaking of the building.

Development of the Graphics Animation System (GAS) begins, providing useful animation of particle traces so they appear to move through a flow field. Example applications include pressure distributions both inside the SSME, vortex flows over the wing surface of F-16 aircraft, and over the high-speed National Aerospace Plane.

Researchers run CFD calculations on the Cray-2s to investigate V/STOL (vertical and short take-off and landing) flows to better understand the interaction of the propulsive jets with V/STOL aircraft and the ground, among other phenomena.

The Remote Interactive Particle-tracer (RIP) application is developed—a unique program to combine capabilities of SGI IRIS graphics workstations and the Cray-2 to reduce demands on computational resources. The workstations perform excellent 3D graphics but lack the Cray-2’s memory and speed to process large datasets—RIP exploits the strengths of both systems.

In collaboration with Rocketdyne, and supported by NASA Marshall’s Development Engine Project Office, researchers generate CFD solutions for the Space Shuttle Main Engine (SSME) hot gas manifold flow, using newly developed INS3D software. This first known application of CFD to rocket propulsion systems development led to replacement of the original three-duct hot gas manifold design. The more reliable, better performing two-duct design made its first flight on Shuttle Discovery’s 20th mission (STS-70) in July 1995.

NASnet, a high-speed, long-haul communications subsystem using TCP/IP, connects users across the U.S. to supercomputing resources at NAS. This is the first instance of a supercomputer being connected to a TCP/IP network without using a front-end system.

[Figure: Map of early NASnet user sites across the U.S.—industry (Boeing, Grumman, Lockheed, McDonnell Douglas, General Dynamics, Northrop, Rocketdyne, Rockwell, Pratt-Whitney, and others), universities (Penn State, Iowa State, CTR at Stanford, U. of Arizona, Cal St. Long Beach), and NASA centers (Ames, Langley, Lewis, Goddard, Marshall, Johnson, Kennedy, JPL).]

NAS partners with the Defense Advanced Research Projects Agency (DARPA) and the University of California, Berkeley in the RAID (Redundant Array of Inexpensive Disks) project, which was completed in 1992, resulting in the RAID storage technology standard in use today.

A single-processor Cray X-MP-12 (210.5 Mflops peak performance) arrives at building 233a at NASA Ames (where NAS computer hardware was housed prior to construction of the NAS facility). This is the first customer installation of a UNIX-based Cray supercomputer, and the first to provide a client-server interface with UNIX workstations.

NASA Langley researchers use NAS systems to understand and analyze events in the Shuttle Challenger disaster, and in particular, failure of the O-ring; this work results in the largest 3D structural simulation in the world to date.

TIMELINE (1988–1992)

NAS installs two StorageTek 4400 cartridge tape robots (each with a capacity of about 1.1 TB), reducing tape retrieval time from 4 minutes to 15 seconds. In 1990, this new hardware becomes a part of NAStore, the first UNIX-based hierarchical mass storage system.

A 16,000-processor Connection Machine model CM-2, Pierre (14.34 Gflops peak), is installed, initiating NAS computer science research in the area of massively parallel computing.

The original NAS Parallel Benchmarks (NPB) are developed in response to the program’s increasing involvement with massively parallel architectures. The synthetic application benchmarks mimic computation and data movement of large-scale CFD applications, providing objective evaluation of parallel computing architectures. NPBs become an industry standard and are still widely used today.

Installation of NAS-developed AEROnet is underway, the first high-speed wide-area network (WAN) connecting supercomputing resources to remote customer sites. This new network provides increased bandwidth to remote users, keeping NAS on the forefront of networking technology while implementing open systems standards.

NAS supports the U.S. Navy and McDonnell Aircraft Company in demonstrating the effectiveness of applied CFD in meeting immediate engineering design objectives—to reduce the number of expensive, time-consuming wind tunnel tests required. This effort focuses on developing methods and demonstrating the ability of CFD to predict aerodynamic loads on the F/A-18 vehicle.


Research scientists successfully run incompressible Navier-Stokes simulations of isotropic turbulence—the first application code to demonstrate the effectiveness of Lagrange, the highly parallel Intel iPSC/860 system (128 processors, 7.68 Gflops peak), for very large CFD simulations—the largest having a mesh size of about 16 million grid points.

A 4-processor Convex 3240 system with a peak performance of 200 Mflops is installed.

NAS deploys UltraNet technology, taking a giant step forward in network speed. Initial tests confirm that UltraNet can sustain data movement from a Cray-2 to a network frame buffer at speeds up to 768 Mbps.

[Figure: Initial high-speed LAN topology—HSP-1 and HSP-2 supercomputers connected through HSX hubs over Ultra 1000 fibers to mini-supercomputers, workstations, a frame buffer, and a high-speed graphics display.]

NAS makes extensive modifications to GRIDGEN, including the addition of four new modules. The multi-program software package, originally developed by Wright Patterson Air Force Base and General Dynamics, is used at NAS for modeling complex-structured aerodynamic grid configurations.

Reynolds, an 8-processor Cray Y-MP (2.54 Gflops peak), arrives to replace a Cray-2 now used by the Department of Defense for classified computing at NAS. Reynolds meets NAS’ goal of a sustained 1 Gflops performance rate for CFD applications.

The OVERFLOW code is released outside of NASA. This is the first general-purpose NASA CFD code for overset (Chimera) grid systems—resulting in a production-level CFD capability for complex geometries, and initial moving-body simulations.

NAS develops the Virtual Windtunnel (VWT), applying virtual reality technology to visualization of CFD simulation results. This highly interactive 3D tool allows users to explore and analyze complex structures found in time-varying fluid flow simulations—for example, momentum and density characteristics on a Harrier Jump Jet model.

TIMELINE (1993–1998)

The NAS networking group successfully connects the internal NASA AEROnet wide-area network and DARWINnet, a secure local-area network at Ames, to facilitate remote access to wind tunnel and other aircraft design tests across the Agency.

NAS and the National Energy Research Supercomputer Center (NERSC) of Lawrence Livermore National Laboratory release the first phase (alpha test release) of the Portable Batch System (PBS), providing improved batch job scheduling and resource management for massively parallel computers, clustered workstations, and traditional supercomputers. PBS is later commercialized in 1998.

The Flow Analysis Software Toolkit (FAST) team wins the NASA Software of the Year Award for their work on the highly modular software environment for visualizing and analyzing CFD scalar and vector data. FAST builds on the earlier PLOT3D program.

A 16-processor Cray C90 (15.36 Gflops peak) arrives at the NAS facility, just hours after removal of the Cray-2. This new system, named von Neumann, allows NAS to run many more multitasking jobs around the clock, some as large as 7,200 MB.

A 160-processor IBM SP2 configuration called Babbage is installed at the NAS facility and starts off with a user base of approximately 300. This system has a peak of 42.56 Gflops.

To prepare for a January 2000 test flight over the Pacific, NASA Langley researchers use a NAS Cray system to simulate the scramjet-powered Hyper-X vehicle’s unprecedented horizontal separation from its Pegasus booster rocket—a risky Mach-7 maneuver never before attempted.

A 64-processor SGI Origin 2000 testbed system named Turing—the first of its kind to be run as a single system—is installed. This joint purchase by the Ames Information Technology Program and NASA’s Data Assimilation Office is part of a collaborative plan for systems software development and code optimization.
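For readers unfamiliar with batch scheduling, the kind of job script that PBS popularized looks roughly like the sketch below. This is an illustrative example only: the `#PBS` directives and the `qsub` submission command are standard PBS, but the job name, resource amounts, and solver binary are hypothetical, and exact resource syntax varies across PBS versions and installations.

```shell
#!/bin/sh
# Illustrative PBS batch script; resource names/limits and the solver are hypothetical.
#PBS -N cfd_run                # job name shown in the queue
#PBS -l ncpus=16               # request 16 CPUs
#PBS -l walltime=02:00:00      # two-hour wall-clock limit
#PBS -j oe                     # merge stderr into stdout
#PBS -o cfd_run.log            # write the combined log here

cd "$PBS_O_WORKDIR"            # start in the directory qsub was invoked from
./flow_solver input.grd        # run the (hypothetical) CFD solver
```

A user submits this with `qsub cfd_run.pbs`; PBS then holds the job until the requested CPUs and wall-clock allocation are available, which is what lets one machine fairly share time among many large CFD runs.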


NAS’ first website goes live—Internet “surfers” can now visit the NAS facility via hyperlinks on the World Wide Web, a wide-area hypermedia information retrieval system.

The INS3D code, designed to solve incompressible Navier-Stokes equations in 3D for both steady-state and time-varying flows, wins the NASA Software of the Year Award.

A 1992 NAS-produced video covering the Division’s services and projects wins a nationally recognized Silver Telly Award—the highest honor in the video industry for non-network and cable films.

The Ames computational molecular nanotechnology group is created, securing $1 million for the first-ever nanotechnology program at a federally funded laboratory.

NAS and NASA Langley are co-creating the NASA Metacenter, the Agency’s first successful endeavor to dynamically distribute real-user production workloads across supercomputing resources at geographically distant locations. The project involves two IBM SP2s—one at NASA Langley, the other at NAS.

The PLOT3D team is awarded the fourth largest prize ever given by the NASA Space Act Program for development of their software program, which has revolutionized scientific visualization and analysis of 3D CFD solutions.

The NASA Ames computational molecular nanotechnology group wins the Richard P. Feynman Prize in Nanotechnology for Theoretical Work.

NAS researchers are developing CAPO, an automated tool that helps speed up and simplify the tedious process of parallelizing NASA’s large serial codes for distributed-memory machines. Among the success stories, NASA Goddard uses CAPO to parallelize one of their codes, enabling them to run larger cloud simulations much faster than before.

NAS embarks on the ambitious Information Power Grid (IPG) project to seamlessly integrate computing systems, data storage, networks, and analysis software across the Agency. Among the IPG’s primary goals is enabling researchers to spend more time conducting research and less time dealing with the computing resources required.

Taking advantage of the multiprocessor architecture of the SGI Origin 2000s at both NAS and the National Center for Supercomputing Applications, NAS researchers are exploring the potential of using nanotubes for tiny electromechanical sensors and semiconductor devices.

TIMELINE (1999–2002)

NAS aerospace engineers develop the AeroDB software system to automate CFD processes. With its database-centric approach, AeroDB provides users with a convenient central warehouse for investigating simulation results in real time. Since its launch in September, it has been used during several Shuttle missions to conduct rapid-turnaround engineering analyses.

NAS’ data analysis group develops Collaborative Virtual Mechanosynthesis, an application for distributive, collaborative, remote computational steering of atoms and interaction with their structures.

The life-saving DeBakey Ventricular Assist Device (VAD), a collaborative effort between researchers at NASA Ames, NASA Johnson, and MicroMed Technology, Inc., is named NASA’s Commercial Invention of the Year. By this date, the device—improved using SSME technology and CFD, coupled with NAS’ HEC resources—has been implanted in 115 heart patients in Europe with no reported failures.

Using NAS’ 512-processor SGI Origin 2800, Lomax, NASA Langley researchers run CFD simulations of the NASA/Boeing X-37 experimental launch vehicle—which flies at speeds over Mach 20—to analyze thermodynamic conditions of the vehicle’s reentry through the atmosphere.

The NASA Research and Education Network (NREN) continues to serve as a global leader in deploying new multicast routing and forwarding protocols used for sending information across the Internet.

The Cart3D team wins the 2002 NASA Software of the Year Award. The software package, developed by researchers from Ames and the Courant Institute at New York University, streamlines the design and analysis process for complex configurations such as aerospace vehicles.


NAS visualization experts develop an innovative visualization system called the Hyperwall, including a graphics cluster and LCD panel wall, which enables scientists to view complex datasets and series of computational results across a dynamic, linked array of 49 displays.

Multi-Level Parallelism (MLP), a new method for application parallelization, is developed by NAS computer scientists to take advantage of shared-memory, multiprocessor computing architectures such as the SGI Origin 2000 housed at NAS.

A 1,024-processor SGI Origin 3800, Chapman (819.2 Gflops peak), is installed.

Computational chemistry is being used to calculate the global warming potential of industrial compounds being released into Earth’s atmosphere—exploring phenomena impossible to capture in laboratory experiments due to the instability of crucial compounds at Earth’s surface.

Version 1.0 of the NASA-developed Chimera Grid Tools (CGT) software package is released under a beta testing agreement, and used in a wide variety of disciplines including aerospace, marine, automotive, environmental, and sports. This comprehensive tool for overset grid generation has since been licensed to more than 500 user organizations at different NASA centers, government labs, industries, and universities.

NAS visualization scientists develop two new algorithms, GEL and Batchvis, to help discern unsteady flow features not readily distinguishable or reproducible with traditional software packages. These algorithms are being applied to the YAV-8B Harrier jet to visualize the vehicle’s exhaust flows and resulting ground vortices during vertical take-offs—phenomena that can lead to rapid loss of engine thrust and flight failure.

TIMELINE (2003–2005)

Since the Shuttle Columbia accident in February 2003, NAS resources have largely been dedicated to analysis of the accident’s cause. These time-critical analyses underscore the Agency’s requirement for expanded supercomputing power to complete human spaceflight analyses much more quickly, while still providing sufficient resources to continue NASA’s broad modeling and simulation activities.

NASA-developed HiMAP (High-fidelity Multidisciplinary Analysis Process) software is applied in a joint project with the U.S. Navy to understand phenomena involved in abrupt wing stall of an F-18 E/F aircraft. To date, calculations include more than 17 million grid points involving 104 processors on Lomax.

The first two SGI Altix 512-processor systems of the Columbia supercomputer (63 Tflops peak) are installed, with all 10,240 processors operational by October. Columbia captures the second highest spot on the TOP500 list of the fastest supercomputers in the world, with a LINPACK rating of 51.9 Tflops.

The finite-volume General Circulation Model (fvGCM) runs on Columbia throughout the 2004 hurricane season, validating its real-time, numerical weather prediction capabilities to improve hurricane tracking and intensity forecasts.

The MAP’05 team (under the auspices of the Modeling, Analysis, and Prediction Program) implements two high-resolution versions of the Goddard Earth Observing System (GEOS), a global atmospheric model that enables “real-time” prediction of Atlantic tropical cyclones and other meteorological events. These models run 4 times daily for over 6 months on one of Columbia’s 512-processor nodes to produce hurricane forecasts for immediate integration into the national ensemble forecast.

The Simulation Assisted Risk Assessment (SARA) team gets into full swing with Constellation-related work, incorporating CFD analyses performed on Columbia into a probabilistic risk assessment model for Ares I crew launch vehicle (CLV) ascent. Ares I will launch Orion, the crew exploration vehicle (CEV) that will take over NASA human spaceflight missions after the Space Shuttle is retired in 2010.


The High End Computing Revitalization Task Force (HECRTF) is established in March to fuel improvements in U.S. HEC capacity, capability, and accessibility. NAS is heavily involved, looking for ways to help the U.S. gain a competitive, cost-effective edge over Japan’s Earth Simulator, which has ranked highest on the TOP500 list of fastest supercomputers for three years running.

The first SGI Altix system, a 512-processor single-system image supercomputer, is installed. The new system, named Kalpana after astronaut and former NAS colleague Kalpana Chawla, is being used for a joint effort between NASA Headquarters, the Jet Propulsion Laboratory, Ames, and Goddard to deliver high-resolution ocean analyses for the ECCO (Estimating the Circulation and Climate of the Ocean) model. The work dramatically accelerates development of highly accurate analyses of global ocean circulation and sea-ice dynamics.

NAS scientists are extensively involved in the Agency’s Return to Flight (RTF) activities, employing state-of-the-art CFD codes to simulate steady and unsteady flow fields around the Shuttle during ascent. NAS’ findings help make crucial design changes to the Shuttle for future flights.

NAS CFD experts are performing high-fidelity computations on Columbia to analyze the Shuttle orbiter’s liquid hydrogen feedline flowliner. Several computational models are being used to characterize unsteady flow features, which have been identified as major contributors to cyclic loading that damages the flowliner over time.

The Integrated Modeling and Simulation project’s Entry Descent and Landing team greatly streamlines efficiency of vehicle design work for the Agency’s new Constellation Program, created in May 2004. By coupling engineering methods with high-fidelity aerodynamic and aerothermal simulations run on Columbia, engineers are able to analyze six potential design shapes for the Orion crew exploration vehicle within a three-month period—saving years of design work and demonstrating the value of using high-fidelity simulations early in the design cycle.

NREN completes contract negotiations with National Lambda Rail (NLR) and Level3 Communications, Inc., enabling them to establish 10 Gigabit-per-second wide-area networking between Ames and other NASA field centers. Such robust connection speed is of paramount importance for optimal utilization of Columbia, and for transferring massive datasets required to solve today’s challenging problems.

TIMELINE (2006–2008)

The Division releases the NAS Technology Refresh (NTR) request for information in September, initiating the process of enhancing NASA’s high-end computing capacity to keep pace with the Agency’s growing HEC requirements. NAS’ ultimate goal is to increase total computing capacity at least four-fold over Columbia’s existing size by the end of FY09.

Two new systems arrive as part of the NTR effort—Schirra, a 640-core IBM POWER5+ (4.8 Tflops peak), and a 2,048-core SGI Altix 4700 (13 Tflops peak, Columbia node 22), which is the largest SSI computer to date. NAS staff members are evaluating the two architectures to determine their potential to help meet the Agency’s future computing needs.

Aerospace engineers from NASA Johnson and Ames are relying heavily on Columbia to support redesign efforts for the Shuttle’s external tank, consuming more than 600,000 processor-hours on the task to date.

NAS visualization experts assist NASA Goddard researchers in modeling the spectacular merger of two black holes using a 2,048-processor component of Columbia. The Goddard simulation represents one of the largest single calculations ever run on Columbia.

NAS visualization team members enhance their concurrent visualization framework to extract complicated vortex core isosurfaces directly on Columbia, shedding new light on phenomena associated with V-22 tiltrotor flight. Without concurrent data extraction, storing and interactively exploring the full data from every simulation timestep (43 terabytes total) would not be possible.

In support of NASA Kennedy ground operations, NAS simulation and modeling experts are conducting time-accurate simulations of the launch pad with double and single solid rocket booster configurations (to mimic the design of the Ares I-X) to analyze effects of ignition over-pressurization on the vehicle.

2006 ...... 2007 ...... 2008

The Army Research Laboratory selects the NAS facility as its new operations base for the Army High Performance Computing Research Center (AHPCRC). NAS now houses a Cray X1E and a Cray XT3, plus other supporting equipment. NASA Ames, Stanford University, High Performance Technologies, Inc., and several minority university partners serve as the consortium for AHPCRC.

To meet increasing computational demands, the Agency's Aeronautics Research and Exploration Systems Mission Directorates fund the purchase of additional computing resources at NAS. The ESMD systems are two 1,024-core SGI Altix 4700s with the latest Montvale 2 processors, which become nodes 23 and 24 of Columbia (now 88.9 Tflops). The ARMD system is a 4,096-core SGI Altix ICE cluster (43.5 Tflops peak) with Xeon quad-core processors.

After months of performance testing and analysis, NAS announces the outcome of its large technology refresh effort to carry the Agency through the next several years of high-end computing: a 20,480-core SGI ICE cluster system with a peak performance of 245 Tflops. The system will be installed in summer 2008, initially augmenting and eventually replacing Columbia.

NAS modeling and simulation experts are using Columbia to determine whether the Vehicle Assembly Building (VAB) at NASA Kennedy, designed for the Shuttle Program, is properly equipped to safely handle storage of the significantly greater number of solid rocket booster segments required for next-generation launch vehicles.

The CEV Aerosciences Project (CAP) Aerodynamics team begins using NASA's Cart3D and OVERFLOW codes on Columbia to generate aerodynamic performance databases for each major design iteration of the Orion launch abort vehicle (LAV). These results enable the team to examine critical performance factors such as stability, control, and separation dynamics for the vehicle, and to recommend specific modifications to the design.

NAS visualization, applications, and facilities teams work together to complete construction of Hyperwall-2, a follow-on version of the Hyperwall visualization system that enables scientists to display high-fidelity simulation results in a single view. This new version comprises 128 state-of-the-art graphics processing units and 1,024 cores, providing 74 teraflops peak processing power and a total of 245 million pixels, the highest resolution display in the world today.

NAS SUPERSTARS

Among the large cast of important players deserving applause for their contributions to the reputation and success of NAS over the past quarter-century, these 25 individuals are spotlighted for their vision, guidance, and influence on the organization's achievements in supercomputing and computational science for NASA and the nation.

Randy Graves, NASA Director of Aerodynamics 1984–1989: As the first Headquarters Program Manager for NAS, understood and promoted its importance as part of the U.S. supercomputing effort. Helped shape national HPC policies and was influential in starting the HPCC Program of the 1990s.

Scott Hubbard, Ames Center Director 2002–2006: Recognized Columbia's importance to supporting Agency missions; was key to bringing all pieces together to reach the NASA-industry agreement and gain Headquarters' approval.

Paul Kutler, Ames Applied Computational Aerodynamics Branch Chief 1980–1984: CFD research scientist and manager who developed and promoted creation of aerospace-related software to run on supercomputers. Dedicated much of his NASA career to advocacy, planning, and utilization of NAS capabilities.

David Bailey, Senior Scientist 1984–1998: Expert in high-performance scientific computing who led development of the NAS Parallel Benchmarks, which are still widely used to evaluate sustained performance of highly parallel supercomputers. Anticipated and promoted the transition from vector to parallel computing, and remains an important contributor to NAS' vision today.

Ron Bailey, Founding NAS Division Chief 1982–1990: Computational fluid dynamics (CFD) pioneer and leader of the team that developed the NAS concept and laid its technical foundation.

Bill Ballhaus, Ames Center Director 1984–1989: Among the researchers who created the field of CFD at NASA Ames. Earlier, as Director of Astronautics when the NAS Program plan was approved, strongly supported the NAS vision.

Tom Lasinski, NAS Deputy Division Chief (acting) 1995–1999: Established and led the NAS Applied Research Branch and fostered development of key technology projects. Collaborated with aeronautics scientists at Ames to create the NAS prototype that tightly integrated workstations with supercomputers.

Harvard Lomax, Ames Computational Fluid Dynamics Branch Chief 1970–1992: Pioneer in the emergence of CFD, and an intellectual powerhouse in combining CFD with supercomputers. With Kutler, Peterson, and Chapman, showcased NAS' CFD accomplishments to U.S. aerospace companies.

Hans Mark, Ames Center Director 1969–1977: "Godfather" of numerical simulation at Ames, who appointed the task team to study the concept leading to NAS. Managed Ames' research and applications efforts in aeronautics, space science, life science, and technology, and strongly advocated CFD benefits in these areas.

Bob Bishop, SGI CEO 2004–2007: Key partner in the NASA-industry collaboration to build Columbia. Enabled radical alteration of SGI production to turn out 5–10 times more Altix systems to meet the aggressive Columbia schedule. Contributed intellectual and fiscal support to create this powerful national resource.

Bruce Blaylock, NAS Chief Engineer 1994–1999: Important early contributor to shaping of the NAS Program. Earlier, as NAS Systems Development Branch Chief, provided clear vision and staff support for software development, and introduced rigorous design methodologies, tools, and environments.

Loren Bright, Director of Ames Research Support 1969–1979: Early advocate of supercomputing at Ames who led the ILLIAC IV project. Teamed with Chapman to champion and support early NAS technical studies and program planning.

Harry McDonald, Ames Center Director 1996–2002: Supported pioneering work in SSI architectures; known internationally for his work in computer applications. Responsible for defining and executing Ames' role as a Center of Excellence for Information Technologies.

Paul Otellini, Intel President and Chief Executive Officer 2005–present: Envisioned the partnership between NAS and SGI to deliver the Intel Itanium 2 processor-based Columbia supercomputer to accelerate NASA's design and engineering research.

Dave Parry, SGI Senior Vice President and Product General Manager 2002–present: Chief architect in hardware engineering of SGI's Altix servers that enabled Columbia's implementation. His vision and leadership moved the Altix to Open Source and provided scaling to 512 and 2,048 processors, enabling Columbia's growth.

Walt Brooks, NAS Division Chief 2003–2005: Created the NASA-industry partnership and gained Congressional approval for Columbia, which reestablished NAS as a world-class supercomputing facility and created critical simulation capabilities in support of Agency missions.

Dean Chapman, Ames Director of Astronautics 1974–1980: Visionary who established the first research group at Ames to develop and combine CFD with high-speed computers to lead NASA into the future.

Dan Clancy, Ames Director of Information Sciences 2003–2004: Recognized Columbia's potential for NASA, and played a critical role in supporting and advocating the project at the Agency's highest levels. Worked closely with Brooks to make HEC a hallmark of Ames technology.

Vic Peterson, Ames Deputy Director 1989–1994: Earlier, as Chief of the Thermo- and Gas-Dynamics Division and then Director of Aerophysics, an originator (with Bailey, Chapman, and Mark) of the initiative to develop the NAS capability. Devoted much time to advocating the importance of supercomputing to maintain U.S. leadership in aerospace.

Paul Rubbert, Boeing Commercial Airplane Group's Chief of Aerodynamics Research 1984–1992: Championed NAS from concept through maturity and expounded NAS as a pathfinding organization, leading to duplication of the NAS model at other sites. Provided long-term support, experience, and vision for the evolution of CFD and supercomputing.

Sy Syvertson, Ames Center Director 1978–1984: Signatory on the NAS Program plan, who led aerodynamics and wind tunnels at Ames into the supersonic and hypersonic regimes. His earlier research on lifting bodies helped define the basic shape of the Apollo capsules and the Space Shuttle orbiter.

Dale Compton, Ames Center Director 1989–1994: Dedicated supporter of the NAS Program and facility, and a strong advocate for aeronautics user community involvement to help define requirements for NAS. Earlier, a strong advocate for NAS project development and implementation.

Dave Cooper, NAS Division Chief 1991–1995: NAS advocate extraordinaire who tirelessly campaigned for NAS recognition and funding. Influenced the transition from vector to parallel computing, and contributed significantly to the advancement of U.S. supercomputing.

Bill Feiereisen, NAS Division Chief 1998–2002: Gave NAS its start in single-system image (SSI) architectures, increasing NAS' computing capability ten-fold, and was a key player in Ames' foray into grid technologies. Previously, Program Manager for NASA's HPCC Program.

Steven Wheat, Intel Senior Director of HPC Platforms 1995–present: Innovator of Intel's flagship supercomputing project, who worked directly with NAS management to develop the initial framework, key objectives, and parameters for Columbia.

Special appreciation goes to our stakeholders: the many managers and leaders throughout NASA and up through the halls of Congress, whose sponsorship, commitment, and financial support have allowed NAS' successful 25-year run.

NAS TODAY
An Integrated Supercomputing Environment Supporting NASA Missions 24x7

The NAS concept was a revolutionary idea for its time: a balanced supercomputing capability consisting of processors, networks, storage, and visualization tools, all speaking the same language. Over the last quarter-century, NAS has continued to evolve as a leader in high-end computing (HEC). Our supercomputers are still integrated with high-speed networks and mass data storage systems, but are now augmented by customizable support in user services, software performance optimization, high-fidelity modeling and simulation, and scientific visualization. Together, they provide our users with unparalleled support throughout the entire life-cycle of their projects.

Scientific Visualization
The NAS visualization team develops and applies tools and techniques customized for NASA science and engineering problems. Our tools help users view and interact with their results, right from their desktops, to quickly pinpoint important details in complex datasets. Working closely with NASA users, our visualization experts capture, parallelize, and render the enormous amounts of data needed to produce high-resolution, 3D visualizations and videos. NAS' new Hyperwall-2 visualization system, the highest known resolution display in the world at a quarter-billion pixels, is designed to provide users with a way to visualize massive datasets directly from the supercomputing systems.

User Services
Our user services team works 24x7 to assure users can make effective, productive use of the NAS HEC systems. This team's experience in anticipating and heading off problems, and in quickly solving challenges as they arise, is greatly appreciated by users. During Space Shuttle missions, the team marshals all HEC components and NAS staff to ensure essential support is in place for vital analysis tasks. This enables engineers to rapidly provide mission managers with critical data to clear Shuttles for landing. User services staff monitor all systems, networks, job scheduling, and resource allocations to ensure a stable, seamless computing environment.

Data Storage
Our HEC customers often require vast amounts of data storage. With about 25 petabytes (PB) of tertiary storage and over 2 PB of disk storage on the floor, NAS' mass storage system allows users to archive and retrieve important results quickly, reliably, and securely. Our storage specialists create custom filesystems to temporarily store large amounts of data for special user projects, and provide individual training to help users efficiently manage and move such large amounts of data.

Production Supercomputing
Our expert HEC team manages all aspects of the NAS production supercomputing environment to ensure users get the secure, reliable resources they expect. The current environment includes three supercomputers, as well as two secure front-end systems requiring two-factor authentication, and two secure unattended proxy systems for remote operations. The Columbia supercomputer, recently upgraded to 14,336 cores and 88.9 Tflops of peak processing power, continues to enable striking advances in addressing NASA's real-world science and engineering challenges. In addition, NAS houses two smaller but still very powerful testbed systems, RTJones (43.5 Tflops) and Schirra (4.8 Tflops).

Networking Technologies
NAS' high-speed networking technologies and services are essential to the performance of the HEC systems and their high impact for our customers. Using high-capacity network connections and our local and wide-area network expertise, scientists' massive data transfers (some exceeding 1 terabyte per hour) travel seamlessly between local and remote systems. Our network engineers have also implemented innovative transfer strategies and protocols to vastly reduce time-to-solution for users.

High-Fidelity Modeling and Simulation
Many of NASA's missions present unique aerospace design challenges that no off-the-shelf software can address. NAS scientists specializing in physics-based modeling and simulation have years of experience in developing world-class CFD software packages, custom tools, and methods. Our CFD experts provide critical services to NASA teams, often dramatically reducing time-to-solution and substantially increasing model resolution. Recently, the team has been involved in simulations to assess design risks for proposed crew exploration and launch vehicles.

Performance Optimization
The NAS application optimization team specializes in enhancing the performance of NASA's complex codes so researchers can do more science and engineering in less time by utilizing our HEC systems more effectively. Optimization services range from answering basic questions to partnering with users for in-depth code performance optimization. The team also evaluates the tools and technologies best suited for the NAS environment and gives feedback to outside tool developers.

For more details on NAS systems and services, please visit http://www.nas.nasa.gov/Resources/resources.html on the web.

THEN AND NOW

Theoretical peak performance of NAS’ first supercomputer, the Cray X-MP (1984), in Teraflops : 0.00021

Minimum number of Apple Mac Mini computers it would take to exceed performance of the Cray X-MP : 1

Theoretical peak performance of Columbia (2008), in Teraflops : 88.9

Number of Mac Minis it would take to equal peak performance of Columbia : 329,185
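The two Mac Mini entries above are mutually consistent, which can be checked with a few lines of Python. The per-Mini rating is backed out from the brochure's own numbers; the benchmark that produced it is not stated, so treat it as an implied figure rather than a published spec:

```python
# Sanity check on the Mac Mini comparisons, using only figures quoted above.
# The per-Mini rating is implied by the Columbia count; the brochure does
# not say which benchmark produced it.

xmp_tflops = 0.00021          # Cray X-MP peak (1984)
columbia_tflops = 88.9        # Columbia peak (2008)
minis_for_columbia = 329_185  # Mac Minis needed to match Columbia

# Implied rating of a single Mac Mini:
mini_tflops = columbia_tflops / minis_for_columbia
print(f"Implied per-Mini rating: {mini_tflops * 1e6:.0f} Mflops")  # ~270 Mflops

# One such Mini exceeds the X-MP's 210 Mflops peak, so the minimum count is 1:
assert mini_tflops > xmp_tflops
```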

Estimated total number of people who have ever lived on Earth, as of 2002 : 106 billion

Amount of time, in seconds, it would take the Cray X-MP to add all their heights together : 505

Amount of time, in seconds, it would take Columbia to add all their heights together : 0.0012

Amount of time, in years, it would take a person to complete 1 hour of Columbia calculations, if they performed 1 calculation per second for 8 hours per day, 365 days per year : 30 billion
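Readers who want to reproduce the arithmetic behind the three entries above can do so from the peak rates already quoted, treating one addition as one calculation (a simplifying assumption; real sustained rates would be lower):

```python
# Reproducing the "heights" and "30 billion years" figures from the
# peak rates quoted above (one addition = one calculation).

xmp_per_sec = 0.00021e12     # Cray X-MP peak, calculations/second
columbia_per_sec = 88.9e12   # Columbia peak, calculations/second
people = 106e9               # everyone who has ever lived (2002 estimate)

print(f"X-MP:     {people / xmp_per_sec:.0f} s")        # ~505 s
print(f"Columbia: {people / columbia_per_sec:.4f} s")   # ~0.0012 s

# One Columbia-hour redone by hand at 1 calc/s, 8 h/day, 365 days/year:
hand_per_year = 1 * 8 * 3600 * 365
years = columbia_per_sec * 3600 / hand_per_year
print(f"By hand:  {years:.1e} years")                   # ~3e10, i.e. about 30 billion
```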

Approximate number of books housed in the U.S. Library of Congress collection : 32 million

Number of complete copies of the U.S. Library of Congress collection that would fit on NAS’ current mass storage system: 1,000+

Current amount of unique data, in Petabytes, stored on NAS mass storage systems : 3.2

Approximate number of trees it would take to make the paper needed to print all of this data : 160 million

Total amount of memory (RAM), in Gigabytes, available on the Cray X-MP supercomputer : 0.016

Number of iPod songs that could be stored in the Cray X-MP’s memory : 4

Total amount of memory (RAM), in Gigabytes, currently available on the Columbia supercomputer : 28,672

Number of iPod songs that could be stored in Columbia’s memory : 6,880,320

Speed of network connections, in bits per second, to some of NAS’ remote users in 1983 : 300

Number of years it would take to download a feature-length movie using the original network connection : 3.97

Speed of NAS’ current high-bandwidth wide-area network, in Gigabits per second : 10

Number of seconds it would take to download a feature length movie now : 3.76
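The two download-time entries imply a movie size the text never states. Backing it out from the 2008 figure (roughly 4.7 GB, an assumption derived here rather than given in the source) also reproduces the 1983 figure:

```python
# The download-time entries above imply a movie size the text never states.
# Backing it out from the 3.76-second figure (~4.7 GB, an assumption)
# reproduces the 1983 figure as well.

old_bps = 300                 # 1983 remote-user link, bits/second
new_bps = 10e9                # 2008 10-Gigabit wide-area network
seconds_per_year = 365 * 24 * 3600

movie_bits = 3.76 * new_bps   # size implied by the 3.76-second download

print(f"Movie size:    {movie_bits / 8 / 1e9:.1f} GB")                        # 4.7 GB
print(f"1983 download: {movie_bits / old_bps / seconds_per_year:.2f} years")  # 3.97
print(f"2008 download: {movie_bits / new_bps:.2f} seconds")                   # 3.76
```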

Hyperwall-2 (2008) graphics processing capability, in texture elements/second : 4.71 trillion

Number of Microsoft Xbox game consoles required to equal this graphics capability : 589
