Overview and History: NAS Overview


NAS OVERVIEW AND HISTORY
A Look at NAS' 25 Years of Innovation and a Glimpse at the Future

In the mid-1970s, a group of Ames aerospace researchers began to study a highly innovative concept: NASA could transform U.S. aerospace R&D from the costly and time-consuming wind tunnel-based process to simulation-centric design and engineering by executing emerging computational fluid dynamics (CFD) models on supercomputers at least 1,000 times more powerful than those commercially available at the time. In 1976, Ames Center Director Dr. Hans Mark tasked a group led by Dr. F. Ronald Bailey to explore this concept, leading to formation of the Numerical Aerodynamic Simulator (NAS) Projects Office in 1979. At the same time, a user interface group was formed, consisting of CFD leaders from industry, government, and academia, to help guide requirements for the NAS concept and provide feedback on evolving computer feasibility studies. At the conclusion of these activities in 1982, NASA management changed the NAS approach from a focus on purchasing a specially developed supercomputer to an ongoing Numerical Aerodynamic Simulation Program to provide leading-edge computational capabilities based on an innovative network-centric environment. The NAS Program plan for implementing this new approach was signed on February 8, 1983.

Grand Opening

As the NAS Program got underway, its first supercomputers were installed in Ames' Central Computing Facility, starting with a Cray X-MP-12 in 1984. However, a key component of the Program plan was to create the NAS facility: a state-of-the-art supercomputing center as well as a multi-disciplinary innovation environment, bringing together CFD experts, computer scientists, visualization specialists, and network and storage engineers under one roof. Groundbreaking for the NAS facility took place on March 14, 1985. In 1986, NAS transitioned from a projects office to a full-fledged division at Ames. In January 1987, NAS staff and equipment were relocated to the new facility, which was dedicated on March 9, 1987. At the grand opening, excitement levels were high about what had been accomplished so far, and about what was yet to come.

Pioneering Achievements

From its earliest years, NAS has achieved many firsts in computing and in enabling NASA's challenging science and engineering missions, including deployment of the first Cray-2 in 1985, which attained unprecedented performance on real CFD codes. NAS was also the first facility to put the UNIX operating system on supercomputers, and the first to implement TCP/IP networking in a supercomputing environment, linking to users across the country. In 1984, NAS became the first organization to connect supercomputers and workstations together to distribute computation and visualization (what is now known as the client-server model). In another innovation, NAS developed the first UNIX-based hierarchical mass storage system, NAStore. NAS was the first customer for Silicon Graphics Inc. (SGI), beginning an enduring partnership that remains important today, and that led to the successful NASA-SGI-Intel collaboration on the Columbia supercomputer in 2004.

NAS has also been a leader in visualization, benchmarking standards, and job management. The graphics tools PLOT3D and FAST both had vast user bases and won major software awards from NASA. The NAS Parallel Benchmarks (NPB), which became the industry standard for objective evaluation of parallel computing architectures, are still widely used today. And since its first release in 1994, the Portable Batch System (PBS) has been commercialized and adopted broadly in the supercomputing community.

Throughout its history, NAS has also been a pioneer and committed partner in both high-fidelity modeling tool development and applications. Redesign of the Shuttle main engine, failure of the Challenger O-ring, wing stall and aerodynamic loads for fighter airplanes, thrust loss in vertical take-off jets—all demonstrated the potential of CFD supported by faster supercomputing to help save lives and millions of dollars. Many of the award-winning codes that supported these analyses were also developed at NAS, including INS3D, OVERFLOW, and Cart3D. These codes have continued to evolve through the years, and remain workhorses for NASA missions today.

New Directions

With a wealth of information technology expertise and a visionary spirit, NAS helped pioneer new technologies, including computational nanotechnology and grid computing, starting in the mid-1990s. NAS even set up one of the earliest websites in the world in 1993.

In the late 1990s, NASA's Shuttle flights began to seem routine, and aeronautics research and modeling lost momentum in the Agency, leading to diminished investment in supercomputing capabilities and CFD research. Nevertheless, NAS managed to sustain its international leadership in single-system image supercomputing with funding from an information technology R&D project and NASA's Earth Science Enterprise. In 1995, NAS changed its name to the Numerical Aerospace Simulation Division, and in 2001, NAS became the NASA Advanced Supercomputing Division—the name it retains today.

Rediscovering the Mission

Starting in 2001, NASA conducted a series of major engineering studies, including the X-37 vehicle atmospheric reentry and the Shuttle's hydrogen fuel line flowliner cracking, each of which required most of NAS' computational resources for three to six months, significantly delaying other important simulations. Following the Columbia tragedy in February 2003, the Agency again turned to NAS CFD experts for foam debris transport analysis, further straining NAS computational resources in an effort to determine the physical cause of the accident. The Earth Science Enterprise stepped in once more to support the purchase of Kalpana, NAS' first SGI Altix system, which enabled the even larger computational modeling effort required to improve the Shuttle design for a successful Return to Flight.

By June 2004, the importance of supercomputing-enabled high-fidelity modeling and simulation for aerospace and other missions was clear. Purchase of the Columbia supercomputer was approved by NASA and Congress in record time, and the system returned supercomputing leadership to the U.S. after two and a half years in which the Japanese Earth Simulator had the highest computational performance. More importantly, NASA was able to pursue its new Space Exploration mission and simultaneously conduct leadership research in Earth science, astrophysics, and aerospace modeling. In October 2005, NASA's long-term commitment to supercomputing at NAS was assured with the formation of the High End Computing Columbia project as the first asset in the Strategic Capabilities Assets Program (SCAP). NAS remains widely recognized as the premier supercomputing facility for the Agency, and vital to each of NASA's mission directorates. In 2007, the Aeronautics Research and Exploration Systems Mission Directorates procured additional resources and placed them at NAS to augment their SCAP allocations. NAS facilities and staff are playing key roles in Shuttle operations, designing the Ares and Orion space exploration vehicles, advancing Earth and space science understanding, and conducting aeronautics research in all flight regimes.

Future

While reflecting on NAS' 25-year legacy is inspiring, current NAS and Agency leadership is carrying on the tradition of the visionaries of a quarter century ago. The newly-installed Hyperwall-2 graphics and data analysis system—currently the world's highest resolution display at a quarter-billion pixels—will allow supercomputer users to view and explore their computational results in unprecedented detail and speed. Visualization will become more tightly integrated into the traditional environment of computing, storage, and networks to provide a more powerful tool to scientists and engineers. The NAS facility is being significantly upgraded and expanded in preparation for installation of new generations of leadership-class supercomputers over the coming months and years to address NASA's increasingly challenging modeling and simulation requirements. Emerging innovative architectures and systems will be strategically leveraged to provide the most effective supercomputing platforms and environments. Advanced algorithms and applications will be developed in lockstep to exploit these petascale systems. Partnerships with other NASA centers, government laboratories, the computer industry, and academia are being aggressively developed to continue Ames and the NAS facility's proud reputation of being the "go-to" place for supercomputing and advanced simulations. Discussions are underway regarding bleeding-edge concepts such as 10-petaflop systems and 1-exabyte data archives at NAS, connected to the science and engineering community via 1 terabit-per-second links, enabling near-real-time aerospace design and deeper understanding of our planet and the universe.

At NAS, we honor our past most sincerely by continually pursuing more impressive achievements in the future. And as these triumphs attract a new generation of innovators and leaders to NAS, soon enough their aspirations and achievements will exceed our own.

TIMELINE (1983–1988)
A visionary team of managers, engineers, and CFD researchers develops the NAS Program. 25 Silicon Graphics Inc. (SGI) IRIS 1000 graphics …
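The NAS Parallel Benchmarks mentioned under Pioneering Achievements are small, self-verifying kernels whose measured rates are used to compare parallel machines. To give a flavor of that style, the sketch below implements a toy embarrassingly parallel kernel in the spirit of NPB's "EP": each OpenMP thread generates random coordinate pairs, and the harness times the run, checks the result against a known value (an estimate of pi), and reports a rate. The problem size and random-number generator here are arbitrary assumptions for illustration; this is not code from the actual NPB suite.

```c
/*
 * Illustrative NPB-style kernel (not from the real suite): an
 * embarrassingly parallel Monte Carlo loop with timing, a built-in
 * check, and a rate report, in the spirit of the NPB "EP" benchmark.
 * Build (assumed): gcc -O2 -fopenmp ep_sketch.c -o ep_sketch
 */
#include <omp.h>
#include <stdio.h>

#define N 100000000LL   /* assumed problem size */

int main(void) {
    long long inside = 0;
    double t0 = omp_get_wtime();

    #pragma omp parallel reduction(+:inside)
    {
        /* toy per-thread linear congruential generator (an assumption) */
        unsigned long long seed = 12345ULL + 6789ULL * omp_get_thread_num();
        #pragma omp for
        for (long long i = 0; i < N; i++) {
            seed = seed * 6364136223846793005ULL + 1442695040888963407ULL;
            double x = (double)(seed >> 11) / 9007199254740992.0; /* [0,1) */
            seed = seed * 6364136223846793005ULL + 1442695040888963407ULL;
            double y = (double)(seed >> 11) / 9007199254740992.0;
            if (x * x + y * y <= 1.0)
                inside++;   /* pairs falling inside the unit quarter-circle */
        }
    }

    double t1 = omp_get_wtime();
    /* verification step: the ratio should approximate pi/4 */
    printf("pi estimate %.6f, %.1f Mpairs/s\n",
           4.0 * (double)inside / (double)N, (double)N / (t1 - t0) / 1e6);
    return 0;
}
```

The real NPB kernels follow the same pattern at scale: fixed problem classes, built-in result verification, and a reported operation rate that makes machines directly comparable.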
Recommended publications
  • A Look at the Impact of High-End Computing Technologies on NASA Missions
    A Look at the Impact of High-End Computing Technologies on NASA Missions Contributing Authors Rupak Biswas Jill Dunbar John Hardman F. Ron Bailey Lorien Wheeler Stuart Rogers Abstract From its bold start nearly 30 years ago and continuing today, the NASA Advanced Supercomputing (NAS) facility at Ames Research Center has enabled remarkable breakthroughs in the space agency’s science and engineering missions. Throughout this time, NAS experts have influenced the state-of-the-art in high-performance computing (HPC) and related technologies such as scientific visualization, system benchmarking, batch scheduling, and grid environments. We highlight the pioneering achievements and innovations originating from and made possible by NAS resources and know-how, from early supercomputing environment design and software development, to long-term simulation and analyses critical to the design of safe Space Shuttle operations and associated spinoff technologies, to the highly successful Kepler Mission’s discovery of new planets now capturing the world’s imagination. Keywords Information Technology and Systems; Aerospace; History of Computing; Parallel processors, Processor Architectures; Simulation, Modeling, and Visualization The story of NASA’s IT achievements is not complete without a chapter on the important role of high-performance computing (HPC) and related technologies. In particular, the NASA Advanced Supercomputing (NAS) facility—the space agency’s first and largest HPC center—has enabled remarkable breakthroughs in NASA’s science and engineering missions, from designing safer air- and spacecraft to the recent discovery of new planets. Since its bold start nearly 30 years ago and continuing today, NAS has led the way in influencing the state-of-the-art in HPC and related technologies such as high-speed networking, scientific visualization, system benchmarking, batch scheduling, and grid environments.
  • Paper in PDF Format
    Managing System Resources on ‘Teras’: Experiences in Managing System Resources on a Large SGI Origin 3800 Cluster. The paper accompanying the presentation given at the CUG Summit 2002, held at the University of Manchester, Manchester (UK), May 20-24, 2002. http://www.sara.nl Mark van de Sanden [email protected] Huub Stoffers [email protected] Introduction: Outline of the Paper. This is a paper about our experiences with managing system resources on ‘Teras’. Since November 2000, Teras, a large SGI Origin 3800 cluster, has been the national supercomputer for the Dutch academic community. Teras is located at and maintained by SARA High Performance Computing (HPC). The outline of this presentation is as follows: Very briefly, we will tell you what SARA is and what we presently do, particularly in high performance computing, besides maintaining Teras. Subsequently, to give you an idea of which resources are available on Teras on the one hand, and what resources are demanded on the other, we will give overviews of three particular aspects of Teras: 1) Some data on the hardware - number of CPUs, memory, etc. - and the cluster configuration, i.e., a description of the hosts that constitute the cluster. 2) Identification of the key software components that are used on the system, both system software components and software toolkits available to users to create their (parallel) applications. 3) A characterization of the ‘job mix’ that runs on the Teras cluster. Thus equipped with a clearer idea of both ‘supply’ and ‘demand’ of resources on Teras, we then state the resource allocation policy goals that we pursue in this context, and we review what resource management possibilities we have at hand and what they can accomplish for us, in principle.
  • Ribbons Manual Help Ribbons On-Line Manual
    Ribbons On-Line Manual. Usage: ribbons [options]. ribbons is driven by mouse and an X/Motif interface, once data has been prepared with auxiliary programs. Coordinate files must be in Protein Data Bank (*.pdb) format. Recommend that you make a new directory and copy over a *.pdb file to begin. To initialize for default drawing data from a PDB entry, type: ribbons -e YourMol.pdb To display once the data above is prepared, type: ribbons -n YourMol To create any data from 'YourMol.pdb' with the ribbons-data GUI, type: ribbons-data YourMol The choices from the "Help" submenus bring up the additional information screens listed below: Data Preparation ● Required Files ● Auxiliary Programs ● Tutorial Overview ● X-ray Error Analysis Graphics ● Motif MenuBar ● Control Panels ● Model Transformation ● Menubar Options ❍ File ❍ Edit ❍ Model ❍ View ❍ Help ● The Ribbon Drawing Panels ❍ Ribbon Style ❍ Ribbon Dimensions ❍ Ribbon N-primitives ❍ Ribbon Palette ● Primitive Control Panels ❍ Atom Panel ❍ Bond Panel ❍ Poly Panel ❍ Ndot Panel ❍ Text Panel ● Material and Image Control Panels ❍ Image Panel ❍ Color Panel ❍ Light Panel ❍ colorIndex Panel ❍ Motion Panel ● Keyboard Shortcuts Images ● Saving Images ● Example Images ● Artistic Images ● Color Indexing Scheme ● PostScript Plots ● Raytracing ● Virtual Reality Program ● Frequently Asked Questions ● Licensing ● Source Code ● Ribbon References Ribbons User Manual / UAB-CBSE / [email protected] Ribbons Data Preparation: Ribbons requires generation of a database of files for display.
  • Fast Synchronization on Shared-Memory Multiprocessors: an Architectural Approach
    Fast Synchronization on Shared-Memory Multiprocessors: An Architectural Approach. Zhen Fang1, Lixin Zhang2, John B. Carter1, Liqun Cheng1, Michael Parker3. 1 School of Computing, University of Utah, Salt Lake City, UT 84112, U.S.A. zfang, retrac, [email protected] Telephone: 801.587.7910 Fax: 801.581.5843. 2 IBM Austin Research Lab, 11400 Burnet Road, MS 904/6C019, Austin, TX 78758, U.S.A. [email protected] 3 Cray, Inc., 1050 Lowater Road, Chippewa Falls, WI 54729, U.S.A. [email protected] Abstract: Synchronization is a crucial operation in many parallel applications. Conventional synchronization mechanisms are failing to keep up with the increasing demand for efficient synchronization operations as systems grow larger and network latency increases. The contributions of this paper are threefold. First, we revisit some representative synchronization algorithms in light of recent architecture innovations and provide an example of how the simplifying assumptions made by typical analytical models of synchronization mechanisms can lead to significant performance estimate errors. Second, we present an architectural innovation called active memory that enables very fast atomic operations in a shared-memory multiprocessor. Third, we use execution-driven simulation to quantitatively compare the performance of a variety of synchronization mechanisms based on both existing hardware techniques and active memory operations. To the best of our knowledge, synchronization based on active memory outperforms all existing spinlock and non-hardwired barrier implementations by a large margin. Keywords: distributed shared-memory, coherence protocol, synchronization, barrier, spinlock, memory controller. This effort was supported by the National Security Agency (NSA), the Defense Advanced Research Projects Agency (DARPA) and Silicon Graphics Inc.
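    The abstract above contrasts active memory with conventional software synchronization. As a concrete reference point, here is a minimal sketch of two of those conventional mechanisms, a test-and-set spinlock and a sense-reversing centralized barrier, written with C11 atomics and pthreads. It illustrates the baseline techniques the paper revisits; it is not code from the paper, and the thread and iteration counts are arbitrary assumptions.

```c
/*
 * Baseline shared-memory synchronization sketch (illustrative only):
 * a test-and-set spinlock and a sense-reversing centralized barrier.
 * Build (assumed): gcc -O2 -pthread sync_sketch.c -o sync_sketch
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>
#include <stdio.h>

#define NTHREADS 4        /* assumed thread count */
#define ITERS    1000     /* assumed iteration count */

static atomic_flag lock = ATOMIC_FLAG_INIT;
static atomic_int  remaining = NTHREADS;  /* threads yet to reach barrier */
static atomic_bool global_sense = false;
static long counter = 0;                  /* protected by the spinlock */

static void spin_lock(void) {
    /* spin until the test-and-set observes the flag clear */
    while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
        ;
}

static void spin_unlock(void) {
    atomic_flag_clear_explicit(&lock, memory_order_release);
}

static void barrier_wait(bool *local_sense) {
    *local_sense = !*local_sense;               /* flip sense each episode */
    if (atomic_fetch_sub(&remaining, 1) == 1) {
        atomic_store(&remaining, NTHREADS);     /* last arrival resets... */
        atomic_store(&global_sense, *local_sense); /* ...and releases all */
    } else {
        while (atomic_load(&global_sense) != *local_sense)
            ;                                   /* spin on shared sense */
    }
}

static void *worker(void *arg) {
    (void)arg;
    bool sense = false;
    for (int i = 0; i < ITERS; i++) {
        spin_lock();
        counter++;                              /* critical section */
        spin_unlock();
    }
    barrier_wait(&sense);                       /* wait for all threads */
    return NULL;
}

int main(void) {
    pthread_t t[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    printf("counter = %ld (expected %d)\n", counter, NTHREADS * ITERS);
    return 0;
}
```

    Both primitives spin on shared cache lines, and that coherence traffic grows costly as systems scale, which is the motivation the abstract gives for hardware support such as active memory operations.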
  • Remanufactured SGI™ Origin™ 2000 High-Performance Server
    Product Guide: Remanufactured SGI™ Origin™ 2000 High-Performance Server. Origin 2000 Rack: The rack-mounted Origin 2000 system is the industry’s most powerful and flexible shared-memory server. Easily expand the system by adding additional racks. Multirack System: Connect multiple racks as a shared-memory powerhouse or redeploy those assets into multiple, tightly integrated departmental servers. Add IRIS FailSafe™ software to provide failover between systems, and Origin 2000 becomes the ultimate high-availability server for critical computational and business solutions. Origin 2000 Deskside: With its outstanding price/performance, the deskside Origin 2000 server is ideal for evolving applications requiring expansion capability. Seamless upgradability to larger rack-based configurations supports your growing requirements. Revolutionary Server for Your Expanding Business: Today’s corporate environment is characterized by a rapid growth in communications requirements and continual organizational change. A great deal of energy is spent planning future system purchases in an ongoing effort to keep system capabilities ahead of application demands. Corporations require scalable, flexible computer systems that can adapt to business change and satisfy growing server requirements. SGI offers you a proven, more adaptable way to purchase your data processing, computation, and communications capabilities. The Origin 2000, part of the SGI server family, can expand from dual-processor deskside systems to powerful 512-processor servers without disruptive and expensive box-swaps common with other server solutions. System resources can be deployed as tightly integrated arrays of small Origin 2000 systems or as single large shared-memory systems capable of meeting the most demanding data processing and computing requirements. SGI pioneered the development and delivery of shared-memory parallel processing systems.
  • Infiniband Shines on the Top500 List of World's Fastest Computers
    InfiniBand Shines on the Top500 List of World’s Fastest Computers. InfiniBand Drives Two Top Ten Supercomputers and Gains Widespread Adoption for Clustering and Storage. PITTSBURGH, PA - November 9, 2004 - (SuperComputing 2004) – Mellanox® Technologies Ltd, the leader in performance business and technical computing interconnects, announced today that the InfiniBand interconnect is used on more than a dozen systems on the prestigious Top500 list of the world’s fastest computers, including two systems which achieved a top ten ranking. The latest Top500 list (www.top500.org) was unveiled this week at SuperComputing 2004 – the world’s leading conference on high performance computing (HPC). This represents more than 400% growth since last year, when InfiniBand technology made its debut on the list with just three entries. The Top500 computer list is a bellwether technology indicator for the broader performance business computing and technical computing markets. InfiniBand is being broadly adopted for numerically intensive applications including CAD/CAM, oil and gas exploration, fluid dynamics, weather modeling, molecular dynamics, gene sequencing, etc. These applications benefit from the high throughput, low latency InfiniBand interconnect, which allows industry standard servers to be connected together to create powerful computers at unmatched price/performance levels. These InfiniBand clustered computers deliver a 20X price/performance advantage vs. traditional mainframe systems. InfiniBand enables NASA’s Columbia supercomputer to claim its status as the second fastest computer in the world. Columbia uses InfiniBand to connect twenty Itanium-based Silicon Graphics platforms into a 10,240-processor supercomputer that achieves over 51.8 Tflops of sustained performance.
  • IRIX® Device Driver Programmer's Guide
    IRIX® Device Driver Programmer’s Guide. Document Number 007-0911-210. CONTRIBUTORS Written by David Cortesi, John Raithel, Bill Tuthill, and Anita Manders. Updated by Julie Boney and Steven Levine. Illustrated by Dany Galgani, Cheri Brown, and Chrystie Danzer. Production by Karen Jacobson. COPYRIGHT © 1998-2003 Silicon Graphics, Inc. All rights reserved; provided portions may be copyright in third parties, as indicated elsewhere herein. No permission is granted to copy, distribute, or create derivative works from the contents of this electronic documentation in any manner, in whole or in part, without the prior written permission of Silicon Graphics, Inc. LIMITED RIGHTS LEGEND The electronic (software) version of this document was developed at private expense; if acquired under an agreement with the USA government or any contractor thereto, it is acquired as "commercial computer software" subject to the provisions of its applicable license agreement, as specified in (a) 48 CFR 12.212 of the FAR; or, if acquired for Department of Defense units, (b) 48 CFR 227-7202 of the DoD FAR Supplement; or sections succeeding thereto. Contractor/manufacturer is Silicon Graphics, Inc., 1600 Amphitheatre Pkwy 2E, Mountain View, CA 94043-1351. TRADEMARKS AND ATTRIBUTIONS Silicon Graphics, SGI, the SGI logo, Challenge, Indigo, IRIS, IRIX, O2, Octane, Onyx, Onyx2, and Origin are registered trademarks and Indigo2, Indigo2 Maximum Impact, IRIS InSight, Power Challenge, Power Channel, Power Indigo2, Power Onyx, and REACT/pro are trademarks of Silicon Graphics, Inc., in the United States and/or other countries worldwide. Indy is a registered trademark, used under license in the United States and owned by Silicon Graphics, Inc. in other countries worldwide. IBM is a trademark of International Business Machines Corporation.
  • The High End Computing Capability Project: Excellence at the Leading Edge of Computing
    The High End Computing Capability Project: Excellence at the Leading Edge of Computing. William Thigpen, Deputy Project Manager, August 15, 2011.
    Introduction
    • NAS has a stellar history of amazing accomplishments, drawing from the synergy between high end computing (HEC) and modeling and simulation (M&S)
    • NAS has earned a national and international reputation as a pioneer in the development and application of HEC technologies, providing its diverse customers with world-class M&S expertise, and state-of-the-art supercomputing products and services
    • HECC has been a critical enabler for mission success throughout NASA – it supports the modeling, simulation, analysis, and decision support activities for all four Mission Directorates, NESC, OCE, and OS&MA
    • HECC Mission: Develop and deliver the most productive HEC environment in the world, enabling NASA to extend technology, expand knowledge, and explore the universe
    • The SCAP model of baseline funding + marginal investment has been immensely successful and has had a positive impact on NASA science and engineering
      – Baseline funding provides “free” HECC resources and services (MDs select own projects, determine priorities, and distribute own resources)
      – Marginal investments from ARMD, ESMD, and SMD demonstrate current model has strong support and buy-in from MDs
    HEC Characteristics
    • HEC, by its very nature, is required to be at the leading edge of computing technologies
      – Unlike agency-wide IT infrastructure, HEC is fundamentally driven to provide a rapidly increasing capability to a relatively small
  • SGI™ Origin 3000 Series Technical Configuration Owner's Guide
    SGI™ Origin 3000 Series Technical Configuration Owner’s Guide Document Number 007-4311-002 CONTRIBUTORS Written by Dick Brownell Illustrated by Dan Young Production by Diane Ciardelli Engineering contributions by SN1 Electrical, Mechanical, Manufacturing, Marketing, and Site Planning teams COPYRIGHT © 2000 Silicon Graphics, Inc. All rights reserved; provided portions may be copyright in third parties, as indicated elsewhere herein. No permission is granted to copy, distribute, or create derivative works from the contents of this electronic documentation. The contents of this document may not be copied or duplicated in any form, in whole or in part, without the prior written permission of Silicon Graphics, Inc. LIMITED RIGHTS LEGEND The electronic (software) version of this document was developed at private expense; if acquired under an agreement with the USA government or any contractor thereto, it is acquired as “commercial computer software” subject to the provisions of its applicable license agreement, as specified in (a) 48 CFR 12.212 of the FAR; or if acquired for Department of Defense units, (b) 48 CFR 227-7202 of the DoD FAR Supplement; or sections succeeding thereto. Contractor/manufacturer is Silicon Graphics, Inc., 1600 Amphitheatre Pkwy. 2E, Mountain View, CA 94043-1351. TRADEMARKS AND ATTRIBUTIONS Silicon Graphics is a registered trademark and SGI, the SGI logo, Origin, Onyx3, and IRIS InSight are trademarks of Silicon Graphics, Inc. PostScript is a trademark of Adobe Systems, Inc. MIPS is a registered trademark of MIPS Technologies, Inc. Windows and Windows NT are registered trademarks of Microsoft Corporation. DVCPRO is a trademark of Panasonic, Inc. Linux is a registered trademark of Linus Torvalds.
  • Jack 5 User's Guide
    University of Pennsylvania ScholarlyCommons Technical Reports (CIS) Department of Computer & Information Science October 1991 Jack 5 User's Guide Cary B. Phillips University of Pennsylvania Follow this and additional works at: https://repository.upenn.edu/cis_reports Recommended Citation Cary B. Phillips, "Jack 5 User's Guide", October 1991. University of Pennsylvania Department of Computer and Information Science Technical Report No. MS-CIS-91-78. This paper is posted at ScholarlyCommons. https://repository.upenn.edu/cis_reports/479 For more information, please contact [email protected]. Jack 5 User's Guide Abstract This chapter is a brief tutorial introduction to using Jack. It demonstrates some of the basic Jack features, illustrating mostly the flavor of how you interact with Jack rather than describing the details of how to use it. The first time you run Jack, you should go through the examples in this chapter. From then on, refer to the later chapters for more details about the various features. Comments University of Pennsylvania Department of Computer and Information Science Technical Report No. MS-CIS-91-78. This technical report is available at ScholarlyCommons: https://repository.upenn.edu/cis_reports/479 Jack 5 User's Guide MS-CIS-91-78 GRAPHICS LAB 43 Cary B. Phillips Department of Computer and Information Science School of Engineering and Applied Science University of Pennsylvania Philadelphia, PA 19104-6389 October 1991 Jack® User's Guide Version 5.8 January 11, 1994 Computer Graphics Research Laboratory Department of Computer and Information Sciences University of Pennsylvania Philadelphia, Pennsylvania Copyright © 1988, 1992, 1993, 1994 University of Pennsylvania. Jack® is a registered trademark of the University of Pennsylvania.
  • High-End Supercomputing at NASA!
    High-End Supercomputing at NASA. Dr. Rupak Biswas, Chief, NASA Advanced Supercomputing (NAS) Division; Manager, High End Computing Capability (HECC) Project; NASA Ames Research Center, Moffett Field, Calif., USA. 41st HPC User Forum, Houston, TX, 7 April 2011.
    NASA Overview: Mission Directorates
    • NASA’s Mission: To pioneer the future in space exploration, scientific discovery, and aeronautics research
    • Aeronautics Research (ARMD): Pioneer new flight technologies for safer, more secure, efficient, and environmentally friendly air transportation and space exploration
    • Exploration Systems (ESMD): Develop new spacecraft and other capabilities for affordable, sustainable human and robotic exploration
    • Science (SMD): Observe and understand the Earth-Sun system, our solar system, and the universe
    • Space Operations (SOMD): Extend the duration and boundaries of human spaceflight for space exploration and discovery
    NASA Overview: Centers & Facilities
    Ames Research Center (Moffett Field, CA); Dryden Flight Research Center (Edwards, CA); Glenn Research Center (Cleveland, OH); Goddard Inst for Space Studies (New York, NY); Goddard Space Flight Center (Greenbelt, MD); Headquarters (Washington, DC); Independent V&V Facility (Fairmont, WV); Jet Propulsion Laboratory (Pasadena, CA); Johnson Space Center (Houston, TX); Kennedy Space Center (Cape Canaveral, FL); Langley Research Center (Hampton, VA); Marshall Space Flight Center (Huntsville, AL); Stennis Space Center (Stennis, MS); Wallops Flight Facility (Wallops Island, VA); White Sands Test Facility (White Sands, NM)
    M&S Imperative
  • Scientific Application-Based Performance Comparison of SGI Altix 4700, IBM POWER5+, and SGI ICE 8200 Supercomputers
    NAS Technical Report NAS-09-001, February 2009. Scientific Application-Based Performance Comparison of SGI Altix 4700, IBM POWER5+, and SGI ICE 8200 Supercomputers. Subhash Saini, Dale Talcott, Dennis Jespersen, Jahed Djomehri, Haoqiang Jin, and Rupak Biswas, NASA Advanced Supercomputing Division, NASA Ames Research Center, Moffett Field, California 94035-1000, USA {Subhash.Saini, Dale.R.Talcott, Dennis.Jespersen, Jahed.Djomehri, Haoqiang.Jin, Rupak.Biswas}@nasa.gov
    Abstract—The suitability of next-generation high-performance computing systems for petascale simulations will depend on various performance factors attributable to processor, memory, local and global network, and input/output characteristics. In this paper, we evaluate performance of new dual-core SGI Altix 4700, quad-core SGI Altix ICE 8200, and dual-core IBM POWER5+ systems. To measure performance, we used micro-benchmarks from High Performance Computing Challenge (HPCC), NAS Parallel Benchmarks (NPB), and four real-world applications—three from computational fluid dynamics (CFD) and one from climate modeling. We used the micro-benchmarks to develop a controlled understanding of individual system components, then analyzed and interpreted performance of the NPBs and
    … whereas the 4700 is based on the dual-core Itanium processor. Hoisie et al. conducted performance comparison through benchmarking and modeling of three supercomputers: IBM Blue Gene/L, Cray Red Storm, and IBM Purple [6]. Purple is an Advanced Simulation and Computing (ASC) system based on the single-core IBM POWER5 architecture and is located at Lawrence Livermore National Laboratory (LLNL) [7]. The paper concentrated on system noise, interconnect congestion, and performance modeling using two applications, namely SAGE and Sweep3D. Oliker et al. studied scientific application performance on candidate petascale platforms: POWER5, AMD Opteron, IBM BlueGene/L, and Cray X1E [8].
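    To make the micro-benchmark methodology in the abstract above concrete, here is a minimal MPI ping-pong sketch of the kind used, in the spirit of the HPCC latency/bandwidth tests, to develop a controlled understanding of interconnect behavior. It is an assumed illustration, not code from the report; the repetition count and default message size are arbitrary.

```c
/*
 * Illustrative MPI ping-pong micro-benchmark (not from the report):
 * measures average round-trip time between ranks 0 and 1 and derives
 * a one-way latency estimate and bandwidth for a given message size.
 * Build/run (assumed): mpicc pingpong.c -o pingpong
 *                      mpirun -np 2 ./pingpong 1048576
 */
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define REPS 1000   /* assumed repetition count */

int main(int argc, char **argv) {
    int rank, size;
    int nbytes = (argc > 1) ? atoi(argv[1]) : 8;  /* message size, bytes */

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) {
        if (rank == 0) fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    char *buf = malloc(nbytes);

    MPI_Barrier(MPI_COMM_WORLD);
    double t0 = MPI_Wtime();
    for (int i = 0; i < REPS; i++) {
        if (rank == 0) {          /* ping: send, then wait for the echo */
            MPI_Send(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, nbytes, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {   /* pong: echo the message back */
            MPI_Recv(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, nbytes, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }
    double t1 = MPI_Wtime();

    if (rank == 0) {
        double rtt = (t1 - t0) / REPS;  /* average round-trip time */
        printf("%d bytes: latency %.2f us, bandwidth %.2f MB/s\n",
               nbytes, rtt / 2.0 * 1e6,
               2.0 * nbytes * REPS / (t1 - t0) / 1e6);
    }
    free(buf);
    MPI_Finalize();
    return 0;
}
```

    Sweeping the message size from a few bytes to megabytes separates latency-dominated from bandwidth-dominated behavior, which is the kind of controlled, per-component understanding the abstract describes before moving on to full applications.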