NASA Unveils Its Newest, Most Powerful Supercomputer


NASA Unveils Its Newest, Most Powerful Supercomputer
National Aeronautics and Space Administration, Ames Research Center, Moffett Field, CA
November 2004 - Communication for the Information Technology Age

NASA's newest supercomputer, 'Columbia,' has been named one of the world's most powerful production supercomputers by the TOP500 Project at SC2004, the International Conference of High Performance Computing, Networking and Storage in Pittsburgh.

Named to honor the crew of the space shuttle Columbia lost Feb. 1, 2003, the new supercomputer is comprised of an integrated cluster of 20 interconnected SGI® Altix® 512-processor systems, for a total of 10,240 Intel® Itanium® 2 processors. Columbia builds upon the highly successful collaboration between NASA, Silicon Graphics Inc. (SGI) and Intel Corporation that developed the world's first 512-processor Linux server. That server, the SGI® Altix® located at Ames, was named 'Kalpana,' after Columbia astronaut and Ames alumna Kalpana Chawla.

NASA unveiled its newest supercomputer during a ribbon-cutting ceremony Oct. 26 at Ames. Columbia was built and installed at the NASA Advanced Supercomputing facility at Ames in less than 120 days.

Photo caption: NASA's new supercomputer 'Columbia' was installed at Ames in less than 120 days. NASA photos by Tom Trower.

Within days of completion of the supercomputer's installation, Columbia achieved a Linpack benchmark rating of 42.7 teraflops on just 16 nodes with an 88 percent efficiency rating, exceeding the previously best-reported performance by a significant margin. This was followed almost immediately by the 51.9 teraflop rating reported Nov. 8 for the entire system.

"What is most noteworthy is that we were able to post such a significant and efficient Linpack result in such a short time," said Bob Ciotti, chief systems engineer for the Columbia installation project. "Not only was the system deployed in less than 120 days, but the code used to achieve this result was conceived and developed in that same time frame, and is much more straightforward than the traditional approach. Our simplified implementation, allowed by shared memory systems like the SGI Altix, translates directly into improved effectiveness for users of our systems."

The almost instant productivity of the Columbia supercomputer architecture and technology has made the system available to a broad spectrum of NASA-sponsored scientists. Feedback from scientists is extremely positive.

Columbia, which achieved a benchmark rating of 51.9 teraflops on 10,240 processors, is ranked second on the TOP500 List, just behind Blue Gene, IBM's supercomputer to be installed at the Department of Energy's Lawrence Livermore National Laboratory.

"The Columbia system is a tremendous development for NASA and the nation. Simulation of the evolution of the Earth and planetary ecosystems with high fidelity has been beyond the reach of Earth scientists for decades," NASA's Deputy Associate Administrator of the Science Mission Directorate Ghassem Asrar said. "With Columbia, scientists are already seeing dramatic improvements in the fidelity of simulations in such areas as global ocean circulation, prediction of large-scale structures in the universe and the physics of supernova detonations," he said.

"Large, integrated simulation environments like those we have at Ames are crucial to NASA's missions, and Columbia has provided a breakthrough increase in our computational power," said Ames Center Director G. Scott Hubbard. "A high rating on the TOP500 list is an impressive achievement, but for NASA, the immediate availability to analyze important issues like 'Return to Flight' for the space shuttle, space science, Earth modeling and aerospace vehicle design for exploration, is the true measure of success."

"This amazing new supercomputer system dramatically increases NASA's capabilities and revolutionizes our capacity for conducting scientific research and engineering design," Hubbard said. "It will be one of the fastest, largest and most productive supercomputers in the world, providing an estimated 10-fold increase in NASA's supercomputing capacity. It is already having a major impact on NASA's science, aeronautics and exploration programs, in addition to playing a critical role in preparing the space shuttle for return to safe flight next year."

"Columbia allows NASA to perform numerical simulations at the cutting edge of science and engineering," said Walt Brooks, chief of the NASA Advanced Supercomputing (NAS) Division at Ames. "As the largest example of an important, high-end computing architecture developed in the U.S., part of this system will be available to the nation's best research teams. The swift design and deployment of Columbia has redefined the concept of supercomputer development."

With Columbia at its core, said Brooks, the NAS facility provides an integrated computing, visualization and data storage environment to help NASA meet its mission goals and the Vision for Space Exploration.

"With SGI and Intel, we set out to revitalize NASA's computing capabilities, and the Columbia system has done so in a spectacular way," said Brooks.

Photo caption: Left to right: Walt Brooks, NAS division chief; Ronnie Kenneth, CEO, Voltaire; Ghassem Asrar, NASA HQ; Ames Center Director G. Scott Hubbard; Richard Dracott, Intel; and Bob Bishop, CEO SGI, display recognition plaques presented to them by Brooks.

continued on page 2

amesnews.arc.nasa.gov - 'See inside for special NASA Ames 65th Anniversary insert'
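The efficiency figures quoted in the article amount to a simple ratio: measured Linpack performance (Rmax) divided by theoretical peak (Rpeak). The Python sketch below reproduces the ballpark numbers; the 1.5 GHz clock and 4 floating-point operations per cycle per Itanium 2 are assumptions for illustration, not figures stated in the article.

```python
# Back-of-envelope check of the Linpack efficiency figures quoted above.
# Assumed (not from the article): each Itanium 2 processor runs at 1.5 GHz
# and retires 4 floating-point operations per cycle, i.e. 6 Gflops peak each.
GHZ = 1.5
FLOPS_PER_CYCLE = 4
PROCS_PER_NODE = 512  # each SGI Altix node holds 512 processors

def linpack_efficiency(rmax_tflops, nodes):
    """Return measured Rmax as a fraction of theoretical peak (Rpeak)."""
    procs = nodes * PROCS_PER_NODE
    rpeak_tflops = procs * GHZ * FLOPS_PER_CYCLE / 1000.0  # Gflops -> Tflops
    return rmax_tflops / rpeak_tflops

print(f"{linpack_efficiency(42.7, 16):.1%}")  # 16-node run, quoted at 88 percent
print(f"{linpack_efficiency(51.9, 20):.1%}")  # full 20-node system
```

Under these assumed per-processor peaks, the 16-node result lands within a couple of points of the 88 percent the article quotes, which suggests the quoted figure is indeed Rmax/Rpeak.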
NASA unveils powerful supercomputer (continued from front page)

"Not only were scientists doing real Earth and space analysis during the system build, but within days of the full installation, we achieved a Linpack benchmark rating of 42.7 teraflops on 16 nodes with an 88 percent efficiency rating, exceeding the current best reported number by a significant margin," he said.

"With the completion of the Columbia system, NASA, SGI and Intel have created a powerful national resource, [ushering in] a new age in scientific discovery, and based on NASA's initial success, it seems likely that we'll be discussing new scientific breakthroughs in the very near future," he said.

"The launching of the Columbia system shows what's possible when government and technology leaders work together toward a goal of truly national importance," said Paul Otellini, president and COO of Intel Corporation.

Photo caption: Walt Brooks (left) chats with NASA Administrator Sean O'Keefe (second from left) and Chief of Staff John Schumacher (third from left) while Ames Center Director G. Scott Hubbard (third from right) and other guests look on. NASA photo by Tom Trower.

Immune system inspires machine-software fault detector

Using the human immune system as an inspiration, scientists at NASA Ames are developing software to find faults in complex machines.

The software 'tool' - called an algorithm, or mathematical recipe - looks for abnormalities in a machine's hardware and software. The mathematical recipe, which engineers may well someday put in spacecraft as well as other complex systems, is part of the Multi-level Immune Learning Detection (MILD) software 'tool,' under development at NASA Ames in the Computational Sciences Division, Code TC.

"The human immune system doesn't try to identify what is good, only what is bad," said MILD principal investigator Kalmanje Krishnakumar, a scientist at Ames. "Similarly, MILD software only tries to identify what is bad, and that's one of the main ideas behind MILD, which is similar to biological immune systems," Krishnakumar said. Co-investigator on the MILD project is Dipankar Dasgupta of the University of Memphis, Tenn., who is spending a year as a visiting faculty member at NASA Ames.

"You can have identical MILD software recipes distributed throughout the machine that look at different potential abnormalities," Krishnakumar explained. "Typically, a problem will show up in more than one place in a machine, and comparisons of different parts of the machine help us to more accurately identify problems early," he added.

MILD uses data from sensors in machines to find patterns of system faults and damage to clarify if systems are working properly. In an aircraft, for example, a fault leaves a characteristic signature in the sensor data; that signature can then be used to identify future occurrences of similar faults. "Similarly, the biological immune system quickly recognizes diseases to which it has been exposed previously or has been 'immunized' to some known diseases," Krishnakumar said.

"Another advantage of using the immune system as an inspiration is that we can program the MILD software tool to recognize known faults that occur in a machine. Similarly, a biological immune system recognizes diseases to which it has been exposed," Krishnakumar said.

So far, scientists have tested the MILD software in a C-17 aircraft flight simulator at NASA Ames to collect normal and simulated airplane failures. "We used the aircraft simulator as a proof-of-concept experiment to test how well the MILD algorithm worked," Krishnakumar explained. The software is still in the research phase. Later, scientists hope to modify it so it will work as stand-alone software.

In the near future, when engineers use MILD software on another machine, they will need to set up the software so it will monitor data from that machine. "However, we now are enhancing the MILD software 'tool' so it can more easily be used for other machines," Krishnakumar said. "Eventually, engineers could use MILD algorithms in any kind of software and hardware in machine environments -- from machines in a shop to flying airplanes and spacecraft," Krishnakumar ventured.

"We expect future machines to have their own immune systems so that they could be used for long-duration space missions, or any other use where technical support would be limited," Krishnakumar said.

BY JOHN BLUCK
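The "identify only what is bad" principle Krishnakumar describes is, in the artificial-immune-system literature, commonly realized as negative selection: candidate detectors are generated at random and kept only if they fail to match normal ("self") data. The toy Python sketch below illustrates that idea for a single simulated sensor. It is a conceptual illustration only, not NASA's MILD code; the data, threshold, and function names are all invented for the example.

```python
import random

# Toy negative-selection fault detector, in the spirit of the immune-system
# idea described above (NOT the actual MILD algorithm). "Self" is sensor
# data seen during normal operation; detectors are random points that
# survive training because they match nothing in that normal data.
RADIUS = 0.5  # assumed matching threshold
random.seed(1)

def train_detectors(normal_data, n_candidates=2000, lo=0.0, hi=10.0):
    """Keep random candidate detectors that lie far from every normal sample."""
    detectors = []
    for _ in range(n_candidates):
        d = random.uniform(lo, hi)
        if all(abs(d - s) > RADIUS for s in normal_data):
            detectors.append(d)
    return detectors

def is_fault(detectors, reading):
    """A reading is flagged as anomalous if any detector matches it."""
    return any(abs(d - reading) <= RADIUS for d in detectors)

normal = [random.gauss(5.0, 0.3) for _ in range(200)]  # nominal sensor values
dets = train_detectors(normal)
print(is_fault(dets, 5.1))  # near nominal -> False
print(is_fault(dets, 8.7))  # far from nominal -> True
```

In MILD's distributed setting, identical copies of such detectors would watch different subsystems, and cross-checking their alarms helps localize a fault early, as the article notes.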
Recommended publications
  • A Look at the Impact of High-End Computing Technologies on NASA Missions
    A Look at the Impact of High-End Computing Technologies on NASA Missions Contributing Authors Rupak Biswas Jill Dunbar John Hardman F. Ron Bailey Lorien Wheeler Stuart Rogers Abstract From its bold start nearly 30 years ago and continuing today, the NASA Advanced Supercomputing (NAS) facility at Ames Research Center has enabled remarkable breakthroughs in the space agency’s science and engineering missions. Throughout this time, NAS experts have influenced the state-of-the-art in high-performance computing (HPC) and related technologies such as scientific visualization, system benchmarking, batch scheduling, and grid environments. We highlight the pioneering achievements and innovations originating from and made possible by NAS resources and know-how, from early supercomputing environment design and software development, to long-term simulation and analyses critical to design safe Space Shuttle operations and associated spinoff technologies, to the highly successful Kepler Mission’s discovery of new planets now capturing the world’s imagination. Keywords Information Technology and Systems; Aerospace; History of Computing; Parallel processors, Processor Architectures; Simulation, Modeling, and Visualization The story of NASA’s IT achievements is not complete without a chapter on the important role of high-performance computing (HPC) and related technologies. In particular, the NASA Advanced Supercomputing (NAS) facility—the space agency’s first and largest HPC center—has enabled remarkable breakthroughs in NASA’s science and engineering missions, from designing safer air- and spacecraft to the recent discovery of new planets. Since its bold start nearly 30 years ago and continuing today, NAS has led the way in influencing the state-of-the-art in HPC and related technologies such as high-speed networking, scientific visualization, system benchmarking, batch scheduling, and grid environments.
  • NAS Overview and History
    NAS Overview and History: A Look at NAS' 25 Years of Innovation and a Glimpse at the Future. In the mid-1970s, a group of Ames aerospace researchers began to study a highly innovative concept: NASA could transform U.S. aerospace R&D from the costly and time-consuming wind tunnel-based process to simulation-centric design and engineering by executing emerging computational fluid dynamics (CFD) models on supercomputers at least 1,000 times more powerful than those commercially available at the time. In 1976, Ames Center Director Dr. Hans Mark tasked a group led by Dr. F. Ronald Bailey to explore this concept, leading to formation of the Numerical Aerodynamic Simulator (NAS) Projects Office in 1979. At the same time, a user interface group was formed consisting of CFD leaders from industry, government, and academia to help guide requirements for the NAS concept and provide feedback on evolving computer feasibility studies. At the conclusion of these activities in 1982, NASA management changed the NAS approach from a focus on purchasing a specially developed supercomputer to an on-going Numerical Aerodynamic Simulation Program to provide leading-edge computational capabilities based on an innovative network-centric environment. The NAS Program plan for implementing this new approach was signed on February 8, 1983. Grand Opening: As the NAS Program got underway, its first supercomputers were installed in Ames' Central Computing Facility, starting with a Cray X-MP-12 in 1984. The organization grew from a projects office to a full-fledged division at Ames, and in January 1987, NAS staff and equipment were relocated to the new
  • InfiniBand Shines on the Top500 List of World's Fastest Computers
    InfiniBand Shines on the Top500 List of World's Fastest Computers. InfiniBand Drives Two Top Ten Supercomputers and Gains Widespread Adoption for Clustering and Storage. PITTSBURGH, PA - November 9, 2004 - (SuperComputing 2004) - Mellanox® Technologies Ltd, the leader in performance business and technical computing interconnects, announced today that the InfiniBand interconnect is used on more than a dozen systems on the prestigious Top500 list of the world's fastest computers, including two systems which achieved a top ten ranking. The latest Top500 list (www.top500.org) was unveiled this week at SuperComputing 2004, the world's leading conference on high performance computing (HPC). This represents more than 400% growth since last year, when InfiniBand technology made its debut on the list with just three entries. The Top500 computer list is a bellwether technology indicator for the broader performance business computing and technical computing markets. InfiniBand is being broadly adopted for numerically intensive applications including CAD/CAM, oil and gas exploration, fluid dynamics, weather modeling, molecular dynamics, gene sequencing, etc. These applications benefit from the high-throughput, low-latency InfiniBand interconnect, which allows industry-standard servers to be connected together to create powerful computers at unmatched price/performance levels. These InfiniBand clustered computers deliver a 20X price/performance advantage vs. traditional mainframe systems. InfiniBand enables NASA's Columbia supercomputer to claim its status as the second fastest computer in the world. Columbia uses InfiniBand to connect twenty Itanium-based Silicon Graphics platforms into a 10,240-processor supercomputer that achieves over 51.8 Tflops of sustained performance.
  • William Thigpen Deputy Project Manager August 15, 2011 Introduction
    The High End Computing Capability Project: Excellence at the Leading Edge of Computing William Thigpen Deputy Project Manager August 15, 2011 Introduction • NAS has stellar history of amazing accomplishments, drawing from synergy between high end computing (HEC) and modeling and simulation (M&S) • NAS has earned national and international reputation as a pioneer in the development and application of HEC technologies, providing its diverse customers with world-class M&S expertise, and state-of-the-art supercomputing products and services • HECC has been a critical enabler for mission success throughout NASA – it supports the modeling, simulation, analysis, and decision support activities for all four Mission Directorates, NESC, OCE, and OS&MA • HECC Mission: Develop and deliver the most productive HEC environment in the world, enabling NASA to extend technology, expand knowledge, and explore the universe • SCAP model of baseline funding + marginal investment has been immensely successful and has had a positive impact on NASA science and engineering – Baseline funding provides “free” HECC resources and services (MDs select own projects, determine priorities, and distribute own resources) – Marginal investments from ARMD, ESMD, and SMD demonstrate current model has strong support and buy-in from MDs HEC Characteristics • HEC, by its very nature, is required to be at the leading edge of computing technologies – Unlike agency-wide IT infrastructure, HEC is fundamentally driven to provide a rapidly increasing capability to a relatively small
  • High-End Supercomputing at NASA!
    High-End Supercomputing at NASA! Dr. Rupak Biswas, Chief, NASA Advanced Supercomputing (NAS) Division; Manager, High End Computing Capability (HECC) Project; NASA Ames Research Center, Moffett Field, Calif., USA. 41st HPC User Forum, Houston, TX, 7 April 2011. NASA Overview: Mission Directorates. NASA's Mission: To pioneer the future in space exploration, scientific discovery, and aeronautics research. Aeronautics Research (ARMD): Pioneer new flight technologies for safer, more secure, efficient, and environmentally friendly air transportation and space exploration. Exploration Systems (ESMD): Develop new spacecraft and other capabilities for affordable, sustainable human and robotic exploration. Science (SMD): Observe and understand the Earth-Sun system, our solar system, and the universe. Space Operations (SOMD): Extend the duration and boundaries of human spaceflight for space exploration and discovery. NASA Overview: Centers & Facilities: Goddard Institute for Space Studies (New York, NY); Glenn Research Center (Cleveland, OH); Headquarters (Washington, DC); Ames Research Center (Moffett Field, CA); Independent V&V Facility (Fairmont, WV); Goddard Space Flight Center (Greenbelt, MD); Dryden Flight Research Center (Edwards, CA); Marshall Space Flight Center (Huntsville, AL); Langley Research Center (Hampton, VA); White Sands Test Facility (White Sands, NM); Wallops Flight Facility (Wallops Island, VA); Jet Propulsion Laboratory (Pasadena, CA); Stennis Space Center (Stennis, MS); Johnson Space Center (Houston, TX); Kennedy Space Center (Cape Canaveral, FL). M&S Imperative
  • Scientific Application-Based Performance Comparison of SGI Altix 4700, IBM POWER5+, and SGI ICE 8200 Supercomputers
    NAS Technical Report NAS-09-001, February 2009. Scientific Application-Based Performance Comparison of SGI Altix 4700, IBM POWER5+, and SGI ICE 8200 Supercomputers. Subhash Saini, Dale Talcott, Dennis Jespersen, Jahed Djomehri, Haoqiang Jin, and Rupak Biswas, NASA Advanced Supercomputing Division, NASA Ames Research Center, Moffett Field, California 94035-1000, USA. {Subhash.Saini, Dale.R.Talcott, Dennis.Jespersen, Jahed.Djomehri, Haoqiang.Jin, Rupak.Biswas}@nasa.gov. Abstract—The suitability of next-generation high-performance computing systems for petascale simulations will depend on various performance factors attributable to processor, memory, local and global network, and input/output characteristics. In this paper, we evaluate performance of new dual-core SGI Altix 4700, quad-core SGI Altix ICE 8200, and dual-core IBM POWER5+ systems. To measure performance, we used micro-benchmarks from High Performance Computing Challenge (HPCC), NAS Parallel Benchmarks (NPB), and four real-world applications—three from computational fluid dynamics (CFD) and one from climate modeling. We used the micro-benchmarks to develop a controlled understanding of individual system components, then analyzed and interpreted performance of the NPBs and applications. The ICE 8200 is based on the quad-core Xeon processor, whereas the 4700 is based on the dual-core Itanium processor. Hoisie et al. conducted performance comparison through benchmarking and modeling of three supercomputers: IBM Blue Gene/L, Cray Red Storm, and IBM Purple [6]. Purple is an Advanced Simulation and Computing (ASC) system based on the single-core IBM POWER5 architecture and is located at Lawrence Livermore National Laboratory (LLNL) [7]. The paper concentrated on system noise, interconnect congestion, and performance modeling using two applications, namely SAGE and Sweep3D. Oliker et al. studied scientific application performance on candidate petascale platforms: POWER5, AMD Opteron, IBM BlueGene/L, and Cray X1E [8].
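The "controlled understanding of individual system components" the authors describe starts with micro-benchmarks that time a kernel of known data volume. As a loose, pure-Python illustration of the method (modeled on the STREAM triad kernel that appears in HPCC, but not itself one of the benchmarks the report ran), the sketch below times a simple vector update and divides by the bytes it touches; absolute numbers from interpreted Python are of course far below what a compiled benchmark reports.

```python
import time

# Illustrative memory micro-benchmark, loosely modeled on the STREAM triad
# a[i] = b[i] + s * c[i]. The point is the methodology: time a kernel whose
# data volume is known exactly, then divide to get a bandwidth figure.
def triad_mb_s(n=1_000_000, scalar=3.0):
    a = [0.0] * n
    b = [1.0] * n
    c = [2.0] * n
    t0 = time.perf_counter()
    for i in range(n):
        a[i] = b[i] + scalar * c[i]
    elapsed = time.perf_counter() - t0
    bytes_moved = 3 * 8 * n  # read b, read c, write a; 8-byte floats
    return bytes_moved / elapsed / 1e6  # MB/s

print(f"{triad_mb_s():.0f} MB/s")
```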
  • Columbia Application Performance Tuning Case Studies
    NAS Technical Report; NAS-06-019 December 2006 Columbia Application Performance Tuning Case Studies Johnny Chang NASA Advanced Supercomputing Division Computer Sciences Corporation NASA Ames Research Center Moffett Field, California 94035-1000, USA [email protected] Abstract: This paper describes four case studies of application performance enhancements on the Columbia supercomputer. The Columbia supercomputer is a cluster of twenty SGI Altix systems, each with 512 Itanium 2 processors and 1 terabyte of global shared-memory, and is located at the NASA Advanced Supercomputing (NAS) facility in Moffett Field. The code optimization techniques described in the case studies include both implicit and explicit process-placement to pin processes on CPUs closest to the processes’ memory, removing memory contention in OpenMP applications, eliminating unaligned memory accesses, and system profiling. These techniques enabled approximately 2- to 20-fold improvements in application performance. Key words: Code tuning, process-placement, OpenMP scaling, memory contention, unaligned memory access. 1 Introduction An integral component of the support model for a world-class supercomputer is the work done by the applications support team to help the supercomputer users make the most efficient use of their computer time allocations. This applications support involves all aspects of code porting and optimization, code debugging, scaling, etc. Several case studies derived from our work in helping users optimize their codes on the Columbia supercomputer have been presented at both the 2005 [1] and 2006 [2] SGI User Group Technical Conference. This paper describes four of those case studies. First, we present a brief description and history of the Columbia supercomputer, which also sets the terminology used throughout the paper.
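One of the techniques named in this report, explicit process placement, pins each process to a CPU so that its memory pages stay local on a NUMA machine such as the Altix. On the Altix itself this was typically done with SGI tools such as dplace; the sketch below shows the same idea with the generic Linux affinity call available in Python (`os.sched_setaffinity` is Linux-only, and the single-CPU pinning here is a simplified illustration, not the paper's actual procedure).

```python
import os

# Illustrative CPU pinning on Linux. Restricting a process to one CPU keeps
# its pages close to that CPU's local memory on a NUMA system, which is the
# effect explicit process placement aims for.
def pin_to_cpu(cpu):
    """Restrict the calling process to a single CPU."""
    os.sched_setaffinity(0, {cpu})  # pid 0 means the current process

pin_to_cpu(0)
print(os.sched_getaffinity(0))  # the process now runs only on CPU 0
```

In a real MPI or OpenMP job, each rank or thread would be pinned to a different CPU, usually by the launcher rather than by the application itself.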
  • Kalpana Chawla
    Kalpana Chawla. From Wikipedia, the free encyclopedia. Kalpana Chawla, NASA Astronaut. Born: March 17, 1962, Karnal, Haryana, India. Died: February 1, 2003 (aged 40), aboard Space Shuttle Columbia over Texas, U.S. Previous occupation: Research Scientist. Time in space: 31 days, 14 hours, 54 minutes. Selection: 1994 NASA Group. Missions: STS-87, STS-107. Kalpana Chawla (March 17, 1962[1] – February 1, 2003) was born in Karnal, India. She was the first Indian American astronaut[2] and first Indian woman in space.[3] She first flew on Space Shuttle Columbia in 1997 as a mission specialist and primary robotic arm operator. In 2003, Chawla was one of seven crew members killed in the Space Shuttle Columbia disaster.[4] Education: Chawla completed her earlier schooling at Tagore Baal Niketan School, Karnal. She was underage, and to enable her to enroll in school, her date of birth was changed to July 1, 1961. She completed a Bachelor of Engineering degree in Aeronautical Engineering at Punjab Engineering College at Chandigarh in 1982. She moved to the United States in 1982 and obtained an M.S. degree in aerospace engineering from the University of Texas at Arlington in 1984. Chawla went on to earn a second M.S. degree in 1986 and a PhD in aerospace engineering in 1988 from the University of Colorado at Boulder. Later that year she began working at the NASA Ames Research Center; she later joined Overset Methods, Inc. as vice president.
  • August 2003, Columbia Accident Investigation Report Volume I
    COLUMBIA ACCIDENT INVESTIGATION BOARD, Report Volume I, August 2003. On the Front Cover: This was the crew patch for STS-107. The central element of the patch was the microgravity symbol, µg, flowing into the rays of the Astronaut symbol. The orbital inclination was portrayed by the 39-degree angle of the Earth's horizon to the Astronaut symbol. The sunrise was representative of the numerous science experiments that were the dawn of a new era for continued microgravity research on the International Space Station and beyond. The breadth of science conducted on this mission had widespread benefits to life on Earth and the continued exploration of space, illustrated by the Earth and stars. The constellation Columba (the dove) was chosen to symbolize peace on Earth and the Space Shuttle Columbia. In addition, the seven stars represent the STS-107 crew members, as well as honoring the original Mercury 7 astronauts who paved the way to make research in space possible. The Israeli flag represented the first person from that country to fly on the Space Shuttle. On the Back Cover: This emblem memorializes the three U.S. human space flight accidents - Apollo 1, Challenger, and Columbia. The words across the top translate to: "To The Stars, Despite Adversity - Always Explore". Limited First Printing, August 2003, by the Columbia Accident Investigation Board. Subsequent Printing and Distribution by the National Aeronautics and Space Administration and the Government Printing Office, Washington, D.C. IN MEMORIAM: Rick D. Husband, Commander; William C.
  • Parallel I/O Performance Characterization of Columbia and NEC SX-8 Superclusters
    Parallel I/O Performance Characterization of Columbia and NEC SX-8 Superclusters. Subhash Saini, Dale Talcott, Rajeev Thakur, Panagiotis Adamidis, Rolf Rabenseifner and Robert Ciotti. Abstract: Many scientific applications running on today's supercomputers deal with increasingly large data sets and are correspondingly bottlenecked by the time it takes to read or write the data from/to the file system. We therefore undertook a study to characterize the parallel I/O performance of two of today's leading parallel supercomputers: the Columbia system at NASA Ames Research Center and the NEC SX-8 supercluster at the University of Stuttgart, Germany. On both systems, we ran a total of seven parallel I/O benchmarks, comprising five low-level benchmarks: (i) IO_Bench, (ii) MPI Tile IO, (iii) IOR (POSIX and MPI-IO), (iv) b_eff_io (five different …). From the introduction: … memory, and interconnect, but I/O performance needs to be significantly increased. It is not just the number of teraflops/sec that matters, but how many gigabytes/sec or terabytes/sec of data applications can really move in and out of disks that will affect whether these computing systems can be used productively for new scientific discoveries [1-2]. To get a better understanding of how the I/O systems of two of the leading supercomputers of today perform, we undertook a study to benchmark the parallel I/O performance of NASA's Columbia supercomputer located at NASA Ames Research Center and the NEC SX-8 supercluster located at the University of Stuttgart, Germany. The rest of this paper is organized as follows. In Section 2, we present the architectural details of the two machines and their file systems.
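The gigabytes-per-second framing in this excerpt comes down to timing the movement of a known amount of data. Below is a minimal single-process write probe in that spirit; it is not IO_Bench, IOR, or any of the benchmarks listed above, just an assumed toy illustration. Real parallel I/O benchmarks coordinate many processes (typically via MPI) and separate buffered writes from synced writes much more carefully.

```python
import os
import tempfile
import time

# Toy single-process write-bandwidth probe. Writes total_mb of zero bytes
# in block_kb-sized blocks, fsyncs so the data actually reaches the disk,
# and reports MB/s. Illustrative only; not a real I/O benchmark.
def write_bandwidth_mb_s(total_mb=64, block_kb=1024):
    block = b"\0" * (block_kb * 1024)
    blocks = total_mb * 1024 // block_kb
    fd, path = tempfile.mkstemp()
    try:
        t0 = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(blocks):
                f.write(block)
            f.flush()
            os.fsync(f.fileno())  # force data out of the page cache
        elapsed = time.perf_counter() - t0
    finally:
        os.remove(path)
    return total_mb / elapsed

print(f"{write_bandwidth_mb_s():.0f} MB/s")
```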
  • NAS Division 2008 25 Years of Innovation
    National Aeronautics and Space Administration. NAS Division 2008: 25 Years of Innovation. www.nasa.gov. LETTER FROM THE DIRECTOR OF AMES: Greetings, I'm delighted to present this special chronicle celebrating NAS' 25th anniversary. Created as the Agency's bold initiative in simulation-based aerospace vehicle design and stewardship, NAS has earned an international reputation as a pioneer in development and application of high-performance computing technologies, providing its diverse customers with world-class aerospace modeling and simulation expertise, and state-of-the-art supercomputing services. Within these pages, you'll find an overview of NAS' 25-year history, pictorial highlights from the organization's legacy … to improving the design and safety of the Space Shuttle Main Engine, to adapting Shuttle technology for a life-saving …
  • November 2004
    November 2004. Participant's Articles, Information, Product Announcements. 03 FEA Information: Letter To The Engineering Community. 04 SGI: How NASA, SGI and Intel managed to build and deploy history's most powerful supercomputer in 15 blistering weeks. 08 FEA Information - Software - "What Is?" Series - IBM: What is Deep Computing. 10 ETA: eta/VPG version 3.0 Delivers the Best of the Windows and Unix Modeling Features to LS-DYNA Users. 12 FEA Information - Database for technical papers updated. 13 Asia Pacific News - China. Directories: 15 Hardware & Computing and Communication Products. 16 Software Distributors. 18 Consulting Services. 19 Educational Participants. 20 Informational Websites. FEA News/Newswire/Publications/Events & Announcements: 21 News Page & Events. 22 Paper on Line due to Size: Structural Analysis In The Frequency Domain & Understanding Impact Data - Ala (Al) Tabiei, PhD and Martin Lambrecht. 23 Paper on Line due to Size: Simulation of Under Water Explosion Using MSC.DYTRAN - Peiran Ding, Arjaan Buijk. 24 Paper in Full: Test and Analysis Correlation of Foam Impact onto Space Shuttle Wing Leading Edge RCC Panel 8 - Edwin L. Fasanella, US Army Research Laboratory, Vehicle Technology Directorate, Hampton, VA. FEA Information Inc. President & CEO: Marsha Victory. Editor: Trent Eggleston. Managing Editor: Marsha Victory. Technical Editor: Art Shapiro. Technical Writers: Dr. David Benson, Uli Franz, Ala Tabiei. Technical Consultants: Steve Pilz, Reza Sadeghi. Graphic Designer: Wayne L. Mindle. The content of this publication is deemed to be accurate and complete. However, FEA Information Inc. doesn't guarantee or warranty accuracy or completeness of the material contained herein. All trademarks are the property of their respective owners.