MAY 24-28, 1999 • MINNEAPOLIS, MN

41st CUG Conference Final Program

Welcome

Welcome to the 1999 CUG Conference! We are excited to welcome everyone to Minneapolis for the 1999 CUG Conference. As the original home of Cray Research, it seems a fitting place to close out the millennium and look ahead to the next. We are indeed at the end of an era, as the supercomputing paradigm has shifted from the "big iron" to an HPC world of high-performance commodity processors.

The CUG Program Committee has planned an exciting week of technical sessions and discussions. Add to that good food, a grand night out in the elegant Minnesota History Center, an evening reception hosted by SGI, a special luncheon for CUG newcomers, and a chance to meet with your colleagues in the comfortable setting of the downtown Minneapolis Marriott City Center Hotel. We think it will be a rewarding and important event for the CUG community. The technical program promises to be excellent and we encourage you to peruse the program. In addition, we are excited to have as keynote speaker Dr. Cherri M. Pancake, Professor of Computer Science at Oregon State University.

Minnesota is called the Land of 10,000 Lakes, and we hope you take some extra time to enjoy the environment. Spring is a beautiful time of year here, and enthusiasm runs high as people begin to enjoy the outdoors after a long northern winter. From the bluffs of the mighty Mississippi River in the east, to the scenic shores of one of the largest fresh water lakes in the world (Lake Superior), to the farms and prairies of the west, Minnesota provides a diverse offering for visitors. Whether you like outdoor activities and sports, a quiet respite in the northern woods, or just some time to see the sights in the Twin Cities of Minneapolis and St. Paul, we hope your stay will be memorable.

Minnesota is known for friendly but reserved people, who often are masters of understatement. To use the words of Garrison Keillor, well-known Minnesota humorist and host of national public radio's "A Prairie Home Companion," we think this conference will be "not too bad," which, translated from understated Minnesota parlance, means it will be great!

John M. Sell
Network Computing Services, Inc.
Local Arrangements Chair

Message from the CUG Program Committee

To the Members of CUG:

Welcome to the 41st CUG conference on high performance computation and visualization. This is your best chance in 1999 to exchange problem-solving information and enjoy professional interactions with your fellow users of SGI and CRAY technical computing systems in the birthplace of supercomputing!

Discover how CUG—your user forum on SGI high performance systems—can be your source for up-to-the-minute information and insight for that competitive edge you need.

Please review the Technical Program in the following pages for a truly unique opportunity to experience the "End of an Era ... Eve of a New Millennium."

The CUG Program Committee has assembled a diverse array of detail-packed presentations in Tutorials, Keynote and General Sessions, Special Interest Group technical presentations, and spontaneous "Birds of a Feather" (BOF) discussions.

• Tutorial Sessions on Monday morning—available to you at NO additional charge—are a great opportunity for you to update your technical skills with the help of selected technical experts from SGI and CUG sites.

• The Welcome and Keynote Session, Monday afternoon, starts the countdown. The Keynote Speaker, Cherri Pancake of Oregon State University, is a well-known opinion-leader in HPCC.

• General Sessions during the week provide you with the latest corporate and technical information from SGI executives, as well as general-interest technical presentations.

• Parallel Technical Sessions each day give you the opportunity to focus on the specific knowledge domains of the Special Interest Groups (SIGs). Presentations in these sessions have been reviewed and selected for you by SIG Chairpersons.

• Summit Interactive Sessions on Tuesday and Thursday each provide a forum for interaction between CUG and SGI with more time than a Parallel Technical Session and more focus than a General Session.

• Birds of a Feather (BOF) Sessions, the most dynamic aspect of a CUG conference, are scheduled as needed, and notices of BOF meetings will be posted near the Message Board. You are welcome to organize a BOF session during the conference.

• Formal receptions and informal luncheons are among the countless occasions you will have to exchange information with your colleagues from other CUG sites and to collaborate with representatives from SGI.

You can see how the Minneapolis CUG Conference offers something for everyone during an exciting, educational, and entertaining week as we experience the

End of an Era...
...Eve of a New Millennium

Sam Milosevich
CUG Vice-President and Program Chair
Eli Lilly and Company

Contents

• Welcome Message
• Message from the CUG Program Committee
• Program Notes
• Schedule: Monday, Tuesday, Wednesday, Thursday, Friday
• Abstracts
• Local Arrangements: How to Contact Us; Conference Information; On-Site Facilities; Travel Information; Social Events
• Contacts: CUG Board of Directors; Conference Chair; Special Interest Groups; Regional CUG Representatives; Program Layout Committee; SGI Representatives to CUG; CUG Office
• Call for Papers

Program Notes

Special Interest Group (SIG) Meetings
SIGs will hold meetings on Monday that are open to all interested CUG attendees. They provide a forum to discuss the future direction of the SIG, important issues, and talks of special interest. You are encouraged to attend any of these open meetings.

Summit Interactive Sessions
Summit Interactive Sessions on Tuesday and Thursday each provide a forum for interaction between CUG and SGI with more time than a Parallel Technical Session and more focus than a General Session.

Program Committee Meeting
The Program Committee needs your ideas! Come to the Open Program Committee meeting on Thursday to hear about the next CUG conference and to share your suggestions of how to make each CUG the best one to date. All conference attendees are invited and encouraged to attend this open meeting. The CUG Conference Program is the net result of the combined efforts of all interested CUG sites. Come to the Program Committee meeting and make your unique contribution to the next CUG conference!

Video Theater
Supercomputers generate data that, via scientific visualization, can be transformed into pictures that provide a way for scientists to understand the information more fully and quickly. Although the primary purpose of visualization is to facilitate the discovery process, scientists are increasingly coming to rely on it also to present their research conclusions. This Friday morning session showcases the latest in scientific/engineering visualization.

Conference Program Updates
We anticipate that there will be continuing changes to the Program schedule. You should plan to check the Bulletin Boards on site at CUG each day for schedule changes. Thank you for your patience and cooperation.

Call for Papers
We encourage you to present a paper at the CUG 2000 conference in The Netherlands. Please see the Call for Papers at the end of this brochure.

Schedule

Monday AM

Tutorials, 8:30
• Tutorial I (Ballroom 2): System Tuning for the Origin2000, Edward Hayes-Hall (SGI)
• Tutorial II (Ballroom 3): OpenMP, Tom MacDonald (SGI)
• Tutorial III (Ballroom 4): Visualization: OpenGL Volumizer, Chikai Ohazama (SGI)
• Tutorial IV (Deer/Elk Lake): SV1, Jef Dawson (SGI)

10:30–11:00 Break

Tutorials, 11:00
• Tutorial V (Ballroom 2): Performance Tools on the Origin, Alex Poulos (SGI)
• Tutorial VI (Ballroom 3): Scheduling for the T3E, Jay Blakeborough (SGI)
• Tutorial VII (Ballroom 4): I/O Tuning, Randy Kreiser (SGI)
• Tutorial VIII (Deer/Elk Lake): Data Intensive Computing, William Kramer (NERSC)

12:30–2:00 Newcomers Luncheon

Monday PM

General Session 1 (Ballrooms 1 & 2), Chair: John Sell (NCS-MINN)
2:00 Welcome, John Sell (MINN)
2:20 Keynote Address: They Who Live by the FLOP May Die by the Flop, Cherri Pancake, Oregon State University
3:05 CUG Report, Sally Haerer (NCAR), CUG President

3:30–4:00 Break

Special Interest Group (SIG) Updates (Open Meetings)

2A (Ballroom 2)
4:00 Communication and Data Management SIG, Chair: Hartmut Fichtel (DKRZ)
• Mass Storage System Focus: Jeff Terstriep (NCSA)
• Networking Focus: Hans Mandt (BCS)
5:00 Operating Systems SIG, Chair: Chuck Keagle (BCS)
• IRIX Focus: Cheryl Wampler (LANL)
• Security Focus: Virginia Bedford (ARSC)
• UNICOS Focus: Ingeborg Weidl (MPG)

2B (Ballroom 3)
4:00 Programming Environments SIG, Chair: Jeff Kuehn (NCAR)
• Compilers and Libraries Focus: Hans-Hermann Frese (ZIB)
• Software Tools Focus: Guy Robinson (ARSC)
5:00 High Performance Solutions SIG, Chair: Eric Greenwade (INEEL)
• Applications Focus: Larry Eversole (JPL)
• Performance Focus: Michael Resch (RUS)
• Visualization Focus: John Clyne (NCAR)

2C (Ballroom 4)
4:00 Computer Centers SIG, Chair: Leslie Southern (OSC)
• Operations Focus: Brian Kucic (NCSA)
5:00 Computer Centers SIG, Chair: Leslie Southern (OSC)
• User Services Focus: Chuck Niggley (SS-SSD)

All SIG Meetings will end and the rooms must be vacated promptly at 6:00.

7:00–9:00 SGI Reception

Tuesday AM

3A Security and User Services (Ballroom 2), Chair: Virginia Bedford (ARSC)
9:00 Meeting the Requirements of a "Security Test and Evaluation" with UNICOS and IRIX, Virginia Bedford (ARSC)
9:30 The First Ten Steps to Securing a Unix Host, Liam Forbes (ARSC)
10:00 Allocations on the Web—Beyond Removable Tape, Barbara Woodall (OSC)

3B Mass Storage (Ballroom 3), Chair: Jeff Terstriep (NCSA)
9:00 KART: A Simple Client/Server Interface to Access Different Tape Devices at the CINECA Site, Sergio Bernardi (CINECA)
9:30 FibreChannel and Storage Area Networks on the Origin2000, Thomas Ruwart, University of Minnesota
10:00 CXFS: A Clustered File System from SGI, R. Kent Koeninger (SGI)

3C Software Tools (Ballroom 4), Chair: Guy Robinson (ARSC)
9:00 Development Tools Update, Bill Cullen (SGI)
9:30 The Integrative Role of COW's and Supercomputers in Research and Education Activities, Don Morton (ARSC)
10:00 Application of Fortran Pthreads on Linear Algebra Routines, Clay Breshears, Henry Gabb, and Mark Fahey (NRC_VBURG)

10:30–11:00 Break

General Session 4 (Ballrooms 1 & 2), Chair: Laney Kulsrud (IDA)
11:00 SGI Software Report, Mike Booth (SGI)
11:30 SGI Hardware Report, Steve Oberlin (SGI)

12:30–2:00 Lunch

Tuesday PM

5A Compilers and Libraries (Ballroom 2), Chair: Jeff Kuehn (NCAR)
2:00 Strategies and Obstacles in Converting a Large Production System to FORTRAN 90 (f90) on a T90, David Gigrich (BCS)
2:30 A Solver that Learns, Thomas Elken (SGI)
3:00 What's New in the Message Passing Toolkit, Karl Feind (SGI)

5B Networking (Ballroom 3), Chair: Hans Mandt (BCS)
2:00 Super Computer Communications, Ralph Niederberger (KFA)
2:30 Systems Issues in Building Gigabyte Networks, Joe Gervais (SGI)
3:00 GSN and ST: Infrastructures for High Bandwidth, Low Latency Communications, Brad Strand (SGI)

5C Performance and IRIX (Ballroom 4), Chair: Michael Resch (RUS)
2:00 How Moderate-Sized RISC-Based SMPs Can Outperform Much Larger Distributed Memory MPPs, Daniel Pressel, Walter B. Sturek, J. Sahu, and K.R. Heavey (USARL)
2:30 Performance Evaluation of MPI and MPICH on the CRAY T3E, Hans-Hermann Frese (ZIB)
3:00 IRIX Scalability, Gabriel Broner (SGI)

3:30–4:00 Break

Summit Interaction Sessions (SIS), Open Meetings
• 6A (Ballroom 2), 4:00: SV1, Patti Langer (SGI)
• 6B (Ballroom 3), 4:00: Origin2000, Dave Morton (SGI)
• 6C (Ballroom 4), 4:00: IRIX Release Plans, Betsy Zeller (SGI)

Wednesday AM

7A UNICOS (Ballroom 2), Chair: Ingeborg Weidl (MPG)
9:00 T3E Resilience Update, Dean Elling (SGI)
9:30 T3E Scheduling Update, Jay Blakeborough (SGI)
10:00 GigaRing SWS and I/O Status, Paul Falde (SGI)

7B Applications (Ballroom 3), Chair: Larry Eversole (JPL)
9:00 Experiences Porting a CFD Parallel Code to Fortran 90 Including Object-Oriented Programming, Jean-Pierre Grégoire (EDF)
9:30 Performance of the Parallel Climate Model on the SGI Origin2000 and the CRAY T3E, Thomas Bettge, Anthony Craig, Rodney James, Warren G. Strand, Jr., and Vincent Wayland (NCAR)
10:00 Teraweb: Browser-Based High Performance Computing, Joel Neisen, David Pratt, and Ola Bildtsen (MINN)

7C User Services (Ballroom 4), Chair: Leslie Southern (OSC)
9:00 SGI Technical Publications Strategy, Laurie Mertz (SGI)
9:30 High Performance Computing at the University of Utah: A User Services Overview, Julia Caldwell (CHPCUTAH)
10:00 Providing Extra Service: What To Do With Migrated User Files, R.K. Owen (NERSC)

10:30–11:00 Break

General Session 8 (Ballrooms 1 & 2), Chair: Bruno Loepfe (ETHZ)
11:00 CUG Election Candidates
11:25 SGI Service Report, Bob Brooks (SGI)
12:25 CUG Election Results

12:30–2:00 Lunch

Wednesday PM

9A Performance (Ballroom 2), Chair: Hans-Hermann Frese (ZIB)
2:00 Optimizing AMBER for the CRAY T3E, Robert Sinkovits and Jerry Greenberg (SDSC)
2:30 Performance Metrics for Parallel Systems, Daniel Pressel (USARL)
3:00 Performance Characteristics of Messaging Systems on the T3E & the Origin2000, Mark Reed, Eric Sills, Sandy Henriquez, Steve Thorpe, and Lee Bartolotti (NCSC)

9B Operations (Ballroom 3), Chair: Brian Kucic (NCSA)
2:00 IRIX/Windows NT Interoperability, Hank Shiffman (SGI)
2:30 An Examination of the Issues in Implementing the PBS Scheduler on a SGI Onyx2/Origin2000, Sandra Bittner (ARGONNELAB)
3:00 LSF Workload Management System Status and Plans, Jack Thompson (SGI) and John Feduzzi, Platform Computing Corporation

9C Visualization (Ballroom 4), Chair: John Clyne (NCAR)
2:00 Reality Monster Volume Rendering, James Painter, Al McPherson, and Pat McCormick (LANL)
2:30 Interacting with Gigabyte Volume Datasets on the Origin2000, Steve Parker, Peter Shirley, Yarden Livnat, Charles Hansen, Peter-Pike Sloan, and Michael Parker (CHPCUTAH)
3:00 Distributed Memory Octree Algorithms for Interactive, Adaptive, Multiresolution Visualization of Large Data Sets, Raymond Loy and Lori Freitag (ARGONNELAB)

3:30–4:00 Break

10A UNICOS (Ballroom 2), Chair: Chuck Keagle (BCS)
4:00 Clustering—Operational and User Impacts, Sarita Wood (SS-SSD)
4:30 Cluster UDB for SV1 Clusters, Richard Lagerstrom (SGI)
5:00 UNICOS and UNICOS/mk Release Plans and Status, Laura Mikrut (SGI)

10B Mass Storage (Ballroom 3), Chair: Hartmut Fichtel (DKRZ)
4:00 CRAY Data Migration Facility and Tape Management Facility Update, Neil Bannister and Laraine MacKenzie (SGI)
4:30 Storage Management—Seven Years and Seven Lessons with DMF, Robert Bell (CSIRO)
5:00 Performance Evaluation of New STK ACS Tape Drives, Ulrich Detert (KFA)

10C Compilers and Libraries (Ballroom 4), Chair: Hans-Hermann Frese (ZIB)
4:00 Compilers and Libraries Update, Jon Steidel (SGI)
4:30 Scientific Library Update, Mimi Celis (SGI)
5:00 Co-Array Fortran, Robert Numrich (SGI)

7:00 CUG Night Out

Thursday AM

11A IRIX (Ballroom 2), Chair: Cheryl Wampler (LANL)
9:00 IRIX Software Engineering Support, Laura Mikrut (SGI)
9:30 Experiences with the SGI/CRAY Origin2000 256 Processor System Installed at the NAS Facility of the NASA Ames Research Center, Jens Petersohn and Karl Schilke (NAS)
10:00 IRIX Resource Management Plans and Status, Dan Higgins (SGI)

11B Operations (Ballroom 3), Chair: Brian Kucic (NCSA)
9:00 SV1 SuperCluster Resiliency, Mike Wolf (SGI)
9:30 Installation and Administration of Software onto a CRAY SV1 SuperCluster from the SWS, Scott Grabow (SGI)
10:00 Site Planning for SGI Products, Jim Tennesen (SGI)

11C Visualization (Ballroom 4), Chair: Eric Greenwade (INEEL)
9:00 Lagrangian Visualization of Natural Convection Flows, Luis M. de la Cruz, Eduardo Ramos, and Victor Godoy (UNAM)
9:30 Animation of Hairpin Vortices in a Laminar-Boundary-Layer Flow Past a Hemisphere, Henry Tufo, Paul Fischer, Mike Papka, and Matt Szymanski (ARGONNELAB)
10:00 The Use of Virtual Reality Techniques in Industrial Applications, Lori Freitag and Darin Diachin (ARGONNELAB), Daniel Heath, Iowa State University, and Tim Urness (ARGONNELAB)

10:30–11:00 Break

General Session 12 (Ballrooms 1 & 2), Chair: John Clyne (NCAR)
11:00 Strategies for Visual Supercomputing, Scott Ellman (SGI)
11:45 New Techniques for High Performance Visualization, Jim Foran (SGI)

12:30–2:00 Lunch

Thursday PM

General Session 13 (Ballrooms 1 & 2), Chair: Barbara Horner-Miller (ARSC)
2:00 "Honey, I Flew the CRAY", Candace Culhane (DOD)
2:45 SGI Corporate Report, Beau Vrolyk (SGI)

3:30–4:00 Break

Summit Interaction Sessions (SIS), Open Meetings
• 14A (Ballroom 2), 4:00: T3E, William White (SGI)
• 14B (Ballroom 3), 4:00: T90, Vito Bongiorno (SGI)
• 14C (Ballroom 4), 4:00: Storage Directions, Art Beckman (SGI)
• 14D (Deer/Elk Lake), 4:00: SGI Operations Q&A Panel, Charlie Clark (SGI)

General Session 15 (Ballrooms 1 & 2), Chair: Sam Milosevich (ELILILLY), CUG Vice-President
5:30 Conference Program Discussion (Open Meeting)

Everyone interested in the CUG Program is invited to attend.

Friday AM

16A Performance (Ballroom 2), Chair: Michael Resch (RUS)
9:00 A CMOS Vector Processor with a Custom Streaming Cache, Greg Faanes (SGI)
9:30 Performance Tuning for Large Systems, Edward Hayes-Hall (SGI)
10:00 High Performance Simulation of Magnets, Balazs Ujfalussy (ORNL)

16B Software Tools (Ballroom 3), Chair: Guy Robinson (ARSC)
9:00 Panel Discussion: Is "write" Still the Best Debug/Performance Analysis Tool Available?, Guy Robinson (ARSC) (continues at 9:30)
10:00 The SGI Multi-Stream Processor (MSP), Terry Greyzck (SGI)

16C Visualization and Operations (Ballroom 4), Chair: Eric Greenwade (INEEL)
9:00 Visualization Theater Videotapes (continues at 9:30)
10:00 Improvements in SGI's Electronic Support, Lori Leanne Parris and Vernon Clemons (SGI)

10:30–11:00 Break

General Session 17 (Ballrooms 1 & 2), Chair: Sam Milosevich (ELILILLY), CUG Vice-President
11:00 SIG Reports, Sam Milosevich (ELILILLY), CUG Vice-President
11:15 Progress and Power of Flow Simulation and Modeling, Tayfun E. Tezduyar, Rice University
12:00 Next Conference: Noordwijk aan Zee, Holland, hosted by Technical University of Delft (DELFTU) and Stichting Academisch Rekencentrum Amsterdam (SARA)
12:15 Closing Remarks, Sally Haerer (NCAR), CUG President
12:30 Conference Ends

Abstracts

Monday

Tutorials I–IV, 8:30

Tutorial I: System Tuning for the Origin2000
Edward Hayes-Hall (SGI)
A focus on the systune parameters. The tutorial will present a list of the most often used parameters (15–20 or so) with descriptions of what they are and how they interact with other systune parameters. This will be a technical presentation for people who know something about systune and system resources.

Tutorial II: OpenMP
Tom MacDonald (SGI)
OpenMP is a portable shared memory programming model with bindings for Fortran, C and C++. This tutorial will present the major constructs and features in OpenMP that enable programmers to write coarse grain, scalable, shared memory parallel programs. The tutorial is aimed at users interested in writing parallel applications or parallelizing existing applications in Fortran, C and C++.

Tutorial III: Visualization: OpenGL Volumizer
Chikai Ohazama (SGI)
This tutorial will present the basic structure of OpenGL Volumizer. Details of its use for scientific visualization, and the advantages versus the disadvantages of using volume rendering, will be discussed. The mechanics behind Monster Volumizer and its use will also be addressed.

Tutorial IV: SV1
Jef Dawson (SGI)
This tutorial will describe the performance characteristics of the Cray SV1. It is intended to enable programmers to develop optimization strategies for their software running on the SV1.

Tutorials V–VIII, 11:00

Tutorial V: Performance Tools on the Origin
Alex Poulos (SGI)
As applications grow larger, compiler optimizations more aggressive, processor designs more sophisticated, and system architectures more scalable, performance analysis is getting more and more complex. This tutorial will present SGI's performance tools, which offer practical solutions that help users understand the behavior of their codes and tune their applications on Origin systems. The tutorial is targeted toward all audiences. A Q&A session will follow.

Tutorial VI: Scheduling for the T3E
Jay Blakeborough (SGI)
A presentation of the current state of scheduling for the T3E, with a discussion of solutions to current solved and unsolved problems. Questions arising from this tutorial will be treated in a Q&A session later in the meeting.

Tutorial VII: I/O Tuning
Randy Kreiser (SGI)
How to optimize disk I/O for both SCSI and Fibre Channel attached disks and XFS file systems. Topics include optimal file system block size determination, whether to stripe and what the stripe size should be, systune resources that affect disk I/O performance, performance aspects of internal vs. external journal files, and how to determine the optimal allocation group count. Some information about specific computers and OS systems will be presented.

Tutorial VIII: Data Intensive Computing
William Kramer (NERSC)
Data intensive computing is increasingly important in scientific research, whether for experimental data analysis and visualization, or intensive simulation such as climate modeling. By integrating high performance computation, very large data storage, high bandwidth access, and both local and wide area high-speed networking, data intensive computing stretches the technical resources of most architectures. The synergy of networking and computational initiatives should produce major improvements in data intensive computing technology in the next several years. Preparing for this paradigm shift is the prime motivation for this tutorial. The tutorial provides an overview of these issues followed by case studies that explore in detail different aspects of research and systems solutions for data intensive computing.
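To make the programming style described in the OpenMP tutorial (Tutorial II) concrete, here is a minimal sketch in C. It is not material from the tutorial itself; the array size and loop bodies are invented for illustration:

    #include <stdio.h>
    #include <omp.h>

    #define N 1000000

    int main(void)
    {
        static double a[N], b[N];
        double sum = 0.0;
        int i;

        /* Each thread initializes a contiguous chunk of the arrays. */
        #pragma omp parallel for
        for (i = 0; i < N; i++) {
            a[i] = 0.5 * i;
            b[i] = 2.0;
        }

        /* Coarse-grain parallel dot product via a reduction clause. */
        #pragma omp parallel for reduction(+:sum)
        for (i = 0; i < N; i++)
            sum += a[i] * b[i];

        printf("dot = %g (up to %d threads)\n", sum, omp_get_max_threads());
        return 0;
    }

The same directives exist in the Fortran binding, which is part of what makes the model portable across shared memory systems such as the Origin2000.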


General Session 1, 2:00

Keynote Address: They Who Live by the FLOP May Die by the Flop
Cherri Pancake, Oregon State University
It's definitely not your father's supercomputer anymore—it's not even like your own supercomputer of just a few years ago. HPC users don't want to believe that every time they get familiar with a new type of machine it will be superseded by something entirely different. Are they being unrealistic, or are market forces going to push users to the breaking point? This presentation will examine trends that have emerged at HPC vendors, at user sites, and across the broader computing marketplace to identify the long-term implications for usability. Examples gleaned from hundreds of interviews with users and vendors will be used to show why HPC has evolved to its current state, and what will be needed for users to believe that future machines are really "new and improved."

2A–C Special Interest Group (SIG) Updates (Open Meetings), 4:00 to 6:00
These meetings are open to everyone. The meetings will introduce the SIG leadership, SGI contacts, and the basic agenda of the SIG. Please come and participate. See the Monday 4:00 and 5:00 schedule for a list of SIGs and Chairs.

2A Communications and Data Management SIG, 4:00
SIG Chair: Hartmut Fichtel (DKRZ); Mass Storage Focus Area Chair: Jeff Terstriep (NCSA); Networking Focus Area Chair: Hans Mandt (BCS)
This SIG currently consists of the above two focus areas, and is interested in nearly all aspects of networking, I/O, and mass storage (hardware, software, performance, security). After the SIG restructuring process of last year, this meeting concentrates on compiling input to get the technical activities going again. The meeting is open to everyone. Please come and participate, and give us your technical input. This is the opportunity to influence the direction of the SIG and compile questions/comments/criticism to SGI.

2A Operating Systems SIG, 5:00
SIG Chair: Chuck Keagle (BCS); IRIX Focus Area Chair: Cheryl Wampler (LANL); Security Focus Area Chair: Virginia Bedford (ARSC); UNICOS Focus Area Chair: Ingeborg Weidl (MPG)
This session will be of interest to those who want the inside scoop on SGI's plans for IRIX, Unicos, and Security. SGI will discuss their development plans for IRIX and security and their retirement plans for Unicos. Will we ever see NT on SGI high end systems? What about Linux? Will IRIX support Session Level Accounting? Will User Resource Limits be integrated into IRIX? SGI will answer these questions and more in a thumbnail sketch of their development plans for IRIX, Unicos, and Security. Following this 15 minute presentation, we will discuss features we both like and dislike about the existing products. These will be categorized and the top likes and dislikes for each area will be presented to SGI.
Agenda:
• Introductions of Focus Group Chairs: IRIX, Cheryl Wampler (LANL); Security, Virginia Bedford (ARSC); Unicos, Ingeborg Weidl (MPG); SGI Liaison, TBD
• SGI thumbnail sketch of future plans
• Open discussion of issues and concerns
• Voting for the top issues and concerns to present to SGI

2C Computer Centers SIG, 4:00
SIG Chair: Leslie Southern (OSC); Operations Focus Area Chair: Brian Kucic (NCSA)
Agenda:
• Welcome/Introduction, Brian Kucic
• CUG Survey, Fran Pellegrino
• Around the Room, Introductions
• Site Configuration Summary
• Any Concerns or Problems (help needed, issues for panel)
• Any Solutions or Problems Overcome (help offered)


Tuesday

3A Security, 9:00
Meeting the Requirements of a "Security Test and Evaluation" with UNICOS and IRIX
Virginia Bedford (ARSC)
This talk will cover the steps taken to meet the requirements put forth by a US Security Agency, specifically focusing on Unicos and IRIX platforms. Specific changes made to the system's monitoring tools will be covered as well as some general policy issues. It will also describe the steps taken to ensure that newly installed systems maintain a safer environment.

3A Security, 9:30
The First Ten Steps to Securing a Unix Host
Liam Forbes (ARSC)
This paper and talk will cover the first ten steps ARSC performs to secure a Unix (UNICOS or IRIX) machine. This paper is aimed at system administrators implementing host based security network-wide or on a single machine. None of these steps require extra security packages, and they have helped us to maintain a similar configuration across multiple platforms and machines.

3B Mass Storage, 9:00
KART: A Simple Client/Server Interface to Access Different Tape Devices at the CINECA Site
Sergio Bernardi (CINECA)
KART was developed to provide controlled and easy access to different tape devices at the CINECA site. Based on a client/server architecture, KART provides a set of commands to define logical tape volumes and to read and write files on the defined volumes. The set of commands is also accessible via a Web page.

3B Mass Storage, 9:30
FibreChannel and Storage Area Networks on the Origin2000
Thomas Ruwart, University of Minnesota
FibreChannel is fast becoming the standard for connecting storage to high performance machines. This talk describes the performance of FibreChannel storage on the SGI Origin2000 and its use in creating Storage Area Networks (SANs).

3C Software Tools, 9:00
Development Tools Update
Bill Cullen (SGI)
During this talk, users will hear about current Debugger and Performance Tool offerings on IRIX. Specifically, the talk will highlight features that have gone into the WorkShop IDE, SpeedShop, and ProMP. In addition, this talk will cover future plans for development tools on IRIX, Linux, and NT.

3C Software Tools, 9:30
The Integrative Role of COW's and Supercomputers in Research and Education Activities
Don Morton (ARSC)
Experiences of researchers and students are presented in the porting of code between a cluster of Linux workstations at The University of Montana and the CRAY T3E at the Arctic Region Supercomputing Center. We test the thesis that low-cost workstation environments may be utilized for training and for developing and debugging parallel codes which can ultimately be moved to the CRAY T3E with relative ease for realizing high performance gains. We further present ideas on how the computing environments of supercomputers and COW's might benefit from more commonality.


3A User Services, 10:00
Allocations on the Web—Beyond Removable Tape
Barbara Woodall (OSC)
OSC's allocations process is moving from manually produced records to automated, Web-based access for applicants and Allocations Committee members. Now, hyperlinks and immediate, committee-wide access to application information have replaced notes attached to file folders with removable tape. Future plans include linking OSC's database with the Web pages to automate the process even more and to make more information readily available.

3B Mass Storage, 10:00
CXFS: A Clustered File System from SGI
R. Kent Koeninger (SGI)
CXFS is a clustered file system that provides near-local I/O performance for shared access to Fibre Channel disks from multiple hosts. This talk will give an overview of CXFS and the status of CXFS product development.

3C Software Tools, 10:00
Application of Fortran Pthreads on Linear Algebra Routines
Clay Breshears, Henry Gabb, and Mark Fahey (NRC_VBURG)
The manipulation and solution of dense linear systems with 10,000 to 40,000 equations is not unheard of in the HPC community. We propose to use Pthreads to make concurrent Fortran linear algebra routines. The effectiveness of this work will be demonstrated with an MRI image analysis application. This work builds on research presented at the 40th CUG conference.
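The abstract above concerns a Fortran interface to Pthreads, for which no code is given. As a rough C illustration of the underlying pattern (partitioning a linear algebra kernel across POSIX threads), consider this sketch; the thread count, array size, and dot-product kernel are all invented for the example:

    #include <pthread.h>
    #include <stdio.h>

    #define N        1000000
    #define NTHREADS 4

    static double x[N], y[N];
    static double partial[NTHREADS];

    /* Each thread reduces one contiguous slice into its own slot,
     * so no locking is needed until the final serial sum. */
    static void *dot_slice(void *arg)
    {
        long t = (long)arg;
        long lo = t * (N / NTHREADS);
        long hi = (t == NTHREADS - 1) ? N : lo + N / NTHREADS;
        double s = 0.0;
        long i;

        for (i = lo; i < hi; i++)
            s += x[i] * y[i];
        partial[t] = s;
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];
        double sum = 0.0;
        long i, t;

        for (i = 0; i < N; i++) { x[i] = 1.0; y[i] = 0.001; }

        for (t = 0; t < NTHREADS; t++)
            pthread_create(&tid[t], NULL, dot_slice, (void *)t);
        for (t = 0; t < NTHREADS; t++) {
            pthread_join(tid[t], NULL);
            sum += partial[t];
        }
        printf("dot = %g\n", sum);
        return 0;
    }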

5A Compilers and Libraries, 2:00
Strategies and Obstacles in Converting a Large Production System to FORTRAN 90 (f90) on a T90
David Gigrich (BCS)
Why the decision to convert in-house Triton applications from FORTRAN 77 (cft77) to FORTRAN 90 (f90)? This talk covers the strategies used and challenges faced in the f90 conversion of a 1.4 million line finite element application used by the world's largest producer of commercial jetliners: complications due to poor programming, compiler changes, and unexpected compiler features (bugs); the tools used and developed; regression testing; system validation for acceptance by the engineering customers; and the resources required.

5B Networking, 2:00
Super Computer Communications
Ralph Niederberger (KFA)
In the last several years, the distribution of a parallel application over high speed communication lines onto more than one computer system has become highly interesting. The paper will describe how the current CRAY supercomputer systems (GigaRing systems) can contribute to such metacomputing environments and what has to be done to support high speed ATM 622 Mb/s communication lines on CRAY GigaRing systems using a HiPPI-to-ATM gateway.

5C Performance and IRIX, 2:00
How Moderate-Sized RISC-Based SMPs Can Outperform Much Larger Distributed Memory MPPs
Daniel Pressel, Walter B. Sturek, J. Sahu, and K.R. Heavey (USARL)
This paper is an expanded version of the middle part of my talk from the CUG Origin2000 Workshop and was accepted for presentation at the SCS sponsored Simulation Conference in April.


5A Compilers and Libraries, 2:30
A Solver that Learns
Thomas Elken (SGI)
SGI was one of the first to employ nested dissection matrix ordering methods for reducing the non-zeroes in solving sparse systems of linear equations. This talk will discuss an enhancement which uses randomization, parallel processing, and a long-term memory to improve the quality of the matrix ordering. This enhancement has been applied to application software such as ABAQUS and CPLEX and will soon be part of the SCSL (SGI/CRAY Scientific Library). Finally, performance numbers gathered will be presented and compared with current generation technologies.

5A Compilers and Libraries, 3:00
What's New in the Message Passing Toolkit?
Karl Feind (SGI)
In the past year a number of new features and optimizations have been added to the Message Passing Toolkit (MPT) on Origin2000 and CRAY T3E systems. This talk will describe recently added features in the MPI, SHMEM, and PVM message passing libraries, as well as outline the current feature road map for upcoming releases.

5B Networking, 2:30
Systems Issues in Building Gigabyte Networks
Joe Gervais (SGI)
System Area Networks require more than a fast channel like Gigabyte System Network (GSN) to achieve high performance. An overview of the Scheduled Transfer (ST) protocol for OS bypass will be provided. ST is a connection-oriented data transfer protocol that can be thought of as "Network DMA." A survey of systems issues in interfacing to a gigabyte per second network will be done. Applications of the technology to other networks such as Gigabit Ethernet will also be presented.

5B Networking, 3:00
GSN and ST: Infrastructures for High Bandwidth, Low Latency Communications
Brad Strand (SGI)
In this presentation, SGI's Gigabyte System Network (GSN) network interface and Scheduled Transfer (ST) protocol will be described. Details of the high bandwidth and low latency of GSN and ST will be presented. Options for using ST as a framework for clustering and storage area networking will be explored.

5C Performance and IRIX, 2:30
Performance Evaluation of MPI and MPICH on the CRAY T3E
Hans-Hermann Frese (ZIB)
MPI is a portable message passing interface for parallel applications on distributed memory MIMD machines. In this paper we present performance results for the native SGI/CRAY MPI implementation and the portable ANL/MSU MPICH implementation on the CRAY T3E.

5C Performance and IRIX, 3:00
IRIX Scalability
Gabriel Broner (SGI)
In the past year we have made improvements to IRIX scalability. This talk will cover the plans and status for scaling IRIX in an SMP fashion, additional reliability and features, and support across multiple SMPs for additional scaling.
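Several of the abstracts above (the Message Passing Toolkit update and the MPI/MPICH evaluation) revolve around message-passing latency and bandwidth. The conventional way to measure both is a ping-pong loop between two ranks; the sketch below uses only standard MPI calls, with the message size and repetition count chosen arbitrarily for illustration:

    #include <mpi.h>
    #include <stdio.h>
    #include <string.h>

    #define NBYTES 65536   /* message size under test */
    #define REPS   100     /* round trips, to amortize timer noise */

    int main(int argc, char **argv)
    {
        static char buf[NBYTES];
        double t0, t1, oneway;
        int rank, i;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        memset(buf, 0, sizeof buf);

        MPI_Barrier(MPI_COMM_WORLD);
        t0 = MPI_Wtime();
        for (i = 0; i < REPS; i++) {
            if (rank == 0) {
                MPI_Send(buf, NBYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, NBYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, NBYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, NBYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        t1 = MPI_Wtime();
        oneway = (t1 - t0) / (2.0 * REPS);

        if (rank == 0)
            printf("%d bytes: %.1f us one-way, %.1f MB/s\n",
                   NBYTES, oneway * 1e6, NBYTES / oneway / 1e6);

        MPI_Finalize();
        return 0;
    }

Run with at least two ranks; sweeping NBYTES over a range of sizes yields the transfer-time-versus-message-size curves that talks like these report.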


Wednesday

7A UNICOS, 9:00
T3E Resilience Update
Dean Elling (SGI)
This talk will cover upcoming enhancements to improve T3E resilience. SGI currently supports warm booting PEs from the SWS. With the release of UNICOS/mk 2.0.5, warm booting PEs from the mainframe will be supported, which could significantly reduce the time needed to perform this function. Dynamic PE renumbering will also be provided. This feature will enable sites to reconfigure a failed PE with a renumbered PE and thereby maintain the APP PE space without requiring a system reboot.

7A UNICOS, 9:30
T3E Scheduling Update
Jay Blakeborough (SGI)
With UNICOS/mk 2.0.4, we have made significant strides in the stability and functionality of application scheduling on the T3E. This talk will highlight recent and planned changes to the Political Scheduling Daemon (psched) and kernel-level scheduling support. It will also outline the implementation of single-PE applications and migrate-on-swap.

7B Applications, 9:00
Experiences Porting a CFD Parallel Code to Fortran 90 Including Object-Oriented Programming
Jean-Pierre Grégoire (EDF)
Platform used: C98, Programming Environment 3.0. First we present the successful efforts we made to obtain from the CF90 parallelizer a Fortran parallel version of the N3S code similar to that produced by the CF77 parallelizer. We also discuss the stability of the results during this porting. Then we describe the difficulties in controlling memory consumption when moving from basic programming to object-oriented programming.

7B Applications, 9:30
Performance of the Parallel Climate Model on the SGI Origin2000 and the CRAY T3E
Thomas Bettge, Anthony Craig, Rodney James, Warren G. Strand, Jr., and Vincent Wayland (NCAR)
The DOE Parallel Climate Model (PCM) is a tightly coupled model of the earth's physical and dynamic climate system, and includes fully active components of the atmosphere, ocean, land, river water, and space. It was designed for use on the T3E massively parallel machine architecture, and adapted for use on the distributed shared memory Origin2000. The computational design of the PCM, including the compromises incorporated to address conflicting scientific and computational requirements, priorities, and deadlines, will be summarized. Performance experiences on both the T3E and Origin2000 will be presented.

7C User Services, 9:00
SGI Technical Publications Strategy
Laurie Mertz (SGI)
SGI Technical Publications delivers documentation through various methods, from online to print. This session will give an overview of current documentation delivery mechanisms and thoughts on future delivery methods.

7C User Services, 9:30
High Performance Computing at the University of Utah: A User Services Overview
Julia Caldwell (CHPCUTAH)
The user community supported by CHPC comes from a wide array of disciplines at the University of Utah. The resources at CHPC include a 64 node SGI Origin2000 with graphics pipes, a 72 node IBM SP, a 20 node SGI PowerChallenge, a 32 node Intel Cluster, a 16 node Sun E10K, and a few other small workstation clusters. This talk will discuss how the User Services division supports this diverse user community, including problem tracking and our online support services.

7A UNICOS, 10:00
GigaRing SWS and I/O Status
Paul Falde (SGI)
This presentation will cover the current status of the SWS and GigaRing I/O.

7B Applications, 10:00
Teraweb: Browser-Based High Performance Computing
Joel Neisen, David Pratt, and Ola Bildtsen (MINN)
With standard browsers, Teraweb provides a seamless and unified graphical environment for users of High Performance Computing (HPC) equipment. Teraweb can be used to run computational models, manipulate data files, print or visualize results, or display system usage information without having to learn cryptic command-line interfaces or operating system commands.

7C User Services, 10:00
Providing Extra Service: What To Do With Migrated User Files?
R.K. Owen (NERSC)
When faced with decommissioning our popular C90 machine, there was a problem of what to do with the 80,000 migrated files in 40,000 directories and the users' non-migrated files. Special software and scripts were written to store the file inode data, parse the DMF database, and interact with the tape storage system. Design issues, problems to overcome, boundaries to cross, and the hard reality of experience will be discussed.

9A Performance, 2:00
Optimizing AMBER for the CRAY T3E
Robert Sinkovits and Jerry Greenberg (SDSC)
AMBER is a widely used suite of programs employed to study the dynamics of biomolecules and is the single most used application on the SDSC CRAY T3E. In this paper, we describe the cache and intrinsic function optimizations that were taken to tune the molecular dynamics module of AMBER, resulting in single processor speedups of more than 70%. The implications of these performance gains for biochemistry calculations will be briefly described.

9B Operations, 2:00
IRIX/Windows NT Interoperability
Hank Shiffman (SGI)
No man is an island, entire of itself. Similarly, computing systems need to interact, to share information, to cooperate in helping users do their jobs. We'll discuss the range of challenges involved in integrating IRIX and Windows systems into one environment, a variety of solutions to these challenges, and ways to satisfy the differing concerns of application users, developers, and system administrators.

9C Visualization, 2:00
Reality Monster Volume Rendering
James Painter, Al McPherson, and Pat McCormick (LANL)
Texture based volume rendering algorithms can be used to exploit high performance graphics accelerators such as the SGI IR (InfiniteReality) for interactive volume rendering. A single IR can interactively render a 64M voxel data set (256^3). Working with SGI, we have chained together multiple IR pipes in order to enable rendering of even larger problems. With 16 IR pipes we have been able to render billion cell data sets (1024^3) at approximately 5 frames per second. We have constructed an I/O system that enables us to interactively page through billion cell time histories at approximately the same rate. This I/O system is able to page texture data from disk at 5 Gigavoxels per second.
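The rendering and paging rates quoted in the Reality Monster abstract are mutually consistent, as a little arithmetic (using only the figures quoted) confirms:

    $1024^3 \;\text{voxels/frame} \times 5 \;\text{frames/s} \approx 5.4 \times 10^{9} \;\text{voxels/s},$

which rounds to the stated disk paging rate of 5 Gigavoxels per second.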


9A Performance, 2:30
Performance Metrics for Parallel Systems
Daniel Pressel (USARL)
This presentation deals with various metrics one might use for measuring the success of a project to parallelize a code (or port it to a new system).

9A Performance, 3:00
Performance Characteristics of Messaging Systems on the T3E and the Origin2000
Mark Reed, Eric Sills, Sandy Henriquez, Steve Thorpe, and Lee Bartolotti (NCSC)
Interprocessor communication time has a large impact upon the scaling of parallel applications with processor number. To assess the effectiveness of the T3E and the Origin2000 in using messages to move data to remote memory, the performance characteristics of PVM, MPI, and SHMEM are investigated and performance curves for transfer time as a function of message size are presented. In addition, latencies, bandwidths, and the message size required to achieve half of peak are reported. Performance is also compared with a cluster of O2 workstations connected via Ethernet. Differences for many types of point-to-point calls are presented and discussed. A variety of collective communication calls representing a diverse set of traffic patterns is investigated and results reported.

9B Operations, 2:30
An Examination of the Issues in Implementing the PBS Scheduler on a SGI Onyx2/Origin2000
Sandra Bittner (ARGONNELAB)
Schedulers and supporting utilities are used to automate resource allocation. Identifying those resources, tracking their usage, and identifying mechanisms to control their allocation without causing the system to thrash or crash is a challenge. I will outline the solutions we implemented and the obstacles we encountered while rolling out the PBS scheduler and supporting utilities on our Onyx2.

9B Operations, 3:00
LSF Workload Management System Status and Plans
Jack Thompson (SGI) and John Feduzzi, Platform Computing Corporation
SGI is working with Platform Computing to incorporate key NQE functionality into LSF. Platform and SGI will jointly present the status of LSF for IRIX, UNICOS, and UNICOS/mk and discuss upcoming features of LSF 4.0.

9C Visualization, 2:30
Interacting with Gigabyte Volume Datasets on the Origin2000
Steve Parker, Peter Shirley, Yarden Livnat, Charles Hansen, Peter-Pike Sloan, and Michael Parker (CHPCUTAH)
We present a parallel ray tracing program that computes isosurfaces of large-scale volume datasets interactively. The system is shown for the gigabyte Visible Woman dataset.

9C Visualization, 3:00
Distributed Memory Octree Algorithms for Interactive, Adaptive, Multiresolution Visualization of Large Data Sets
Raymond Loy and Lori Freitag (ARGONNELAB)
Interactive visualization of scientific data sets generated on MPP architectures is a difficult task due to their immense size; adaptive multiresolution techniques are necessary to reduce data sets while still providing adequate resolution in areas of interest and full resolution on demand. Because these data sets are often too large to fit into the memory of a serial visualization engine, we use a parallel octree data structure closely coupled to the application to increase the speed of interactive visualization of both structured and unstructured data sets. In this talk we discuss the parallel octree data structure, its creation using criteria such as mesh feature size or data gradients to adjust level of detail, and efficient parallel algorithms for searching and manipulating the octree.
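The octree abstract does not spell out a data layout; purely as a hypothetical sketch of what such a node might look like (every field name here is invented, not taken from the authors' code), in C:

    /* One node of an octree over a cubic region of the domain.
     * Interior nodes refine into eight children, one per octant;
     * leaves hold a data block at this node's resolution. */
    typedef struct octree_node {
        double origin[3];              /* lower corner of the region   */
        double extent;                 /* edge length of the region    */
        int    level;                  /* depth in the tree (0 = root) */
        struct octree_node *child[8];  /* all NULL for a leaf          */
        float *samples;                /* resampled data for drawing   */
    } octree_node;

    /* Refinement test in the spirit of the abstract: subdivide while
     * the local data gradient exceeds a tolerance and the maximum
     * depth has not been reached. */
    static int needs_refinement(const octree_node *n, double gradient,
                                double tol, int max_level)
    {
        return gradient > tol && n->level < max_level;
    }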
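The "message size required to achieve half of peak" in the messaging systems abstract (9A, 3:00) has a standard reading under the usual linear timing model; assuming transfer time grows affinely with message size $n$,

    $T(n) = \alpha + n/\beta, \qquad BW(n) = \frac{n}{T(n)} = \frac{\beta n}{\alpha\beta + n},$

where $\alpha$ is the per-message latency and $\beta$ the asymptotic bandwidth. Setting $BW(n_{1/2}) = \beta/2$ gives $n_{1/2} = \alpha\beta$: the half-peak message size is just the latency-bandwidth product, which is why it is reported alongside the latencies and bandwidths themselves.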


10A UNICOS, 4:00
Clustering—Operational and User Impacts
Sarita Wood (SS-SSD)
Clustering can improve the overall utilization of multiple hosts and can provide better turnaround to users by balancing workloads across hosts. Changes are needed on the operational side and in user scripts to accommodate a clustered environment.

10A UNICOS, 4:30
Cluster UDB for SV1 Clusters
Richard Lagerstrom (SGI)
Each node of an SV1 cluster will have a local UDB just as if it were a stand-alone system. The assumptions of an SV1 cluster require that much of the user information be consistent among all nodes of the cluster. The Cluster UDB feature provides commands and interfaces to make UDB administration global. This makes it possible to enforce UDB consistency within the cluster while accommodating nodes with varying resources or designated functions.

10B Mass Storage, 4:00
CRAY Data Migration Facility and Tape Management Facility Update
Neil Bannister and Laraine MacKenzie (SGI)
This paper will review status and plans for both CRAY and SGI platforms. Since the last CUG meeting, DMF four was released. This has presented the DMF team with new challenges, which will be covered in this paper. The paper will also report progress and release plans for the TMF and CRL products.

10B Mass Storage, 4:30
Storage Management—Seven Years and Seven Lessons with DMF
Robert Bell (CSIRO)
CSIRO has been using CRAY's Data Migration Facility (DMF) for storage management for over seven years. This talk will focus on seven lessons learned in those seven years.

10C Compilers and Libraries, 4:00
Compilers and Libraries Update
Jon Steidel (SGI)
This talk will cover the status and plans for compilers and libraries.

10C Compilers and Libraries, 4:30
Scientific Library Update
Mimi Celis (SGI)
This talk will describe the SGI/CRAY Scientific Library (SCSL) and plans for future releases. Differences in interfaces from the SGI library (Complib) and the CRAY library (scilib) will be covered.


10A UNICOS, 5:00
UNICOS and UNICOS/mk Release Plans and Status
Laura Mikrut (SGI)
UNICOS and UNICOS/mk are alive and well. This talk will cover enhancements, release plans, support status, and Software Problem Report (SPR) status.

10B Mass Storage, 5:00
Performance Evaluation of New STK ACS Tape Drives in an SGI/CRAY Mainframe Environment
Ulrich Detert (KFA)
In spring '99, new STK 9840 tape drives (formerly called "Eagle") become available for SGI/CRAY systems. The capacity of the new tapes is 20 GB with a peak transfer rate of 10 MB/s. The paper describes performance characteristics for the new tapes in comparison to the older 9490 (Timberline) and 4490 (Silverton) tape drives with BMX and SCSI interfaces. Related topics of installation and handling of the tapes are also discussed.

10C Compilers and Libraries, 5:00
Co-Array Fortran
Robert Numrich (SGI)
Co-array Fortran (previously known as F--) is a small set of extensions to Fortran 95 for Single Program Multiple Data (SPMD) parallel processing. It is a simple, explicit notation for data decomposition expressed in a natural Fortran-like syntax. This talk will give a short introduction to Co-array Fortran and will illustrate its power and simplicity through some examples.

Thursday

11A IRIX, 9:00
IRIX Software Engineering Support
Laura Mikrut (SGI)
With recent reorganizations within the Software Division, a new emphasis on IRIX maintenance is under way. The overriding goal is increased customer satisfaction. This session will cover how SGI will deliver on this goal along with a current status.

11B Operations, 9:00
SV1 SuperCluster Resiliency
Mike Wolf (SGI)
Cluster configurations introduce new resiliency challenges. This talk will discuss both automated and manual processes for detecting SV1 SuperCluster failures and recovering from them. It will emphasize how to keep the cluster alive rather than how to keep any individual system within it alive.

11C Visualization, 9:00
Lagrangian Visualization of Natural Convection Flows
Luis M. de la Cruz, Eduardo Ramos, and Victor Godoy (UNAM)
We present a new technique to visualize natural convection flows inside a cubic cavity. This technique consists in following a closed surface using a Lagrangian system. We used advanced techniques to build spectacular graphics and animations in 3D. In our particular research, we made a visualization of a natural convection mixing flow, and this visualization permits us to analyze our flow in a graphic way. Calculations and graphics were done on an Origin2000 and an Onyx, using Fortran programs and Alias/Wavefront software.


11A IRIX, 9:30
Experiences with the SGI/CRAY Origin2000 256 Processor System Installed at the NAS Facility of the NASA Ames Research Center
Jens Petersohn and Karl Schilke (NAS)
The NAS division of the NASA Ames Research Center installed a 256 processor single system image Origin2000 system, the first at a customer site, during the latter half of October 1998. Due to the large number of processors, the system exhibits different operational characteristics than smaller Origin2000 systems. The causes of the observed behavior are discussed along with progress towards solving or reducing the effect of problem areas. Reliability and benchmarking data are also presented.

11A IRIX, 10:00
IRIX Resource Management Plans and Status
Dan Higgins (SGI)
This talk is a review of the current plans and status of IRIX Resource Management for large systems. A list of Resource Management features has been identified, based on customer input, and work is underway to complete these features. The feature set, plans, and status information will be presented.

11B Operations, 9:30
Installation and Administration of Software onto a CRAY SV1 SuperCluster from the SWS
Scott Grabow (SGI)
With the introduction of the CRAY SV1 SuperCluster, the installation process for initial and upgrade installations has been updated to provide a level of flexibility and consistency for sites that have a CRAY SV1 SuperCluster system. This paper will outline the two processes available to perform each installation and address some administration issues when working with SV1-8 and larger SuperClusters.

11B Operations, 10:00
Site Planning for SGI Products
Jim Tennesen (SGI)
With the volume of system installations shifting from liquid cooled mainframes to air cooled racks, the considerations for data processing facility infrastructures are changing. In general, site preparations for the air cooled systems are much simpler and less expensive; however, as air cooled systems scale to hundreds of processors in dozens of racks, the cumulative air cooled load may overwhelm data processing facilities which have traditionally supported liquid cooled systems. The presentation will provide power and cooling estimates for future products to give data center operations people a view of the long-term trends. In order to get as much out of this session as possible, it would be beneficial for the participants to know how much heat rejection to air and water their present facilities can provide and to consider at what heat rejection value it is desirable to have equipment cooled by liquid versus air. I'll also cover the impacts that are associated with having a few mainframes consuming considerable current at medium voltage (400 to 480vac), 3 phase power versus dozens of chassis consuming current at lower voltage (180 to 240), single phase power. I prefer this to be a session where I will present the information in 10 to 15 minutes and spend the rest of the time discussing the issues with the participants. This will allow me to understand the customers' viewpoints and bring those back to SGI Engineering.

11C Visualization, 9:30
Animation of Hairpin Vortices in a Laminar-Boundary-Layer Flow Past a Hemisphere
Henry Tufo, Paul Fischer, Mike Papka, and Matt Szymanski (ARGONNELAB)
In an effort to understand the vortex dynamics of coherent structures in turbulent and transitional boundary layers, we consider direct numerical simulation of the interaction between a flat-plate-boundary-layer flow and an isolated hemispherical roughness element. Of principal interest is the evolution of hairpin vortices that form an interlacing pattern in the wake of the hemisphere, lift away from the wall, and are stretched by the shearing action of the boundary layer. Using animations of unsteady, three-dimensional representations of this flow produced by a vtk enhanced CAVE library developed at Argonne National Laboratory, we identify and study several key features in the evolution of this complex vortex topology not previously observed in other visualization formats.

11C Visualization, 10:00
The Use of Virtual Reality Techniques in Industrial Applications
Lori Freitag and Darin Diachin (ARGONNELAB), Daniel Heath, Iowa State University, and Tim Urness (ARGONNELAB)
At Argonne National Laboratory we have successfully used the CAVE virtual reality environment for interactive engineering and analysis in two industrial combustion applications: the design of injective pollution control systems for commercial boilers and the analysis of different fuel types for a new burner in aluminum smelting furnaces. In this talk we will describe the applications in some detail, the visualization techniques developed, the software mechanisms used to create data structure independent toolkits, and the use of interactive virtual reality environments to add value to engineering analysis and design.

12 General Session, 11:00
Strategies for Visual Supercomputing
Scott Ellman (SGI)
This talk will summarize the current state of visualization for supercomputing environments, including the emergence of the Visual Supercomputing paradigm. It will then look at how Visual Supercomputing can be used to increase the insight obtained from supercomputing calculations, and will conclude with a discussion of how various Visual Supercomputing sites are implementing these strategies.

12 General Session, 11:45
New Techniques for High Performance Visualization
Jim Foran (SGI)
Visualization is a critical aspect in understanding results from supercomputing calculations. However, in many supercomputing environments, the visualization techniques used are limited by the capabilities of desktop workstations and do not match the requirements of the end user. This presentation describes several new visualization paradigms that eliminate obstacles to understanding, increase the involvement of scientists and engineers in the analysis of their data, and increase the value of supercomputing and visualization assets to the organization.

13 General Session, 2:00
"Honey, I Flew the CRAY"
Candace Culhane (DOD)
From 1994 to 1999, the Department of Defense has supported multiple R&D contracts with Cray Research and SGI, all in support of the goal of demonstrating a supercomputer capable of operating in mobile environments. These research efforts included the application of new thermal management and packaging technologies to achieve miniaturization of a standard CRAY product, the J90 vector supercomputer. The research prototype SOLITAIRE and its host system was designed, built, and tested. Companion programs were targeted at further developing spray cooling technologies and investigating the design space for ruggedized versions of the Scalable Node (SN) and Scalable Vector (SV) product lines. This talk will summarize the research programs, the experiences encountered along the way, successes and lessons learned, and what the future outlook for ruggedized high performance computers is. A highlight of the talk will be the recounting of the successful Jan 99 flight test of an alternate ruggedized version of the CRAY J90.


14C Summit Interaction Session (SIS) Open Meeting, 4:00
Storage Directions
Art Beckman (SGI)
Data is growing faster than ever, with recent growth exceeding Moore's Law. The combination of Big Data and the Net is already creating, and will create, much more stress on the infrastructure (i.e., InfraStress) of computing. This talk examines storage technology trends expected over the next few years, tries to pinpoint the likely stress issues and reasons, and looks at some of the potential solutions for the problems.

Friday

16A Performance, 9:00
A CMOS Vector Processor with a Custom Streaming Cache
Greg Faanes (SGI)
The CRAY SV1 processor is a CMOS chipset that represents an affordable approach to a custom CPU architecture for vector applications, one that is interesting in both its micro-architecture and design methodology. The design includes several new features not found in existing vector processors. The talk will describe the micro-architecture of the scalar processor, the vector processor, and the custom streaming cache.

16A Performance, 9:30
Performance Tuning for Large Systems
Edward Hayes-Hall (SGI)
A set of strategies and methods of tuning useful for large Origin systems will be presented. This will cover existing mechanisms and the means to monitor the benefits of the changes.

16A Performance, 10:00
High Performance Simulation of Magnets
Balazs Ujfalussy (ORNL)
This talk summarizes the computational aspects of the computer code that received the 1998 Gordon Bell prize. The code shows near linear scale-up to 1024 processing elements (PEs) and also achieved record breaking performance of 1.02 Teraflops.

16B Software Tools, 9:00
Panel Discussion: Is "write" Still the Best Debug/Performance Analysis Tool Available?
Guy Robinson (ARSC)
As many programmers know, there is much activity aimed at creating debuggers and performance tools to aid in code development. However, the majority of programmers still use write statements and human logic, combined with knowledge of their applications, when finding bugs or performance bottlenecks. This panel's members will discuss two questions: why write statements are preferred, and why programmers should use a debugger/performance analysis tool.

16B Software Tools, 10:00
The SGI Multi-Stream Processor (MSP)
Terry Greyzck (SGI)
The SGI Multi-Stream Processor (MSP) is an innovative concept in supercomputer design. Exciting new developments in compiler and library technology, coupled with hardware support on the CRAY SV1, allow immediate, direct comparisons with wide-pipe long-vector architectures, with a four gigaflop peak speed. This talk covers how to compile and run on an MSP, what code changes may be necessary, and the current status of the compiler. Through examples and anecdotal evidence, a strategy for developers to harness the full power of the MSP on the SV1 is presented.

16C Visualization and Operations, 10:00
Improvements in SGI's Electronic Support
Lori Leanne Parris and Vernon Clemons (SGI)
Join SGI for an update on our Electronic Support and Services roadmap and planned improvements in this area. Learn of our exciting plans to optimize your electronic support and services experience. This session will be interactive, and we encourage you to share your feedback with us.

Local Arrangements

Page 25 Local Arrangements

Conference Information

Who May Attend?
CUG bylaws specify that employees of a CUG member (usually a facility or company using an SGI computer, identified by its CUG site code) and users of computing services provided by a CUG member may attend a CUG meeting. Visitors are also eligible to attend a CUG meeting, provided they have formal approval from a member of the CUG Board of Directors. The members of the Board of Directors are listed later in this booklet.

Conference Registration Fees
The member registration fee is $695 through April 30, 1999. Registration after April 30 and before May 20 costs an additional $55. Registration on or after May 30 is $800. For the first registrant from a non-member site, add $500 for a one-year membership in addition to the registration fee.

Cancellations
Conference registration cancellations must be received by the CUG Office (see "How to Contact Us" below) before April 30, 1999. All registration fees will be refunded if the cancellation is received by this date.

Conference Registration Location/Hours
Conference registration is located on the fourth floor of the Minneapolis Marriott City Center Hotel.

Office Hours
Sunday: 3:00-5:00 p.m.
Monday-Thursday: 8:00 a.m.-6:00 p.m.
Friday: 8:00 a.m.-1:00 p.m.

Badges and registration materials are available during these scheduled hours. All attendees must wear badges during CUG Conference activities.

How to Contact Us

Local Arrangements Committee

During the Conference:
Minneapolis Marriott City Center
Fourth Floor: Birch & Maple Lake
30 South Seventh Street
Minneapolis, MN 55402 USA
(1-612) 349-7885
[email protected]

After the Conference:
Minneapolis CUG '99 Conference Office
Liz Stadther
Network Computing Services, Inc.
1200 Washington Avenue South
Minneapolis, MN 55415 USA
(1-612) 337-3430 Fax: (1-612) 337-3400
[email protected]

Conference Hotel
Minneapolis Marriott City Center
30 South Seventh Street
Minneapolis, Minnesota 55402 USA
(1-612) 349-4000 Fax: (1-612) 332-7165

CUG Office (Registration)
2911 Knoll Road
Shepherdstown, WV 25443 USA
(1-304) 263-1756 Fax: (1-304) 263-4841
[email protected]

Page 26 Local Arrangements

On-Site Facilities

Messages
During the Conference, messages will be taken at the on-site CUG office at (1-612) 349-7885. Messages may also be e-mailed to [email protected]. The messages will be posted in the registration area.

Faxes
A fax machine will be available at the CUG Conference Office for conference-related business.

Business Center
The Minneapolis Marriott City Center has a business center located in the main lobby area. Services available there include printing, copying, and faxing. Those with laptops can take them to the business center and send files to printers (including transparencies). There is a charge for these services.

E-mail
E-mail facilities will be provided at the Conference for attendees' use.

Speaker Preparation
The Speaker Preparation room is Pine Lake, on the fourth floor near the CUG Office. An overhead projector, a Macintosh, and a PC will be available for minor last-minute editing of presentations.

Photocopying
A copy machine for making a limited number of copies will be available in the on-site CUG Conference office. If you plan to distribute copies of your presentation, please bring sufficient copies with you. You may also use the Minneapolis Marriott Business Center for making copies, for a charge.

Dining Services
The conference registration fee includes luncheons Tuesday through Thursday. In addition, refreshments will be provided during breaks Monday through Friday mornings, and the fee also includes breakfast Tuesday through Friday mornings. Breakfast and luncheons will be served in the Marriott City Center sixth-floor dining area. Refreshments during breaks will be available just outside the session rooms on the fourth floor.

Special Assistance
Throughout the conference, our staff at the CUG Office will be glad to assist you with any special requirements.

Smoking Policy
There is no smoking allowed at the Conference. Smoking is allowed only in designated areas of the hotel.

Travel Information

Transportation from the Airport
Airport Express is a public shuttle service providing transportation from the airport to hotels in downtown Minneapolis. A shuttle departs from the airport (in front of the luggage pick-up area) approximately every half hour. The cost is $10.00 one-way and $16.50 round-trip. It is best to make advance reservations by calling (1-612) 827-7777.

At the airport, taxis are available at the cab starter booth. From the Main Terminal building, cross underground through the Ground Transportation Center toward the parking ramp and go up one level. The distance to downtown Minneapolis is approximately 16 miles. All cab fares are metered. The phone number for taxi service is (1-612) 726-5877.

Car Rental Agencies
Alamo Rent-a-Car: (1-800) 327-9633, www.goalamo.com
Avis Rent-a-Car: (1-800) 331-1212, www.avis.com
Budget Rent-a-Car: (1-800) 527-0700, www.budgetrentacar.com
Dollar Rent-a-Car: (1-800) 800-4000, www.dollarcar.com
Hertz Rent-a-Car: (1-800) 654-3131, www.hertz.com
National Car Rental: (1-800) 227-7368, www.nationalcar.com
Americar: (1-612) 866-4918
Enterprise: (1-800) 325-8007
Thrifty: (1-800) 367-2277

Page 27 Local Arrangements

Holiday
Please note that Monday, May 31, is Memorial Day, a U.S. national holiday. Some shops and services may be unavailable that day.

Currency
The currency is the U.S. dollar. Currency, including travelers checks, can be exchanged at a bank or at the airport. Norwest Bank, at Sixth Street and Marquette Avenue in downtown Minneapolis, is open Monday through Friday from 9:00 a.m. until 5:00 p.m. At the airport, there is a currency exchange service located across from Northwest Airlines' ticket counter on the upper level of the main terminal. The hours are 6:00 a.m. to 8:00 p.m. Monday through Saturday, and 8:00 a.m. to 8:00 p.m. on Sundays.

Voltage
Electrical service is 120 volts, 60 Hz. Visitors from abroad will most likely need an adapter for use in electrical outlets.

Climate
The average daily high temperature in May is 21 °C (69 °F). Light coats or jackets are recommended, and there may be rain.

Social Events

Newcomer's Luncheon
All first-time CUG attendees are invited to the Newcomer's Luncheon to be held on Monday. This is a good time to meet CUG officers and ask any questions you may have about CUG or the Conference. Please be sure to sign up for this luncheon at the Registration Desk.

SGI Reception
SGI will host a reception on Monday evening, from 7:00 p.m. until 9:00 p.m., at the Minneapolis Marriott City Center. All conference participants and their guests are invited to attend.

CUG Night Out
The traditional CUG Night Out will be a trip to the Minnesota History Center, located across the Mississippi River from Minneapolis in downtown St. Paul. The building itself provides an elegant setting for dinner. You will also be able to explore the history exhibits, visit a gift shop, or just relax and enjoy the magnificent views of the Minnesota State Capitol and the St. Paul Cathedral. Weather permitting, the outdoor plaza will be available for taking in a warm spring evening.

Guests are welcome for an additional fee of $60.00 per guest. If you have not already paid with your registration fee, please pay at the registration desk and pick up guest badges there as well.

Tour of Network Computing Services, Inc.
There will be an opportunity for CUG attendees to tour Network Computing Services, Inc., your local host for the Minneapolis CUG Conference. Detailed information will be posted on conference bulletin boards and will be available at the CUG Conference Office.

Page 28 Contacts

CUG Board of Directors

President
Sally Haerer
National Center for Atmospheric Research
NCAR - SCD
1850 Table Mesa Drive
Boulder, CO 80307-3000 USA
(1-303) 497-1283 Fax: (1-303) 497-1804
[email protected]

Vice President
Sam Milosevich
Eli Lilly and Company
Computational Science Apps, DC0725
Lilly Corporate Center
Indianapolis, IN 46285-0725 USA
(1-317) 276-9118 Fax: (1-317) 276-4127
[email protected]

Secretary
Margaret Simmons
San Diego Supercomputer Center
NPACI/SDSC
University of California, San Diego
MC 0505
9500 Gilman Dr., Bldg. 109
La Jolla, CA 92093-0505 USA
(1-619) 822-0857 Fax: (1-619) 534-8380
[email protected]

Treasurer
Barbara Horner-Miller
Arctic Region Supercomputing Center
910 Yukon Drive, Suite 108, P.O. Box 756020
Fairbanks, AK 99775-6020 USA
(1-907) 474-5409 Fax: (1-907) 474-5494
[email protected]

Director—Asia/Pacific
Shigeki Miyaji
Chiba University
Dept. of Physics
1-33, Yayoi-cho, Inage-ku
263-8522 Chiba, Japan
(81-43) 290-3719 Fax: (81-43) 290-3720
[email protected]

Director—Europe
Bruno Loepfe
ETH Zürich
Informatikdienste/Systemdienste
Clausiusstrasse 59
CH-8092 Zürich, Switzerland
(41-1) 632 35 56 Fax: (41-1) 632 10 22
[email protected]

Director—Americas
Barry Sharp
Boeing Information and Support Services
TS Enterprise Servers
P.O. Box 3707, MC 7J-04
Seattle, WA 98124-3707 USA
(1-425) 865-6411 Fax: (1-425) 865-2221
[email protected]

Past President
Gary Jensen
National Center for Supercomputing Applications
139 West 640 North
American Fork, UT 84003 USA
(1-801) 492-9235 Fax: (1-801) 492-9552
[email protected]

CUG Chief Information Manager
Walter Wehinger
Rechenzentrum der Universitaet Stuttgart
Allmandring 30
D-70550 Stuttgart, Germany
(49-711) 685-2513 Fax: (49-711) 682-357
[email protected]

Advisory Council Secretary and CUG.log Editor
Lynda Lester
National Center for Atmospheric Research
P.O. Box 3000, 1850 Table Mesa Drive
Boulder, CO 80303 USA
(1-303) 497-1285 Fax: (1-303) 497-1804
[email protected]

Conference Chair
John Sell
Network Computing Services, Inc.
1200 Washington Ave. South
Minneapolis, MN 55415 USA
(1-612) 337-0200 Fax: (1-612) 337-3400
[email protected]

Page 29 Contacts

Special Interest Groups

Communications and Data Management Group

Group Chair, Hartmut Fichtel
Deutsches Klimarechenzentrum GmbH
Systems Software
Bundesstrasse 55
D-20146 Hamburg 13, Germany
(49-40) 41-173-220 Fax: (49-40) 41-173-174
[email protected]

Chair, Mass Storage Systems Focus
Jeff Terstriep
National Center for Supercomputing Applications
152 Computing Applications Building
605 E. Springfield Ave.
Champaign, IL 61820 USA
(1-217) 244-2193 Fax: (1-217) 244-1987
[email protected]

Chair, Networking Focus
Hans Mandt
Boeing Advanced Systems Laboratory
P.O. Box 3707, MC 7L-48
Seattle, WA 98124-2207 USA
(1-425) 865-3252 Fax: (1-425) 865-2965
[email protected]

Deputy Chair, Networking Focus
Dieter Raith
Rechenzentrum der Universitaet Stuttgart
Systems LAN
Allmandring 30
D-70550 Stuttgart, Germany
(49-711) 685-4516 Fax: (49-711) 682-357
[email protected]

SGI Liaison, Communications and Data Management Group
Brian Gaffey
Silicon Graphics, Inc.
655 Lone Oak Drive
Eagan, MN 55121 USA
(1-651) 683-5445 Fax: (1-651) 683-5644
[email protected]

Computer Services Group

Group Chair, Leslie Southern
Ohio Supercomputer Center
User Services
1224 Kinnear Road
Columbus, OH 43212-1163 USA
(1-614) 292-9248 Fax: (1-614) 292-7168
[email protected]

Chair, Operations Focus
Brian Kucic
National Center for Supercomputing Applications
605 E. Springfield
Champaign, IL 61820 USA
(1-217) 333-0926 Fax: (1-217) 244-9561
[email protected]

Chair, User Services Focus
Chuck Niggley
NASA Ames Research Center
M/S 258-6
Moffett Field, CA 94087 USA
(1-650) 604-4347 Fax: (1-408) 737-2041
[email protected]

SGI Liaison, Computer Services Group
Lynn Sayles
Silicon Graphics, Inc.
655 Lone Oak Drive
Eagan, MN 55121 USA
(1-651) 683-5723 Fax: (1-651) 683-5599
[email protected]

High Performance Solutions Group

Group Chair, Eric Greenwade
Idaho National Engineering and Environmental Laboratory
P.O. Box 1625, MS-3605
Idaho Falls, ID 83415 USA
(1-208) 526-1276 Fax: (1-208) 526-4017
[email protected]

Chair, Applications Focus
Larry Eversole
Jet Propulsion Laboratory
California Institute of Technology
4800 Oak Grove Drive, MS 126-147
Pasadena, CA 91109-8099 USA
(1-818) 354-2786 Fax: (1-818) 393-1187
[email protected]

Chair, Performance Focus
Michael Resch
Rechenzentrum der Universitaet Stuttgart
Allmandring 30
D-70550 Stuttgart, Germany
(49-711) 685-5834 Fax: (49-711) 678-7626
[email protected]

Page 30 Contacts

Chair, Visualization Focus
John Clyne
National Center for Atmospheric Research
Scientific Computing Division
1850 Table Mesa Drive
Boulder, CO 80303 USA
(1-303) 497-1236 Fax: (1-303) 497-1804
[email protected]

SGI Liaison, High Performance Solutions Group
Michael Brown
Silicon Graphics, Inc.
[email protected]

Operating Systems Group

Group Chair, Chuck Keagle
Boeing Information and Support Services
Engineering Systems Support
P.O. Box 3707, M/S 2T-20
Seattle, WA 98124 USA
(1-206) 544-5535 Fax: (1-206) 544-5544
[email protected]

Chair, IRIX Focus
Cheryl Wampler
Los Alamos National Laboratory
CIC-7, Computing
MS B294
Los Alamos, NM 87544 USA
(1-505) 667-0147 Fax: (1-505) 665-6333
[email protected]

Chair, Security Focus
Virginia Bedford
Arctic Region Supercomputing Center
University of Alaska
P.O. Box 756020
Fairbanks, AK 99775-6020 USA
(1-907) 474-5426 Fax: (1-907) 474-5494
[email protected]

Chair, UNICOS Focus
Ingeborg Weidl
Max Planck Institut fuer Plasmaphysik
Computer Center
Boltzmannstrasse 2
D-85748 Garching, Germany
(49-89) 3299-2191 Fax: (49-89) 3299-1301
[email protected]

SGI Liaison, Operating Systems Group
Open

Programming Environments Group

Group Chair, Jeff Kuehn
National Center for Atmospheric Research
Scientific Computing Division
1850 Table Mesa Drive
Boulder, CO 80303 USA
(1-303) 497-1311 Fax: (1-303) 497-1804
[email protected]

Chair, Compilers and Libraries Focus
Hans-Hermann Frese
Konrad Zuse-Zentrum für Informationstechnik Berlin
Supercomputing Department
Takustrasse 7
D-14195 Berlin-Dahlem, Germany
(49-30) 84185-145 Fax: (49-30) 84185-311
[email protected]

Chair, Software Tools Focus
Guy Robinson
Arctic Region Supercomputing Center
MPP Specialist
P.O. Box 756020
Fairbanks, AK 99775-6020 USA
(1-907) 474-6386 Fax: (1-907) 474-5494
[email protected]

SGI Liaison, Programming Environments Group
Margaret Cahir
Silicon Graphics, Inc.
655 Lone Oak Drive
Eagan, MN 55121 USA
(1-612) 683-5072
[email protected]

Tutorials

Chair, Helene E. Kulsrud
Institute for Defense Analyses
CCR-P
290 Thanet Road
Princeton, NJ 08540 USA
(1-609) 279-6243 Fax: (1-609) 924-3061
[email protected]

Page 31 Contacts

Regional CUG Representatives

Latin CUG Representative
Alain Massiot
Centre de Ressources Informatiques de Haute-Normandie
32 rue Raymond Aron
F-76130 Mont Saint Aignan, France
(33-2) 35596159 Fax: (33-2) 35596140
[email protected]

German CUG Representative
Hartmut Fichtel
Deutsches Klimarechenzentrum GmbH
Systems Software
Bundesstrasse 55
D-20146 Hamburg 13, Germany
(49-40) 41-173-220 Fax: (49-40) 41-173-174
[email protected]

Program Layout Committee

Chair: Sam Milosevich
Eli Lilly and Company
Computational Science Apps, DC0725
Lilly Corporate Center
Indianapolis, IN 46285-0725 USA
(1-317) 276-9118 Fax: (1-317) 276-4127
[email protected]

Mary Kay Bunde
Silicon Graphics, Inc.
Software Development
655 Lone Oak Drive
Eagan, MN 55121 USA
(1-651) 683-5655 Fax: (1-651) 683-5098
[email protected]

Leslie Southern
Ohio Supercomputer Center
User Services
1224 Kinnear Road
Columbus, OH 43212-1163 USA
(1-614) 292-9248 Fax: (1-614) 292-7168
[email protected]

Hartmut Fichtel
Deutsches Klimarechenzentrum GmbH
Systems Software
Bundesstrasse 55
D-20146 Hamburg 13, Germany
(49-40) 41-173-220 Fax: (49-40) 41-173-174
[email protected]

Eric Greenwade
Idaho National Engineering and Environmental Laboratory
P.O. Box 1625, MS-3605
Idaho Falls, ID 83415 USA
(1-208) 526-1276 Fax: (1-208) 526-4017
[email protected]

Chuck Keagle
Boeing Information and Support Services
Engineering Systems Support
P.O. Box 3707, M/S 2T-20
Seattle, WA 98124 USA
(1-206) 544-5535 Fax: (1-206) 544-5544
[email protected]

Jeff Kuehn
National Center for Atmospheric Research
Scientific Computing Division
1850 Table Mesa Drive
Boulder, CO 80303 USA
(1-303) 497-1311 Fax: (1-303) 497-1804
[email protected]

Helene E. Kulsrud
Institute for Defense Analyses
CCR-P
290 Thanet Road
Princeton, NJ 08540 USA
(1-609) 279-6243 Fax: (1-609) 924-3061
[email protected]

Page 32 Contacts

SGI Representatives to CUG

Vito Bongiorno
Silicon Graphics, Inc.
655 Lone Oak Drive
Eagan, MN 55121 USA
(1-651) 683-3404 Fax: (1-651) 683-5599
[email protected]

Charlie Clark
Silicon Graphics, Inc.
Customer Service
890 Industrial Boulevard
Chippewa Falls, WI 54729 USA
(1-715) 726-5334 Fax: (1-715) 726-4343
[email protected]

Lynn Sayles
Silicon Graphics, Inc.
655 Lone Oak Drive
Eagan, MN 55121 USA
(1-651) 683-5723 Fax: (1-651) 683-5599
[email protected]

Mary Kay Bunde
Silicon Graphics, Inc.
Software Development
655 Lone Oak Drive
Eagan, MN 55121 USA
(1-651) 683-5655 Fax: (1-651) 683-5098
[email protected]

Eric Pitcher
Silicon Graphics, Inc.
655 Lone Oak Drive
Eagan, MN 55121 USA
(1-651) 683-3414 Fax: (1-651) 683-5599
[email protected]

Linda Yetzer
Silicon Graphics, Inc.
655 Lone Oak Drive
Eagan, MN 55121 USA
(1-651) 683-3556 Fax: (1-651) 683-5599
[email protected]

CUG Office
Bob Winget
2911 Knoll Road
Shepherdstown, WV 25443 USA
(1-304) 263-1756 Fax: (1-304) 263-4841
[email protected]

Page 33 Spring 2000 CUG

Call for Papers

You'll want to mark your calendars for the CUG "Event of the Millennium": our next CUG conference will be held from May 22 through 26, 2000, in Noordwijk, The Netherlands, near Amsterdam. Our hosts will be the Technical University of Delft and Stichting Academisch Rekencentrum Amsterdam. You and your colleagues from other CUG sites are the key to the conference, so we want to give you time to start thinking about presenting a paper. The information and experience you share in your presentation will help your colleagues solve problems and develop solutions relevant to their work environments. Papers will be organized under the following categories:

Group/Focus Area
Communications and Data Management: Mass Storage; Networking
Computer Services: Operations; User Services
High Performance Solutions: Applications; Performance; Visualization
Operating Systems: IRIX; Security; UNICOS
Programming Environments: Compilers and Libraries; Software Tools

Find a Group or Focus Chair during or after this conference (see the Contacts list on p. 29-33 of this Program) and discuss your ideas for your presentation. When you are ready to submit your abstract, use the online form at the CUG home page, www.cug.org; it will be available by Sept. 15, 1999. Also check the CUG home page for guidelines on preparing and submitting your paper.
