2003 Terrain-Following Ocean Models Users Workshop NOAA/PMEL

Total Pages: 16

File Type: pdf, Size: 1020 KB

2003 Terrain-Following Ocean Models Users Workshop
NOAA/PMEL, Seattle, Washington, 4-6 August 2003
Organized by: T. Ezer, H. Arango and A. Hermann
Supported by: ONR Ocean Modeling and Prediction

PARTICIPANTS

Ådlandsvik, Bjørn; Aguirre, Enrique; Albertson, Skip; Arango, Hernan; Bang, Inkweon; Beegle-Krause, CJ; Budgell, Paul; Capet, Xavier; Chao, Yi; Chu, Peter; Cousins, Steve; Curchitser, Enrique; Darr, David; Di Lorenzo, Emanuele; Dobbins, Elizabeth; Ezer, Tal; Fennel, Katja; Fragoso, Mauricio; Fiechter, Jerome; Fleming, Naomi; Foreman, Mike; Gan, Jianping; Gross, Tom; Hadfield, Mark; Haidvogel, Dale; Hakkinen, Sirpa; Han, Guoqi; Harper, Scott; He, Ruoying; Hedstrom, Kate; Hemer, Mark; Hermann, Albert; Howard, Susan; Hu, Jianyu; Iskandarani, Mohamed; Kagimoto, Takashi; Kilanowski, Elizabeth; Kim, Chang S.; Ko, Dong S.; Koesters, Frank; Kruger, Dov; Kurapov, Alexander; Lewis, Craig; Li, Zhijin; Lou, John; Ly, Le; MacFadyen, Amy; M. daSilva, Ricardo; Marshak, Jelena; Masson, Diane; McWilliams, James; Mellor, George; Miller, Art; Morrison, John; Soldate, Mills; Moore, Andrew; Moore, Christopher; Padman, Laurence; Pali, Madra; Payton, Debbie; Pereira, Jose Edson; Pietrzak, Julie; Powell, Tom (Zack); Rai, Shailendra; Ro, Young Jae; Robertson, David; Robertson, Robin; Rubash, Lambert; Shchepetkin, Alexander; Sherwood, Chris; Shulman, Igor; Song, Tony; Sozer, Adil; Tang, Charles; Thompson, LuAnne; Vincent, Mark; Wang, Xiaochun; Warbus, Helen; Wei, Eugene; Wilkin, John; Williams, Bill; Worthen, Denise; Wu, Xinglong; Xie, Lian; Xue, Huijie; Zhang, Hong; Zhang, Xuebin; Zanzig, Rebecca; Zheng, Ling

PROGRAM

MONDAY AM, AUGUST 4

08:30-09:15 Registration
09:15-09:30 Welcome, logistics (Hermann, Ezer, Arango)

MIXING PARAMETERIZATIONS (Chairperson: Dale Haidvogel)
09:30-10:10 J. McWilliams, UCLA (INVITED): An asymptotic theory for the interaction of waves and currents in shallow coastal waters
10:10-10:30 A. Shchepetkin, UCLA: KPP revisited
BREAK
Chairperson: Chris Sherwood
10:50-11:30 G. Mellor, Princeton Univ. (INVITED): Recent and proposed modifications to turbulent mixing in ocean models
11:30-11:50 L. Ly, A. Benilov and M. Batteen: Breaking wave parameterization and its use in ocean circulation-wave coupling
11:50-12:10 C. Tang, Bedford Institute of Oceanography: Surface currents and the Craig-Banner boundary condition
12:10-12:30 R. Robertson, LDEO, Columbia Univ.: Vertical mixing parameterizations and their effects on the skill of baroclinic tidal modeling
LUNCH

MONDAY PM, AUGUST 4

SEDIMENT TRANSPORT AND SEA-ICE MODELING (Chairperson: John Wilkin)
01:30-02:10 C. Sherwood, USGS (INVITED): Sediment transport modeling
02:10-02:30 P. Budgell, Inst. Mar. Res., Norway: A coupled ice-ocean model based on ROMS/TOMS 2.0
02:30-02:50 M. Hemer and J. Hunter, Univ. of Tasmania: The influence of tides on the sub-Amery Ice Shelf thermohaline circulation
BREAK

DATA ASSIMILATION (Chairperson: Hernan Arango)
03:10-03:50 A. Moore, UC (INVITED): The Tangent Linear and Adjoint models
03:50-04:10 Z. Li, Y. Chao and J. McWilliams, JPL/UCLA: A three-dimensional variational data assimilation system
04:10-04:30 A. Kurapov, J. Allen, G. Egbert, R. Miller: Assimilation of moored ADP currents into a model of coastal wind-driven circulation off Oregon
04:30-05:30 POSTER SESSION (list of poster presentations below)
06:00 RECEPTION AND DINNER BY THE LAKE

POSTERS

1. B. Ådlandsvik, Institute of Marine Research, Norway: Using ROMS in the North Sea - a validation study
2. E. Aguirre and E. Campos, Oceanog. Inst. São Paulo Univ.: Sea circulation off Peruvian coast
3. S. Albertson, WA State, Dept. of Ecology: A comparison of along-channel flow and residuals in a model of South Puget Sound with ADCP data along select transects
4. P. Chu and L. Ivanov, Naval Postgraduate School: Prediction-skill variability in regional ocean models
5. P. Chu and A. Ong, Naval Postgraduate School: Uncertainty in diagnostic initialization
6. J. Gan and J. S. Allan, Oregon State Univ.: Open boundary condition for a limited-area coastal model off Oregon
7. G. Han, Fisheries and Oceans, Canada: Tidal currents and wind-driven circulation off Newfoundland
8. M. Hadfield, NIWA, New Zealand: New Zealand region ocean modeling with terrain-following and z-level models
9. J. Hu and H. Kawamura, Xiamen & Tohoku Univ., Japan: Numerical simulation of tides in the northern South China Sea and its adjacent seas
10. T. Kagimoto, Frontier Res. Global Change: Sensitivity of the Equatorial Undercurrent to the vertical mixing parameterization
11. C. Kim and H. Lim, Korea Ocean R&D Institute: Simulation of circulation and density structure of the Yellow Sea using ROMS
12. D.-S. Ko, NRL: NRL DBDB2 - a global topography for ocean modeling
13. F. Koesters and R. Kaese, University of Kiel, Germany: Modeling of Denmark Strait Overflow
14. L. Ly, R. Stumpf, T. Gross, F. Aikman, C. Lewis and T. Powel, NPS, NOS, UC Berkeley: A physical-biological model coupling study of the West Florida Shelf
15. J. Marshak and S. Hakkinen, NASA GSFC: NASA POM performance optimization - costs and benefits of using MPI, SMS and OpenMP
16. D. Masson, Institute of Ocean Sciences, Canada: Simulation of seasonal variability within the Straits of Georgia and Juan de Fuca using POM
17. J. Pereira, Applied Science Associates, Brazil: Modeling of tidal dynamics in a resonant system - application and integration with GIS tools
18. S. Rai, Allahabad University, India: Simulation of southern Indian Ocean using sigma coordinate ocean model
19. Y. Ro, Chungnam National University, Korea: Data assimilation experiment in the East (Japan) Sea based on POM-ES
20. R. Robertson, LDEO, Columbia Univ.: Modeling internal tides in the Ross Sea
21. L. Rubash and E. Kilanowski, Raincoast GeoResearch: Experimenting with Lagrangian drifters in a Washington lake using POM
22. A. Shchepetkin, UCLA: Poor man's computation - how much power one can get from a Linux PC
23. T. Song, NASA/JPL: Developing generalized vertical coordinate ocean model for multi-scale compressible or incompressible flow applications
24. Sozer, Metu Institute of Marine Sciences, Turkey: Three-dimensional numerical modeling of the Bosphorus Strait
25. L. Xie, North Carolina State University: Modeling inundation and draining in estuaries and coastal oceans
26. H. Xue, University of Maine: Towards a forecast system for the Gulf of Maine

TUESDAY AM, AUGUST 5

NUMERICS, COMPUTATIONS AND MODEL DEVELOPMENTS (Chairperson: George Mellor)
08:30-09:10 J. Pietrzak, Delft Univ. (INVITED): Advection-confusion revisited / A new unstructured grid model
09:10-09:30 T. Ezer and G. Mellor, Princeton University: On mixing and advection in the BBL and how they are affected by the model grid - sensitivity studies with a generalized coordinate ocean model
09:30-09:50 P. Chu and C. Fan, NPS: Hydrostatic correction for terrain-following ocean models
09:50-10:10 K. Hedstrom, Arctic Supercomp. Ctr.: Test cases and the Finite Volume Coastal Ocean Model (FVCOM)
BREAK
Chairperson: Jim McWilliams
10:30-11:00 H. Arango, Rutgers Univ.: A community Terrain-following Ocean Modeling System (TOMS)
11:00-11:40 D. Kruger, Davidson Lab., SIT (INVITED)

TUESDAY PM, AUGUST 5

NUMERICS (Cont.)
01:30-01:50 J. Lou, Y. Chao and Z. Li, JPL, Caltech: Development of a multi-level parallel adaptive ROMS

BIOLOGICAL MODELING (Chairperson: Julie Pietrzak)
01:50-02:30 T. Powel, UC Berkeley (INVITED): Review of biological modeling
02:30-02:50 A. Hermann, PMEL: Nesting and coupling of physical-biological models
BREAK

FORECASTING AND PROCESS STUDIES (Chairperson: Andrew Moore)
03:10-03:30 I. Shulman, J. Paduan and J. Kindle, NRL/NPS: Modeling study of the coastal upwelling system of the Monterey Bay area during 1999 and 2000
03:30-03:50 E.
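For context on the workshop's theme: terrain-following (sigma-coordinate) models such as POM and ROMS, which appear throughout the program, replace fixed depth levels with a vertical coordinate that follows the bathymetry. A standard form of that coordinate (quoted here as general background, not from the workshop document itself) is

    sigma = (z - eta) / (H + eta)

with z the vertical position, eta the free-surface elevation and H the local bottom depth, so that sigma = 0 at the sea surface and sigma = -1 at the sea floor, and every water column carries the same number of model levels regardless of depth.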
Recommended publications
  • Through the Years… When Did It All Begin?
Through the years… When did it all begin? 1974? 1978? 1963?

CDC 6600 – 1974: NERSC started service with the first supercomputer…
● A well-used system - Serial Number 1
● On its last legs…
● Designed and built in Chippewa Falls
● Launch Date: 1963
● Load / Store Architecture
● First RISC Computer!
● First CRT Monitor
● Freon Cooled
● State-of-the-Art Remote Access at NERSC
● Via 4 acoustic modems, manually answered, capable of 10 characters/sec

50th Anniversary of the IBM / Cray Rivalry…
“Last week, CDC had a press conference during which they officially announced their 6600 system. I understand that in the laboratory developing this system there are only 32 people, including the janitor… Contrasting this modest effort with our vast development activities, I fail to understand why we have lost our industry leadership position by letting someone else offer the world’s most powerful computer…” T.J. Watson, August 28, 1963

CDC 7600 – 1975
● Delivered in September
● 36 Mflop Peak
● ~10 Mflop Sustained
● 10X sustained performance vs. the CDC 6600
● Fast memory + slower core memory
● Freon cooled (again)
Major Innovations
§ 65KW Memory
§ 36.4 MHz clock
§ Pipelined functional units

Cray-1 – 1978: NERSC transitions users to vector architectures
● Serial 6
● A fairly easy transition for application writers
● LTSS was converted to run on the Cray-1 and became known as CTSS (Cray Time Sharing System)
● Freon Cooled (again)
● 2nd Cray 1 added in 1981
Major Innovations
§ Vector Processing
§ Dependency
  • The Gemini Network
    The Gemini Network Rev 1.1 Cray Inc. © 2010 Cray Inc. All Rights Reserved. Unpublished Proprietary Information. This unpublished work is protected by trade secret, copyright and other laws. Except as permitted by contract or express written permission of Cray Inc., no part of this work or its content may be used, reproduced or disclosed in any form. Technical Data acquired by or for the U.S. Government, if any, is provided with Limited Rights. Use, duplication or disclosure by the U.S. Government is subject to the restrictions described in FAR 48 CFR 52.227-14 or DFARS 48 CFR 252.227-7013, as applicable. Autotasking, Cray, Cray Channels, Cray Y-MP, UNICOS and UNICOS/mk are federally registered trademarks and Active Manager, CCI, CCMT, CF77, CF90, CFT, CFT2, CFT77, ConCurrent Maintenance Tools, COS, Cray Ada, Cray Animation Theater, Cray APP, Cray Apprentice2, Cray C90, Cray C90D, Cray C++ Compiling System, Cray CF90, Cray EL, Cray Fortran Compiler, Cray J90, Cray J90se, Cray J916, Cray J932, Cray MTA, Cray MTA-2, Cray MTX, Cray NQS, Cray Research, Cray SeaStar, Cray SeaStar2, Cray SeaStar2+, Cray SHMEM, Cray S-MP, Cray SSD-T90, Cray SuperCluster, Cray SV1, Cray SV1ex, Cray SX-5, Cray SX-6, Cray T90, Cray T916, Cray T932, Cray T3D, Cray T3D MC, Cray T3D MCA, Cray T3D SC, Cray T3E, Cray Threadstorm, Cray UNICOS, Cray X1, Cray X1E, Cray X2, Cray XD1, Cray X-MP, Cray XMS, Cray XMT, Cray XR1, Cray XT, Cray XT3, Cray XT4, Cray XT5, Cray XT5h, Cray Y-MP EL, Cray-1, Cray-2, Cray-3, CrayDoc, CrayLink, Cray-MP, CrayPacs, CrayPat, CrayPort, Cray/REELlibrarian, CraySoft, CrayTutor, CRInform, CRI/TurboKiva, CSIM, CVT, Delivering the power…, Dgauss, Docview, EMDS, GigaRing, HEXAR, HSX, IOS, ISP/Superlink, LibSci, MPP Apprentice, ND Series Network Disk Array, Network Queuing Environment, Network Queuing Tools, OLNET, RapidArray, RQS, SEGLDR, SMARTE, SSD, SUPERLINK, System Maintenance and Remote Testing Environment, Trusted UNICOS, TurboKiva, UNICOS MAX, UNICOS/lc, and UNICOS/mp are trademarks of Cray Inc.
  • Shared-Memory Vector Systems Compared
Shared-Memory Vector Systems Compared
Robert Bell, CSIRO, and Guy Robinson, Arctic Region Supercomputing Center

ABSTRACT: The NEC SX-5 and the Cray SV1 are the only shared-memory vector computers currently being marketed. This compares with at least five models a few years ago (J90, T90, SX-4, Fujitsu and Hitachi), with IBM, Digital, Convex, CDC and others having fallen by the wayside in the early 1990s. In this presentation, some comparisons will be made between the architecture of the survivors, and some performance comparisons will be given on benchmark and applications codes, and in areas not usually presented in comparisons, e.g. file systems, network performance, gzip speeds, compilation speeds, scalability and tools and libraries.

KEYWORDS: SX-5, SV1, shared-memory, vector systems

1. Introduction

HPCCC
The Bureau of Meteorology and CSIRO in Australia established the High Performance Computing and Communications Centre (HPCCC) in 1997, to provide larger computational facilities than either party could acquire separately. The main initial system was an NEC SX-4, which has now been replaced by two NEC SX-5s. The current system is a dual-node SX-5/24M with 224 Gbyte of memory, and this will become an SX-5/32M with 224 Gbyte in July 2001. … averaged 95% on the larger node, and 92% on the smaller node. The SXes are supported by data storage systems – SAM-FS for the Bureau, and DMF on a Cray J90se for CSIRO.

ARSC
The mission of the Arctic Region Supercomputing Centre is to support high performance computational research in science and engineering with an emphasis on high latitudes and the Arctic. ARSC provides high performance computational, visualisation and data storage
  • Appendix G Vector Processors in More Depth
G.1 Introduction G-2
G.2 Vector Performance in More Depth G-2
G.3 Vector Memory Systems in More Depth G-9
G.4 Enhancing Vector Performance G-11
G.5 Effectiveness of Compiler Vectorization G-14
G.6 Putting It All Together: Performance of Vector Processors G-15
G.7 A Modern Vector Supercomputer: The Cray X1 G-21
G.8 Concluding Remarks G-25
G.9 Historical Perspective and References G-26
Exercises G-29

Appendix G: Vector Processors in More Depth
Revised by Krste Asanovic, Massachusetts Institute of Technology

"I'm certainly not inventing vector processors. There are three kinds that I know of existing today. They are represented by the Illiac-IV, the (CDC) Star processor, and the TI (ASC) processor. Those three were all pioneering processors. … One of the problems of being a pioneer is you always make mistakes and I never, never want to be a pioneer. It's always best to come second when you can look at the mistakes the pioneers made."
Seymour Cray, public lecture at Lawrence Livermore Laboratories on the introduction of the Cray-1 (1976)

G.1 Introduction
Chapter 4 introduces vector architectures and places Multimedia SIMD extensions and GPUs in proper context to vector architectures. In this appendix, we go into more detail on vector architectures, including more accurate performance models and descriptions of previous vector architectures. Figure G.1 shows the characteristics of some typical vector processors, including the size and count of the registers, the number and types of functional units, the number of load-store units, and the number of lanes.
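One theme of this appendix (section G.5 in the contents above) is which loops a compiler can vectorize. A small illustration in C, written for this summary rather than taken from the appendix: the first loop has independent iterations and maps directly onto vector instructions, while the second carries a value from one iteration to the next and cannot be executed as a single vector operation as written.

    /* Vectorizable: every iteration reads a[i], b[i] and writes only c[i],
       so the iterations are independent and can run as one vector operation. */
    void vmul(int n, const double *a, const double *b, double *c) {
        for (int i = 0; i < n; i++)
            c[i] = a[i] * b[i];
    }

    /* Not vectorizable as written: iteration i needs x[i-1], the value the
       previous iteration just produced (a loop-carried dependence). */
    void recurrence(int n, double alpha, double *x) {
        for (int i = 1; i < n; i++)
            x[i] = alpha * x[i - 1] + x[i];
    }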
  • Recent Supercomputing Development in Japan
Supercomputing in Japan
Yoshio Oyanagi, Dean, Faculty of Information Science, Kogakuin University

Generations
• Primordial Ages (1970's) – Cray-1, 75APU, IAP
• 1st Generation (1H of 1980's) – Cyber205, XMP, S810, VP200, SX-2
• 2nd Generation (2H of 1980's) – YMP, ETA-10, S820, VP2600, SX-3, nCUBE, CM-1
• 3rd Generation (1H of 1990's) – C90, T3D, Cray-3, S3800, VPP500, SX-4, SP-1/2, CM-5, KSR2 (HPC ventures went out)
• 4th Generation (2H of 1990's) – T90, T3E, SV1, SP-3, Starfire, VPP300/700/5000, SX-5, SR2201/8000, ASCI (Red, Blue)
• 5th Generation (1H of 2000's) – ASCI, TeraGrid, BlueGene/L, X1, Origin, Power4/5, ES, SX-6/7/8, PP HPC2500, SR11000, …

Primordial Ages (1970's)
1974 DAP, BSP and HEP started
1975 ILLIAC IV becomes operational
1976 Cray-1 delivered to LANL, 80 MHz, 160 MF
1976 FPS AP-120B delivered
1977 FACOM 230-75 APU, 22 MF
1978 HITAC M-180 IAP
1978 PAX project started (Hoshino and Kawai)
1979 HEP operational as a single processor
1979 HITAC M-200H IAP, 48 MF
1982 NEC ACOS-1000 IAP, 28 MF
1982 HITAC M280H IAP, 67 MF

Characteristics of Japanese SC's
1. Manufactured by main-frame vendors with semiconductor facilities (not ventures)
2. Vector processors are attached to mainframes
3. HITAC IAP: a) memory-to-memory; b) summation, inner product and 1st order recurrence can be vectorized; c) vectorization of loops with IF's (M280)
4. No high performance parallel machines

1st Generation (1H of 1980's)
1981 FPS-164 (64 bits)
1981 CDC Cyber 205, 400 MF
1982 Cray XMP-2 (Steve Chen), 630 MF
1982 Cosmic Cube in Caltech, Alliant FX/8 delivered, HEP installed
1983 HITAC S-810/20, 630 MF
1983 FACOM VP-200, 570 MF
1983 Encore, Sequent and TMC founded, ETA spun off from CDC

1st Generation (1H of 1980's) (continued)
1984 Multiflow founded
1984 Cray XMP-4, 1260 MF
1984 PAX-64J completed (Tsukuba)
1985 NEC SX-2, 1300 MF
1985 FPS-264
1985 Convex C1
1985 Cray-2, 1952 MF
1985 Intel iPSC/1, T414, NCUBE/1, Stellar, Ardent…
1985 FACOM VP-400, 1140 MF
1986 CM-1 shipped, FPS T-series (max 1 TF!!)

Characteristics of Japanese SC in the 1st G.
  • System Programmer Reference (Cray SV1™ Series)
    ® System Programmer Reference (Cray SV1™ Series) 108-0245-003 Cray Proprietary (c) Cray Inc. All Rights Reserved. Unpublished Proprietary Information. This unpublished work is protected by trade secret, copyright, and other laws. Except as permitted by contract or express written permission of Cray Inc., no part of this work or its content may be used, reproduced, or disclosed in any form. U.S. GOVERNMENT RESTRICTED RIGHTS NOTICE: The Computer Software is delivered as "Commercial Computer Software" as defined in DFARS 48 CFR 252.227-7014. All Computer Software and Computer Software Documentation acquired by or for the U.S. Government is provided with Restricted Rights. Use, duplication or disclosure by the U.S. Government is subject to the restrictions described in FAR 48 CFR 52.227-14 or DFARS 48 CFR 252.227-7014, as applicable. Technical Data acquired by or for the U.S. Government, if any, is provided with Limited Rights. Use, duplication or disclosure by the U.S. Government is subject to the restrictions described in FAR 48 CFR 52.227-14 or DFARS 48 CFR 252.227-7013, as applicable. Autotasking, CF77, Cray, Cray Ada, Cray Channels, Cray Chips, CraySoft, Cray Y-MP, Cray-1, CRInform, CRI/TurboKiva, HSX, LibSci, MPP Apprentice, SSD, SuperCluster, UNICOS, UNICOS/mk, and X-MP EA are federally registered trademarks and Because no workstation is an island, CCI, CCMT, CF90, CFT, CFT2, CFT77, ConCurrent Maintenance Tools, COS, Cray Animation Theater, Cray APP, Cray C90, Cray C90D, Cray CF90, Cray C++ Compiling System, CrayDoc, Cray EL, CrayLink,
  • Appendix G Vector Processors
G.1 Why Vector Processors? G-2
G.2 Basic Vector Architecture G-4
G.3 Two Real-World Issues: Vector Length and Stride G-16
G.4 Enhancing Vector Performance G-23
G.5 Effectiveness of Compiler Vectorization G-32
G.6 Putting It All Together: Performance of Vector Processors G-34
G.7 Fallacies and Pitfalls G-40
G.8 Concluding Remarks G-42
G.9 Historical Perspective and References G-43
Exercises G-49

Appendix G: Vector Processors
Revised by Krste Asanovic, Department of Electrical Engineering and Computer Science, MIT

"I'm certainly not inventing vector processors. There are three kinds that I know of existing today. They are represented by the Illiac-IV, the (CDC) Star processor, and the TI (ASC) processor. Those three were all pioneering processors. One of the problems of being a pioneer is you always make mistakes and I never, never want to be a pioneer. It's always best to come second when you can look at the mistakes the pioneers made."
Seymour Cray, public lecture at Lawrence Livermore Laboratories on the introduction of the Cray-1 (1976)

© 2003 Elsevier Science (USA). All rights reserved.

G.1 Why Vector Processors?
In Chapters 3 and 4 we saw how we could significantly increase the performance of a processor by issuing multiple instructions per clock cycle and by more deeply pipelining the execution units to allow greater exploitation of instruction-level parallelism. (This appendix assumes that you have read Chapters 3 and 4 completely; in addition, the discussion on vector memory systems assumes that you have read Chapter 5.) Unfortunately, we also saw that there are serious difficulties in exploiting ever larger degrees of ILP.
  • Forerunner® HE ATM Adapters for IRIX Systems User's Manual FORE Systems, Inc
ForeRunner® HE ATM Adapters for IRIX Systems User's Manual
MANU0404-01 fs, Revision A, 05-13-1999, Software Version 5.1
FORE Systems, Inc., 1000 FORE Drive, Warrendale, PA 15086-7502. Phone: 724-742-4444, FAX: 724-742-7742, http://www.fore.com

Notices
St. Peter's Basilica image courtesy of ENEL SpA and InfoByte SpA. Disk Thrower image courtesy of Xavier Berenguer, Animatica. Copyright © 1998 Silicon Graphics, Inc. All Rights Reserved. This document or parts thereof may not be reproduced in any form unless permitted by contract or by written permission of Silicon Graphics, Inc.

LIMITED AND RESTRICTED RIGHTS LEGEND
Use, duplication, or disclosure by the Government is subject to restrictions as set forth in the Rights in Data clause at FAR 52.227-14 and/or in similar or successor clauses in the FAR, or in the DOD, DOE or NASA FAR Supplements. Unpublished rights reserved under the Copyright Laws of the United States. Contractor/manufacturer is Silicon Graphics, Inc., 2011 N. Shoreline Blvd., Mountain View, CA 94043-1389.

Autotasking, CF77, CRAY, Cray Ada, CraySoft, CRAY Y-MP, CRAY-1, CRInform, CRI/TurboKiva, HSX, LibSci, MPP Apprentice, SSD, SUPERCLUSTER, UNICOS, X-MP EA, and UNICOS/mk are federally registered trademarks and Because no workstation is an island, CCI, CCMT, CF90, CFT, CFT2, CFT77, ConCurrent Maintenance Tools, COS, Cray Animation Theater, CRAY APP, CRAY C90, CRAY C90D, Cray C++ Compiling System, CrayDoc, CRAY EL, CRAY J90, CRAY J90se, CrayLink, Cray NQS, Cray/REELlibrarian, CRAY S-MP, CRAY SSD-T90, CRAY SV1, CRAY T90, CRAY T3D, CRAY T3E, CrayTutor, CRAY X-MP, CRAY XMS, CRAY-2, CSIM, CVT, Delivering the power .
  • Cray Supercomputers Past, Present, and Future
Cray Supercomputers: Past, Present, and Future
Hewdy Pena Mercedes, Ryan Toukatly
Advanced Comp. Arch. 0306-722, November 2011

Cray Companies
● Cray Research, Inc. (CRI), 1972. Seymour Cray.
● Cray Computer Corporation (CCC), 1989. Spin-off. Bankrupt in 1995.
● Cray Research, Inc. bought by Silicon Graphics, Inc. (SGI) in 1996.
● Cray Inc. formed when Tera Computer Company (pioneer in multi-threading technology) bought Cray Research, Inc. in 2000 from SGI.

Seymour Cray
● Joined Engineering Research Associates (ERA) in 1950 and helped create the ERA 1103 (1953), also known as UNIVAC 1103.
● Joined the Control Data Corporation (CDC) in 1960 and collaborated in the design of the CDC 6600 and 7600.
● Formed Cray Research Inc. in 1972 when CDC ran into financial difficulties.
● First product was the Cray-1 supercomputer: faster than all other computers at the time; the first system was sold within a month for US$8.8 million; not the first system to use a vector processor, but the first to operate on data in registers instead of memory.

Vector Processor
● A CPU that implements an instruction set that operates on one-dimensional arrays of data called vectors.
● Appeared in the 1970s and formed the basis of most supercomputers through the 80s and 90s.
● In the 60s the Solomon project of Westinghouse wanted to increase math performance by using a large number of simple math co-processors under the control of a single master CPU.
● The University of Illinois used the principle on the ILLIAC IV.
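As a concrete illustration of the vector idea described above (a sketch written for this summary, not taken from the cited slides): on a register-based vector machine like the Cray-1, the loop below becomes a handful of vector instructions that each operate on a whole strip of elements, instead of one scalar multiply-add per element.

    /* DAXPY, the classic vector kernel: y = a*x + y.
       On a vector processor the loop body maps to vector load,
       vector multiply-add and vector store instructions, each
       covering a strip of elements (64 per vector register on the Cray-1). */
    void daxpy(int n, double a, const double *x, double *y) {
        for (int i = 0; i < n; i++) {
            y[i] = a * x[i] + y[i];
        }
    }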
  • Cray System Software Features for Cray X1 System ABSTRACT
Cray System Software Features for Cray X1 System
Don Mason, Cray, Inc., 1340 Mendota Heights Road, Mendota Heights, MN 55120, [email protected]

ABSTRACT: This paper presents an overview of the basic functionalities of the Cray X1 system software. This includes operating system features, programming development tools, and support for programming models.

Introduction
This paper is in four sections: the first section outlines the Cray software roadmap for the Cray X1 system and its follow-on products; the second section presents a building block diagram of the Cray X1 system software components, organized by programming models, programming environments and operating systems, then describes and details these categories; the third section lists functionality planned for upcoming system software releases; the final section lists currently available documentation, highlighting manuals of interest to current Cray T90, Cray SV1, or Cray T3E customers.

Cray Software Roadmap
Cray's roadmap for platforms and operating systems is shown in Figure 1 (Cray Software Roadmap). The Cray X1 system is the first scalable vector processor system that combines the characteristics of a high bandwidth vector machine like the Cray SV1 system with the scalability of a true MPP system like the Cray T3E system. The Cray X1e system follows the Cray X1 system, and the Cray X1e system is followed by the code-named Black Widow series systems. This new family of systems shares a common instruction set architecture. Paralleling the evolution of the Cray hardware, the Cray Operating System software for the Cray X1 system builds upon technology developed in UNICOS and UNICOS/mk. The Cray X1 system operating system, UNICOS/mp, draws in particular on the architecture of UNICOS/mk by distributing functionality between nodes of the Cray X1 system in a manner analogous to the distribution across processing elements on a Cray T3E system.
  • Performance of Various Computers Using Standard Linear Equations Software
CS-89-85
Performance of Various Computers Using Standard Linear Equations Software
Jack J. Dongarra*
Electrical Engineering and Computer Science Department, University of Tennessee, Knoxville, TN 37996-1301
Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN 37831
University of Manchester
June 15, 2014

* Electronic mail address: [email protected]. An up-to-date version of this report can be found at http://www.netlib.org/benchmark/performance.ps. This work was supported in part by the Applied Mathematical Sciences subprogram of the Office of Energy Research, U.S. Department of Energy, under Contract DE-AC05-96OR22464, and in part by the Science Alliance, a state supported program at the University of Tennessee.

Abstract
This report compares the performance of different computer systems in solving dense systems of linear equations. The comparison involves approximately a hundred computers, ranging from the Earth Simulator to personal computers.

1. Introduction and Objectives
The timing information presented here should in no way be used to judge the overall performance of a computer system. The results reflect only one problem area: solving dense systems of equations. This report provides performance information on a wide assortment of computers ranging from the home-used PC up to the most powerful supercomputers. The information has been collected over a period of time and will undergo change as new machines are added and as hardware and software systems improve.
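The benchmark behind these numbers times the solution of a dense n-by-n system Ax = b and converts the time to Mflop/s using the LU operation count of roughly 2n^3/3 + 2n^2 floating-point operations. A minimal self-contained sketch of that measurement in C (an illustration written here, not the report's own code, which uses the LINPACK routines; function names are made up):

    #include <math.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    /* Solve A x = b in place by Gaussian elimination with partial pivoting.
       A is n x n in row-major order; b is overwritten with the solution. */
    static void solve(int n, double *A, double *b) {
        for (int k = 0; k < n; k++) {
            int p = k;                       /* find the pivot row */
            for (int i = k + 1; i < n; i++)
                if (fabs(A[i * n + k]) > fabs(A[p * n + k])) p = i;
            for (int j = 0; j < n; j++) {    /* swap rows k and p */
                double t = A[k * n + j]; A[k * n + j] = A[p * n + j]; A[p * n + j] = t;
            }
            double t = b[k]; b[k] = b[p]; b[p] = t;
            for (int i = k + 1; i < n; i++) {   /* eliminate below the pivot */
                double m = A[i * n + k] / A[k * n + k];
                for (int j = k; j < n; j++) A[i * n + j] -= m * A[k * n + j];
                b[i] -= m * b[k];
            }
        }
        for (int i = n - 1; i >= 0; i--) {      /* back substitution */
            double s = b[i];
            for (int j = i + 1; j < n; j++) s -= A[i * n + j] * b[j];
            b[i] = s / A[i * n + i];
        }
    }

    int main(void) {
        int n = 1000;   /* the report tabulates n = 100 and n = 1000 cases */
        double *A = malloc((size_t)n * n * sizeof *A);
        double *b = malloc((size_t)n * sizeof *b);
        for (int i = 0; i < n * n; i++) A[i] = (double)rand() / RAND_MAX;
        for (int i = 0; i < n; i++) b[i] = (double)rand() / RAND_MAX;

        clock_t t0 = clock();
        solve(n, A, b);
        double secs = (double)(clock() - t0) / CLOCKS_PER_SEC;
        double flops = 2.0 / 3.0 * n * n * n + 2.0 * n * n;
        printf("n=%d  time=%.3f s  %.1f Mflop/s\n", n, secs, flops / secs / 1e6);
        free(A); free(b);
        return 0;
    }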
  • Performance Evaluation of the Cray X1 Distributed Shared Memory Architecture
Performance Evaluation of the Cray X1 Distributed Shared Memory Architecture
Thomas H. Dunigan, Jr., Jeffrey S. Vetter, James B. White III, Patrick H. Worley
Oak Ridge National Laboratory

Abstract: The Cray X1 supercomputer is a distributed shared memory vector multiprocessor, scalable to 4096 processors and up to 65 terabytes of memory. The X1's hierarchical design uses the basic building block of the multi-streaming processor (MSP), which is capable of 12.8 GF/s for 64-bit operations. The distributed shared memory (DSM) of the X1 presents a 64-bit global address space that is directly addressable from every MSP with an interconnect bandwidth per computation rate of one byte per floating point operation. Our results show that this high bandwidth and low latency for remote memory accesses translates into improved application performance on important applications. Furthermore, this architecture naturally supports programming models like the Cray SHMEM API, Unified …

… like vector caches and multi-streaming processors.

A. Multi-streaming Processor (MSP)
The X1 has a hierarchical design with the basic building block being the multi-streaming processor (MSP), which is capable of 12.8 GF/s for 64-bit operations (or 25.6 GF/s for 32-bit operations). As illustrated in Figure 1, each MSP is comprised of four single-streaming processors (SSPs), each with two 32-stage 64-bit floating-point vector units and one 2-way super-scalar unit. The SSP uses two clock frequencies, 800 MHz for the vector units and 400 MHz for the scalar unit. Each SSP is capable of 3.2 GF/s for 64-bit operations.
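Since the abstract mentions the Cray SHMEM programming model, here is a minimal one-sided put sketch in C using the OpenSHMEM descendants of those calls (shmem_init, shmem_long_put, shmem_barrier_all). This is an illustration of the globally addressable memory idea written for this summary, not code from the paper, and the original Cray SHMEM spelled some of these routines differently.

    #include <stdio.h>
    #include <shmem.h>

    #define MAX_PES 128

    int main(void) {
        shmem_init();
        int me = shmem_my_pe();              /* this processing element (PE) */
        int npes = shmem_n_pes();

        /* Symmetric array: allocated at the same address on every PE, so a
           remote PE's copy is directly addressable, much as the X1's DSM
           exposes one global address space. */
        static long ranks[MAX_PES];

        long mine = me;
        /* One-sided put: write my rank into element 'me' of PE 0's array. */
        shmem_long_put(&ranks[me], &mine, 1, 0);
        shmem_barrier_all();                 /* wait until all puts are visible */

        if (me == 0) {
            for (int i = 0; i < npes && i < MAX_PES; i++)
                printf("ranks[%d] = %ld\n", i, ranks[i]);
        }
        shmem_finalize();
        return 0;
    }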