NCAR-TN/189+PROC Computing in the Atmospheric Sciences in the 1980s


NCAR-TN/189+PROC
NCAR TECHNICAL NOTE, April 1982
Proceedings of the Second Annual Computer Users Conference
Computing in the Atmospheric Sciences in the 1980s
Editor: Linda Besen
SCIENTIFIC COMPUTING DIVISION
NATIONAL CENTER FOR ATMOSPHERIC RESEARCH
BOULDER, COLORADO

Section I: Introduction
    Program
    List of Participants
Section II: Future Developments within SCD
Section III: Computing in the Atmospheric Sciences in the 1980's
    Summary of Discussion
Section IV: Data Communication Needs for UW/NCAR Computer Link
    Data Communication and the Atmospheric Sciences
    Achieving a Uniform NCAR Computing Environment
    Conference Recommendations: Data Communications/Uniform Access
Section V: Gateway Machines to NCAR Network Hosts
    Conference Recommendations: Gateway Machines
Section VI: Data Inventories, Archive Preparation, and Access
    The Typical Data Life Cycle and Associated Computing Strategies
    Data Analysis and the NCAR SCD
    Conference Recommendations

Introduction

The Scientific Computing Division hosted its second annual Computer Users' Conference on January 7-8, 1982 in Boulder, Colorado. The purpose of these conferences is to provide a formal channel of communication for users of the Computing Division, and to obtain regular feedback from them which can be used to aid in planning for the future of the SCD.

The conference opened with formal greetings from Walter Macintyre (SCD Director), Wilmot Hess (NCAR Director), and Lawrence Lee (National Science Foundation). Walter Macintyre discussed current planning topics for the Division. His report in Section II includes material on future developments within the Division. Warren Washington presented the keynote address, "Computing in the Atmospheric Sciences in the 1980's." A summary of his talk, and the resulting discussion, is presented in Section III.

Users then attended one of three separate and concurrent workshops which took place on the first day of the conference. Among the various topics considered important to SCD users in the atmospheric sciences are access to the host machines at NCAR; the practicality of moving large files to and from remote sites; and the collection, interchange, and display of large data files. Papers given in these workshops and the Conference Recommendations resulting from each workshop are covered in Sections IV, V, and VI. Section VII contains Walter Macintyre's Response and Concluding Remarks.

"I think the most important and exciting result of the conference," said Dr. Macintyre, "is the clear, unanimous, and unambiguous statement from our users that NCAR must continue to offer the most powerful computing system available. This is necessary not only for our users to continue their current scientific endeavors, but, more particularly, to allow them to explore new avenues of research--avenues that are currently inaccessible with a CRAY-1 class of machine."

Acknowledgments

Many persons within SCD have contributed to the conference. Buck Frye was Chairman of the Conference Committee.
He was also responsible for the Workshop Issues and Guidelines. Darlene Atwood was responsible for the arrangements and invitations. Cicely Ridley provided University Liaison support. Linda Besen was Editor of this Conference Proceedings. Ann Cowley was responsible for the displays and documentation distribution, as well as the consulting service at the conference.

The workshop leaders and SCD members for each workshop are listed below:

Workshop I: Data Communications/Uniform Access
    Panel Discussion: Dave Houghton, Chair; Dave Fulker, Herb Poppe, Dick Sato

Workshop II: Gateway Machines to the NCAR Network Hosts
    Panel Discussion: Steve Orszag, Chair; Paul Rotar, Gary Jensen, Buck Frye

Workshop III: Data Access and Display
    Panel Discussion: Francis Bretherton, Chair; Margaret Drake, Roy Jenne, Bob Lackman, Gary Rasmussen

Program

JANUARY 7

9:00 a.m.   Introductions - Walter Macintyre
            Welcome - Wilmot Hess
            Introduction by NSF - Larry Lee
            Division Status and Planning - Walter Macintyre
10:30 a.m.  Coffee Break
10:45 a.m.  Computing in the Atmospheric Sciences in the 1980s - Warren Washington
1:30 p.m.   CONFERENCE WORKSHOPS (concurrent)

            Workshop I: Data Communications/Uniform Access
                User Requirements - Dave Houghton
                External Network Alternatives - Dave Fulker
                Uniform Access to Network - Herb Poppe

            Workshop II: Gateway Machines
                Scientific Requirements - Steve Orszag
                Front End Machines - Gary Jensen
                Configuration - Paul Rotar

            Workshop III: Data Access and Display
                Collection & Interchange of Scientific Data: User Requirements - Francis Bretherton
                Archives and Data Access - Roy Jenne
                Computing Strategies - Bob Lackman
                Software Tools - Gary Rasmussen

6:30 p.m.   Dinner at The Harvest House

JANUARY 8

9:00 a.m.   Opening Remarks - Walter Macintyre
            CONFERENCE RESULTS
                Workshop I - Data Communications/Uniform Access - Dave Houghton
10:30 a.m.  Coffee Break
11:00 a.m.  CONFERENCE RESULTS (cont.)
                Workshop II - Gateway Machines - Steve Orszag
                Workshop III - Data Access and Display - Francis Bretherton
            Conclusions - Walter Macintyre

List of Participants

Paul Bailey, NCAR ACAD
Linda Bath, NCAR AAP
W. R. Barchet, Battelle Northwest, Richland, Washington
William Baumer, SUNY/Buffalo
Ray Bovet, NCAR AAP
Francis Bretherton, NCAR AAP
Gerald Browning, NCAR SCD
Garrett Campbell, NCAR ASP
Celia Chen, NCAR ATD
Robert Chervin, NCAR AAP
Julianna Chow, NCAR AAP
Ann Cowley, NCAR SCD
Robert Dickinson, NCAR AAP
Dusan Djuric, Dept. of Meteorology, Texas A&M University
Ben Domenico, NCAR SCD
John Donnelly, NCAR SCD
Mary Downton, NCAR ASP
Margaret Drake, NCAR SCD
Jim Drake, NCAR CSD
Sal Farfan, NCAR SCD
Richard Farley, Inst. of Atmospheric Sciences, South Dakota School of Mines & Technology
Carl Friehe, NCAR ATD
Buck Frye, NCAR SCD
Dave Fulker, NCAR SCD
Bonnie Gacnik, NCAR SCD
Lawrence Gates, Climatic Research Institute, Oregon State University
Ron Gilliland, NCAR HAO
James Goerss, CIMMS, Norman, Oklahoma
Gil Green, NCAR SCD
Kadosa Halasi, Dept. of Mathematics, University of Colorado
Barbara Hale, Graduate Center for Cloud Physics Research, University of Missouri
Lofton Henderson, NCAR SCD
Barbara Horner, NCAR SCD
David Houghton, Dept. of Meteorology, University of Wisconsin
Hsiao-ming Hsu, Atmospheric Sciences, University of Wisconsin
Roy Jenne, NCAR SCD
Gary Jensen, NCAR SCD
Jeff Keeler, NCAR ATD
Robert Kelly, Cloud Physics Laboratory, University of Chicago
Thomas Kitterman, Dept. of Meteorology, Florida State University
Daniel Kowalski, College of Engineering, Rutgers University
Carl Kreitzberg, Dept. of Physics, Drexel University
Michael Kuhn, NCAR AAP
Chela Kunasz, JILA, University of Colorado
Bob Lackman, NCAR SCD
Ron Larson, Cray Research
Lawrence Lee, National Science Foundation
Doug Lilly, NCAR AAP
William Little, Woods Hole Oceanographic Institution
Timothy Lorello, Hinds Geophysical Sciences, Chicago, Illinois
Walter Macintyre, NCAR SCD
Tom Mayer, NCAR AAP
William McKie, Climatic Research Institute, Oregon State University
Jack Miller, NCAR HAO
Robert Mitchell, NCAR SCD
Carl Mohr, NCAR CSD
Donald Morris, NCAR SCD
Nancy Norton, NCAR AAP
Bernie O'Lear, NCAR SCD
Stephen Orszag, Dept. of Mathematics, M.I.T.
Richard Oye, NCAR ATD
Jan Paegle, Dept. of Meteorology, University of Utah
Robert Pasken, Dept. of Meteorology, University of Oklahoma
Pete Peterson, NCAR SCD
Eric Pitcher, Dept. of Meteorology, University of Miami
Vic Pizzo, NCAR HAO
Herb Poppe, NCAR SCD
Gandikota Rao, Dept. of Earth & Atmospheric Sciences, St. Louis University
Gary Rasmussen, NCAR SCD
Cicely Ridley, NCAR SCD
John Roads, Scripps Inst. of Oceanography
Paul Rotar, NCAR SCD
Richard Sato, NCAR SCD
Tom Schlatter, NOAA/PROFS
Bert Semtner, NCAR AAP
David Stonehill, Rochester University
Eugene Takle, Climatology-Meteorology, Iowa State University
James Telford, Desert Research Institute, University of Nevada
James Tillman, Dept. of Atmos. Science, University of Washington
Greg Tripoli, Dept. of Atmos. Sciences, Colorado State University
Stacy Walters, NCAR ACAD
Thomas Warner, Dept. of Meteorology, Pennsylvania State University
Warren Washington, NCAR AAP
Rick Wolski, NCAR AAP

Section II: Division Status and Planning

Future Developments within SCD
Walter Macintyre
National Center for Atmospheric Research

As of the date of writing (10/26/81), it is extremely difficult to forecast how many of the things that we would like to do we will actually be able to deliver. We are one month into the fiscal year without knowing exactly our Divisional budget for FY82. The outlook in FY83 is still more uncertain, but the prophets of gloom and doom seem to outnumber the optimists. Therefore I am reviewing in this presentation some things I feel the community must have from the SCD. I make the commitment that I will do everything within my power to ensure that the computational needs of the community are in fact met, but with only a modest expectation of success. However, in a recent letter to the Chairman of the SCD Advisory Panel, the President of NCAR declared that enhancement of the NCAR Computing Facility was the top institutional priority in the months and years ahead. These needs include some relatively novel developments. Last January, the message was loud and clear that the primary need perceived by the community was for
Recommended publications
  • CRAY X-MP Series of Computer Systems
    For close to a decade, Cray Research has been the industry leader in large-scale computer systems. Today, about 70 percent of all supercomputers installed worldwide are Cray systems. They are used in advanced scientific and research laboratories around the world and have gained strong acceptance in diverse industrial environments. No other manufacturer has Cray Research's breadth of success and experience in supercomputer development. The company's initial product, the CRAY-1 Computer System, was first installed in 1976 and quickly became the standard for large-scale scientific computers - and the first commercially successful vector processor. For some time previously, the potential advantages of vector processing had been understood, but effective practical implementation had eluded computer architects. The CRAY-1 broke that barrier, and today vectorization techniques are used commonly by scientists and engineers in a wide variety of disciplines. The field-proven CRAY X-MP Computer Systems now offer significantly more power to solve new and bigger problems while providing better value than any other systems available. Large memory size options allow a wider range of problems to be solved, while innovative multiprocessor design provides practical opportunities to exploit multitasking, the next dimension of parallel processing beyond vectorization. Once again, Cray Research has moved supercomputing forward, offering new levels of hardware performance and software techniques to serve the needs of scientists and engineers today and in the future. Introducing the CRAY X-MP Series of Computer Systems. Announcing expanded capabilities to serve the needs of a broadening marketplace: the CRAY X-MP Series of Computer Systems. The CRAY X-MP Series now comprises nine models, ranging from a uniprocessor version with one million words of central memory to a top-end system with four processors and a 16-million-word memory.
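To make the brochure's central distinction concrete, here is a minimal sketch (not taken from any Cray manual; the function names are hypothetical) of the kind of loop a vectorizing compiler can turn into vector instructions, next to one with a loop-carried dependence that forces scalar execution:

```c
/* Illustrative sketch, not from the brochure: a vectorizable loop
 * versus one that resists vectorization as written. */
#include <stdio.h>

#define N 8

/* Vectorizable: every iteration is independent, so the compiler may
 * process the elements in machine-width chunks. */
static void saxpy(int n, float a, const float *x, float *y) {
    for (int i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}

/* Not vectorizable as written: iteration i consumes the result of
 * iteration i-1 (a loop-carried dependence), forcing scalar order. */
static void prefix_sum(int n, float *x) {
    for (int i = 1; i < n; i++)
        x[i] += x[i - 1];
}

int main(void) {
    float x[N] = {1, 1, 1, 1, 1, 1, 1, 1};
    float y[N] = {0};
    saxpy(N, 2.0f, x, y);   /* y[i] == 2 for all i */
    prefix_sum(N, x);       /* x becomes 1, 2, 3, ..., 8 */
    printf("y[3] = %g, x[7] = %g\n", y[3], x[7]);
    return 0;
}
```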
  • Internet Engineering Task Force January 18-20, 1989 at University
    Proceedings of the Twelfth Internet Engineering Task Force, January 18-20, 1989 at the University of Texas-Austin. Compiled and Edited by Karen Bowers and Phill Gross, March 1989. Acknowledgements: I would like to extend a very special thanks to Bill Bard and Allison Thompson of the University of Texas-Austin for hosting the January 18-20, 1989 IETF meeting. We certainly appreciated the modern meeting rooms and support facilities made available to us at the new Balcones Research Center. Our meeting was especially enhanced by Allison's warmth and hospitality, and her timely response to an assortment of short notice meeting requirements. I would also like to thank Sara Tietz and Susie Karlson of ACE for tackling all the meeting, travel and lodging logistics and adding that touch of class we all so much enjoyed. We are very happy to have you on our team! Finally, a big thank you goes to Monica Hart (NRI) for the tireless and positive contribution she made to the compilation of these Proceedings. Phill Gross. TABLE OF CONTENTS: 1. CHAIRMAN'S MESSAGE 2. IETF ATTENDEES 3. FINAL AGENDA 4. WORKING GROUP REPORTS/SLIDES. UNIVERSITY OF TEXAS-AUSTIN, JANUARY 18-20, 1989. NETWORK STATUS BRIEFINGS AND TECHNICAL PRESENTATIONS: o MERIT NSFNET REPORT (SUSAN HARES) o INTERNET REPORT (ZBIGNIEW OPALKA) o DOE ESNET REPORT (TONY HAIN) o CSNET REPORT (CRAIG PARTRIDGE) o DOMAIN SYSTEM STATISTICS (MARK LOTTOR) o SUPPORT FOR OSI PROTOCOLS IN 4.4 BSD (ROB HAGENS) o INTERNET WORM (MICHAEL KARELS). PAPERS DISTRIBUTED AT IETF: o CONFORMANCE TESTING PROFILE FOR DOD MILITARY STANDARD DATA COMMUNICATIONS HIGH LEVEL PROTOCOL IMPLEMENTATIONS (DCA CODE R640) o CENTER FOR HIGH PERFORMANCE COMPUTING (U OF TEXAS). Chairman's Message, Phill Gross, NRI: In the last Proceedings, I mentioned that we were working to improve the services of the IETF.
  • RATFOR User's Guide
    General Disclaimer: One or more of the following statements may affect this document. This document has been reproduced from the best copy furnished by the organizational source. It is being released in the interest of making available as much information as possible. This document may contain data which exceeds the sheet parameters. It was furnished in this condition by the organizational source and is the best copy available. This document may contain tone-on-tone or color graphs, charts and/or pictures, which have been reproduced in black and white. This document is paginated as submitted by the original source. Portions of this document are not fully legible due to the historical nature of some of the material. However, it is the best reproduction available from the original submission. Produced by the NASA Center for Aerospace Information (CASI). NASA CONTRACTOR REPORT 166601: Ames RATFOR User's Guide, Leland C. Helmle. (NASA-CR-166601) RATFOR USER'S GUIDE (Informatics General Corp.) N85-16490 51 p HC A04/MF A01 CSCL 09B Unclas G3/61 13243. CONTRACT NAS2-11555, January 1985. Ames RATFOR User's Guide, Leland C. Helmle, Informatics General Corporation, 1121 San Antonio Road, Palo Alto, CA 94303. Prepared for Ames Research Center under Contract NAS2-11555. NASA, National Aeronautics and Space Administration, Ames Research Center, Moffett Field, California 94035. Ames RATFOR User's Guide, Version 2.0, by Leland C. Helmle, Informatics General Corporation, July 16, 1983. Prepared under Contract NAS2-11555, Task 101. Table of Contents: 1 Introduction.
  • Cielo Computational Environment Usage Model
    Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release Version 1.1 Version 1.0: Bob Tomlinson, John Cerutti, Robert A. Ballance (Eds.) Version 1.1: Manuel Vigil, Jeffrey Johnson, Karen Haskell, Robert A. Ballance (Eds.) Prepared by the Alliance for Computing at Extreme Scale (ACES), a partnership of Los Alamos National Laboratory and Sandia National Laboratories. Approved for public release, unlimited dissemination LA-UR-12-24015 July 2012 Los Alamos National Laboratory Sandia National Laboratories Disclaimer Unless otherwise indicated, this information has been authored by an employee or employees of the Los Alamos National Security, LLC. (LANS), operator of the Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396 with the U.S. Department of Energy. The U.S. Government has rights to use, reproduce, and distribute this information. The public may copy and use this information without charge, provided that this Notice and any statement of authorship are reproduced on all copies. Neither the Government nor LANS makes any warranty, express or implied, or assumes any liability or responsibility for the use of this information. Bob Tomlinson – Los Alamos National Laboratory John H. Cerutti – Los Alamos National Laboratory Robert A. Ballance – Sandia National Laboratories Karen H. Haskell – Sandia National Laboratories (Editors) Cray, LibSci, and PathScale are federally registered trademarks. Cray Apprentice2, Cray Apprentice2 Desktop, Cray C++ Compiling System, Cray Fortran Compiler, Cray Linux Environment, Cray SHMEM, Cray XE, Cray XE6, Cray XT, Cray XTm, Cray XT3, Cray XT4, Cray XT5, Cray XT5h, Cray XT5m, Cray XT6, Cray XT6m, CrayDoc, CrayPort, CRInform, Gemini, Libsci and UNICOS/lc are trademarks of Cray Inc.
  • Chippewa Operating System
    The Chippewa Operating System, often called COS, is the discontinued operating system for the CDC 6600 supercomputer, generally considered the first supercomputer in the world.[1] Chippewa was initially developed as an experimental system, but was then also deployed on other CDC 6000 machines.[2] It was a rather simple job-control-oriented system derived from the earlier CDC 3000, and it later influenced Kronos and SCOPE. The name of the system was based on CDC's Chippewa Falls research and development center. This operating system at Control Data Corporation was distinct from and preceded the Cray Operating System (also called COS) at Cray. Bibliography: Peterson, J. B. (1969). CDC 6600 control cards, Chippewa Operating System. U.S. Dept. of the Interior.
  • Some Performance Comparisons for a Fluid Dynamics Code
    NBSIR 87-3638: Some Performance Comparisons for a Fluid Dynamics Code. Daniel W. Lozier, Ronald G. Rehm. U.S. DEPARTMENT OF COMMERCE, National Bureau of Standards, National Engineering Laboratory, Center for Applied Mathematics, Gaithersburg, MD 20899. September 1987. U.S. DEPARTMENT OF COMMERCE, Clarence J. Brown, Acting Secretary; NATIONAL BUREAU OF STANDARDS, Ernest Ambler, Director. 2. BENCHMARK PROBLEM: In this section we describe briefly the source of the benchmark problem, the major logical structure of the Fortran program, and the parameters of three different specific instances of the benchmark problem that vary widely in the amount of time and memory required for execution. Research Background: As stated in the introduction, our purpose in benchmarking computers is solely in the interest of furthering our investigations into fundamental problems of fire science. Over a decade ago, stimulated by federal recognition of very large losses of life and property by fires each year throughout the nation, NBS became actively involved in a national effort to reduce such losses. The work at NBS ranges from very practical to quite theoretical; our approach, which proceeds directly from basic principles, is at the theoretical end of this spectrum. Early work was concentrated on developing a mathematical model of convection arising from a prescribed source of heat in an enclosure, e.g.
  • Comparative Analysis of the Software Used in Supercomputer Technologies
    Problems of information technology, 2018, № 1, 92–97. Kamran E. Jafarzade, DOI: 10.25045/jpit.v09.i1.10. Institute of Information Technology of ANAS, Baku, Azerbaijan. [email protected]. COMPARATIVE ANALYSIS OF THE SOFTWARE USED IN SUPERCOMPUTER TECHNOLOGIES. The article considers the classification of the types of supercomputer architectures, such as MPP, SMP and cluster, including software and application programming interfaces: MPI and PVM. It also offers a comparative analysis of software in the study of the dynamics of the distribution of operating systems (OS) over the last year of use in supercomputer technologies. In addition, the effectiveness of the use of CentOS software on the scientific network "AzScienceNet" is analyzed. Keywords: supercomputer, operating system, software, cluster, SMP architecture, MPP architecture, MPI, PVM, CentOS. Introduction: A supercomputer is a computer with high computing performance compared to a regular computer. Supercomputers are often used for scientific and engineering applications that need to process very large databases or perform a large number of calculations. The performance of a supercomputer is measured in floating-point operations per second (FLOPS) rather than millions of instructions per second (MIPS). Since 2015, supercomputers performing up to a quadrillion FLOPS have been under development. Modern supercomputers consist of a large number of high-performance server computers interconnected via a local high-speed backbone to achieve the highest performance [1]. Supercomputers were originally introduced in the 1960s and over the following decades bore the names or monograms of companies such as Seymour Cray's Control Data Corporation (CDC) and, later, Cray Research. By the end of the 20th century, massively parallel supercomputers with tens of thousands of available processors started to be manufactured.
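As a concrete illustration of the FLOPS metric described above, the following toy sketch times a simple vector update and divides the operation count by the elapsed time. It is an assumption-laden illustration (clock resolution, compiler optimization, and memory traffic all distort the result), not a standardized benchmark such as LINPACK:

```c
/* Toy FLOPS estimate, for illustration only: times a vector update
 * and reports operations per second. Not a standard benchmark. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N 10000000L

int main(void) {
    double *x = malloc(N * sizeof *x);
    double *y = malloc(N * sizeof *y);
    if (!x || !y) return 1;
    for (long i = 0; i < N; i++) { x[i] = 1.0; y[i] = 2.0; }

    clock_t t0 = clock();
    for (long i = 0; i < N; i++)
        y[i] += 3.0 * x[i];       /* 2 floating-point ops per element */
    clock_t t1 = clock();

    double secs = (double)(t1 - t0) / CLOCKS_PER_SEC;
    if (secs > 0.0)
        printf("approx. %.1f MFLOPS\n", 2.0 * N / secs / 1e6);
    else
        printf("loop ran too fast to time with clock()\n");
    free(x);
    free(y);
    return 0;
}
```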
  • System Programmer Reference (Cray SV1™ Series)
    ® System Programmer Reference (Cray SV1™ Series) 108-0245-003 Cray Proprietary (c) Cray Inc. All Rights Reserved. Unpublished Proprietary Information. This unpublished work is protected by trade secret, copyright, and other laws. Except as permitted by contract or express written permission of Cray Inc., no part of this work or its content may be used, reproduced, or disclosed in any form. U.S. GOVERNMENT RESTRICTED RIGHTS NOTICE: The Computer Software is delivered as "Commercial Computer Software" as defined in DFARS 48 CFR 252.227-7014. All Computer Software and Computer Software Documentation acquired by or for the U.S. Government is provided with Restricted Rights. Use, duplication or disclosure by the U.S. Government is subject to the restrictions described in FAR 48 CFR 52.227-14 or DFARS 48 CFR 252.227-7014, as applicable. Technical Data acquired by or for the U.S. Government, if any, is provided with Limited Rights. Use, duplication or disclosure by the U.S. Government is subject to the restrictions described in FAR 48 CFR 52.227-14 or DFARS 48 CFR 252.227-7013, as applicable. Autotasking, CF77, Cray, Cray Ada, Cray Channels, Cray Chips, CraySoft, Cray Y-MP, Cray-1, CRInform, CRI/TurboKiva, HSX, LibSci, MPP Apprentice, SSD, SuperCluster, UNICOS, UNICOS/mk, and X-MP EA are federally registered trademarks and Because no workstation is an island, CCI, CCMT, CF90, CFT, CFT2, CFT77, ConCurrent Maintenance Tools, COS, Cray Animation Theater, Cray APP, Cray C90, Cray C90D, Cray CF90, Cray C++ Compiling System, CrayDoc, Cray EL, CrayLink,
  • HP AdvanceNet for Engineering, Dave Morse, Hewlett-Packard Company, 3404 East Harmony Road, Fort Collins, CO 80525
    HP AdvanceNet for Engineering. Dave Morse, Hewlett-Packard Company, 3404 East Harmony Road, Fort Collins, CO 80525. INTRODUCTION: As one of the five solutions in the HP AdvanceNet offering, HP AdvanceNet for Engineering addresses the networking needs of technical professionals engaged in engineering and other technical pursuits. This solution features the same emphasis on standards common to the other solutions. The solution is best understood by considering a model computing environment for engineering. MODEL ENVIRONMENT
    [Figure: diagram of the model engineering computing environment; detail not legible in this copy.]
    The diagram of the environment shows many of the key characteristics of both the computers and the network. A major trend in the engineering area in the past few years has been a move to engineering workstations and acceptance of the UNIX operating system as a de facto standard. These workstations offer many advantages in terms of powerful graphics and consistent performance; but in order to be effective, they must easily integrate with the installed base of timeshare computers and other larger computers which may be added in the future. The resulting environment represents a range of computing power from personal computers to mainframes and supercomputers. In almost all cases, these computers will be supplied by several different vendors. In order for users to realize the maximum benefit of this environment, they should retain the desirable characteristics of the timeshare environment - easy information sharing and centralized system management - and also gain the benefits of the workstations in terms of distributed computing power.
  • The CRAY-1 Computer System
    The CRAY-1 Computer System. Richard M. Russell, Cray Research, Inc. Computer Systems, G. Bell, S. H. Fuller, and D. Siewiorek, Editors. This paper describes the CRAY-1, discusses the evolution of its architecture, and gives an account of some of the problems that were overcome during its manufacture. The CRAY-1 is the only computer to have been built to date that satisfies ERDA's Class VI requirement (a computer capable of processing from 20 to 60 million floating point operations per second).
    (From the close of the preceding article on the same scanned page:) We believe the key to the 11's longevity is its basically simple, clean structure with adequately large (one Mbyte) address space that allows users to get work done. In this way, it has evolved easily with use and with technology. An equally significant factor in its success is a single operating system environment enabling user program sharing among all machines. The machine has thus attracted users who have built significant languages and applications in a variety of environments. These user-developers are thus the dominant system architects-implementors. In retrospect, the machine turned out to be larger and further from a minicomputer than we expected. As such it could easily have died or destroyed the tiny DEC organization that started it. We hope that this paper has provided insight into the interactions of its development. Acknowledgments: Dan Siewiorek deserves our greatest thanks for helping with a complete editing of the text. The referees and editors have been especially helpful. The important program contributions by users are too numerous for us to give by name but here are most of them: APL, Basic, BLISS, DDT, LISP, Pascal, Simula, SOS, TECO, and Tenex.
  • The Hypercube of Innovation
    WORKING PAPER, ALFRED P. SLOAN SCHOOL OF MANAGEMENT. THE HYPERCUBE OF INNOVATION. Allan N. Afuah, Massachusetts Institute of Technology; Nik Bahram, Digital Equipment Corporation. November 1992, WP #3481-92 BPS, Revised July 1993. Forthcoming, Research Policy. © Massachusetts Institute of Technology, Sloan School of Management, 50 Memorial Drive, Cambridge, Massachusetts 02139. Abstract: Innovation has frequently been categorized as either radical, incremental, architectural, modular, or niche, based on the effects which it has on the competence, other products, and investment decisions of the innovating entity. Often, however, an innovation which is, say, architectural at the innovator/manufacturer level may turn out to be radical to customers, incremental to suppliers of components and equipment, and something else to suppliers of critical complementary innovations. These various faces of one innovation at different stages of the innovation value-adding chain are what we call the hypercube of innovation. For many high-technology products, a technology strategy that neglects these various faces of an innovation and dwells only on the effects of the innovation at the innovator/manufacturer level can have disastrous effects. This is especially so for innovations whose success depends on complementary innovations, whose use involves learning, and where positive network externalities exist at the customer level.
  • The Cray Extended Architecture Series of Computer Systems, 1988
    Cray Research's mission is to develop and market the world's most powerful computer systems. For more than a decade, Cray Research has been the industry leader in large-scale computer systems. The majority of supercomputers installed worldwide are Cray systems. These systems are used in advanced research laboratories and have gained strong acceptance in diverse government, university, and industrial environments. No other manufacturer has Cray Research's breadth of success and experience in supercomputer development. The company's initial product, the CRAY-1 computer system, was first installed in 1976. The CRAY-1 computer quickly established itself as the standard for large-scale computer systems; it was the first commercially successful vector processor. Previously, the potential advantages of vector processing had been understood, but effective practical implementation had eluded computer architects. The CRAY-1 system broke that barrier, and today vectorization techniques are used routinely by scientists and engineers in a wide variety of disciplines. The CRAY X-MP series of computer systems was Cray Research's first product line featuring a multiprocessor architecture. The two or four CPUs of the larger CRAY X-MP systems can operate independently and simultaneously on separate jobs for greater system throughput, or can be applied in combination to operate jointly on a single job for better program turnaround time. For the first time, multiprocessing and vector processing combined to provide a geometric increase in computational performance over conventional scalar processing techniques. The CRAY Y-MP computer system, introduced in 1988, extended the CRAY X-MP architecture by providing unsurpassed power for the most demanding computational needs.
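The multiprocessor idea described above - several CPUs applied jointly to one job - can be sketched in modern terms. The code below is not Cray's macrotasking interface; it is a hypothetical POSIX-threads analogue in which each thread takes a contiguous slice of one loop:

```c
/* Illustrative sketch (not Cray's macrotasking API): splitting one
 * job across several CPUs, expressed with POSIX threads.
 * Compile with: cc -pthread example.c */
#include <pthread.h>
#include <stdio.h>

#define N 1000000
#define NCPU 4          /* e.g., the four CPUs of a large X-MP */

static double x[N], y[N];

/* Each "processor" handles a contiguous slice of the loop. */
static void *worker(void *arg) {
    long id = (long)arg;
    long lo = id * (N / NCPU), hi = (id + 1) * (N / NCPU);
    for (long i = lo; i < hi; i++)
        y[i] += 2.0 * x[i];
    return NULL;
}

int main(void) {
    pthread_t tid[NCPU];
    for (long id = 0; id < NCPU; id++)
        pthread_create(&tid[id], NULL, worker, (void *)id);
    for (long id = 0; id < NCPU; id++)
        pthread_join(tid[id], NULL);
    printf("y[0] = %f\n", y[0]);
    return 0;
}
```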