CRAY X-MP Series of Computer Systems
Total Pages: 16
File Type: PDF, Size: 1020 KB
Recommended publications
- Internet Engineering Task Force January 18-20, 1989 at University of Texas-Austin
Proceedings of the Twelfth Internet Engineering Task Force, January 18-20, 1989, at the University of Texas-Austin. Compiled and Edited by Karen Bowers and Phill Gross, March 1989. Acknowledgements: I would like to extend a very special thanks to Bill Bard and Allison Thompson of the University of Texas-Austin for hosting the January 18-20, 1989 IETF meeting. We certainly appreciated the modern meeting rooms and support facilities made available to us at the new Balcones Research Center. Our meeting was especially enhanced by Allison's warmth and hospitality, and her timely response to an assortment of short-notice meeting requirements. I would also like to thank Sara Tietz and Susie Karlson of ACE for tackling all the meeting, travel and lodging logistics and adding that touch of class we all so much enjoyed. We are very happy to have you on our team! Finally, a big thank you goes to Monica Hart (NRI) for the tireless and positive contribution she made to the compilation of these Proceedings. Phill Gross. TABLE OF CONTENTS: 1. CHAIRMAN'S MESSAGE; 2. IETF ATTENDEES; 3. FINAL AGENDA; 4. WORKING GROUP REPORTS/SLIDES, UNIVERSITY OF TEXAS-AUSTIN, JANUARY 18-20, 1989. NETWORK STATUS BRIEFINGS AND TECHNICAL PRESENTATIONS: MERIT NSFNET REPORT (SUSAN HARES); INTERNET REPORT (ZBIGNIEW OPALKA); DOE ESNET REPORT (TONY HAIN); CSNET REPORT (CRAIG PARTRIDGE); DOMAIN SYSTEM STATISTICS (MARK LOTTOR); SUPPORT FOR OSI PROTOCOLS IN 4.4 BSD (ROB HAGENS); INTERNET WORM (MICHAEL KARELS). PAPERS DISTRIBUTED AT IETF: CONFORMANCE TESTING PROFILE FOR DOD MILITARY STANDARD DATA COMMUNICATIONS HIGH LEVEL PROTOCOL IMPLEMENTATIONS (DCA CODE R640); CENTER FOR HIGH PERFORMANCE COMPUTING (U OF TEXAS). Chairman's Message, Phill Gross, NRI: In the last Proceedings, I mentioned that we were working to improve the services of the IETF.
- RATFOR User's Guide
General Disclaimer. One or more of the following statements may affect this document: This document has been reproduced from the best copy furnished by the organizational source. It is being released in the interest of making available as much information as possible. This document may contain data which exceeds the sheet parameters. It was furnished in this condition by the organizational source and is the best copy available. This document may contain tone-on-tone or color graphs, charts and/or pictures, which have been reproduced in black and white. This document is paginated as submitted by the original source. Portions of this document are not fully legible due to the historical nature of some of the material. However, it is the best reproduction available from the original submission. Produced by the NASA Center for Aerospace Information (CASI). NASA Contractor Report 166601: Ames RATFOR User's Guide, by Leland C. Helmle. (NASA-CR-166601) RATFOR USERS GUIDE (Informatics General Corp.), N85-16490, 51 p, HC A04/MF A01, CSCL 09B, Unclas G3/61 13243. Contract NAS2-11555, January 1985. Informatics General Corporation, 1121 San Antonio Road, Palo Alto, CA 94303. Prepared for Ames Research Center under Contract NAS2-11555. National Aeronautics and Space Administration, Ames Research Center, Moffett Field, California 94035. Ames RATFOR User's Guide, Version 2.0, by Leland C. Helmle, Informatics General Corporation, July 16, 1983. Prepared under Contract NAS2-11555, Task 101. Table of Contents: 1. Introduction.
- Cielo Computational Environment Usage Model
Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release, Version 1.1. Version 1.0: Bob Tomlinson, John Cerutti, Robert A. Ballance (Eds.); Version 1.1: Manuel Vigil, Jeffrey Johnson, Karen Haskell, Robert A. Ballance (Eds.). Prepared by the Alliance for Computing at Extreme Scale (ACES), a partnership of Los Alamos National Laboratory and Sandia National Laboratories. Approved for public release, unlimited dissemination. LA-UR-12-24015, July 2012. Los Alamos National Laboratory; Sandia National Laboratories. Disclaimer: Unless otherwise indicated, this information has been authored by an employee or employees of Los Alamos National Security, LLC (LANS), operator of the Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396 with the U.S. Department of Energy. The U.S. Government has rights to use, reproduce, and distribute this information. The public may copy and use this information without charge, provided that this Notice and any statement of authorship are reproduced on all copies. Neither the Government nor LANS makes any warranty, express or implied, or assumes any liability or responsibility for the use of this information. Bob Tomlinson - Los Alamos National Laboratory; John H. Cerutti - Los Alamos National Laboratory; Robert A. Ballance - Sandia National Laboratories; Karen H. Haskell - Sandia National Laboratories (Editors). Cray, LibSci, and PathScale are federally registered trademarks. Cray Apprentice2, Cray Apprentice2 Desktop, Cray C++ Compiling System, Cray Fortran Compiler, Cray Linux Environment, Cray SHMEM, Cray XE, Cray XE6, Cray XT, Cray XTm, Cray XT3, Cray XT4, Cray XT5, Cray XT5h, Cray XT5m, Cray XT6, Cray XT6m, CrayDoc, CrayPort, CRInform, Gemini, Libsci and UNICOS/lc are trademarks of Cray Inc.
- Chippewa Operating System
Chippewa Operating System The Chippewa Operating System often called COS was the operating system for the CDC 6600 supercomputer, generally considered the first super computer in the world. The Chippewa was initially developed as an experimental system, but was then also deployed on other CDC 6000 machines. The Chippewa Operating System often called COS is the discontinued operating system for the CDC 6600 supercomputer, generally considered the first super computer in the world. The Chippewa was initially developed as an experimental system, but was then also deployed on other CDC 6000 machines. The Chippewa was a rather simple job control oriented system derived from the earlier CDC 3000 which later influenced Kronos and SCOPE. The name of the system was based on the Chippewa Falls research and The Chippewa Operating System often called COS was the operating system for the CDC 6600 supercomputer, generally considered the first super computer in the world.[1] The Chippewa was initially developed as an experimental system, but was then also deployed on other CDC 6000 machines.[2]. Bibliography. Peterson, J. B. (1969). CDC 6600 control cards, Chippewa Operating System. U.S. Dept. of the Interior. Categories: Operating systems. Supercomputing. Wikimedia Foundation. 2010. The Chippewa Operating System often called COS was the operating system for the CDC 6600 supercomputer, generally considered the first super computer in the world.[1] The Chippewa was initially developed as an experimental system, but was then also deployed on other CDC 6000 machines.[2]. This operating system at Control Data Corporation was distinct from and preceded the Cray Operating System (also called COS) at Cray. -
- Some Performance Comparisons for a Fluid Dynamics Code
NBSIR 87-3638: Some Performance Comparisons for a Fluid Dynamics Code. Daniel W. Lozier and Ronald G. Rehm, U.S. Department of Commerce, National Bureau of Standards, National Engineering Laboratory, Center for Applied Mathematics, Gaithersburg, MD 20899, September 1987. U.S. Department of Commerce, Clarence J. Brown, Acting Secretary; National Bureau of Standards, Ernest Ambler, Director. 2. BENCHMARK PROBLEM. In this section we describe briefly the source of the benchmark problem, the major logical structure of the Fortran program, and the parameters of three different specific instances of the benchmark problem that vary widely in the amount of time and memory required for execution. Research Background: As stated in the introduction, our purpose in benchmarking computers is solely in the interest of furthering our investigations into fundamental problems of fire science. Over a decade ago, stimulated by federal recognition of very large losses of life and property by fires each year throughout the nation, NBS became actively involved in a national effort to reduce such losses. The work at NBS ranges from very practical to quite theoretical; our approach, which proceeds directly from basic principles, is at the theoretical end of this spectrum. Early work was concentrated on developing a mathematical model of convection arising from a prescribed source of heat in an enclosure, e.g.
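The excerpt describes timing one Fortran code at three problem instances of increasing size. As a rough illustration (not the NBS code itself), the sketch below times a simple in-place averaging kernel at three assumed grid sizes and reports run time and memory, which is the kind of comparison such a benchmark produces.

```python
# Illustrative benchmark harness, not the NBS Fortran code: times a simple
# floating-point kernel at three hypothetical grid sizes to show how run
# time and memory scale with problem size.
import time
import numpy as np

def relax(grid, iterations):
    """Simple in-place averaging sweep; stands in for a convection solver kernel."""
    for _ in range(iterations):
        grid[1:-1, 1:-1] = 0.25 * (grid[:-2, 1:-1] + grid[2:, 1:-1] +
                                   grid[1:-1, :-2] + grid[1:-1, 2:])
    return grid

for n in (32, 64, 128):            # three instances of increasing size (assumed values)
    g = np.zeros((n, n))
    g[0, :] = 1.0                  # prescribed heat source along one boundary
    t0 = time.perf_counter()
    relax(g, iterations=200)
    elapsed = time.perf_counter() - t0
    print(f"n={n:4d}  memory={g.nbytes/1e6:6.2f} MB  time={elapsed:7.3f} s")
```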
- UC ONLINE* Berkeley's Multiloop Computer Control Program
classroom. UC ONLINE*: Berkeley's Multiloop Computer Control Program. ALAN S. FOSS, University of California, Berkeley, CA 94720. MULTILOOPS ABOUND. They are all around us. We invent these intricate control systems and apply them to distillation columns, fired heaters, reactors, steam systems. They are the "brains" and "nervous system" of chemical processes. Processes need them to operate and to operate safely. That's news? Not at all. Everyone has known it for generations. Everyone, that is, except the students in our process control courses here in the States.** That's got to change! And the change has to be made in the first course in process control - the first course, my colleagues, because there is seldom a second. Not everyone will agree with that of course, and those that do will ask, "How?" HERE'S HOW. Imagine that you have a computer program that permits the user to configure any multiloop control system he desires for a particular process, say the system shown in Figure 1 for a distillation column. And suppose such a control system accepts process "measurements" from a dynamic simulation of the column and delivers its "commands" to that same simulation. With the keyboard command YC, KP = 10, KI = 20, FREQ = 5 the user sets the proportional- and integral-gain parameters. [Figure 1: distillation column with reflux L and feed Z, F, Q.] Author Alan Foss writes that "after a quarter of a century of searching for ways to tell Californians about process control, I am still searching. The article published here reports one of the 'finds' along the way."
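The command in the excerpt configures a proportional-integral (PI) loop with gains KP = 10 and KI = 20 sampled at FREQ = 5. The sketch below is a minimal discrete PI loop closed around a toy first-order process; it is an illustration in the spirit of that command, not Foss's actual program, and the process gain and time constant are assumed values.

```python
# Minimal discrete PI loop in the spirit of the command "YC, KP=10, KI=20, FREQ=5".
# The first-order "process" below is a stand-in; the original program closed the
# loop around a dynamic distillation-column simulation instead.
KP, KI, FREQ = 10.0, 20.0, 5.0     # gains and sampling frequency (Hz), from the example command
dt = 1.0 / FREQ

setpoint = 1.0
y = 0.0                            # measured process output
integral = 0.0

for step in range(50):
    error = setpoint - y
    integral += error * dt
    u = KP * error + KI * integral         # controller "command" sent to the process
    # toy first-order process: gain 0.5, time constant 2.0 s (assumed values)
    y += dt * (-y + 0.5 * u) / 2.0
    if step % 10 == 0:
        print(f"t={step*dt:5.1f} s  y={y:6.3f}  u={u:7.2f}")
```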
- Comparative Analysis of the Software Used in Supercomputer Technologies
Problems of information technology, 2018, №1, 92–97. Kamran E. Jafarzade, DOI: 10.25045/jpit.v09.i1.10. Institute of Information Technology of ANAS, Baku, Azerbaijan. [email protected]. COMPARATIVE ANALYSIS OF THE SOFTWARE USED IN SUPERCOMPUTER TECHNOLOGIES. The article considers the classification of the types of supercomputer architectures, such as MPP, SMP and cluster, including software and application programming interfaces: MPI and PVM. It also offers a comparative analysis of software in the study of the dynamics of the distribution of operating systems (OS) over the last year of use in supercomputer technologies. In addition, the effectiveness of the use of CentOS software on the scientific network "AzScienceNet" is analyzed. Keywords: supercomputer, operating system, software, cluster, SMP-architecture, MPP-architecture, MPI, PVM, CentOS. Introduction. A supercomputer is a computer with high computing performance compared to a regular computer. Supercomputers are often used for scientific and engineering applications that need to process very large databases or perform a large number of calculations. The performance of a supercomputer is measured in floating-point operations per second (FLOPS) instead of millions of instructions per second (MIPS). Since 2015, supercomputers performing up to a quadrillion FLOPS have been developed. Modern supercomputers consist of a large number of high-performance server computers interconnected via a local high-speed backbone to achieve the highest performance [1]. Supercomputers were originally introduced in the 1960s and, over the following decades, bore the name or monogram of companies associated with Seymour Cray, such as Control Data Corporation (CDC) and Cray Research. By the end of the 20th century, massively parallel supercomputers with tens of thousands of available processors started to be manufactured.
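The article names MPI and PVM as the message-passing interfaces used on MPP and cluster systems. As a minimal sketch of the programming model, assuming the mpi4py binding is available, the example below queries the rank and size of the communicator and performs a collective reduction, the kind of primitive both interfaces provide.

```python
# Minimal MPI sketch using the mpi4py binding (assumed installed); illustrates the
# rank/size model and a collective reduction common to MPI and PVM programs.
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()          # this process's index within the communicator
size = comm.Get_size()          # total number of processes launched

# Each rank contributes a partial value; the sum is gathered on rank 0.
partial = float(rank + 1)
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print(f"{size} ranks, sum of contributions = {total}")
```

Run, for example, with `mpiexec -n 4 python example.py`; each launched process executes the same script with a different rank.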
- System Programmer Reference (Cray SV1™ Series)
System Programmer Reference (Cray SV1™ Series), 108-0245-003. Cray Proprietary. (c) Cray Inc. All Rights Reserved. Unpublished Proprietary Information. This unpublished work is protected by trade secret, copyright, and other laws. Except as permitted by contract or express written permission of Cray Inc., no part of this work or its content may be used, reproduced, or disclosed in any form. U.S. GOVERNMENT RESTRICTED RIGHTS NOTICE: The Computer Software is delivered as "Commercial Computer Software" as defined in DFARS 48 CFR 252.227-7014. All Computer Software and Computer Software Documentation acquired by or for the U.S. Government is provided with Restricted Rights. Use, duplication or disclosure by the U.S. Government is subject to the restrictions described in FAR 48 CFR 52.227-14 or DFARS 48 CFR 252.227-7014, as applicable. Technical Data acquired by or for the U.S. Government, if any, is provided with Limited Rights. Use, duplication or disclosure by the U.S. Government is subject to the restrictions described in FAR 48 CFR 52.227-14 or DFARS 48 CFR 252.227-7013, as applicable. Autotasking, CF77, Cray, Cray Ada, Cray Channels, Cray Chips, CraySoft, Cray Y-MP, Cray-1, CRInform, CRI/TurboKiva, HSX, LibSci, MPP Apprentice, SSD, SuperCluster, UNICOS, UNICOS/mk, and X-MP EA are federally registered trademarks and Because no workstation is an island, CCI, CCMT, CF90, CFT, CFT2, CFT77, ConCurrent Maintenance Tools, COS, Cray Animation Theater, Cray APP, Cray C90, Cray C90D, Cray CF90, Cray C++ Compiling System, CrayDoc, Cray EL, CrayLink,
- Defense Data Network (DDN) Comply with the Official DoD Military Standard (MIL-STD) Protocols
LIBRARY KB:~50002, RESEARCH REPORTS DIVISION, NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA 93940. DEFENSE COMMUNICATIONS AGENCY: DDN PROTOCOL IMPLEMENTATIONS AND VENDORS GUIDE, AUGUST 1987. Editor: Francine Perillo. Additional copies of this document may be obtained from the DDN Network Information Center, SRI International, 333 Ravenswood Avenue, Room EJ291, Menlo Park, CA 94025. Price is $30.00 domestic, $35 overseas. Copies may also be obtained from the Defense Technical Information Center (DTIC), Cameron Station, Alexandria, VA 22314. It is the intent of the DDN Network Information Center (NIC) to make the DDN Protocol Implementations and Vendors Guide widely available to subscribers of the DDN. The Guide may be obtained in hardcopy or machine-readable form. Hardcopy is available from the NIC for $30.00 ($35.00 overseas) to cover the costs of reproduction and handling. Send check or purchase order to the DDN Network Information Center, SRI International, Room EJ291, 333 Ravenswood Avenue, Menlo Park, CA 94025. Copies are available online to DDN users who have access to the file transfer services, FTP or KERMIT, using pathname NETINFO:VENDORS-GUIDE.DOC. The online file is updated on a continual basis. The hardcopy version is published twice a year in February and August. DDN Protocol Implementations and Vendors Guide. Printed and bound in the United States of America. Published by the DDN Network Information Center, SRI International, Menlo Park, CA 94025. Date: August, 1987. NOTICE: The DDN Protocol Implementation and Vendors Guide is for informational purposes only.
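The excerpt notes that the guide could be fetched over FTP using the pathname NETINFO:VENDORS-GUIDE.DOC. A hedged sketch of the equivalent retrieval with Python's ftplib follows; the host name is a placeholder, since the excerpt does not give one and the original NIC service no longer exists.

```python
# Hypothetical sketch of fetching the guide over FTP with Python's ftplib.
# The host name below is a placeholder; only the pathname NETINFO:VENDORS-GUIDE.DOC
# is taken from the excerpt.
from ftplib import FTP

HOST = "nic.example.org"          # placeholder, not the historical NIC host
PATH = "NETINFO:VENDORS-GUIDE.DOC"

with FTP(HOST) as ftp:
    ftp.login()                   # anonymous login
    with open("vendors-guide.doc", "wb") as out:
        ftp.retrbinary(f"RETR {PATH}", out.write)
print("saved vendors-guide.doc")
```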
- DDN (Defense Data Network) Protocol Implementations and Vendors Guide
Report Documentation Page for the DDN (Defense Data Network) Protocol Implementations and Vendors Guide (U), SRI International, Menlo Park, CA, DDN Network Information Center, D. J. Oakley et al., February 1988 (portions of the scanned form are illegible). Recoverable fields: Report security classification: Unclassified. Distribution/Availability: Distribution Statement A, approved for public release, distribution unlimited. Performing organization report number: NIC 50002. Performing organization: SRI International, DDN Network Information Center, Menlo Park, CA 94025. Monitoring organization: Defense Data Network, Defense Communications System, McLean, VA 22102. Title (include security classification): DDN Protocol Implementations and Vendors Guide. Personal authors: Oakley, Daniel; Perillo, Francine, eds. Date of report: 880200. Subject terms: Transmission Control Protocol/Internet Protocol; TCP/IP; implementations; Defense Data Network; DDN; DDN protocol suite.
- HP AdvanceNet for Engineering. Dave Morse, Hewlett-Packard Company, 3404 East Harmony Road, Fort Collins, CO 80525
HP AdvanceNet for Engineering. Dave Morse, Hewlett-Packard Company, 3404 East Harmony Road, Fort Collins, CO 80525. INTRODUCTION. As one of the five solutions in the HP AdvanceNet offering, HP AdvanceNet for Engineering addresses the networking needs of technical professionals engaged in engineering and other technical pursuits. This solution features the same emphasis on standards common to the other solutions. The solution is best understood by considering a model computing environment for engineering. MODEL ENVIRONMENT. [Figure: diagram of the model engineering computing environment; the details are illegible in this copy.] The diagram of the environment shows many of the key characteristics of both the computers and the network. A major trend in the engineering area in the past few years has been a move to engineering workstations and acceptance of the UNIX operating system as a de facto standard. These workstations offer many advantages in terms of powerful graphics and consistent performance; but in order to be effective, they must easily integrate with the installed base of timeshare computers and other larger computers which may be added in the future. The resulting environment represents a range of computing power from personal computers to mainframes and supercomputers. In almost all cases, these computers will be supplied by several different vendors. In order for users to realize the maximum benefit of this environment, they should retain the desirable characteristics of the timeshare environment - easy information sharing and centralized system management - and also gain the benefits of the workstations in terms of distributed computing power.
- The CRAY-1 Computer System
We believe the key to the 10's longevity is its basically simple, clean structure with adequately large (one Mbyte) address space that allows users to get work done. In this way, it has evolved easily with use and with technology. An equally significant factor in its success is a single operating system environment enabling user program sharing among all machines. The machine has thus attracted users who have built significant languages and applications in a variety of environments. These user-developers are thus the dominant system architects-implementors. In retrospect, the machine turned out to be larger and further from a minicomputer than we expected. As such it could easily have died or destroyed the tiny DEC organization that started it. We hope that this paper has provided insight into the interactions of its development. Acknowledgments. Dan Siewiorek deserves our greatest thanks for helping with a complete editing of the text. The referees and editors have been especially helpful. The important program contributions by users are too numerous for us to give by name but here are most of them: APL, Basic, BLISS, DDT, LISP, Pascal, Simula, SOS, TECO, and Tenex. Computer Systems: G. Bell, S. H. Fuller, and D. Siewiorek, Editors. The CRAY-1 Computer System. Richard M. Russell, Cray Research, Inc. This paper describes the CRAY-1, discusses the evolution of its architecture, and gives an account of some of the problems that were overcome during its manufacture. The CRAY-1 is the only computer to have been built to date that satisfies ERDA's Class VI requirement (a computer capable of processing from 20 to 60 million floating point operations per second).
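ERDA's Class VI band of 20 to 60 million floating-point operations per second is simply an operation rate. A small worked check, using assumed illustrative numbers rather than any measured run, shows how such a figure is computed from an operation count and a run time.

```python
# Worked check of a Class VI figure: MFLOPS = floating-point operations / (runtime * 1e6).
# The operation count and runtime below are assumed illustrative values, not measurements.
ops = 1.2e9          # floating-point operations performed by a hypothetical run
seconds = 30.0       # elapsed time for that run
mflops = ops / (seconds * 1e6)
print(f"{mflops:.0f} MFLOPS")   # 40 MFLOPS, inside ERDA's 20-60 MFLOPS Class VI band
```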