Internet Engineering Task Force January 18-20, 1989 at University
Recommended publications
High-Level Language Features Not Found in Ordinary LISP: The GLISP User's Manual
DOCUMENT RESUME ED 232 860 SE 042 634 AUTHOR Novak, Gordon S., Jr. TITLE GLISP User's Manual. Revised. INSTITUTION Stanford Univ., Calif. Dept. of Computer Science. SPONS AGENCY Advanced Research Projects Agency (DOD), Washington, D.C.; National Science Foundation, Washington, D.C. PUB DATE 23 Nov 82 CONTRACT MDA-903-80-C-007 GRANT SED-7912803 NOTE 43p.; For related documents, see SE 042 630-635. PUB TYPE Guides General (050) Reference Materials General (130) EDRS PRICE MF01/PC02 Plus Postage. DESCRIPTORS *Computer Programs; *Computer Science; Guides; *Programing; *Programing Languages; *Resource Materials IDENTIFIERS *GLISP Programing Language; National Science Foundation ABSTRACT GLISP is a LISP-based language which provides high-level language features not found in ordinary LISP. The GLISP language is implemented by means of a compiler which accepts GLISP as input and produces ordinary LISP as output. This output can be further compiled to machine code by the LISP compiler. GLISP is available for several LISP dialects, including Interlisp, Maclisp, UCI Lisp, ELISP, Franz Lisp, and Portable Standard Lisp. The goal of GLISP is to allow structured objects to be referenced in a convenient, succinct language and to allow the structures of objects to be changed without changing the code which references the objects. GLISP provides both PASCAL-like and English-like syntaxes; much of the power and brevity of GLISP derive from the compiler features necessary to support the relatively informal, English-like language constructs. Provided in this manual is the documentation necessary for using GLISP. The documentation is presented in the following sections: introduction; object descriptions; reference to objects; GLISP program syntax; messages; context rules and reference; GLISP and knowledge representation languages; obtaining and using GLISP; GLISP hacks (some ways of doing things in GLISP which might not be entirely obvious at first glance); and examples of GLISP object declarations and programs.
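To illustrate the idea the abstract describes (code refers to abstract object fields, and a compiler expands those references from a per-type object description, so the stored layout can change without touching the referencing code), here is a minimal conceptual sketch in Python. It is not GLISP syntax and not the GLISP compiler; the "circle" descriptions and field names below are invented for the example.

```python
# Conceptual analogue of GLISP-style compilation: programs reference
# abstract fields; a per-type "object description" says how each field
# is actually obtained, so the layout can change without editing callers.

# Hypothetical object description: abstract field -> concrete accessor.
CIRCLE_DESCRIPTION = {
    "radius":   lambda obj: obj["r"],                         # stored directly
    "diameter": lambda obj: 2 * obj["r"],                     # computed field
    "area":     lambda obj: 3.141592653589793 * obj["r"] ** 2,
}

def ref(description, obj, field):
    """Resolve an abstract field reference through the object description."""
    return description[field](obj)

circle = {"r": 2.0}
print(ref(CIRCLE_DESCRIPTION, circle, "diameter"))  # 4.0
print(ref(CIRCLE_DESCRIPTION, circle, "area"))      # ~12.566

# If the stored representation changes (say the record now keeps the
# diameter instead of the radius), only the description is updated;
# callers that ask for "radius" or "area" are unchanged.
CIRCLE_DESCRIPTION_V2 = {
    "radius":   lambda obj: obj["d"] / 2,
    "diameter": lambda obj: obj["d"],
    "area":     lambda obj: 3.141592653589793 * (obj["d"] / 2) ** 2,
}
print(ref(CIRCLE_DESCRIPTION_V2, {"d": 4.0}, "radius"))  # 2.0
```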
CRAY X-MP Series of Computer Systems
For close to a decade, Cray Research has been the industry leader in large-scale computer systems. Today, about 70 percent of all supercomputers installed worldwide are Cray systems. They are used in advanced scientific and research laboratories around the world and have gained strong acceptance in diverse industrial environments. No other manufacturer has Cray Research's breadth of success and experience in supercomputer development. The company's initial product, the CRAY-1 Computer System, was first installed in 1976 and quickly became the standard for large-scale scientific computers - and the first commercially successful vector processor. For some time previously, the potential advantages of vector processing had been understood, but effective practical implementation had eluded computer architects. The CRAY-1 broke that barrier, and today vectorization techniques are used commonly by scientists and engineers in a wide variety of disciplines. The field-proven CRAY X-MP Computer Systems now offer significantly more power to solve new and bigger problems while providing better value than any other systems available. Large memory size options allow a wider range of problems to be solved, while innovative multiprocessor design provides practical opportunities to exploit multitasking, the next dimension of parallel processing beyond vectorization. Once again, Cray Research has moved supercomputing forward, offering new levels of hardware performance and software techniques to serve the needs of scientists and engineers today and in the future. Introducing the CRAY X-MP Series of Computer Systems: Announcing expanded capabilities to serve the needs of a broadening marketplace: the CRAY X-MP Series of Computer Systems. The CRAY X-MP Series now comprises nine models, ranging from a uniprocessor version with one million words of central memory to a top-end system with four processors and a 16-million-word memory.
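As a rough illustration of what vectorization means in practice (one operation applied across whole vectors of operands instead of an element-at-a-time loop), the sketch below contrasts a scalar SAXPY-style update with its whole-vector form. Python and NumPy are assumptions of this example; Cray codes of the era were vectorized Fortran, and the function names here are invented for illustration.

```python
# Illustrative only: the same SAXPY-style update (y <- a*x + y) written as a
# scalar loop and as a single whole-vector operation. Vector hardware (and,
# here, NumPy's array arithmetic) avoids the per-element loop overhead.
import time
import numpy as np

def saxpy_scalar(a, x, y):
    out = y.copy()
    for i in range(len(x)):          # one element per loop iteration
        out[i] += a * x[i]
    return out

def saxpy_vector(a, x, y):
    return a * x + y                 # one operation over the whole vectors

rng = np.random.default_rng(0)
x = rng.random(200_000)
y = rng.random(200_000)

t0 = time.perf_counter()
s = saxpy_scalar(2.5, x, y)
t1 = time.perf_counter()
v = saxpy_vector(2.5, x, y)
t2 = time.perf_counter()

assert np.allclose(s, v)
print(f"scalar loop: {t1 - t0:.4f} s   vectorized: {t2 - t1:.4f} s")
```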
The Evolution of Lisp
The Evolution of Lisp. Guy L. Steele Jr. (Thinking Machines Corporation, 245 First Street, Cambridge, Massachusetts 02142; Phone: (617) 234-2860; FAX: (617) 243-4444; E-mail: [email protected]) and Richard P. Gabriel (Lucid, Inc., 707 Laurel Street, Menlo Park, California 94025; Phone: (415) 329-8400; FAX: (415) 329-8480; E-mail: [email protected]). Abstract: Lisp is the world’s greatest programming language—or so its proponents think. The structure of Lisp makes it easy to extend the language or even to implement entirely new dialects without starting from scratch. Overall, the evolution of Lisp has been guided more by institutional rivalry, one-upsmanship, and the glee born of technical cleverness that is characteristic of the “hacker culture” than by sober assessments of technical requirements. Nevertheless this process has eventually produced both an industrial-strength programming language, messy but powerful, and a technically pure dialect, small but powerful, that is suitable for use by programming-language theoreticians. We pick up where McCarthy’s paper in the first HOPL conference left off. We trace the development chronologically from the era of the PDP-6, through the heyday of Interlisp and MacLisp, past the ascension and decline of special purpose Lisp machines, to the present era of standardization activities. We then examine the technical evolution of a few representative language features, including both some notable successes and some notable failures, that illuminate design issues that distinguish Lisp from other programming languages. We also discuss the use of Lisp as a laboratory for designing other programming languages. We conclude with some reflections on the forces that have driven the evolution of Lisp.
RATFOR User's Guide
General Disclaimer: One or more of the following statements may affect this document. This document has been reproduced from the best copy furnished by the organizational source. It is being released in the interest of making available as much information as possible. This document may contain data which exceeds the sheet parameters. It was furnished in this condition by the organizational source and is the best copy available. This document may contain tone-on-tone or color graphs, charts and/or pictures, which have been reproduced in black and white. This document is paginated as submitted by the original source. Portions of this document are not fully legible due to the historical nature of some of the material. However, it is the best reproduction available from the original submission. Produced by the NASA Center for Aerospace Information (CASI). NASA Contractor Report 166601: Ames RATFOR User's Guide, by Leland C. Helmle. (NASA-CR-166601) RATFOR USERS GUIDE (Informatics General Corp.) N85-16490 51 p HC A04/MF A01 CSCL 09B Unclas G3/61 13243. Contract NAS2-11555, January 1985. Ames RATFOR User's Guide, Leland C. Helmle, Informatics General Corporation, 1121 San Antonio Road, Palo Alto, CA 94303. Prepared for Ames Research Center under Contract NAS2-11555. National Aeronautics and Space Administration, Ames Research Center, Moffett Field, California 94035. Ames RATFOR User's Guide, Version 2.0, by Leland C. Helmle, Informatics General Corporation, July 16, 1983. Prepared under Contract NAS2-11555, Task 101. Table of Contents: 1 Introduction ...
P106-Kessler.Pdf
Proceedings of the ACM SIGPLAN '84 Symposium on Compiler Construction, SIGPLAN Notices Vol. 19, No. 8, June 1984. Peep - An Architectural Description Driven Peephole Optimizer. Robert R. Kessler, Portable AI Support Systems Project, Department of Computer Science, University of Utah, Salt Lake City, Utah 84112. Abstract: Peep is an architectural description driven peephole optimizer that is being adapted for use in the Portable Standard Lisp compiler. Tables of optimizable instructions are generated prior to the creation of the compiler from the architectural description of the target machine. Peep then performs global flow analysis on the target machine code and optimizes instructions as defined in the table. This global flow analysis allows optimization across basic blocks of instructions, and the use of tables created at compiler-generation time minimizes the overhead of discovering optimizable instructions. ... global flow analysis, because even though it uses the global information it still performs "local" peephole transformations, e.g. it doesn't do code motion and loop invariant removal. Peep is currently being integrated into the Portable Standard Lisp (PSL) compiler [9]. The PSL environment is an ideal place to utilize Peep, since its compiler needs a new optimizer for each target machine (Currently PSL is supported on DecSystem-20, DEC Vax, Motorola 68000, Cray-1 and IBM-370.) The PSL compiler generates code by translating the input language into a sequence of virtual machine instructions, which are then macro expanded into the target machine "LAP" instructions. These instructions are then translated into binary code for subsequent direct loading into the running image, or into a FASL file for ...
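The mechanism the abstract describes (match short instruction sequences against a table prepared before the compiler is built, then rewrite them in place) can be sketched generically. The Python sketch below is a minimal table-driven peephole pass; the instruction names, wildcard convention, and pattern table are invented for illustration and are not Peep's actual tables, PSL virtual-machine instructions, or LAP code.

```python
# A minimal table-driven peephole optimizer: scan a linear instruction
# list and replace any window that matches an entry in the pattern table.
# Instructions are (opcode, operands...) tuples; single uppercase letters
# in a pattern are wildcards that must bind consistently within a pattern.

# Hypothetical pattern table (in Peep, tables like this would be derived
# ahead of time from the architectural description of the target machine).
PATTERNS = [
    # store X to M followed by load M back into X  ->  just the store
    ([("STORE", "X", "M"), ("LOAD", "M", "X")], [("STORE", "X", "M")]),
    # add immediate 0 is a no-op
    ([("ADDI", "X", 0)], []),
]

def _match(window, pattern):
    """Return a wildcard binding if window matches pattern, else None."""
    binding = {}
    for instr, pat in zip(window, pattern):
        if len(instr) != len(pat) or instr[0] != pat[0]:
            return None
        for actual, formal in zip(instr[1:], pat[1:]):
            if isinstance(formal, str) and len(formal) == 1 and formal.isupper():
                if binding.setdefault(formal, actual) != actual:
                    return None
            elif formal != actual:
                return None
    return binding

def _subst(template, binding):
    return [tuple(binding.get(f, f) for f in instr) for instr in template]

def peephole(code):
    """Repeatedly apply the pattern table until no window matches."""
    changed = True
    while changed:
        changed = False
        for i in range(len(code)):
            for pattern, replacement in PATTERNS:
                window = code[i:i + len(pattern)]
                if len(window) < len(pattern):
                    continue
                binding = _match(window, pattern)
                if binding is not None:
                    code[i:i + len(pattern)] = _subst(replacement, binding)
                    changed = True
                    break
            if changed:
                break
    return code

print(peephole([("STORE", "r1", "m0"), ("LOAD", "m0", "r1"), ("ADDI", "r2", 0)]))
# -> [('STORE', 'r1', 'm0')]
```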
Cielo Computational Environment Usage Model
Cielo Computational Environment Usage Model With Mappings to ACE Requirements for the General Availability User Environment Capabilities Release Version 1.1 Version 1.0: Bob Tomlinson, John Cerutti, Robert A. Ballance (Eds.) Version 1.1: Manuel Vigil, Jeffrey Johnson, Karen Haskell, Robert A. Ballance (Eds.) Prepared by the Alliance for Computing at Extreme Scale (ACES), a partnership of Los Alamos National Laboratory and Sandia National Laboratories. Approved for public release, unlimited dissemination LA-UR-12-24015 July 2012 Los Alamos National Laboratory Sandia National Laboratories Disclaimer Unless otherwise indicated, this information has been authored by an employee or employees of the Los Alamos National Security, LLC. (LANS), operator of the Los Alamos National Laboratory under Contract No. DE-AC52-06NA25396 with the U.S. Department of Energy. The U.S. Government has rights to use, reproduce, and distribute this information. The public may copy and use this information without charge, provided that this Notice and any statement of authorship are reproduced on all copies. Neither the Government nor LANS makes any warranty, express or implied, or assumes any liability or responsibility for the use of this information. Bob Tomlinson – Los Alamos National Laboratory John H. Cerutti – Los Alamos National Laboratory Robert A. Ballance – Sandia National Laboratories Karen H. Haskell – Sandia National Laboratories (Editors) Cray, LibSci, and PathScale are federally registered trademarks. Cray Apprentice2, Cray Apprentice2 Desktop, Cray C++ Compiling System, Cray Fortran Compiler, Cray Linux Environment, Cray SHMEM, Cray XE, Cray XE6, Cray XT, Cray XTm, Cray XT3, Cray XT4, Cray XT5, Cray XT5h, Cray XT5m, Cray XT6, Cray XT6m, CrayDoc, CrayPort, CRInform, Gemini, Libsci and UNICOS/lc are trademarks of Cray Inc. -
Chippewa Operating System
Chippewa Operating System The Chippewa Operating System often called COS was the operating system for the CDC 6600 supercomputer, generally considered the first super computer in the world. The Chippewa was initially developed as an experimental system, but was then also deployed on other CDC 6000 machines. The Chippewa Operating System often called COS is the discontinued operating system for the CDC 6600 supercomputer, generally considered the first super computer in the world. The Chippewa was initially developed as an experimental system, but was then also deployed on other CDC 6000 machines. The Chippewa was a rather simple job control oriented system derived from the earlier CDC 3000 which later influenced Kronos and SCOPE. The name of the system was based on the Chippewa Falls research and The Chippewa Operating System often called COS was the operating system for the CDC 6600 supercomputer, generally considered the first super computer in the world.[1] The Chippewa was initially developed as an experimental system, but was then also deployed on other CDC 6000 machines.[2]. Bibliography. Peterson, J. B. (1969). CDC 6600 control cards, Chippewa Operating System. U.S. Dept. of the Interior. Categories: Operating systems. Supercomputing. Wikimedia Foundation. 2010. The Chippewa Operating System often called COS was the operating system for the CDC 6600 supercomputer, generally considered the first super computer in the world.[1] The Chippewa was initially developed as an experimental system, but was then also deployed on other CDC 6000 machines.[2]. This operating system at Control Data Corporation was distinct from and preceded the Cray Operating System (also called COS) at Cray. -
Some Performance Comparisons for a Fluid Dynamics Code
NBSIR 87-3638: Some Performance Comparisons for a Fluid Dynamics Code. Daniel W. Lozier, Ronald G. Rehm, U.S. Department of Commerce, National Bureau of Standards, National Engineering Laboratory, Center for Applied Mathematics, Gaithersburg, MD 20899. September 1987. U.S. DEPARTMENT OF COMMERCE, Clarence J. Brown, Acting Secretary; NATIONAL BUREAU OF STANDARDS, Ernest Ambler, Director. 2. BENCHMARK PROBLEM: In this section we describe briefly the source of the benchmark problem, the major logical structure of the Fortran program, and the parameters of three different specific instances of the benchmark problem that vary widely in the amount of time and memory required for execution. Research Background: As stated in the introduction, our purpose in benchmarking computers is solely in the interest of furthering our investigations into fundamental problems of fire science. Over a decade ago, stimulated by federal recognition of very large losses of life and property by fires each year throughout the nation, NBS became actively involved in a national effort to reduce such losses. The work at NBS ranges from very practical to quite theoretical; our approach, which proceeds directly from basic principles, is at the theoretical end of this spectrum. Early work was concentrated on developing a mathematical model of convection arising from a prescribed source of heat in an enclosure, e.g. ...
Jon L. White Collection on Common Lisp
http://oac.cdlib.org/findaid/ark:/13030/c89w0mkb Jon L. White collection on Common Lisp. Finding aid prepared by Bo Doub, Kim Hayden, and Sara Chabino Lott. Processing of this collection was made possible through generous funding from The Andrew W. Mellon Foundation, administered through the Council on Library and Information Resources' Cataloging Hidden Special Collections and Archives grant. Computer History Museum, 1401 N. Shoreline Blvd., Mountain View, CA 94043, (650) 810-1010, [email protected], March 2017. Title: Jon L. White collection Identifier/Call Number: X6823.2013 Contributing Institution: Computer History Museum Language of Material: English Physical Description: 8.75 Linear feet, 7 record cartons Date (bulk): Bulk, 1978-1995 Date (inclusive): 1963-2012 Abstract: The Jon L. White collection on Common Lisp contains material relating to the development and standardization of the programming language Common Lisp and, more generally, the Lisp family of programming languages. Records date from 1963 to 2012, with the bulk of the material ranging from 1978 to 1995, when White was working at MIT’s Artificial Intelligence Laboratory, Xerox PARC (Palo Alto Research Center), Lucid, and Harlequin Group. Throughout many of these positions, White was serving on the X3J13 Committee to formalize a Common Lisp standard, which aimed to combine and standardize many previous dialects of Lisp. This collection consists of conference proceedings, manuals, X3J13 Committee correspondence and meeting minutes, notebooks, technical papers, and periodicals documenting White’s work in all of these roles. Other dialects of Lisp--especially MAClisp--are also major focuses of the collection.
Comparative Analysis of the Software Used in Supercomputer Technologies
Problems of information technology, 2018, №1, 92–97. Kamran E. Jafarzade (DOI: 10.25045/jpit.v09.i1.10), Institute of Information Technology of ANAS, Baku, Azerbaijan, [email protected]. COMPARATIVE ANALYSIS OF THE SOFTWARE USED IN SUPERCOMPUTER TECHNOLOGIES. The article considers the classification of supercomputer architecture types, such as MPP, SMP and cluster, including software and application programming interfaces: MPI and PVM. It also offers a comparative analysis of the software, studying the dynamics of operating system (OS) distribution over the last year of use in supercomputer technologies. In addition, the effectiveness of the use of CentOS software on the scientific network "AzScienceNet" is analyzed. Keywords: supercomputer, operating system, software, cluster, SMP-architecture, MPP-architecture, MPI, PVM, CentOS. Introduction: A supercomputer is a computer with high computing performance compared to a regular computer. Supercomputers are often used for scientific and engineering applications that need to process very large databases or perform a large number of calculations. The performance of a supercomputer is measured in floating-point operations per second (FLOPS) instead of millions of instructions per second (MIPS). Since 2015, supercomputers performing up to a quadrillion FLOPS have been developed. Modern supercomputers consist of a large number of high-performance server computers interconnected via a local high-speed backbone to achieve the highest performance [1]. Supercomputers were originally introduced in the 1960s and over the following decades bore the names or monograms of companies and designers such as Seymour Cray of Control Data Corporation (CDC) and Cray Research. By the end of the 20th century, massively parallel supercomputers with tens of thousands of available processors started to be manufactured.
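To make the FLOPS metric mentioned above concrete, the sketch below estimates floating-point throughput by timing a dense n x n matrix multiplication, which performs roughly 2*n^3 floating-point operations. Python and NumPy are assumptions of this illustration, and the method is only a rough estimate, not the LINPACK-style benchmark used for published supercomputer rankings.

```python
# Rough FLOPS estimate: time an n x n matrix multiply and divide the
# ~2*n^3 floating-point operations it performs by the elapsed time.
import time
import numpy as np

n = 1024
a = np.random.rand(n, n)
b = np.random.rand(n, n)

start = time.perf_counter()
c = a @ b                          # about 2*n**3 multiply-adds
elapsed = time.perf_counter() - start

flops = 2 * n**3 / elapsed
print(f"~{flops / 1e9:.1f} GFLOPS in {elapsed:.3f} s")
```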
System Programmer Reference (Cray SV1™ Series)
® System Programmer Reference (Cray SV1™ Series) 108-0245-003 Cray Proprietary (c) Cray Inc. All Rights Reserved. Unpublished Proprietary Information. This unpublished work is protected by trade secret, copyright, and other laws. Except as permitted by contract or express written permission of Cray Inc., no part of this work or its content may be used, reproduced, or disclosed in any form. U.S. GOVERNMENT RESTRICTED RIGHTS NOTICE: The Computer Software is delivered as "Commercial Computer Software" as defined in DFARS 48 CFR 252.227-7014. All Computer Software and Computer Software Documentation acquired by or for the U.S. Government is provided with Restricted Rights. Use, duplication or disclosure by the U.S. Government is subject to the restrictions described in FAR 48 CFR 52.227-14 or DFARS 48 CFR 252.227-7014, as applicable. Technical Data acquired by or for the U.S. Government, if any, is provided with Limited Rights. Use, duplication or disclosure by the U.S. Government is subject to the restrictions described in FAR 48 CFR 52.227-14 or DFARS 48 CFR 252.227-7013, as applicable. Autotasking, CF77, Cray, Cray Ada, Cray Channels, Cray Chips, CraySoft, Cray Y-MP, Cray-1, CRInform, CRI/TurboKiva, HSX, LibSci, MPP Apprentice, SSD, SuperCluster, UNICOS, UNICOS/mk, and X-MP EA are federally registered trademarks and Because no workstation is an island, CCI, CCMT, CF90, CFT, CFT2, CFT77, ConCurrent Maintenance Tools, COS, Cray Animation Theater, Cray APP, Cray C90, Cray C90D, Cray CF90, Cray C++ Compiling System, CrayDoc, Cray EL, CrayLink, -
HP AdvanceNet for Engineering Dave Morse Hewlett-Packard Company 3404 East Harmony Road Fort Collins, CO 80525
HP AdvanceNet for Engineering. Dave Morse, Hewlett-Packard Company, 3404 East Harmony Road, Fort Collins, CO 80525. INTRODUCTION: As one of the five solutions in the HP AdvanceNet offering, HP AdvanceNet for Engineering addresses the networking needs of technical professionals engaged in engineering and other technical pursuits. This solution features the same emphasis on standards common to the other solutions. The solution is best understood by considering a model computing environment for engineering. MODEL ENVIRONMENT: [Diagram of the model engineering computing environment] The diagram of the environment shows many of the key characteristics of both the computers and the network. A major trend in the engineering area in the past few years has been a move to engineering workstations and acceptance of the UNIX operating system as a de facto standard. These workstations offer many advantages in terms of powerful graphics and consistent performance; but in order to be effective, they must easily integrate with the installed base of timeshare computers and other larger computers which may be added in the future. The resulting environment represents a range of computing power from personal computers to mainframes and super computers. In almost all cases, these computers will be supplied by several different vendors. In order for users to realize the maximum benefit of this environment, they should retain the desirable characteristics of the timeshare environment - easy information sharing and centralized system management - and also gain the benefits of the workstations in terms of distributed computing power.