STK-6401 - The Affordable Mini-Supercomputer with Muscle


SUPERTEK COMPUTERS

The STK-6401 is a high-performance mini-supercomputer which is fully compatible with the Cray X-MP/48™ instruction set, including some important operations not found in the low-end Cray machines.

The system design combines advanced TTL/CMOS technologies with a highly optimized architecture. By taking advantage of mature, multiple-sourced off-the-shelf devices, the STK-6401 offers performance equal to or better than comparable mini-supers at approximately half their cost. Additional benefits of this design approach are much smaller size, low power consumption, the ability to operate with fan cooling, and intrinsic high reliability.

Central Processing Unit

The STK-6401 architecture is based on five major, tightly-coupled subsystems: Instruction Unit, Vector Unit, Scalar Unit, Memory Unit, and I/O Processor. This structure yields a peak computational rate of 40 MFLOPS and high throughputs for a wide range of applications with various degrees of vectorizability or inherent parallelism.

The Instruction Unit executes the Cray X-MP instruction set, enabling programs currently running on a Cray to be used without change on the STK-6401.

The Vector Unit contains a multi-ported vector register file which supports as many as 16 word transfers per clock cycle - with a bandwidth of 2.56 GB/s. It can fully support all concurrent vector operations as well as vector-memory and vector-scalar data transfers. Hence, peak or near-optimal vector performance can readily be sustained in most applications.

The Scalar Unit contains a multi-ported scalar register file that supports simultaneous scalar operations with low latencies. Its 20-MIPS peak performance for 64-bit scalar operations, supported by the Instruction Unit which issues instructions at the maximum rate of one per cycle, makes the STK-6401 suitable for scalar-oriented applications.

Central Memory

The STK-6401 Memory Unit serves the other major subsystems at very high data transfer rates. Its 4-ported design supports two vector reads, one vector write, and one I/O transfer with an aggregate bandwidth of 640 MB/s. Bank conflicts are reduced to a minimum by a 16-way, fully interleaved structure.

Coupled with the multi-ported vector register file and a built-in vector chaining capability, the Memory Unit makes most vector operations run as if they were efficient memory-to-memory operations. This important feature is not offered in most of the machines on the market today.

I/O Subsystem

The I/O subsystem of the STK-6401 communicates with central memory via a high-speed port which is transparent to CPU operation. This port has a bandwidth of 160 MB/sec and is available to multiple data paths with individual bandwidths of up to 50 MB/sec. Controllers, based on the VMEbus, manage the data flow associated with these channels.

The flexibility of this approach enables very high density disk drives to be interfaced easily to the STK-6401, allowing accommodation of new drives as they become available. Currently both 2.4 MB/sec and 12.5 MB/sec drives are offered.

High-performance magnetic tape units, terminals, and networking via both Ethernet (TCP/IP) and HYPERchannel™ are supported. The STK-6401's I/O structure also makes high bandwidth channels available for customer-specific I/O.

Productive Software Environment

The Cray Time Sharing System (CTSS) was specifically designed to give computational scientists and applications developers a highly productive, interactive, supercomputing environment. Large, complex computational models can be developed rapidly and efficiently using CTSS' broad range of facilities - including advanced text editing, powerful symbolic debugging, fast turnaround of testing, and interaction with long-running codes.

The STK-6401's sophisticated FORTRAN applications environment, coupled with bit-for-bit instruction compatibility with the Cray X-MP, lets users retain their current FORTRAN applications while also taking advantage of the more than 300 third-party and public domain applications developed for the Cray 1™ and Cray X-MP architectures. A UNIX™ environment is also available under CTSS.

Concurrent Interactive and Batch Processing

CTSS accommodates the differing requirements of applications development and long-running, computationally intensive codes. As a result, concurrent interactive and batch access to the STK-6401 is supported with no degradation in system performance. CTSS manages multiple concurrent processes for efficient sharing of the STK-6401's resources. User-controlled preemptive priority scheduling allows users to control resource allocation and system workload, for optimal use of the STK-6401.

To simplify the user's interface with the system, CTSS provides a single command language for both interactive and batch access; the batch job manager - COSMOS - accepts a directives file containing commands in the same form as would be used interactively.

Program Recovery/Restart Facility

To aid recovery and restart, CTSS writes a running program's memory image to a file (called a dropfile) whenever the program is temporarily removed from memory or terminates abnormally. CTSS creates the dropfile in the user's directory (where it is accessible to the user) as part of the setup for program execution.

When a program terminates abnormally, the dropfile receives the program's memory image together with all the system information needed to restart execution from the point at which it was interrupted. Since the dropfile is itself an executable file, the program may be recovered simply by executing the dropfile.

High-Performance Disk I/O

Many engineering and scientific applications require very high disk bandwidth to properly support their computational workload. CTSS has been designed to support exceptionally high I/O rates via a combination of features: file system overhead is reduced through a streamlined, optimized disk file index structure; disk positioning overhead is minimized by allocating to new files the largest possible blocks of contiguous disk space; I/O transfers are optimized by moving data in multiples of disk sectors (512 64-bit words); and operating system I/O processing overhead is substantially lowered by performing all I/O processing tasks in an intelligent IOP subsystem.

Further, CTSS and the FORTRAN Run-Time Library fully support asynchronous I/O, thus enabling applications to take advantage of computational and I/O overlap.

FORTRAN Applications Development Environment

CTSS provides an efficient FORTRAN environment for the applications programmer through a powerful vectorizing compiler, scientific libraries, and dynamic debugging.

Vectorizing Compiler: The Cray FORTRAN Compiler (CFT) is an optimizing, vectorizing compiler that supports language and library enhancements for vector processing. Existing FORTRAN applications programs can therefore take full advantage of the STK-6401's outstanding vector performance. CFT enhancements also support other manufacturers' extensions to the ANSI '77 FORTRAN standard, such as the VMS™ FORTRAN extensions. Consequently, CFT assists the programmer's productivity and maximizes software execution speed and portability.

Scientific Libraries: Under CTSS, four system libraries are available to the applications developer. The library interface is optimized to achieve maximum program performance without the programmer having to be concerned with underlying system and hardware dependencies. MATHLIB and OMNILIB provide optimized and vectorized basic mathematical functions and high-level mathematics and scientific routines, including the Basic Linear Algebra (BLAS) routines. FORTLIB and CFTLIB furnish optimized systems support routines, including high-performance, asynchronous FORTRAN I/O.

Dynamic Symbolic Debugging: Under CTSS, the applications developer has a powerful and convenient means of troubleshooting code - the Dynamic Debug Tool (DDT). Since CTSS allows a program to directly control the execution of another, DDT may be used on a program's dropfile for debugging or for post-mortem dumps without having to recompile or relink the applications program.

[STK-6401 Functional Block Diagram: instruction fetch and decode with instruction buffer; central memory with memory address and data path control; memory buses and functional unit buses; vector, scalar (T & S, B & A), and address registers; floating-point multiply, floating-point add and subtract, floating-point reciprocal, and addition and logical functional units; input/output to peripherals and user interface.]

Hardware Specifications

Architecture
- Full Cray X-MP/48 instruction set.
- Hardware support for scatter/gather, compressed index, and enhanced addressing mode.

Computation Rate
- 40 MFLOPS peak vector performance.
- 20 MIPS peak scalar performance.

Central Memory
- 640 MB/s aggregate bandwidth.
- 160 MB/s bandwidth to I/O.
- Up to 128 MBytes of storage.
- Error detection/correction (SECDED).

Vector Registers (64-Bit)

Software Specifications

CTSS Operating System
- Interactive/batch.
- Multi-user.
- Hierarchical file system.
- Process priority levels.
- Interprocess communication.
- Windowing capability.
- UNIX™ environment.

FORTRAN Applications Development Environment
- CFT (Cray FORTRAN compiler).
- ANSI '77.
- Scalar optimization.
- Automatic vectorization.
- VMS™ FORTRAN extensions.
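A note on the quoted figures: the brochure never states the machine's clock rate, but the bandwidth numbers above all imply the same one. The following sketch (an inference from the datasheet, not part of the original text; MB and GB are read as decimal, as was usual in datasheets of the era) checks that the vector, memory, and I/O bandwidths are mutually consistent:

```python
# Sanity check on the STK-6401's quoted bandwidths. The 20 MHz clock
# derived here is an inference, not a figure stated in the brochure.

WORD = 8  # one 64-bit word, in bytes

# Vector register file: 16 word transfers per clock cycle at 2.56 GB/s.
clock_from_vector = 2.56e9 / (16 * WORD)   # Hz

# Central memory: 4 ports (2 vector reads, 1 vector write, 1 I/O
# transfer) with 640 MB/s aggregate bandwidth, i.e. one word per
# port per cycle.
clock_from_memory = 640e6 / (4 * WORD)     # Hz

# I/O port: 160 MB/s, i.e. one word per cycle.
clock_from_io = 160e6 / (1 * WORD)         # Hz

for name, hz in [("vector", clock_from_vector),
                 ("memory", clock_from_memory),
                 ("i/o", clock_from_io)]:
    print(f"{name:6s}: {hz / 1e6:.0f} MHz")
# All three figures work out to 20 MHz, which also matches the quoted
# 20 MIPS peak scalar rate (one instruction issued per cycle) and the
# 40 MFLOPS peak vector rate (a chained add and multiply per cycle).
```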
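The disk I/O optimization described above - moving data only in whole multiples of a 512-word (4096-byte) sector - can be illustrated with a small sketch. The helper below is hypothetical, written for illustration only, and is not CTSS code:

```python
# Illustration (hypothetical, not CTSS code) of sector-multiple I/O:
# CTSS moves disk data in multiples of one sector = 512 64-bit words,
# so requested transfers are rounded up to a whole number of sectors
# and the disk never performs a partial-sector operation.

SECTOR_WORDS = 512                 # one disk sector, in 64-bit words
SECTOR_BYTES = SECTOR_WORDS * 8    # = 4096 bytes

def padded_transfer_size(nbytes: int) -> int:
    """Round a requested transfer up to a whole number of sectors."""
    sectors = -(-nbytes // SECTOR_BYTES)   # ceiling division
    return sectors * SECTOR_BYTES

print(padded_transfer_size(1))        # 4096  (one full sector)
print(padded_transfer_size(4096))     # 4096  (already sector-aligned)
print(padded_transfer_size(10_000))   # 12288 (rounded up to 3 sectors)
```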