Trends in Supercomputers and Computational Physics

T. Bloch
Centre de Calcul Vectoriel pour la Recherche, Ecole Polytechnique, Palaiseau

1. Introduction

Investigation and evaluation of complex non-linear systems are frequently only possible using numerical modelling. This, almost experimental, computational science is increasing in scope and importance every day thanks to the combination of improved computational methods and the continuous development of faster and faster "supercomputers". Application fields which were pioneered in the 1950s and the 1960s have today entered the production stage: structural safety calculations, global circulation models for meteorological forecasting, aerodynamic design and oil reservoir simulation, for example. The use of supercomputers and numerical modelling permits considerable global economies in agriculture and transport due to improved weather forecasting, while in engineering applications shorter development cycles, lower design costs and lighter end designs are achieved.

Today, scientists using numerical models explore the basic mechanisms of semiconductors, apply global circulation models to climatic and oceanographic problems, probe the behaviour of galaxies and try to verify basic theories of matter, such as Quantum Chromodynamics, by simulating the constitution of elementary particles. Chemists, crystallographers and molecular dynamics researchers develop models for chemical reactions and the formation of crystals, and try to deduce the chemical properties of molecules as a function of the shapes of their states. Chaotic systems are studied extensively in turbulence (combustion included), and the design of the next generation of controlled fusion devices relies heavily on computational physics. Thus many complex systems which cannot be studied analytically with present methods, and which are not open to experimentation, are studied on supercomputers, improving our basic scientific understanding.

As a result, the scientific and economic importance of supercomputers, and of their advanced use, is now clearly perceived by all major industrial countries. More than 100 CRAY and Cyber 205 computers are installed world-wide, with Europe accounting for about one third and Japan for about 10%. Although Japan thus seems to lag slightly behind on supercomputer applications, the trend is to catch up very rapidly now that both Hitachi and Fujitsu have proven machines available (the S-810 and the VP-200) and NEC is bringing out the SX-2 by the middle of 1985. In the United States there are more supercomputer installations than anywhere else, but it is now generally accepted - thanks in large part to K. Wilson - that access for the university research community has not been as good as in Europe, where an appreciable proportion of supercomputers are installed in generally accessible regional or inter-university computer centres: one in Holland, two in the United Kingdom, one in France, one in Italy and four or five in Germany. A major effort is now under way in the United States to distribute access to supercomputers much more widely; this programme is handled through the National Science Foundation which, already in 1985, is funded to set up about three new supercomputer centres.

2. Examples taken from the early history of computational physics

The starting point for using numerical methods in cases where analytical ones are no longer possible can perhaps be traced back to the work of W.F. Sheppard on finite difference methods. L.F. Richardson used these methods from 1910 onwards, first for a dam computation, after which he went on to conceive the first global circulation model, imagining 64,000 people in a huge amphitheatre, each advancing one time-step at a time on the basis of information from their neighbours, under the baton of a conductor. R. Courant, K. Friedrichs and H. Lewy (1928) provided the theoretical foundation, showing that partial differential equations are indeed approximated by the discretized equations provided certain stability criteria are satisfied.
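To recall the flavour of such a criterion (a standard textbook illustration, not spelled out in the original text): for the model advection equation $u_t + a u_x = 0$ with $a > 0$, discretized by the explicit upwind scheme

\[
u_j^{n+1} = u_j^n - \frac{a\,\Delta t}{\Delta x}\,\bigl(u_j^n - u_{j-1}^n\bigr),
\]

the Courant-Friedrichs-Lewy condition requires the numerical domain of dependence to contain the physical one,

\[
\frac{a\,\Delta t}{\Delta x} \le 1 ,
\]

so that the admissible time-step shrinks in proportion to the grid spacing.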
Around 1950, John von Neumann and a number of collaborators attacked new problems with the first digital computers (ENIAC in 1946, then MANIAC) and started to realize the programme of computational physics which he had published in 1946 together with H.H. Goldstine:

- develop stability criteria for time-step and grid size for finite difference methods used to solve parabolic partial differential equations;
- make shock-wave calculations using smoothing techniques, with boundary layers of at least the size of the grid;
- code the first barotropic (no temperature variation along isobars) general circulation model, with a 15 x 18 grid over the continental United States (with J.G. Charney, R. Fjörtoft and J. Smagorinsky);
- make hydrodynamic instability and neutron cascade calculations with E. Fermi and S. Ulam at Los Alamos on MANIAC. On this occasion, S. Ulam discovered the Monte Carlo method.

The discovery of solitons is a classic example of a new insight into physics found by computational physics. In about 1955, E. Fermi, J. Pasta and S. Ulam studied the motion of a chain of masses coupled by weakly non-linear springs. (They were trying to develop a theory explaining the finite thermal conductivity of solids.) Long-wavelength disturbances of this system resulted, as expected, in a mix of shorter-wavelength modes. However, after a finite time, the system was unexpectedly seen to revert to (almost) its initial state, without equipartition of the energy. In the early 1960s, M. Kruskal and N. Zabusky found that the problem studied could be approximately reduced to the Korteweg-de Vries equation, a well-known hydrodynamics equation describing shallow-water waves. Numerical studies of the solutions to this equation led to the discovery that its complicated solutions could be classified as a set of pulse-like waves propagating at constant velocity with an unchanged shape, called solitons.
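In its standard normalisation (the scaling is conventional, not taken from the original text), the Korteweg-de Vries equation reads

\[
u_t + 6\,u\,u_x + u_{xxx} = 0 ,
\]

and its one-soliton solutions are

\[
u(x,t) = \frac{c}{2}\,\operatorname{sech}^2\!\Bigl(\frac{\sqrt{c}}{2}\,(x - c\,t - x_0)\Bigr),
\]

pulses travelling at speed $c$ whose amplitude grows, and whose width shrinks, with their velocity; remarkably, two such pulses emerge from a collision with their shapes unchanged.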
A fertile period of analytic proofs of many new properties of a variety of non-linear partial differential equations used in physics then followed.

3. Historical notes from the development of recent supercomputers

Several definitions of supercomputers exist, ranging from the fastest computer(s) of a given period to the only computers which, already on the day of their first delivery, are not fast enough.

3.1 The "pure FORTRAN" supercomputers up to 1969

From about 1956 to 1969 it was the reign of the FORTRAN machine in the field of the fastest computers: UNIVAC, Ferranti, IBM and CDC installed computers such as the LARC, the 7090, the 6600, the STRETCH, the ATLAS, the 360-91 and the 7600. During this period the speed of computers increased by a factor of two every two or three years, without significantly affecting the price and without the user having to restructure his programs. (In practice, of course, reliability problems and the painful evolution of the first multiprogrammed and, later, time-sharing operating systems left much to be desired in the service provided.)

The first delivery of the CDC 7600 in 1969 marked the end of the reign of the FORTRAN machine in supercomputing; it was about 50 times faster than the IBM 7090, its senior by 10 years. Its basic circuits were about 10 times faster, and another factor of five was achieved by internal improvements in architecture allowing more things to go on in parallel, both within the CPU itself and between the CPU and the stored information: multiple memory banks, instruction buffers and preloading of instructions, many more registers (programmable and otherwise) in the CPU, multiple and pipelined execution units (and the logic to keep track of what was going on where), etc.

From 1969 on, two major factors became dominant in determining which supercomputer could be developed:

a) the basic technology no longer improved rapidly (the CRAY X-MP2, first delivered in 1983, is based on circuits only about three times faster than those of the CDC 7600), and even these improvements became difficult to exploit fully, because the dominant factor in high-speed design has moved away from the switching speed of the active devices to the propagation delays in the interconnecting conductors;

b) higher speeds with the same technology can be obtained if one gives up the "simulation", vis-à-vis the user, of the basic "von Neumann" architecture of the CPU: fetch an instruction, decode it (being prepared for anything), fetch the operands (from anywhere), execute the operation, store the result anywhere, and start all over again for the next instruction.

The result is that new and faster "vector" computers are now available, but the user must be solicited much more. He must collaborate in order to organize the execution of his program better; he must tell the computer clearly, through the program, how the access to operands is organized when there is regularity; and he must avoid IFs where he can and try to organise the computation in long DO-loops (a small FORTRAN sketch at the end of this section illustrates the point). Ultimately, he must rethink the problem, the method and the coding to suit the internal organisation of the computer...

The next step, already true for the CRAY X-MP and soon to be imposed by the CRAY-2, the ETA-10 and the Fujitsu VP-400, will be that the user must split his problem into two, four or eight "pieces" which can execute concurrently on independent processors sharing a common main memory.

3.2 The "vector" supercomputers from 1974 to 1985

Two pioneering supercomputers with this new vector architecture were conceived from 1966 onwards and first delivered in 1973/1974: the CDC STAR-100 and the Texas Instruments ASC (Advanced Scientific Computer). They had rich instruction sets, including explicit vector instructions operating in a memory-to-memory fashion, and multiple "pipelines" to execute them. However, they already clearly demonstrated the potential problems of these architectures: high peak vector speeds on long vectors contrasting only too sharply with the scalar speeds achieved.
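As a minimal sketch of the restructuring this asks of the user (illustrative FORTRAN 77 with invented subroutine names, not taken from any particular machine's manual), compare a loop whose body hides behind an IF, which the early vectorizing compilers typically could not pipeline, with the same numerical kernel written as one long, regular DO-loop over contiguous operands:

C     Scalar-style coding: the IF inside the loop defeats
C     vectorization on the early vector machines.
      SUBROUTINE UPDATC(N, A, X, Y, MASK)
      INTEGER N, I
      REAL A, X(N), Y(N)
      LOGICAL MASK(N)
      DO 10 I = 1, N
         IF (MASK(I)) Y(I) = Y(I) + A*X(I)
   10 CONTINUE
      RETURN
      END

C     Vector-style coding: one long DO-loop with unit stride and no
C     branches, which a vectorizing compiler can issue as vector
C     instructions to the pipelines.
      SUBROUTINE UPDATV(N, A, X, Y)
      INTEGER N, I
      REAL A, X(N), Y(N)
      DO 20 I = 1, N
         Y(I) = Y(I) + A*X(I)
   20 CONTINUE
      RETURN
      END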