APPENDIX. Description of the LYaPAS Language* A. D. Zakrevskii


§ 1. General Principles of the LYaPAS Language

Purpose of the LYaPAS Language

In many cases, the complexity of contemporary devices for the automatic processing of discrete information becomes so great that the actual building of these devices, performed on the intuitive level, leads to enormous expenditures of resources, materials, and time in the design phase, and to obtaining unnecessarily large, and hardly complete, constructive solutions. It is therefore necessary to replace the intuitive methods of construction by a rigorous theory of discrete automaton design.

Today, the theory of discrete automaton design has been intensively developed, but the reduction of its results to the level of engineering practice has come up against two serious obstacles. The first is that the practical methods of synthesis developed within the framework of the present theory are too complicated for a large number of engineer-designers. The second obstacle is the arduousness of many of the well-known algorithms for solving practical problems of synthesis: in many cases, the realization of these algorithms entails the performance of millions, and even billions, of elementary computational steps.
There thus arises the necessity of automating the synthesis processes, where the effectiveness of such automation can be judged by the degree to which the following criteria are met:

a) the practical utilization of the developed system for automating synthesis must be sufficiently simple;

b) the number of problems susceptible of handling by the system must be quite large;

c) the system must provide high productivity;

d) use of the system must be conducive to the further development of the theory of synthesis;

e) the system must be sufficiently flexible that it will not be rendered obsolete by further development of synthesis theory but, rather, will be able to be improved on the basis of this development;

f) the system must be actually developed over a short period of time, and at relatively low cost.

*Translated from Logical Language for the Representation of Algorithms for the Synthesis of Relay Devices (editor M. A. Gavrilov), Nauka, Moscow (1966), pp. 7-38.

Taking the level of development of modern information-processing technology into account, we can assert that the best way to satisfy the aforementioned requirements is to base ourselves on the use of general-purpose digital computers (GPDC). To be sure, we should realize that modern GPDC are specialized, in the sense that they are basically designed to operate on numbers. Programming problems of a logical character, to which synthesis problems belong, are accompanied by significantly greater difficulties, and require special knowledge and resourcefulness. It is possible to alleviate this problem by developing a special language for the representation of the algorithms for solving logical problems, such as those involved in synthesizing discrete automata, and by developing, on the basis of this language, a system for the automation of the programming of the problems of interest to us.
In this paper, we expound the results of the development of such a language, called the LYaPAS language.* LYaPAS was developed in parallel with the programming program corresponding to it.† In its construction, we assumed the following quite explicit goals, although the formulation avowedly lacks precision:

a) the algorithms of interest to us must be expressed in LYaPAS compactly and comprehensibly, in order that these expressions may be published as they stand;

b) LYaPAS must make maximum use of the capabilities of modern computational techniques; at the same time, it must be independent of the details of programming technique and of the idiosyncrasies of specific computers, i.e., it must be maximally machine-independent;

c) the corresponding software for the specific computers must be quite compact: it must fit completely in the machine's operational memory, providing high-speed interpretation and compilation;

d) the capability must be provided of having all programmer and user interfaces with the computer on the level of the LYaPAS language; in particular, program debugging must be reduced to debugging algorithms, with the debugging results output in LYaPAS.

Types of Words

LYaPAS expressions are sequences of elements, called words. Each word is a symbol of some well-defined concept: variable, constant, certain actions on variables (represented by neighboring words in the sequence of words), etc. It is precisely this meaningful content in the elements of the language which justifies the use, in this case, of the term "word." Depending on their meaning, words are divided into types: the greater part of the words serve as symbols of operands and operators, while some words play ancillary roles, expressing relative connections of words in LYaPAS sentences, or defining the mode of interpretation of the succeeding symbols.
The basic LYaPAS operands are variables, the values of which are represented by 32-bit codes, interpreted, in general, not as numbers, but as subsets of some abstract set of 32 elements. Such variables are the standard ones, and operations with them on modern GPDC are performed quite simply.

The capability is also provided of operating with compound variables, the values of which are given by 32w-bit codes, where w is some integer greater than unity, and also with arrays, which also play the role of operands in the language, an array being a set whose elements are analogous to variables. A special system of subscripting array elements allows one to operate both with arrays as a whole, and with their individual elements. In particular, the subscripts of array elements can be specified by the values of standard variables of a special designation, called indices.

In addition to variables, constants, representable by 32-bit codes, can be LYaPAS operands. A special type of constant can be specified directly by LYaPAS words which, in the given case, do not simply play the role of symbols of some 32-bit constant, but provide, in their right part (seven bits), the constant itself, the remaining bits being filled with zeros. Such a method of giving constants (i.e., literals) is particularly convenient if these constants are interpreted as integers and are used, in particular, for the serial numeration of program statements.

*Translator's note: acronyms are not particularly amenable to translation. This one means: logical language for the representation of synthesis algorithms, the English acronym for which, LOLAFOTROSA, has obviously no future.

†Translator's note: by "programming program," the Russians can mean a compiler, an assembler, an operating system, or any combination of these. The English word "software" is an adequate equivalent in most cases.
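The two representations just described — a 32-bit code read as a subset of a 32-element set, and a literal carrying its value in the low seven bits — can be illustrated with a short sketch. This is hypothetical Python, not LYaPAS syntax; the function names are invented for illustration.

```python
# Hypothetical sketch (not LYaPAS syntax): a 32-bit code viewed as a
# subset of an abstract 32-element set, as described for standard variables.

MASK32 = 0xFFFFFFFF


def code_from_subset(elements):
    """Pack a subset of {0, ..., 31} into a 32-bit code: bit i set
    means element i belongs to the subset."""
    code = 0
    for e in elements:
        code |= 1 << e
    return code & MASK32


def literal(n):
    """A LYaPAS-style literal: the right part (seven bits) carries the
    constant itself, the remaining bits being filled with zeros."""
    return n & 0x7F  # only values 0..127 are representable directly


print(bin(code_from_subset({0, 3, 5})))  # 0b101001
print(literal(130))                      # 2 -- only the low seven bits survive
```

Interpreting such literals as small integers, as the text notes, makes them convenient for numbering program statements serially.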
We shall consider only those sequences of LYaPAS words which represent algorithms for the solution of certain problems, and we shall call them the L-programs, or simply the programs, for solving these problems.

Two Levels of LYaPAS

LYaPAS has two levels. The first level, closer to machine language, is simpler, and is designed for the representation of algorithms which are not too complex; the more compact software corresponds to this level. On this level, one can operate directly on those variables whose values are 32-bit codes, i.e., on simple variables, subscripts, and individual elements of arrays.

For the representation of operations on this level, there is a special collection of standard first-level operators, or l-operators, many of which are close to the elementary operations of general-purpose computers: bitwise disjunction, complementation, shifting, counting the number of ones in a code, etc. Particular roles are played by the operators of assigning to one variable the value of another, of incrementing the value of a variable, etc. A small group of operators is used for controlling the order of execution of program steps on the basis of current information. Provision is made for procedures to input initial values of operands and to output results in indexed form, i.e., the value of a variable is accompanied in the printout by the symbol of the variable. Provision is also made for automatic program debugging (in the LYaPAS language), this facility consisting of the printout of the corresponding process of program execution via the sequence of values of those variables and indices, the set of which is specified by a special code, as well as a printout of a trace of the program's trajectory. A special mechanism built into the debugging block provides for practical uniformity of output of monitoring information in all cycles of programs being debugged, independently of the mutual configurations of these cycles.
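The elementary first-level operations named above map directly onto familiar machine operations. The following Python sketch (again hypothetical, not LYaPAS notation) shows them on 32-bit codes; the masking to 32 bits stands in for the fixed word length of the machine.

```python
# A sketch of the elementary first-level operations the text lists:
# bitwise disjunction, complementation, shifting, and counting the
# number of ones in a 32-bit code. Names are illustrative only.

MASK32 = 0xFFFFFFFF


def disjunction(a, b):
    """Bitwise disjunction (set union of the two coded subsets)."""
    return (a | b) & MASK32


def complement(a):
    """Complementation within the fixed 32-bit word."""
    return ~a & MASK32


def shift_left(a, k):
    """Shifting, with bits falling off the 32-bit word discarded."""
    return (a << k) & MASK32


def count_ones(a):
    """Counting the number of ones in a code (cardinality of the subset)."""
    return bin(a & MASK32).count("1")


print(bin(disjunction(0b1100, 0b1010)))  # 0b1110
print(count_ones(complement(0)))         # 32
```

Read as set operations, disjunction is union and counting ones gives the cardinality of the subset a code represents, which is the interpretation the language favors.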
On the second level of LYaPAS, the L-operators are generalized to the cases when they can be used with compound variables (in programming practice, this generalization corresponds to the unification of several memory cells, i.e., to the usual method of handling multiple-precision arithmetic). Many of the first-level operations are also generalized to operate with arrays. New operations are introduced. In the first instance, these new operations include the operations of union, intersection, and subtraction of sets representable by given arrays, the operations of finding the upper and the lower bounds of sets, the operations of identifying, in a given set, the subset of elements possessing some given property, the operation of finding a minimal interval of a set which includes some given set, etc. On the second level, there is also provided a special mode for realizing operations of the first level. For example, if these operations are two-place operations, operating on arrays, they are performed on all pairs made up of one element from each of the two arrays in question; the resultant array is formed from the set of results after similar pairs have been identified.
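The second-level behavior just described can be sketched as follows. This is an interpretive Python illustration, not LYaPAS: arrays stand for sets, and the "special mode" is read here as applying a two-place operation to every pair drawn one from each array, with duplicate results merged ("similar pairs identified") in the resultant array.

```python
# Hypothetical illustration of the second-level array operations:
# union, intersection, and subtraction of sets represented by arrays,
# plus the pairwise mode for two-place first-level operations.

def union(A, B):
    return sorted(set(A) | set(B))


def intersection(A, B):
    return sorted(set(A) & set(B))


def subtraction(A, B):
    return sorted(set(A) - set(B))


def pairwise(op, A, B):
    """Apply a two-place operation to all pairs made up of one element
    from each array; duplicates among the results are merged."""
    return sorted({op(a, b) for a in A for b in B})


A, B = [1, 2, 3], [2, 3, 4]
print(union(A, B))        # [1, 2, 3, 4]
print(subtraction(A, B))  # [1]
print(pairwise(lambda a, b: a | b, A, B))
```

Under this reading, the pairwise mode lifts any two-place first-level operation to whole arrays without the programmer writing an explicit double loop.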