Introduction to Programming with Arrays Using ELI


Introduction to Programming with Arrays using ELI
by Wai-Mee Ching
July, Nov., Dec. 2014. Copyright © 2014

Contents

1. Array, List and Primitive Operations in ELI ... 3
   1.1 Computers and Programming Languages ... 3
   1.2 ELI System and its Data Types ... 6
   1.3 Shape of Data, Reshape and Data Conversion ... 10
   1.4 Mathematical Computations ... 13
   1.5 Comparisons, Selection, Membership and Index of ... 17
   1.6 Array Indexing, Indexed Assignment and Taking Sections ... 21
   1.7 Array Transformations ... 26
   1.8 Operators and Derived Functions ... 31
   1.9 Lists and Operations on Lists ... 34
2. Defined Functions, Control Structures and Files ... 38
   2.1 Defined Functions, Short-Form and Order of Evaluation ... 38
   2.2 One-liner Functions ... 41
       2.2.1 number conversion and lexicographic ordering ... 41
       2.2.2 a queuing network model ... 44
   2.3 Control Structures ... 46
   2.4 Recursion ... 47
       2.4.1 sorting ... 47
       2.4.2 tower of Hanoi ... 50
       2.4.3 determinant ... 51
   2.5 Script Files ... 52
3. Array Implementation of Data Structures ... 54
   3.1 Emulation of PASCAL Data Structures in ELI ... 54
       3.1.1 a small database for a company ... 54
       3.1.2 implementation of linked lists in ELI ... 58
       3.1.3 implementation of queries to a database ... 61
   3.2 Binary Trees ... 63
       3.2.1 tree representation ... 63
       3.2.2 tree operations ... 65
       3.2.3 tree traversals ... 68
   3.3 Quad Trees ... 70
   3.4 Graph Algorithms ... 72
       3.4.1 graph representations ... 72
       3.4.2 depth first search ... 73
       3.4.3 single-source shortest path ... 74
4. Computational Algorithms ... 77
   4.1 Iterative Method ... 77
   4.2 Simple Encryption and Monte Carlo Method ... 79
   4.3 Sparse Matrix Computation ... 82
References ... 85

The process of preparing programs for a digital computer is especially attractive, not only because it can be economically and scientifically rewarding, but also because it can be an aesthetic experience much like composing poetry or music.
   - D. Knuth, The Art of Computer Programming

By relieving the brain of all unnecessary work, a good notation sets it free to concentrate on more advanced problems, and in effect increases the mental power of the race.
   - A. N. Whitehead, quoted in Ken Iverson's Turing Lecture

Preface

The mathematician and computer scientist Jacob T. Schwartz once remarked that programming (and programming languages) has two sides: internal and external. The internal side is mathematical in nature, concerning the algorithmic transformation of data; the external side is mundane, concerning system interfaces and human factors. In the programming language APL, designed by Ken Iverson, idiosyncrasies arising from the external side have been kept to a bare minimum, so a programmer can concentrate his effort on the algorithmic side of a programming job, and its programming environment is particularly simple. APL treats arrays as its primary data structure and provides a comprehensive set of array operations as language primitives, each denoted by one character in a special font (the APL font). As Alan Perlis once pointed out, APL encourages a dataflow style of programming, i.e. organizing tasks as chains of functions and operators on arrays, or what we call array-oriented programming. This combination of succinct notation and powerful primitives allows an APL programmer a clearer global view of his programming constructs than in any other language, with the possible exception of SETL, since many low-level details have been suppressed. But while the APL font used to denote primitives achieves succinctness with exquisite beauty, it presents difficulties for input/output and for exchanging code segments in mail, and APL systems often do not communicate easily with ASCII-based text files.

ELI is an array programming language based on APL in which each primitive operation is denoted by one or two ASCII characters, maintaining APL's one-symbol-per-primitive principle and preserving APL's dataflow style of array-oriented programming. ELI has the same workspace-centered programming environment as APL, and it also makes input/output of ASCII text files containing code as well as data much easier. ELI has all the language features of classical APL [9]; it does not have the general array feature of APL2 [13], but it provides lists and basic operations on lists to deal with irregular or non-homogeneous data. In addition, ELI supports complex numbers, a symbol type and temporal data.

Any discussion of program design inevitably brings up the question of efficiency. There are actually two different kinds of efficiency of concern here: human programming efficiency and machine execution efficiency. In other words, we would like to know how quickly a working program can be implemented, and how fast it executes.
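Although this excerpt shows no ELI code, the dataflow style the preface describes is easy to sketch. The following Python/NumPy fragment is an illustration of that array-oriented spirit, not ELI syntax (in ELI each of these whole-array operations would be a one- or two-character ASCII primitive); the data and variable names are invented for the example:

```python
import numpy as np

# Array-oriented style: whole-array primitives chained together,
# with no explicit loops.
sales = np.array([[120,  95, 143],
                  [101,  88, 157]])     # 2 regions x 3 quarters

per_quarter = sales.sum(axis=0)         # reduction along the region axis
growth      = np.diff(per_quarter)      # pairwise differences
best        = int(per_quarter.argmax()) # index of the best quarter
print(per_quarter, growth, best)        # [221 183 300] [-38 117] 2
```

The loop over elements is implicit in each primitive, which is what Perlis meant by a dataflow style: the program reads as a chain of transformations on whole arrays.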
Recommended publications
  • An Array-Oriented Language with Static Rank Polymorphism
    An array-oriented language with static rank polymorphism. Justin Slepak, Olin Shivers, and Panagiotis Manolios, Northeastern University. {jrslepak,shivers,pete}@ccs.neu.edu

    Abstract. The array-computational model pioneered by Iverson's languages APL and J offers a simple and expressive solution to the "von Neumann bottleneck." It includes a form of rank, or dimensional, polymorphism, which renders much of a program's control structure implicit by lifting base operators to higher-dimensional array structures. We present the first formal semantics for this model, along with the first static type system that captures the full power of the core language. The formal dynamic semantics of our core language, Remora, illuminates several of the murkier corners of the model. This allows us to resolve some of the model's ad hoc elements in more general, regular ways. Among these, we can generalise the model from SIMD to MIMD computations, by extending the semantics to permit functions to be lifted to higher-dimensional arrays in the same way as their arguments. Our static semantics, a dependent type system of carefully restricted power, is capable of describing array computations whose dimensions cannot be determined statically. The type-checking problem is decidable and the type system is accompanied by the usual soundness theorems. Our type system's principal contribution is that it serves to extract the implicit control structure that provides so much of the language's expressive power, making this structure explicitly apparent at compile time.

    1 The Promise of Rank Polymorphism

    Behind every interesting programming language is an interesting model of computation.
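    The "lifting" the abstract describes is rank polymorphism in action. NumPy's broadcasting gives a rough, dynamically checked feel for what Remora's type system captures statically; this is an illustration in Python, not Remora code:

```python
import numpy as np

# Scalar operators lift implicitly over higher-rank arguments.
v = np.array([1.0, 2.0, 3.0])       # rank 1
m = np.arange(6.0).reshape(2, 3)    # rank 2

print(v + 10)      # scalar lifted over a vector: [11. 12. 13.]
print(m + v)       # vector lifted over each row of the matrix
print(np.sqrt(m))  # unary scalar function lifted to rank 2
```

    Remora's contribution is to make this implicit control structure checkable at compile time, even when the array dimensions are not statically known.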
  • R from a Programmer's Perspective: Accompanying Manual for an R Course Held by M. Göker
    DSMZ R programming course: R from a programmer's perspective. Accompanying manual for an R course held by M. Göker at the DSMZ, 11/05/2012 & 25/05/2012. Slightly improved version, 10/09/2012. This document is distributed under the CC BY 3.0 license. See http://creativecommons.org/licenses/by/3.0 for details.

    Introduction. The purpose of this course is to cover aspects of R programming that are either unlikely to be covered elsewhere or likely to be surprising for programmers who have worked with other languages. The course thus tries not to be comprehensive but rather complementary to other sources of information. Also, the material had to be compiled in a short time and perhaps suffers from important omissions. For the same reason, potential participants should not expect a fully fleshed-out presentation but a combination of a text-only document (this one) with example code comprising the solutions of the exercises. The topics covered include R's general features as a programming language, a recapitulation of R's type system, advanced coding of functions, error handling, the use of attributes in R, object-oriented programming in the S3 system, and constructing R packages (in this order). The expected audience comprises R users whose own code largely consists of self-written functions, as well as programmers who are fluent in other languages and have some experience with R. Interactive users of R without programming experience elsewhere are unlikely to benefit from this course because quite a few programming skills cannot be covered here but have to be presupposed.
  • Parallel Functional Programming with APL
    Parallel Functional Programming with APL (Parallel Functional Programming, APL Lecture 1). Martin Elsman, Department of Computer Science, University of Copenhagen (DIKU). December 19, 2016.

    Outline: course outline; introduction to APL: what APL is, APL implementations and material, APL scalar operations, APL (1-dimensional) vector computations, declaring APL functions (dfns), APL multi-dimensional arrays, iterations, declaring APL operators, function trains, examples, reading.

    Course outline. Teachers: Martin Elsman (ME), Ken Friis Larsen (KFL), Andrzej Filinski (AF), and Troels Henriksen (TH). Location: lectures in Small Aud, Universitetsparken 1 (UP1); labs in Old Library, UP1. Course description: see http://kurser.ku.dk/course/ndak14009u/2016-2017. Schedule (weeks 47-51 and 1-3): Monday 13-15 lectures (Intro/Futhark, Parallel Haskell, SNESL, APL, project), Monday 15-17 labs, Wednesday 13-15 lectures (Futhark, Parallel Haskell, SNESL, invited lecture by John Reppy, APL/projects); weeks 1-3 are devoted to the project.

    What is APL? An ancient array programming language, but still used! Pioneered by Ken E. Iverson in the 1960's. E. Dijkstra: "APL is a mistake, carried through to perfection." There are quite a few APL programmers around (e.g., HIPERFIT partners). Very concise notation for expressing array operations. Has a large set of functional, essentially parallel, multi-dimensional, second-order array combinators.
  • Programming Language
    Programming language. A programming language is a formal language, which comprises a set of instructions that produce various kinds of output. Programming languages are used in computer programming to implement algorithms. Most programming languages consist of instructions for computers. There are programmable machines that use a set of specific instructions, rather than general programming languages. Early ones preceded the invention of the digital computer, the first probably being the automatic flute player described in the 9th century by the brothers Musa in Baghdad, during the Islamic Golden Age.[1] Since the early 1800s, programs have been used to direct the behavior of machines such as Jacquard looms, music boxes and player pianos.[2] The programs for these machines (such as a player piano's scrolls) did not produce different behavior in response to different inputs or conditions.

    [Figure: the source code for a simple computer program written in the C programming language; when compiled and run, it gives the output "Hello, world!".]

    Thousands of different programming languages have been created, and more are being created every year. Many programming languages are written in an imperative form (i.e., as a sequence of operations to perform) while other languages use the declarative form (i.e. the desired result is specified, not how to achieve it). The description of a programming language is usually split into the two components of syntax (form) and semantics (meaning). Some languages are defined by a specification document (for example, the C programming language is specified by an ISO Standard) while other languages (such as Perl) have a dominant implementation that is treated as a reference.
  • RAL-TR-1998-060 Co-Array Fortran for Parallel Programming
    RAL-TR-1998-060: Co-Array Fortran for parallel programming, by R. W. Numrich (Silicon Graphics, Inc., 655 Lone Oak Drive, Eagan, MN 55121, USA) and J. K. Reid (Department for Computation and Information, Rutherford Appleton Laboratory, Oxon OX11 0QX, UK). August 1998. Available by anonymous ftp from matisa.cc.rl.ac.uk in directory pub/reports in the file nrRAL98060.ps.gz.

    Abstract. Co-Array Fortran, formerly known as F--, is a small extension of Fortran 95 for parallel processing. A Co-Array Fortran program is interpreted as if it were replicated a number of times and all copies were executed asynchronously. Each copy has its own set of data objects and is termed an image. The array syntax of Fortran 95 is extended with additional trailing subscripts in square brackets to give a clear and straightforward representation of any access to data that is spread across images. References without square brackets are to local data, so code that can run independently is uncluttered. Only where there are square brackets, or where there is a procedure call and the procedure contains square brackets, is communication between images involved. There are intrinsic procedures to synchronize images, return the number of images, and return the index of the current image. We introduce the extension; give examples to illustrate how clear, powerful, and flexible it can be; and provide a technical definition.

    Categories and subject descriptors: D.3 [PROGRAMMING LANGUAGES]. General terms: parallel programming. Additional key words and phrases: Fortran.
  • Introduction to Programming Language
    INTRODUCTION TO PROGRAMMING LANGUAGE

    Topic Objective: At the end of this topic the student will be able to understand:
    • History of Computer Programming
    • C++

    Definition/Overview: A personal computer (PC) is any general-purpose computer whose original sales price, size, and capabilities make it useful for individuals, and which is intended to be operated directly by an end user, with no intervening computer operator. Today a PC may be a desktop computer, a laptop computer or a tablet computer. The most common operating systems are Microsoft Windows, Mac OS X and Linux, while the most common microprocessors are x86-compatible CPUs, ARM architecture CPUs and PowerPC CPUs. Software applications for personal computers include word processing, spreadsheets, databases, games, and a myriad of personal productivity and special-purpose software. Modern personal computers often have high-speed or dial-up connections to the Internet, allowing access to the World Wide Web and a wide range of other resources.

    Key Points:

    1. History of Computer Programming

    The capabilities of the PC have changed greatly since the introduction of electronic computers. By the early 1970s, people in academic or research institutions had the opportunity for single-person use of a computer system in interactive mode for extended durations, although these systems would still have been too expensive to be owned by a single person. The introduction of the microprocessor, a single chip with all the circuitry that formerly occupied large cabinets, led to the proliferation of personal computers after about 1975. Early personal computers, generally called microcomputers, were often sold in electronic kit form and in limited volumes, and were of interest mostly to hobbyists and technicians.
  • Tangent: Automatic Differentiation Using Source-Code Transformation for Dynamically Typed Array Programming
    Tangent: Automatic differentiation using source-code transformation for dynamically typed array programming. Bart van Merriënboer (MILA, Google Brain), Dan Moldovan (Google Brain), Alexander B. Wiltschko (Google Brain).

    Abstract. The need to efficiently calculate first- and higher-order derivatives of increasingly complex models expressed in Python has stressed or exceeded the capabilities of available tools. In this work, we explore techniques from the field of automatic differentiation (AD) that can give researchers expressive power, performance and strong usability. These include source-code transformation (SCT), flexible gradient surgery, efficient in-place array operations, and higher-order derivatives. We implement and demonstrate these ideas in the Tangent software library for Python, the first AD framework for a dynamic language that uses SCT.

    1 Introduction

    Many applications in machine learning rely on gradient-based optimization, or at least the efficient calculation of derivatives of models expressed as computer programs. Researchers have a wide variety of tools from which they can choose, particularly if they are using the Python language [21, 16, 24, 2, 1]. These tools can generally be characterized as trading off research or production use cases, and can be divided along these lines by whether they implement automatic differentiation using operator overloading (OO) or SCT. SCT affords more opportunities for whole-program optimization, while OO makes it easier to support convenient syntax in Python, like data-dependent control flow, or advanced features such as custom partial derivatives. We show here that it is possible to offer the programming flexibility usually thought to be exclusive to OO-based tools in an SCT framework.
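    For readers unfamiliar with AD itself, the operator-overloading approach that the abstract contrasts with SCT fits in a few lines. A minimal forward-mode sketch with dual numbers in Python (my illustration; Tangent's actual mechanism is source-code transformation, not this):

```python
import math

# Forward-mode AD with dual numbers: carry (value, derivative) pairs
# through arithmetic, applying the product and chain rules.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)  # product rule

def sin(x):
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)  # chain rule

# d/dx [x*x + sin(x)] at x = 1.5  ->  2x + cos(x)
x = Dual(1.5, 1.0)            # seed the input's derivative with 1
y = x * x + sin(x)
print(y.val, y.dot)           # f(1.5) and f'(1.5)
```

    An SCT tool like Tangent instead rewrites the source of the function into a new Python function that computes the derivative, which is what enables the whole-program optimization the abstract mentions.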
  • Compendium of Technical White Papers
    COMPENDIUM OF TECHNICAL WHITE PAPERS from Kx

    Contents

    Machine Learning
    1. Machine Learning in kdb+: kNN classification and pattern recognition with q ... 2
    2. An Introduction to Neural Networks with kdb+ ... 16

    Development Insight
    3. Compression in kdb+ ... 36
    4. Kdb+ and Websockets ... 52
    5. C API for kdb+ ... 76
    6. Efficient Use of Adverbs ... 112

    Optimization Techniques
    7. Multi-threading in kdb+: Performance Optimizations and Use Cases ... 134
    8. Kdb+ tick Profiling for Throughput Optimization ... 152
    9. Columnar Database and Query Optimization ... 166

    Solutions
    10. Multi-Partitioned kdb+ Databases: An Equity Options Case Study ... 192
    11. Surveillance Technologies to Effectively Monitor Algo and High Frequency Trading
  • Programming Languages: Application and Interpretation
    Programming Languages: Application and Interpretation. Shriram Krishnamurthi, Brown University. Copyright © 2003, Shriram Krishnamurthi. This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 United States License. If you create a derivative work, please include the version information below in your attribution. This book is available free of cost from the author's Web site. This version was generated on 2007-04-26.

    Preface

    The book is the textbook for the programming languages course at Brown University, which is taken primarily by third and fourth year undergraduates and beginning graduate (both MS and PhD) students. It seems very accessible to smart second year students too, and indeed those are some of my most successful students. The book has been used at over a dozen other universities as a primary or secondary text. The book's material is worth one undergraduate course worth of credit.

    This book is the fruit of a vision for teaching programming languages by integrating the "two cultures" that have evolved in its pedagogy. One culture is based on interpreters, while the other emphasizes a survey of languages. Each approach has significant advantages but also huge drawbacks. The interpreter method writes programs to learn concepts, and has at its heart the fundamental belief that by teaching the computer to execute a concept we more thoroughly learn it ourselves. While this reasoning is internally consistent, it fails to recognize that understanding definitions does not imply we understand the consequences of those definitions. For instance, the difference between strict and lazy evaluation, or between static and dynamic scope, is only a few lines of interpreter code, but the consequences of these choices are enormous.
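    The point that only a few lines separate strict from lazy evaluation can be made concrete. A minimal Python sketch of the two calling conventions (my own naming and illustration, not code from the book):

```python
# Strict: evaluate the argument before the call. Lazy: pass an
# unevaluated thunk and force it only if the function needs it.

def strict_call(f, arg_thunk):
    return f(arg_thunk())        # argument evaluated up front

def lazy_call(f, arg_thunk):
    return f(arg_thunk)          # thunk passed through unevaluated

def const_one(_):
    return 1                     # never uses its argument

boom = lambda: 1 // 0            # raises if ever evaluated

print(lazy_call(const_one, boom))        # 1: the thunk is never forced
try:
    print(strict_call(const_one, boom))
except ZeroDivisionError:
    print("strict call forced the bad argument")
```

    The two functions differ by a single call, yet one program terminates normally and the other forces an error, which is exactly the "enormous consequences" the preface describes.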
  • Handout 16: J Dictionary
    J Dictionary. Roger K.W. Hui and Kenneth E. Iverson. Copyright © 1991-2002 Jsoftware Inc. All Rights Reserved. Last updated: 2002-09-10. www.jsoftware.com

    Table of Contents: 1 Introduction; 2 Mnemonics; 3 Ambivalence; 4 Verbs and Adverbs; 5 Punctuation; 6 Forks; 7 Programs; 8 Bond Conjunction; 9 Atop Conjunction; 10 Vocabulary; 11 Housekeeping; 12 Power and Inverse; 13 Reading and Writing; 14 Format; 15 Partitions; 16 Defined Adverbs; 17 Word Formation; 18 Names and Displays; 19 Explicit Definition; 20 Tacit Equivalents; 21 Rank; 22 Gerund and Agenda; 23 Recursion; 24 Iteration; 25 Trains; 26 Permutations; 27 Linear Functions; 28 Obverse and Under; 29 Identity Functions and Neutrals; 30 Secondaries; 31 Sample Topics; 32 Spelling; 33 Alphabet and Numbers; 34 Grammar; 35 Function Tables; 36 Bordering a Table; 37 Tables (Letter Frequency); 38 Tables; 39 Classification; 40 Disjoint Classification (Graphs); 41 Classification I; 42 Classification II; 43 Sorting; 44 Compositions I; 45 Compositions II; 46 Junctions; 47 Partitions I; 48 Partitions II; 49 Geometry; 50 Symbolic Functions; 51 Directed Graphs; 52 Closure; 53 Distance; 54 Polynomials; 55 Polynomials (Continued); 56 Polynomials in Terms of Roots; 57 Polynomial Roots I; 58 Polynomial Roots II; 59 Polynomials: Stopes; 60 Dictionary; 61 I. Alphabet and Words; 62 II. Grammar; 63 A. Nouns; 64 B. Verbs; 65 C. Adverbs and Conjunctions; 66 D. Comparatives; 67 E. Parsing and Execution; 68 F. Trains; 69 G. Extended and Rational Arithmetic; 70 H. Frets and Scripts; 71 I. Locatives; 72 J. Errors and Suspensions; 73 III. Definitions; 74 Vocabulary; 75 = Self-Classify - Equal; 76 =. Is (Local); 77 < Box - Less Than; 78 <.
  • Speeding Up SpMV for Power-Law Graph Analytics by Enhancing Locality & Vectorization
    Speeding Up SpMV for Power-Law Graph Analytics by Enhancing Locality & Vectorization. Serif Yesil, Azin Heidarshenas and Josep Torrellas (Dept. of Computer Science, University of Illinois at Urbana-Champaign), Adam Morrison (Blavatnik School of Computer Science, Tel Aviv University).

    Abstract: Graph analytics applications often target large-scale web and social networks, which are typically power-law graphs. Graph algorithms can often be recast as generalized Sparse Matrix-Vector multiplication (SpMV) operations, making SpMV optimization important for graph analytics. However, executing SpMV on large-scale power-law graphs results in highly irregular memory access patterns with poor cache utilization. Worse, we find that existing SpMV locality and vectorization optimizations are largely ineffective on modern out-of-order (OOO) processors: they are not faster (or only marginally so) than the standard Compressed Sparse Row (CSR) SpMV implementation. To improve performance for power-law graphs on modern OOO processors, we propose Locality-Aware Vectorization (LAV).

    The data-dependent behavior of some accesses makes them hard to predict and optimize for. As a result, SpMV on large power-law graphs becomes memory bound. To address this challenge, previous work has focused on increasing SpMV's Memory-Level Parallelism (MLP) using vectorization [9], [10] and/or on improving memory access locality by rearranging the order of computation. The main techniques for improving locality are binning [11], [12], which translates indirect memory accesses into efficient sequential accesses, and cache blocking [13], which processes the matrix in blocks.
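    For reference, the CSR baseline the abstract mentions is straightforward. A minimal Python sketch of CSR SpMV (not the paper's LAV kernel); the indirect loads through col_idx are the irregular accesses that hurt cache utilization on power-law graphs:

```python
import numpy as np

# CSR stores a matrix as three arrays: row_ptr delimits each row's
# slice of (col_idx, vals). SpMV computes y = A @ x row by row.
def csr_spmv(row_ptr, col_idx, vals, x):
    n = len(row_ptr) - 1
    y = np.zeros(n)
    for i in range(n):
        start, end = row_ptr[i], row_ptr[i + 1]
        # x[col_idx[...]] is an indirect, data-dependent gather.
        y[i] = np.dot(vals[start:end], x[col_idx[start:end]])
    return y

# 3x3 example: [[1,0,2],[0,3,0],[4,0,5]]
row_ptr = np.array([0, 2, 3, 5])
col_idx = np.array([0, 2, 1, 0, 2])
vals    = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(csr_spmv(row_ptr, col_idx, vals, np.ones(3)))  # [3. 3. 9.]
```

    In a power-law graph a few very dense rows and many sparse rows share this loop, so the gather addresses jump unpredictably across x, which is the locality problem LAV targets.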
  • Sparse-Matrix Representation of Spiking Neural P Systems for GPUs
    Sparse-matrix Representation of Spiking Neural P Systems for GPUs. Miguel Á. Martínez-del-Amor (1), David Orellana-Martín (1), Francis G.C. Cabarle (2), Mario J. Pérez-Jiménez (1), Henry N. Adorna (2). (1) Research Group on Natural Computing, Department of Computer Science and Artificial Intelligence, Universidad de Sevilla, Avda. Reina Mercedes s/n, 41012 Sevilla, Spain. (2) Algorithms and Complexity Laboratory, Department of Computer Science, University of the Philippines Diliman, Diliman 1101 Quezon City, Philippines.

    Summary. Current parallel simulation algorithms for Spiking Neural P (SNP) systems are based on a matrix representation. This helps to harness the inherent parallelism in algebraic operations, such as vector-matrix multiplication. Although it has been convenient for the first parallel simulators running on Graphics Processing Units (GPUs), such as CuSNP, there are some bottlenecks to cope with. For example, the matrix representation of SNP systems with a low-connectivity-degree graph leads to sparse matrices, i.e. matrices containing more zeros than actual values. Having to deal with sparse matrices downgrades the performance of the simulators because memory and time are wasted. However, sparse matrices are a well-known problem in parallel computing with GPUs, and several solutions and algorithms are available in the literature. In this paper, we briefly analyse some of these ideas and apply them to represent some variants of SNP systems. We also conclude which variant better suits a sparse-matrix representation.

    Keywords: Spiking Neural P systems, Simulation Algorithm, Sparse Matrix Representation, GPU computing, CUDA

    1 Introduction

    Spiking Neural P (SNP) systems [9] are a type of P systems [16] composed of a directed graph inspired by how neurons are interconnected by axons and synapses.
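    To see why a low-connectivity graph pushes a matrix representation toward sparse storage, here is a small SciPy sketch comparing dense and CSR footprints (illustrative sizes only, not taken from the paper):

```python
import numpy as np
from scipy import sparse

# A graph-like n x n matrix with ~3 nonzeros per row out of n columns,
# mimicking a low-connectivity-degree system.
n, nnz_per_row = 10_000, 3
rng = np.random.default_rng(0)
rows = np.repeat(np.arange(n), nnz_per_row)
cols = rng.integers(0, n, size=n * nnz_per_row)
vals = np.ones(n * nnz_per_row)

A = sparse.csr_matrix((vals, (rows, cols)), shape=(n, n))
dense_bytes = n * n * 8                        # every cell stored as a float64
csr_bytes = A.data.nbytes + A.indices.nbytes + A.indptr.nbytes
print(f"dense: {dense_bytes/1e6:.0f} MB, CSR: {csr_bytes/1e3:.0f} KB")

x = np.ones(n)
y = A @ x     # the vector-matrix step of a matrix-based simulation
```

    The dense form stores every zero explicitly while CSR stores only the actual connections, which is the memory waste the summary identifies in simulators like CuSNP.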