Keir, Paul G (2012) Design and Implementation of an Array Language for Computational Science on a Heterogeneous Multicore Architecture


Keir, Paul G (2012) Design and implementation of an array language for computational science on a heterogeneous multicore architecture. PhD thesis. http://theses.gla.ac.uk/3645/

Copyright and moral rights for this thesis are retained by the author. A copy can be downloaded for personal non-commercial research or study, without prior permission or charge. This thesis cannot be reproduced or quoted extensively from without first obtaining permission in writing from the author. The content must not be changed in any way or sold commercially in any format or medium without the formal permission of the author. When referring to this work, full bibliographic details including the author, title, awarding institution and date of the thesis must be given.

Glasgow Theses Service
http://theses.gla.ac.uk/
[email protected]

Design and Implementation of an Array Language for Computational Science on a Heterogeneous Multicore Architecture

Paul Keir

Submitted in fulfilment of the requirements for the degree of Doctor of Philosophy

School of Computing Science
College of Science and Engineering
University of Glasgow

July 8, 2012

© Paul Keir, 2012

Abstract

The packing of multiple processor cores onto a single chip has become a mainstream solution to fundamental physical issues relating to the microscopic scales employed in the manufacture of semiconductor components. Multicore architectures provide lower clock speeds per core, while aggregate floating-point capability continues to increase.

Heterogeneous multicore chips, such as the Cell Broadband Engine (CBE) and modern graphics chips, also address the related issue of an increasing mismatch between high processor speeds and huge latency to main memory. Such chips tackle this memory wall through the provision of addressable caches, increased bandwidth to main memory, and fast thread context switching. An associated cost is often the reduced functionality of the individual accelerator cores, and the increased complexity involved in their programming.

This dissertation investigates the application of a programming language supporting the first-class use of arrays, and capable of automatically parallelising array expressions, to the heterogeneous multicore domain of the CBE, as found in the Sony PlayStation® 3 (PS3). The language is a pre-existing and well-documented proper subset of Fortran, known as the ‘F’ programming language. A bespoke compiler, referred to as E♯, is developed to support this aim, and written in the Haskell programming language.

The output of the compiler is in an extended C++ dialect known as Offload C++, which targets the PS3. A significant feature of this language is its use of multiple, statically typed, address spaces. By focusing on generic, polymorphic interfaces for both the generated and hand-constructed code, a number of interesting design patterns relating to memory locality are introduced.

A suite of medium-sized (100–700 lines), real-world benchmark programs is used to evaluate the performance, correctness, and scalability of the compiler technology. Absolute speedup values, well in excess of one, are observed for all of the programs.

The work ultimately demonstrates that an array language can significantly reduce the effort expended to utilise a parallel heterogeneous multicore architecture, while retaining high performance. A substantial, related advantage in using standard ‘F’ is that any Fortran compiler can create debuggable, and competitively performing, serial programs.
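To make the automatic parallelisation of array expressions concrete, consider the following minimal, standalone C++ sketch. It is illustrative only, and not output of the E♯ compiler: it shows how a whole-array ‘F’ assignment such as c = a + b * 2.0 can be lowered to an elementwise loop whose iterations are mutually independent, and hence candidates for distribution across the SPE cores of the CBE.

    // Hypothetical hand-written illustration, not generated E♯ output: a
    // whole-array expression lowered to an elementwise loop. Because each
    // element of c depends only on the matching elements of a and b, the
    // iterations carry no dependences and may safely run in parallel.
    #include <cstddef>
    #include <iostream>
    #include <vector>

    int main() {
      std::vector<double> a{1.0, 2.0, 3.0, 4.0};
      std::vector<double> b{0.5, 0.5, 0.5, 0.5};
      std::vector<double> c(a.size());

      for (std::size_t i = 0; i < c.size(); ++i)
        c[i] = a[i] + b[i] * 2.0;               // the 'F' expression c = a + b * 2.0

      for (double x : c) std::cout << x << ' '; // prints: 2 3 4 5
      std::cout << '\n';
      return 0;
    }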
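The statically typed address spaces of Offload C++ can be given a similar flavour in plain, standard C++. In the sketch below, the space tags, the located_ptr class template, and the load function are hypothetical stand-ins for the compiler-enforced pointer qualifiers of Offload C++ and the pointer locality class developed in Chapter 5; the aim is only to show how encoding locality in the type system turns a mistaken cross-space access into a compile-time error.

    // A minimal sketch, assuming hypothetical names throughout: locality is
    // recorded as a template parameter, so pointers into different address
    // spaces have different, incompatible types.
    #include <iostream>

    struct host_space  {};   // "outer" memory: main RAM, seen by the PPE
    struct local_space {};   // "inner" memory: an SPE's local store

    template <typename T, typename Space>
    class located_ptr {
      T* p_;
    public:
      explicit located_ptr(T* p) : p_(p) {}
      T& operator*() const { return *p_; }
    };

    // Generic over the address space: the kind of polymorphic interface
    // the abstract refers to, serving host and accelerator code alike.
    template <typename T, typename Space>
    T load(located_ptr<T, Space> p) { return *p; }

    int main() {
      int x = 42;
      located_ptr<int, host_space> hp(&x);
      std::cout << load(hp) << '\n';            // fine: spaces agree

      // located_ptr<int, local_space> lp = hp; // would not compile:
      //                                        // distinct instantiations
      return 0;
    }

In Offload C++ itself this discipline is supplied by the compiler's address-space-aware pointer types rather than by a library wrapper, but the effect is the same: locality errors are caught at compile time.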
Acknowledgements

I would firstly like to thank my wonderful wife, Steffi, not only for her moral support, patience, and above all, love; but also for her practical advice, albeit through the refreshingly alternative perspective of a doctor of letters.

Many thanks are also due to my supervisor, Dr. Paul Cockshott, who possesses an uncanny, raw intelligence, and a knack for constructing the very question one might hope to avoid, to the benefit of all in earshot. Paul was exactly the supervisor I needed, and I greatly appreciate his kindness, candour, wisdom, and tolerance over these last years.

I have seen far more of my second supervisor, Dr. John O'Donnell, than is prescribed by regulation, and I have benefited from our regular meetings more than he may know. Thank you, John, for patiently introducing me to the mathematics behind programming languages.

Thank you to all the academic and administrative staff in the School of Computing Science.

Dr. Alastair Donaldson, my first industrial supervisor, thank you. You provided a solid operational structure; enthusiasm for the topic; and nothing less than a complete and thorough answer to any question I ever posed. Regrettably, time with my second industrial supervisor, Dr. Andrew Cook, was brief. Your assistance was nevertheless gratefully received. Andrew Richards, my third and final industrial supervisor, please accept my thanks also. Without your early decisions and integrity I would not be writing this. Thank you, Uwe Dolinsky, for your attention to detail, and diligent responses to my technical emails. To all those at Codeplay Software Ltd., many thanks for your assistance.

To Prof. Lenore Mullin, thank you kindly for being our SICSA distinguished visitor, and introducing the school to the mathematics of arrays. Your wisdom and ideas remain a source of inspiration long after your departure.

To my dear colleague and companion, Youssef Gdura, you have reminded me of the essential quality of a good sense of humour. Your constant humility and equanimity, even in the face of a turbulent period for your homeland, remains a valuable lesson. Thanks also go to Mark Shannon for being a friend on the road. Your certainty with regard to both technical and collegiate issues provided an essential yin to my doubting yang.

Thank you to Wikipedia® and the Wikimedia Foundation Inc.

Contents

1 Introduction 10
  1.1 Thesis Statement 12
  1.2 Contributions 13
  1.3 Publications 14
  1.4 Outline 15
2 Related Work 17
  2.1 Library Support for Parallelism 18
    2.1.1 The Cell SDK 18
    2.1.2 Threading Building Blocks 20
  2.2 Annotations, Directives and Pragmas 20
    2.2.1 High Performance Fortran 21
    2.2.2 OpenMP 23
    2.2.3 Cell Superscalar 29
  2.3 Language Extensions 30
    2.3.1 Sieve C++ 31
    2.3.2 OpenCL 33
    2.3.3 Cilk++ 34
    2.3.4 Deterministic Parallel Java 34
  2.4 Array Languages 35
    2.4.1 APL 35
    2.4.2 Fortran 90 37
    2.4.3 ZPL 37
    2.4.4 Single Assignment C 39
    2.4.5 Vector Pascal 40
  2.5 Partitioned Global Address Space Languages 42
    2.5.1 Co-Array Fortran 43
    2.5.2 Unified Parallel C 44
    2.5.3 MPI Virtual Process Topologies 47
3 Methodology 48
  3.1 The Cell Broadband Engine 49
  3.2 Offload C++ 52
    3.2.1 Launching Threads 52
    3.2.2 Pointer Locality 53
    3.2.3 Template Declarations 56
    3.2.4 Call Graph Duplication 57
    3.2.5 Translation Units 59
    3.2.6 Offload Block Parameters 59
    3.2.7 Explicit Inner Pointers 62
  3.3 Fortran and F 63
    3.3.1 Modules and Variables in ‘F’ 63
    3.3.2 Array Sections 66
    3.3.3 Scalar and Array Pointers 67
    3.3.4 The Do Construct 68
    3.3.5 Subroutines and Functions 70
    3.3.6 Derived Data Types 71
    3.3.7 Differences between ‘F’ and Fortran 72
    3.3.8 Differences between Fortran and ‘C’ 73
  3.4 Implicit Parallelism 73
    3.4.1 Scalar Expressions 73
    3.4.2 Elemental Functions and Array Expressions 75
    3.4.3 Array Assignment 76
    3.4.4 Parallel Execution 77
4 Compiler Implementation 81
  4.1 Overview of the E♯ Toolchain 82
  4.2 Intermediate Representations 84
    4.2.1 The Parse Tree 85
    4.2.2 The Typed ‘F’ Abstract Syntax Tree 89
    4.2.3 The Offload C++ Abstract Syntax Tree 90
  4.3 Parsing with Monadic Combinators 93
    4.3.1 Predictive Parsing 96
    4.3.2 Combinators for Parsing ‘F’ 100
  4.4 Object Binding 100
  4.5 Translating into C++ 103
    4.5.1 Basic Types 104
    4.5.2 Pointers 106
    4.5.3 Derived Types 107
    4.5.4 Functions and Subroutines 111
    4.5.5 Program Units 112
  4.6 Transforming Array Expressions 113
    4.6.1 Scrap Your Boilerplate 114
    4.6.2 Transforming the ‘F’ AST using SYB 116
  4.7 Optimisation for Constant Sizes 122
5 Runtime Support and Static Pointer Locality 125
  5.1 Scalar ‘F’ Types in E♯ 126
    5.1.1 Representing ‘F’ Character Strings 126
  5.2 The E♯ Array Runtime Library 127
    5.2.1 Array Class Templates 128
    5.2.2 Array Declarations 128
    5.2.3 Implementing Essential ‘F’ Array Operations 131
  5.3 Interfacing with the ‘F’ Runtime Libraries 135
    5.3.1 Calculating the Result Type and Kind 135
    5.3.2 Application to Matrix Multiplication 137
  5.4 Manual Replication in Offload C++ 139
    5.4.1 Problem Description 140
    5.4.2 New with Locality 144
    5.4.3 Inner Pointers and Outer Duplication 147
    5.4.4 A Pointer Locality Class 149
    5.4.5 Closing Thoughts on Manual Replication 153
  5.5 Test Driven Development 154
6 Experimental Results 156
  6.1 Preliminary Setup and Experiments 157
    6.1.1 Benchmark Timing Routines 157
    6.1.2 SPU Memory Footprint Macro 157
    6.1.3 GNU Fortran Runtime SPU Memory Footprint 158
    6.1.4 New Operator SPU Memory Footprint 159
    6.1.5 Launch and Join Microbenchmark 160
    6.1.6 Benchmark Program Code Sizes 160
    6.1.7 Compilation Times …