Terra: Simplifying High-Performance Programming Using Multi-Stage Programming

Total pages: 16
File type: PDF, size: 1020 KB

TERRA: SIMPLIFYING HIGH-PERFORMANCE PROGRAMMING USING MULTI-STAGE PROGRAMMING

A dissertation submitted to the Department of Computer Science and the Committee on Graduate Studies of Stanford University in partial fulfillment of the requirements for the degree of Doctor of Philosophy.

Zachary DeVito
December 2014

© 2014 by Zachary Lipner DeVito. All rights reserved. Re-distributed by Stanford University under license with the author. This work is licensed under a Creative Commons Attribution-Noncommercial 3.0 United States License (http://creativecommons.org/licenses/by-nc/3.0/us/). This dissertation is online at http://purl.stanford.edu/zt513qh3032.

Pat Hanrahan (primary adviser), Alex Aiken, and Oyekunle Olukotun each certify that they have read this dissertation and that, in their opinion, it is fully adequate in scope and quality as a dissertation for the degree of Doctor of Philosophy. Approved for the Stanford University Committee on Graduate Studies: Patricia J. Gumport, Vice Provost for Graduate Education. The signature page was generated electronically upon submission of this dissertation in electronic format; an original signed hard copy is on file in University Archives.

ABSTRACT

Modern high-performance computers are heterogeneous machines that contain a mixture of multi-core CPUs, GPUs, interconnects, and other custom accelerators. The composition of individual machines varies and will change over time, so it is important that applications written for these machines are portable. One way to write portable high-performance applications is to abstract the problem at a higher level and use approaches such as autotuners or domain-specific languages (DSLs) to compile the abstraction into high-performance code using domain knowledge. These approaches require that the application generate code programmatically. However, current state-of-the-art high-performance languages lack support for generating code, making it difficult to design these programs.

This thesis presents a novel two-language design for generating high-performance code programmatically. We leverage a high-level language, Lua, to stage the execution of a new low-level language, Terra. Programmers can implement program analyses and transformations in Lua, but use built-in constructs to generate and run high-performance Terra code. This design reflects how programs like DSLs are often created in practice but applies principled techniques to make them easier to create.

We adapt techniques from multi-stage programming to work within the two-language design. In particular, shared lexical scope makes it easy to generate code, but separate execution enables the programmer to control performance. Furthermore, we show how adding staged compilation to the typechecking process of Terra allows for the creation of high-performance types. This typechecking process combines meta-object protocols frequently used in dynamically-typed languages with staged programming to produce flexible but high-performance types.
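To give a concrete flavor of this design, here is a minimal sketch of staged code generation in Lua and Terra. The staging constructs (escape, emit, quote) are Terra's own, but the example itself is illustrative rather than drawn verbatim from the thesis:

    -- Lua drives code generation; Terra is the generated low-level language.
    -- genpow(N) builds a Terra function with the multiply loop unrolled N times.
    local function genpow(N)
      return terra(x : double) : double
        var r : double = 1.0
        escape                        -- drop back into Lua during compilation
          for i = 1, N do             -- this loop runs at code-generation time
            emit quote r = r * x end  -- splice one multiply per iteration
          end
        end
        return r
      end
    end

    local pow4 = genpow(4)  -- the Terra function is typechecked and JIT-compiled
    print(pow4(2.0))        -- prints 16, computed by specialized native code

Because shared lexical scope lets the quoted Terra code refer directly to r and x, generating code requires no explicit symbol-table plumbing, while the generated function still runs as separately compiled native code.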
We evaluate the design presented in this thesis by implementing a number of example programs using Lua and Terra in areas where performance is critical, such as linear algebra, image processing, probabilistic programming, serialization, and dynamic assembly. We show that these applications, which previously required many different languages and technologies, can be implemented concisely using just Lua and Terra. They perform as well as or better than their equivalents written using existing technologies. In some cases, we found that the added expressiveness of using our approach made it feasible to implement more aggressive optimizations, enabling the program to perform up to an order of magnitude better than existing approaches.

ACKNOWLEDGMENTS

Creating this dissertation was only possible with the help, suggestions, and support of many people. First, I want to thank my advisor Pat Hanrahan for always encouraging me to work on the problems that I found most interesting, and giving me scarily accurate advice well before I realized that I needed it. Frequently there were times that the solution to a problem that I was working on was something Pat mentioned off-hand six months earlier. I am also grateful to Alex Aiken for his advice over the years on the many projects I worked on, including the Terra and Liszt languages. As my interests turned to programming languages, Alex helped me navigate the community, always asking on-point questions that helped focus the work. I also want to thank my other committee members — Kunle Olukotun, Jan Vitek, and Eric Darve — for their advice and for many fruitful discussions about how to design high-performance domain-specific languages.

My work would not have been possible without the help of many other students in the lab at Stanford. I am grateful to James Hegarty for participating in endless discussions about programming language design, and for using Terra even at a very early stage, and to Daniel Ritchie and Matt Fisher for their help in crafting the design of Exotypes. I started working on the Liszt programming language only a few months after I came to Stanford. The project has continued up to this day, accumulating a large number of people who have helped out over the years. I am grateful for this help, especially to Niels Joubert, who spent long hours getting Liszt to run on large clusters and GPUs, and to Crystal Lemire, Gilbert Bernstein, and Chinmayee Shah, who are now continuing work on the project.

I also want to thank my academic “siblings” Mike Bauer, Eric Schkufza, Sharon Lin, and Lingfeng Yang, who helped me navigate the academic process and encouraged me to keep going on problems even when I thought it was not worth it. I am also grateful to the rest of Pat’s lab and other students I interacted with while at Stanford. Their diverse interests, enthusiasm for problem solving, and inquisitiveness meant that meetings never had a dull moment. Finally, I would like to thank my mom and dad for their support and encouragement through this process. I wouldn’t have been able to do it without you.

CONTENTS

ABSTRACT
ACKNOWLEDGMENTS
1 INTRODUCTION
  1.1 Contributions
  1.2 Thesis outline
2 BACKGROUND
  2.1 Practical designs of libraries that do code generation
    2.1.1 Ad hoc templates and scripts
    2.1.2 Source-to-source autotuners
    2.1.3 Statically-compiled DSLs
    2.1.4 Dynamically-compiled DSLs
    2.1.5 Summary
  2.2 Existing techniques for code generation
    2.2.1 LISP
    2.2.2 Partial evaluation
    2.2.3 Multi-stage programming
  2.3 Conclusion
3 GENERATING CODE
  3.1 Writing code in Terra
  3.2 Multi-stage programming in Terra
  3.3 Writing optimizations using staging
  3.4 Generating arbitrary code using staging
  3.5 Design decisions for code generation
    3.5.1 Shared lexical environment
    3.5.2 Hygienic code generation
    3.5.3 Separate evaluation of Terra code
    3.5.4 Eager specialization with lazy typechecking
4 FORMALIZING CODE GENERATION WITH TERRA CORE
  4.1 Terra Core
  4.2 Design decisions as reflected in Terra Core
    4.2.1 Terra entities are first-class
    4.2.2 Hygiene and shared lexical environment
    4.2.3 Eager specialization with lazy typechecking
    4.2.4 Separate evaluation of Terra code
5 BUILDING DSLS
  5.1 Tuning DGEMM
  5.2 Darkroom: A stencil DSL for images
  5.3 Liszt: A DSL for physical simulation
  5.4 Quicksand: A DSL for probabilistic programming
6 GENERATING TYPES WITH EXOTYPES
  6.1 Meta-object protocols
  6.2 Exotypes interface
  6.3 Example: Array(T)
  6.4 Composability
  6.5 Related Work
7 FORMALIZATION OF EXOTYPES
8 BUILDING LIBRARIES WITH EXOTYPES
  8.1 Class systems
  8.2 Data layout
  8.3 Serialization
  8.4 Dynamic x86 assembly
  8.5 Automatic differentiation
9 EMBEDDING CUSTOM DSLS IN LUA
  9.1 Current approaches to embedding languages
  9.2 Our language extension interface
  9.3 Using language extensions
    9.3.1 Darkroom: Lua-based meta-programming with custom syntax
    9.3.2 Liszt: Custom syntax with a language extension API
  9.4 Comparison to other approaches to syntax extension
10 INTEROPERABILITY
  10.1 With existing applications
    10.1.1 Embedding as a runtime
    10.1.2 Zero-overhead compatibility with C
  10.2 With Lua
  10.3 With DSLs
11 IMPLEMENTATION
12 DISCUSSION
  12.1 Lessons learned from building DSLs
  12.2 Software development as a library
  12.3 Statically-typed languages as heterogeneous staged programming
  12.4 A middle ground for static and dynamic typing
  12.5 Applying dynamic code generation to scientific computing
  12.6 Using high-performance staged programming to improve compiler performance
13 CONCLUSION

LIST OF FIGURES

2.1 Example Liszt code
2.2 Generating code with LLVM
2.3 Macro hygiene
2.4 Multi-stage programming operators
3.1 Phases of evaluation of a Lua-Terra program
3.2 An example function that compiles a simple IR for image processing code into a Terra function
3.3 A simple autotuner for the image processing language
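As a taste of the exotypes material outlined in Chapters 6-8 above, the following minimal sketch shows a type being generated from Lua. The Array(T) pattern mirrors the example named in the contents, though this particular code is illustrative rather than the thesis's exact implementation:

    -- Array(T) builds a Terra struct specialized to the element type T.
    -- Memoizing in a cache guarantees Array(int) always denotes the same type.
    local cache = {}
    local function Array(T)
      if cache[T] then return cache[T] end
      local struct ArrayT {
        data : &T,   -- contiguous, C-compatible storage
        N : int      -- number of elements
      }
      terra ArrayT:get(i : int) : T
        return self.data[i]
      end
      cache[T] = ArrayT
      return ArrayT
    end

    local IntArray = Array(int)      -- concrete type, laid out like a C struct
    local FloatArray = Array(float)  -- a distinct, independently generated type

Because the type is computed by ordinary Lua code but typechecked and compiled as Terra, it keeps the flexibility of a dynamic meta-object protocol while compiling to the same machine code a hand-written C struct would.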
Recommended publications
  • The LLVM Instruction Set and Compilation Strategy
The LLVM Instruction Set and Compilation Strategy. Chris Lattner and Vikram Adve, University of Illinois at Urbana-Champaign, {lattner,vadve}@cs.uiuc.edu. Abstract: This document introduces the LLVM compiler infrastructure and instruction set, a simple approach that enables sophisticated code transformations at link time, runtime, and in the field. It is a pragmatic approach to compilation, interfering with programmers and tools as little as possible, while still retaining extensive high-level information from source-level compilers for later stages of an application’s lifetime. We describe the LLVM instruction set, the design of the LLVM system, and some of its key components. 1 Introduction. Modern programming languages and software practices aim to support more reliable, flexible, and powerful software applications, increase programmer productivity, and provide higher level semantic information to the compiler. Unfortunately, traditional approaches to compilation either fail to extract sufficient performance from the program (by not using interprocedural analysis or profile information) or interfere with the build process substantially (by requiring build scripts to be modified for either profiling or interprocedural optimization). Furthermore, they do not support optimization either at runtime or after an application has been installed at an end-user’s site, when the most relevant information about actual usage patterns would be available. The LLVM Compilation Strategy is designed to enable effective multi-stage optimization (at compile-time, link-time, runtime, and offline) and more effective profile-driven optimization, and to do so without changes to the traditional build process or programmer intervention. LLVM (Low Level Virtual Machine) is a compilation strategy that uses a low-level virtual instruction set with rich type information as a common code representation for all phases of compilation.
  • What Is LLVM? and a Status Update
What is LLVM? And a Status Update. (Approved for public release.) Hal Finkel, Leadership Computing Facility, Argonne National Laboratory. Clang, LLVM, etc.: ✔ LLVM is a liberally-licensed(*) infrastructure for creating compilers, other toolchain components, and JIT compilation engines. ✔ Clang is a modern C++ frontend for LLVM. ✔ LLVM and Clang will play significant roles in exascale computing systems! (*) Now under the Apache 2 license with the LLVM Exception. LLVM/Clang is both a research platform and a production-quality compiler. A role in exascale? Current/future HPC vendors are already involved (plus many others): Apple, Google, Intel (many millions invested annually), Qualcomm, Sony, Microsoft, Facebook, Ericsson, ARM, IBM, Cray, NVIDIA (and PGI), AMD, academia, labs, etc. What is LLVM: LLVM is a multi-architecture infrastructure for constructing compilers and other toolchain components. LLVM is not a “low-level virtual machine”! LLVM IR: architecture-independent simplification; architecture-aware optimization (e.g. vectorization); assembly printing, binary generation, or JIT execution; backends (type legalization, instruction selection, register allocation, etc.). What is Clang: Clang is a C++ frontend for LLVM: parsing and semantic analysis of C++ source (C++14, C11, etc.), code generation to LLVM IR, and static analysis. ● For basic compilation, Clang works just like gcc – using clang instead of gcc, or clang++ instead of g++, in your makefile will likely “just work.” ● Clang has a scalable LTO; check out: https://clang.llvm.org/docs/ThinLTO.html The core LLVM compiler-infrastructure components are one of the subprojects in the LLVM project. These components are also referred to as “LLVM.” What About Flang? ● Started as a collaboration between DOE and NVIDIA/PGI.
  • Truffle/C Interpreter
JOHANNES KEPLER UNIVERSITÄT LINZ, JKU Faculty of Engineering and Natural Sciences. Truffle/C Interpreter. Master’s thesis submitted in partial fulfillment of the requirements for the academic degree Diplom-Ingenieur in the Master’s Program Computer Science. Submitted by Manuel Rigger, BSc., at the Institut für Systemsoftware. Advisor: o.Univ.-Prof. Dipl.-Ing. Dr.Dr.h.c. Hanspeter Mössenböck. Co-advisors: Dipl.-Ing. Lukas Stadler and Dipl.-Ing. Dr. Thomas Würthinger. Xiamen, April 2014. Contents: 1 Introduction (1.1 Motivation; 1.2 Goals and Scope; 1.3 From C to Java; 1.4 Structure of the Thesis); 2 State of the Art (2.1 Graal; 2.2 Truffle: 2.2.1 Rewriting and Specialization, 2.2.2 Truffle DSL, 2.2.3 Control Flow, 2.2.4 Profiling and Inlining, 2.2.5 Partial Evaluation and Compilation; 2.3 Clang); 3 Architecture (3.1 From Clang to Java; 3.2 Node Construction; 3.3 Runtime); 4 The Truffle/C File (4.1 Truffle/C File Format Goals; 4.2 Truffle/C File Format 1: 4.2.1 Constant Pool, 4.2.2 Function Table, 4.2.3 Functions and Attributes; 4.3 Truffle/C File Considerations and Comparison: 4.3.1 Java Class File and Truffle/C File, 4.3.2 ELF and Truffle/C File; 4.4 Clang Modification Truffle/C File); 5 Truffle/C Data Types (5.1 Data Type Hierarchy: Boxing, Upcasts and Downcasts …)
  • CUDA Flux: a Lightweight Instruction Profiler for CUDA Applications
CUDA Flux: A Lightweight Instruction Profiler for CUDA Applications. Lorenz Braun and Holger Fröning, Institute of Computer Engineering, Heidelberg University, Heidelberg, Germany; [email protected], [email protected]. Abstract: GPUs are powerful, massively parallel processors, which require a vast amount of thread parallelism to keep their thousands of execution units busy, and to tolerate latency when accessing its high-throughput memory system. Understanding the behavior of massively threaded GPU programs can be difficult, even though recent GPUs provide an abundance of hardware performance counters, which collect statistics about certain events. Profiling tools that assist the user in such analysis for their GPUs, like NVIDIA’s nvprof and cupti, are state-of-the-art. However, instrumentation based on reading hardware performance counters can be slow, in particular when the number of metrics is large. Furthermore, the results can be inaccurate as instructions are grouped to match the available set of hardware counters. […] hardware, it lacks verbosity when it comes to analyzing the different types of instructions executed. This problem is aggravated by the recently frequent introduction of new instructions, for instance floating point operations with reduced precision or special function operations including Tensor Cores for machine learning. Similarly, information on the use of vectorized operations is often lost. Characterizing workloads on PTX level is desirable as PTX allows us to determine the exact types of the instructions a kernel executes. On the contrary, nvprof metrics are based on a lower-level representation called SASS. Still, even if there was an exact mapping, some information on …
  • The Evolution of Lisp
The Evolution of Lisp. Guy L. Steele Jr. (Thinking Machines Corporation, 245 First Street, Cambridge, Massachusetts 02142; Phone: (617) 234-2860; FAX: (617) 243-4444; E-mail: [email protected]) and Richard P. Gabriel (Lucid, Inc., 707 Laurel Street, Menlo Park, California 94025; Phone: (415) 329-8400; FAX: (415) 329-8480; E-mail: [email protected]). Abstract: Lisp is the world’s greatest programming language—or so its proponents think. The structure of Lisp makes it easy to extend the language or even to implement entirely new dialects without starting from scratch. Overall, the evolution of Lisp has been guided more by institutional rivalry, one-upsmanship, and the glee born of technical cleverness that is characteristic of the “hacker culture” than by sober assessments of technical requirements. Nevertheless this process has eventually produced both an industrial-strength programming language, messy but powerful, and a technically pure dialect, small but powerful, that is suitable for use by programming-language theoreticians. We pick up where McCarthy’s paper in the first HOPL conference left off. We trace the development chronologically from the era of the PDP-6, through the heyday of Interlisp and MacLisp, past the ascension and decline of special purpose Lisp machines, to the present era of standardization activities. We then examine the technical evolution of a few representative language features, including both some notable successes and some notable failures, that illuminate design issues that distinguish Lisp from other programming languages. We also discuss the use of Lisp as a laboratory for designing other programming languages. We conclude with some reflections on the forces that have driven the evolution of Lisp.
  • Implementing Continuation Based Language in LLVM and Clang
Implementing Continuation based language in LLVM and Clang. Kaito TOKUMORI (University of the Ryukyus, Email: [email protected]) and Shinji KONO (University of the Ryukyus, Email: [email protected]). Abstract: A programming paradigm which uses data segments and code segments is proposed. This paradigm uses Continuation based C (CbC), a slightly modified C language. Code segments are units of calculation and data segments are sets of typed data. We use these segments as units of computation and meta computation. In this paper we show the implementation of CbC on LLVM and Clang 3.7. Table I gives a CbC example:

    __code f(Allocate allocate){
        allocate.size = 0;
        goto g(allocate);
    }

    // data segment definition
    // (generated automatically)
    union Data {
        struct Allocate {
            long size;
        } allocate;
    };

I. A PRACTICAL CONTINUATION BASED LANGUAGE. The proposed units of programming are named code segments and data segments. Code segments are units of calculation which have no state. Data segments are sets of typed data. Code segments are connected to data segments by a meta data segment called a context. After the execution of a code segment and its context, the next code segment (continuation) is executed. Continuation based C (CbC) [1], hereafter referred to as CbC, is a slightly modified C which supports code segments. Code segments have input data segments and output data segments. Data segments have two kinds of dependency with code segments. First, code segments access the contents of data segments using field names. So data segments should have the named …
  • Knowledge Based Engineering: Between AI and CAD
Advanced Engineering Informatics 26 (2012) 159–179, journal homepage: www.elsevier.com/locate/aei. Knowledge based engineering: Between AI and CAD. Review of a language based technology to support engineering design. Gianfranco La Rocca, Faculty of Aerospace Engineering, Delft University of Technology, Chair of Flight Performance and Propulsion, Kluyverweg 1, 2629HS Delft, The Netherlands. Article history: available online 16 March 2012. Keywords: knowledge based engineering; knowledge engineering; engineering design; generative design; knowledge based design; rule based design. Abstract: Knowledge based engineering (KBE) is a relatively young technology with an enormous potential for engineering design applications. Unfortunately the amount of dedicated literature available to date is quite low and dispersed. This has not promoted the diffusion of KBE in the world of industry and academia, neither has it contributed to enhancing the level of understanding of its technological fundamentals. The scope of this paper is to offer a broad technological review of KBE in the attempt to fill the current information gap. The artificial intelligence roots of KBE are briefly discussed and the main differences and similarities with respect to classical knowledge based systems and modern general purpose CAD systems highlighted. The programming approach, which is a distinctive aspect of state-of-the-art KBE systems, is discussed in detail, to illustrate its effectiveness in capturing and re-using engineering knowledge to automate large portions of the design process. The evolution and trends of KBE systems are investigated and, to conclude, a list of recommendations and expectations for the KBE systems of the future is provided.
  • Lisp: Final Thoughts
20 Lisp: Final Thoughts. Both Lisp and Prolog are based on formal mathematical models of computation: Prolog on logic and theorem proving, Lisp on the theory of recursive functions. This sets these languages apart from more traditional languages whose architecture is just an abstraction across the architecture of the underlying computing (von Neumann) hardware. By deriving their syntax and semantics from mathematical notations, Lisp and Prolog inherit both expressive power and clarity. Although Prolog, the newer of the two languages, has remained close to its theoretical roots, Lisp has been extended until it is no longer a purely functional programming language. The primary culprit for this diaspora was the Lisp community itself. The pure Lisp core of the language is primarily an assembly language for building more complex data structures and search algorithms. Thus it was natural that each group of researchers or developers would “assemble” the Lisp environment that best suited their needs. After several decades of this, the various dialects of Lisp were basically incompatible. The 1980s saw the desire to replace these multiple dialects with a core Common Lisp, which also included an object system, CLOS. Common Lisp is the Lisp language used in Part III. But the primary power of Lisp is the fact, as pointed out many times in Part III, that the data and commands of this language have a uniform structure. This supports the building of what we call meta-interpreters, or similarly, the use of meta-linguistic abstraction. This, simply put, is the ability of the program designer to build interpreters within Lisp (or Prolog) to interpret other suitably designed structures in the language.
  • Code Transformation and Analysis Using Clang and LLVM Static and Dynamic Analysis
Code transformation and analysis using Clang and LLVM: Static and Dynamic Analysis. Hal Finkel (Argonne National Laboratory) and Gábor Horváth (Ericsson and Eötvös Loránd University). Computer Science Summer School 2017. Table of contents: 1. Introduction; 2. Static Analysis with Clang; 3. Instrumentation and More. Introduction — Space of Techniques: During this set of lectures we’ll cover a space of techniques for the analysis and transformation of code using LLVM. Each of these techniques has overlapping areas of applicability: static analysis, LLVM instrumentation, and source transformation. When to use source-to-source transformation: • When you need to use the instrumented code with multiple compilers. • When you intend for the instrumentation to become a permanent part of the code base. You’ll end up being concerned with the textual formatting of the instrumentation if humans also need to maintain or enhance this same code. When to use Clang’s static analysis: • When the analysis can be performed on an AST representation. • When you’d like to maintain a strong connection to the original source code. • When false negatives are acceptable (i.e. it is okay if you miss problems). https://clang-analyzer.llvm.org/ When to use IR instrumentation: • When the necessary conditions can be (or can only be) detected at runtime (often in conjunction with a specialized runtime library). • When you require stronger coverage guarantees than static analysis. • When you’d like to reduce the cost of the instrumentation by running optimizations before the instrumentation is inserted, after the instrumentation is inserted, or both.
  • DATA and C a Sample Program
CHAPTER 3: DATA AND C. You will learn about the following in this chapter: • Keywords: int, short, long, unsigned, char, float, double, _Bool, _Complex, _Imaginary • Operator: sizeof • Function: scanf() • The distinctions between integer types and floating-point types • Writing constants and declaring variables of those types • How to use the printf() and scanf() functions to read and write values of different types • The basic data types that C uses. Programs work with data. You feed numbers, letters, and words to the computer, and you expect it to do something with the data. For example, you might want the computer to calculate an interest payment or display a sorted list of vintners. In this chapter, you do more than just read about data; you practice manipulating data, which is much more fun. This chapter explores the two great families of data types: integer and floating point. C offers several varieties of these types. This chapter tells you what the types are, how to declare them, and how and when to use them. Also, you discover the differences between constants and variables, and as a bonus, your first interactive program is coming up shortly. A Sample Program. Once again, you begin with a sample program. As before, you’ll find some unfamiliar wrinkles that we’ll soon iron out for you. The program’s general intent should be clear, so try compiling and running the source code shown in Listing 3.1. To save time, you can omit typing the comments.
  • C Programming Tutorial
C Programming Tutorial (Simply Easy Learning, by tutorialspoint.com). COPYRIGHT & DISCLAIMER NOTICE: All the content and graphics on this tutorial are the property of tutorialspoint.com. Any content from tutorialspoint.com or this tutorial may not be redistributed or reproduced in any way, shape, or form without the written permission of tutorialspoint.com. Failure to do so is a violation of copyright laws. This tutorial may contain inaccuracies or errors and tutorialspoint provides no guarantee regarding the accuracy of the site or its contents including this tutorial. If you discover that the tutorialspoint.com site or this tutorial content contains some errors, please contact us at [email protected]. Table of Contents: C Language Overview (Facts about C; Why to use C?; C Programs); C Environment Setup (Text Editor; The C Compiler; Installation on Unix/Linux …)
  • A Lisp Oriented Architecture by John W.F
A Lisp Oriented Architecture, by John W.F. McClain. Submitted to the Department of Electrical Engineering and Computer Science in partial fulfillment of the requirements for the degrees of Master of Science in Electrical Engineering and Computer Science and Bachelor of Science in Electrical Engineering at the MASSACHUSETTS INSTITUTE OF TECHNOLOGY, September 1994. © John W.F. McClain, 1994. The author hereby grants to MIT permission to reproduce and to distribute copies of this thesis document in whole or in part. Certified by: Thomas F. Knight Jr., Principal Research Scientist, Thesis Supervisor. Accepted by: Frederic R. Morgenthaler, Chairman, Departmental Committee on Graduate Students. Abstract: In this thesis I describe LOOP, a new architecture for the efficient execution of programs written in Lisp like languages. LOOP allows Lisp programs to run at high speed without sacrificing safety or ease of programming. LOOP is a 64 bit, long instruction word architecture with support for generic arithmetic, 64 bit tagged IEEE floats, low cost fine grained read and write barriers, and fast traps. I make estimates for how much these Lisp specific features cost and how much they may speed up the execution of programs written in Lisp.