Combining Processor Virtualization and Component-Based Engineering in C for Many-Core Heterogeneous Embedded MPSoCs

Erven Rohou (INRIA), Andrea C. Ornstein (STMicroelectronics), Ali Erdem Özcan (STMicroelectronics), and Marco Cornero (ST-Ericsson)

Abstract

The design of embedded systems is driven by strong constraints in terms of performance, silicon area and power consumption, as well as pressure on cost and time-to-market. This has three consequences: 1) many-core systems are becoming mainstream, but there is still no satisfactory approach for distributing software applications on these platforms; 2) these systems integrate heterogeneous processors for efficiency reasons, thus programming them requires complex compilation environments; 3) hardware resources are precious and low-level languages are still a must to exploit them fully. These factors negatively impact the programmability of many-core platforms and limit our ability to address the challenges of the next decade.

This paper devises a new programming approach leveraging processor virtualization and component-based software engineering paradigms to address these issues together. We present a programming model based on C for developing fine-grain component-based applications, and a toolset that compiles them into a processor-independent bytecode representation that can be deployed on heterogeneous MPSoCs. We also discuss the effectiveness of this approach and present future directions that will have a key role in addressing the above challenges.

I. Introduction

Compared to the general purpose computing world, embedded consumer systems are characterized by particularly stringent efficiency requirements in terms of performance, power and area. SoC (System-on-Chip) vendors used to design application-specific hardware components (IPs), controlled by a host processor, for implementing complex operations. This process is no longer viable because of today's technological and market constraints. Non-recurring engineering costs of silicon manufacturing are becoming so high that the same IP must be reused in many products. Short time-to-market also plays in favor of reuse. In other words, hardware needs to be more programmable. In this context, the design of software-based solutions substituting the hardware IPs becomes one of the most critical topics of the overall production process of SoCs.

Problem Statement. Unfortunately, designing software systems for those constrained platforms is a complex issue. The reasons can be categorized as follows.

1) Large-scale multi-processing is a must. For almost 30 years, general-purpose processors as well as embedded systems followed Moore's law: performance used to "automatically" double every 18-24 months. However, since 2002, diminishing performance returns, as well as increasing power consumption and the approaching "temperature wall", have made the microprocessor industry follow a new path to performance: the multicore approach. While it is true that embedded systems have always been heterogeneous multicores, several studies take parallelism to unprecedented levels and forecast thousands of cores on a chip by the end of the next decade [1], [2]. Programming distributed applications has always been difficult, even in general purpose computing, because of the lack of convenient programming abstractions and tools [3]. The situation is worse for SoC platforms, with minimal run-time support and no alternative to low-level programming languages. This has always constituted a big barrier to productivity. Given the new order of magnitude of core counts, the exploitation of the available hardware is a major challenge of the coming decade.

2) Heterogeneity is unavoidable. Although the homogenization of hardware platforms aims at reducing production costs, modern embedded platforms continue to integrate heterogeneous computing nodes (e.g. DSP, VLIW, etc.) for several reasons. First, suitable hardware support (i.e. instruction sets, data representation, etc.) is key to satisfying the performance requirements of the different kinds of applications executed on a single chip. Second, these computing nodes are always evolving to adopt the best technological solutions on the market. Besides evolving architectures, heterogeneity is a source of difficulties because it requires the software development kits (compilers, debuggers, profilers, etc.) to be ported and maintained, and the developers to be trained on them. Tools for different cores are likely to come from different vendors and to use different technologies, leading to integration problems. The source code is likely to be written with conditional compilation directives (#ifdef) in order to adapt to each compiler and to best exploit each architecture (a representative fragment is sketched below). While these issues are not theoretical in nature, they are a significant burden (and cost) to software companies.

3) Physical resources are precious. Embedding software in consumer systems has always been a challenge: performance requirements must be satisfied on top of very limited physical resources. Even though an unprecedented number of cores is expected soon, silicon area (be it CPU or memory) directly translates into cost. Each core must be as efficient as possible. Hence, software developers are still not free to enjoy high-level programming languages and run-time environments such as garbage collectors, exception handlers, and rich libraries. This results in many applications being coded from scratch, and reduces productivity while increasing maintenance, support and evolution costs.
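The fragment below is a minimal, purely illustrative sketch of the kind of target-conditioned C code described above; the target macros (TARGET_DSP_X, TARGET_VLIW_Y) and the intrinsics are invented and do not come from the paper or from any specific vendor toolchain. Each preprocessor branch ties the source to one compiler and one instruction set:

    /* Hypothetical target-conditioned kernel: every branch assumes a
       different vendor compiler and intrinsic set (names are invented). */
    #include <stdint.h>

    int32_t dot16(const int16_t *a, const int16_t *b, int n)
    {
        int32_t acc = 0;
    #if defined(TARGET_DSP_X)            /* vendor A toolchain */
        for (int i = 0; i < n; i++)
            acc = __sat_mac(acc, a[i], b[i]);              /* invented intrinsic */
    #elif defined(TARGET_VLIW_Y)         /* vendor B toolchain */
        for (int i = 0; i < n; i += 2)
            acc += __dot2(a[i], a[i + 1], b[i], b[i + 1]); /* invented intrinsic */
    #else                                /* portable fallback on the host core */
        for (int i = 0; i < n; i++)
            acc += (int32_t)a[i] * b[i];
    #endif
        return acc;
    }

Each new core multiplies such branches, and with them the set of compilers, flags and header files that have to be tracked; this is the maintenance burden the authors point to.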
These issues motivate many language-oriented research projects [4], [5]. However, some pragmatic concerns (legacy, efficiency, toolset availability, etc.) make industrial solutions evolve in small steps. In particular, since many multimedia processors do not support higher-level languages such as C++ or Java, the C language remains the de facto standard, although it does not sufficiently address heterogeneity and multi-processing issues. Indeed, the C language implies the use of a different development kit (compiler, debugger, etc.), potentially coming from a different vendor, for each processor. Besides the overhead related to their installation and maintenance, these tools may behave differently (command-line flags, error messages, etc.), and may require the source code to be specialized for their own header files, intrinsics and library functions¹. In addition, the lack of specific abstractions for application distribution makes plain C programs very difficult to deploy efficiently on multiple processors.

Contribution. In the past, some research and industrial proposals partially addressed these issues. On the one hand, processor virtualization provided a way of dealing with heterogeneous target platforms. On the other hand, the component-based software engineering approach improved software modularity and contributed to the development of distributed systems.

The main contribution of this paper is the design and implementation of a toolset combining these two paradigms, for the C language, in order to start addressing the above issues together. Using this toolset, legacy code written in C can be encapsulated into a well-defined component programming model with limited re-engineering effort. These components can then be composed using an Architecture Description Language (ADL) that describes the software architecture of the system, and compiled into a target-independent bytecode representation. Programmers can thus develop reusable binary component libraries that system architects use to compose applications deployed on heterogeneous multiprocessor SoCs (MPSoCs). Newly developed components, on the other hand, can be written either in C or in any higher-level language that can be compiled to the bytecode representation.
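To make the encapsulation idea concrete, here is a rough sketch under invented names and syntax: neither the interface style nor the ADL fragment reproduces the paper's actual programming model, they only illustrate the shape of the approach (legacy C behind an explicit interface, composition described separately and free of target details).

    /* All names and the ADL syntax below are invented for illustration;
       they do not reproduce the toolset's actual component model. */

    /* Provided interface of a wrapped legacy filter (its input "port"). */
    typedef struct filter_itf {
        void (*push)(void *self, const short *samples, int n);
    } filter_itf;

    /* Legacy C code repackaged, unchanged, behind that interface. */
    static void filter_push(void *self, const short *samples, int n)
    {
        (void)self;
        for (int i = 0; i < n; i++) {
            /* ... pre-existing filtering code, reused as-is ... */
            (void)samples[i];
        }
    }

    const filter_itf filter_component = { filter_push };

    /* An architecture description would then instantiate and bind
       components without naming any processor, e.g.:
         composite app { contains decoder d; contains filter f;
                         binds d.output to f.push; }                     */

Keeping the interface explicit is what later allows the binding between d.output and f.push to be realized differently (direct call, shared buffer, inter-processor channel) depending on where each component ends up being deployed.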
In addition, the combination of component-based design with such a target-independent bytecode representation opens various perspectives that leverage the performance of embedded systems. First, design flexibility can be increased by mapping components onto heterogeneous processors at deployment time. In particular, this lets programmers deal with the memory hierarchy as an orthogonal issue. Second, important memory and performance optimizations can be obtained by on-board generation of interface-specific communication bridges between remote components.

Outline. This paper is organized as follows. Section II overviews the paradigms we propose to combine and presents the state of the art in these domains. Section III presents our proposal. Section IV discusses its effectiveness and presents some ideas that we will investigate in future work. Finally, we conclude in Section V.

II. Related Work

This paper proposes a new methodology and toolset for programming heterogeneous MPSoCs in the C language. We build upon previous work on processor virtualization and component-based software engineering. This section reviews previous work related to both paradigms.

A. Processor Virtualization

Processor virtualization first appeared for deploying programs on computers connected through the Internet, and became a well-established technique for dealing with processor heterogeneity. While the traditional compilation flow consists of compiling program sources into binary objects that can be natively executed on a given processor, processor virtualization splits that flow into two parts. The first part — the front-end

¹ Target-specific source-code specialization makes the code difficult to