Combining Source and Target Level Cost Analyses for OCaml Programs

Stefan K. Muller, Carnegie Mellon University
Jan Hoffmann, Carnegie Mellon University

Abstract

Deriving resource bounds for programs is a well-studied problem in the programming languages community. For compiled languages, there is a tradeoff in deciding when during compilation to perform such a resource analysis. Analyses at the level of machine code can precisely bound the wall-clock execution time of programs, but often require manual annotations describing loop bounds and memory access patterns. Analyses that run on source code can more effectively derive such information from types and other source-level features, but produce abstract bounds that do not directly reflect the execution time of the compiled machine code.

This paper proposes and compares different techniques for combining source-level and target-level resource analyses in the context of the functional programming language OCaml. The source-level analysis is performed by Resource Aware ML (RaML), which automatically derives bounds on the costs of source-level operations. By relating these high-level operations to the low-level code, these bounds can be composed with results from target-level tools. We first apply this idea to the OCaml bytecode compiler and derive bounds on the number of virtual machine instructions executed by the compiled code. Next, we target OCaml's native compiler for ARM and combine the analysis with an off-the-shelf worst-case execution time (WCET) tool that provides clock-cycle bounds for basic blocks. In this way, we derive clock-cycle bounds for a specific ARM processor. An experimental evaluation analyzes and compares the performance of the different approaches and shows that combined resource analyses can provide developers with useful performance information.

1 Introduction

The programming languages community has extensively studied the problem of statically analyzing the resource consumption of programs. The developed techniques range from fully automatic techniques based on static analysis and automated recurrence solving [2, 11, 25, 38, 51, 54], to semi-automatic techniques that check user-annotated bounds [19, 53], to manual reasoning systems that are integrated with type systems and program logics [15, 20, 21, 40]. Static resource analysis has interesting applications that include prevention of side channels [46], finding performance bugs and algorithmic complexity vulnerabilities [49], and bounding gas usage in smart contracts [24]. More generally, it is an appealing idea to provide programmers with immediate feedback about the efficiency of their code at design time.

When designing a resource analysis for a compiled higher-level language, there is a tension between working on the source code, the target code, or an intermediate representation. Advantages of analyzing the source code include more effective interaction with the programmer and more opportunities for automation, since the control flow and type information are readily available. The advantage of analyzing the target code is that the analysis results apply to the code that is eventually executed. Many of the tools developed in the programming languages community operate on the source level and derive upper bounds on a high-level notion of cost like the number of loop iterations or user-defined cost metrics [25, 40, 53]. In the embedded systems community, the focus is on tools that operate on machine code and derive bounds that apply to concrete hardware [9, 55].

In this paper, we study the integration of source and target level resource analyses for OCaml programs. We build on Resource Aware ML (RaML) [29, 30], a source-level resource analysis tool for OCaml programs that is based on automatic amortized resource analysis (AARA) [33, 36]. AARA systematically annotates types with potential functions that map values of the type to a non-negative number. A type derivation can be seen as a proof that the initial potential is sufficient to cover the cost of an execution.
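As a minimal sketch of this potential method (our illustration of the AARA idea, not RaML's implementation; the metric, the constant q, and the example list are assumptions), a per-element potential q gives a list of length n the potential q * n, and a bound is sound if the initial potential covers the total cost of the run:

    (* Sketch only: potential of a list under a per-element annotation q. *)
    let potential ~(q : float) (l : 'a list) : float =
      q *. float_of_int (List.length l)

    (* Cost of a fold-style traversal under a metric charging 1.0 per call:
       a list of length n induces n + 1 calls. *)
    let rec fold_cost l =
      1.0 +. (match l with [] -> 0.0 | _ :: xs -> fold_cost xs)

    let () =
      let l = [1; 2; 3; 4] in
      (* With q = 1.0 plus one unit of constant initial potential, the
         available potential pays for every call of the traversal. *)
      assert (potential ~q:1.0 l +. 1.0 >= fold_cost l);
      Printf.printf "potential %.1f covers cost %.1f\n"
        (potential ~q:1.0 l +. 1.0) (fold_cost l)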
Advantages of AARA include compositionality and efficient inference of potential functions, and thus resource bounds, using linear programming, even if the potential functions are polynomial [28]. RaML can derive bounds for user-defined metrics that assign a constant cost to an evaluation step in the dynamic semantics.

Our approaches to integrating source and target level analyses broadly follow the idea of using RaML to derive resource usage bounds that are parametric in the resource usages of basic blocks, and then composing these results with a lower-level analysis that operates on each basic block. Implementing these approaches in practice requires a technical extension of RaML: we extend RaML to enable bound inference for cost metrics that contain symbolic expressions. Instead of specifying a constant cost such as 8128 at a certain spot in the program, it is now possible to specify a cost expression such as 8128a + 9b, where a and b are symbolic constants. RaML will then derive a bound that is a function of both the arguments and the constants a and b.

In the context of this paper, symbolic resource analysis can be used to devise resource metrics that are parametrized by the costs of basic blocks. To this end, we automatically annotate the source program with cost annotations that correspond to the beginnings of basic blocks in the compiled code. Each cost annotation is labeled with a fresh symbol that corresponds to the, as yet unknown, cost of the corresponding basic block. A simple translation validation procedure ensures that every block has been labeled with at least one cost annotation. At the target level, we can now analyze the cost of individual basic blocks and substitute the results for the corresponding symbol in the high-level bound.

Our third contribution is the implementation of the described technique for the OCaml bytecode and native-code compilers. For the OCaml bytecode compiler, we associate the symbolic constants with the number of bytecode instructions in their respective basic blocks. In this way, we derive symbolic bounds on the number of bytecode instructions that are executed by a function.
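To give a feel for this substitution step, the following self-contained sketch (our illustration, not the paper's implementation; the block names, coefficients, and instruction counts are assumptions) represents a symbolic bound as a map from block symbols to size-dependent coefficients and substitutes per-block bytecode instruction counts into it:

    (* Sketch: a symbolic bound maps each basic-block symbol to a
       coefficient that may depend on the input size n. *)
    type bound = (string * (int -> float)) list

    (* Substitute target-level per-block costs for the symbols. *)
    let substitute (block_costs : (string * float) list) (b : bound) (n : int) =
      List.fold_left
        (fun acc (sym, coeff) -> acc +. coeff n *. List.assoc sym block_costs)
        0.0 b

    let () =
      (* Hypothetical bound for a fold-like function: the entry block runs
         n + 1 times and the cons-branch block runs n times. *)
      let b : bound =
        [ ("entry", fun n -> float_of_int (n + 1));
          ("cons", fun n -> float_of_int n) ]
      in
      (* Hypothetical per-block bytecode instruction counts. *)
      let costs = [ ("entry", 4.0); ("cons", 9.0) ] in
      Printf.printf "bound for n = 10: %.0f instructions\n"
        (substitute costs b 10)   (* (10 + 1) * 4 + 10 * 9 = 134 *)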
For the OCaml native code compiler, we use AbsInt's worst-case execution time (WCET) analysis tool aiT to derive clock-cycle bounds for each basic block for the ARM Cortex-R5 platform. Together with the source-level bounds, this yields symbolic clock-cycle bounds for the compiled machine code. In many cases, aiT cannot automatically derive loop and recursion bounds. So a final combination of source and target level analysis that we explore is to use the basic block analysis performed by RaML to derive aiT control-flow annotations for specific input sizes.

Our technique for connecting a high-level cost model with compiled code is similar to existing techniques that have been implemented in the context of verified C compilers [5, 15] (see Section 6). The novelty of our work is that we implemented the technique for a functional language and an existing optimizing compiler, support higher-order functions, combine compilation with AARA, and support OCaml-specific features such as an argument stack for avoiding the creation of function closures.

We have evaluated our techniques on several OCaml programs and found them to be both practical and reasonably precise. For example, our bytecode analysis generates asymptotically tight bounds on instruction counts for all of the example programs, and exact bounds for several of them. In addition, for several of our example programs, the control-flow annotations derived by our analysis result in WCET cycle counts that are identical to results from hand-written annotations. Hand annotations require manual reasoning about the recursive structure of the program (which is labor-intensive and error-prone) in addition to the effort of manually inserting the annotations.

Section 4 describes the combination with the OCaml bytecode compiler and evaluates its effectiveness with experiments. In Section 5, we study the combination with WCET analysis and the OCaml native compiler and report the findings from the respective experiments. Finally, we discuss related work (Section 6) and conclude.

2 Symbolic Resource Analysis

The first ingredient for connecting the source-level resource analysis with compiled code is an extension of RaML we call symbolic resource analysis. Before describing the technique, we present an overview of symbolic resource analysis and its applications through an example. Consider the two OCaml functions in Figure 1, defined using the auxiliary function fold. Both take as an argument an integer list and return a pair of the count and the sum of the elements. The first function, countsum1, makes two passes over the list, counting the elements, then summing them, and finally returning a pair. The second, countsum2, computes both results in one pass.

    let rec fold f b l =
      match l with
      | [] -> b
      | x::xs -> f (fold f b xs) x

    let countsum1 l =
      let count = fold (fun c _ -> c + 1) 0 in
      let sum = fold (fun s n -> s + n) 0 in
      (count l, sum l)

    let countsum2 l =
      fold (fun (count, sum) n ->
          (count + 1, sum + n))
        (0, 0)
        l

Figure 1. Two implementations of the countsum function

RaML allows us to compare the two implementations based on how many list operations they perform by instrumenting fold with a "tick" annotation indicating that it performs one list operation (a pattern match).

    let rec fold f b l =
      (Raml.tick (1.0);
       match l with
       | [] -> b
       | x::xs -> f (fold f b xs) x)

When the code is analyzed with this version of fold, RaML derives bounds on the number of ticks, and hence the number of list operations, performed by each implementation.
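To make the comparison concrete, here is a small self-contained harness (our back-of-the-envelope check, not output reported in the paper) that replaces Raml.tick with a runtime counter. Since the instrumented fold ticks once per call, a list of length n costs n + 1 ticks per pass, so countsum1 performs 2n + 2 list operations while countsum2 performs n + 1:

    (* Sketch: a dynamic counter standing in for Raml.tick; RaML derives
       the corresponding bounds statically. *)
    let ticks = ref 0
    let tick (_c : float) = incr ticks

    let rec fold f b l =
      tick 1.0;
      match l with
      | [] -> b
      | x :: xs -> f (fold f b xs) x

    let countsum1 l =
      let count = fold (fun c _ -> c + 1) 0 in
      let sum = fold (fun s n -> s + n) 0 in
      (count l, sum l)

    let countsum2 l =
      fold (fun (count, sum) n -> (count + 1, sum + n)) (0, 0) l

    (* Reset the counter, run the function, and report its tick count. *)
    let count_ticks f x = ticks := 0; ignore (f x); !ticks

    let () =
      let l = [1; 2; 3; 4] in                                        (* n = 4 *)
      Printf.printf "countsum1: %d ticks\n" (count_ticks countsum1 l);
      (* 2n + 2 = 10: two passes of n + 1 ticks each *)
      Printf.printf "countsum2: %d ticks\n" (count_ticks countsum2 l)
      (* n + 1 = 5: a single pass *)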