Pallene: a Statically Typed Companion Language for Lua

Hugo Musso Gualandi and Roberto Ierusalimschy
PUC-Rio
[email protected], [email protected]

ABSTRACT

The simplicity and flexibility of dynamic languages make them popular for prototyping and scripting, but the lack of compile-time type information makes it very challenging to generate efficient executable code.

Inspired by ideas from scripting, just-in-time compilers, and optional type systems, we are developing Pallene, a statically typed companion language to the Lua scripting language. Pallene is designed to be amenable to standard ahead-of-time compilation techniques, to interoperate seamlessly with Lua (even sharing its runtime), and to be familiar to Lua programmers.

In this paper, we compare the performance of the Pallene compiler against LuaJIT, a just in time compiler for Lua, and with C extension modules. The results suggest that Pallene can achieve similar levels of performance.

CCS CONCEPTS

• Software and its engineering → Scripting languages; Just-in-time compilers; Dynamic compilers; Imperative languages;

ACM Reference Format:
Hugo Musso Gualandi and Roberto Ierusalimschy. 2018. Pallene: A statically typed companion language for Lua. In XXII Brazilian Symposium on Programming Languages (SBLP 2018), September 20–21, 2018, SAO CARLOS, Brazil. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3264637.3264640

1 INTRODUCTION

The simplicity and flexibility of dynamic languages make them popular for prototyping and scripting, but the lack of compile-time type information makes it very challenging to generate efficient executable code. There are at least three approaches to improve the performance of dynamically typed languages: scripting, just-in-time compilation, and optional type systems.

The scripting approach [19] advocates the use of two separate languages to write a program: a low-level system language for the parts of the program that need good performance and that interact with the underlying operating system, and a high-level scripting language for the parts that need flexibility and ease of use. Its main advantage is that the programmer can choose the programming language most suited for each particular task. The main disadvantages are due to the large differences between the languages. That makes it hard to rewrite a piece of code from one language to the other and also adds run-time overhead in the API between the two language runtimes.

Just-in-time compilers [6] dynamically translate high level code into low-level machine code during the program execution, on demand. To maximize performance, JIT compilers may collect run-time information, such as function parameter types, and use that information to generate efficient specialized code. The most appealing aspect of JIT compilers for dynamic languages is that they can provide a large speedup without the need to rewrite the code in a different language. In practice, however, this speedup is not always guaranteed as programmers often need to rewrite their code anyway, using idioms and "incantations" that are more amenable to optimization [10].

As the name suggests, optional type systems allow programmers to partially add types to programs.¹ These systems combine the static typing and dynamic typing disciplines in a single language. From static types they seek better compile-time error checking, machine-checked lightweight documentation, and run-time performance; from dynamic typing they seek flexibility and ease of use. One of the selling points of optional type systems is that they promise a smooth transition from small dynamically typed scripts to larger statically typed applications [27]. The main challenge these systems face is that it is hard to design a type system that is at the same time simple, correct, and amenable to optimizations.

¹ Gualandi [9] presents a historical review of type systems for dynamic languages.
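To make the idea of partially adding types concrete, the Lua sketch below shows a small untyped function and, in comments, how the same function might look once optional annotations are added. The annotated syntax is only illustrative, loosely in the spirit of Typed Lua or Pallene; the paper has not introduced Pallene's concrete syntax at this point.

```lua
-- Plain Lua: nothing tells a compiler that `xs` holds floats, so it
-- must emit generic code that checks value tags at every operation.
local function sum(xs)
  local s = 0.0
  for i = 1, #xs do
    s = s + xs[i]
  end
  return s
end

print(sum({1.5, 2.5, 3.0}))  -- prints 7.0 (or 7, depending on the Lua version)

-- With optional types (hypothetical annotation syntax, shown only as
-- a comment), the same function can declare what it expects, giving
-- the compiler something to check and to specialize against:
--
--   function sum(xs: {float}): float
--     local s: float = 0.0
--     for i = 1, #xs do
--       s = s + xs[i]
--     end
--     return s
--   end
```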
Both scripting and optional types assume that programmers need to restrict the dynamism of their code when they seek better performance. Although in theory JIT compilers do not require this, in practice programmers also need to restrict themselves to achieve maximum performance. Realizing how these self-imposed restrictions result in the creation of vaguely defined language subsets, and how restricting dynamism seems unavoidable, we asked ourselves: what if we accepted the restrictions and defined a new programming language based on them? By focusing on this "well behaved" subset and making it explicit, instead of trying to optimize or type a dynamic language in its full generality, we would be able to drastically simplify the type system and the compiler.

To study this question, we are developing the Pallene programming language, a statically typed companion to Lua. Pallene is intended to act as a system language counterpart to Lua's scripting, but with better interoperability than existing static languages. To avoid the complexity of JIT compilation, Pallene should be amenable to standard ahead-of-time compiler optimization techniques. To minimize the run-time mismatch, Pallene should use Lua's data representation and garbage collector. To minimize the conceptual mismatch, Pallene should be familiar to Lua programmers, syntactically and semantically.

In the next section of this paper, we overview the main approaches currently used to tackle the performance problems of dynamic languages. In Section 3, we describe how we designed Pallene, aiming to combine desirable properties from those approaches. In Section 4, we discuss how our goals for Pallene affected its implementation. In Section 5, we evaluate the performance of our prototype implementation of Pallene on a set of micro benchmarks, comparing it with the reference Lua implementation and with LuaJIT [20], a state of the art JIT compiler for Lua. This evaluation suggests that it is possible to produce efficient executable code for Pallene programs. In the last two sections of this paper we compare Pallene with related research in type systems and optimization for dynamic languages, and we discuss avenues for future work.

2 OPTIMIZING SCRIPTING LANGUAGES

In this section we discuss the existing approaches to optimizing dynamic languages that we mentioned in the introduction.

2.1 Scripting

One way to overcome the slowness of dynamic languages is to avoid them for performance-sensitive code. Dynamic scripting languages are often well-suited for a multi-language architecture, where a statically typed low-level system language is combined with a flexible dynamically typed scripting language, a style of programming that has been championed by John Ousterhout [19].

Lua has been designed from the start with scripting in mind [13] and many applications that use Lua follow this advice when the code is mainly about low-level operations that are easy to express in C, such as doing arithmetic and calling external libraries. Another obstacle to this suggestion is that it is hard to estimate in advance both the costs of rewriting the code and the performance benefits to be achieved by the change. Often, the gain in performance is not what one would expect: As we will see in Section 5, the overhead of converting data from one runtime to the other can cancel out the inherent gains of switching to a static language.
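As a deliberately simplified illustration of this architecture, the Lua sketch below keeps the program logic in Lua and delegates a numeric kernel to a compiled extension module. The module name `fastsum` and its `sum` function are hypothetical; the point is that every value crossing the boundary must be converted between Lua's tagged representation and the static language's native types, which is exactly the overhead discussed above.

```lua
-- Hypothetical scripting split: "fastsum" stands in for a C extension
-- module; if it is not available, we fall back to plain Lua.
local ok, fastsum = pcall(require, "fastsum")

local function sum_lua(xs)           -- pure-Lua fallback
  local s = 0.0
  for i = 1, #xs do s = s + xs[i] end
  return s
end

local xs = {}
for i = 1, 100000 do xs[i] = i * 0.5 end

-- Each element read by the C side goes through the Lua/C API, so the
-- per-value conversion cost can eat into the gain of using C at all.
local total = ok and fastsum.sum(xs) or sum_lua(xs)
print(total)
```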
2.2 JIT Compilers

Just-in-time (JIT) compilers are the state of the art in dynamic language optimization. A JIT compiler initially executes the program without any optimization, observes its behavior at runtime, and then, based on this, generates highly specialized and optimized executable code. For example, if it observes that some code is always operating on values of type double, the compiler will optimistically compile a version of this code that is specialized for that type. It will also insert tests (guards) in the beginning of the code that jump back to a less optimized generic version in case some value is not of type double as expected.

JIT compilers are broadly classified as either method-based or trace-based [7], according to their main unit of compilation. In method-based JITs, the unit of compilation is one function or subroutine. In trace-based JITs, the unit of compilation is a linear trace of the program execution, which may cross over function boundaries. Trace compilation allows for a more embeddable implementation and is better at compiling across abstraction boundaries. However, it has trouble optimizing programs which contain unpredictable branch statements. For this reason, most JIT compilers now tend to use the method-based approach, with the notable exceptions of LuaJIT [20] and the RPython Framework [5].

JIT compilers detect types at run-time because inferring types at compile time is very hard and usually produces less specific results. Additionally, in many dynamic languages data types are created at run-time and there are no data definition declarations or type annotations for the compiler to take advantage of.
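The snippet below is a minimal Lua sketch of the situation described at the start of this subsection: a hot loop in which every value has the same type. Under a tracing JIT such as LuaJIT, the compiled code can assume the elements are doubles and keep only a cheap guard for that assumption; the comments describe that behavior informally and are not taken from the paper.

```lua
-- A monomorphic hot loop: xs[i] and ys[i] are always numbers, so a
-- JIT can emit a version specialized for doubles, protected by a
-- guard that falls back to generic interpreted code if it ever fails.
local function dot(xs, ys, n)
  local acc = 0.0
  for i = 1, n do
    acc = acc + xs[i] * ys[i]
  end
  return acc
end

local xs, ys = {}, {}
for i = 1, 1000 do
  xs[i] = i * 0.5
  ys[i] = i * 2.0
end
print(dot(xs, ys, 1000))

-- Storing a value of a different type in xs (say, a string or nil)
-- would make the guard fail on that iteration and force the JIT to
-- abandon the specialized code for the slower generic path.
```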