Is Dynamic Compilation Possible for Embedded Systems?

[Extended Abstract]

Henri-Pierre Charles and Victor Lomüller
Univ. Grenoble Alpes, F-38000 Grenoble, France
CEA, LIST, MINATEC Campus, F-38054 Grenoble, France
[email protected], [email protected]

SCOPES '15, June 01-03, 2015, Sankt Goar, Germany. DOI: 10.1145/2764967.2782785. HAL Id: cea-01162180, https://hal-cea.archives-ouvertes.fr/cea-01162180, submitted 9 June 2015.

ABSTRACT
JIT compilation and dynamic compilation are powerful techniques that allow the final code generation to be delayed until run-time. They bring many benefits: improved portability, virtual machine security, etc.

Unfortunately, the tools used for JIT compilation and dynamic compilation do not meet the classical requirements of embedded platforms: their memory footprint is large and code generation has a high overhead. In this paper we show how dynamic code specialization (JIT) can be used beneficially, in terms of execution speed and energy consumption, while keeping the memory footprint under control.
We base our approach on our tool deGoal and on LLVM, which we extended to produce lightweight runtime specializers from annotated LLVM programs. Benchmarks are manipulated and transformed into templates, and a specialization routine is built to instantiate the routines. This approach produces efficient specialization routines with a minimal energy consumption and memory footprint compared to a generic JIT application.

Through some benchmarks, we present its efficiency in terms of speed, energy and memory footprint. We show that, compared to static compilation, we can achieve a speed-up of 21% in execution speed as well as a 10% energy reduction, with a moderate memory footprint.

Categories and Subject Descriptors
D.2.8 [Software Engineering]: Metrics - complexity measures, performance measures

General Terms
JIT, Dynamic code generation

Keywords
Dynamic compilation, binary code generation

1. INTRODUCTION
The general compiler usage is shown in Figure 1. This figure shows the many steps from an initial idea to the running code using a data set on an architecture. The main drawback of this classical scheme is that all important decisions in the compilation process have to be taken at static compile time. Unfortunately, on modern processors performance is mainly driven by data characteristics (data size, data alignment, data values). JIT compilation is an attempt to delay code generation until run-time.

[Figure 1: Multiple compilation time for a static compiler]

Just In Time (JIT) compilers' efficiency has dramatically improved over the years. In the embedded domain, the increase in processor capabilities has enabled the use of such technologies. Usually a JIT compiler is used to ensure portability across platforms, but it can also be used to perform program specialization in order to improve performance levels (in general, to improve execution speed). An issue raised by the use of such infrastructures is their resource consumption.
By resources, we refer to memory usage, code generation time and energy consumption.

An alternative to JIT is the use of dedicated program specialization routines. These routines are dedicated to the specialization of particular code chunks (basic blocks, functions). They offer the benefit of performing their task at a low cost, by precomputing some of the specialization work during the static compilation stage.

The research in our team focuses on tools which help to improve performance and functionality on embedded platforms. Our final goal is shown in Figure 2. In this figure, we can choose, at static algorithmic time, which optimizations to implement and when the optimizations will take place (when the information is available).

[Figure 2: Multiple compilation time for a dynamic compiler]

In this article, we present two tools that are steps in this direction: deGoal, which helps to generate compilettes, and Kahuna, which automatically generates such specialization routines.

2. TOOLS DESCRIPTION
This section gives a bird's-eye view of the two tools, deGoal and Kahuna.

2.1 deGoal code generator
The deGoal infrastructure [2] integrates a language for kernel description and a small run-time environment for code generation. The tools used by the infrastructure are architecture agnostic; they only require a Python interpreter and an ANSI C compiler.

We use a dedicated language to describe the kernel generation at runtime. This language is mixed with C code, the latter allowing to control the code generation performed in compilettes. This combination of C and deGoal code allows us to efficiently design a code generator able to:

1. inject immediate values into the code being generated,
2. specialize code according to runtime data, e.g. by selecting instructions,
3. perform classical compilation optimizations such as loop unrolling or dead code elimination.

[Figure 3: deGoal compilation flow]

deGoal uses a pseudo-assembly language whose instructions are similar to a neutral RISC-like instruction set. As a consequence, code generation is:

1. very fast: 10 to 100 times faster than typical JITs or dynamic compilers, which allows code generation to be used inside the application and during the code execution, not in a virtual machine;
2. lightweight: the typical size of a compilette is only a few kilobytes, which allows its use on memory-constrained microcontrollers such as the Texas Instruments MSP430, which has only 512 bytes of available memory; standard code generators, such as LLC of the LLVM infrastructure, have memory footprints of megabytes, making their use impossible in this context;
3. able to produce compact code, as we generate only the needed specialized code and not all variants at the same time;
4. portable across processor families, i.e. a compilette is portable on RISC platforms or on GPU platforms;
5. able to perform cross-JIT applications, i.e. a compilette can run on one processor model, generate code for another processor and download the generated code.

More information can be found in the article [2].

The very high flexibility of deGoal has a cost: the programmer has to develop the compilette manually; the compilette is a small optimizer embedded into the application. The next tool, Kahuna, tries to remove this constraint.

2.2 Kahuna
Kahuna is a retargetable generator of runtime program specializers, independent from any high-level language, that can target embedded platforms. Its purpose is to produce specialized functions following the run-time context at the lowest possible cost. It was invented and developed by Victor Lomüller during his PhD [6].

An overview of the approach used to produce specializers is shown in Figure 4. A creator (point "C") annotates an LLVM program to identify (point 1):

• values unknown at compile time but constant at run-time,
• locations in the program that will trigger the specialization.

[Figure 4: Kahuna compilation flow]

This annotated program goes through the Kahuna compilation chain to produce a low-overhead specializer (point 2). To do so, the compilation chain produces templates and the specializer that will instantiate the function with those templates. At runtime, when the values are known, the specializer can be triggered to produce the specialized function.

Kahuna can work in two modes: out-of-place and in-place. Like most runtime program specializers, the out-of-place mode creates a dedicated memory space into which the specialized code is written. This mode has the advantage of allowing several specialized functions at the same time and strong runtime optimizations (such as dead code elimination, loop unrolling and strength reduction), but at the expense of a longer specialization time. The in-place mode directly modifies the template to create a valid function. This mode allows fewer runtime optimizations, but considerably lowers the specialization cost.

[...] speedup than Kahuna, but with a higher cost, which will be discussed later. The deGoal version gives a higher speedup (27%) because it contains its own code generator.

Scheme           | Speedup (%)
-----------------|------------
LLVM + JIT       | 0
Kahuna           | 21
LLVM + JIT + Spe | 21
deGoal           | 27

Figure 5 presents the code generation time, in cycles, taken by the different approaches. It presents both the time taken by the code generators and the time taken per generated instruction. Note that the LLVM backend has a broader range of run-time capabilities than Kahuna; the high cost of the LLVM approach is a price paid for those capabilities.
