Denotational Translation Validation

Total Pages: 16

File Type: PDF, Size: 1020 KB

Citation: Govereau, Paul. 2012. Denotational Translation Validation. Doctoral dissertation, Harvard University.
Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:10121982
Terms of Use: This article was downloaded from Harvard University's DASH repository, and is made available under the terms and conditions applicable to Other Posted Material, as set forth at http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA
© 2012 Paul Govereau. All rights reserved.
Thesis advisor: Greg Morrisett. Author: Paul Govereau.

Abstract

In this dissertation we present a simple and scalable system for validating the correctness of low-level program transformations. Proving that program transformations are correct is crucial to the development of security-critical software tools. We achieve a simple and scalable design by compiling sequential low-level programs to synchronous data-flow programs. These data-flow programs are a denotation of the original programs, representing all of the relevant aspects of the program semantics. We then check that the two denotations are equivalent, which implies that the program transformation is semantics-preserving. Our denotations are computed by means of symbolic analysis. In order to achieve our design, we have extended symbolic analysis to arbitrary control-flow graphs. To this end, we have designed an intermediate language called Synchronous Value Graphs (SVG), which is capable of representing our denotations for arbitrary control-flow graphs; we have built an algorithm for computing SVG from normal assembly language; and we have given a formal model of SVG which allows us to simplify and compare denotations. Finally, we report on our experiments with LLVM M.D., a prototype denotational translation validator for the LLVM optimization framework.

Contents

Title Page ... i
Abstract ... iii
Table of Contents ... iv
List of Figures ... vii
List of Tables ... ix
1 Introduction ... 1
1.1 Design ... 6
1.2 A brief history of translation validation ... 10
1.3 Outline of this dissertation ... 13
2 Overview ... 16
2.1 Introduction ... 17
2.2 Normalizing translation validation ... 18
2.3 Validation by Example ... 21
2.3.1 Basic Blocks ... 21
2.3.2 Extended Basic Blocks ... 29
2.3.3 Loops ... 32
2.4 Normalization ... 36
2.4.1 Efficiency ... 41
2.4.2 Extended Example ... 43
3 Assembly Language ... 48
3.1 Assembly Language Syntax ... 49
3.1.1 Working with Assembly Language ... 54
3.2 Target Language ... 56
3.2.1 Target Language Syntax ... 57
3.2.2 Translation ... 60
3.3 Side Effects ... 62
3.3.1 Modeling Memory ... 65
3.4 Completing the Semantics ... 66
3.4.1 Translation Validation ... 68
3.4.2 λ-Graphs ... 71
4 Synchronous Value Graphs ... 77
4.0.3 SVG Syntax ... 80
4.1 Categorical Semantics ... 91
4.1.1 Motivation ... 91
4.2 Generalizing Monads ... 100
4.3 Arrows ... 104
4.3.1 Relationship to Monads ... 109
4.3.2 Choice ... 112
4.3.3 Loops ... 114
4.3.4 Denotational Model ... 116
5 Mechanized Semantics and Rewrite Rules ... 122
5.1 SVG Category ... 123
5.1.1 Co-inductive Proof Techniques ... 127
5.1.2 Category Lemmas ... 131
5.2 SVG Arrow ... 132
5.2.1 Choice ... 135
5.2.2 Loops ... 137
6 Implementation ... 139
6.1 Introduction ... 140
6.1.1 Gated SSA Form ... 140
6.1.2 Computing Gated SSA ... 145
6.2 Background ... 146
6.2.1 Relation Between The Dominator Tree and The Dominance Frontier ... 149
6.2.2 Gating Paths ... 151
6.3 Gating as a single-source path-expression problem ... 154
6.3.1 From Path Expressions to Gating Functions ... 159
6.4 Path Expressions via Gaussian Elimination ... 165
6.4.1 Path Sequences ... 167
6.4.2 Front- and Back-solving ... 170
6.4.3 Decomposition ... 172
6.4.4 Examples ... 174
6.4.5 Simple Optimizations ... 179
6.5 Decomposition using dominators ... 180
6.5.1 Gating Monad ... 182
6.5.2 Path Compression ... 185
6.5.3 Block Processing Order ... 187
6.5.4 Main Algorithm ... 189
6.6 Irreducible graphs ... 193
6.6.1 The Derived Graph ... 195
6.6.2 Modifications to Main Algorithm ... 197
7 Experimental Evaluation ... 202
7.1 Experimental Setup ... 205
7.1.1 Pipeline information ... 207
7.2 Compiler User Experiments ... 208
7.2.1 Pipeline Results ... 208
7.2.2 Validated Optimization ... 210
7.2.3 Validation Time ... 215
7.3 Compiler Developer Experiments ... 217
7.3.1 Testing Individual Optimizations ... 218
7.3.2 Rewrite Rules ... 221
7.3.3 Improving Results with Additional Rewrite Rules ... 226
8 Conclusion ... 235
8.0.4 Discussion ... 236
8.1 Future work ... 238
A Data-flow Semantics ... 248
A.1 Operational Semantics ... 248
A.2 Static Semantics ... 248

List of Figures

2.1 LLVM M.D. from a bird's eye view ... 20
2.2 Extraction of loop iterations ... 32
2.3 Representation of while loops ... 35
2.4 Normalization of Large Example ... 46
2.5 Normalization of Large Example (continued) ... 47
3.1 Syntax of Assembly Language Functions ... 49
3.2 Assembly Language Type System ... 50
3.3 Assembly Language Values ... 51
3.4 Assembly Language Instructions ... 51
3.5 Assembly Language Operations ... 52
3.6 Loop Example ... 53
3.7 Syntax of the Extended Assembly Language ... 58
3.8 Graph representation of isEven function ... 72
4.1 Syntax of Synchronous Value Graphs ... 80
5.1 Rewrite lemmas corresponding to composition ... 130
5.2 Lemmas for first properties ... 134
5.3 Lemmas for left properties ... 136
5.4 Lemmas for left properties ... 138
6.1 Gated Assembly Language Instructions ... 141
6.2 Loop Example ... 143
6.3 SSA and GSA for Loop Example ... 144
6.4 A Control-flow Graph ... 147
6.5 Dominator Tree for Control-flow Graph in Figure 6.4 ... 148
6.6 Example: a conditional statement ... 174
6.7 Control-flow graph for a simple loop ... 177
6.8 Main Algorithm ... 190
6.9 Irreducible graph and its dominator tree ... 193
6.10 Graph derived from Control-flow Graph in Figure 6.9 ... 196
7.1 Validation results for optimization pipeline ... 209
7.2 A validated optimizer using LLVM M.D. and an off-the-shelf optimizer ... 211
7.3 Comparison of validated and unvalidated AES benchmark ... 214
7.4 Validation time normalized to compilation time ... 216
7.5 Validator results for individual optimizations ... 219
7.6 Validator results for individual optimizations (continued) ... 220
7.7 Results for GVN optimization ... 221
7.8 LICM ... 224
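To make the normalize-and-compare idea concrete, here is a toy validator sketch in C. It is an invented illustration, not the dissertation's algorithm: real SVG denotations cover memory, side effects, and arbitrary control flow, whereas this sketch only normalizes straight-line integer arithmetic, and all names in it (Expr, normalize, equal) are hypothetical.

```c
/* Toy "normalize and compare" validator: source and optimized programs
 * are represented as symbolic expression trees standing in for value
 * graphs; both are normalized by rewriting, and the transformation is
 * accepted when the normal forms coincide. */
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

typedef enum { VAR, CONST, ADD, MUL } Kind;

typedef struct Expr {
    Kind kind;
    long value;             /* CONST: the constant; VAR: variable id */
    struct Expr *l, *r;     /* ADD/MUL operands */
} Expr;

static Expr *mk(Kind k, long v, Expr *l, Expr *r) {
    Expr *e = malloc(sizeof *e);
    e->kind = k; e->value = v; e->l = l; e->r = r;
    return e;
}
static Expr *num(long n)           { return mk(CONST, n, NULL, NULL); }
static Expr *var(long id)          { return mk(VAR, id, NULL, NULL); }
static Expr *add(Expr *a, Expr *b) { return mk(ADD, 0, a, b); }

/* Rewrite to a normal form: fold constants, put constants on the
 * right, and reassociate so folded constants meet each other. */
static Expr *normalize(Expr *e) {
    if (e->kind == VAR || e->kind == CONST) return e;
    e->l = normalize(e->l);
    e->r = normalize(e->r);
    if (e->l->kind == CONST && e->r->kind == CONST)      /* fold */
        return num(e->kind == ADD ? e->l->value + e->r->value
                                  : e->l->value * e->r->value);
    if (e->l->kind == CONST) {                 /* canonical order */
        Expr *t = e->l; e->l = e->r; e->r = t;
    }
    if (e->r->kind == CONST && e->l->kind == e->kind &&
        e->l->r->kind == CONST)    /* (a op c1) op c2 -> a op (c1 op c2) */
        return normalize(mk(e->kind, 0, e->l->l,
                            mk(e->kind, 0, e->l->r, e->r)));
    return e;
}

static int equal(const Expr *a, const Expr *b) {
    if (a->kind != b->kind) return 0;
    if (a->kind == VAR || a->kind == CONST) return a->value == b->value;
    return equal(a->l, b->l) && equal(a->r, b->r);
}

int main(void) {
    Expr *src = add(add(var(0), num(1)), num(2));  /* r = (x + 1) + 2 */
    Expr *opt = add(num(3), var(0));               /* r = 3 + x       */
    assert(equal(normalize(src), normalize(opt)));
    puts("denotations agree: transformation validated");
    return 0;
}
```

The point mirrors the abstract: rather than proving the rewrite (x + 1) + 2 into 3 + x directly, both programs are mapped to a common normal form and the normal forms are compared for equality.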
Recommended publications
  • CSE 582 – Compilers
CSE P 501 – Compilers: Optimizing Transformations. Hal Perkins, Autumn 2011. © 2002-11 Hal Perkins & UW CSE.
Agenda: a sampler of typical optimizing transformations; mostly a teaser for later, particularly once we've looked at analyzing loops.
Role of Transformations: data-flow analysis discovers opportunities for code improvement; the compiler must rewrite the code (IR) to realize these improvements. A transformation may reveal additional opportunities for further analysis and transformation, but may also block opportunities by obscuring information.
Organizing Transformations in a Compiler: typically the middle end consists of many individual transformations that filter the IR and produce rewritten IR. There is no formal theory for the order in which to apply them, only rules of thumb and best practices. Some transformations can be profitably applied repeatedly, particularly if other transformations expose more opportunities.
A Taxonomy: machine-independent transformations (realized profitability may actually depend on the machine architecture, but they are typically implemented without considering this) and machine-dependent transformations (most of the machine-dependent code is in instruction selection and scheduling and in register allocation, but some machine-dependent code belongs in the optimizer).
Machine-Independent Transformations: dead code elimination, code motion, specialization, strength reduction (the last of these is sketched below).
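A hand-applied before/after pair for strength reduction (my own example, not from the slides): the induction-variable multiply i * 12 is replaced by an accumulator that is bumped by 12 each iteration.

```c
/* Strength reduction by hand: the multiply in f becomes an add in
 * f_reduced, with t maintaining the invariant t == i * 12. */
#include <assert.h>

long f(int n) {
    long s = 0;
    for (int i = 0; i < n; i++)
        s += i * 12;              /* one multiply per iteration */
    return s;
}

long f_reduced(int n) {
    long s = 0, t = 0;            /* t tracks i * 12 */
    for (int i = 0; i < n; i++) {
        s += t;
        t += 12;                  /* add replaces the multiply */
    }
    return s;
}

int main(void) {
    for (int n = 0; n < 100; n++)
        assert(f(n) == f_reduced(n));
    return 0;
}
```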
  • ICS803 Elective – III Multicore Architecture Teacher Name: Ms
ICS803 Elective – III: Multicore Architecture. Teacher: Ms. Raksha Pandey. Course structure: L 3, T 1, P 0, 4 credits. Prerequisite:
Course content:
Unit I, Multi-core Architectures: introduction to multi-core architectures; issues involved in writing code for multi-core architectures; virtual memory: VM addressing, VA-to-PA translation, page faults, TLBs; parallel computers; instruction-level parallelism (ILP) vs. thread-level parallelism (TLP); performance issues; OpenMP and other message-passing libraries; threads, mutexes, etc.
Unit II, Multi-threaded Architectures: brief introduction to the cache hierarchy: addressing a cache, cache hierarchy, states of a cache line, inclusion policy, TLB access, memory-op latency, MLP, the memory wall, communication latency; shared-memory multiprocessors: general architectures and the problem of cache coherence; synchronization primitives: atomic primitives; locks: TTS, ticket, array (a TTS sketch follows this outline); barriers: central and tree; performance implications in shared-memory programs; chip multiprocessors: why CMP (Moore's law, wire delay), shared L2 vs. tiled CMP, core complexity, power/performance; snoopy coherence: invalidate vs. update, MSI, MESI, MOESI, MOSI, performance trade-offs, pipelined snoopy bus design; memory consistency models: SC, PC, TSO, PSO, WO/WC, RC; chip multiprocessor case studies: Intel Montecito, dual-core Pentium 4, IBM Power4, Sun Niagara.
Unit III, Compiler Optimization Issues: code optimizations: copy propagation, dead code elimination; loop optimizations: loop unrolling, induction-variable simplification, loop jamming, loop unswitching; techniques to improve detection of parallelism: scalar processors, spatial locality, temporal locality, vector machines, strip mining, the shared-memory model, SIMD architecture, Dopar loops, Dosingle loops.
Unit IV, Control Flow Analysis: control-flow analysis, flow graphs, loops in flow graphs, loop detection, approaches to control-flow analysis, reducible flow graphs, node splitting.
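A minimal sketch of the TTS (test-and-test-and-set) lock named in Unit II, written with C11 atomics and threads; this is my own illustration, not course material, and it assumes a toolchain that ships <threads.h>.

```c
/* TTS spinlock: spin on a plain load (serviced from the local cache)
 * and attempt the expensive atomic exchange only when the lock looks
 * free, which cuts coherence traffic compared to plain test-and-set. */
#include <assert.h>
#include <stdatomic.h>
#include <threads.h>

static atomic_bool locked;   /* false: lock is free */
static long counter;         /* protected by the lock */

static void tts_acquire(void) {
    for (;;) {
        while (atomic_load_explicit(&locked, memory_order_relaxed))
            ;                                     /* test: local spin */
        if (!atomic_exchange_explicit(&locked, true,
                                      memory_order_acquire))
            return;                               /* test-and-set won */
    }
}

static void tts_release(void) {
    atomic_store_explicit(&locked, false, memory_order_release);
}

static int worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        tts_acquire();
        counter++;                                /* critical section */
        tts_release();
    }
    return 0;
}

int main(void) {
    thrd_t t[4];
    for (int i = 0; i < 4; i++) thrd_create(&t[i], worker, NULL);
    for (int i = 0; i < 4; i++) thrd_join(t[i], NULL);
    assert(counter == 4 * 100000);
    return 0;
}
```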
  • Fast-Path Loop Unrolling of Non-Counted Loops to Enable Subsequent Compiler Optimizations∗
Fast-Path Loop Unrolling of Non-Counted Loops to Enable Subsequent Compiler Optimizations. David Leopoldseder (Johannes Kepler University Linz, Austria), Roland Schatz (Oracle Labs, Linz, Austria), Lukas Stadler (Oracle Labs, Linz, Austria), Manuel Rigger (Johannes Kepler University Linz, Austria), Thomas Würthinger (Oracle Labs, Zurich, Switzerland), Hanspeter Mössenböck (Johannes Kepler University Linz, Austria). In 15th International Conference on Managed Languages & Runtimes (ManLang'18), September 12–14, 2018, Linz, Austria. ACM, New York, NY, USA, 13 pages. https://doi.org/10.1145/3237009.3237013
ABSTRACT: Java programs can contain non-counted loops, that is, loops for which the iteration count can neither be determined at compile time nor at run time. State-of-the-art compilers do not aggressively optimize them, since unrolling non-counted loops often involves duplicating a loop's exit condition as well, which thus only improves run-time performance if subsequent compiler optimizations can optimize the unrolled code. This paper presents an unrolling approach for non-counted loops that uses simulation at run time to determine whether unrolling such loops enables subsequent compiler optimizations. [...]
1 INTRODUCTION: Generating fast machine code for loops depends on a selective application of different loop optimizations on the main optimizable parts of a loop: the loop's exit condition(s), the back edges, and the loop body. All these parts must be optimized to generate optimal code for a loop.
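The cost the abstract mentions, duplicating the loop's exit condition, is visible in a small before/after picture (my own C example, not the paper's Graal-based implementation):

```c
/* A non-counted loop: the trip count depends on the data, so each
 * unrolled copy of the body must carry its own copy of the exit test. */
#include <assert.h>

int count_until_zero(const int *a) {
    int n = 0;
    while (a[n] != 0)             /* exit condition unknown statically */
        n++;
    return n;
}

int count_until_zero_unrolled(const int *a) {
    int n = 0;
    for (;;) {
        if (a[n] == 0) return n;  /* first copy of the exit test */
        n++;
        if (a[n] == 0) return n;  /* duplicated exit test */
        n++;
    }
}

int main(void) {
    int xs[] = {5, 4, 3, 2, 1, 0};
    assert(count_until_zero(xs) == 5);
    assert(count_until_zero_unrolled(xs) == 5);
    return 0;
}
```

Unrolling alone buys little here; the paper's point is that the duplicated tests and bodies only pay off when later optimizations can simplify them.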
  • Compiler Optimization for Configurable Accelerators Betul Buyukkurt Zhi Guo Walid A
Compiler Optimization for Configurable Accelerators. Betul Buyukkurt (Computer Science Department), Zhi Guo (Electrical Engineering Department), and Walid A. Najjar (Computer Science Department), University of California Riverside, Riverside, CA 92521.
ABSTRACT: ROCCC (Riverside Optimizing Configurable Computing Compiler) is an optimizing C-to-HDL compiler targeting FPGA and CSOC (Configurable System On a Chip) architectures. The ROCCC system is built on the SUIF-MACHSUIF compiler infrastructure. Our system first identifies frequently executed kernel loops inside programs and then compiles them to VHDL after optimizing the kernels to make the best use of FPGA resources. This paper presents an overview of the ROCCC project as well as the optimizations performed inside the ROCCC compiler.
1. INTRODUCTION: FPGAs (Field Programmable Gate Arrays) can boost software performance due to the large amount of parallelism they offer. [...] such as constant propagation, constant folding, and dead code elimination, which enable other optimizations. Then, loop-level optimizations such as loop unrolling, loop-invariant code motion, loop fusion, and other such transforms are applied. This paper is organized as follows. The next section discusses related work. Section 3 gives an in-depth description of the ROCCC system. SUIF2-level optimizations are described in Section 4, followed by the compilation done at the MACHSUIF level in Section 5. Section 6 mentions our early results. Finally, Section 7 concludes the paper.
2. RELATED WORK: There are several projects and publications on translating C or other high-level languages to different HDLs. Streams- [...]
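Among the loop-level transforms listed above, loop fusion is the simplest to picture. A generic illustration (not ROCCC output): two passes over the same index range merge into one, so each element is touched once.

```c
/* Loop fusion: the two loops in separate() become one loop in fused(),
 * improving locality; legal here because c[i] only needs the b[i]
 * produced in the same iteration. */
#include <assert.h>

#define N 8

void separate(const int *a, int *b, int *c) {
    for (int i = 0; i < N; i++) b[i] = a[i] + 1;
    for (int i = 0; i < N; i++) c[i] = b[i] * 2;
}

void fused(const int *a, int *b, int *c) {
    for (int i = 0; i < N; i++) {
        b[i] = a[i] + 1;
        c[i] = b[i] * 2;
    }
}

int main(void) {
    int a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    int b1[N], c1[N], b2[N], c2[N];
    separate(a, b1, c1);
    fused(a, b2, c2);
    for (int i = 0; i < N; i++)
        assert(b1[i] == b2[i] && c1[i] == c2[i]);
    return 0;
}
```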
  • “Scrap Your Boilerplate” Reloaded
“Scrap Your Boilerplate” Reloaded. Ralf Hinze¹, Andres Löh¹, and Bruno C. d. S. Oliveira². ¹Institut für Informatik III, Universität Bonn, Römerstraße 164, 53117 Bonn, Germany, {ralf,loeh}@informatik.uni-bonn.de. ²Oxford University Computing Laboratory, Wolfson Building, Parks Road, Oxford OX1 3QD, UK, [email protected]
Abstract. The paper “Scrap your boilerplate” (SYB) introduces a combinator library for generic programming that offers generic traversals and queries. Classically, support for generic programming consists of two essential ingredients: a way to write (type-)overloaded functions, and independently, a way to access the structure of data types. SYB seems to lack the second. As a consequence, it is difficult to compare with other approaches such as PolyP or Generic Haskell. In this paper we reveal the structural view that SYB builds upon. This allows us to define the combinators as generic functions in the classical sense. We explain the SYB approach in this changed setting from the ground up, and use the understanding gained to relate it to other generic programming approaches. Furthermore, we show that the SYB view is applicable to a very large class of data types, including generalized algebraic data types.
1 Introduction. The paper “Scrap your boilerplate” (SYB) [1] introduces a combinator library for generic programming that offers generic traversals and queries. Classically, support for generic programming consists of two essential ingredients: a way to write (type-)overloaded functions, and independently, a way to access the structure of data types. SYB seems to lack the second, because it is entirely based on combinators.
  • Manydsl One Host for All Language Needs
ManyDSL: One Host for All Language Needs. Piotr Danilewski. March 2017. A dissertation submitted towards the degree (Dr.-Ing.) of the Faculty of Mathematics and Computer Science of Saarland University, Saarbrücken.
Dean: Prof. Dr. Frank-Olaf Schreyer. Date of colloquium: June 6, 2017. Examination board: chairman Prof. Dr. Sebastian Hack; reviewers Prof. Dr.-Ing. Philipp Slusallek and Prof. Dr. Reinhard Wilhelm; scientific assistant Dr. Tim Dahmen.
Statement: I hereby declare that this dissertation is my own original work except where otherwise indicated. All data or concepts drawn directly or indirectly from other sources have been correctly acknowledged. This dissertation has not been submitted in its present or similar form to any other academic institution either in Germany or abroad for the award of any degree. Saarbrücken, June 6, 2017 (Piotr Danilewski).
Declaration of Consent: Herewith I agree that my thesis will be made available through the library of the Computer Science Department. Saarbrücken, June 6, 2017 (Piotr Danilewski).
Abstract (translated from the German): Languages shape the way we think. This holds for spoken languages as well as for programming languages. As computers become ever more important in every aspect of human life, the need grows to express correspondingly new concepts in programming languages. However, for our way of thinking to keep evolving, programming languages must evolve as well. But what tools are there for creating and extending programming languages? How can developers be encouraged to define their own languages, best suited to the domain they work in? Today there are two methods. [...]
  • On Specifying and Visualising Long-Running Empirical Studies
On Specifying and Visualising Long-Running Empirical Studies. Peter Y. H. Wong and Jeremy Gibbons, Computing Laboratory, University of Oxford, United Kingdom.
Abstract. We describe a graphical approach to formally specifying temporally ordered activity routines designed for calendar scheduling. We introduce a workflow model, OWorkflow, for constructing specifications of long-running empirical studies such as clinical trials, in which observations for gathering data are performed at strict specific times. These observations, either manually performed or automated, are often interleaved with scientific procedures, and their descriptions are recorded in a calendar for scheduling and monitoring to ensure each observation is carried out correctly at a specific time. We also describe a bidirectional transformation between OWorkflow and a subset of Business Process Modelling Notation (BPMN), by which graphical specification, simulation, automation and formalisation are made possible.
1 Introduction. A typical long-running empirical study consists of a series of scientific procedures interleaved with a set of observations performed over a period of time; these observations may be manually performed or automated, and are usually recorded in a calendar schedule. An example of a long-running empirical study is a clinical trial, where observations, specifically case report form submissions, are performed at specific points in the trial. In such examples, observations are interleaved with clinical interventions on patients; precise descriptions of these observations are then recorded in a patient study calendar similar to the one shown in Figure 1(a). Currently, study planners such as trial designers supply information about observations either textually or by inputting textual information and selecting options on XML-based data entry forms [2], similar to the one shown in Figure 1(b).
  • Lecture Notes in Computer Science 6120 Commenced Publication in 1973 Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan Van Leeuwen
Lecture Notes in Computer Science 6120. Commenced publication in 1973. Founding and former series editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen.
Editorial Board: David Hutchison (Lancaster University, UK); Takeo Kanade (Carnegie Mellon University, Pittsburgh, PA, USA); Josef Kittler (University of Surrey, Guildford, UK); Jon M. Kleinberg (Cornell University, Ithaca, NY, USA); Alfred Kobsa (University of California, Irvine, CA, USA); Friedemann Mattern (ETH Zurich, Switzerland); John C. Mitchell (Stanford University, CA, USA); Moni Naor (Weizmann Institute of Science, Rehovot, Israel); Oscar Nierstrasz (University of Bern, Switzerland); C. Pandu Rangan (Indian Institute of Technology, Madras, India); Bernhard Steffen (TU Dortmund University, Germany); Madhu Sudan (Microsoft Research, Cambridge, MA, USA); Demetri Terzopoulos (University of California, Los Angeles, CA, USA); Doug Tygar (University of California, Berkeley, CA, USA); Gerhard Weikum (Max-Planck Institute of Computer Science, Saarbruecken, Germany).
Claude Bolduc, Jules Desharnais, Béchir Ktari (Eds.): Mathematics of Program Construction, 10th International Conference, MPC 2010, Québec City, Canada, June 21-23, 2010, Proceedings.
Volume editors: Claude Bolduc, Jules Desharnais, Béchir Ktari, Université Laval, Département d'informatique et de génie logiciel, Pavillon Adrien-Pouliot, 1065 Avenue de la Médecine, Québec, QC, G1V 0A6, Canada. E-mail: {Claude.Bolduc, Jules.Desharnais, Bechir.Ktari}@ift.ulaval.ca
Library of Congress Control Number: 2010927075. CR Subject Classification (1998): F.3, D.2, F.4.1, D.3, D.2.4, D.1. LNCS Sublibrary: SL 1 – Theoretical Computer Science and General Issues. ISSN 0302-9743. ISBN-10 3-642-13320-7. ISBN-13 978-3-642-13320-6. This work is subject to copyright.
  • Generalizing Loop-Invariant Code Motion in a Real-World Compiler
Imperial College London, Department of Computing. Generalizing loop-invariant code motion in a real-world compiler. Author: Paul Colea. Supervisor: Fabio Luporini. Co-supervisor: Prof. Paul H. J. Kelly. MEng Computing Individual Project, June 2015.
Abstract: Motivated by the perpetual goal of automatically generating efficient code from high-level programming abstractions, compiler optimization has developed into an area of intense research. Apart from general-purpose transformations which are applicable to all or most programs, many highly domain-specific optimizations have also been developed. In this project, we extend such a domain-specific compiler optimization, initially described and implemented in the context of finite element analysis, to one that is suitable for arbitrary applications. Our optimization is a generalization of loop-invariant code motion, a technique which moves invariant statements out of program loops. The novelty of the transformation is due to its ability to avoid more redundant recomputation than normal code motion, at the cost of additional storage space. This project provides a theoretical description of the above technique which is fit for general programs, together with an implementation in LLVM, one of the most successful open-source compiler frameworks. We introduce a simple heuristic-driven profitability model which manages to successfully safeguard against potential performance regressions, at the cost of missing some speedup opportunities. We evaluate the functional correctness of our implementation using the comprehensive LLVM test suite, passing all of its 497 whole-program tests. The results of our performance evaluation using the same set of tests reveal that generalized code motion is applicable to many programs, but that consistent performance gains depend on an accurate cost model.
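The storage-for-recomputation trade described in the abstract can be pictured with a generic example (my own sketch, not the project's LLVM pass). In the product f(i) * g(j), the factor g(j) varies with the inner index j, so classic LICM cannot hoist it out of the inner loop, yet it is recomputed for every outer iteration i; precomputing the whole vector of g values removes that redundancy at the cost of extra storage.

```c
/* Generalized loop-invariant code motion: hoist g(j) into the array gt
 * (extra storage), while f(i) is hoisted by ordinary LICM. */
#include <assert.h>

#define N 64

static int f(int i) { return i * i + 3; }  /* stand-ins for pure, */
static int g(int j) { return 5 * j + 1; }  /* costly expressions  */

long original(void) {
    long s = 0;
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            s += f(i) * g(j);   /* each g(j) recomputed for every i */
    return s;
}

long generalized(void) {
    int gt[N];
    for (int j = 0; j < N; j++)
        gt[j] = g(j);           /* hoisted once into storage */
    long s = 0;
    for (int i = 0; i < N; i++) {
        int fi = f(i);          /* classic LICM hoist */
        for (int j = 0; j < N; j++)
            s += fi * gt[j];
    }
    return s;
}

int main(void) {
    assert(original() == generalized());
    return 0;
}
```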
  • Domain-Specific Languages for Modeling and Simulation
Domain-specific Languages for Modeling and Simulation. Dissertation for the academic degree Doktor-Ingenieur (Dr.-Ing.) of the Faculty of Computer Science and Electrical Engineering of the University of Rostock, submitted by Tom Warnke, born 25.01.1988 in Malchin, of Rostock. Rostock, 14 September 2020. https://doi.org/10.18453/rosdok_id00002966. This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. Reviewers: Prof. Dr. Adelinde M. Uhrmacher (University of Rostock), Prof. Rocco De Nicola (IMT Lucca), Prof. Hans Vangheluwe (University of Antwerp). Submitted 14 September 2020; defended 8 January 2021.
Abstract: Simulation models and simulation experiments are increasingly complex. One way to handle this complexity is developing software languages tailored to specific application domains, so-called domain-specific languages (DSLs). This thesis explores the potential of employing DSLs in modeling and simulation. We study different DSL design and implementation techniques and illustrate their benefits for expressing simulation models as well as simulation experiments with several examples. Regarding simulation models, we focus on discrete-event models based on continuous-time Markov chains (CTMCs). Most of our work revolves around ML-Rules, a rule-based modeling language for biochemical reaction networks. First, we relate the expressive power of ML-Rules to other currently available modeling languages for this application domain. Then we define the abstract syntax and operational semantics for ML-Rules, mapping models to CTMCs in an unambiguous and precise way. Based on the formal definitions, we present two approaches to implement ML-Rules as a DSL. The core of both implementations is finding the matches for the patterns on the left side of ML-Rules' rules.
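Since the abstract centers on mapping models to CTMCs, a minimal flavor of what executing such a model means: a Gillespie-style simulation of a birth-death chain (a generic C sketch with invented rates, not ML-Rules).

```c
/* CTMC simulation: in state n, a birth fires at rate kb and a death at
 * rate kd*n; the holding time is exponential in the total rate, and the
 * next event is chosen with probability proportional to its rate. */
#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    const double kb = 2.0, kd = 0.1;
    const double t_end = 50.0;
    double t = 0.0;
    int n = 0;
    srand(42);
    while (t < t_end) {
        double birth = kb, death = kd * n;
        double total = birth + death;
        double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
        t += -log(u1) / total;           /* exponential holding time */
        double u2 = rand() / (double)RAND_MAX;
        if (u2 * total < birth) n++;     /* pick event by its rate */
        else if (n > 0) n--;
    }
    printf("population at t = %.0f: %d\n", t_end, n);
    return 0;
}
```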
  • Loop Transformations and Parallelization
Loop transformations and parallelization. Claude Tadonki, LAL/CNRS/IN2P3, University of Paris-Sud. December 2010.
Introduction: Most of the time, the most time-consuming part of a program is its loops, so loop optimization is critical in high-performance computing. Depending on the target architecture, the goals of loop transformations are: improving data reuse and data locality; efficient use of the memory hierarchy; reducing the overheads associated with executing loops; instruction pipelining; maximizing parallelism. Loop transformations can be performed at different levels: by the programmer, the compiler, or specialized tools. At a high level, some well-known transformations are commonly considered: loop interchange, loop (node) splitting, loop unswitching, loop reversal, loop fusion, loop inversion, loop skewing, loop fission, loop vectorization, loop blocking, loop unrolling, loop parallelization.
Dependence analysis: Extracting and analyzing the dependencies of a computation from its polyhedral model is a fundamental step toward loop optimization or scheduling. Definition: for a given variable V and given indexes I1, I2, if the computation of X(I1) requires the value of X(I2), then I1 − I2 is called a dependence vector for variable V. Drawing all the dependence vectors within the computation polytope yields the so-called dependency diagram. Example: the dependence vectors are (1, 0), (0, 1), (−1, 1).
Scheduling: Definition: the computation on the entire domain of a given loop can be performed following any valid schedule. A timing function t_V for variable V yields a valid schedule if and only if t(x) > t(x − d) for all d ∈ D_V (1), where D_V is the set of all dependence vectors for variable V.
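Condition (1) can be checked mechanically. A tiny checker (my own helper, not from the slides) for the example's dependence vectors, with the candidate linear timing function t(i, j) = i + 2j:

```c
/* A linear schedule t is valid iff t(d) > 0 for every dependence
 * vector d, i.e. each point fires strictly after what it depends on. */
#include <stdio.h>

int main(void) {
    const int deps[3][2] = {{1, 0}, {0, 1}, {-1, 1}};
    const int l1 = 1, l2 = 2;           /* t(i, j) = l1*i + l2*j */
    int valid = 1;
    for (int k = 0; k < 3; k++) {
        int td = l1 * deps[k][0] + l2 * deps[k][1];
        printf("t(d%d) = %d\n", k, td);
        if (td <= 0) valid = 0;         /* would violate (1) */
    }
    puts(valid ? "valid schedule" : "invalid schedule");
    return 0;
}
```

With these vectors, t evaluates to 1, 2, and 1, all positive, so t(i, j) = i + 2j is a valid schedule; t(i, j) = i alone would fail on (0, 1) and (−1, 1).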
  • Foundations of Scientific Research
Foundations of Scientific Research. N. M. Glazunov, National Aviation University, 2012 (25.11.2012).
Contents: Preface (3); Introduction (4); 1. General notions about scientific research (SR) (6); 1.1. Scientific method (9); 1.2. Basic research (10); 1.3. Information supply of scientific research (12); 2. Ontologies and upper ontologies (16); 2.1. Concepts of Foundations of Research Activities; 2.2. Ontology components; 2.3. Ontology for the visualization of a lecture; 3. Ontologies of object domains (19); 3.1. Elements of the ontology of spaces and symmetries; 3.1.1. Concepts of electrodynamics and classical gauge theory; 4. Examples of Research Activity (21); 4.1. Scientific activity in arithmetics, informatics and discrete mathematics; 4.2. Algebra of logic and functions of the algebra of logic; 4.3. Function of the algebra of logic; 5. Some Notions of the Theory of Finite and Discrete Sets (25); 6. Algebraic Operations and Algebraic Structures (26); 7. Elements of the Theory of Graphs and Nets (42); 8. Scientific activity on the example "Information and its investigation" (55); 9. Scientific research in Artificial Intelligence (59); 10. Compilers and compilation (69); 11. Objective, Concepts and History of Computer security (93); 12. Methodological and categorical apparatus of scientific research (114); 13. Methodology and methods of scientific research (116); 13.1. Methods of theoretical level of research; 13.1.1. Induction; 13.1.2. Deduction; 13.2. Methods of empirical level of research; 14. Scientific idea and significance of scientific research (119); 15. Forms of scientific knowledge organization and principles of SR (121); 15.1. Forms of scientific knowledge; 15.2. [...]