Automatic Parallelization for Heterogeneous Embedded Systems Rokiatou Diarra

Total Pages: 16

File Type: pdf, Size: 1020 KB

Automatic Parallelization for Heterogeneous Embedded Systems. Rokiatou Diarra.

To cite this version: Rokiatou Diarra. Automatic Parallelization for Heterogeneous Embedded Systems. Distributed, Parallel, and Cluster Computing [cs.DC]. Université Paris-Saclay (COmUE), 2019. English. NNT: 2019SACLS485. tel-02528823.

HAL Id: tel-02528823, https://tel.archives-ouvertes.fr/tel-02528823, submitted on 2 Apr 2020. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Doctoral thesis of Université Paris-Saclay, prepared at Université Paris-Sud. École doctorale n°580, Sciences et Technologies de l'Information et de la Communication (STIC). Doctoral specialty: Signal and Image Processing. NNT: 2019SACLS485. Thesis presented and defended at Digiteo Labs, Gif-sur-Yvette, on 25/11/2019, by Mme Rokiatou DIARRA.

Composition of the jury:
Dominique HOUZET, Professor, Université Grenoble Alpes (GIPSA-Lab), President
Mohamed SHAWKY, Professor, Université de Technologie de Compiègne (COSTECH), Reviewer
Bertrand GRANADO, Professor, Sorbonne Université (LIP6), Reviewer
Michèle GOUIFFÈS, Associate Professor, Université Paris-Sud (LIMSI - CNRS), Examiner
Marc DURANTON, Researcher, Commissariat à l'énergie atomique (LIST), Invited member
Alain MÉRIGOT, Professor, Université Paris-Sud (SATIE), Thesis advisor
Bastien VINCKE, Associate Professor, Université Paris-Sud (SATIE), Thesis co-supervisor

Abstract

Recent years have seen an increase in heterogeneous architectures combining multi-core CPUs with accelerators such as GPUs, FPGAs, and the Intel Xeon Phi. GPUs can achieve significant performance for certain categories of applications. Nevertheless, achieving this performance with low-level APIs (e.g. CUDA, OpenCL) requires rewriting the sequential code, having a good knowledge of the GPU architecture, and applying complex optimizations that are sometimes not portable. On the other hand, directive-based programming models (e.g. OpenACC, OpenMP) offer a high-level abstraction of the underlying hardware, thus simplifying code maintenance and improving productivity. They allow users to accelerate their sequential codes on the GPU by simply inserting directives. OpenACC/OpenMP compilers have the daunting task of applying the necessary optimizations from the user-provided directives and generating efficient code that takes advantage of the GPU architecture. Although OpenACC/OpenMP compilers are mature and able to apply some optimizations automatically, the generated code may not achieve the expected speedup, as the compilers do not have a full view of the whole application. Thus, there is generally a significant performance gap between codes accelerated with OpenACC/OpenMP and those hand-optimized with CUDA/OpenCL.
To help programmers efficiently speed up their legacy sequential codes on GPUs with directive-based models, and to broaden the impact of OpenMP/OpenACC in both academia and industry, several research issues are discussed in this dissertation. We investigated the OpenACC and OpenMP programming models and proposed an effective application parallelization methodology for directive-based programming approaches. Our application porting experience revealed that it is insufficient to simply insert OpenMP/OpenACC offloading directives to inform the compiler that a particular code region must be compiled for GPU execution. It is essential to combine offloading directives with loop parallelization constructs. Although current compilers are mature and perform several optimizations, the user may provide them with more information through the clauses of loop parallelization constructs in order to get better optimized code. We also highlight the challenge of choosing good loop schedules: the default loop schedule chosen by the compiler may not produce the best performance, so the user has to manually try different loop schedules to improve it. We demonstrate that the OpenMP and OpenACC programming models can achieve good performance with less programming effort, but OpenMP/OpenACC compilers quickly reach their limits when the offloaded code region is compute or memory bound and contains several nested loops. In such cases, low-level languages may be used. We also discuss the pointer aliasing problem in GPU codes and propose two static analysis tools that automatically perform type qualifier insertion and scalar promotion at the source level to solve aliasing issues.
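The point about combining an offloading directive with an explicit loop parallelization construct can be illustrated with a small generic kernel. The SAXPY loop, array names, and clauses below are illustrative assumptions, not code from the dissertation, and require an OpenMP 4.5+ / OpenACC compiler with offloading support.

    #include <stdio.h>

    /* Offloading alone: the region runs on the device, but without a loop
       construct the loop may be executed by a single device thread. */
    void saxpy_offload_only(int n, float a, const float *x, float *y)
    {
        #pragma omp target map(to: x[0:n]) map(tofrom: y[0:n])
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    /* Offloading combined with loop parallelization constructs, as the
       methodology recommends. */
    void saxpy_parallel(int n, float a, const float *x, float *y)
    {
        #pragma omp target teams distribute parallel for \
                map(to: x[0:n]) map(tofrom: y[0:n])
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    /* The OpenACC counterpart of the combined version. */
    void saxpy_acc(int n, float a, const float *x, float *y)
    {
        #pragma acc parallel loop copyin(x[0:n]) copy(y[0:n])
        for (int i = 0; i < n; ++i)
            y[i] = a * x[i] + y[i];
    }

    int main(void)
    {
        enum { N = 1024 };
        static float x[N], y[N];
        for (int i = 0; i < N; ++i) { x[i] = 1.0f; y[i] = 2.0f; }
        saxpy_parallel(N, 3.0f, x, y);
        printf("y[0] = %f\n", y[0]);   /* expect 5.0 */
        return 0;
    }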
Résumé

The use of heterogeneous architectures, combining multi-core processors with accelerators such as GPUs, FPGAs and the Intel Xeon Phi, has increased in recent years. GPUs can achieve significant performance for certain categories of applications. Nevertheless, reaching this performance with low-level APIs such as CUDA and OpenCL requires rewriting the sequential code, knowing the GPU architecture well, and applying complex optimizations that are sometimes not portable. On the other hand, directive-based programming models (for example OpenACC, OpenMP) offer a high-level abstraction of the underlying hardware, thus simplifying code maintenance and improving productivity. They allow users to accelerate their sequential codes on GPUs by simply inserting directives. OpenACC/OpenMP compilers have the heavy task of applying the necessary optimizations from the directives provided by the user and of generating code that exploits the underlying architecture efficiently. Although OpenACC/OpenMP compilers are mature and can apply some optimizations automatically, the generated code may not reach the expected speedup, because the compilers do not have a complete view of the whole application. Thus, there is generally a significant performance gap between codes accelerated with OpenACC/OpenMP and those hand-optimized with CUDA/OpenCL.

To help programmers accelerate their sequential codes efficiently on GPUs with directive-based models, and to broaden the impact of OpenMP/OpenACC in academia and industry, this thesis addresses several research issues. We studied the OpenACC and OpenMP programming models and proposed an effective application parallelization methodology for directive-based programming approaches. Our application porting experience revealed that it is insufficient to simply insert OpenMP/OpenACC offloading directives to inform the compiler that a particular code region must be compiled for GPU execution. It is essential to combine offloading directives with loop parallelization directives. Although current compilers are mature and perform several optimizations, the user can provide them with more information through the clauses of loop parallelization directives in order to obtain better optimized code. We also highlighted the challenge of choosing the right number of threads to execute a loop: the number of threads chosen by default by the compiler may not produce the best performance, so the user has to try different thread counts manually to improve it. We demonstrate that the OpenMP and OpenACC programming models can achieve good performance with less programming effort, but OpenMP/OpenACC compilers quickly reach their limits when the offloaded code region has high arithmetic intensity, requires a very large number of global memory accesses, and contains several nested loops. In such cases, low-level languages should be used. We also discuss the pointer aliasing problem in GPU codes and propose two static analysis tools that automatically insert type qualifiers and perform scalar replacement in the source code.

Dedication

To my late father Oumar DIARRA. To my dear mother Korotimi DIARRA. To my dear brothers and sisters. To Minkoro FOMBA, my best half. This work would not have been possible without your love and your unconditional support.

Acknowledgements

I thank God for giving me the courage, the patience, and the strength to face all the difficulties and obstacles I have encountered in the last three years. The successful completion of this thesis would not have been possible without the guidance, infallible support and encouragement of many people. I take this opportunity to express my sincere gratitude and appreciation to all those who contributed to the success of my thesis. I am deeply indebted to both my advisor Alain MÉRIGOT and my co-supervisor Bastien VINCKE for their persistent encouragement and motivation,
Recommended publications
  • Other APIs: What's Wrong with OpenMP?
    Threaded Programming: Other APIs. What's wrong with OpenMP? OpenMP is designed for programs where you want a fixed number of threads, and you always want the threads to be consuming CPU cycles: you cannot arbitrarily start/stop threads, and you cannot put threads to sleep and wake them up later. OpenMP is good for programs where each thread is doing (more or less) the same thing. Although OpenMP supports C++, it is not especially OO friendly, though it is gradually getting better. OpenMP does not support other popular base languages, e.g. Java and Python. Threaded programming APIs: essential features are a way to create threads, a way to wait for a thread to finish its work, a mechanism to support thread-private data, and some basic synchronisation methods (at least a mutex lock, or atomic operations). Optional features include support for tasks, more synchronisation methods (e.g. condition variables, barriers), and higher levels of abstraction (e.g. parallel loops, reductions). What are the alternatives? POSIX threads, C++ threads, Intel TBB, Cilk, OpenCL, Java (not an exhaustive list!). POSIX threads (or Pthreads) is a standard library for shared memory programming without directives, part of the ANSI/IEEE 1003.1 standard (1996). The interface is a C library: there is no standard Fortran interface, and it can be used with C++ but is not OO friendly. It is widely available, even for Windows, typically installed as part of the OS, and code is pretty portable. It gives lots of low-level control over the behaviour of threads, but lacks a proper memory consistency model. Thread forking: #include <pthread.h>; int pthread_create(pthread_t *thread, const pthread_attr_t *attr, void *(*start_routine)(void *), void *arg) creates a new thread; the first argument returns a pointer to a thread descriptor.
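    A minimal, self-contained sketch of the pthread_create/pthread_join pattern described above; the worker function and its argument are illustrative, not taken from the excerpt.

        #include <pthread.h>
        #include <stdio.h>

        /* Illustrative worker: each thread prints the integer id it was given. */
        static void *worker(void *arg)
        {
            int id = *(int *)arg;
            printf("hello from thread %d\n", id);
            return NULL;
        }

        int main(void)
        {
            pthread_t threads[4];
            int ids[4];

            for (int i = 0; i < 4; ++i) {
                ids[i] = i;
                /* Default attributes (NULL); start_routine is 'worker', its argument is &ids[i]. */
                if (pthread_create(&threads[i], NULL, worker, &ids[i]) != 0)
                    return 1;
            }
            for (int i = 0; i < 4; ++i)
                pthread_join(threads[i], NULL);   /* wait for each thread to finish its work */
            return 0;
        }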
  • OpenCL on Shared Memory Multicore CPUs
    OpenCL on shared memory multicore CPUs. Akhtar Ali, Usman Dastgeer and Christoph Kessler. Book Chapter. N.B.: When citing this work, cite the original article. Part of: Proceedings of the 5th Workshop on MULTIPROG-2012. E. Ayguade, B. Gaster, L. Howes, P. Stenström, O. Unsal (eds), 2012. Copyright: HiPEAC Network of Excellence. Available at: Linköping University Electronic Press http://urn.kb.se/resolve?urn=urn:nbn:se:liu:diva-93951 Published in: Proc. Fifth Workshop on Programmability Issues for Multi-Core Computers (MULTIPROG-2012) at HiPEAC-2012 conference, Paris, France, Jan. 2012. OpenCL for programming shared memory multicore CPUs. Akhtar Ali, Usman Dastgeer, and Christoph Kessler. PELAB, Dept. of Computer and Information Science, Linköping University, Sweden. [email protected] {usman.dastgeer,christoph.kessler}@liu.se Abstract. Shared memory multicore processor technology is pervasive in mainstream computing. This new architecture challenges programmers to write code that scales over these many cores to exploit the full computational power of these machines. OpenMP and Intel Threading Building Blocks (TBB) are two of the popular frameworks used to program these architectures. Recently, OpenCL has been defined as a standard by Khronos group which focuses on programming a possibly heterogeneous set of processors with many cores such as CPU cores, GPUs, DSP processors. In this work, we evaluate the effectiveness of OpenCL for programming multicore CPUs in a comparative case study with OpenMP and Intel TBB for five benchmark applications: matrix multiply, LU decomposition, 2D image convolution, Pi value approximation and image histogram generation. The evaluation includes the effect of compiler optimizations for different frameworks, OpenCL performance on different vendors' platforms and the performance gap between CPU-specific and GPU-specific OpenCL algorithms for execution on a modern GPU.
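    The key idea of running OpenCL on the CPU itself can be sketched on the host side: the application asks the platform for a CL_DEVICE_TYPE_CPU device instead of a GPU, and the rest of the host code stays the same. This is a generic illustration, not code from the paper.

        #include <stdio.h>
        #include <CL/cl.h>

        int main(void)
        {
            cl_platform_id platform;
            cl_device_id device;
            cl_int err;

            /* Take the first available platform. */
            err = clGetPlatformIDs(1, &platform, NULL);
            if (err != CL_SUCCESS) return 1;

            /* Ask for a CPU device rather than a GPU one; kernels, queues and
               buffers are then created exactly as in the GPU case. */
            err = clGetDeviceIDs(platform, CL_DEVICE_TYPE_CPU, 1, &device, NULL);
            if (err != CL_SUCCESS) return 1;

            cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
            if (err != CL_SUCCESS) return 1;

            printf("created an OpenCL context on a CPU device\n");
            clReleaseContext(ctx);
            return 0;
        }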
  • Heterogeneous Task Scheduling for Accelerated OpenMP
    Heterogeneous Task Scheduling for Accelerated OpenMP. Thomas R. W. Scogland, Barry Rountree, Wu-chun Feng, Bronis R. de Supinski. Department of Computer Science, Virginia Tech, Blacksburg, VA 24060 USA; Center for Applied Scientific Computing, Lawrence Livermore National Laboratory, Livermore, CA 94551 USA. [email protected] [email protected] [email protected] [email protected] Abstract: Heterogeneous systems with CPUs and computational accelerators such as GPUs, FPGAs or the upcoming Intel MIC are becoming mainstream. In these systems, peak performance includes the performance of not just the CPUs but also all available accelerators. In spite of this fact, the majority of programming models for heterogeneous computing focus on only one of these. With the development of Accelerated OpenMP for GPUs, both from PGI and Cray, we have a clear path to extend traditional OpenMP applications incrementally to use GPUs. The extensions are geared toward switching from CPU parallelism to GPU parallelism. However they do not preserve the former while adding the latter. Thus computational potential is wasted since either the CPU cores or the GPU cores are left idle. ... currently requires a programmer either to program in at least two different parallel programming models, or to use one of the two that support both GPUs and CPUs. Multiple models however require code replication, and maintaining two completely distinct implementations of a computational kernel is a difficult and error-prone proposition. That leaves us with using either OpenCL or accelerated OpenMP to complete the task. OpenCL's greatest strength lies in its broad hardware support. In a way, though, that is also its greatest weakness. To enable one to program this disparate hardware
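    The co-scheduling idea (keeping both the CPU cores and the GPU busy instead of leaving one idle) can be sketched with standard OpenMP 4.5 constructs. The 70/30 split, array names, and kernel below are purely illustrative assumptions, not the adaptive scheduler described in the paper, and require a compiler with device offloading support.

        #include <omp.h>

        /* Process a[0..n) partly on the device and partly on the host threads.
           'split' is a hypothetical static partition point; the paper's
           contribution is choosing such splits adaptively. */
        void hybrid_scale(float *a, int n, float factor)
        {
            int split = (int)(0.7 * n);   /* assumed: 70% of the iterations offloaded */

            /* Asynchronous offload of the first part of the array ... */
            #pragma omp target teams distribute parallel for \
                    map(tofrom: a[0:split]) nowait
            for (int i = 0; i < split; ++i)
                a[i] *= factor;

            /* ... while the host threads work on the remainder. */
            #pragma omp parallel for
            for (int i = split; i < n; ++i)
                a[i] *= factor;

            #pragma omp taskwait   /* wait for the deferred target task to complete */
        }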
  • Using the GNU Compiler Collection (GCC)
    Using the GNU Compiler Collection (GCC). Using the GNU Compiler Collection, by Richard M. Stallman and the GCC Developer Community. Last updated 23 May 2004 for GCC 3.4.6. For GCC Version 3.4.6. Published by: GNU Press, a division of the Free Software Foundation, 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA. Website: www.gnupress.org. General: [email protected]. Orders: [email protected]. Tel 617-542-5942. Fax 617-542-2652. Last printed October 2003 for GCC 3.3.1. Printed copies are available for $45 each. Copyright © 1988, 1989, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2000, 2001, 2002, 2003, 2004 Free Software Foundation, Inc. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.2 or any later version published by the Free Software Foundation; with the Invariant Sections being "GNU General Public License" and "Funding Free Software", the Front-Cover texts being (a) (see below), and with the Back-Cover Texts being (b) (see below). A copy of the license is included in the section entitled "GNU Free Documentation License". (a) The FSF's Front-Cover Text is: A GNU Manual. (b) The FSF's Back-Cover Text is: You have freedom to copy and modify this GNU Manual, like GNU software. Copies published by the Free Software Foundation raise funds for GNU development. Short Contents: Introduction; 1 Programming Languages Supported by GCC; 2 Language Standards Supported by GCC; 3 GCC Command Options ...
  • AMD Accelerated Parallel Processing OpenCL Programming Guide
    AMD Accelerated Parallel Processing OpenCL Programming Guide. November 2013, rev 2.7. © 2013 Advanced Micro Devices, Inc. All rights reserved. AMD, the AMD Arrow logo, AMD Accelerated Parallel Processing, the AMD Accelerated Parallel Processing logo, ATI, the ATI logo, Radeon, FireStream, FirePro, Catalyst, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Microsoft, Visual Studio, Windows, and Windows Vista are registered trademarks of Microsoft Corporation in the U.S. and/or other jurisdictions. Other names are for informational purposes only and may be trademarks of their respective owners. OpenCL and the OpenCL logo are trademarks of Apple Inc. used by permission by Khronos. The contents of this document are provided in connection with Advanced Micro Devices, Inc. ("AMD") products. AMD makes no representations or warranties with respect to the accuracy or completeness of the contents of this publication and reserves the right to make changes to specifications and product descriptions at any time without notice. The information contained herein may be of a preliminary or advance nature and is subject to change without notice. No license, whether express, implied, arising by estoppel or otherwise, to any intellectual property rights is granted by this publication. Except as set forth in AMD's Standard Terms and Conditions of Sale, AMD assumes no liability whatsoever, and disclaims any express or implied warranty, relating to its products including, but not limited to, the implied warranty of merchantability, fitness for a particular purpose, or infringement of any intellectual property right. AMD's products are not designed, intended, authorized or warranted for use as components in systems intended for surgical implant into the body, or in other applications intended to support or sustain life, or in any other application in which the failure of AMD's product could create a situation where personal injury, death, or severe property or environmental damage may occur.
  • NVIDIA CUDA on IBM POWER8: Technical Overview, Software Installation, and Application Development
    Redpaper. Dino Quintero, Wei Li, Wainer dos Santos Moschetta, Mauricio Faria de Oliveira, Alexander Pozdneev. NVIDIA CUDA on IBM POWER8: Technical overview, software installation, and application development. Overview: The exploitation of general-purpose computing on graphics processing units (GPUs) and modern multi-core processors in a single heterogeneous parallel system has proven highly efficient for running several technical computing workloads. This applies to a wide range of areas such as chemistry, bioinformatics, molecular biology, engineering, and big data analytics. Recently launched, the IBM® Power System S824L comes into play to explore the use of the NVIDIA Tesla K40 GPU, combined with the latest IBM POWER8™ processor, providing a unique technology platform for high performance computing. This IBM Redpaper™ publication discusses the installation of the system, and the development of C/C++ and Java applications using the NVIDIA CUDA platform for IBM POWER8. Note: CUDA stands for Compute Unified Device Architecture. It is a parallel computing platform and programming model created by NVIDIA and implemented by the GPUs that they produce. The following topics are covered: advantages of NVIDIA on POWER8; the IBM Power Systems S824L server; software stack; system monitoring; application development; tuning and debugging; and application examples. © Copyright IBM Corp. 2015. All rights reserved. ibm.com/redbooks Advantages of NVIDIA on POWER8: The IBM and NVIDIA partnership was announced in November 2013, for the purpose of integrating IBM POWER®-based systems with NVIDIA GPUs, and enablement of GPU-accelerated applications and workloads. The goal is to deliver higher performance and better energy efficiency to companies and data centers. This collaboration produced its initial results in 2014 with: the announcement of the first IBM POWER8 system featuring NVIDIA Tesla GPUs (IBM Power Systems™ S824L).
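    For context, the kind of CUDA C program such application-development guides walk through looks like the following generic vector-add sketch (not taken from the Redpaper); it compiles with nvcc and uses managed memory to keep the host code short.

        #include <stdio.h>
        #include <cuda_runtime.h>

        /* Each thread adds one element of the two input vectors. */
        __global__ void vecAdd(const float *a, const float *b, float *c, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n)
                c[i] = a[i] + b[i];
        }

        int main(void)
        {
            const int n = 1 << 20;
            size_t bytes = n * sizeof(float);
            float *a, *b, *c;

            /* Unified (managed) memory: accessible from both host and device. */
            cudaMallocManaged(&a, bytes);
            cudaMallocManaged(&b, bytes);
            cudaMallocManaged(&c, bytes);
            for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

            int threads = 256;
            int blocks = (n + threads - 1) / threads;
            vecAdd<<<blocks, threads>>>(a, b, c, n);
            cudaDeviceSynchronize();

            printf("c[0] = %f\n", c[0]);   /* expect 3.0 */
            cudaFree(a); cudaFree(b); cudaFree(c);
            return 0;
        }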
  • Memory Tagging and How It Improves C/C++ Memory Safety Kostya Serebryany, Evgenii Stepanov, Aleksey Shlyapnikov, Vlad Tsyrklevich, Dmitry Vyukov Google February 2018
    Memory Tagging and how it improves C/C++ memory safety. Kostya Serebryany, Evgenii Stepanov, Aleksey Shlyapnikov, Vlad Tsyrklevich, Dmitry Vyukov. Google, February 2018. Contents: Introduction; Memory Safety in C/C++; AddressSanitizer; Memory Tagging; SPARC ADI; AArch64 HWASAN; Compiler And Run-time Support; Overhead; RAM; CPU; Code Size; Usage Modes; Testing; Always-on Bug Detection In Production; Sampling In Production; Security Hardening; Strengths; Weaknesses; Legacy Code; Kernel; Uninitialized Memory; Possible Improvements; Precision Of Buffer Overflow Detection; Probability Of Bug Detection; Conclusion. Introduction: Memory safety in C and C++ remains largely unresolved. A technique usually called "memory tagging" may dramatically improve the situation if implemented in hardware with reasonable overhead. This paper describes two existing implementations of memory tagging: one is the full hardware implementation in SPARC; the other is a partially hardware-assisted compiler-based tool for AArch64. We describe the basic idea, evaluate the two implementations, and explain how they improve memory safety. This paper is intended to initiate a wider discussion of memory tagging and to motivate the CPU and OS vendors to add support for it in the near future. Memory Safety in C/C++: C and C++ are well known for their performance and flexibility, but perhaps even more for their extreme memory unsafety. This year we are celebrating the 30th anniversary of the Morris Worm, one of the first known exploitations of a memory safety bug, and the problem is still not solved.
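    A toy sketch of the basic mechanism the paper evaluates: each allocation granule gets a small tag, the same tag is placed in the unused top bits of the pointer, and every access checks that the two match. Everything below (granule size, tag placement, the shadow array, a 64-bit platform) is a simplified illustration, not the SPARC ADI or HWASAN implementation.

        #include <stdint.h>
        #include <stdio.h>
        #include <assert.h>

        #define GRANULE   16                          /* assumed tagging granule (bytes) */
        #define HEAP_SIZE 4096

        static uint8_t heap[HEAP_SIZE];
        static uint8_t shadow[HEAP_SIZE / GRANULE];   /* one tag per granule             */
        static size_t  bump;                          /* bump-pointer allocator cursor   */
        static uint8_t next_tag = 1;

        /* Keep the tag in the unused top byte of a 64-bit pointer. */
        static void   *tag_ptr(void *p, uint8_t t) { return (void *)((uintptr_t)p | ((uintptr_t)t << 56)); }
        static void   *strip(void *p)              { return (void *)((uintptr_t)p & 0x00FFFFFFFFFFFFFFull); }
        static uint8_t ptr_tag(void *p)             { return (uint8_t)((uintptr_t)p >> 56); }

        /* Toy allocator: round up to whole granules, tag the granules and the pointer alike. */
        static void *toy_alloc(size_t n)
        {
            size_t granules = (n + GRANULE - 1) / GRANULE;
            uint8_t *p = &heap[bump];
            for (size_t g = 0; g < granules; ++g)
                shadow[bump / GRANULE + g] = next_tag;
            bump += granules * GRANULE;
            return tag_ptr(p, next_tag++);
        }

        /* The check that hardware (SPARC ADI) or instrumentation (HWASAN) performs on access. */
        static void check(void *p)
        {
            uint8_t *addr = strip(p);
            assert(ptr_tag(p) == shadow[(size_t)(addr - heap) / GRANULE]);
        }

        int main(void)
        {
            char *a = toy_alloc(32);
            char *b = toy_alloc(32);
            (void)b;
            check(a);        /* in-bounds: pointer tag matches the granule tag           */
            check(a + 31);   /* still inside a's allocation: OK                          */
            /* check(a + 32); would abort: that granule belongs to b, so the tags differ */
            puts("all checked accesses were valid");
            return 0;
        }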
  • Parallel Programming
    Parallel Programming Libraries and implementations Outline • MPI – distributed memory de-facto standard • Using MPI • OpenMP – shared memory de-facto standard • Using OpenMP • CUDA – GPGPU de-facto standard • Using CUDA • Others • Hybrid programming • Xeon Phi Programming • SHMEM • PGAS MPI Library Distributed, message-passing programming Message-passing concepts Explicit Parallelism • In message-passing all the parallelism is explicit • The program includes specific instructions for each communication • What to send or receive • When to send or receive • Synchronisation • It is up to the developer to design the parallel decomposition and implement it • How will you divide up the problem? • When will you need to communicate between processes? Message Passing Interface (MPI) • MPI is a portable library used for writing parallel programs using the message passing model • You can expect MPI to be available on any HPC platform you use • Based on a number of processes running independently in parallel • HPC resource provides a command to launch multiple processes simultaneously (e.g. mpiexec, aprun) • There are a number of different implementations but all should support the MPI 2 standard • As with different compilers, there will be variations between implementations but all the features specified in the standard should work. • Examples: MPICH2, OpenMPI Point-to-point communications • A message sent by one process and received by another • Both processes are actively involved in the communication – not necessarily at the same time • Wide variety of semantics provided: • Blocking vs. non-blocking • Ready vs. synchronous vs. buffered • Tags, communicators, wild-cards • Built-in and custom data-types • Can be used to implement any communication pattern • Collective operations, if applicable, can be more efficient Collective communications • A communication that involves all processes • “all” within a communicator, i.e.
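    A minimal MPI point-to-point example of the kind the outline describes (a generic sketch; run with at least two processes, e.g. mpiexec -n 2): process 0 sends one integer to process 1, then a collective involves every rank.

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv)
        {
            int rank, value = 0;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            if (rank == 0) {
                value = 42;
                /* Blocking send: destination rank 1, tag 0. */
                MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
            } else if (rank == 1) {
                /* Blocking receive from rank 0 with a matching tag. */
                MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                printf("rank 1 received %d\n", value);
            }

            /* Collective communication: every rank in the communicator participates. */
            MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

            MPI_Finalize();
            return 0;
        }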
  • OpenMP API 5.1 Specification
    OpenMP Application Programming Interface, Version 5.1, November 2020. Copyright © 1997-2020 OpenMP Architecture Review Board. Permission to copy without fee all or part of this material is granted, provided the OpenMP Architecture Review Board copyright notice and the title of this document appear. Notice is given that copying is by permission of the OpenMP Architecture Review Board. Contents: 1 Overview of the OpenMP API; 1.1 Scope; 1.2 Glossary (1.2.1 Threading Concepts; 1.2.2 OpenMP Language Terminology; 1.2.3 Loop Terminology; 1.2.4 Synchronization Terminology; 1.2.5 Tasking Terminology; 1.2.6 Data Terminology; 1.2.7 Implementation Terminology; 1.2.8 Tool Terminology); 1.3 Execution Model; 1.4 Memory Model (1.4.1 Structure of the OpenMP Memory Model; 1.4.2 Device Data Environments; 1.4.3 Memory Management; 1.4.4 The Flush Operation; 1.4.5 Flush Synchronization and Happens Before; 1.4.6 OpenMP Memory Consistency); 1.5 Tool Interfaces (1.5.1 OMPT; 1.5.2 OMPD); 1.6 OpenMP Compliance; 1.7 Normative References; 1.8 Organization of this Document; 2 Directives; 2.1 Directive Format (2.1.1 Fixed Source Form Directives; 2.1.2 Free Source Form Directives; 2.1.3 Stand-Alone Directives; 2.1.4 Array Shaping; 2.1.5 Array Sections) ...
  • Statically Detecting Likely Buffer Overflow Vulnerabilities
    Statically Detecting Likely Buffer Overflow Vulnerabilities. David Larochelle, [email protected], University of Virginia, Department of Computer Science. David Evans, [email protected], University of Virginia, Department of Computer Science. Abstract: Buffer overflow attacks may be today's single most important security threat. This paper presents a new approach to mitigating buffer overflow vulnerabilities by detecting likely vulnerabilities through an analysis of the program source code. Our approach exploits information provided in semantic comments and uses lightweight and efficient static analyses. This paper describes an implementation of our approach that extends the LCLint annotation-assisted static checking tool. Our tool is as fast as a compiler and nearly as easy to use. We present experience using our approach to detect buffer overflow vulnerabilities in two security-sensitive programs. 1. Introduction: Buffer overflow attacks are an important and persistent security problem. Buffer overflows account for approximately half of all security vulnerabilities [CWPBW00, WFBA00]. Richard Pethia of CERT identified buffer overflow attacks as the single most important security problem at a recent software engineering conference [Pethia00]; Brian Snow of the NSA predicted that buffer overflow attacks would still ...ed a prototype tool that does this by extending LCLint [Evans96]. Our work differs from other work on static detection of buffer overflows in three key ways: (1) we exploit semantic comments added to source code to enable local checking of interprocedural properties; (2) we focus on lightweight static checking techniques that have good performance and scalability characteristics, but sacrifice soundness and completeness; and (3) we introduce loop heuristics, a simple approach for efficiently analyzing many loops found in typical programs.
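    The "semantic comments" mentioned above are annotations on function interfaces that the checker verifies at every call site. The sketch below is written in the style of LCLint/Splint buffer annotations; the exact placement and spelling of the requires clause is recalled from memory and should be checked against the Splint documentation.

        #include <string.h>

        /* Annotated wrapper: the destination must have room for everything read
           from the source. */
        void safe_copy(char *dst, const char *src)
            /*@requires maxSet(dst) >= maxRead(src) @*/
        {
            strcpy(dst, src);   /* justified by the annotated precondition */
        }

        int main(void)
        {
            char buf[16];
            safe_copy(buf, "hello");   /* fine: the source fits in the destination */
            /* A call like safe_copy(buf, "a string that is far too long for buf")
               is what a checker in the spirit of the paper would flag. */
            return 0;
        }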
  • Pattern Matching
    Functional Programming. Steven Lau, March 2015. Before functional programming... https://www.youtube.com/watch?v=92WHN-pAFCs Models of computation: the Turing machine, invented by Alan Turing in 1936; the lambda calculus, invented by Alonzo Church in the 1930s; and more. Turing machine: a machine operates on an infinite tape (memory) and executes a stored program. It may read a symbol, write a symbol, move to the left cell, move to the right cell, change the machine's state, or halt. Have some fun: http://www.google.com/logos/2012/turing-doodle-static.html http://www.ioi2012.org/wp-content/uploads/2011/12/Odometer.pdf http://wcipeg.com/problems/desc/ioi1211 Turing machine incrementer, transition table: in state 0, on symbol _ or 0, write 1 and go to state 1; in state 0, on symbol 1, write 0 and go to state 2; in state 1, on symbol _, move left and go to state 0; in state 1, on symbol 0 or 1, move right and stay in state 1; in state 2, on symbol 0, move left and go to state 0. (The slide also steps through the tape contents and states for example runs.) λ-calculus. Beware: think mathematical, not C++/Pascal; (parentheses) are for grouping; variables cannot be mutated (x = 1 OK; x = 2 NO; x = x + 1 NO). Simplification 1 of 2: only anonymous functions are used. f(x) = x²+1, f(1) = 1²+1 = 2 is written as (λx.x²+1)(1) = 1²+1 = 2; note that f = λx.x²+1. Simplification 2 of 2: only unary functions are used. A binary function can be written as a unary function that returns another unary function: (λ(x,y).x+y)(1,2) = 1+2 = 3 is written as [(λx.(λy.x+y))(1)](2) = [(λy.1+y)](2) = 1+2 = 3. This technique is known as currying (after Haskell Curry). A lambda term has 3 forms: x, λx.A, and AB, where x is a variable and A and B are lambda terms.
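    To make the incrementer concrete, here is a small simulator of that transition table in C. The tape length, head start position, and step bound are illustrative assumptions; the machine itself behaves as a binary counter that keeps incrementing, so the run is bounded rather than waiting for a halt.

        #include <stdio.h>
        #include <string.h>

        #define TAPE_LEN 32
        #define STEPS    300          /* enough steps for a few dozen increments */

        int main(void)
        {
            char tape[TAPE_LEN];
            memset(tape, '_', TAPE_LEN);         /* '_' is the blank symbol          */
            int head = TAPE_LEN - 4;             /* start near the right end         */
            int state = 0;

            for (int step = 0; step < STEPS; ++step) {
                char s = tape[head];
                if (state == 0) {
                    if (s == '_' || s == '0') { tape[head] = '1'; state = 1; }  /* write 1 */
                    else                      { tape[head] = '0'; state = 2; }  /* write 0 */
                } else if (state == 1) {
                    if (s == '_') { head--; state = 0; }                        /* move left  */
                    else          { head++; }                                   /* move right */
                } else {          /* state 2 */
                    if (s == '0') { head--; state = 0; }                        /* move left  */
                    else break;                                                 /* no rule: halt */
                }
            }
            printf("tape after %d steps: %.*s\n", STEPS, TAPE_LEN, tape);
            return 0;
        }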
  • PathScale ENZO GTC12 S0631 – Programming Heterogeneous Many-Cores Using Directives
    PathScale ENZO. GTC12 S0631 – Programming Heterogeneous Many-Cores Using Directives. C. Bergström, May 14th, 2012. Brief Introduction to ENZO. ENZO Overview & Goals: speed the transition to GPU & many-core systems • Simplify the task of migrating software written in C, C++ & Fortran • Uses the OpenHMPP standard (easy migration) • CAPS HMPP compatible. Performance & HPC focused • Fully exploits NVIDIA GPU features • Generates native instructions optimized for NVIDIA GPUs. Project Schedule: ENZO production release June 2012 – OpenHMPP 2.5, C, C++ and Fortran. Next ENZO production release October 2012 – more tools and better support for libraries – x86_64 OpenHMPP task parallelism (similar to OMP3 tasks) – more optimizations (IPA / CG2 / textures) – OpenHMPP 3.0 – CUDA 4.x – Kepler. Project Status: OpenHMPP 2.5 – running CAPS C & Fortran labs – PathScale-written HMPP test suite – customer code. New C++ compiler – Perennial C++VS and CVSA regression free – corner case compile time issues – corner case runtime issues. Ongoing effort – performance tuning & benchmarking – compiler robustness – nightly compiler builds to address issues. Performance: NVIDIA Tesla 2050 - “Lab2” SGEMM – ENZO – /opt/enzo/bin/pathcc -hmpp
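    For orientation, OpenHMPP programs of the kind ENZO compiles mark a function as a codelet and its call as a callsite. The clause spellings below follow published OpenHMPP 2.x examples as recalled and may differ between HMPP versions; with a plain C compiler the unknown pragmas are ignored and the code simply runs sequentially.

        #include <stdio.h>

        /* Declare the function as a GPU codelet; 'v' is both read and written. */
        #pragma hmpp scale codelet, target=CUDA, args[v].io=inout
        void scale(int n, float v[n], float factor)
        {
            for (int i = 0; i < n; ++i)
                v[i] *= factor;
        }

        int main(void)
        {
            enum { N = 1024 };
            float v[N];
            for (int i = 0; i < N; ++i) v[i] = (float)i;

            /* The callsite directive asks the compiler to run this call on the GPU. */
            #pragma hmpp scale callsite
            scale(N, v, 2.0f);

            printf("v[10] = %f\n", v[10]);   /* expect 20.0 */
            return 0;
        }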