On the Effective Parallel Programming of Multi-Core Processors

Total Pages: 16

File Type: PDF, Size: 1020 KB

On the Effective Parallel Programming of Multi-core Processors

DISSERTATION (Proefschrift) for the degree of doctor at Delft University of Technology, under the authority of the Rector Magnificus, Prof. ir. K.C.A.M. Luyben, chairman of the Board for Doctorates, to be defended in public on Tuesday 7 December 2010 at 10:00 by Ana Lucia VÂRBĂNESCU, Master of Science in Architecturi Avansate de Calcul, Universitatea POLITEHNICA București, Romania, born in Bucharest, Romania.

This dissertation has been approved by the promotor: Prof. dr. ir. H.J. Sips.

Composition of the doctoral committee:
Rector Magnificus, chairman
Prof.dr.ir. H.J. Sips, Technische Universiteit Delft, promotor
Prof.dr.ir. A.J.C. van Gemund, Technische Universiteit Delft
Prof.dr.ir. H.E. Bal, Vrije Universiteit Amsterdam, The Netherlands
Prof.dr.ir. P.H.J. Kelly, Imperial College London, United Kingdom
Prof.dr.ing. N. Țăpuș, Universitatea Politehnica București, Romania
Dr. R.M. Badia, Barcelona Supercomputing Center, Spain
Dr. M. Perrone, IBM T.J. Watson Research Center, USA
Prof.dr. K.G. Langendoen, Technische Universiteit Delft, reserve member

This work was carried out in the ASCI graduate school (Advanced School for Computing and Imaging), ASCI dissertation series number 223. The work in this dissertation was performed in the context of the SCALP project, funded by STW-PROGRESS. Parts of this work were done in collaboration with, and with the material support of, the IBM T.J. Watson Research Center, USA. Part of this work was done in collaboration with the Barcelona Supercomputing Center, and supported by an HPC Europa mobility grant.

Copyright © 2010 by Ana Lucia Varbanescu. All rights reserved. No part of the material protected by this copyright notice may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without prior permission from the author. The author can be contacted at [email protected].

ISBN/EAN: 978-90-8570-708-0. Printed in the Netherlands by Wöhrmann Print Service (www.wps.nl).

Acknowledgments

There have been many moments in which my PhD felt like a lonely fight against the world. But now, when writing the very last pages of my thesis - the acknowledgements - I realize I have been lucky to have a lot of people by my side, all the way. I try to thank all of them here, in vague chronological order. As the list can only be limited, and therefore prone to omissions, I will probably fail. For those of you that I might have omitted: please accept my apologies, my thanks, and a glass of wine from me, to wash away the sorrow.

I want to thank Prof. Nicolae Tapus for his guidance during my academic years in Romania, and for agreeing to be part of my committee. I am grateful to the late Prof. Irina Athanasiu for her continuous encouragement, and for her role in my first collaboration with TUDelft. And I want to thank Prof. Sorin Cotofana for making my first encounters with TUDelft, The Netherlands, and the Dutch interesting and comfortable.

Special thanks go to Prof. Henk Sips, my supervisor, for his help and support, and for putting up with all my last-minute ideas and panic moments (starting from that CPC'06 paper). I owe him thanks for his patience in waiting for my thesis, which was "almost ready" for many months, but also for our enlightening discussions about non-scientific topics such as sailing, traveling, wine, or earrings.
I'd like to thank Prof. Arjan van Gemund for his help on PAM-SoC and the SCALP project, and for his valuable insight into performance analysis, modeling, and prediction. Further thanks go to the other SCALP members - Dr. Maik Nijhuis, Dr. Herbert Bos, and Prof. Henri Bal (from Vrije Universiteit Amsterdam) - for our collaboration in building SP@CE. One extra "thank you" to Prof. Henri Bal for his useful comments on the draft of my thesis. Thanks to Paul Stravers and Jos van Eindhoven (currently at VectorFabrics) for their help with the SpaceCAKE architecture itself, and for their valuable insights into the pragmatic thinking of "the industry".

I would like to thank Dr. Michael Perrone, my two-time manager at the IBM T.J. Watson Research Center - his enthusiasm for the Cell/B.E., and his ability to quickly tell good solutions from bad ones and extract the essence from any (vague) idea, played an important role in my work so far. Thank you, Michael, for your help and support, for the subtle ways you found to motivate me, and for the continuous interest you showed in my career. Last, but not least, thank you for the suggestions made for improving this thesis. Additional thanks go to the other members of the Cell Solutions team with whom I worked at T.J. Watson - Gordon Braudaway, Daniele Scarpazza, Fabrizio Petrini, and Virat Agarwal - for sharing with me some of the secrets of Cell/B.E. programming.

I owe thanks to Rosa Badia and Xavier Martorell, with whom I worked for six weeks at the Barcelona Supercomputing Center (and continued to collaborate afterwards). Our work on classifying, comparing, and evaluating a large variety of programming models has helped me understand more of the subtle effects that programmability and productivity have on platform adoption and performance. Thanks to Ian Buck and Stephen Jones for my nice summer internship in the CUDA team at NVIDIA, where I had the opportunity to see how difficult it is to build the right software tools and models in a hardware-dominated environment.

I'd like to thank Prof. Paul Kelly for agreeing to be part of my committee, for his comments on the draft of my thesis, as well as for our occasional, yet very interesting, discussions on parallelism and performance. And thank you, Prof. Joerg Keller, for our fruitful collaboration, which I hope to maintain and improve. I would like to end my "professional" acknowledgements by thanking two of my most constant scientific partners, and friends: Dr. Rob van Nieuwpoort and Alexander van Amesfoort. Thanks, guys, for the smooth collaboration in our various projects together, and for our entertaining discussions. I surely hope we will keep working together for a while longer.

I am grateful for all the help and support I got from my friends and my family. Thanks, Monel and Mihaitza, for always finding the energy to spend some time with me. Thanks, Robert, for keeping me up to date with the pubs in Bucharest. Thanks, Nicoleta, Ervin and Anca Vitos, and Michou, for your good thoughts from far away. Thank you, Ana Maria O. and Corina, for the fun in Amsterdam, and for the geek girls' nights out. Thank you, Karan, for being so close to me from the other side of the world. Thank you, Ivo, for the wine and cheese evenings, and for being such a great officemate - no wonder I never wanted to work from home! Thanks, Otto, for your help with the Dutch in all its/their forms, and for your visits on the dark side of the corridor. Thanks, Pawel and Alex F., for making weekends at TUDelft bearable.
These years, basketball has been an important part of my life. I am grateful to my teammates, trainers, and coaches from Punch for the many fun memories on and off the court. Special thanks to Tessa, Marije, Marta, Charlotte, Siri, Bridgit, and Jill for their kind friendship outside the club. Thanks to "my" Heren3 team, for accepting such a shouting coach, and for teaching me the inner beauty of pink. Thank you, Remus, Coco, Patricia, and Rita, for the many crazy tournaments we have played together - I enjoyed them a lot. And thank you very much, Maider, for extending our friendship so far beyond basketball.

I would not remember my time in the US with so much pleasure had it not been for the friends I made there. Thank you, Janet, for your kindness and your care during the months I spent in your house. Thank you, Nicolas, for the biking trips and for your restless efforts to make me understand the beauty of American football (sorry, I still don't get it). Thank you, Marcio, for the afternoon coffees. Thank you, Virat and Leo, for the tennis matches and the movie evenings in the conference rooms. Thank you, Mesut, for the wine and cheese parties at Janet's. Thank you, Vlad, for showing me some of the beautiful places of California.

Finally, there are four more special people in my life, to whom I am most grateful - without them, all these years would have been a lot tougher. Thank you, Ancutza, for being my best friend all these years, for being there on the cloudiest of days (and those are plenty in The Netherlands!), for listening and for talking, for always being honest, direct, and fair. Thank you, Alexandru, for your love, for putting up with my mood swings and pre-deadline panic, for standing by my side when I needed it, for believing in me and for encouraging me; thank you for your patience and your calm support. And thank you, Mom and Dad, for teaching me to work hard and never give up, and for inspiring me to be the best at what I do. Thank you for being so proud of me. Thank you, Mom, for teaching me the magic powers of good food. Thank you, Dad, for these last 25 years of basketball and computing machinery. Thank you for everything. I love you very very much.
Recommended publications
  • Data-Level Parallelism
    Fall 2015 :: CSE 610 - Parallel Computer Architectures: Data-Level Parallelism. Nima Honarmand.

    Overview: data parallelism vs. control parallelism. Data parallelism arises from executing essentially the same code on a large number of objects; control parallelism arises from executing different threads of control concurrently. Hypothesis: applications that use massively parallel machines will mostly exploit data parallelism, which is common in the scientific-computing domain. DLP was originally linked with SIMD machines; now SIMT is more common (SIMD: Single Instruction, Multiple Data; SIMT: Single Instruction, Multiple Threads).

    There have been many incarnations of DLP architectures over the decades: old vector processors (Cray-1, Cray-2, ..., Cray X1); SIMD extensions (Intel SSE and AVX units; Alpha Tarantula, which never saw the light of day); old massively parallel computers (Connection Machines, MasPar machines); and modern GPUs (NVIDIA, AMD, Qualcomm, ...), which focus on throughput rather than latency.

    Scalar vs. vector: a scalar instruction such as add r3, r1, r2 performs one operation on single numbers (scalars), while a vector instruction such as vadd.vv v3, v1, v2 performs N operations, up to the vector length, on linear sequences of numbers (vectors). [6.888 Spring 2013, Sanchez and Emer, L14]

    What's in a vector processor? A scalar processor (e.g. a MIPS processor) with a scalar register file (32 registers) and scalar functional units (arithmetic, load/store, etc.); a vector register file (a 2D register array) in which each register is an array of elements, e.g. 32 registers with 32 64-bit elements per register (MVL = maximum vector length = max number of elements per register); and a set of vector functional units (integer, FP, load/store, etc.). Sometimes vector and scalar units are combined (they share ALUs). The excerpt continues with an example of a simple vector processor and the basic vector ISA.
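    A minimal C++ sketch of the scalar-versus-vector contrast above - not from the slides; it assumes an x86-64 compiler with AVX enabled (e.g. -mavx) and uses AVX intrinsics in place of the slide's vadd.vv:

```cpp
// Scalar vs. SIMD vector addition (sketch; assumes AVX support).
#include <immintrin.h>
#include <cstddef>

// Scalar loop: one addition per iteration (the slide's "add r3, r1, r2").
void add_scalar(const float* a, const float* b, float* c, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        c[i] = a[i] + b[i];
}

// Vector loop: eight float additions per iteration in a 256-bit register
// (the analogue of the slide's "vadd.vv v3, v1, v2"), plus a scalar tail.
void add_vector(const float* a, const float* b, float* c, std::size_t n) {
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        __m256 va = _mm256_loadu_ps(a + i);
        __m256 vb = _mm256_loadu_ps(b + i);
        _mm256_storeu_ps(c + i, _mm256_add_ps(va, vb));
    }
    for (; i < n; ++i)          // remaining elements
        c[i] = a[i] + b[i];
}
```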
  • High Performance Computing Through Parallel and Distributed Processing
    Yadav S. et al., J. Harmoniz. Res. Eng., 2013, 1(2), 54-64. Journal of Harmonized Research in Engineering, ISSN 2347-7393. Original research article.

    High Performance Computing through Parallel and Distributed Processing. Shikha Yadav, Preeti Dhanda, Nisha Yadav. Department of Computer Science and Engineering, Dronacharya College of Engineering, Khentawas, Farukhnagar, Gurgaon, India.

    Abstract: There is a very high need for High Performance Computing (HPC) in many applications, from space science to Artificial Intelligence. HPC can be attained through parallel and distributed computing. In this paper, parallel and distributed algorithms, based on parallel and distributed processors, are discussed as a means to achieve HPC. Programming concepts like threads, fork, and sockets are discussed with some simple examples. Keywords: High Performance Computing, Parallel and Distributed Processing, Computer Architecture.

    Introduction: Computer architecture and programming play a significant role in HPC for large applications, from space science to Artificial Intelligence. Algorithms are problem-solving procedures that are later transformed into a particular programming language, so there is a need to study algorithms for High Performance Computing. These algorithms must be designed to compute in reasonable time for large problems like weather forecasting, tsunamis, remote sensing, national calamities, defence, mineral exploration, finite-element analysis, cloud computing, and expert systems. The algorithms may be non-recursive, recursive, parallel, or distributed. An algorithm must also suit the computer architecture, which is characterized by Flynn's classification: SISD, SIMD, MIMD, and MISD. Most computer architectures support SIMD (Single Instruction, Multiple Data streams). (Correspondence: preeti.dhanda01ATgmail.com. Received: October 2013.)
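    A small C++ sketch of the thread-based parallelism the paper refers to - not the paper's own example; it assumes C++11's std::thread and splits an array sum across the available hardware threads:

```cpp
// Parallel array sum with std::thread (sketch, C++11 or later).
#include <algorithm>
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    std::vector<double> data(1000000, 1.0);
    unsigned nthreads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<double> partial(nthreads, 0.0);   // one slot per thread: no sharing
    std::vector<std::thread> workers;

    std::size_t chunk = data.size() / nthreads;
    for (unsigned t = 0; t < nthreads; ++t) {
        std::size_t lo = t * chunk;
        std::size_t hi = (t + 1 == nthreads) ? data.size() : lo + chunk;
        // Each thread sums its own contiguous slice of the array.
        workers.emplace_back([&data, &partial, t, lo, hi] {
            partial[t] = std::accumulate(data.begin() + lo, data.begin() + hi, 0.0);
        });
    }
    for (auto& w : workers) w.join();             // wait for all slices

    double total = std::accumulate(partial.begin(), partial.end(), 0.0);
    std::cout << "sum = " << total << '\n';       // prints "sum = 1e+06"
}
```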
  • The Importance of Data
    The Landscape of Parallel Programming Models, Part 2: The Importance of Data. Michael Wong and Rod Burns, Codeplay Software Ltd. IXPUG 2020. © 2020 Codeplay Software Ltd.

    Michael Wong, Distinguished Engineer and VP of Ecosystem: chair of the SYCL heterogeneous programming language; member of the C++ Directions Group; ISOCPP.org Director and VP (http://isocpp.org/wiki/faq/wg21#michael-wong); [email protected], [email protected]; Head of Delegation for the C++ Standard for Canada; chair of Programming Languages for the Standards Council of Canada; chair of WG21 SG19 (Machine Learning) and WG21 SG14 (Games Dev / Low Latency / Financial Trading / Embedded); editor of the C++ SG5 Transactional Memory Technical Specification and the C++ SG1 Concurrency Technical Specification; MISRA C++ and AUTOSAR; chair of Standards Council Canada TC22/SC32 Electrical and electronic components (SOTIF); chair of UL4600 Object Tracking; http://wongmichael.com/about; C++11 book in Chinese: https://www.amazon.cn/dp/B00ETOV2OQ. Codeplay builds GPU compilers for semiconductor companies: it ported TensorFlow to open standards using SYCL, builds LLVM-based compilers for accelerators, implements OpenCL and SYCL for accelerator processors, and releases open-source, open-standards-based AI acceleration tools (SYCL-BLAS, SYCL-ML, VisionCpp); it is now working to make AI/ML heterogeneous acceleration safe for autonomous vehicles.

    Acknowledgement and disclaimer: Numerous people, internal and external to the original C++/Khronos group, in industry and academia, have made contributions, influenced ideas, written parts of this presentation, and offered feedback to form part of this talk. But I claim all credit for errors and stupid mistakes. These are mine, all mine! You can't have them.
  • Software Optimization Guide for AMD Family 15h Processors (.pdf)
    Software Optimization Guide for AMD Family 15h Processors. Publication No. 47414, Revision 3.06, January 2012. Advanced Micro Devices. © 2012 Advanced Micro Devices, Inc. All rights reserved. The contents of this document are provided in connection with Advanced Micro Devices, Inc. ("AMD") products. AMD makes no representations or warranties with respect to the accuracy or completeness of the contents of this publication and reserves the right to make changes to specifications and product descriptions at any time without notice. The information contained herein may be of a preliminary or advance nature and is subject to change without notice. No license, whether express, implied, arising by estoppel or otherwise, to any intellectual property rights is granted by this publication. Except as set forth in AMD's Standard Terms and Conditions of Sale, AMD assumes no liability whatsoever, and disclaims any express or implied warranty, relating to its products including, but not limited to, the implied warranty of merchantability, fitness for a particular purpose, or infringement of any intellectual property right. AMD's products are not designed, intended, authorized or warranted for use as components in systems intended for surgical implant into the body, or in other applications intended to support or sustain life, or in any other application in which the failure of AMD's product could create a situation where personal injury, death, or severe property or environmental damage may occur. AMD reserves the right to discontinue or make changes to its products at any time without notice. Trademarks: AMD, the AMD Arrow logo, and combinations thereof, AMD Athlon, AMD Opteron, 3DNow!, AMD Virtualization and AMD-V are trademarks of Advanced Micro Devices, Inc.
  • Introduction to Multi-Threading and Vectorization
    Introduction to multi-threading and vectorization. Matti Kortelainen, LArSoft Workshop 2019, 25 June 2019.

    Outline, a broad introductory overview: Why multithread? What is a thread? Some threading models: std::thread, OpenMP (fork-join), Intel Threading Building Blocks (TBB, tasks). Race conditions, critical regions, mutual exclusion, deadlock. Vectorization (SIMD).

    Motivations for multithreading (figure courtesy of K. Rupp): With one process on a node, speedups come from parallelizing parts of the program; any problem can get a speedup if the threads can cooperate on the same core (sharing the L1 cache) or within an L2 cache (which may be shared among a small number of cores). On a fully loaded node, threading saves memory and other resources: threads can share objects, so N threads can use significantly less memory than N processes. And if the smallest chunk of data is so big that only one fits in memory at a time, is there any other option?

    What is a (software) thread (in POSIX/Linux)? The "smallest sequence of programmed instructions that can be managed independently by a scheduler" [Wikipedia]. A thread has its own program counter, registers, stack, and thread-local memory (better to avoid in general). Threads of a process share everything else, e.g. program code, constants, heap memory, network connections, and file handles.
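    A minimal C++ sketch of the race-condition and mutual-exclusion ideas listed in the outline - not from the workshop material; it assumes C++11 and protects a shared counter with std::mutex:

```cpp
// Two threads increment a shared counter; the mutex makes each
// increment a critical region (sketch, C++11 or later).
#include <iostream>
#include <mutex>
#include <thread>

int main() {
    long counter = 0;
    std::mutex m;

    auto work = [&counter, &m] {
        for (int i = 0; i < 100000; ++i) {
            std::lock_guard<std::mutex> lock(m);  // mutual exclusion
            ++counter;  // without the lock this read-modify-write is a data race
        }
    };

    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    std::cout << counter << '\n';  // always 200000; unsynchronized, often less
}
```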
  • Performance Counter-Based Strategies to Improve Data Locality on Multiprocessor Systems (CITIUS PhD Dissertation)
    UNIVERSITY OF SANTIAGO DE COMPOSTELA, Department of Electronics and Computer Science, Centro de Investigación en Tecnoloxías da Información (CITIUS). PhD dissertation: Performance Counter-Based Strategies to Improve Data Locality on Multiprocessor Systems: Reordering and Page Migration Techniques. Author: Juan Ángel Lorenzo del Castillo. PhD supervisors: Prof. Francisco Fernández Rivera, Dr. Juan Carlos Pichel Campos. Santiago de Compostela, October 2011.

    Prof. Francisco Fernández Rivera, professor at the Computer Architecture Group of the University of Santiago de Compostela, and Dr. Juan Carlos Pichel Campos, researcher at the Computer Architecture Group of the University of Santiago de Compostela, hereby certify that the dissertation entitled "Performance Counter-based Strategies to Improve Data Locality on Multiprocessor Systems: Reordering and Page Migration Techniques" has been developed by Juan Ángel Lorenzo del Castillo under our direction at the Department of Electronics and Computer Science of the University of Santiago de Compostela, in fulfillment of the requirements for the degree of Doctor of Philosophy. Santiago de Compostela, October 2011.
  • Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis
    Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis. Tal Ben-Nun and Torsten Hoefler, ETH Zurich, Switzerland.

    Deep Neural Networks (DNNs) are becoming an important tool in modern computing applications. Accelerating their training is a major challenge, and techniques range from distributed algorithms to low-level circuit design. In this survey, we describe the problem from a theoretical perspective, followed by approaches for its parallelization. We present trends in DNN architectures and the resulting implications on parallelization strategies. We then review and model the different types of concurrency in DNNs: from the single operator, through parallelism in network inference and training, to distributed deep learning. We discuss asynchronous stochastic optimization, distributed system architectures, communication schemes, and neural architecture search. Based on those approaches, we extrapolate potential directions for parallelism in deep learning.

    CCS Concepts: General and reference → Surveys and overviews; Computing methodologies → Neural networks, Parallel computing methodologies, Distributed computing methodologies. Additional key words and phrases: Deep Learning, Distributed Computing, Parallel Algorithms. ACM Reference Format: Tal Ben-Nun and Torsten Hoefler. 2018. Demystifying Parallel and Distributed Deep Learning: An In-Depth Concurrency Analysis. 47 pages.

    1 INTRODUCTION: Machine Learning, and in particular Deep Learning [143], is rapidly taking over a variety of aspects in our daily lives.
  • Intel® oneAPI Programming Guide
    Intel® oneAPI Programming Guide. Intel Corporation, www.intel.com.

    Contents: Notices and Disclaimers. Chapter 1, Introduction: oneAPI Programming Model Overview; Data Parallel C++ (DPC++); oneAPI Toolkit Distribution; About This Guide; Related Documentation. Chapter 2, oneAPI Programming Model: Sample Program; Platform Model; Execution Model; Memory Model (Memory Objects, Accessors, Synchronization, Unified Shared Memory); Kernel Programming ...
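    A hedged sketch of the kind of DPC++/SYCL sample the guide's table of contents points to ("Sample Program", "Unified Shared Memory") - not taken from the guide itself; it assumes a SYCL 2020 compiler such as Intel's DPC++:

```cpp
// Vector add on the default device using Unified Shared Memory
// (sketch; assumes a SYCL 2020 implementation).
#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    constexpr std::size_t n = 1024;
    sycl::queue q;  // selects a default device (GPU if available, else CPU)

    // USM allocations are visible to both host and device.
    float* a = sycl::malloc_shared<float>(n, q);
    float* b = sycl::malloc_shared<float>(n, q);
    for (std::size_t i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Data-parallel kernel: one work-item per element.
    q.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
        a[i] += b[i];
    }).wait();

    std::cout << a[0] << '\n';  // prints 3
    sycl::free(a, q);
    sycl::free(b, q);
}
```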
  • Parallel Programming: Performance
    Parallel Programming: Performance. Todd C. Mowry, CS 418, January 20, 25 & 26, 2011.

    Introduction: There is a rich space of techniques and issues, which trade off and interact with one another. Issues can be addressed or helped by software (algorithmic or programming techniques) or by hardware (architectural techniques). The focus here is on performance issues and software techniques, pointing out some architectural implications; architectural techniques are covered in the rest of the class.

    Programming as successive refinement: not all issues are dealt with up front. Partitioning is often independent of the architecture and done first, viewing the machine as a collection of communicating processors: balancing the workload, reducing the amount of inherent communication, and reducing extra work - a tug-of-war even among these three issues. Then come the interactions with the architecture, viewing the machine as an extended memory hierarchy: extra communication arises from architectural interactions, and the cost of communication depends on how it is structured, which may inspire changes in partitioning.

    Outline: 1. Partitioning for performance. 2. Relationship of communication, data locality, and architecture. 3. Orchestration for performance; for each issue, techniques to address it and tradeoffs with previous issues, illustration using case studies, application to a grid solver, and some architectural implications; issues are discussed one at a time, but tradeoffs are identified. 4. Components of execution time as seen by the processor: what the workload looks like to the architecture, related to software issues; with examples and measurements on the SGI Origin2000.

    Partitioning for performance: 1. Balancing the workload and reducing wait time at synchronization points; work includes data access and other costs. The limit on speedup is Speedup_problem(p) <= Sequential Work / Max Work on Any Processor. 2. ...
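    A short C++/OpenMP sketch of the load-balancing point above - not from the lecture; it assumes OpenMP support (e.g. -fopenmp) and uses dynamic scheduling so no thread ends up with all the expensive iterations:

```cpp
// Load balancing irregular work with OpenMP dynamic scheduling (sketch).
#include <cmath>
#include <cstdio>

double expensive(int i) {            // cost grows with i: a built-in imbalance
    double s = 0.0;
    for (int k = 0; k < i; ++k) s += std::sin(k);
    return s;
}

int main() {
    const int n = 10000;
    double total = 0.0;

    // schedule(dynamic, 64) hands out 64-iteration chunks on demand, so the
    // max work on any one thread (the denominator in the speedup bound)
    // stays close to the average instead of being dominated by the large i.
    #pragma omp parallel for schedule(dynamic, 64) reduction(+ : total)
    for (int i = 0; i < n; ++i)
        total += expensive(i);

    std::printf("total = %f\n", total);
}
```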
  • Executable Modelling for Highly Parallel Accelerators
    Executable Modelling for Highly Parallel Accelerators. Lorenzo Addazi, Federico Ciccozzi, Björn Lisper. School of Innovation, Design, and Engineering, Mälardalen University, Västerås, Sweden. {lorenzo.addazi, federico.ciccozzi, [email protected]}

    Abstract: High-performance embedded computing is developing rapidly, since applications in most domains require a large and increasing amount of computing power. On the hardware side, this requirement is met by the introduction of heterogeneous systems, with highly parallel accelerators that are designed to take care of the computation-heavy parts of an application. There is today a plethora of accelerator architectures, including GPUs, many-cores, FPGAs, and domain-specific architectures such as AI accelerators. They all have their own programming models, which are typically complex, low-level, and involve explicit parallelism. This yields error-prone software that puts the functional safety at risk, unacceptable for safety-critical embedded applications. In this position paper we argue that high-level executable modelling languages tailored for parallel computing can help in the software design for high performance embedded applications.

    ... development are lagging behind. Programming accelerators needs device-specific expertise in computer architecture and low-level parallel programming for the accelerator at hand. Programming is error-prone and debugging can be very difficult due to potential race conditions: this is particularly disturbing as many embedded applications, like autonomous vehicles, are safety-critical, meaning that failures may have lethal consequences. Furthermore, software becomes hard to port between different kinds of accelerators. A convenient solution is to adopt a model-driven approach, where the computation-intense parts of the application are expressed in a high-level, accelerator-agnostic executable modelling ...
  • Programming Multicores: Do Applications Programmers Need to Write Explicitly Parallel Programs?
    PROGRAMMING MULTICORES: DO APPLICATIONS PROGRAMMERS NEED TO WRITE EXPLICITLY PARALLEL PROGRAMS?

    In this panel discussion from the 2009 Workshop on Computer Architecture Research Directions, David August and Keshav Pingali debate whether explicitly parallel programming is a necessary evil for applications programmers, assess the current state of parallel programming models, and discuss possible routes toward finding the programming model for the multicore era. Panelists: Arvind (Massachusetts Institute of Technology), David August (Princeton University), Keshav Pingali (University of Texas), Derek Chiou (University of Texas).

    Moderator's introduction (Arvind): Do applications programmers need to write explicitly parallel programs? Most people believe that the current method of parallel programming is impeding the exploitation of multicores. In other words, the number of ...

    ... in the 1970s and 1980s, when two main approaches were developed. The first approach required that the compilers do all the work in finding the parallelism. This was often referred to as the "dusty decks" problem - that is, how to ...
  • motmot Documentation, Release 0
    motmot Documentation, Release 0. Andrew Straw, June 26, 2010.

    Contents: 1 Overview (the name motmot; packages within motmot; mailing list; related software). 2 Download and installation instructions (quick install: FView application on Windows). 3 Full install information (supported operating systems; download; installation; download direct from the source code repository). 4 Gallery of applications built on motmot packages (open source; closed source). 5 Frequently Asked Questions (what cameras are supported?; what frame rates, image sizes, bit depths are possible?; which way is up? (why are my images flipped or rotated?)). 6 Writing FView plugins (overview; register your FView plugin; tutorials). 7 Camera trigger device with precise timing and analog input (camtrig - camera trigger ...)