Gossamer: A Lightweight Approach to Using Multicore Machines

File type: PDF · 16 pages · 1020 KB

Item type: text; Electronic Dissertation
Author: Joseph Anthony Roback
Publisher: The University of Arizona
Rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Download date: 06/10/2021 14:59:16
Link to item: http://hdl.handle.net/10150/194468

GOSSAMER: A LIGHTWEIGHT APPROACH TO USING MULTICORE MACHINES

by Joseph Anthony Roback

A Dissertation Submitted to the Faculty of the DEPARTMENT OF COMPUTER SCIENCE In Partial Fulfillment of the Requirements For the Degree of DOCTOR OF PHILOSOPHY In the Graduate College, THE UNIVERSITY OF ARIZONA, 2010.

STATEMENT BY AUTHOR

This dissertation has been submitted in partial fulfillment of requirements for an advanced degree at the University of Arizona and is deposited in the University Library to be made available to borrowers under rules of the Library.

Brief quotations from this dissertation are allowable without special permission, provided that accurate acknowledgment of source is made. Requests for permission for extended quotation from or reproduction of this manuscript in whole or in part may be granted by the head of the major department or the Dean of the Graduate College when in his or her judgment the proposed use of the material is in the interests of scholarship. In all other instances, however, permission must be obtained from the author.

SIGNED: Joseph Anthony Roback

ACKNOWLEDGEMENTS

I cannot adequately express my gratitude to my advisor, Dr. Greg Andrews, for his inspiration, support, encouragement, trust, and patience throughout my doctoral work. I have been very fortunate to have an advisor who helped me overcome many difficult situations and finish this dissertation. I am also very grateful to him for his generosity, coordination, and supervision, which made it possible for me to complete this dissertation from a distance of more than 1,000 miles (even after his official retirement). I wish Greg a long and healthy retirement with his wife Mary and their family.

I would also like to thank the other members of my committee. Saumya Debray was always willing to put up with my ignorance of compiler theory and to take my random questions as I walked by his office's open door. David Lowenthal and John Hartman also provided helpful advice and feedback.

I would also like to thank the National Science Foundation. This research was supported in part by NSF Grants CNS-0410918 and CNS-0615347.

A special thanks to my fellow graduate students from the Department of Computer Science: Kevin Coogan, Somu Perianayagam, Igor Crk, Kyri Pavlou, and Varun Khare. They were always around for lunch or coffee when procrastination ranked higher than dissertation work.

Most importantly, I'd like to thank my family, who have been very supportive throughout my life and academic career. I would especially like to thank my mother for her endless support and for encouraging me to pursue my passion in computer science. I would like to thank my grandfather for his endless teachings about life, and my grandmother, Jean-Jean, who was always there to mend my torn clothes. Last but certainly not least, a big thanks to Jess, my patient and loving soul mate, whom I love dearly.
DEDICATION

To Mom, Jean-Jean, and Gramp, for all their support and generosity.

TABLE OF CONTENTS

LIST OF FIGURES
LIST OF TABLES
ABSTRACT
CHAPTER 1 INTRODUCTION
  1.1 Current Approaches
  1.2 The Gossamer Approach and Contributions
  1.3 Outline
CHAPTER 2 ANNOTATIONS
  2.1 Task and Recursive Parallelism
  2.2 Loop Parallelism
  2.3 Domain Decomposition
  2.4 Synchronization
  2.5 MapReduce
CHAPTER 3 EXAMPLES
  3.1 Quicksort
  3.2 N-Queens
  3.3 Bzip2
  3.4 Matrix Multiplication
  3.5 Sparse Matrix Multiplication
  3.6 Jacobi Iteration
  3.7 Run Length Encoding
  3.8 Word Count
  3.9 Multigrid
  3.10 Gravitational N-Body Problem
  3.11 Summary
CHAPTER 4 TRANSLATOR AND RUN-TIME SYSTEM
  4.1 Recursive and Task Parallelism
  4.2 Loop Parallelism
  4.3 Domain Decomposition
  4.4 Additional Synchronization
  4.5 MapReduce
  4.6 Static Program Analysis and Feedback
  4.7 Run-Time Program Analysis and Performance Profiling
  4.8 Combining Multiple Levels of Parallelism
CHAPTER 5 PERFORMANCE
  5.1 Testing Methodology and Hardware
  5.2 Application Performance on Intel Xeon E5405 (8-core)
  5.3 Application Performance on SGI Altix 4700 (32-core)
  5.4 Application-Independent Gossamer Overheads
  5.5 Summary
CHAPTER 6 RELATED WORK
  6.1 Thread Packages and Libraries
  6.2 Parallelizing Compilers
  6.3 Parallel Languages
  6.4 Stream and Vector Processing Packages
  6.5 Annotation Based Languages
  6.6 Performance of Gossamer versus Cilk++ and OpenMP
CHAPTER 7 CONCLUSION AND FUTURE WORK
APPENDIX A GOSSAMER ANNOTATIONS
  A.1 Task Parallelism
  A.2 Loop Parallelism
  A.3 Domain Decomposition
  A.4 Synchronization
  A.5 MapReduce
APPENDIX B GOSSAMER MANUAL
  B.1 Installing Gossamer
  B.2 Compiling and Running Gossamer Programs
  B.3 Debugging and Profiling Gossamer Programs
REFERENCES

LIST OF FIGURES

3.1 Example: Quicksort
3.2 Example: N-Queens Problem
3.3 Example: Bzip2
3.4 Example: Matrix Multiplication
3.5 Example: Sparse Matrix Multiplication
3.6 Example: Jacobi Iteration
3.7 Example: Run Length Encoding
3.8 Example: Word Count
3.9 Example: Multigrid
3.10 Example: N-Body
4.1 Gossamer Overview
4.2 Two Dimensional divide
4.3 MapReduce Hash Table
6.1 Matrix Multiplication Using Only One Filament Per Server Thread

LIST OF TABLES

2.1 Gossamer Annotations
3.1 Applications and the Annotations They Use
3.2 Dwarf Computational Patterns and Applications
5.1 Execution Times on Intel Xeon
5.2 Application Speedups on Intel Xeon
5.3 Execution Times on SGI Altix 4700
5.4 Application Speedups on SGI Altix 4700
5.5 Run-Time Initialization Overheads (CPU Cycles)
5.6 Filament Creation Overheads (Intel Xeon workstation)
6.1 Gossamer, Cilk++, and OpenMP: Execution Times
6.2 Gossamer, Cilk++, and OpenMP: Speedups

ABSTRACT

The key to performance improvements in the multicore era is for software to utilize the newly available concurrency. Consequently, programmers will have to learn new programming techniques, and software systems will have to be able to manage the parallelism effectively. The challenge is to do so simply, portably, and efficiently. This dissertation presents a lightweight programming framework called Gossamer that is easy to use, enables the solution of a broad range of parallel programming problems, and produces efficient code. Gossamer supports task and recursive parallelism, iterative parallelism, domain decomposition, pipelined computations, and MapReduce computations. Gossamer contains (1) a set of high-level annotations that one adds to a sequential program to specify concurrency and synchronization, (2) a source-to-source translator that produces an optimized program, and (3) a run-time system that provides efficient threads and synchronization. The annotation-based programming model simplifies writing parallel programs by allowing the programmer to concentrate on the application and not the extensive bookkeeping involved with concurrency and synchronization; moreover, the annotations never reference any particulars of the underlying hardware.
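To make the annotation model concrete, below is a minimal sketch of what an annotated sequential program might look like, assuming fork/join-style annotations for the task and recursive parallelism that the abstract and Chapter 2 describe. The "fork" and "join" keywords are illustrative assumptions rather than Gossamer's confirmed syntax (Appendix A defines the actual annotations), and the marked-up source is input for the Gossamer source-to-source translator, not a plain C compiler.

    /* Hypothetical annotated quicksort (a sketch only): the fork/join
       annotation names are assumptions for illustration and may not match
       Gossamer's exact syntax.  The translator would replace them with
       calls into the run-time system's lightweight threads. */

    static int partition(int *a, int lo, int hi) {
        int pivot = a[hi], i = lo;          /* Lomuto partition around a[hi] */
        for (int j = lo; j < hi; j++) {
            if (a[j] < pivot) {
                int t = a[i]; a[i] = a[j]; a[j] = t;
                i++;
            }
        }
        int t = a[i]; a[i] = a[hi]; a[hi] = t;
        return i;                           /* final pivot position */
    }

    void quicksort(int *a, int lo, int hi) {
        if (lo >= hi)
            return;
        int p = partition(a, lo, hi);
        fork quicksort(a, lo, p - 1);       /* annotation: run as a parallel task */
        fork quicksort(a, p + 1, hi);       /* annotation: a second parallel task */
        join;                               /* annotation: wait for forked tasks  */
    }

Everything other than the three annotations is ordinary sequential C, which is the point of the model: the programmer concentrates on the application while the translator and run-time system handle the concurrency bookkeeping.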
Recommended publications
  • Are You Ready to Enter a Parallel Universe: Optimizing Applications for Multicore, by Intel software engineer Levent Akyil (Welcome to the Parallel Universe)
    Welcome to the Parallel Universe. Includes a letter to the editor by parallelism author and expert James Reinders. Contents: "Think Parallel or Perish," by James Reinders: Reinders, Lead Evangelist and a Director with Intel® Software Development Products, sees a future where every software developer needs to be thinking about parallelism first when programming. He first published "Think Parallel or Perish" three years ago; now he revisits his comments to offer an update on where we have gone and what still lies ahead. "Parallelization Methodology": the four stages of parallel application development addressed by Intel® Parallel Studio. "Writing Parallel Code Safely," by Peter Varhol: writing multithreaded code to take full advantage of multiple processors and multicore processors is difficult; the new Intel® Parallel Studio should help us bridge that gap. "Are You Ready to Enter a Parallel Universe: Optimizing Applications for Multicore," by Levent Akyil: a look at parallelization methods made possible by the new Intel® Parallel Studio, designed for Microsoft Visual Studio* C/C++ developers of Windows* applications.
  • Andrzej Nowak - Bio
    Multi-core Architectures, Lecture 2 of "Towards Reconfigurable High-Performance Computing," Inverted CERN School of Computing, 3-5 March 2008, CERN openlab (Geneva, Switzerland). Andrzej Nowak, bio: 2005-2006, Intel Corporation, IEEE 802.16d/e WiMax development and Linux kernel performance optimization research; 2006, Master Engineer diploma in Computer Science (Distributed Applications & Internet Systems, Computer Systems Modeling); 2007-2008, CERN openlab, multi-core technologies, performance monitoring, systems architecture. Introduction. Objectives: explain why multi-core architectures have become so popular; explain why parallelism is such a good bet for the near future; provide information about multi-core specifics; discuss the changes in the computing landscape; discuss the impact of hardware on software. Contents: hardware part, software part, outlook ("the free ride is over"). Also covers fundamentals of scalability ("readiness for enlargement") and Moore's Law, an observation made in 1965 by Gordon Moore, the co-founder of Intel Corporation.
  • Efficiency, Energy Efficiency and Programming of Accelerated HPC Servers: Highlights of PRACE Studies
    Efficiency, energy efficiency and programming of accelerated HPC servers: highlights of PRACE studies. Lennart Johnsson, Department of Computer Science, University of Houston, and School of Computer Science and Communications, KTH. To appear in Springer Verlag, "GPU Solutions to Multi-scale Problems in Science and Engineering," 2011. Abstract: During the last few years the convergence in architecture for High-Performance Computing systems that took place for over a decade has been replaced by a divergence. The divergence is driven by the quest for performance, cost-performance, and, in the last few years, also energy consumption, which during the lifetime of a system has come to exceed the HPC system cost in many cases. Mass-market, specialized processors, such as the Cell Broadband Engine (CBE) and graphics processors, have received particular attention, the latter especially after hardware support for double-precision floating-point arithmetic was introduced about three years ago. The recent support of Error Correcting Code (ECC) for memory and significantly enhanced performance for double-precision arithmetic in the current generation of Graphics Processing Units (GPUs) have further solidified the interest in GPUs for HPC. In order to assess the issues involved in potentially deploying clusters with nodes consisting of commodity microprocessors with some type of specialized processor for enhanced performance or enhanced energy efficiency or both for science and engineering workloads, PRACE, the Partnership for Advanced Computing in Europe, undertook a study that included three types of accelerators, the CBE, GPUs and ClearSpeed, and tools for their programming. The study focused on assessing performance, efficiency, power efficiency for double-precision arithmetic, and programmer productivity.
  • Design of a Parallel Multi-Threaded Programming Model for Multi-Core Processors
    Design of a Parallel Multi-Threaded Programming Model for Multi-Core Processors. PhD thesis by Muhammad Ali Ismail, batch 2008-2009, submitted for the degree of Doctor of Philosophy, Department of Computer and Information Systems Engineering, NED University of Engineering & Technology, University Road, Karachi - 75270, Pakistan, 2011. Project advisor: Prof. Dr. Shahid Hafeez Mirza; project co-supervisor: Prof. Dr. Talat Altaf. Certificate: Certified that the thesis entitled "Development of a New Parallel Multi-Threaded Programming Model for Multi-Core Processors," which is being submitted by Mr. Muhammad Ali Ismail for the award of the degree of Doctor of Philosophy in the Computer & Information Systems Engineering Department of NED University of Engineering and Technology, is a record of the candidate's own original work carried out by him under our supervision and guidance. The work incorporated in this thesis has not been submitted elsewhere for the award of any other degree. Signed: Prof. Dr. Talat Altaf, Dean (ECE), NEDUET, PhD co-supervisor; Prof. Dr. Shahid Hafeez Mirza, Professor, UIT, PhD supervisor. Acknowledgements: In the first place, I would like to thank the Almighty Allah for His countless blessings. In fact, all praise and glory belongs to Him and none has the right and worth to be worshipped but He. Next, I would like to acknowledge my home university, NED University of Engineering and Technology, for giving me the opportunity and funding for conducting this PhD research.
  • Ct: A New Paradigm for Data Parallel Computing (download, 633 KB)
    By Hans-Christian Hoppe, Intel Visual Computing Institute, Intel Labs, using material from http://intel.com/software/products and from Anwar Ghuloum, CJ Newburn, Michael McCool and Stefanus Du Toit, Performance and Productivity Libraries, Developer Products Division, Software and Services Group. Copyright © 2009, Intel Corporation. All rights reserved. *Other names and brands may be claimed as the property of others. The deck opens with Intel's standard legal disclaimer: the information is provided "as is," no license to any intellectual property rights is granted by the document, Intel disclaims any express or implied warranties, and performance tests and ratings reflect the approximate performance of Intel products on specific systems, so buyers should consult other sources of information to evaluate the performance of systems or components they are considering purchasing (see www.intel.com/software/products).
  • General Purpose Programming on Modern Graphics Hardware
    General Purpose Programming on Modern Graphics Hardware. Robert Fleming, B.Sc. Thesis prepared for the degree of Master of Science (Computer Science), University of North Texas, May 2008, 90 pp., 1 table, 3 figures, 124 titles. Approved: Robert Renka, major professor; Armin Mikler, committee member; Tom Jacob, committee member; Krishna Kavi, chair of the Department of Computer Science and Engineering; Oscar Garcia, dean of the College of Engineering; Sandra L. Terrell, dean of the Robert B. Toulouse School of Graduate Studies. Abstract: I start with a brief introduction to the graphics processing unit (GPU) as well as general-purpose computation on modern graphics hardware (GPGPU). Next, I explore the motivations for GPGPU programming, and the capabilities of modern GPUs (including advantages and disadvantages). Also, I give the background required for further exploring GPU programming, including the terminology used and the resources available. Finally, I include a comprehensive survey of previous and current GPGPU work, and end with a look at the future of GPU programming. Dedication: To Wanda and Cheesepuff, my partners in crime.
  • Next Generation Developer Tools from Intel
    Next Generation Developer Tools from Intel, focusing on the new performance analysis tool. 4th Parallel Tools Workshop, HLRS, September 2010. Developer tools of Intel today: fully supported developer products, plus numerous unsupported tools such as PIN, PTU, the AVX emulator, and CnC, freely available from whatif.intel.com and other public sites. Intel® Parallel Studio (Windows only), new version 2011 released September 2nd. Intel® Parallel Advisor: demystifies and speeds parallel application design; directs the user where to parallelize; Explorer and Modeler tools give parallelism design insight and analysis; proposes the parallelism scheme best suited for the application; summary view for decision-making. Intel® Parallel Composer: C++ compiler, libraries, debugger plug-in; the Intel® Parallel Debugger Extension simplifies debugging parallel code; a family of portable, reliable, future-proof parallel models for both data and task parallelism, including Intel TBB and Cilk Plus (new); support for Intel Array Building Blocks; Intel® IPP and OpenMP* included. Intel® Parallel Inspector: dynamic memory and thread analysis for serial and parallel code; finds thread data races, deadlocks, memory leaks, and corruption. Intel® Parallel Amplifier: parallel performance analyzer; finds both serial and parallel performance bottlenecks; scales application performance with more processor cores; no special compilers or builds necessary.
  • Software Plattform Embedded Systems 2020
    SPES, Software Plattform Embedded Systems 2020: description of the case study "Multi-core and Many-core Evaluation," version 1.0. Project: SPES 2020; responsible: Richard Membarth; QA responsible: Mario Körner, Frank Hannig; created 18.06.2010; change log: final report created 22.06.2010, all chapters. Contents: 1. Evaluation application and criteria (2D/3D image registration, checklist, profiling, parallelization approaches); 2. Multi-core frameworks (OpenMP, Cilk++, Threading Building Blocks, RapidMind, OpenCL, discussion); 3. Many-core frameworks (RapidMind, PGI Accelerator, OpenCL, CUDA, Intel Ct, Larrabee, related frameworks such as bulk-synchronous GPU programming, HMPP Workbench, Goose, and YDEL for CUDA, discussion); 4. Conclusion; bibliography. Abstract: In this study, different parallelization frameworks for standard shared-memory multi-core processors, as well as parallelization frameworks for many-core processors like graphics cards, are evaluated.
  • Cross-Platform Software Optimization with Intel's Ct Technology
    Cross-platform Software Optimization with Intel's Ct Technology. Agenda: What is Intel's Ct Technology? How does Intel's Ct Technology work? How to port C++ code to Intel's Ct Technology. Throughput and visual computing applications: "RMS" applications (Recognition, Mining, Synthesis), entertainment, learning and travel, personal media creation and management, 3D and video, multimedia, text; tera-scale performance across dataset sizes from kilobytes to terabytes. Software opportunities: many-core architecture offers unprecedented opportunities for software performance and ISV differentiation, through model-based applications with new functionalities enriching the user's experience, improved quality with higher resolution and accuracy in results, and increased usability with raw power to fuel more sophisticated usage models. Risks, in the form of programmer "headaches" and reduced productivity: data races, a new class of bugs that increases exponentially with the degree of parallelism; performance tuning, where programmers can expect to spend most of their time; forward scaling to anticipate future hardware enhancements; modularity, since parallel programs are difficult to compose; and proprietary tools, which reduce choice and challenge build infrastructure. Throughput computing software technologies make this easier: industry-standard APIs (DirectX*, OpenGL*), native tools (SSE/AVX/co-processor native compilers, OpenMP*, Intel® Threading Building Blocks, etc.).
  • Intel® Software Development Tools Intel® Parallel Studio Seminar
    From Serial to Parallel: Intel® Software Products for HPC. Hubert Haberstock, Technical Consulting Engineer, Software & Services Group, Developer Products Division. Agenda: 09:15 welcome and opening of proceedings (Assintel); 09:30 parallel architecture: the development of the hardware (Intel Italy); 10:00 parallel programming, today and tomorrow (Intel); 11:05 from serial to parallel: Intel high-performance tools (Intel), Intel Parallel Studio (C. Fiorillo); 13:30 a case study (C. Fiorillo); 14:15 parallel programming methods and tools (Intel); 15:00 application optimization (C. Fiorillo); 16:00 wrap-up, Q&A, seminar evaluation. Intel software tools across the parallel design cycle: architectural analysis through visualization of serial applications and the system; introducing parallelism with highly optimizing compilers delivering scalable solutions; validating correctness to detect latent programming challenges; performance tuning for performance and scalability. Intel® VTune™ Analyzer 9.1 identifies hard-to-find performance bottlenecks. Features: tune process- or thread-parallel code; low-overhead sampling; graphical call graph; view results on source or assembly. Applications: system-wide analysis; finding hotspots; tuning libraries, drivers, and applications; remote data collector for Windows*/Linux*; programming-language and compiler independent; supports the latest Intel processors (Windows*, Linux*, Mac*; IA32, Intel64, IA64; multicore). "The Intel VTune Performance Analyzer took a multi-day task and turned it into a sub-day task." (Randy Camp, VP, Software R&D, MUSICMATCH Inc.)
  • Qilin: Exploiting Parallelism on Heterogeneous Multiprocessors with Adaptive Mapping
    Qilin: Exploiting Parallelism on Heterogeneous Multiprocessors with Adaptive Mapping. Chi-Keung Luk, Software Pathfinding and Innovations, Software and Services Group, Intel Corporation, Hudson, MA 01749, [email protected]; Sunpyo Hong, Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, [email protected]; Hyesoon Kim, College of Computing, School of Computer Science, Georgia Institute of Technology, Atlanta, GA 30332, [email protected]. Abstract: Heterogeneous multiprocessors are increasingly important in the multi-core era due to their potential for high performance and energy efficiency. In order for software to fully realize this potential, the step that maps computations to processing elements must be as automated as possible. However, the state-of-the-art approach is to rely on the programmer to specify this mapping manually and statically. This approach is not only labor intensive but also not adaptable to changes in runtime environments like problem sizes and hardware/software configurations. In this study, we propose adaptive mapping, a fully automatic technique to map computations to processing elements on a CPU+GPU machine. 1. Introduction: Multiprocessors have emerged as mainstream computing platforms nowadays. Among them, an increasingly popular class are those with heterogeneous architectures. By providing processing elements (PEs) of different performance/energy characteristics on the same machine, these architectures could deliver high performance and energy efficiency [14]. The most well-known heterogeneous architecture today is probably the IBM/Sony Cell architecture, which consists of a Power processor and eight synergistic processors [26]. In the personal computer (PC) world, a desktop now has a multicore CPU and a GPU, exposing multiple levels of