High Performance Integration of Data Parallel File Systems and Computing


HIGH PERFORMANCE INTEGRATION OF DATA PARALLEL FILE SYSTEMS AND COMPUTING: OPTIMIZING MAPREDUCE

Zhenhua Guo

Submitted to the faculty of the University Graduate School in partial fulfillment of the requirements for the degree Doctor of Philosophy in the Department of Computer Science, Indiana University, August 2012.

Accepted by the Graduate Faculty, Indiana University, in partial fulfillment of the requirements of the degree of Doctor of Philosophy.

Doctoral Committee: Geoffrey Fox, Ph.D. (Principal Advisor); Judy Qiu, Ph.D.; Minaxi Gupta, Ph.D.; David Leake, Ph.D.

Copyright © 2012 Zhenhua Guo. All rights reserved.

I dedicate this dissertation to my parents and my wife Mo.

Acknowledgements

First and foremost, I owe my sincerest gratitude to my advisor, Prof. Geoffrey Fox. Throughout my Ph.D. research, he guided me into the research field of distributed systems; his insightful advice inspired me to identify the challenging research problems I am interested in; and his generous intellectual support was critical for me to tackle difficult research issues one after another. During the course of working with him, I learned how to become a professional researcher.

I would like to thank my entire research committee: Dr. Judy Qiu, Prof. Minaxi Gupta, and Prof. David Leake. I am greatly indebted to them for their professional guidance, generous support, and valuable suggestions given throughout this research work. I am grateful to Dr. Judy Qiu for offering me the opportunity to participate in closely related projects. As a result, I obtained a deeper understanding of related systems, including Dryad and Twister, and could better position my research in the big picture of the whole research area.

I would like to thank Dr. Marlon Pierce for his guidance and support of my initial science gateway research. Although the topic of this dissertation is not science gateways, he helped me strengthen my research capability and lay the foundation for my subsequent research on distributed systems. In addition, he provided excellent administrative support for the use of our laboratory clusters. Whenever I had problems with user accounts or environment setup, he was always willing to help.

I was fortunate to work with brilliant colleagues on publications and projects. My discussions with them broadened my research horizons and inspired new research ideas. I would like to thank Xiaoming Gao, Yuan Luo, Yang Ruan, Yiming Sun, Hui Li, and Tak-Lon Wu.

Without the continuous support and encouragement of my family, I could not have come this far. Whenever I encountered any hurdle or frustration, they were always there with me. The love carried by the soothing phone calls from my parents was the strength and motivation for me to pursue my dream. Words cannot fully express my heartfelt gratitude and appreciation to my lovely wife Mo. She accompanied me closely through hardship and tough times in this six-year journey, and she made every effort to help with my life as well as my research. My family made the long Ph.D. journey a joyful and rewarding experience.

Finally, I would like to thank the administrative staff of the Pervasive Technology Institute and the School of Informatics and Computing for providing continual help during my Ph.D. study. With their help, I could concentrate on my research.
Abstract

The ongoing data deluge brings parallel and distributed computing into a new data-intensive computing era, in which many assumptions made by prior research on grid and High-Performance Computing need to be revisited to check their validity and explore their performance implications. Data parallel systems, which differ from traditional HPC architectures in that compute nodes and storage nodes are not separated, have been proposed and widely deployed in both industry and academia. Many research issues that did not exist before, or were not under serious consideration, arise in this new architecture and have a drastic influence on performance and scalability. MapReduce was introduced by the information retrieval community and has quickly demonstrated its usefulness, scalability, and applicability. Its adoption of a data-centered approach yields higher throughput for data-intensive applications.

In this thesis, we present our investigation and improvement of MapReduce. We identify inefficiencies in various aspects of MapReduce, such as data locality, task granularity, resource utilization, and fault tolerance, and we propose algorithms to mitigate these performance issues. Extensive evaluation is presented to demonstrate the effectiveness of the proposed algorithms and approaches. In addition, Yuan Luo, Yiming Sun, and I observe the inability of MapReduce to utilize cross-domain grid resources, and we propose a MapReduce extension called Hierarchical MapReduce (HMR). Furthermore, to speed up the execution of our bioinformatics data visualization pipelines, which contain both single-pass and iterative MapReduce jobs, we present Hybrid MapReduce (HyMR), a workflow management system built by Yang Ruan and me on top of Hadoop and Twister. The thesis also includes a detailed performance evaluation of Hadoop and several storage systems, and it provides useful insights to both framework and application developers.
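For readers unfamiliar with the programming model the thesis analyzes, the following is a minimal sketch of a MapReduce word count, with the framework's shuffle stage simulated in plain Python. It is illustrative only and is not code from the thesis; Hadoop itself exposes this model through a Java API.

```python
from collections import defaultdict

# Illustrative sketch of the MapReduce model: the user supplies a map
# function emitting (key, value) pairs and a reduce function aggregating
# all values that share a key; the framework handles the shuffle between.

def map_fn(line):
    # Emit (word, 1) for every word in one input record.
    return [(word, 1) for word in line.split()]

def reduce_fn(word, counts):
    # Aggregate all values observed for one key.
    return (word, sum(counts))

def run_job(records):
    # Shuffle: group intermediate pairs by key, as the runtime would.
    groups = defaultdict(list)
    for record in records:
        for key, value in map_fn(record):
            groups[key].append(value)
    return [reduce_fn(key, values) for key, values in sorted(groups.items())]

if __name__ == "__main__":
    print(run_job(["the quick brown fox", "the lazy dog"]))
    # [('brown', 1), ('dog', 1), ('fox', 1), ('lazy', 1), ('quick', 1), ('the', 2)]
```

In a real deployment, the map and reduce invocations run as distributed tasks over blocks of a distributed file system; which node each map task runs on relative to its input block is exactly the data locality question studied in Chapter 4.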
Contents

Acknowledgements
Abstract

1 Introduction and Background
  1.1 Introduction
  1.2 Data Parallel Systems
    1.2.1 Google File System (GFS)
    1.2.2 MapReduce
  1.3 Motivation
  1.4 Problem Definition
  1.5 Contributions
  1.6 Dissertation Outline

2 Parallel Programming Models and Distributed Computing Runtimes
  2.1 Programming Models
    2.1.1 Multithreading
    2.1.2 Open Multi-Processing (OpenMP)
    2.1.3 Message Passing Interface (MPI)
    2.1.4 Partitioned Global Address Space (PGAS)
    2.1.5 MapReduce
    2.1.6 Iterative MapReduce
  2.2 Batch Queuing Systems
  2.3 Data Parallel Runtimes
    2.3.1 Hadoop
    2.3.2 Iterative MapReduce Runtimes
    2.3.3 Cosmos/Dryad
    2.3.4 Sector and Sphere
  2.4 Cycle Scavenging and Volunteer Computing
    2.4.1 Condor
    2.4.2 Berkeley Open Infrastructure for Network Computing (BOINC)
  2.5 Parallel Programming Languages
    2.5.1 Sawzall
    2.5.2 Hive
    2.5.3 Pig Latin
    2.5.4 X10
  2.6 Workflow
    2.6.1 Grid workflow
    2.6.2 MapReduce workflow

3 Performance Evaluation of Data Parallel Systems
  3.1 Swift
  3.2 Testbeds
  3.3 Evaluation of Hadoop
    3.3.1 Job run time w.r.t. the number of nodes
    3.3.2 Job run time w.r.t. the number of map slots per node
    3.3.3 Run time of map tasks
  3.4 Evaluation of Storage Systems
    3.4.1 Local I/O subsystem
    3.4.2 Network File System (NFS)
    3.4.3 Hadoop Distributed File System (HDFS)
    3.4.4 OpenStack Swift
    3.4.5 Small file tests
  3.5 Summary

4 Data Locality Aware Scheduling
  4.1 Traditional Approaches to Build Runtimes
  4.2 Data Locality Aware Approach
  4.3 Analysis of Data Locality in MapReduce
    4.3.1 Data Locality in MapReduce
    4.3.2 Goodness of Data Locality
  4.4 A Scheduling Algorithm with Optimal Data Locality
    4.4.1 Non-optimality of dl-sched
    4.4.2 lsap-sched: An Optimal Scheduler for Homogeneous Network
    4.4.3 lsap-sched for Heterogeneous Network
  4.5 Experiments
    4.5.1 Impact of Data Locality in Single-Cluster Environments
    4.5.2 Impact of Data Locality in Cross-Cluster Environments
    4.5.3 Impact of Various Factors on Data Locality
    4.5.4 Overhead of the LSAP Solver
    4.5.5 Improvement of Data Locality by lsap-sched
    4.5.6 Reduction of Data Locality Cost
  4.6 Integration of Fairness
    4.6.1 Fairness
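Among the contributions listed above, lsap-sched (Section 4.4.2) formulates map-task scheduling as a Linear Sum Assignment Problem so that a cost-optimal task-to-slot assignment can be computed for all idle slots at once. The sketch below conveys that idea only: the two-level cost model (0 for a node-local task, 2 otherwise) and the use of SciPy's off-the-shelf LSAP solver are assumptions for illustration, not the thesis's implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Idea sketch for LSAP-based map-task scheduling: build a cost matrix in
# which cost[i][j] is the data-movement cost of running task i in slot j,
# then solve for the assignment that minimizes total cost.

def lsap_schedule(tasks, slots, cost_fn):
    cost = np.array([[cost_fn(t, s) for s in slots] for t in tasks])
    rows, cols = linear_sum_assignment(cost)  # globally optimal assignment
    return {tasks[i]: slots[j] for i, j in zip(rows, cols)}

# Toy cluster: each task's input block lives on one node (assumed model).
data_node = {"t0": "n1", "t1": "n2", "t2": "n1"}

def cost_fn(task, node):
    # Assumed two-level cost model: 0 if node-local, 2 if remote.
    return 0 if data_node[task] == node else 2

print(lsap_schedule(["t0", "t1", "t2"], ["n1", "n2", "n3"], cost_fn))
# One optimal result: {'t0': 'n1', 't1': 'n2', 't2': 'n3'} (total cost 2)
```

A scheduler that commits tasks one slot at a time, such as the dl-sched baseline examined in Section 4.4.1, can miss this global optimum, which is the motivation for the assignment-problem formulation.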