BARGHI SAMAN.Pdf (1.888Mb)


Improving the Performance of User-level Runtime Systems for Concurrent Applications

by Saman Barghi

A thesis presented to the University of Waterloo in fulfillment of the thesis requirement for the degree of Doctor of Philosophy in Computer Science. Waterloo, Ontario, Canada, 2018. © Saman Barghi 2018

Examining Committee Membership

The following served on the Examining Committee for this thesis. The decision of the Examining Committee is by majority vote.

External Examiner: Carsten Griwodz, Professor, Department of Informatics, University of Oslo, Oslo, Norway; Chief Research Scientist, Simula Research Laboratory, Lysaker, Norway
Supervisor(s): Martin Karsten, Associate Professor, Cheriton School of Computer Science, University of Waterloo
Internal Members: Peter A. Buhr, Associate Professor, Cheriton School of Computer Science, University of Waterloo; Ken Salem, Professor, Cheriton School of Computer Science, University of Waterloo
Internal-External Member: Werner Dietl, Assistant Professor, Dept. of Electrical and Computer Engineering, University of Waterloo

I hereby declare that I am the sole author of this thesis. This is a true copy of the thesis, including any required final revisions, as accepted by my examiners. I understand that my thesis may be made electronically available to the public.

Abstract

Concurrency is an essential part of many modern large-scale software systems. Applications must handle millions of simultaneous requests from millions of connected devices. Handling such a large number of concurrent requests requires runtime systems that efficiently manage concurrency and communication among tasks in an application across multiple cores. Existing low-level programming techniques provide scalable solutions with low overhead, but require non-linear control flow. Alternative approaches to concurrent programming, such as Erlang and Go, support linear control flow by mapping multiple user-level execution entities across multiple kernel threads (M:N threading). However, these systems provide comprehensive execution environments that make it difficult to assess the performance impact of user-level runtimes in isolation.

This thesis presents a nimble M:N user-level threading runtime that closes this conceptual gap and provides a software infrastructure to precisely study the performance impact of user-level threading. Multiple design alternatives are presented and evaluated for the scheduling, I/O multiplexing, and synchronization components of the runtime. The performance of the runtime is evaluated in comparison to event-driven software, system-level threading, and other user-level threading runtimes. An experimental evaluation is conducted using benchmark programs, as well as the popular Memcached application. The user-level runtime supports high levels of concurrency without sacrificing application performance.

In addition, the user-level scheduling problem is studied in the context of an existing actor runtime that maps multiple actors to multiple kernel-level threads. In particular, two locality-aware work-stealing schedulers are proposed and evaluated. It is shown that locality-aware scheduling can significantly improve the performance of a class of applications with a high level of concurrency. In general, the performance and resource utilization of large-scale concurrent applications depend on the level of concurrency that can be expressed by the programming model. This fundamental effect is studied by refining and customizing existing concurrency models.

Acknowledgements

I would like to take this opportunity to thank all those who helped me to make this thesis possible. First and foremost, I would like to express my sincere thanks to my supervisor, Martin Karsten, for his regular advice, guidance, encouragement and perseverance throughout the course of my PhD. This thesis is only made possible with his unlimited help and support. I would also like to thank Peter A. Buhr for his guidance, help, and support throughout the course of my PhD. I thank the members of my PhD committee for their valuable and constructive feedback: Carsten Griwodz, Werner Dietl, Ken Salem, and Peter A. Buhr. Finally, I would like to thank the many friends and colleagues I have met during my studies for their support and friendship over the past few years.

Dedication

This is dedicated to my beloved parents and family for their love, endless support, encouragement, and sacrifices.

Table of Contents

List of Tables
List of Figures
Abbreviations

1 Introduction
  1.1 Thesis Statement
  1.2 Main Results

2 Background and Related Work
  2.1 Multicore Processors
  2.2 Concurrent and Parallel Programming Models
    2.2.1 Thread Programming
    2.2.2 Event Programming
    2.2.3 Actor-Model Programming
    2.2.4 Other Programming Models
  2.3 User-level Runtime Systems
    2.3.1 Input/Output (I/O)
    2.3.2 Scheduling
    2.3.3 User-level Threading Libraries

3 User-level Threading Runtime
  3.1 Overview
  3.2 Scheduling
    3.2.1 Placement
    3.2.2 Scheduler Design
    3.2.3 Idle Management
  3.3 Synchronization
  3.4 Synchronous I/O Interface
    3.4.1 IO Subsystem
    3.4.2 IO Polling
    3.4.3 Polling Mechanism
  3.5 Implementation Details

4 Evaluation
  4.1 Environment and Tools
  4.2 Queue Performance
  4.3 I/O Subsystem Design
    4.3.1 Yield Before Read
    4.3.2 Lazy Registration
  4.4 I/O Polling
  4.5 Benchmark Evaluation
    4.5.1 Scalability
    4.5.2 Efficiency
    4.5.3 I/O Multiplexing
  4.6 Application Evaluation
    4.6.1 Memcached Transformation
    4.6.2 Performance Evaluation

5 Actor Scheduling
  5.1 Related Work
  5.2 Characteristics of Actor Applications
  5.3 Locality-Aware Scheduler (LAS)
    5.3.1 Memory Hierarchy Detection
    5.3.2 Locality-aware Work-stealing
  5.4 Experiments and Evaluation
    5.4.1 Benchmarks
    5.4.2 Experiments
    5.4.3 Chat Server

6 Utilization Model
  6.1 Probabilistic Model
  6.2 Analytic Model
    6.2.1 Experimental Verification

7 Conclusion and Future Work
  7.1 Summary and Conclusion
  7.2 Future Work

References

APPENDICES
A Other User-level Threading Libraries
B Intel Results

List of Tables

2.1 M:N user-level threading frameworks with I/O support
4.1 Cache Layout

List of Figures

2.1 Shared Ready Queue
2.2 Shared Ready Queue and Local Queues
2.3 Local Queues
2.4 Deque Used in Work-stealing
3.1 An example of a setup with 6 processors and 3 clusters
3.2 Scheduler and I/O subsystem per cluster
3.3 Cluster Scheduler
3.4 Barging spectrum for different locks
3.5 Blocking mutex with baton passing
3.6 User-Level I/O Synchronization
3.7 Design with a poller fibre per cluster
3.8 Central polling mechanism using a dedicated poller thread
3.9 Polling frequency spectrum
3.10 (a) Fixed-size stack, (b) Segmented stack
4.1 Scalability of Different Queues
4.2 Effect of yielding before read on a web server performance
4.3 Effect of lazy registration on short-lived connections using a web server
4.4 Throughput and Latency of I/O polling mechanisms
4.5 Scalability (32 cores, 1024 fibres, 32 locks). Throughput is normalized by the respective throughput for a single thread/fibre
4.6 Efficiency (N^2 fibres, N = 4 locks, 20µs loops), lower is better
4.7 Web Server (12,000 connections)
4.8 Web Server (16 cores)
4.9 Web Server (16 cores)
4.10 Memcached - Core Scalability (6,000 connections, write ratio 0.1)
4.11 Memcached - Connection Scalability (16 cores, write ratio 0.1)
4.12 Memcached - Read/Write Scalability With Blocking Mutex (16 cores, 6,000 connections)
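The abstract above centres on M:N user-level threading, where a runtime switches between many user-level execution contexts on a few kernel threads. As a minimal, self-contained illustration of the underlying mechanism (not the thesis's runtime, whose design is covered in Chapter 3), the following sketch uses POSIX ucontext to switch between the main context and one "fibre" entirely in user space:

```cpp
#include <iostream>
#include <ucontext.h>   // POSIX; available on Linux/glibc

// Minimal user-level context switching: two "fibres" alternate by swapping
// contexts in user space, without any kernel-level scheduling decision.
static ucontext_t main_ctx, fibre_ctx;

static void fibre_body() {
    std::cout << "fibre: step 1\n";
    swapcontext(&fibre_ctx, &main_ctx);   // yield back to main
    std::cout << "fibre: step 2\n";
    // falling off the end resumes uc_link (main_ctx)
}

int main() {
    static char stack[64 * 1024];         // the fibre's private stack

    getcontext(&fibre_ctx);
    fibre_ctx.uc_stack.ss_sp = stack;
    fibre_ctx.uc_stack.ss_size = sizeof(stack);
    fibre_ctx.uc_link = &main_ctx;        // where to continue when the fibre returns
    makecontext(&fibre_ctx, fibre_body, 0);

    std::cout << "main: starting fibre\n";
    swapcontext(&main_ctx, &fibre_ctx);   // run the fibre until it yields
    std::cout << "main: fibre yielded, resuming it\n";
    swapcontext(&main_ctx, &fibre_ctx);   // run the fibre to completion
    std::cout << "main: done\n";
}
```

A real M:N runtime wraps this mechanism in a scheduler that multiplexes many such contexts over several kernel threads; the thesis evaluates different designs for exactly that scheduler, its I/O multiplexing, and its synchronization primitives.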
Recommended publications
  • Integration of CUDA Processing Within the C++ Library for Parallelism and Concurrency (HPX)
    Integration of CUDA Processing within the C++ Library for Parallelism and Concurrency (HPX). Patrick Diehl, Madhavan Seshadri, Thomas Heller, Hartmut Kaiser.
    Abstract: Experience shows that on today's high performance systems the utilization of different acceleration cards in conjunction with a high utilization of all other parts of the system is difficult. Future architectures, like exascale clusters, are expected to aggravate this issue as the number of cores is expected to increase and memory hierarchies are expected to become deeper. One big challenge for distributed applications is to guarantee high utilization of all available resources, including local or remote acceleration cards on a cluster, while fully using all the available CPU resources and integrating the GPU work into the overall programming model. For the integration of CUDA code we extended HPX, a general-purpose C++ runtime system for parallel and distributed applications of any scale, and enabled asynchronous data transfers from and to the GPU device and the asynchronous invocation of CUDA kernels on this data. Both operations are well integrated into the general programming model of HPX, which makes it possible to seamlessly overlap any GPU operation with work on the main cores (see the sketch below). Any user-defined CUDA kernel can be launched on any (local or remote) GPU device available to the distributed application. We present asynchronous implementations of the data transfers and kernel launches for CUDA code as part of an HPX asynchronous execution graph. Using this approach we can combine all remotely and locally available acceleration cards on a cluster to utilize their full performance capabilities.
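The key idea above is overlapping asynchronous GPU work with work on the main cores. HPX's actual CUDA integration has its own futures-based API; as a rough, hedged analogy in plain standard C++ (no HPX or CUDA calls, all names are illustrative only), the same overlap pattern looks like this:

```cpp
#include <future>
#include <iostream>
#include <numeric>
#include <vector>

// Stand-in for an asynchronous "device" operation. In HPX/CUDA the returned
// future would be tied to a real data transfer and kernel launch.
std::future<double> async_device_sum(std::vector<double> data) {
    return std::async(std::launch::async, [data = std::move(data)] {
        return std::accumulate(data.begin(), data.end(), 0.0);
    });
}

int main() {
    std::vector<double> a(1'000'000, 1.0);
    auto device_like = async_device_sum(a);   // "kernel" runs concurrently

    // Overlap: the main core keeps working while the asynchronous task proceeds.
    double host_work = 0.0;
    for (int i = 0; i < 1000; ++i) host_work += i * 0.5;

    std::cout << "device-style result: " << device_like.get()
              << ", host result: " << host_work << "\n";
}
```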
  • Events, Co-Routines, Continuations and Threads: OS (and Application) Execution Models
    System building: general-purpose systems need to deal with many activities that are potentially overlapping and may be interdependent, and with activities that depend on external phenomena, which may require waiting for completion (e.g. a disk read) or reacting to external triggers (e.g. interrupts). A systematic approach to system structuring is needed. Construction approaches: events, co-routines, threads, continuations. (Kevin Elphinstone)
    Events: external entities generate (post) events, e.g. keyboard presses, mouse clicks, system calls. An event loop waits for events and calls an appropriate event handler; this is a common paradigm for GUIs. An event handler is a function that runs until completion and returns to the event loop.
    Event model: the event model only requires a single stack, because all event handlers must return to the event loop: no blocking, no yielding, no preemption of handlers, and handlers are generally short-lived. [Slide figure: one CPU (PC, SP, registers) running the event loop and handlers 1-3 over a single data/stack region.] Within a handler there are no concurrency issues, e.g. in int a; /* global */ int func() { a = 1; if (a == 1) { a = 2; } return a; } the handler runs sequentially from start to finish. (A runnable sketch of such an event loop follows below.)
    Event-based kernel on a CPU with protection: with the event loop and handlers in kernel-only memory, the open questions are scheduling and how to support multiple processes. The answer is to keep user-level state in per-process PCBs: a trap enters a dispatcher, the kernel starts on a fresh stack on each trap, a timer event drives the scheduler, and there are no interrupts and no blocking in kernel mode. [Slide figures: kernel-only memory with event loop and handlers, user memory with code, data and stacks, and PCBs A, B, C with the current thread's PCB marked.]
    Co-routines: originally described in: Melvin E.
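A minimal, runnable sketch of the event-handler model described above (illustrative only, not taken from the slides): posted events queue up plain handler functions, and the loop runs each one to completion on its single stack:

```cpp
#include <functional>
#include <iostream>
#include <queue>

// A trivial event loop: handlers run to completion and never block or yield.
int main() {
    std::queue<std::function<void()>> events;

    // "External entities" post events (here simply enqueued up front).
    events.push([] { std::cout << "key press handled\n"; });
    events.push([] { std::cout << "mouse click handled\n"; });
    events.push([] { std::cout << "system call handled\n"; });

    // The event loop: take the next event, dispatch its handler, repeat.
    while (!events.empty()) {
        auto handler = std::move(events.front());
        events.pop();
        handler();   // runs until completion, then control returns here
    }
}
```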
  • Designing an Ultra Low-Overhead Multithreading Runtime for Nim
    Designing an ultra low-overhead multithreading runtime for Nim. Mamy Ratsimbazafy, Weave, https://github.com/mratsim/weave. The speaker is a blockchain/Ethereum 2 developer (in Nim) by day and a deep learning and numerical computing developer (in Nim) and data scientist (in Python) by night (GitHub: mratsim, Twitter: m_ratsim).
    Where did this talk come from? Three years ago the speaker started writing a tensor library in Nim, when there were two threading APIs available: OpenMP and a simple threadpool. One year ago the internals were completely refactored.
    Agenda: understanding the design space; hardware and software multithreading, definitions and use-cases; parallel APIs; sources of overhead and runtime design; a minimum viable runtime plan in a weekend.
    Understanding the design space: concurrency vs parallelism, latency vs throughput, cooperative vs preemptive, IO vs CPU. Parallelism is not concurrency.
    Kernel threading models: 1:1 threading maps 1 application thread to 1 hardware thread; N:1 threading maps N application threads to 1 hardware thread; M:N threading maps M application threads to N hardware threads. The same distinctions can be made at the level of a multithreaded language or multithreading runtime. The problem: how to schedule M tasks on N hardware threads? (A toy sketch of one such mapping follows below.)
    Latency vs throughput: do we want to do all the work in a minimal amount of time (numerical computing, machine learning, ...), or do we want to be fair (client-server, video decoding, ...)?
    Cooperative vs preemptive: cooperative multithreading:
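As a toy illustration of an M:N mapping (not Weave's design, which uses per-worker work-stealing deques rather than one shared queue), the following sketch drains M tasks from a single shared run queue with N worker threads:

```cpp
#include <functional>
#include <iostream>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Toy M:N mapping: M tasks are drained from one shared queue by N workers.
// Output lines from different workers may interleave.
int main() {
    const int N = 4;            // "hardware" threads (workers)
    const int M = 32;           // application tasks
    std::queue<std::function<void()>> tasks;
    std::mutex m;

    for (int i = 0; i < M; ++i)
        tasks.push([i] { std::cout << "task " << i << " ran\n"; });

    std::vector<std::thread> workers;
    for (int w = 0; w < N; ++w) {
        workers.emplace_back([&] {
            for (;;) {
                std::function<void()> t;
                {
                    std::lock_guard<std::mutex> lk(m);
                    if (tasks.empty()) return;    // no more work for this worker
                    t = std::move(tasks.front());
                    tasks.pop();
                }
                t();                              // run the task outside the lock
            }
        });
    }
    for (auto& w : workers) w.join();
}
```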
  • Bench - Benchmarking the State-of-the-Art Task Execution Frameworks of Many-Task Computing
    MATRIX: Bench - Benchmarking the state-of-the-art Task Execution Frameworks of Many-Task Computing. Thomas Dubucq, Tony Forlini, Virgile Landeiro Dos Reis, and Isabelle Santos; Illinois Institute of Technology, Chicago, IL, USA; {tdubucq, tforlini, vlandeir, isantos1}@hawk.iit.edu.
    Abstract: Technology trends indicate that exascale systems will have billion-way parallelism, and each node will have about three orders of magnitude more intra-node parallelism than today's peta-scale systems. The majority of current runtime systems focus a great deal of effort on optimizing the inter-node parallelism by maximizing the bandwidth and minimizing the latency of the use of interconnection networks and storage, but suffer from the lack of scalable solutions to expose the intra-node parallelism. Many-task computing (MTC) is a distributed fine-grained paradigm that aims to address the challenges of managing parallelism and locality of exascale systems. MTC applications are typically structured as directed acyclic graphs of loosely coupled short tasks with explicit input/output data dependencies (see the small dependency sketch below).
    From the introduction: ... Stanford University. Finally, HPX is a general-purpose C++ runtime system for parallel and distributed applications of any scale developed by Louisiana State University, and STAPL is a framework for developing parallel programs from Texas A&M. MATRIX is a many-task computing job scheduling system [3]. There are many resource managing systems aimed towards data-intensive applications. Furthermore, distributed task scheduling in many-task computing is a problem that has been considered by many research teams. In particular, Charm++ [4], Legion [5], Swift [6], [10], Spark [1][2], HPX [12], STAPL [13] and MATRIX [11] offer solutions to this problem and have
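The "directed acyclic graph of short tasks with explicit input/output dependencies" structure can be pictured with a tiny standard C++ sketch (illustrative only, unrelated to MATRIX or the frameworks it benchmarks): task C consumes the outputs of tasks A and B.

```cpp
#include <future>
#include <iostream>

// A tiny task DAG: C depends on the outputs of A and B.
int main() {
    auto a = std::async(std::launch::async, [] { return 2; });      // task A
    auto b = std::async(std::launch::async, [] { return 3; });      // task B

    // Task C: its explicit input dependencies are the futures of A and B.
    auto c = std::async(std::launch::async,
                        [&a, &b] { return a.get() + b.get(); });    // task C

    std::cout << "C = " << c.get() << "\n";   // prints "C = 5"
}
```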
  • HPX – a Task Based Programming Model in a Global Address Space
    HPX – A Task Based Programming Model in a Global Address Space. Hartmut Kaiser (1), Thomas Heller (2), Bryce Adelstein-Lelbach (1), Adrian Serio (1), Dietmar Fey (2). (1) Center for Computation and Technology, Louisiana State University, Louisiana, U.S.A. (2) Computer Science 3, Computer Architectures, Friedrich-Alexander-University, Erlangen, Germany.
    Abstract: The significant increase in complexity of Exascale platforms due to energy-constrained, billion-way parallelism, with major changes to processor and memory architecture, requires new energy-efficient and resilient programming techniques that are portable across multiple future generations of machines. We believe that guaranteeing adequate scalability, programmability, performance portability, resilience, and energy efficiency requires a fundamentally new approach, combined with a transition path for existing scientific applications, to fully explore the rewards of today's and tomorrow's systems. We present HPX – a parallel runtime system which extends the C++11/14 standard to facilitate distributed operations, enable fine-grained constraint-based parallelism, and support run-
    1. Introduction: Today's programming models, languages, and related technologies that have sustained High Performance Computing (HPC) application software development for the past decade are facing major problems when it comes to programmability and performance portability of future systems. The significant increase in complexity of new platforms due to energy constraints, increasing parallelism and major changes to processor and memory architecture, requires advanced programming techniques that are portable across multiple future generations of machines [1]. A fundamentally new approach is required to address these challenges.
  • An Ideal Match?
    An ideal match? Investigating how well-suited Concurrent ML is to implementing Belief Propagation for Stereo Matching. James Cooper, 24 November 2020.
    Outline: 1 Stereo Matching (Generic Stereo Matching; Belief Propagation); 2 Concurrent ML (Overview; Investigation of Alternatives; Comparative Benchmarks); 3 Concurrent ML and Belief Propagation; 4 Conclusion (Recapitulation; Prognostication); 5 References.
    Stereo Matching: generally, SM is finding correspondences between stereo images. The images are of the same scene, captured simultaneously. Correspondences ('disparity') are used to estimate depth. SM is an ill-posed problem: it can only make a best guess, and it is impossible to perform 'perfectly' in the general case.
    Stereo Matching example. Figure 1 ((a) left camera's image, (b) right camera's image): the popular 'Tsukuba' example stereo matching images, so called because they were created by researchers at the University of Tsukuba, Japan. They are probably the most widely-used benchmark images in stereo matching. Figure 2 ((a) ground truth disparity map, (b) disparity map generated using a simple Belief Propagation Stereo Matching implementation): the ground truth disparity map for the Tsukuba images, and an example of a possible real disparity map produced by using Belief Propagation Stereo Matching. The ground truth represents what would be expected if stereo matching could be carried out 'perfectly'.
  • Tcl and Java Performance
    Tcl and Java Performance. H. John Reekie, Christopher Hylands, and Edward A. Lee, University of California at Berkeley. http://ptolemy.eecs.berkeley.edu/~cxh/java/tclblend/scriptperf/scriptperf.html
    Abstract: Combining scripting languages such as Tcl with lower-level programming languages such as Java offers new opportunities for flexible and rapid software development. In this paper, we benchmark various combinations of Tcl and Java against the two languages alone. We also provide some comparisons with JavaScript. Performance can vary by well over two orders of magnitude. We also uncovered some interesting threading issues that affect performance on the Solaris platform. "There are lies, damn lies and statistics." This paper is a work in progress; we used the information here to give our group some generalizations on the performance tradeoffs between various scripting languages. Updating the timing results to include JDK 1.2 with a Just In Time (JIT) compiler would be useful.
    Introduction: There is a growing trend towards integration of multiple languages through scripting. In a famously controversial white paper (Ousterhout 97), John Ousterhout, now of Scriptics Corporation, argues that scripting - the use of a high-level, untyped, interpreted language to "glue" together components written in a lower-level language - provides greater reuse benefits than other reuse technologies. Although traditionally a language such as C or C++ has been the lower-level language, more recent efforts have focused on using Java. Recently, Sun Microsystems laboratories announced two products aimed at fulfilling this goal with the Tcl and Java programming languages.
  • High-Level and Efficient Stream Parallelism on Multi-Core Systems
    High-Level and Efficient Stream Parallelism on Multi-core Systems with SPar for Data Compression Applications. Dalvan Griebler (1), Renato B. Hoffmann (1), Junior Loff (1), Marco Danelutto (2), Luiz Gustavo Fernandes (1). (1) Faculty of Informatics (FACIN), Pontifical Catholic University of Rio Grande do Sul (PUCRS), GMAP Research Group, Porto Alegre, Brazil. (2) Department of Computer Science, University of Pisa (UNIPI), Pisa, Italy. WSCAD 2017 - XVIII Simpósio em Sistemas Computacionais de Alto Desempenho.
    Abstract: The stream processing domain is present in several real-world applications that are running on multi-core systems. In this paper, we focus on data compression applications, which are an important sub-set of this domain. Our main goal is to assess the programmability and efficiency of the domain-specific language SPar. It was specially designed for expressing stream parallelism and it promises higher-level parallelism abstractions without significant performance losses. Therefore, we parallelized the Lzip and Bzip2 compressors with SPar and compared them with state-of-the-art frameworks. The results revealed that SPar is able to efficiently exploit stream parallelism as well as provide suitable abstractions with less code intrusion and code re-factoring. (A toy pipeline sketch follows below.)
    1. Introduction: Over the past decade, vendors realized that increasing clock frequency to gain performance was no longer possible. Companies were then forced to slow the clock frequency and start adding multiple processors to their chips. Since then, software has had to rely on parallel programming to increase performance [Sutter 2005]. However, exploiting parallelism in such multi-core architectures is a challenging task that is still too low-level and complex for application programmers.
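Stream parallelism of the kind SPar targets can be pictured with a toy two-stage pipeline (a hand-written sketch, not SPar code; SPar generates such structures from source-code annotations): a producer stage feeds blocks through a queue to a "compression" stage.

```cpp
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <optional>
#include <queue>
#include <string>
#include <thread>

// Toy two-stage stream pipeline: stage 1 produces blocks, stage 2 "compresses"
// them (here it only reports their size), connected by a queue.
int main() {
    std::queue<std::optional<std::string>> q;   // nullopt signals end-of-stream
    std::mutex m;
    std::condition_variable cv;

    std::thread producer([&] {
        for (int i = 0; i < 5; ++i) {
            std::string block(1024, static_cast<char>('a' + i));  // "read" a block
            { std::lock_guard<std::mutex> lk(m); q.push(std::move(block)); }
            cv.notify_one();
        }
        { std::lock_guard<std::mutex> lk(m); q.push(std::nullopt); }
        cv.notify_one();
    });

    std::thread consumer([&] {
        for (;;) {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [&] { return !q.empty(); });
            auto item = std::move(q.front());
            q.pop();
            lk.unlock();
            if (!item) break;                    // end of stream
            std::cout << "compressed block of " << item->size() << " bytes\n";
        }
    });

    producer.join();
    consumer.join();
}
```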
  • Exascale Computing Project -- Software
    Exascale Computing Project -- Software. Paul Messina, ECP Director; Stephen Lee, ECP Deputy Director. ASCAC Meeting, Arlington, VA, Crystal City Marriott, April 19, 2017. www.ExascaleProject.org
    ECP scope and goals: develop applications to tackle a broad spectrum of mission critical problems of unprecedented complexity; partner with vendors to develop computer architectures that support exascale applications; support national security; develop a software stack that is both exascale-capable and usable on industrial and academic scale systems, in collaboration with vendors; train a next-generation workforce of computational scientists, engineers, and computer scientists; and contribute to the economic competitiveness of the nation.
    ECP has formulated a holistic approach that uses co-design and integration to achieve capable exascale: application development (science and mission applications), software technology (a scalable and productive software stack), hardware technology (hardware technology elements), and exascale systems (integrated exascale supercomputers). [Slide diagram of the software stack: correctness, visualization, data analysis, workflows, resilience; applications and co-design; programming models, development environment, and runtimes; math libraries and frameworks; tools; system software (resource management, threading, scheduling, monitoring, and control); data management, I/O and file system, memory and burst buffer; node OS and runtimes; hardware interface.] ECP's work encompasses applications, system software, hardware technologies and architectures, and workforce.
  • hpxMP, an OpenMP Runtime Implemented Using HPX
    hpxMP, An Implementation of OpenMP Using HPX. By Tianyi Zhang. Project report submitted in partial fulfillment of the requirements for the degree of Master of Science in Computer Science in the School of Engineering at Louisiana State University, 2019. Baton Rouge, LA, USA.
    Acknowledgments: I would like to express my appreciation to my advisor Dr. Hartmut Kaiser, who has been a patient, inspiring and knowledgeable mentor for me. I joined the STE||AR group 2 years ago, with limited knowledge of C++ and HPC. I would like to thank Dr. Kaiser for bringing me into the world of high-performance computing. During the past 2 years, he helped me improve my skills in C++, encouraged my research project in hpxMP, and allowed me to explore the world. When I have challenges in my research, he always points things to the right spot and I can always learn a lot by solving those issues. His advice on my research and career is always valuable. I would also like to convey my appreciation to the committee members for serving on my committee. Thank you for making my defense an enjoyable moment, and I appreciate your comments and suggestions. I would also like to thank my colleagues, who are passionate about what they are doing and always willing to help other people. I can always be inspired by their intellectual conversations and get hands-on help face to face, which is especially important to my study. A special thanks to my family for their support. They always encourage me and stand at my side when I need to make choices or face difficulties.
  • Thread Scheduling in Multi-Core Operating Systems (Redha Gouicem)
    Thread Scheduling in Multi-core Operating Systems. Redha Gouicem. Ph.D. thesis in Computer Science, Sorbonne Université, 2020. HAL Id: tel-02977242, https://hal.archives-ouvertes.fr/tel-02977242, submitted on 24 Oct 2020.
    To cite this version: Redha Gouicem. Thread Scheduling in Multi-core Operating Systems. Computer Science [cs]. Sorbonne Université, 2020. English. tel-02977242.
    Thread Scheduling in Multi-core Operating Systems: How to Understand, Improve and Fix your Scheduler. Redha Gouicem, Sorbonne Université, Laboratoire d'Informatique de Paris 6, Inria Whisper Team. Ph.D. defense: 23 October 2020, Paris, France. Jury members: Pascal Felber, Full Professor, Université de Neuchâtel (reviewer); Vivien Quéma, Full Professor, Grenoble INP (ENSIMAG) (reviewer); Rachid Guerraoui, Full Professor, École Polytechnique Fédérale de Lausanne (examiner); Karine Heydemann, Associate Professor, Sorbonne Université (examiner); Etienne Rivière, Full Professor, University of Louvain (examiner); Gilles Muller, Senior Research Scientist, Inria (advisor); Julien Sopena, Associate Professor, Sorbonne Université (advisor).
    Abstract: In this thesis, we address the problem of schedulers for multi-core architectures from several perspectives: design (simplicity and correctness), performance improvement and the development of application-specific schedulers.
  • The Future of PGAS Programming from a Chapel Perspective
    The Future of PGAS Programming from a Chapel Perspective. Brad Chamberlain, PGAS 2015, Washington DC, September 17, 2015. (Or: Five Things You Should Do to Create a Future-Proof Exascale Language.)
    Safe Harbor Statement: This presentation may contain forward-looking statements that are based on our current expectations. Forward-looking statements may include statements about our financial guidance and expected operating results, our opportunities and future potential, our product development and new product introduction plans, our ability to expand and penetrate our addressable markets and other statements that are not historical facts. These statements are only predictions and actual results may materially vary from those projected. Please refer to Cray's documents filed with the SEC from time to time concerning factors that could affect the Company and these forward-looking statements.
    Apologies in advance: this is a (mostly) brand-new talk, and my new talks tend to be overly text-heavy (sorry).
    PGAS Programming in a Nutshell. Global address space: permit parallel tasks to access variables by naming them, regardless of whether they are local or remote; the compiler / library / runtime will take care of communication. It is OK to access i, j, and k wherever they live: k = i + j; [Slide figure: i, j, and k placed across images / threads / locales / places 0-4.]
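In Chapel the assignment k = i + j works regardless of where the variables live because the compiler and runtime generate the communication. As a hedged contrast showing the kind of communication a PGAS system hides, the sketch below spells out the remote read by hand with MPI one-sided operations (illustrative only, not Chapel or Cray code; compile with an MPI C++ compiler and run with at least two ranks):

```cpp
#include <mpi.h>
#include <iostream>

// Contrast to PGAS: the "remote read" of j is written out explicitly.
// Rank 0 owns i, rank 1 owns j; rank 0 computes k = i + j.
int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (size < 2) MPI_Abort(MPI_COMM_WORLD, 1);   // needs at least two ranks

    int local = (rank == 0) ? 2 /* i */ : 3 /* j */;

    // Expose each rank's variable through a window: a crude "global address space".
    MPI_Win win;
    MPI_Win_create(&local, sizeof(int), sizeof(int), MPI_INFO_NULL,
                   MPI_COMM_WORLD, &win);

    int j = 0;
    MPI_Win_fence(0, win);                        // open access epoch (all ranks)
    if (rank == 0)
        MPI_Get(&j, 1, MPI_INT, /*target rank*/ 1, /*displacement*/ 0,
                1, MPI_INT, win);                 // read j from rank 1
    MPI_Win_fence(0, win);                        // close epoch: the read is complete
    if (rank == 0)
        std::cout << "k = " << local + j << "\n"; // k = i + j = 5

    MPI_Win_free(&win);
    MPI_Finalize();
}
```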