Message Passing and Bulk Transport on Heterogeneous Multiprocessors

Total Pages: 16

File Type: PDF, Size: 1020 KB

Message Passing and Bulk Transport on Heterogeneous Multiprocessors
Master’s Thesis Nr. 118, Systems Group, Department of Computer Science, ETH Zurich
by Reto Achermann
Supervised by Prof. Timothy Roscoe and Dr. Kornilios Kourtis
October 17, 2014

Abstract

The decade of symmetric multiprocessors is about to end. Due to physical limitations, individual cores cannot be made any faster, and simply adding more cores no longer works either (dark silicon). Operating systems face the next big paradigm shift: systems get faster by exploiting specialized hardware, leading to the era of asymmetric processors and heterogeneous systems. An ever-growing number and diversity of cores yields new challenges in scheduling and resource management, and multiple physical address spaces within a single machine make it appear as a cluster-like system.

Today’s commodity operating systems are not well designed to deal with heterogeneity and asymmetric processors. Many attempts have been made to tackle the challenges of heterogeneous hardware, most of which treat available co-processors as devices or as independent execution environments rather than as equivalent processors. True support for a multi-architecture system is therefore rarely seen, despite the fact that such hardware already exists; examples include the OMAP44xx SoC and Intel’s Xeon Phi co-processor.

In this thesis we elaborate on the challenges introduced by heterogeneous system architectures. We ported the Barrelfish research operating system to the Xeon Phi execution environment and explored the impact on performance and system design imposed by the emerging hardware characteristics such as different instruction set architectures, multiple physical address spaces and asymmetric cores. Based on our proposed software framework, we demonstrate parallel, heterogeneous execution of an OpenMP program using our implementation of a message-passing-based OpenMP library, and we show why more threads do not necessarily imply better performance.

Acknowledgments

I want to thank Prof. Timothy Roscoe for giving me the opportunity to work with the new Xeon Phi hardware and for his inspirational thoughts and critical questions during our discussions; Dr. Kornilios Kourtis for his feedback on the benchmarks; and Stefan Kaestle for his ideas about possible applications. Further, I’d like to thank Raphael Fuchs for the inspirational discussions about Barrelfish.

Contents

1 Introduction and Motivation
2 Related Work
  2.1 Intel Manycore Platform Software Stack
  2.2 Heterogeneous Systems
    2.2.1 Heterogeneous Networked Environments
    2.2.2 The CPU plus Accelerator Model
    2.2.3 The System-on-a-Chip Model
    2.2.4 The Heterogeneous Model
  2.3 Barrelfish
  2.4 Concluding Remarks
3 Barrelfish in a Nutshell
  3.1 Architectural Overview
    3.1.1 CPU Driver
  3.2 Capabilities
  3.3 Major Domains
    3.3.1 Kernel Monitor
    3.3.2 System Knowledge Base (SKB)
    3.3.3 Memory Service
    3.3.4 Device Drivers
  3.4 Message Passing
    3.4.1 Exporting a Service
    3.4.2 Binding to a Service
  3.5 Hardware Register Access
  3.6 Hake: Barrelfish’s Build System
    3.6.1 Hakefiles
4 The Intel Xeon Phi Co-Processor
  4.1 System Overview
  4.2 Xeon Phi Co-Processor
    4.2.1 Hardware Specification
    4.2.2 In Comparison with the Xeon E5 v2
    4.2.3 Comparison to General Purpose GPUs
  4.3 Physical Address Space Layout
    4.3.1 Host Side View
    4.3.2 Xeon Phi Side View
    4.3.3 Multiple Physical Address Spaces
    4.3.4 Accessing Resources of another Processor
  4.4 The K1OM Architecture
  4.5 Booting the Xeon Phi
    4.5.1 Operating Systems
    4.5.2 Bootstrap
    4.5.3 Boot Modes
5 Memory Access over PCI Express
  5.1 Memory Hierarchy
    5.1.1 Memory Access with Co-Processors
  5.2 Evaluating Memory Access Performance
  5.3 Memory Access Latency
    5.3.1 Benchmark Description
    5.3.2 Benchmark Results
    5.3.3 Implications
  5.4 Memory Throughput
    5.4.1 Benchmark Description
    5.4.2 Benchmark Baseline: Local Transfers
    5.4.3 Host to Xeon Phi Transfers
    5.4.4 Xeon Phi to Host Transfers
    5.4.5 Between Xeon Phi
    5.4.6 Implications
  5.5 Message Passing over Shared Memory
    5.5.1 Benchmark Description
    5.5.2 Benchmark Baseline: Local Benchmark
    5.5.3 Message Passing over PCI Express
    5.5.4 Implications
  5.6 A Word on CPU Performance
6 Barrelfish on the Xeon Phi
  6.1 System Overview
    6.1.1 Xeon Phi Driver
    6.1.2 Xeon Phi Manager
    6.1.3 Service and Domain Identification
  6.2 Differences and Similarities to the MPSS
  6.3 Modifications to Barrelfish
    6.3.1 Boot-Loader
    6.3.2 CPU Driver and Monitor
    6.3.3 Kaluga and SKB
  6.4 Creating the Xeon Phi Bootimage
    6.4.1 Additional Hake Targets
    6.4.2 Boot-Image Anatomy
    6.4.3 Building Process
  6.5 The Boot Process Explained
    6.5.1 Boot Dependencies
    6.5.2 Host Side Driver Startup
    6.5.3 Chain of Boot Events
    6.5.4 Weever: The Xeon Phi Boot-Loader
  6.6 Capabilities Revisited
    6.6.1 The Problem of Multiple Address Spaces
    6.6.2 Capabilities: Accessing Remote Resources
    6.6.3 Capability Translation: An Implementation
  6.7 Flounder over PCI Express
    6.7.1 Flounder Bootstrap Steps
    6.7.2 Message Stub Interface Extensions
    6.7.3 Usage
    6.7.4 Limitations
  6.8 Xeon Phi Service Interfaces
    6.8.1 Xeon Phi Driver Service
    6.8.2 Name Services
  6.9 DMA Service
    6.9.1 DMA Library
    6.9.2 DMA Manager
    6.9.3 DMA Subsystem Programming Model
  6.10 Virtual Devices
    6.10.1 VirtIO
    6.10.2 Barrelfish Bulk Transport
    6.10.3 Virtual Devices: A Conclusion
7 Applications
  7.1 Work Offloading
    7.1.1 Requirements for Offloading
  7.2 OpenMP
    7.2.1 Computation Model
    7.2.2 BOMP: Barrelfish OpenMP
    7.2.3 XOMP: OpenMP for Exclusive Address Spaces
    7.2.4 Library Initialization
    7.2.5 Library Scaling Characteristics
  7.3 Use Case Example: Matrix Multiplication
    7.3.1 General Benchmark Description
    7.3.2 Baseline: Local Benchmarks
    7.3.3 Effect of Data Replication
    7.3.4 Matrix Multiply using Heterogeneous OpenMP
  7.4 Conclusion
8 Future Work
  8.1 Towards One System
  8.2 Xeon Phi as a Collection of Cores
  8.3 Multiple Address Spaces
  8.4 Barrelfish: Explore Scaling Behavior
  8.5 Networking to the Xeon Phi and Host VFS
  8.6 Extending Flounder
  8.7 DMA Library
  8.8 OpenMP
  8.9 Bulk Transport over PCI Express
  8.10 Xeon Phi Hardware Features
  8.11 VirtIO
Appendices
A Memory Access Benchmarks
  A.1 Xeon Phi Caches
  A.2 Memory Latency
    A.2.1 Experiment Description
    A.2.2 Experiment Parameters
  A.3 Memory Throughput …
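To make the workload concrete: the thesis evaluates its message-passing-based OpenMP runtime with a matrix multiplication use case (Section 7.3). The following is a minimal, generic OpenMP C sketch of such a kernel, not code from the thesis; with the thesis’s XOMP library, the iterations of the parallel loop would be distributed to workers on the host and the Xeon Phi via message passing rather than shared-memory threads.

    /* Minimal OpenMP matrix multiply: C = A * B, all n x n, row-major.
     * Illustrative only; an XOMP-style runtime would fan the parallel
     * loop out to host and co-processor workers. */
    void matmul(const double *a, const double *b, double *c, int n)
    {
        #pragma omp parallel for
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                double sum = 0.0;
                for (int k = 0; k < n; k++)
                    sum += a[i * n + k] * b[k * n + j];
                c[i * n + j] = sum;
            }
        }
    }

Because workers on the co-processor first need the relevant parts of A and B copied or replicated across PCI Express, adding workers also adds transfer and initialization cost, which is one reason the thesis finds that more threads do not necessarily improve performance.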
Recommended publications
  • Hardware Configuration with Dynamically-Queried Formal Models
    Master’s Thesis Nr. 180, Systems Group, Department of Computer Science, ETH Zurich. Hardware Configuration with Dynamically-Queried Formal Models, by Daniel Schwyn. Supervised by Reto Achermann, Dr. David Cock, and Prof. Dr. Timothy Roscoe. April 2017 – October 2017.

    Abstract: Hardware is getting increasingly complex and heterogeneous. With different components having different views of the system, the traditional assumption of unique physical addresses has become an illusion. To adapt to such hardware, an operating system (OS) needs to understand the complex address translation chains and be able to handle the presence of multiple address spaces. This thesis takes a recently proposed model that formally captures these aspects and applies it to hardware configuration in the Barrelfish OS. To this end, I present Sockeye, a domain-specific language that uses the model to describe hardware. I then show that code relying on knowledge about the address spaces in a system can be statically generated from these specifications. Furthermore, the model is successfully applied to device management, showing that it can also be used to configure hardware at runtime. The implementation presented here does not rely on any platform-specific code, and it reduced the amount of such code in Barrelfish’s device manager by over 30%. Applying the model to further hardware configuration tasks is expected to have similar effects.

    Acknowledgements: First of all, I want to thank my advisers Timothy Roscoe, David Cock and Reto Achermann for their guidance and support. Their feedback and critical questions were a valuable contribution to this thesis. I’d also like to thank the rest of the Barrelfish team and other members of the Systems Group at ETH for the interesting discussions during meetings and coffee breaks.
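    As an illustration of the underlying idea (not Sockeye itself, and not code from this thesis): the formal model treats the machine as a set of address spaces whose regions either accept an address locally or translate it into another address space, and resolving an address means following that chain. A hypothetical C sketch of such a lookup:

        #include <stdbool.h>
        #include <stdint.h>
        #include <stddef.h>

        /* Hypothetical model: each address space has regions that either
         * accept an address locally or translate it into another space. */
        struct region {
            uint64_t base, size;    /* input range in this address space */
            bool     accept;        /* true: the address terminates here */
            int      dest_space;    /* if !accept: target address space  */
            uint64_t dest_base;     /* if !accept: base in target space  */
        };

        struct addr_space {
            const struct region *regions;
            size_t nregions;
        };

        /* Follow translation steps until an address is accepted somewhere. */
        bool resolve(const struct addr_space *spaces, int space, uint64_t addr,
                     int *out_space, uint64_t *out_addr)
        {
            for (int hops = 0; hops < 16; hops++) {   /* guard against cycles */
                const struct addr_space *as = &spaces[space];
                for (size_t i = 0; i < as->nregions; i++) {
                    const struct region *r = &as->regions[i];
                    if (addr < r->base || addr >= r->base + r->size)
                        continue;
                    if (r->accept) {
                        *out_space = space;
                        *out_addr  = addr;
                        return true;
                    }
                    addr  = r->dest_base + (addr - r->base);
                    space = r->dest_space;
                    goto next_hop;
                }
                return false;                         /* address not mapped */
        next_hop: ;
            }
            return false;
        }

    Sockeye specifications capture this kind of accept/translate information declaratively, so code that depends on it can be generated instead of being hand-written for each platform.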
  • Message Passing for Programming Languages and Operating Systems
    Master’s Thesis Nr. 141, Systems Group, Department of Computer Science, ETH Zurich. Message Passing for Programming Languages and Operating Systems, by Martynas Pumputis. Supervised by Prof. Dr. Timothy Roscoe, Dr. Antonios Kornilios Kourtis, and Dr. David Cock. October 7, 2015.

    Abstract: Message passing as a means of communication has been gaining popularity within the domains of concurrent programming languages and operating systems. In this thesis, we discuss how message-passing languages can be applied in the context of operating systems which are heavily based on this form of communication. In particular, we port the Go programming language to the Barrelfish OS and integrate the Go communication channels with the messaging infrastructure of Barrelfish. We show that the outcome of the porting and the integration allows us to implement OS services that can take advantage of the easy-to-use concurrency model of Go. Our evaluation based on LevelDB benchmarks shows performance comparable to the Linux port. The overhead of the messaging integration causes poor performance when compared to the native messaging of Barrelfish, but it exposes an easier-to-use interface, as shown by the example code.

    Acknowledgments: First of all, I would like to thank Timothy Roscoe, Antonios Kornilios Kourtis and David Cock for giving me the opportunity to work on the Barrelfish OS project, and for their supervision, inspirational thoughts and critique. Next, I would like to thank the Barrelfish team for the discussions and the help. In addition, I would like to thank Sebastian Wicki for the conversations we had during the entire period of my Master’s studies.
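    The integration described above essentially layers a blocking, channel-style interface over an asynchronous messaging layer. A hypothetical C sketch of that pattern is below; the names and the POSIX-threads synchronization are illustrative only and are neither Go’s nor Barrelfish’s actual APIs.

        #include <pthread.h>

        /* Hypothetical channel: blocking send/recv over a fixed-size buffer,
         * the kind of interface a Go channel exposes on top of an
         * asynchronous messaging layer. */
        #define CHAN_CAP 16

        struct chan {
            void           *buf[CHAN_CAP];
            int             head, count;
            pthread_mutex_t mu;
            pthread_cond_t  not_empty, not_full;
        };

        void chan_init(struct chan *c)
        {
            c->head = c->count = 0;
            pthread_mutex_init(&c->mu, NULL);
            pthread_cond_init(&c->not_empty, NULL);
            pthread_cond_init(&c->not_full, NULL);
        }

        void chan_send(struct chan *c, void *msg)
        {
            pthread_mutex_lock(&c->mu);
            while (c->count == CHAN_CAP)              /* block until space */
                pthread_cond_wait(&c->not_full, &c->mu);
            c->buf[(c->head + c->count) % CHAN_CAP] = msg;
            c->count++;
            pthread_cond_signal(&c->not_empty);
            pthread_mutex_unlock(&c->mu);
        }

        void *chan_recv(struct chan *c)
        {
            pthread_mutex_lock(&c->mu);
            while (c->count == 0)                     /* block until data */
                pthread_cond_wait(&c->not_empty, &c->mu);
            void *msg = c->buf[c->head];
            c->head = (c->head + 1) % CHAN_CAP;
            c->count--;
            pthread_cond_signal(&c->not_full);
            pthread_mutex_unlock(&c->mu);
            return msg;
        }

    The convenience of such a blocking interface is what the thesis compares against Barrelfish’s native asynchronous messaging; the extra layer of buffering and synchronization illustrates where overhead of this kind of integration can come from.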
  • The Barrelfish Multikernel: An Interview with Timothy Roscoe
    The Barrelfish multikernel: an interview with Timothy Roscoe, by Rik Farrow (;login:). Timothy Roscoe is part of the ETH Zürich Computer Science Department’s Systems Group; his main research areas are operating systems, distributed systems, and networking, with some critical theory on the side. Rik Farrow is the Editor of ;login:.

    Increasing CPU performance with faster clock speeds and ever more complex hardware for pipelining and memory access has hit the brick walls of power and bandwidth. Multicore CPUs provide the way forward but also present obstacles to using existing operating system designs as they scale upwards. Barrelfish represents an experimental operating system design where early versions run faster than Linux on the same hardware, with a design that should scale well to systems with many cores and even different CPU architectures.

    Barrelfish explores the design of a multikernel operating system, one designed to run non-shared copies of key kernel data structures. Popular current operating systems, such as Windows and Linux, use a single, shared operating system image even when running on multiple-core CPUs as well as on motherboard designs with multiple CPUs. These monolithic kernels rely on cache coherency to protect shared data. Multikernels each have their own copy of key data structures and use message passing to maintain the correctness of each copy. In their SOSP 2009 paper [1], Baumann et al. describe their experiences in building and benchmarking Barrelfish on a variety of Intel and AMD systems ranging from four to 32 cores. When these systems run Linux or Windows, they rely on cache coherency mechanisms to maintain a single image of the operating system.
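    A minimal sketch of the replica-maintenance idea described above (illustrative C, not Barrelfish code): instead of locking a shared structure, each core keeps its own copy and applies updates that arrive as messages.

        #include <stdint.h>

        #define NCORES 4

        /* Per-core replica of a kernel data structure (here just a tiny
         * counter of running processes). No core touches another's copy. */
        struct replica {
            uint64_t version;
            int32_t  nprocs;
        };

        static struct replica replicas[NCORES];

        /* Update message sent to every core instead of writing shared state. */
        struct update_msg {
            uint64_t version;
            int32_t  delta;      /* e.g. +1 process created, -1 exited */
        };

        /* Each core runs this when it dequeues an update message, applying
         * it to its own replica only. */
        static void handle_update(int core, const struct update_msg *m)
        {
            if (m->version > replicas[core].version) {
                replicas[core].version = m->version;
                replicas[core].nprocs += m->delta;
            }
        }

        /* Stand-in transport: deliver immediately. A real multikernel would
         * enqueue the message on an inter-core channel instead. */
        static void send_msg(int core, const struct update_msg *m)
        {
            handle_update(core, m);
        }

        /* Initiating core: broadcast the update rather than lock shared data. */
        void broadcast_update(uint64_t version, int32_t delta)
        {
            struct update_msg m = { .version = version, .delta = delta };
            for (int core = 0; core < NCORES; core++)
                send_msg(core, &m);
        }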
  • A Tool for Modeling, Viewing, and Checking Distributed Systems
    ;login: FALL 2016, VOL. 41, NO. 3. In this issue:
    POSIX: The Old, the New, and the Missing (Vaggelis Atlidakis, Jeremy Andrus, Roxana Geambasu, Dimitris Mitropoulos, and Jason Nieh)
    Runway: A Tool for Modeling, Viewing, and Checking Distributed Systems (Diego Ongaro)
    Create a Threat Model for Your Organization (Bruce Potter)
    Eliminating Toil (Betsy Beyer, Brendan Gleason, Dave O’Connor, and Vivek Rau)
    Columns: Separating Protocol Implementations from Transport in Python (David Beazley); Shutting Down Go Servers with Manners (Kelsey Hightower); Using Consul for Distributed Key-Value Stores (David N. Blank-Edelman); Monitoring Paging Trauma (Dave Josephsen); Making Security into a Science (Dan Geer); Distributed Systems (Robert G. Ferrell)
  • Customized OS Kernel for Data-Processing on Modern Hardware
    Imperial College London, MEng Computing Individual Project: Customized OS kernel for data-processing on modern hardware. Author: Daniel Grumberg. Supervisor: Dr. Jana Giceva. An Individual Project Report submitted in fulfillment of the requirements for the degree of MEng Computing in the Department of Computing, June 18, 2018.

    Abstract: The end of Moore’s Law shows that traditional processor design has hit its peak and that it cannot yield new major performance improvements. As an answer, computer architecture is turning towards domain-specific solutions in which the hardware’s properties are tailored to specific workloads. An example of this new trend is the Xeon Phi accelerator card, which aims to bridge the gap between modern CPUs and GPUs. Commodity operating systems are not well equipped to leverage these advancements. Most systems treat accelerators as they would an I/O device, or as an entirely separate system. Developers need to craft their algorithms to target the new device and to leverage its properties. However, transferring data between computational devices is very costly, so programmers must also carefully specify all the individual memory transfers between the different execution environments to avoid unnecessary costs. This proves to be a complex task, and software engineers often need to specialise and complicate their code in order to implement optimal memory transfer patterns. This project analyses the features of main-memory hash join algorithms that are used for join operations in relational databases. Specifically, we explore the relationship between the main hash join algorithms and the hardware properties of Xeon Phi cards.
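    For context, the core of the canonical no-partitioning main-memory hash join that such studies start from fits in a few lines: build a hash table on one relation’s keys, then probe it with the other. A simplified generic C sketch (not the project’s code):

        #include <stdlib.h>
        #include <stdint.h>

        struct tuple  { uint64_t key; uint64_t payload; };
        struct bucket { uint64_t key; uint64_t payload; struct bucket *next; };

        /* Build phase: insert every tuple of relation R into a chained hash table. */
        static struct bucket **build(const struct tuple *r, size_t nr, size_t nbuckets)
        {
            struct bucket **ht = calloc(nbuckets, sizeof(*ht));
            for (size_t i = 0; i < nr; i++) {
                struct bucket *b = malloc(sizeof(*b));
                b->key     = r[i].key;
                b->payload = r[i].payload;
                b->next    = ht[r[i].key % nbuckets];   /* trivial hash */
                ht[r[i].key % nbuckets] = b;
            }
            return ht;
        }

        /* Probe phase: look up every tuple of relation S; count the matches. */
        static size_t probe(struct bucket **ht, size_t nbuckets,
                            const struct tuple *s, size_t ns)
        {
            size_t matches = 0;
            for (size_t i = 0; i < ns; i++)
                for (struct bucket *b = ht[s[i].key % nbuckets]; b; b = b->next)
                    if (b->key == s[i].key)
                        matches++;
            return matches;
        }

    On the Xeon Phi the interesting question is not this inner loop but where R, S and the hash table live: build and probe stress memory latency and bandwidth differently, and the cost of moving the relations across PCI Express can dominate, which is the kind of trade-off the project analyses.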
  • Baumann: The Multikernel: A New OS Architecture for Scalable Multicore Systems
    The Multikernel: A new OS architecture for scalable multicore systems. Andrew Baumann, Paul Barham, Pierre-Evariste Dagand, Tim Harris, Rebecca Isaacs, Simon Peter, Timothy Roscoe, Adrian Schüpbach, and Akhilesh Singhania (Systems Group, ETH Zurich; Microsoft Research, Cambridge; ENS Cachan Bretagne).

    [Figure 1: The multikernel model. Per-core OS nodes with replicated state exchange asynchronous messages and run agreement algorithms over the interconnect, above heterogeneous x86, x64, ARM and GPU cores with arch-specific code.]

    Abstract: Commodity computer systems contain more and more processor cores and exhibit increasingly diverse architectural tradeoffs, including memory hierarchies, interconnects, instruction sets and variants, and IO configurations. Previous high-performance computing systems have scaled in specific cases, but the dynamic nature of modern client and server workloads, coupled with the impossibility of statically optimizing an OS for all workloads and hardware variants, pose serious challenges for operating system structures. We argue that the challenge of future multicore hardware is best met by embracing the networked nature of the machine, rethinking OS architecture using ideas from distributed systems. We investigate a new OS structure, the multikernel, that treats the machine as a network of independent cores, assumes no inter-core sharing at the lowest level, and moves traditional OS functionality to a distributed system of processes that communicate via message-passing. Such hardware, while in some regards similar to earlier parallel systems, is new in the general-purpose computing domain. We increasingly find multicore systems in a variety of environments ranging from personal computing platforms to data centers, with workloads that are less predictable, and often more OS-intensive, than traditional …
  • CheriOS: Designing an Untrusted Single-Address-Space Capability Operating System Utilising Capability Hardware and a Minimal Hypervisor
    CheriOS: Designing an untrusted single-address-space capability operating system utilising capability hardware and a minimal hypervisor. Lawrence G. Esswood, University of Cambridge Computer Laboratory, Churchill College, July 2020. This thesis is submitted for the degree of Doctor of Philosophy.

    Declaration: This thesis is the result of my own work and includes nothing which is the outcome of work done in collaboration except as declared in the Preface and specified in the text. I further state that no substantial part of my thesis has already been submitted, or is being concurrently submitted, for any such degree, diploma or other qualification at the University of Cambridge or any other University or similar institution except as declared in the Preface and specified in the text. It does not exceed the prescribed word limit for the relevant Degree Committee. Lawrence G. Esswood, July 2020.

    Abstract: This thesis presents the design, implementation, and evaluation of a novel capability operating system: CheriOS. The guiding motivation behind CheriOS is to provide strong security guarantees to programmers, even allowing them to continue to program in fast, but typically unsafe, languages such as C. Furthermore, it does this in the presence of an extremely strong adversarial model: in CheriOS, every compartment – and even the operating system itself – is considered actively malicious. Building on top of the architecturally enforced capabilities offered by the CHERI microprocessor, I show that only a few more capability types and enforcement checks are required to provide a strong compartmentalisation model that can facilitate mutual distrust.
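    To make the capability idea concrete: a capability can be thought of as an unforgeable fat pointer carrying bounds and permissions that are checked on every use, and that can only ever be narrowed when delegated. The sketch below is a plain-C software model of those checks (hypothetical; on CHERI the representation and enforcement live in hardware, and CheriOS builds its compartments on top of that):

        #include <stdbool.h>
        #include <stddef.h>
        #include <stdint.h>

        /* Software model of a capability: base, length, permissions.
         * On CHERI this is a tagged hardware value, not a plain struct. */
        #define PERM_LOAD  (1u << 0)
        #define PERM_STORE (1u << 1)

        struct cap {
            uintptr_t base;
            size_t    length;
            uint32_t  perms;
        };

        /* Every access is checked against bounds and permissions. */
        static bool cap_check(const struct cap *c, uintptr_t addr, size_t len,
                              uint32_t needed_perms)
        {
            return addr >= c->base &&
                   len <= c->length &&
                   addr - c->base <= c->length - len &&
                   (c->perms & needed_perms) == needed_perms;
        }

        /* Derive a capability with reduced bounds and permissions; this
         * monotonicity (rights can shrink but never grow) is what makes
         * delegation between mutually distrusting compartments safe. */
        static bool cap_restrict(const struct cap *parent, uintptr_t base,
                                 size_t length, uint32_t perms, struct cap *out)
        {
            if (!cap_check(parent, base, length, 0))
                return false;
            if ((perms & parent->perms) != perms)
                return false;
            out->base = base; out->length = length; out->perms = perms;
            return true;
        }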
  • Festschrift Honoring Rick Rashid on His 60th Birthday
    Festschrift for Rick Rashid on his 60th birthday. Rich Draves, Pamela Ash. 9 March 2012. Dan Hart illustration for School of Computer Science, © 2009 Carnegie Mellon University, used with permission.

    Introduction (Rich Draves): You’re holding a Festschrift honoring Rick Rashid on his 60th birthday. You may well ask, “What is a Festschrift?” According to Wikipedia, a Festschrift “is a book honoring a respected person, especially an academic, and presented during his or her lifetime.” The Festschrift idea dates back to August 2011, when a group of friends and colleagues suddenly realized that Rick’s 60th birthday was rapidly approaching. In our defense, I can only point to Rick’s youthful appearance and personality. Peter Lee suggested that we organize a Festschrift. The idea was foreign, but everyone immediately recognized that it was the right way to honor Rick. The only problem was timing. Fortunately, this is a common problem and the belated Festschrift is somewhat traditional. We set a goal of holding a colloquium on March 9, 2012 (in conjunction with Microsoft Research’s annual TechFest) and producing a volume by September 2012, within a year of Rick’s birthday. This volume has content similar to the colloquium, but some authors made a different written contribution. The DVD in the back of the book contains a recording of the colloquium. I want to emphasize that we’re celebrating Rick’s career and accomplishments to date, but we are also looking forward to many more great things to come. In other words, we need not fear his retirement.
  • Transactional Memory, 2nd Edition (Copyright © 2010 by Morgan & Claypool)
    Transactional Memory, 2nd edition. Tim Harris, James Larus, and Ravi Rajwar. Copyright © 2010 by Morgan & Claypool. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopy, recording, or any other), except for brief quotations in printed reviews, without the prior permission of the publisher. www.morganclaypool.com. ISBN: 9781608452354 (paperback), 9781608452361 (ebook). DOI 10.2200/S00272ED1V01Y201006CAC011.

    A publication in the Morgan & Claypool Publishers series Synthesis Lectures on Computer Architecture, Lecture #11. Series Editor: Mark D. Hill, University of Wisconsin. Series ISSN: Print 1935-3235, Electronic 1935-3243. Synthesis Lectures on Computer Architecture publishes 50- to 100-page publications on topics pertaining to the science and art of designing, analyzing, selecting and interconnecting hardware components to create computers that meet functional, performance and cost goals. The scope will largely follow the purview of premier computer architecture conferences, such as ISCA, HPCA, MICRO, and ASPLOS.

    Recent titles in the series: Transactional Memory, 2nd edition (Tim Harris, James Larus, and Ravi Rajwar, 2010); Computer Architecture Performance Evaluation Models (Lieven Eeckhout, 2010); Introduction to Reconfigurable Supercomputing (Marco Lanzagorta, Stephen Bique, and Robert Rosenberg, 2009)
  • Operating System Support for Warehouse-Scale Computing
    Operating system support for warehouse-scale computing. Malte Schwarzkopf, University of Cambridge Computer Laboratory, St John’s College, October 2015. This dissertation is submitted for the degree of Doctor of Philosophy.

    Declaration: This dissertation is the result of my own work and includes nothing which is the outcome of work done in collaboration except where specifically indicated in the text. This dissertation is not substantially the same as any that I have submitted, or is being concurrently submitted, for a degree or diploma or other qualification at the University of Cambridge or any other University or similar institution. I further state that no substantial part of my dissertation has already been submitted, or is being concurrently submitted, for any such degree, diploma or other qualification at the University of Cambridge or any other University or similar institution. This dissertation does not exceed the regulation length of 60,000 words, including tables and footnotes.

    Summary: Computer technology currently pursues two divergent trends: ever smaller mobile devices bring computing to new everyday contexts, and ever larger-scale data centres remotely back the applications running on these devices. These data centres pose challenges to systems software and the operating system (OS): common OS abstractions are fundamentally scoped to a single machine, while data centre applications treat thousands of machines as a “warehouse-scale computer” (WSC). I argue that making the operating system explicitly aware of their distributed operation can result in significant benefits. This dissertation presents the design of a novel distributed operating system for warehouse-scale computers.
  • Porting Cilk to the Barrelfish OS
    Porting Cilk to the Barrelfish OS, by Chau Ho Bao Le. KTH Royal Institute of Technology, Dept. of Software and Computer Systems, Information and Communication Technology. Master of Science Thesis, Stockholm, Sweden, 2013. TRITA-ICT-EX-2013:66. Supervisor: Georgios Varisteas, MSc. Examiner: Prof. Mats Brorsson, KTH, Sweden.

    Abstract: The Barrelfish operating system is an experimental instance of the multikernel structure, which exhibits good features such as hardware heterogeneity, scalability, dynamicity, etc. Barrelfish is a work in progress and lacks applications. Therefore, there is a need to investigate the efficiency of applications running on Barrelfish, and one of the candidates is a shared-memory application. To conduct an empirical study, Cilk is chosen inasmuch as its runtime library is designed for shared-memory architectures and it is known to achieve good performance. This thesis focuses on making Cilk run on top of Barrelfish in order to reach two goals: portability, which Barrelfish is claimed to support, and good speed afterwards. The porting involves compiling the Cilk runtime source code by replacing its pthread subroutines with a set of Barrelfish APIs, and then changing the way the Cilk scheduler spawns worker threads on multiple cores. The main point of the porting, however, is to give different cores access to the same virtual address space. Luckily, Barrelfish provides a notion of domain which specifies the number of cores in an application, so that these cores can share the same memory space.
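    The porting strategy described above boils down to two things: a shim that maps Cilk’s pthread calls onto Barrelfish’s threading and domain primitives, and spanning the application’s domain so that workers on different cores share one virtual address space. A hypothetical C sketch of the shim’s shape follows; the barrelfish_* names are invented placeholders, not the actual Barrelfish API.

        /* Placeholder primitives (names invented for this sketch). A real
         * port would call Barrelfish's domain-spanning and thread-creation
         * functions here; these stubs only let the sketch compile. */
        static int barrelfish_domain_span_core(int core_id)
        {
            (void)core_id;
            return 0;
        }

        static int barrelfish_thread_create_on(int core_id,
                                               void *(*fn)(void *), void *arg)
        {
            (void)core_id; (void)fn; (void)arg;
            return 0;
        }

        typedef int cilk_worker_t;

        /* Replacement for the pthread_create() calls in Cilk's scheduler:
         * first make sure the application's domain spans the target core,
         * so the new worker sees the same virtual address space, then
         * start the worker's main loop on that core. */
        int cilk_spawn_worker(cilk_worker_t *w, int core_id,
                              void *(*worker_main)(void *), void *arg)
        {
            int err = barrelfish_domain_span_core(core_id);
            if (err != 0)
                return err;

            err = barrelfish_thread_create_on(core_id, worker_main, arg);
            if (err != 0)
                return err;

            *w = core_id;        /* trivial worker handle for this sketch */
            return 0;
        }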