A POSIX Threads Based DSM System

Total Pages: 16

File Type: PDF; Size: 1020 KB

A POSIX Threads Based DSM System

Murks – A POSIX Threads Based DSM System

M. Pizka, C. Rehn
Institut für Informatik, Technische Universität München, 80290 Munich, Germany
email: {pizka,rehn}@in.tum.de

ABSTRACT
The shared memory paradigm provides an easy-to-use programming model for communicating processes. Its implementation in distributed environments has proved to be harder than expected. Most distributed shared memory (DSM) systems suffer from either poor performance or they are very complicated to use. The DSM system Murks, presented in this paper, is the result of the sobering experiences we made by trying to integrate an existing DSM system into a distributed OS.

Murks' distinctive features are 1) full support for POSIX multithreading instead of a proprietary thread package of its own, and 2) it does not burden the programmer with additional, difficult-to-use memory management services. These advantages are achieved by a tight integration of the DSM subsystem into the OS. By this, Murks is able to provide a better combination of performance and ease-of-use than other DSM systems.

KEY WORDS
distributed systems, operating systems, memory management, multi-threading, distributed shared memory

1. Motivation

Murks is a page-based distributed shared memory (DSM) system providing full transparency concerning the physical distribution of the hardware (HW) configuration. Instead of focusing on peak performance, our major design goal was to develop a DSM that can be seamlessly integrated into a distributed and parallel operating system (OS). The reason for this was the observation and (frustrating) practical experience that there are certainly many different DSM systems with well-elaborated management strategies, but none of them proved to be suitable as an OS-level service crosscutting several applications of a long-lived system.

To substantiate these statements, we will first sketch the key design principles of the distributed OS MoDiS [PE97], for which a DSM system was sought, in 1.1. Afterwards, we are able to define requirements for a suitable DSM system and to discuss the pros and cons of the memory and execution models provided by existing DSMs in section 2. Section 3 gives an explanation for the lack of support for POSIX multi-threading [IEE95] in most DSM systems. Following these observations, we introduce the concepts of our DSM system Murks, which aims at overcoming the observed shortcomings, in section 4, before we describe implementation details, such as Linux kernel modifications, in section 5. Section 7 concludes with a brief summary of our observations and Murks' capabilities after discussing its performance in section 6.

1.1 Distributed OS Context

In the research project MoDiS (Model oriented Distributed Systems) [EP99] we follow a language-based, integrated, and top-down oriented approach to develop a distributed OS that provides a high-quality programming environment (i.e. ease-of-use, reusability, maintainability, etc.) as well as proper performance for general-purpose distributed computing.

1.1.1 Language-based

The term language-based means that the development of the OS is driven by the development of high-level language concepts for the specification of parallel and distributed systems. E.g., concurrency, cooperation and synchronization are language concepts instead of runtime-level services. By this, we are able to provide suitable programming concepts without being restricted either to the features of a predefined OS or to the concepts of an already existing programming language, such as passing object references in Java, which must be regarded as inadequate in distributed environments.

INSEL. This programming model is implemented and offered to the programmer by the syntax of the Ada-like [Ada83] programming language INSEL [Win96] and its optimizing source-to-machine-level compiler [Piz97]. INSEL can be characterized as a high-level, type-safe, imperative and object-based programming language, supporting explicit task parallelism.

Objects are dynamically created during execution as instances of class-describing objects. To prevent dangling pointers, objects are implicitly deleted by a garbage collector. Determined with the class definition, objects may either be active ones, called actors, or passive ones. An actor defines a separate flow of control and executes concurrently to its creator.

Actors may communicate directly in a rendezvous style as well as indirectly by accessing shared passive objects (shared memory paradigm). All requests to objects are served synchronously.

DSM-Requirements Part 1

R1 trivial: We need a DSM subsystem that efficiently implements actor interaction via shared passive objects of arbitrary granularity. Note, passive objects may be as small as a single integer variable or complex data structures composed of other passive and active objects.

R2 The DSM must be compatible with varying degrees of parallelism defined by actors of arbitrary number and size (in both space and time). Due to the high level of abstraction of the language concepts, it would be completely inappropriate to rigidly map the actor concept to, e.g., heavy-weight processes. Instead, an actor might be implemented as a kernel-level or user-level thread or even inlined into another thread. The DSM must not obstruct this flexibility.

R3 The language-based philosophy demands that the application level is left unspoiled by DSM concepts. The programmer must not be burdened with any memory management concepts besides creating objects in stack or heap space.

1.1.2 Integrated

The entire distributed system in execution, comprising several applications and the distributed OS, is regarded as a single globally structured system [PE97], which can be dynamically extended with new applications and maybe also OS functionality.

On the one hand, this holistic view provides an ideal integration of knowledge about the properties of distributed objects and their interdependencies. Thus – at a conceptual level – the OS may consider non-local information for important resource management decisions instead of being limited to local optimization a priori. On the other hand, the single system view implies longevity of the system, which has a surprisingly strong impact on low-level OS management mechanisms and therewith also on the DSM subsystem.

DSM-Requirements Part 2

R4 The DSM subsystem has to support a single (integrated) virtual address space (VA) across all nodes. Note, it is not obvious that a DSM system also implements a single VA!

R5 The DSM must support multiple applications simultaneously and provide partial cleanup of memory partitions previously allocated by terminated applications.

R6 It must also support long-lived large-scale applications that eventually migrate between arbitrary nodes of the system.

1.1.3 Top-Down Management

The philosophy behind our approach to automate resource management is to free the application level from all tasks but specifying parallel and cooperative computations. This especially holds for all tasks dealing with the physical distribution of the execution environment. Note that this principle is a prerequisite for achieving portability, scalability and enhanced maintainability of distributed applications.

It is solely within the responsibility of the OS to map the abstract requirements of applications, specified by means of the language INSEL, to physically distributed resources, such as implementing the abstract creation of a dynamic integer object by allocating a memory object in shared heap space. Conceptually, this mapping between abstract concepts and concrete HW features is achieved by top-down oriented stepwise refinement, performed by the OS. Practically, some of these transformation steps are performed at compile-time while others are executed at run-time. Hence, all mechanisms, strategies and services of the OS are to be tailored to the language-level concepts to match the requirements of distributed applications [ea97].

1.2 Generalization

It should be obvious that most of the requirements stated above are relevant for other distributed systems, too, and not limited to the scope of MoDiS. Thus, MoDiS should only be regarded as the initial spark for our considerations. Its major contribution is to deliver greater creativity for the design of a distributed computing infrastructure by being language-based and top-down oriented instead of concentrating on how to extend legacy systems.

We believe that integration, automation and transparency are important prerequisites to achieve the desired ease-of-use and performance of distributed systems in general. Hence, these design principles will also be significant for DSM systems beyond the context of MoDiS.

2. Related Work

DSMs have been of strong interest to the OS community from the mid 80s to the late 90s [Li86, ea99]. The main motivation was to provide a simplified programming model relative to the omnipresent message passing paradigm [Sun93, For94]. But several attempts to implement the DSM concept exposed difficulties that proved to be hard or undoable without specialized HW. [Car98] summarizes that DSM systems suffer from

• being too difficult to use, or (non-exclusive)
• severe performance penalties.

Since we are in need of support for a combination of shared memory and parallelism, we have to examine both memory and execution models of DSM systems.

2.1 Memory Models

Single-Threaded DSMs

Most DSM systems provide a single-threaded execution … Ivy [Li86] was the first page-based DSM, followed …
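To make the programming model concrete: the following minimal POSIX threads sketch (ours, not taken from the paper) shows two threads in the role of INSEL actors cooperating through a shared passive object, with a mutex serializing requests. A DSM system such as Murks aims to extend exactly this model transparently across node boundaries.

    /* Illustrative sketch (not from the paper): two POSIX threads playing the
     * role of actors, cooperating through a shared passive object.
     * Compile with: gcc -pthread shared_object.c */
    #include <pthread.h>
    #include <stdio.h>

    typedef struct {            /* a "shared passive object" */
        pthread_mutex_t lock;   /* requests are served one at a time */
        long value;
    } counter_t;

    static counter_t counter = { PTHREAD_MUTEX_INITIALIZER, 0 };

    static void *actor(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&counter.lock);
            counter.value++;               /* access the shared object */
            pthread_mutex_unlock(&counter.lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, actor, NULL);
        pthread_create(&t2, NULL, actor, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("final value: %ld\n", counter.value);  /* always 200000 */
        return 0;
    }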
Recommended publications
  • Other APIs: What's Wrong with OpenMP?
    Threaded Programming – Other APIs

    What's wrong with OpenMP?
    • OpenMP is designed for programs where you want a fixed number of threads, and you always want the threads to be consuming CPU cycles.
      – cannot arbitrarily start/stop threads
      – cannot put threads to sleep and wake them up later
    • OpenMP is good for programs where each thread is doing (more or less) the same thing.
    • Although OpenMP supports C++, it's not especially OO friendly – though it is gradually getting better.
    • OpenMP doesn't support other popular base languages – e.g. Java, Python

    (diagram: examples of thread patterns OpenMP can and cannot express)

    Threaded programming APIs
    • Essential features
      – a way to create threads
      – a way to wait for a thread to finish its work
      – a mechanism to support thread-private data
      – some basic synchronisation methods – at least a mutex lock, or atomic operations
    • Optional features
      – support for tasks
      – more synchronisation methods – e.g. condition variables, barriers, ...
      – higher levels of abstraction – e.g. parallel loops, reductions

    What are the alternatives?
    • POSIX threads
    • C++ threads
    • Intel TBB
    • Cilk
    • OpenCL
    • Java
    (not an exhaustive list!)

    POSIX threads
    • POSIX threads (or Pthreads) is a standard library for shared memory programming without directives.
      – Part of the ANSI/IEEE 1003.1 standard (1996)
    • Interface is a C library
      – no standard Fortran interface
      – can be used with C++, but not OO friendly
    • Widely available – even for Windows
      – typically installed as part of the OS
      – code is pretty portable
    • Lots of low-level control over behaviour of threads
    • Lacks a proper memory consistency model

    Thread forking
    #include <pthread.h>
    int pthread_create(
        pthread_t *thread,
        const pthread_attr_t *attr,
        void *(*start_routine)(void *),
        void *arg);
    • Creates a new thread:
      – first argument returns a pointer to a thread descriptor.
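    A minimal sketch of the fork/join pattern these slides describe (our example, assuming a POSIX system; compile with gcc -pthread):

    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        printf("worker %d running\n", *(int *)arg);
        return NULL;   /* equivalent to pthread_exit(NULL) */
    }

    int main(void)
    {
        pthread_t tid;
        int id = 0;
        /* first argument returns the new thread's identifier;
         * NULL attributes select the defaults */
        pthread_create(&tid, NULL, worker, &id);
        pthread_join(tid, NULL);  /* wait for the thread to finish its work */
        return 0;
    }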
  • Thread and Multitasking
    Thread and multitasking
    Alberto Bosio, Associate Professor – UM Microelectronics Department
    [email protected]

    Agenda
    • POSIX Threads Programming
      http://www.yolinux.com/TUTORIALS/LinuxTutorialPosixThreads.html
    • Compiler: http://cygwin.com/index.html

    Process vs Thread
    (picture source: https://computing.llnl.gov/tutorials/pthreads/)

    Shared Memory Model
    (picture source: https://computing.llnl.gov/tutorials/pthreads/)

    Simple Thread Example
    void *func(void *arg)
    {
        /* define local data */
        /* function code */
        pthread_exit(NULL);   /* or pass back an exit value */
    }

    int main(void)
    {
        pthread_t tid;
        void *exit_value;
        pthread_create(&tid, NULL, func, NULL);
        pthread_join(tid, &exit_value);
        return 0;
    }

    Basic Functions
    Purpose                              Process Model   Threads Model
    Creation of a new thread             fork()          thr_create()
    Start execution of a new thread      exec()          [thr_create() builds the new thread and starts the execution]
    Wait for completion of thread        wait()          thr_join()
    Exit and destroy the thread          exit()          thr_exit()

    Code comparison
    /* process model */
    main() { fork(); fork(); fork(); }
    /* thread model */
    main() {
        pthread_create(0, 0, func, 0);
        pthread_create(0, 0, func, 0);
        pthread_create(0, 0, func, 0);
    }

    Creation
    #include <pthread.h>
    int pthread_create(
        pthread_t *thread,
        const pthread_attr_t *attr,
        void *(*start_routine)(void *),
        void *arg);

    Exit
    #include <pthread.h>
    void pthread_exit(void *retval);

    Join
    #include <pthread.h>
    int pthread_join(pthread_t thread, void **retval);

    Passing an argument (wrong example)
    int rc;
    long t;
    for (t = 0; t < NUM_THREADS; t++) {
        printf("Creating thread %ld\n", t);
        /* BUG: every thread receives the address of the same variable t */
        rc = pthread_create(&threads[t], NULL, PrintHello, (void *)&t);
        ...
    }

    Passing an argument (good example)
    long *taskids[NUM_THREADS];
    for (t = 0; t < NUM_THREADS; t++) {
        taskids[t] = (long *)malloc(sizeof(long));
        *taskids[t] = t;
        printf("Creating thread %ld\n", t);
        rc = pthread_create(&threads[t], NULL, PrintHello, (void *)taskids[t]);
        ...
    }
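    Putting the snippets above together, a self-contained version of the tutorial's "good example" (our completion; NUM_THREADS and PrintHello are the tutorial's names):

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define NUM_THREADS 4

    static void *PrintHello(void *arg)
    {
        long id = *(long *)arg;
        printf("Hello from thread %ld\n", id);
        free(arg);               /* the thread owns its heap-allocated argument */
        return NULL;
    }

    int main(void)
    {
        pthread_t threads[NUM_THREADS];
        for (long t = 0; t < NUM_THREADS; t++) {
            long *taskid = malloc(sizeof(long));   /* one private id per thread */
            *taskid = t;
            printf("Creating thread %ld\n", t);
            pthread_create(&threads[t], NULL, PrintHello, taskid);
        }
        for (long t = 0; t < NUM_THREADS; t++)
            pthread_join(threads[t], NULL);
        return 0;
    }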
  • Introduction to Multi-Threading and Vectorization (Matti Kortelainen, LArSoft Workshop 2019)
    Introduction to multi-threading and vectorization
    Matti Kortelainen, LArSoft Workshop 2019, 25 June 2019

    Outline
    Broad introductory overview:
    • Why multithread?
    • What is a thread?
    • Some threading models
      – std::thread
      – OpenMP (fork-join)
      – Intel Threading Building Blocks (TBB) (tasks)
    • Race condition, critical region, mutual exclusion, deadlock
    • Vectorization (SIMD)

    Motivations for multithreading
    (figure: hardware trends, image courtesy of K. Rupp)
    • One process on a node: speedups from parallelizing parts of the programs
      – Any problem can get speedup if the threads can cooperate on
        • same core (sharing L1 cache)
        • L2 cache (may be shared among a small number of cores)
    • Fully loaded node: save memory and other resources
      – Threads can share objects -> N threads can use significantly less memory than N processes
    • If the smallest chunk of data is so big that only one fits in memory at a time, is there any other option?

    What is a (software) thread? (in POSIX/Linux)
    • "Smallest sequence of programmed instructions that can be managed independently by a scheduler" [Wikipedia]
    • A thread has its own
      – Program counter
      – Registers
      – Stack
      – Thread-local memory (better to avoid in general)
    • Threads of a process share everything else, e.g.
      – Program code, constants
      – Heap memory
      – Network connections
      – File handles
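    A sketch of the race-condition idea listed in the outline (our illustration, not from the slides): two threads do an unsynchronized read-modify-write on the same word, so increments are lost and the printed total varies from run to run.

    #include <pthread.h>
    #include <stdio.h>

    static long shared = 0;

    static void *inc(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 1000000; i++)
            shared++;              /* unprotected critical region: a race */
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;
        pthread_create(&a, NULL, inc, NULL);
        pthread_create(&b, NULL, inc, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        /* usually prints less than 2000000; wrapping shared++ in a mutex
         * (mutual exclusion) restores the expected result */
        printf("%ld\n", shared);
        return 0;
    }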
  • Intel Threading Building Blocks
    Praise for Intel Threading Building Blocks

    "The Age of Serial Computing is over. With the advent of multi-core processors, parallel-computing technology that was once relegated to universities and research labs is now emerging as mainstream. Intel Threading Building Blocks updates and greatly expands the 'work-stealing' technology pioneered by the MIT Cilk system of 15 years ago, providing a modern industrial-strength C++ library for concurrent programming. Not only does this book offer an excellent introduction to the library, it furnishes novices and experts alike with a clear and accessible discussion of the complexities of concurrency."
    — Charles E. Leiserson, MIT Computer Science and Artificial Intelligence Laboratory

    "We used to say make it right, then make it fast. We can't do that anymore. TBB lets us design for correctness and speed up front for Maya. This book shows you how to extract the most benefit from using TBB in your code."
    — Martin Watt, Senior Software Engineer, Autodesk

    "TBB promises to change how parallel programming is done in C++. This book will be extremely useful to any C++ programmer. With this book, James achieves two important goals:
    • Presents an excellent introduction to parallel programming, illustrating the most common parallel programming patterns and the forces governing their use.
    • Documents the Threading Building Blocks C++ library—a library that provides generic algorithms for these patterns.
    TBB incorporates many of the best ideas that researchers in object-oriented parallel computing developed in the last two decades."
    — Marc Snir, Head of the Computer Science Department, University of Illinois at Urbana-Champaign

    "This book was my first introduction to Intel Threading Building Blocks."
  • Intel Threading Building Blocks (Mike Voss)
    Mike Voss, Principal Engineer, Core and Visual Computing Group, Intel

    We've already talked about threading with pthreads and the OpenMP* API
    • POSIX threads (pthreads) lets us express threading but makes us do a lot of the hard work
    • OpenMP is a higher-level model and is widely used in C/C++ and Fortran
      • It takes care of many of the low-level error-prone details for us
    • OpenMP has weaknesses, especially for C++ developers…
      • It uses #pragmas and so doesn't look like C++
      • It is not a composable parallelism model

    Agenda
    • What is composability and why is it important?
    • An introduction to the Threading Building Blocks (TBB) library
      • What it is and what it contains
    • TBB's high-level execution interfaces
      • The generic parallel algorithms, the flow graph and Parallel STL
    • Synchronization primitives and concurrent containers
    • The TBB scalable memory allocator

    There are different ways parallel software components can be combined with other parallel software components.

    Nested composition:
    int main() {
        #pragma omp parallel
        f();
    }
    void f() {
        #pragma omp parallel
        g();
    }
    void g() {
        #pragma omp parallel
        h();
    }
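    To see why nested composition is problematic, here is a hedged, runnable variant of the slide's skeleton (our example; the names f and g follow the slide, the printing logic is our addition). Each level spawns its own thread team, so thread counts multiply. Compile with gcc -fopenmp.

    #include <omp.h>
    #include <stdio.h>

    void g(void)
    {
        #pragma omp parallel
        {
            #pragma omp single
            printf("  inner team of %d threads\n", omp_get_num_threads());
        }
    }

    void f(void)
    {
        #pragma omp parallel
        g();   /* every outer thread spawns a full inner team in g() */
    }

    int main(void)
    {
        omp_set_nested(1);   /* allow nested parallel regions (deprecated in
                              * OpenMP 5.0 in favor of omp_set_max_active_levels) */
        f();                 /* with N cores this can create roughly N*N threads */
        return 0;
    }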
  • CSE 220: Systems Programming POSIX Threads and Synchronization
    CSE 220: Systems Programming – POSIX Threads and Synchronization
    Ethan Blanton, Department of Computer Science and Engineering, University at Buffalo

    POSIX Threads
    The POSIX threads API adds threading to Unix. You will also see this API called Pthreads or pthreads. Early Unix provided only the process model for concurrency. POSIX threads look like processes, but share more resources. Every POSIX thread starts with a function.

    POSIX Synchronization
    Pthreads also provides synchronization mechanisms. In fact, it provides a rather rich set of options!
    • Mutexes
    • Semaphores
    • Condition variables
    • Thread joining
    • Memory barriers (we won't talk about these)
    Only semaphores are covered in detail in CS:APP.

    Compilation with Pthreads
    Pthreads may require extra compilation options. On modern Linux, use -pthread both when compiling and linking. On some other systems, other options may be required:
    • Provide a different compiler or linker option (such as -pthreads)
    • Compile with some preprocessor define (e.g., -DPTHREAD, -D_REENTRANT)
    • Link with a library (e.g., -lpthread)
    …read the documentation!
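    As a sketch of how the mutex and condition-variable primitives listed above combine (our example, not the course's code): a single item handed from a producer to a consumer. Build and link with -pthread.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t ready = PTHREAD_COND_INITIALIZER;
    static int item = 0;
    static int available = 0;

    static void *producer(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        item = 42;
        available = 1;
        pthread_cond_signal(&ready);   /* wake a waiting consumer */
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    static void *consumer(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&lock);
        while (!available)             /* loop guards against spurious wakeups */
            pthread_cond_wait(&ready, &lock);
        printf("consumed %d\n", item);
        pthread_mutex_unlock(&lock);
        return NULL;
    }

    int main(void)
    {
        pthread_t p, c;
        pthread_create(&c, NULL, consumer, NULL);
        pthread_create(&p, NULL, producer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }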
  • Intel Threading Building Blocks: Outfitting C++ for Multi-Core Processor Parallelism
    Intel Threading Building Blocks: Outfitting C++ for Multi-core Processor Parallelism

    Special Offer: Take 35% off when you order direct from O'Reilly. Just visit http://www.oreilly.com/go/inteltbb and use discount code INTTBB.

    By James Reinders. Copyright 2007 O'Reilly Media. First Edition, July 2007. Pages: 332. ISBN-10: 0-596-51480-8. ISBN-13: 978-0-596-51480-8.

    More than ever, multithreading is a requirement for good performance of systems with multi-core chips. This guide explains how to maximize the benefits of these processors through a portable C++ library that works on Windows, Linux, Macintosh, and Unix systems. With it, you'll learn how to use Intel Threading Building Blocks (TBB) effectively for parallel programming—without having to be a threading expert. Written by James Reinders, Chief Evangelist of Intel Software Products, and based on the experience of Intel's developers and customers, this book explains the key tasks in multithreading and how to accomplish them with TBB in a portable and robust manner. With plenty of examples and full reference material, the book lays out common patterns of use, reveals the gotchas in TBB, and gives important guidelines for choosing among alternatives in order to get the best performance. Any C++ programmer who wants to write an application to run on a multi-core system will benefit from this book. TBB is also very approachable for a C programmer or a C++ programmer without much experience with templates.
  • MPI and Pthreads in Parallel Programming
    "1st International Symposium on Computing in Informatics and Mathematics (ISCIM 2011)", in collaboration between EPOKA University and "Aleksandër Moisiu" University of Durrës, June 2-4 2011, Tirana-Durres, ALBANIA.

    MPI and Pthreads in parallel programming
    Sidita DULI, Department of Mathematics and Informatics, University "Luigj Gurakuqi", Shkoder
    Email: [email protected]

    ABSTRACT
    By programming in parallel, a large problem is divided into smaller ones, which are solved concurrently. Two of the techniques that make this possible are the Message Passing Interface (MPI) and POSIX threads (Pthreads). This article highlights the difference between these two implementations of parallelism in software by showing their main characteristics and the advantages of each.

    INTRODUCTION
    The goal of processing in parallel is to take a large task and divide it into smaller tasks that can be processed simultaneously. In this way, the whole task completes faster than if it were executed as one big task. Parallel computing requires this architecture:
    1. The hardware of the computer that will execute the task should be designed to work with multiple processors and to enable communication between the processors.
    2. The operating system of the computer should enable the management of many processors.
    3. The software should be capable of breaking large tasks into multiple smaller tasks which can be performed in parallel.
    The advantages that parallel processing offers are increased processing power, higher throughput, and better performance for the same price. Most of these advantages benefit applications that can break larger tasks into smaller parallel tasks and that can manage the synchronization between those tasks.
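    For comparison with the Pthreads snippets elsewhere on this page, a minimal MPI program in C (our illustration, not from the article); each process reports its rank, and mpicc/mpirun are the usual implementation-provided build and launch tools.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);                 /* start the MPI runtime */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* who am I? */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes? */
        printf("Hello from process %d of %d\n", rank, size);
        MPI_Finalize();
        return 0;
    }

    /* build: mpicc -o mpi_hello mpi_hello.c
     * run:   mpirun -np 4 ./mpi_hello */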
  • An Overview of Intel Thread Building Blocks (TBB) – CMSD, UoHYD, October 15-18, 2013
    C-DAC Four Days Technology Workshop on Hybrid Computing – Coprocessors/Accelerators, Power-Aware Computing – Performance of Application Kernels. hyPACK-2013 (Mode-1: Multi-Core)
    Lecture Topic: Multi-Core Processors: Shared Memory Programming – An Overview of Intel Thread Building Blocks (TBB)
    Venue: CMSD, UoHYD; Date: October 15-18, 2013

    An Overview of Intel Thread Building Blocks – Lecture Outline
    The following topics will be discussed:
    • Why Thread Building Blocks (TBB)?
    • Summary of high-level templates
    • Loops – Loop Parallelization Algorithm templates – Intel TBB
    • Memory Allocation – TBB – Performance Issues
    • TBB Task Scheduler – Speed-Up issues
    • TBB Application Perspective

    Background: Operational Flow of Threads for an Application
    (figure: operational flow of threads – parallel code blocks Bi and Bj executed by threads T1…Tn, with synchronization operations between them; source: http://www.intel.com; reference: [6])

    Simple Comparison of Single-core, Multi-processor, and Multi-core Architectures
    (figure: per-architecture diagrams of CPU state, interrupt logic, execution units and caches)
  • Chapter 4 Shared Memory Programming with Pthreads
    An Introduction to Parallel Programming, Peter Pacheco
    Chapter 4: Shared Memory Programming with Pthreads
    Copyright © 2010, Elsevier Inc. All rights reserved.

    Roadmap
    • Problems programming shared memory systems.
    • Controlling access to a critical section.
    • Thread synchronization.
    • Programming with POSIX threads.
    • Mutexes.
    • Producer-consumer synchronization and semaphores.
    • Barriers and condition variables.
    • Read-write locks.
    • Thread safety.

    A Shared Memory System
    (figure: a shared memory system)

    Processes and Threads
    • A process is an instance of a running (or suspended) program.
    • Threads are analogous to a light-weight process.
    • In a shared memory program a single process may have multiple threads of control.

    POSIX® Threads
    • Also known as Pthreads.
    • A standard for Unix-like operating systems.
    • A library that can be linked with C programs.
    • Specifies an application programming interface (API) for multi-threaded programming.

    Caveat
    • The Pthreads API is only available on POSIX® systems — Linux, MacOS X, Solaris, HPUX, …

    Hello World!
    (listings 1-3 not included in this preview; #include <pthread.h> declares the various Pthreads functions, constants, types, etc.)

    Compiling a Pthread program:
    gcc -g -Wall -o pth_hello pth_hello.c -lpthread
    (link in the Pthreads library)

    Running a Pthreads program:
    ./pth_hello <number of threads>
    ./pth_hello 1
    Hello from the main thread
    Hello from thread 0 of 1
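    Since the chapter's pth_hello listings are missing from this preview, here is our reconstruction of what such a program looks like, matching the usage and output shown above (not Pacheco's actual code):

    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>

    static int thread_count;

    static void *Hello(void *rank)
    {
        long my_rank = (long)rank;
        printf("Hello from thread %ld of %d\n", my_rank, thread_count);
        return NULL;
    }

    int main(int argc, char *argv[])
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s <number of threads>\n", argv[0]);
            return 1;
        }
        thread_count = atoi(argv[1]);
        pthread_t *handles = malloc(thread_count * sizeof(pthread_t));

        for (long t = 0; t < thread_count; t++)
            pthread_create(&handles[t], NULL, Hello, (void *)t);

        printf("Hello from the main thread\n");

        for (long t = 0; t < thread_count; t++)
            pthread_join(handles[t], NULL);

        free(handles);
        return 0;
    }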
  • A Lightweight Operating System for Ultrascale Supercomputers
    Kitten: A Lightweight Operating System for Ultrascale Supercomputers
    Presentation to the New Mexico Consortium Ultrascale Systems Research Center, August 8, 2011
    Kevin Pedretti, Senior Member of Technical Staff, Scalable System Software, Dept. 1423, [email protected]
    SAND2011-5626P. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

    Outline
    • Introduction
    • Kitten lightweight kernel overview
    • Future directions
    • Conclusion

    Four+ Decades of UNIX
    Operating System = Collection of software and APIs. Users care about environment, not implementation details. The LWK is about getting details right for scalability, both within a node and across nodes.

    Sandia Lightweight Kernel Targets
    • Massively parallel, extreme-scale, distributed-memory machine with a tightly-coupled network
    • High-performance scientific and engineering modeling and simulation applications
    • Enable fast message passing and execution
    • Offer a suitable development environment for parallel applications and libraries
    • Emphasize efficiency over functionality
    • Move resource management as close to the application as possible
    • Provide deterministic performance
    • Protect applications from each other

    Lightweight Kernel Overview
    (figure: basic architecture and memory management – applications linked against libc.a and libmpi.a, with pages managed by the policy maker (PCT))
  • POSIX Threads (Pthreads)
    An Introduction to POSIX threads
    Alexey A. Romanenko, [email protected]
    Based on a Sun Microsystems presentation

    What is this section about?
    • POSIX threads overview
    • Program compilation
    • POSIX threads functions
    • etc.

    Agenda
    • POSIX threads programming model
    • POSIX threads overview
      – Launching
      – Threads synchronization
    • The SMP systems bus

    Threads
    Threads use and exist within the process resources, yet are able to be scheduled by the operating system and run as independent entities within a process. A thread can possess an independent flow of control and can be scheduled because it maintains its own state. A thread is not a process.

    Shared Memory Model
    • All threads have access to the same, globally shared memory
    • Data can be shared or private
    • Shared data is accessible by all threads
    • Private data can be accessed only by the thread that owns it
    • Explicit synchronization

    POSIX threads
    POSIX specifies a set of interfaces (functions, header files) for threaded programming, commonly known as POSIX threads, or Pthreads.

    Thread attributes
    Shared (process-wide):
    • process ID
    • parent process ID
    • process group ID and session ID
    • controlling terminal
    • user and group IDs
    • open file descriptors
    • record locks
    • file mode creation mask
    • current directory and root directory
    Distinct (per-thread):
    • thread ID (the pthread_t data type)
    • signal mask (pthread_sigmask)
    • the errno variable
    • alternate signal stack (sigaltstack)
    • real-time scheduling policy and priority

    Thread-safe functions
    POSIX.1-2001 requires that all functions specified in the standard shall …
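    As an illustration of one per-thread attribute from the list above, the signal mask, here is a small sketch (ours, not from the slides) using pthread_sigmask():

    #include <pthread.h>
    #include <signal.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        (void)arg;
        sigset_t set;
        sigemptyset(&set);
        sigaddset(&set, SIGINT);
        /* affects only this thread, unlike the process-wide sigprocmask() */
        pthread_sigmask(SIG_BLOCK, &set, NULL);
        printf("worker: SIGINT now blocked in this thread only\n");
        return NULL;
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        pthread_join(t, NULL);
        printf("main: my signal mask was never changed\n");
        return 0;
    }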