Semaphores – Signaling Devices


Synchronization Mechanisms & Problems

7H. Semaphores
7I. Producer/Consumer Problems
7J. Object Level Locking
7K. Bottlenecks, Contention and Granularity

Semaphores – signaling devices

• semaphores were the original signaling devices, used when direct communication was not an option (e.g. between villages, ships, trains)

Semaphores - History

• concept introduced in 1968 by Edsger Dijkstra
  – cooperating sequential processes
• THE classic synchronization mechanism
  – behavior is well specified and universally accepted
  – a foundation for most synchronization studies
  – a standard reference for all other mechanisms
• more powerful than simple locks
  – they incorporate a FIFO waiting queue
  – they have a counter rather than a binary flag

Semaphores - Operations

• a semaphore has two parts:
  – an integer counter (initial value unspecified)
  – a FIFO waiting queue
• P (proberen/test) ... "wait"
  – decrement the counter; if the result is >= 0, return
  – if the counter is < 0, add the process to the waiting queue
• V (verhogen/raise) ... "post" or "signal"
  – increment the counter
  – if any processes are in the waiting queue, wake the first

Semaphores - for exclusion

• initialize the semaphore count to one
  – the count reflects the number of threads allowed to hold the lock
• use the P/wait operation to take the lock
  – the first caller will succeed
  – subsequent attempts will block
• use the V/post operation to release the lock
  – restores the semaphore count to non-negative
  – if any threads are waiting, unblock the first in line

using semaphores for exclusion

    struct account {
        struct semaphore s;     /* initialize count to 1, queue empty, lock 0 */
        int balance;
        ...
    };

    int write_check( struct account *a, int amount ) {
        int ret;
        p( &a->s );                       /* get exclusive access to the account */
        if ( a->balance >= amount ) {     /* check for adequate funds */
            a->balance -= amount;         /* deduct the check from the balance */
            ret = amount;
        } else
            ret = -1;
        v( &a->s );                       /* release access to the account */
        return( ret );
    }

Semaphores - completion events

• initialize the semaphore count to zero
  – the count reflects the number of completed events
• use the P/wait operation to await completion
  – if already posted, it will return immediately
  – otherwise all callers will block until V/post is called
• use the V/post operation to signal completion
  – increment the count
  – if any threads are waiting, unblock the first in line
• one signal per wait: no broadcasts

using semaphores for notifications

    struct semaphore pipe_semaphore = { 0, 0, 0 };   /* count = 0; pipe empty */

    char buffer[BUFSIZE];
    int read_ptr = 0, write_ptr = 0;

    char pipe_read_char() {
        char c;
        p( &pipe_semaphore );                 /* wait for input available */
        c = buffer[read_ptr++];               /* get next input character */
        if (read_ptr >= BUFSIZE)              /* circular buffer wrap */
            read_ptr -= BUFSIZE;
        return( c );
    }

    void pipe_write_string( char *buf, int count ) {
        while( count-- > 0 ) {
            buffer[write_ptr++] = *buf++;     /* store next character */
            if (write_ptr >= BUFSIZE)         /* circular buffer wrap */
                write_ptr -= BUFSIZE;
            v( &pipe_semaphore );             /* signal char available */
        }
    }

Counting Semaphores

• initialize the semaphore count to the number of available resources
• use the P/wait operation to consume a resource
  – if one is available, it will return immediately
  – otherwise all callers will block until V/post is called
• use the V/post operation to produce a resource
  – increment the count
  – if any threads are waiting, unblock the first in line
• one signal per wait: no broadcasts
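All three initialization conventions (one for mutual exclusion, zero for completion events, N for a pool of resources) map directly onto POSIX semaphores. Below is a minimal sketch, not from the original slides, of a counting semaphore guarding a pool of three interchangeable resources; the pool size and worker logic are illustrative assumptions.

    /* counting semaphore guarding a pool of POOL_SIZE resources (sketch) */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define POOL_SIZE 3                      /* illustrative pool size */

    sem_t pool;                              /* counts resources still available */

    void *worker( void *arg ) {
        sem_wait( &pool );                   /* P: claim a resource (may block) */
        printf( "thread %ld holds a resource\n", (long) arg );
        /* ... use the resource ... */
        sem_post( &pool );                   /* V: return the resource */
        return NULL;
    }

    int main( void ) {
        pthread_t t[8];
        sem_init( &pool, 0, POOL_SIZE );     /* count = # of available resources */
        for (long i = 0; i < 8; i++)
            pthread_create( &t[i], NULL, worker, (void *) i );
        for (int i = 0; i < 8; i++)
            pthread_join( t[i], NULL );
        sem_destroy( &pool );
        return 0;
    }

With an initial count of 1 the same code degenerates to a mutual-exclusion lock, and with an initial count of 0 it becomes a completion event that workers wait on.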
Implementing Semaphores

A semaphore built from a pthread mutex and a condition variable:

    typedef struct {
        pthread_mutex_t lock;                /* protects value */
        pthread_cond_t  cond;                /* where waiters sleep */
        int             value;               /* the semaphore counter */
    } sem_t;

    void sem_wait( sem_t *s ) {
        pthread_mutex_lock( &s->lock );
        while (s->value <= 0)
            pthread_cond_wait( &s->cond, &s->lock );
        s->value--;
        pthread_mutex_unlock( &s->lock );
    }

    void sem_post( sem_t *s ) {
        pthread_mutex_lock( &s->lock );
        s->value++;
        pthread_cond_signal( &s->cond );
        pthread_mutex_unlock( &s->lock );
    }

Implementing Semaphores in OS (locking to solve sleep/wakeup race)

• requires a spin-lock to work on SMPs
  – sleep/wakeup may be called on two processors
  – the critical section is short and cannot block
  – we must spin, because we cannot sleep ... the lock we need is the one that protects the sleep operation
• also requires interrupt disabling in sleep
  – wakeup is often called from interrupt handlers
  – an interrupt is possible during the sleep/wakeup critical section
  – if the spin-lock is already held, wakeup would block forever
• very few operations require both of these

    void sem_wait( sem_t *s ) {
        int save;
        for (;;) {
            save = intr_enable( ALL_DISABLE );   /* block interrupts */
            while( TestAndSet( &s->lock ) );     /* spin for the semaphore's lock */
            if (s->value > 0) {                  /* a resource is available */
                s->value--;
                s->lock = 0;
                intr_enable( save );
                return;
            }
            add_to_queue( &s->queue, myproc );   /* join the waiting queue */
            myproc->runstate |= PROC_BLOCKED;
            s->lock = 0;
            intr_enable( save );
            yield();                             /* block until we are posted */
        }
    }

    void sem_post( sem_t *s ) {
        struct proc_desc *p = 0;
        int save;
        save = intr_enable( ALL_DISABLE );       /* block interrupts */
        while( TestAndSet( &s->lock ) );         /* spin for the semaphore's lock */
        s->value++;
        if ((p = get_from_queue( &s->queue )))   /* is anyone waiting? */
            p->runstate &= ~PROC_BLOCKED;
        s->lock = 0;
        intr_enable( save );
        if (p)
            reschedule( p );                     /* let the waiter run again */
    }

Limitations of Semaphores

• semaphores are a very spartan mechanism
  – they are simple, and have few features
  – more designed for proofs than for synchronization
• they lack many practical synchronization features
  – it is easy to deadlock with semaphores (sketched below)
  – one cannot check the lock without blocking
  – they do not support reader/writer shared access
  – no way to recover from a wedged V'er
  – no way to deal with priority inheritance
• nonetheless, most operating systems support them

Using Condition Variables

    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t cond = PTHREAD_COND_INITIALIZER;
    ...
    /* waiting for the condition */
    pthread_mutex_lock( &lock );
    while (ready == 0)
        pthread_cond_wait( &cond, &lock );
    pthread_mutex_unlock( &lock );
    ...
    /* announcing that the condition now holds */
    pthread_mutex_lock( &lock );
    ready = 1;
    pthread_cond_signal( &cond );
    pthread_mutex_unlock( &lock );
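As an illustration of how easily semaphores deadlock, here is a sketch (not from the slides) of the classic lock-ordering mistake: two threads take the same two binary semaphores in opposite orders, and if each gets its first semaphore before the other requests its second, both block forever.

    /* lock-ordering deadlock with two binary semaphores (sketch) */
    #include <pthread.h>
    #include <semaphore.h>
    #include <stddef.h>

    sem_t a, b;                      /* both initialized to 1 in main() */

    void *thread1( void *arg ) {
        sem_wait( &a );              /* holds a ... */
        sem_wait( &b );              /* ... and waits for b */
        /* critical section */
        sem_post( &b );
        sem_post( &a );
        return NULL;
    }

    void *thread2( void *arg ) {
        sem_wait( &b );              /* holds b ... */
        sem_wait( &a );              /* ... waits for a: deadlock if thread1 holds a */
        /* critical section */
        sem_post( &a );
        sem_post( &b );
        return NULL;
    }

    int main( void ) {
        pthread_t t1, t2;
        sem_init( &a, 0, 1 );
        sem_init( &b, 0, 1 );
        pthread_create( &t1, NULL, thread1, NULL );
        pthread_create( &t2, NULL, thread2, NULL );
        pthread_join( t1, NULL );    /* may never return */
        pthread_join( t2, NULL );
        return 0;
    }

Always acquiring the two semaphores in the same order removes the cycle.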
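The bounded-buffer slides below share a FIFO and its synchronization state but do not show the declarations. The following sketch reconstructs what they assume; the FIFO layout and helper bodies are assumptions, and the initial semaphore values are the important part (empty = MAX free slots, full = 0 filled slots, mutex = 1).

    #define MAX 1024                         /* assumed buffer capacity */

    typedef struct {
        char buf[MAX];
        int  read_ptr, write_ptr;
        int  count;                          /* # of characters currently queued */
    } FIFO;

    void put( FIFO *f, char c ) {            /* append one character */
        f->buf[f->write_ptr] = c;
        f->write_ptr = (f->write_ptr + 1) % MAX;
        f->count++;
    }

    char get( FIFO *f ) {                    /* remove one character */
        char c = f->buf[f->read_ptr];
        f->read_ptr = (f->read_ptr + 1) % MAX;
        f->count--;
        return c;
    }

    /* condition-variable version: */
    pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
    pthread_cond_t  empty = PTHREAD_COND_INITIALIZER;   /* signaled when a slot frees up */
    pthread_cond_t  fill  = PTHREAD_COND_INITIALIZER;   /* signaled when a slot fills */

    /* semaphore version (the slides reuse the names with different types):
     *     sem_t empty, full, mutex;
     *     sem_init( &empty, 0, MAX );       -- # of free slots
     *     sem_init( &full,  0, 0 );         -- # of filled slots
     *     sem_init( &mutex, 0, 1 );         -- buffer lock
     */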
Bounded Buffer Problem w/CVs

    void producer( FIFO *fifo, char *msg, int len ) {
        for( int i = 0; i < len; i++ ) {
            pthread_mutex_lock( &mutex );
            while (fifo->count == MAX)                 /* wait for a free slot */
                pthread_cond_wait( &empty, &mutex );
            put( fifo, msg[i] );
            pthread_cond_signal( &fill );              /* note a newly filled slot */
            pthread_mutex_unlock( &mutex );
        }
    }

    void consumer( FIFO *fifo, char *msg, int len ) {
        for( int i = 0; i < len; i++ ) {
            pthread_mutex_lock( &mutex );
            while (fifo->count == 0)                   /* wait for a filled slot */
                pthread_cond_wait( &fill, &mutex );
            msg[i] = get( fifo );
            pthread_cond_signal( &empty );             /* note a newly freed slot */
            pthread_mutex_unlock( &mutex );
        }
    }

Producer/Consumer w/Semaphores

    void producer( FIFO *fifo, char *msg, int len ) {
        for( int i = 0; i < len; i++ ) {
            sem_wait( &empty );                        /* claim a free slot */
            sem_wait( &mutex );
            put( fifo, msg[i] );
            sem_post( &mutex );
            sem_post( &full );                         /* announce a filled slot */
        }
    }

    void consumer( FIFO *fifo, char *msg, int len ) {
        for( int i = 0; i < len; i++ ) {
            sem_wait( &full );                         /* claim a filled slot */
            sem_wait( &mutex );
            msg[i] = get( fifo );
            sem_post( &mutex );
            sem_post( &empty );                        /* announce a freed slot */
        }
    }

Object Level Locking

• mutexes protect code critical sections
  – brief durations (e.g. nanoseconds to milliseconds)
  – other threads operating on the same data
  – all operating in a single address space
• persistent objects are more difficult
  – critical sections are likely to last much longer
  – many different programs can operate on them
  – they may not even be running on a single computer
• solution: lock objects (rather than code)

Whole File Locking

    int flock( fd, operation );

• supported operations:
  – LOCK_SH ... shared lock (multiple holders allowed)
  – LOCK_EX ... exclusive lock (one holder at a time)
  – LOCK_UN ... release a lock
• the lock is associated with an open file descriptor
  – the lock is released when that file descriptor is closed
• locking is purely advisory
  – it does not prevent reads, writes, or unlinks
• a usage sketch appears at the end of this section

Advisory vs Enforced Locking

• enforced locking
  – done within the implementation of the object's methods
  – guaranteed to happen, whether or not the user wants it
  – may sometimes be too conservative
• advisory locking
  – a convention that "good guys" are expected to follow
  – users are expected to lock the object before calling its methods
  – gives users flexibility in what to lock, and when
  – gives users more freedom to do it wrong (or not at all)
  – mutexes are advisory locks

Ranged File Locking

    int lockf( fd, cmd, len );

• supported cmds:
  – F_LOCK ... get (or wait for) an exclusive lock
  – F_ULOCK ... release a lock
  – F_TEST/F_TLOCK ... test, or make a non-blocking lock request
• the locked region starts at the current file offset and extends len bytes, so offset and length together specify the portion of the file to be locked
• the lock is associated with a file descriptor
  – the lock is released when the file descriptor is closed
• locking may or may not be enforced
  – depending on the underlying file system
• a usage sketch appears at the end of this section

Cost of not getting a Lock

• we protect critical sections to ensure correctness
• many critical sections are very brief
  – in and out in a matter of nanoseconds
• blocking is much more (e.g. 1000x) expensive
  – microseconds to yield and context-switch
  – milliseconds if we are swapped out or a queue forms

Probability of Conflict

• performance depends on the probability of conflict:

    Cexpected = (Cget * (1 - Pconflict)) + (Cblock * Pconflict)

Convoy Formation

• in general:

    Pconflict = 1 - (1 - (Tcritical / Ttotal)) ^ (threads - 1)
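A minimal sketch of whole-file locking with flock, as promised above; the file name is an illustrative assumption:

    /* take an exclusive advisory lock on a whole file (sketch) */
    #include <sys/file.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    int main( void ) {
        int fd = open( "accounts.db", O_RDWR );  /* illustrative file name */
        if (fd < 0) { perror( "open" ); return 1; }

        if (flock( fd, LOCK_EX ) == 0) {         /* blocks until we own the lock */
            /* ... update the file ... */
            flock( fd, LOCK_UN );                /* or simply close(fd) */
        }
        close( fd );
        return 0;
    }

Passing LOCK_EX | LOCK_NB instead makes the request non-blocking; it fails with EWOULDBLOCK if another descriptor holds the lock.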
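And a corresponding sketch for ranged locking with lockf; because the region starts at the current file offset, lseek selects the range (the 100-byte record is an illustrative assumption):

    /* lock a 100-byte record of a file with lockf (sketch) */
    #include <unistd.h>
    #include <fcntl.h>
    #include <stdio.h>

    int main( void ) {
        int fd = open( "accounts.db", O_RDWR );  /* illustrative file name */
        if (fd < 0) { perror( "open" ); return 1; }

        lseek( fd, 100, SEEK_SET );              /* region starts here */
        if (lockf( fd, F_LOCK, 100 ) == 0) {     /* lock the next 100 bytes */
            /* ... update that record ... */
            lseek( fd, 100, SEEK_SET );
            lockf( fd, F_ULOCK, 100 );           /* release the same range */
        }
        close( fd );
        return 0;
    }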
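To make the two formulas concrete, a worked example with assumed numbers: suppose an uncontended acquisition costs Cget = 50 ns, blocking costs Cblock = 50 µs (50,000 ns), and each of 4 threads spends 1% of its time in the critical section (Tcritical/Ttotal = 0.01). Then:

    Pconflict = 1 - (1 - 0.01)^(4-1) = 1 - 0.99^3 ≈ 0.03

    Cexpected ≈ (50 ns * 0.97) + (50,000 ns * 0.03) ≈ 1,550 ns

Even a 3% chance of conflict raises the expected cost to roughly 30 times the uncontended cost, which is why contention and lock granularity dominate locking performance.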