
FLSCHED: A Lockless and Lightweight Approach to OS Scheduler for Xeon Phi

Heeseung Jo, Chonbuk National University, 567 Baekje-daero, Jeonju, Jeollabuk 54896, [email protected]
Woonhak Kang, Georgia Institute of Technology, 266 Ferst Dr, Atlanta, GA 30313, [email protected]
Changwoo Min, Virginia Tech, 302 Whittemore, Blacksburg, VA 24060, [email protected]
Taesoo Kim, Georgia Institute of Technology, 266 Ferst Dr, Atlanta, GA 30313, [email protected]

ABSTRACT

Processor manufacturers have increased the number of cores in a chip, and the latest manycore processor has up to 76 physical cores and 304 hardware threads. The evolution of OS schedulers that manage processes, however, has been slow to catch up with emerging manycore processors.

In this paper, we show how much CFS, the default Linux scheduler, can degrade the performance of parallel applications on manycore processors (e.g., Intel Xeon Phi). We then propose a novel scheduler named FLSCHED, which is designed for a lockless implementation with fewer context switches and more efficient scheduling decisions. In our evaluations on Xeon Phi, FLSCHED outperforms CFS by up to 1.73× for HPC applications and 3.12× for micro-benchmarks.

ACM Reference format: Heeseung Jo, Woonhak Kang, Changwoo Min, and Taesoo Kim. 2017. FLSCHED: A Lockless and Lightweight Approach to OS Scheduler for Xeon Phi. In Proceedings of APSys '17, Mumbai, India, September 2, 2017, 8 pages. https://doi.org/10.1145/3124680.3124724

1 INTRODUCTION

Manycore processors are now prevalent in all types of computing devices, including mobile devices, servers, and hardware accelerators. For example, a single Xeon processor has up to 24 physical cores or 48 hardware threads [14], and a Xeon Phi processor has up to 76 physical cores or 304 hardware threads [22, 24]. In addition, due to increasingly important machine learning workloads, which are compute-intensive and massively parallel, we expect that the core count per system will increase further.

The prevalence of manycore processors imposes new challenges in scheduler design. First, schedulers should be able to handle an unprecedentedly high degree of parallelism. When the CFS scheduler was introduced, quad-core servers were dominant in data centers. Now, 32-core servers are standard in data centers [18], and servers with more than 100 cores are becoming popular [2]. Under such a high degree of parallelism, even a small sequential part in a system can break the performance and scalability of an application: Amdahl's Law says that if the sequential part of an entire system increases from just 1% to 2%, the maximum speedup drops from 50 times to 33 times.
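These figures follow directly from Amdahl's law once a core count is fixed; a worked instance, assuming N = 100 cores (our assumption, chosen because the 50× and 33× figures above imply it):

\[
  S(s,N) = \frac{1}{s + (1-s)/N}, \qquad
  S(0.01,100) = \frac{1}{0.0199} \approx 50.3, \qquad
  S(0.02,100) = \frac{1}{0.0298} \approx 33.6
\]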
In particular, schedulers in the Linux kernel use various lock primitives, such as spinlocks, mutexes, and read-write semaphores, to protect their data structures (see Table 1). We found that these sequential parts in schedulers, protected by locks, significantly degrade the performance of massively parallel applications. The degradation becomes especially significant in communication-intensive applications, which need frequent scheduler intervention (see Figure 1 and Figure 2).

Table 1: Lock primitives used in the Linux scheduler core (CORE), CFS, FIFO/RR, and FLSCHED (FL).

Lock type                   CORE  CFS  FIFO/RR  FL
raw_spin_lock                 16    1       12   -
raw_spin_lock_irq/irqsave     13    5        2   -
rcu_read_lock                 14    5        1   -
spin_lock                      -    -        -   -
spin_lock_irq/irqsave         12    -        -   -
read_lock                      3    -        -   -

Second, the cost of context switching keeps increasing as the amount of context that needs to be saved and restored grows. In Intel architectures, the width of the SIMD register file has increased from 128-bit XMM through 256-bit YMM to 512-bit ZMM registers [19]. This problem is exaggerated when limited memory bandwidth is shared among many CPU cores with small caches, as on Xeon Phi processors. Recent schedulers adopt lazy optimization techniques, which skip saving unchanged register files, to reduce the context switching overhead [8, 27]. For compute-intensive applications that rely heavily on SIMD operations for performance, however, there is no way around paying this high cost.

In this paper, we present FLSCHED, a new process scheduler that addresses the aforementioned problems. FLSCHED is designed for manycore accelerators like Xeon Phi. We adopt a lockless design to keep FLSCHED from becoming a sequential bottleneck, which is particularly critical for manycore accelerators with their large number of CPU cores; the Xeon Phi processor we used for the experiments in this paper has 57 cores or 228 hardware threads. FLSCHED is also designed to minimize the number of context switches. A Xeon Phi processor holds twice as much vector register state as a Xeon processor (32 versus 16 512-bit registers), and its per-core memory bandwidth and cache size are smaller, so its context switching overhead is higher than a Xeon processor's [9]. Thus, it is critical to minimize the number of context switches as much as possible. Finally, FLSCHED is tailored to throughput-oriented workloads, which are dominant on manycore accelerators.
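FLSCHED's actual data structures are described in §3; as a flavor of what "lockless" means here, the following is a minimal sketch of a Treiber-style lock-free push/pop on a ready list using C11 atomics. The names and structure are ours for illustration, not FLSCHED's code, and a production queue would also need to handle the ABA problem (e.g., with tagged pointers or hazard pointers).

/* Illustrative lock-free ready-list push/pop with C11 atomics.
 * Contending CPUs retry a compare-and-swap instead of serializing
 * on a runqueue spinlock. */
#include <stdatomic.h>
#include <stddef.h>

struct task {
    struct task *next;
    /* ... register state, accounting, etc. ... */
};

static _Atomic(struct task *) ready_head = NULL;

/* Enqueue without taking any lock: retry the CAS until we win. */
static void push_task(struct task *t)
{
    struct task *old = atomic_load_explicit(&ready_head, memory_order_relaxed);
    do {
        t->next = old;   /* 'old' is refreshed by a failed CAS below */
    } while (!atomic_compare_exchange_weak_explicit(
                 &ready_head, &old, t,
                 memory_order_release, memory_order_relaxed));
}

/* Dequeue: also a CAS loop; returns NULL if the list is empty. */
static struct task *pop_task(void)
{
    struct task *old = atomic_load_explicit(&ready_head, memory_order_acquire);
    while (old &&
           !atomic_compare_exchange_weak_explicit(
               &ready_head, &old, old->next,
               memory_order_acquire, memory_order_acquire))
        ;                /* a failed CAS reloads 'old'; retry */
    return old;
}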
This paper makes the following three contributions:

∙ We evaluate how the widely-used Linux schedulers (i.e., CFS, FIFO, and RR) perform on a manycore accelerator. We analyze their behavior especially in terms of spinlock contention, which increases the sequential portion of a scheduler, and the number of context switches.
∙ We design a new process scheduler, named FLSCHED, which is tailored to minimize the number of context switches in a lockless fashion.
∙ We show the effectiveness of FLSCHED for real-world OpenMP applications and micro-benchmarks. In particular, FLSCHED outperforms all other Linux schedulers for the NAS Parallel Benchmark (NPB) by up to 73%.

The rest of this paper is organized as follows: §2 provides the motivation of our work with a case study, and §3 describes FLSCHED's design in detail. §4 evaluates and analyzes FLSCHED's performance. §5 compares FLSCHED with previous research, and §6 concludes the paper.

2 CASE STUDY ON XEON PHI

In this section, we analyze how existing schedulers in the Linux kernel perform on manycore processors, especially a Xeon Phi processor. Much research effort has gone into making OSes scalable enough to fully utilize manycore processors [3–7, 12, 23]. Our focus in this paper is to investigate whether the existing schedulers in the Linux kernel are efficient and scalable enough to manage manycore processors. To this end, we evaluated the performance of three widely-used schedulers, CFS, FIFO, and RR, with high performance computing (HPC) applications and a communication-intensive micro-benchmark.

We first measured the performance of the NAS Parallel Benchmark (NPB) [1], which is written in OpenMP, running on the Xeon Phi. In particular, we ran the eight NPB benchmarks that fit in the Xeon Phi memory and measured operations per second (OPS). As Figure 1 shows, there is no clear winner among CFS, FIFO, and RR: for five benchmarks, FIFO and RR are better than CFS; for the other three, CFS is better than FIFO and RR. In contrast, FLSCHED shows better performance for all benchmarks except is. In particular, four benchmarks (i.e., cg, mg, sp, and ua) show significantly better performance, by up to 1.73 times.

[Figure 1: Performance comparison of NPB benchmarks running 1,600 threads on a Xeon Phi with different process schedulers. Performance (OPS: operations per second) is normalized to that of CFS.]

To analyze scheduler behavior, we ran the perf profiling tool while running the benchmarks and found that spinlock contention in the schedulers can become a major scalability bottleneck. As Table 2 shows, for the four benchmarks on which FLSCHED performs significantly better, CFS, FIFO, and RR spend a significantly larger fraction of time on spinlock contention in the scheduler (around 8–15%) than FLSCHED does (around 3–5%). This shows that the increased sequential portion caused by lock contention in schedulers can significantly deteriorate the performance and scalability of applications.

To see how the number of context switches affects performance, we ran hackbench [11], a scheduler benchmark. We configured hackbench to use threads for parallelism and pipes for communication among threads. Figure 2 shows the execution time and the number of context switches on the Xeon Phi.

[Figure 2: Execution time and number of context switches of hackbench on a Xeon Phi. We used the thread test mode with an increasing number of groups. Each group has 20 senders and 20 receivers communicating via pipe.]
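To give a concrete sense of why pipe communication stresses the scheduler, the following self-contained sketch (our illustration, not hackbench's code) reproduces the hackbench pattern in miniature: two threads ping-pong one byte over a pair of pipes, so every message blocks the sender and wakes the receiver, forcing a context switch on each side per round.

/* Miniature hackbench-style ping-pong over pipes (illustrative). */
#include <pthread.h>
#include <stdlib.h>
#include <unistd.h>

#define ROUNDS 100000

static int ping[2], pong[2];   /* two pipes: ping is A->B, pong is B->A */

static void *echo_thread(void *arg)
{
    char b;
    for (int i = 0; i < ROUNDS; i++) {
        if (read(ping[0], &b, 1) != 1) exit(1);   /* block until sender writes */
        if (write(pong[1], &b, 1) != 1) exit(1);  /* wake the sender back up   */
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    char b = 'x';

    if (pipe(ping) || pipe(pong)) return 1;
    pthread_create(&tid, NULL, echo_thread, NULL);

    for (int i = 0; i < ROUNDS; i++) {
        if (write(ping[1], &b, 1) != 1) return 1; /* receiver becomes runnable */
        if (read(pong[0], &b, 1) != 1) return 1;  /* we block and switch away  */
    }
    pthread_join(tid, NULL);

    /* The kernel's per-task counters show roughly one voluntary
     * context switch per message on each side. */
    return system("grep ctxt_switches /proc/self/status");
}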