Systems Performance: Enterprise and the Cloud
Systems Performance: Enterprise and the Cloud, Second Edition
Brendan Gregg

Boston • Columbus • New York • San Francisco • Amsterdam • Cape Town • Dubai • London • Madrid • Milan • Munich • Paris • Montreal • Toronto • Delhi • Mexico City • São Paulo • Sydney • Hong Kong • Seoul • Singapore • Taipei • Tokyo

Publisher: Mark L. Taub
Executive Editor: Greg Doench
Managing Producer: Sandra Schroeder
Sr. Content Producer: Julie B. Nahil
Project Manager: Rachel Paul
Copy Editor: Kim Wimpsett
Indexer: Ted Laux
Proofreader: Rachel Paul
Compositor: The CIP Group

Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and the publisher was aware of a trademark claim, the designations have been printed with initial capital letters or in all capitals.

The author and publisher have taken care in the preparation of this book, but make no expressed or implied warranty of any kind and assume no responsibility for errors or omissions. No liability is assumed for incidental or consequential damages in connection with or arising out of the use of the information or programs contained herein.

For information about buying this title in bulk quantities, or for special sales opportunities (which may include electronic versions; custom cover designs; and content particular to your business, training goals, marketing focus, or branding interests), please contact our corporate sales department at [email protected] or (800) 382-3419.

For government sales inquiries, please contact [email protected].

For questions about sales outside the U.S., please contact [email protected].

Visit us on the Web: informit.com/aw

Library of Congress Control Number: 2020944455

Copyright © 2021 Pearson Education, Inc. All rights reserved. This publication is protected by copyright, and permission must be obtained from the publisher prior to any prohibited reproduction, storage in a retrieval system, or transmission in any form or by any means, electronic, mechanical, photocopying, recording, or likewise. For information regarding permissions, request forms and the appropriate contacts within the Pearson Education Global Rights & Permissions Department, please visit www.pearson.com/permissions.

Cover images by Brendan Gregg
Page 9, Figure 1.5: Screenshot of System metrics GUI (Grafana) © 2020 Grafana Labs
Page 84, Figure 2.32: Screenshot of Firefox timeline chart © Netflix
Page 164, Figure 4.7: Screenshot of sar(1) sadf(1) SVG output © 2010 W3C
Page 560, Figure 10.12: Wireshark screenshot © Wireshark
Page 740, Figure 14.3: Screenshot of KernelShark © KernelShark

ISBN-13: 978-0-13-682015-4
ISBN-10: 0-13-682015-8

For Deirdré Straughan, an amazing person in technology, and an amazing person—we did it.
Contents at a Glance

Contents
Preface
Acknowledgments
About the Author
1 Introduction
2 Methodologies
3 Operating Systems
4 Observability Tools
5 Applications
6 CPUs
7 Memory
8 File Systems
9 Disks
10 Network
11 Cloud Computing
12 Benchmarking
13 perf
14 Ftrace
15 BPF
16 Case Study
A USE Method: Linux
B sar Summary
C bpftrace One-Liners
D Solutions to Selected Exercises
E Systems Performance Who’s Who
Glossary
Index

Contents

Preface
Acknowledgments
About the Author

1 Introduction
    1.1 Systems Performance
    1.2 Roles
    1.3 Activities
    1.4 Perspectives
    1.5 Performance Is Challenging
        1.5.1 Subjectivity
        1.5.2 Complexity
        1.5.3 Multiple Causes
        1.5.4 Multiple Performance Issues
    1.6 Latency
    1.7 Observability
        1.7.1 Counters, Statistics, and Metrics
        1.7.2 Profiling
        1.7.3 Tracing
    1.8 Experimentation
    1.9 Cloud Computing
    1.10 Methodologies
        1.10.1 Linux Perf Analysis in 60 Seconds
    1.11 Case Studies
        1.11.1 Slow Disks
        1.11.2 Software Change
        1.11.3 More Reading
    1.12 References

2 Methodologies
    2.1 Terminology
    2.2 Models
        2.2.1 System Under Test
        2.2.2 Queueing System
    2.3 Concepts
        2.3.1 Latency
        2.3.2 Time Scales
        2.3.3 Trade-Offs
        2.3.4 Tuning Efforts
        2.3.5 Level of Appropriateness
        2.3.6 When to Stop Analysis
        2.3.7 Point-in-Time Recommendations
        2.3.8 Load vs. Architecture
        2.3.9 Scalability
        2.3.10 Metrics
        2.3.11 Utilization
        2.3.12 Saturation
        2.3.13 Profiling
        2.3.14 Caching
        2.3.15 Known-Unknowns
    2.4 Perspectives
        2.4.1 Resource Analysis
        2.4.2 Workload Analysis
    2.5 Methodology
        2.5.1 Streetlight Anti-Method
        2.5.2 Random Change Anti-Method
        2.5.3 Blame-Someone-Else Anti-Method
        2.5.4 Ad Hoc Checklist Method
        2.5.5 Problem Statement
        2.5.6 Scientific Method
        2.5.7 Diagnosis Cycle
        2.5.8 Tools Method
        2.5.9 The USE Method
        2.5.10 The RED Method
        2.5.11 Workload Characterization
        2.5.12 Drill-Down Analysis
        2.5.13 Latency Analysis
        2.5.14 Method R
        2.5.15 Event Tracing
        2.5.16 Baseline Statistics
        2.5.17 Static Performance Tuning
        2.5.18 Cache Tuning
        2.5.19 Micro-Benchmarking
        2.5.20 Performance Mantras
    2.6 Modeling
        2.6.1 Enterprise vs. Cloud
        2.6.2 Visual Identification
        2.6.3 Amdahl’s Law of Scalability
        2.6.4 Universal Scalability Law
        2.6.5 Queueing Theory
    2.7 Capacity Planning
        2.7.1 Resource Limits
        2.7.2 Factor Analysis
        2.7.3 Scaling Solutions
    2.8 Statistics
        2.8.1 Quantifying Performance Gains
        2.8.2 Averages
        2.8.3 Standard Deviation, Percentiles, Median
        2.8.4 Coefficient of Variation
        2.8.5 Multimodal Distributions
        2.8.6 Outliers
    2.9 Monitoring
        2.9.1 Time-Based Patterns
        2.9.2 Monitoring Products
        2.9.3 Summary-Since-Boot
    2.10 Visualizations
        2.10.1 Line Chart
        2.10.2 Scatter Plots
        2.10.3 Heat Maps
        2.10.4 Timeline Charts
        2.10.5 Surface Plot
        2.10.6 Visualization Tools
    2.11 Exercises
    2.12 References

3 Operating Systems
    3.1 Terminology
    3.2 Background
        3.2.1 Kernel
        3.2.2 Kernel and User Modes
        3.2.3 System Calls
        3.2.4 Interrupts
        3.2.5 Clock and Idle
        3.2.6 Processes
        3.2.7 Stacks
        3.2.8 Virtual Memory
        3.2.9 Schedulers
        3.2.10 File Systems
        3.2.11 Caching
        3.2.12 Networking
        3.2.13 Device Drivers
        3.2.14 Multiprocessor
        3.2.15 Preemption
        3.2.16 Resource Management
        3.2.17 Observability
    3.3 Kernels
        3.3.1 Unix
        3.3.2 BSD
        3.3.3 Solaris
    3.4 Linux
        3.4.1 Linux Kernel Developments
        3.4.2 systemd
        3.4.3 KPTI (Meltdown)
        3.4.4 Extended BPF
    3.5 Other Topics
        3.5.1 PGO Kernels
        3.5.2 Unikernels
        3.5.3 Microkernels and Hybrid Kernels
        3.5.4 Distributed Operating Systems
    3.6 Kernel Comparisons
    3.7 Exercises
    3.8 References
        3.8.1 Additional Reading

4 Observability Tools
    4.1 Tool Coverage
        4.1.1 Static Performance Tools
        4.1.2 Crisis Tools
    4.2 Tool Types
        4.2.1 Fixed Counters
        4.2.2 Profiling
        4.2.3 Tracing
        4.2.4 Monitoring
    4.3 Observability Sources
        4.3.1 /proc
        4.3.2 /sys
        4.3.3 Delay Accounting
        4.3.4 netlink
        4.3.5 Tracepoints
        4.3.6 kprobes
        4.3.7 uprobes
        4.3.8 USDT
        4.3.9 Hardware Counters (PMCs)
        4.3.10 Other Observability Sources
    4.4 sar
        4.4.1 sar(1) Coverage
        4.4.2 sar(1) Monitoring
        4.4.3 sar(1) Live
        4.4.4 sar(1) Documentation
    4.5 Tracing Tools
    4.6 Observing Observability
    4.7 Exercises
    4.8 References

5 Applications
    5.1 Application Basics
        5.1.1 Objectives
        5.1.2 Optimize the Common Case
        5.1.3 Observability
        5.1.4 Big O Notation
    5.2 Application Performance Techniques
        5.2.1 Selecting an I/O Size
        5.2.2 Caching
        5.2.3 Buffering
        5.2.4 Polling
        5.2.5 Concurrency and Parallelism
        5.2.6 Non-Blocking I/O
        5.2.7 Processor Binding
        5.2.8 Performance Mantras
    5.3 Programming Languages
        5.3.1 Compiled Languages
        5.3.2 Interpreted Languages
        5.3.3 Virtual Machines
        5.3.4 Garbage Collection
    5.4 Methodology
        5.4.1 CPU Profiling
        5.4.2 Off-CPU Analysis
        5.4.3 Syscall Analysis
        5.4.4 USE Method
        5.4.5 Thread State Analysis
        5.4.6 Lock Analysis
        5.4.7 Static Performance Tuning
        5.4.8 Distributed Tracing
    5.5 Observability Tools
        5.5.1 perf
        5.5.2 profile
        5.5.3 offcputime
        5.5.4 strace
        5.5.5 execsnoop
        5.5.6 syscount
        5.5.7 bpftrace
    5.6 Gotchas
        5.6.1 Missing Symbols
        5.6.2 Missing Stacks
    5.7 Exercises
    5.8 References

6 CPUs
    6.1 Terminology
    6.2 Models
        6.2.1 CPU Architecture
        6.2.2 CPU Memory Caches
        6.2.3 CPU Run Queues
    6.3 Concepts
        6.3.1 Clock Rate
        6.3.2 Instructions
        6.3.3 Instruction Pipeline
        6.3.4 Instruction Width
        6.3.5 Instruction Size
        6.3.6 SMT
        6.3.7 IPC, CPI
        6.3.8 Utilization
        6.3.9 User Time/Kernel Time
        6.3.10 Saturation
        6.3.11 Preemption
        6.3.12 Priority Inversion
        6.3.13 Multiprocess, Multithreading
        6.3.14 Word Size
        6.3.15 Compiler Optimization
    6.4 Architecture
        6.4.1 Hardware
        6.4.2 Software
    6.5 Methodology
        6.5.1 Tools Method
        6.5.2 USE Method
        6.5.3 Workload Characterization
        6.5.4 Profiling
        6.5.5 Cycle Analysis
        6.5.6 Performance Monitoring
        6.5.7 Static Performance Tuning
        6.5.8 Priority Tuning
        6.5.9 Resource Controls