Autonomic Performance-Aware Resource Management in Dynamic IT Service Infrastructures
Autonomic Performance-Aware Resource Management in Dynamic IT Service Infrastructures

Zur Erlangung des akademischen Grades eines Doktors der Ingenieurwissenschaften / Doktors der Naturwissenschaften der Fakultät für Informatik des Karlsruher Instituts für Technologie (KIT) genehmigte Dissertation von

Nikolaus Matthias Huber
aus Zell am Harmersbach

Tag der mündlichen Prüfung: 16.07.2014
Erster Gutachter: Prof. Dr. Samuel Kounev
Zweiter Gutachter: Prof. Dr. Ralf Reussner

KIT – Universität des Landes Baden-Württemberg und nationales Forschungszentrum in der Helmholtz-Gemeinschaft
www.kit.edu

Ich versichere wahrheitsgemäß, die Dissertation bis auf die dort angegebenen Hilfen selbständig angefertigt, alle benutzten Hilfsmittel vollständig und genau angegeben und alles kenntlich gemacht zu haben, was aus Arbeiten anderer und eigenen Veröffentlichungen unverändert oder mit Änderungen entnommen wurde.

Karlsruhe, 30. September 2014
Nikolaus Huber

Contents

Abstract  xiii
Zusammenfassung  xv

1. Introduction  1
   1.1. Motivation  1
   1.2. Problem Statement  2
   1.3. Shortcomings of Existing Approaches  4
   1.4. Contributions of this Thesis  6
   1.5. Outline  8

I. Foundations and Related Work  11

2. Autonomic System Adaptation and Performance Modeling  13
   2.1. Autonomic and Self-Adaptive Systems  14
        2.1.1. Self-* Properties  14
        2.1.2. Design Principles of Autonomic Systems  16
        2.1.3. Model-Driven Autonomic and Self-Adaptive Systems  18
        2.1.4. The Descartes Research Project and Self-Aware Computing Systems  20
   2.2. Software Performance Engineering  21
        2.2.1. Performance Models: Classification and Terminology  21
        2.2.2. Model-Driven Software Performance Engineering  22
        2.2.3. The Palladio Component Model (PCM)  22
        2.2.4. Online Performance Prediction  24

3. Related Work  29
   3.1. Architecture-Based Self-Adaptive Software Systems  30
        3.1.1. Engineering and Evaluation of Self-Adaptive Software Systems  30
        3.1.2. Architecture-Based Self-Adaptation Approaches  32
   3.2. Model-Based Performance and Resource Management  36
        3.2.1. Predictive Performance Models  36
        3.2.2. Architecture-Level Performance Models  37
   3.3. Adaptation Process Modeling Languages  37
   3.4. Summary  38

II. Proactive Model-Based Performance and Resource Management  41

4. Proactive Model-Based System Adaptation  43
   4.1. Model-Based Adaptation Control Loop  44
        4.1.1. Monitor  44
        4.1.2. Analyze  45
        4.1.3. Plan  46
        4.1.4. Execute  47
        4.1.5. Knowledge  47
   4.2. Overview of the Descartes Modeling Language  47
        4.2.1. Technical Viewpoint  49
        4.2.2. Logical Viewpoint  51
   4.3. Summary  52

5. Modeling System Resource Landscapes and Their Performance Influences  53
   5.1. Resource Landscape Meta-Model  53
        5.1.1. Containers and Containment Relationships  54
        5.1.2. Classes of Runtime Environments  55
        5.1.3. Resource Configuration Specification  56
        5.1.4. Container Types  57
        5.1.5. Example Resource Landscape Model Instance  59
   5.2. Performance-Influencing Factors of Resource Layers  60
        5.2.1. Classification of Performance-Influencing Factors  60
        5.2.2. Automatic Quantification of Performance-Influencing Factors  62
        5.2.3. Derivation of the Performance Model  75
   5.3. Application Architecture, Usage Profile, and Deployment Meta-Models  76
        5.3.1. Application Architecture Meta-Model  76
        5.3.2. Usage Profile Meta-Model  77
        5.3.3. Deployment Meta-Model  78
   5.4. Case Studies  78
        5.4.1. Modeling Data Centers with the Resource Landscape Meta-Model  78
        5.4.2. Quantifying Performance Influences of Virtualization  81
   5.5. Summary  84

6. Modeling System Adaptation Processes  85
   6.1. Adaptation Points Meta-Model  86
   6.2. Adaptation Process Meta-Model  89
        6.2.1. Actions  91
        6.2.2. Tactics  93
        6.2.3. Strategies  95
        6.2.4. QoS Data Repository  96
        6.2.5. Weighting Function  97
   6.3. Adaptation Framework Architecture and Implementation  99
   6.4. Evaluation  103
        6.4.1. Comparing S/T/A, Story Diagrams, and Stitch  103
        6.4.2. Comparing Accuracy and Efficiency Using PerOpteryx  104
        6.4.3. Reusing Adaptation Plans in SLAstic  107
   6.5. Summary  109

7. Self-Adaptive Workload Classification and Forecasting  111
   7.1. Time Series Analysis  111
        7.1.1. Workload Intensity Behavior Characteristics  112
        7.1.2. Survey of Forecasting Methods  114
   7.2. Workload Classification and Forecasting  118
        7.2.1. Forecasting Objectives  119
        7.2.2. Forecasting Methods Overhead Groups  120
        7.2.3. Forecasting Methods Partitions  120
        7.2.4. Evaluating Forecasting Accuracy  121
        7.2.5. Non-Absolutely Positive Workloads  122
        7.2.6. Decision Tree  122
   7.3. WCF Architecture and Implementation  123
   7.4. Evaluation  126
        7.4.1. Experiment Design  127
        7.4.2. Experiment I: Comparing WCF with ETS and Naive Forecasting  127
        7.4.3. Experiment II: Comparing WCF with Overhead Group 2  130
        7.4.4. Experiment III: Comparing WCF with Overhead Group 3  134
        7.4.5. Experiment IV: Comparing WCF with Overhead Group 4  138
   7.5. Summary  142

III. Validation and Conclusion  145

8. Validation  147
   8.1. Validation Goals  148
        8.1.1. Modeling Capabilities  148
        8.1.2. Prediction Capabilities of the Architecture-Level Performance Model  148
        8.1.3. End-to-End Validation of the Model-Based Adaptation Approach  149
   8.2. Model-Based Resource Allocation in Virtualized Environments  151
        8.2.1. SPECjEnterprise2010 Benchmark and Adaptation Process Overview  151
        8.2.2. Applied DML Instance  156
        8.2.3. Evaluation  160
   8.3. Proactive Model-Based System Adaptation  165
        8.3.1. Proactively Reducing SLA Violations with WCF  166
        8.3.2. WCF for Proactive Resource Provisioning at Run-Time  170
        8.3.3. Evaluation  172
   8.4. Model-Based System Adaptation in Heterogeneous Environments  172
        8.4.1. Blue Yonder System Architecture  173
        8.4.2. Applied DML Instance  175
        8.4.3. Evaluation  180
   8.5. Discussion  182

9. Conclusions and Outlook  187
   9.1. Summary  187
   9.2. Outlook  189

Appendix A. Additional Meta-Model Specifications  193
   A.1. Application Architecture Meta-Model  193
        A.1.1. Service Behavior Descriptions  193
        A.1.2. Signature  195
   A.2. Usage Profile Meta-Model  195

List of Figures  199
List of Tables  203
Bibliography  205

Publication List

[Huber et al., 2014] Huber, N., van Hoorn, A., Koziolek, A., Brosig, F., and Kounev, S. (2014). Modeling Run-Time Adaptation at the System Architecture Level in Dynamic Service-Oriented Environments. Service Oriented Computing and Applications Journal (SOCA), 8(1):73–89.

[Huber et al., 2012d] Huber, N., von Quast, M., Brosig, F., Hauck, M., and Kounev, S. (2012d). A Method for Experimental Analysis and Modeling of Virtualization Performance Overhead. In Ivanov, I., van Sinderen, M., and Shishkov, B., editors, Cloud Computing and Services Science, Service Science: Research and Innovations in the Service Economy, pages 353–370. Springer, New York.
[Huber et al., 2012c] Huber, N., van Hoorn, A., Koziolek, A., Brosig, F., and Kounev, S. (2012c). S/T/A: Meta-Modeling Run-Time Adaptation in Component-Based System Architectures. In Proceedings of the 9th IEEE International Conference on e-Business Engineering (ICEBE 2012), pages 70–77, Los Alamitos, CA, USA. IEEE Computer Society. Acceptance Rate (Full Paper): 19.7% (26/132).

[Huber et al., 2012b] Huber, N., Brosig, F., and Kounev, S. (2012b). Modeling Dynamic Virtualized Resource Landscapes. In Proceedings of the 8th ACM SIGSOFT International Conference on the Quality of Software Architectures (QoSA 2012), pages 81–90, New York, NY, USA. ACM. Acceptance Rate (Full Paper): 25.6%.

[Huber et al., 2012a] Huber, N., Brosig, F., Dingle, N., Joshi, K., and Kounev, S. (2012a). Providing Dependability and Performance in the Cloud: Case Studies. In Wolter, K., Avritzer, A., Vieira, M., and van Moorsel, A., editors, Resilience Assessment and Evaluation of Computing Systems, XVIII. Springer-Verlag, Berlin, Heidelberg. ISBN: 978-3-642-29031-2.

[Huber et al., 2011b] Huber, N., von Quast, M., Hauck, M., and Kounev, S. (2011b). Evaluating and Modeling Virtualization Performance Overhead for Cloud Environments. In Proceedings of the International Conference on Cloud Computing and Services Science (CLOSER 2011), pages 563–573. SciTePress. Acceptance Rate: 18/164 = 10.9%, Best Paper Award.

[Huber et al., 2011a] Huber, N., Brosig, F., and Kounev, S. (2011a). Model-based Self-Adaptive Resource Allocation in Virtualized Environments. In Proceedings of the 6th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2011), pages 90–99, New York, NY, USA. ACM. Acceptance Rate (Full Paper): 27% (21/76).

[Huber et al., 2010b] Huber, N., von Quast, M., Brosig, F., and Kounev, S. (2010b). Analysis of the Performance-Influencing Factors of Virtualization Platforms. In Proceedings of the 12th International Symposium on Distributed Objects, Middleware, and Applications (DOA 2010), Crete, Greece. Springer Verlag. Acceptance Rate (Full Paper): 33%.

[Huber et al., 2010a] Huber, N., Becker, S., Rathfelder, C., Schweflinghaus, J., and Reussner, R. (2010a). Performance Modeling in Industry: A Case Study on Storage Virtualization. In Proceedings of the ACM/IEEE 32nd International Conference on Software Engineering (ICSE 2010), Software Engineering in Practice Track, pages 1–10, New York, NY, USA. ACM. Acceptance Rate (Full Paper): 23% (16/71).

[Huber, 2009] Huber, N. (2009). Performance Modeling of Storage Virtualization. Master's thesis, Universität Karlsruhe (TH), Karlsruhe, Germany. GFFT Prize.

[Brosig et al., 2013a] Brosig, F., Gorsler, F., Huber, N., and Kounev, S. (2013a). Evaluating Approaches for Performance Prediction in Virtualized Environments. In Proceedings of the IEEE 21st International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS 2013). (Short Paper).

[Brosig et al., 2011] Brosig, F., Huber, N., and Kounev, S. (2011). Automated Extraction of Architecture-Level Performance Models of Distributed Component-Based Systems.