Enhancing Quality of Service Metrics for High Fan-In Node.js Applications by Optimising the Network Stack
DEGREE PROJECT IN COMPUTER SCIENCE, SECOND LEVEL
LAUSANNE, SWITZERLAND 2015

Enhancing Quality of Service Metrics for High Fan-in Node.js Applications by Optimising the Network Stack
Leveraging IX: The Dataplane Operating System

FREDRIK PETER LILKAER

KTH ROYAL INSTITUTE OF TECHNOLOGY
SCHOOL OF COMPUTER SCIENCE AND COMMUNICATION (CSC)

DD221X, Master's Thesis in Computer Science (30 ECTS credits)
Degree Programme in Computer Science and Engineering, 300 credits
Master Programme in Computer Science, 120 credits
Royal Institute of Technology, 2015
Supervisor at EPFL: Edouard Bugnion
Supervisor at CSC: Carl-Henrik Ek
Examiner: Johan Håstad
Presented: 2015-10-01

Royal Institute of Technology
School of Computer Science and Communication
KTH CSC, SE-100 44 Stockholm, Sweden
URL: www.kth.se/csc

Abstract

This thesis investigates the feasibility of porting Node.js, a JavaScript web application framework and server, to IX, a dataplane operating system developed specifically to meet the needs of high-performance, microsecond-scale computing in a datacentre setting. We show that the port requires extensions to the IX kernel to support polling of Unix Domain Sockets, which we implement. We develop a distributed load generator to benchmark the framework. The results show that running Node.js on IX improves throughput by up to 20.6%, latency by up to 5.23×, and tail latency by up to 5.68× compared to a Linux baseline. We show how server-side request-level reordering affects the latency distribution, predominantly in cases where the server is load-saturated. Finally, due to various limitations of IX [1], we are at this time unable to recommend running Node.js on IX in a production environment, despite improved metrics in all test cases.
However, the limitations are not fundamental, and could be resolved in future work.

[1] Mainly the lack of outgoing TCP connections and of multi-process execution, which respectively prevent Node.js from acting as a frontend in a multi-tiered architecture and from scaling horizontally within a single node.

Referat

Improving Quality of Service for heavily loaded Node.js web applications through a more efficient operating system

This thesis investigates the possibility of using IX, a specialised dataplane operating system intended for high-performance datacentre applications, to run Node.js, a web application framework for JavaScript applications. Porting Node.js to IX requires extending IX with functionality for simultaneous polling of Unix Domain Sockets and network flows, which we show and implement. We further develop a distributed load generator to evaluate the application framework on IX against a baseline consisting of an unmodified Linux distribution. The results show that throughput improves by up to 20.6%, latency by up to 5.23×, and tail latency by up to 5.68×. We then investigate whether the latency variance increases because of server-side request reordering, which appears to be the case under high server load, although other factors seem to have greater influence under low load. Finally, even though all metrics improved at every observed measurement point, widespread adoption of IX for running Node.js applications cannot yet be recommended, mainly because of problems with horizontal scaling and with serving as a frontend server in a classic tiered-datacentre architecture.

Acknowledgments

Writing a thesis can be a long, and at times straining, task. I would therefore like to thank the people who helped me complete it.

First, I would like to thank the Data Center Systems laboratory at École Polytechnique Fédérale de Lausanne (EPFL), which allowed me to work with them for the duration of my thesis.
In particular, I would like to thank my supervisor Edouard Bugnion, who offered invaluable advice every time I was stuck in my work. I would also like to thank Mia Primorac and George Prekas, whom I had the pleasure of working alongside, and who withstood all my questions on IX.

I would like to thank my supervisor at KTH, Carl-Henrik Ek, for offering good academic guidance and writing advice.

Finally, I would like to thank all my friends in Lausanne for support and motivation during the semester. An extra thanks goes out to those of you who helped me proofread.

Contents

Glossary

1 Introduction
  1.1 Problem Statement
  1.2 Contribution

2 Background
  2.1 Operating Systems
  2.2 The IX Dataplane Operating System
    2.2.1 Requirements and Motivations
    2.2.2 What is a Dataplane Operating System?
    2.2.3 Results
  2.3 Web Servers
    2.3.1 Apache, the Traditional Forking Web Server
    2.3.2 Nginx, the Event-Driven Web Server
    2.3.3 Node.js
  2.4 Queueing Theory

3 Software Foundation
  3.1 The IX Dataplane Operating System
    3.1.1 Architectural Overview
    3.1.2 Dune Process Virtualisation
    3.1.3 Execution Model
    3.1.4 IX System Call API
    3.1.5 IX Event Conditions
    3.1.6 libix Userspace API
    3.1.7 Limitations
  3.2 Node.js
    3.2.1 V8 JavaScript Engine
    3.2.2 libuv

4 Design
  4.1 Design Overview
  4.2 Limitations
  4.3 Modifications of IX
    4.3.1 Motivation for IX Kernel Extensions
    4.3.2 Kernel Extension
    4.3.3 libix
  4.4 Modifications of Node.js
    4.4.1 Modifications of libuv
    4.4.2 Modifications of the V8 JavaScript Engine

5 Evaluation
  5.1 Results
    5.1.1 Test Methodology
    5.1.2 Performance Metrics
    5.1.3 A Note on Poisson Distributed Arrival Rates
    5.1.4 Load Scaling
    5.1.5 Connection Scalability
  5.2 Result Tracing
    5.2.1 Throughput Increase
    5.2.2 Reordering & Tail Latency

6 Discussion
  6.1 Related Work
  6.2 Lessons Learned
  6.3 Future Work
  6.4 Conclusion

Bibliography

A Resources
  A.1 libuv-ix
  A.2 Node.js

B dialog: a high-concurrency, rate-controlled, Poisson-distributed load generator
  B.1 Purpose
  B.2 Implementation
  B.3 Evaluation
  B.4 Resources

Glossary

API: Application Programming Interface
ASLR: Address Space Layout Randomisation
FIFO: First-In, First-Out
HTTP: HyperText Transfer Protocol
IPC: Inter-Process Communication
libOS: library Operating System
LIFO: Last-In, First-Out
NIC: Network Interface Controller
OS: Operating System
RPC: Remote Procedure Call
RSS: Receive Side Scaling
SIRO: Service In Random Order
SLA: Service Level Agreement
TCP: Transmission Control Protocol
TLB: Translation Lookaside Buffer
UDP: User Datagram Protocol
UDS: Unix Domain Socket

Chapter 1

Introduction

Almost everyone has heard of Moore's law in one form or another: that computers double in processing power approximately every 18 months [1]. Consequently, given this exponential growth in processing power, we should by now be free of performance problems, since our computers ought to be extremely fast. And they are. The problem is that we constantly ask our computers to solve bigger and harder problems. Around 2004, it ceased to be efficient to scale CPU performance vertically, that is, by increasing the clock frequency. As a result, we now construct software to make use of multi-core processors, and we engineer large, complex, distributed systems to deal with the gigantic datasets we like to call "big data". We find that it is particularly important in such systems to bound the end-to-end latency.
End-to-end latency is a key performance indicator with a direct correlation to user experience and thus, for a commercial system, to both customer conversion and customer retention, in particular for realtime/online systems. In such distributed systems, computation is divided between multiple entities, which may be spread across a plethora of machines within a single datacentre or across several. Therefore, one way to minimise the end-to-end latency, and to control its distribution, is to attempt to bound the latency of every participating component, since latency, and variance in latency, is induced at every step of communication along the execution path.

Furthermore, in current computer-cluster deployments, energy accounts for a significant portion of operational expenses. Consequently, if we can engineer systems that perform the required tasks more efficiently, they can run on fewer hardware resources and thus consume less energy. It therefore remains worthwhile to improve the efficiency of our systems, even when extremely powerful computational resources are at our disposal.

In this work we explore a method to improve the performance of web servers based on the Node.js application framework, which may or may not be used in a distributed setting such as that described in the first paragraph. The performance metrics (Quality of Service metrics) we study are mainly latency and its distribution, as motivated in the second paragraph, and throughput. Throughput is the number of transactions per unit of time, and correlates with the energy-efficiency concerns described in the third paragraph.

[1] The number of transistors on a die doubles approximately every 18 months.
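The two headline metrics can be made concrete with a small sketch. Assuming a load generator records one latency sample per completed request over a fixed measurement window, throughput and tail latency fall out directly (the function names and sample values below are illustrative, not taken from the thesis code):

```python
# Illustrative sketch: computing throughput and tail latency from
# per-request latency samples collected over a measurement window.
# Names and data here are hypothetical, not from the thesis.

def throughput(samples, window_s):
    """Transactions completed per second over the window."""
    return len(samples) / window_s

def percentile(samples, p):
    """Simple rank-based p-th percentile (0 < p <= 100)."""
    ordered = sorted(samples)
    rank = max(1, round(p / 100 * len(ordered)))
    return ordered[rank - 1]

# Ten request latencies (ms) from a 2-second window; two stragglers.
lat_ms = [1.1, 1.3, 1.2, 9.8, 1.4, 1.2, 1.5, 30.2, 1.3, 1.2]

print(throughput(lat_ms, 2.0))   # 5.0 requests/s
print(percentile(lat_ms, 50))    # median: 1.3 ms
print(percentile(lat_ms, 99))    # tail: 30.2 ms, set by the stragglers
```

Even this toy data shows why tail percentiles are reported separately from the mean or median: the median is unaffected by the two slow requests, while the 99th percentile is dominated by them, which is exactly the behaviour tracked here as tail latency.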