
DEGREE PROJECT IN COMPUTER SCIENCE AND ENGINEERING, SECOND CYCLE, 30 CREDITS
STOCKHOLM, SWEDEN 2016

A Study of OpenStack Networking Performance

PHILIP OLSSON

KTH ROYAL INSTITUTE OF TECHNOLOGY
SCHOOL OF COMPUTER SCIENCE AND COMMUNICATION

Master's Thesis at CSC
Supervisor: Dilian Gurov
Examiner: Johan Håstad
Supervisor at Ericsson AB: Max Shatokhin
June 17, 2016

Abstract

Cloud computing is a fast-growing sector among software companies. Cloud platforms provide services such as spreading out storage and computational power over several geographic locations, on-demand resource allocation, and flexible payment options. Virtualization is a technology used in conjunction with cloud technology and offers the possibility to share the physical resources of a host machine by hosting several virtual machines on the same physical machine. Each virtual machine runs its own operating system, which makes the virtual machines hardware independent. The cloud and virtualization layers add additional layers of software to the server environment in order to provide these services. The additional layers introduce added latency, which can be problematic for latency-sensitive applications. The primary goal of this thesis is to investigate how the networking components impact latency in an OpenStack cloud compared to a traditional deployment. The networking components were benchmarked under different load scenarios, and the results indicate that the additional latency added by the networking components is not particularly significant in the network setup used. Instead, a significant performance degradation could be seen in the applications running in the virtual machine, which caused most of the added latency in the cloud environment.

Referat (Swedish abstract, translated)

En studie av OpenStack nätverksprestanda

Cloud services are a fast-growing sector among software companies.
Cloud platforms provide services such as distributing storage and computing power over different geographic areas, on-demand resource allocation, and flexible payment methods. Virtualization is a technique used together with cloud technology that offers the possibility to share the physical resources of a host machine between different virtual machines running on the same physical computer. Each virtual machine runs its own operating system, which makes the virtual machines hardware independent. The cloud and virtualization layers add further software layers to server environments in order to make these techniques possible. The extra software layers add to the response time, which can be a problem for applications that require fast response times. The primary goal of this thesis is to investigate how the extra networking components in the cloud platform OpenStack affect response time. The networking components were evaluated under different load scenarios, and the results indicate that the extra response time caused by the additional networking components is not of great importance in the network installation used. A significant performance degradation was seen in the applications running on the virtual machine, which accounted for the larger part of the increased response time.

Glossary

blade   Server computer optimized to minimize power consumption and physical space.
IaaS    Infrastructure as a Service.
IP      Internet Protocol.
KVM     Kernel-based Virtual Machine.
OS      Operating System.
OVS     Open vSwitch.
QEMU    Quick Emulator.
SLA     Service Level Agreement.
VLAN    Virtual Local Area Network.
VM      Virtual Machine.

Contents

Glossary

1 Introduction
  1.1 Motivation
  1.2 Problem Statement
  1.3 Approach
  1.4 Contributions
  1.5 Delimitations
  1.6 Structure of the Thesis

2 Background
  2.1 Virtualization and Cloud Computing
    2.1.1 Virtualization
    2.1.2 Cloud Computing
  2.2 OpenStack
    2.2.1 Keystone Identity Service
    2.2.2 Nova Compute
    2.2.3 Neutron Networking
    2.2.4 Cinder Block Storage Service
    2.2.5 Glance Image Service
    2.2.6 Swift Object Storage

3 Related Work

4 Experimental Setup
  4.1 Physical Setup for Native and Virtual Deployment
  4.2 Native Blade Deployment Architecture
  4.3 Virtual Deployment Architecture

5 Method
  5.1 Load Scenarios
  5.2 Measuring Network Performance
  5.3 Measuring Server Performance
  5.4 Measuring Load Balancer Performance
  5.5 Measuring Packet Delivery In and Out from VM

6 Results
  6.1 Time Spent in Blade: Native Versus Virtual Deployment
  6.2 Time Distribution: Native Versus Virtual Deployment
  6.3 Network Components' Impact on Latency
  6.4 Server Performance Impact on Latency
  6.5 Load Balancer Impact on Latency
  6.6 Packet Delivery In and Out from VM: Impact on Latency

7 Discussion and Analysis
  7.1 Network Components Performance
  7.2 Load Balancer Performance
  7.3 Server Performance
  7.4 Packet Delivery In and Out from the VM

8 Conclusion

9 Future Work

10 Social, Ethical, Economic and Sustainability Aspects

Bibliography

Chapter 1

Introduction

This chapter introduces the concepts of the thesis, the motivation for the project, the investigated problem, and the chosen approach. It also provides the delimitations of the project and finally describes the structure of the thesis.

1.1 Motivation

Migrating applications to a cloud environment has in recent years become a popular strategy among software companies. Deploying architecture and software in cloud environments, such as OpenStack, provides benefits such as spreading out storage and computational power over several geographic locations, on-demand resource allocation, pay-as-you-go services, and small hardware investments [18].
Data centers exploit virtualization techniques, which can increase the resource utilization of physical servers by letting several virtual machines (VMs), isolated from each other, run simultaneously on the same physical machine [10, 26]. Virtualization and cloud techniques make it possible to build systems that are easy to scale horizontally, i.e., by adding more servers to the server environment, and they can make maintenance of both software and hardware easier.

Depending on who the stakeholder is, a cloud environment can provide different benefits. By using virtualization techniques, standardized hardware can be used, which can make it less costly to buy hardware for large data centers [10]. Customers who want to deploy arbitrary applications inside VMs in the cloud have the opportunity to pay only for the hardware and bandwidth their applications need. Furthermore, customers or companies that use a cloud platform can easily scale their systems up or down as their needs change, which limits the financial costs and burden often associated with investing in hardware.

Compared to traditional server environments, the cloud environment adds additional layers of software abstraction in order to provide its services. The additional layers are, for example, extra networking components and the hypervisor layer. The hypervisor is responsible for hosting one or several VMs on a physical host. Open vSwitch (OVS) and Linux Bridges are referred to as networking components, responsible for switching traffic inside a cloud. The networking in a cloud can be configured in several different ways; in this thesis, an OpenStack provider network setup with OVS and Linux Bridges is studied. OVS and Linux Bridges are commonly used in cloud computing platforms [22]. The additional layers in a cloud environment introduce added latency, which can be crucial for latency-sensitive applications.
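The effect of these extra layers on round-trip latency can be quantified with a simple echo benchmark. The sketch below is illustrative only and is not the measurement tooling used in this thesis: it measures UDP round-trip times over the loopback interface, and the same client could instead target an echo server inside a VM to expose the overhead of the virtualization and networking layers.

```python
import socket
import statistics
import threading
import time


def echo_server(sock):
    # Echo each datagram back to its sender until told to stop.
    while True:
        data, addr = sock.recvfrom(1024)
        if data == b"stop":
            return
        sock.sendto(data, addr)


# Server side; in a real benchmark this would run inside the VM.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server,), daemon=True).start()

# Client side: send probes and record the round-trip time of each.
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.settimeout(1.0)
samples = []
for _ in range(100):
    t0 = time.perf_counter()
    client.sendto(b"ping", ("127.0.0.1", port))
    client.recvfrom(1024)
    samples.append((time.perf_counter() - t0) * 1e6)  # microseconds

client.sendto(b"stop", ("127.0.0.1", port))
print(f"median RTT: {statistics.median(samples):.1f} us")
```

Comparing the median from a native run against one taken through a VM's network path isolates the latency contributed by the additional layers, which is the kind of comparison the later chapters perform with dedicated tooling.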
Therefore, it is important to understand where the additional latency comes from, in order to prevent it if possible.

Currently, Ericsson's systems run on many different specialized hardware components. By migrating products to a cloud environment, it is possible to use standardized hardware and, in doing so, possibly lower the costs of investing in new hardware and of maintaining both hardware and software. However, reducing costs by using standardized hardware is sometimes referred to as a myth. Even though cheaper standardized hardware could be used, it might not lower the costs, since specialized hardware can have other characteristics that standardized hardware does not, for example better performance per price unit, lower power consumption, and improved cooling technology. By being able to run systems on both standardized and specialized hardware, customers are offered the option to choose what they want.

In a traditional deployment, also referred to as a native deployment, both upgrading of software and scaling up are complicated. The real benefit of virtualization and cloud technologies for Ericsson is the ability to scale the system horizontally when needed and to make software maintenance easier, which implies lower costs.

When Ericsson migrates their MTAS product to run in the cloud, also referred to as virtual deployment, they experience higher latency from the system in comparison to the native deployment. The MTAS product is, for example, responsible for setting up different kinds of calls between subscribers and for the handover of calls to different subsystems when subscribers move between various network zones, such