Thesis no: MEE-2015-NN
Monitoring and Analysis of CPU load relationships between Host and Guests in a Cloud Networking Infrastructure
An Empirical Study
Krishna Varaynya Chivukula
Faculty of Computing
Blekinge Institute of Technology
SE–371 79 Karlskrona, Sweden

This thesis is submitted to the Faculty of Computing at Blekinge Institute of Technology in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering. The thesis is equivalent to 20 weeks of full-time studies.
Contact Information: Author(s): Krishna Varaynya Chivukula E-mail: [email protected]
University advisor: Prof. Dr. Kurt Tutschku, Department of Telecommunication Systems
Faculty of Computing
Blekinge Institute of Technology
SE–371 79 Karlskrona, Sweden
Internet: www.bth.se
Phone: +46 455 38 50 00
Fax: +46 455 38 50 57

Abstract
Cloud computing has been a fast-growing part of the IT sector in recent years, as it favors hardware resource sharing, reduces infrastructure maintenance costs, and promises improved resource utilization and energy efficiency to service providers and customers alike. Cloud Service Providers (CSPs) implement load management techniques for effective, need-based allocation of resources, enabling them to control costs while meeting their SLAs. Understanding the impact and behavior of variable workloads in a cloud system is essential for achieving such load management. The CPU is a principal computational resource, and its load plays an important role in resource management.
This thesis work aims to monitor and analyze load in a cloud infrastructure by applying load collection and evaluation techniques. A further aim is to investigate the CPU load relationship between host and guest machines under varying workload conditions. To achieve these goals, we consider both a cloud environment built using OpenStack and a standalone system with the KVM hypervisor.
The methodology applied is empirical, that is, purely experimental examination. The study performs measurements to assess load behavior in the system, with experiments designed to fulfill the objectives of the thesis. We also employ visual correlation analysis to understand the strength of the association between host and guest CPU load.
Results of the initial experimental study include a distinction between the CPU load of an OpenStack compute node and that of a device with the KVM hypervisor. Further experimental runs build on these observations. The succeeding results show a remarkably strong association between PM and VM under 100% workload conditions; however, several other workload variations do not show similar results.
CPU load results obtained from the cloud and from a standalone virtualization system differ, though not drastically. Situations with 100% workload showed negligible distortion in the visual correlation and usually exhibited linearity, whereas lower workloads showed distortions in the correlation analysis. More iterations would likely refine these observations, and further investigation of the relationships using other applications commonly run in clouds is a potential direction for future work.
Keywords: Cloud, CPU load, Measurements, OpenStack, Virtualization
Dedicated to my family
Acknowledgements
I am forever indebted to my supervisor, Prof. Dr. Kurt Tutschku for his valuable guidance, patience and continuous support throughout my thesis. I could not have imagined a better advisor and mentor for my master thesis.
I sincerely thank Dr. Patrik Arlos for his constant encouragement and suggestions despite his busy schedule. I extend my heartfelt thanks to Dr. Dragos Ilie for his invaluable guidance whenever approached. Words cannot express my gratitude to Anders Carlsson, my father figure, who gave me never-ending support and opportunities to excel.
I am grateful to the Svenska Institutet for embracing me as a deserving candidate for a scholarship and enabling me to fulfill my dream of a Master's education. I acknowledge City Network Hosting AB for letting us perform tests in their infrastructure.
I am thankful to god, my wonderful parents and sister for their immense support and motivation. Without you, it would not be the same. Last but not least, a huge thanks to Vida and all my friends who made this journey worthwhile.
List of Abbreviations
NIST   National Institute of Standards and Technology
IaaS   Infrastructure-as-a-Service
SaaS   Software-as-a-Service
PaaS   Platform-as-a-Service
PM     Physical Machine
VM     Virtual Machine
SLA    Service-Level Agreement
PCPU   Physical CPU
vCPU   virtual CPU
VMM    Virtual Machine Monitor
KVM    Kernel-based Virtual Machine
QEMU   Quick Emulator
OS     Operating System(s)
List of Figures
3.1 Minimal architecture and services offered by OpenStack's Controller node (left), Networking node (center) and Compute node (right) 13
3.2 Experimental Methodology portrayed as a spiral model...... 15
3.3 Example of graph showing scatter plots and linear correlation as a relation between x and y attributes...... 16
3.4 Anticipated graphical plots of possibly attainable correlation between host and guest in terms of CPU load ...... 16
3.5 Server, Virtualization and Network components that are related to causing or affecting load in the system...... 18
3.6 Example of the output of the “uptime” command on a terminal...... 18
3.7 Example of the output of the “top” command on a terminal...... 19
4.1 An OpenStack environment can have “n” number of compute nodes based on requirement and a controller manages the compute nodes. 22
4.2 Abstraction of hardware, software and virtualization layers in a system. Nova-compute is not present in a normal virtualization system...... 22
4.3 Visual representation of the PMs, VMs and tools used in the implementation of experiments. The figure resembles the question as to what could be the relationship between load on host and guest machines...... 23
4.4 Depiction of the experimental setup on OpenStack platform . . . 25
4.5 Stages of experiments performed with stress ...... 26
4.6 Depiction of on-premise device setup ...... 27
4.7 Depiction of on-premise device experimental setup for Single guest 29
4.8 Stress applied initially on 1 vCPU ...... 30
4.9 Stress configured to load 2 vCPUs ...... 31
4.10 Stress configured to load 3 vCPUs ...... 31
4.11 Stress configured to load 3 or more vCPUs. The dotted lines indicate that the number of vCPUs being stressed is increased in each experimental run...... 32
4.12 Stress-ng configured to impose load on 1 or more vCPUs with 10, 20, 30, 40, 50 and 100% load. The dotted lines indicate that the number of vCPUs being stressed is increased in each experimental run...... 33
5.1 CPU load observed in OpenStack compute node and on-premise device in multiple guest scenario ...... 35
5.2 Scatter plots showing the relationship between CPU load average on host and guest in different vCPU stress conditions...... 37
5.3 Scatter plots showing the CPU load relationships between host and guest in varying load conditions - uptime tool...... 39
5.4 Scatter plots showing the CPU load relationships between host and guest in varying load conditions - uptime tool...... 40
5.5 Scatter plots showing the CPU load relationships between host and guest in varying load conditions - uptime tool...... 41
5.6 Scatter plots showing the CPU load relationships between host and guest in varying load conditions - uptime tool...... 42
5.7 Scatter plots showing the CPU load relationships between host and guest in varying load conditions - top tool...... 43
List of Tables
3.1 Minimal hardware required to install a controller and a compute node ...... 13
4.1 Specifications of the OpenStack Compute Node used for experiments 24
4.2 Specifications of the on-premise device used for experiments . . . 27
5.1 Stress-ng and uptime results on CentOS host ...... 44
5.2 Stress-ng and uptime results on Ubuntu host ...... 44
5.3 Stress-ng and top results on CentOS host ...... 44
5.4 Stress-ng and top results on Ubuntu host ...... 45
Contents
Abstract i
Acknowledgements iii
List of Abbreviations iv
List of Figures vi
List of Tables vii
1 Introduction 1
1.1 Background ...... 1
1.2 Aims and Objectives ...... 3
1.3 Research Questions ...... 4
1.4 Expected Contribution ...... 4
2 Related Work 5
3 Methodology 8
3.1 Introduction to Underlying Technologies ...... 8
3.1.1 Virtualization ...... 8
3.1.2 Hypervisors ...... 9
3.1.3 Cloud Computing and OpenStack ...... 10
3.2 Methodology ...... 14
3.2.1 Experimental Research ...... 14
3.2.2 Visual Correlation Analysis ...... 15
3.3 Measurement Tools ...... 17
4 General Experimental Setup 21
4.1 Experimental Modeling ...... 21
4.2 Experimental setup ...... 24
4.2.1 OpenStack Cloud Test-bed ...... 24
4.2.2 On-premise Test-bed ...... 25
4.2.3 Stress tests on Single Guest ...... 28
5 Results and Analysis 34
5.1 Results from OpenStack and on-premise Test-beds ...... 35
5.2 Results of Stress tests ...... 36
5.3 Results of Stress-ng tests ...... 38
5.3.1 Uptime tool ...... 38
5.3.2 Top tool ...... 40
5.4 Discussions ...... 43
6 Conclusions and Future Work 46
6.1 Conclusions ...... 46
6.2 Future Work ...... 47
References 48
Chapter 1 Introduction
This thesis document consists of six chapters. Chapter 1 introduces the thesis concepts, problem statements and motivation in the background section, followed by the aims and objectives, research questions and expected contribution of the thesis. Research work associated with the thesis is presented in Chapter 2. Chapter 3 exhibits the main thesis concepts by briefly discussing the underlying technologies, such as virtualization and cloud computing, the standard tools used in experimentation, and the methodology adopted. The experimental modeling and setup are highlighted in Chapter 4. Chapter 5 presents the results obtained from the experimental runs together with a detailed analysis and discussion. Conclusions derived from the analysis, along with intended future work, are highlighted in Chapter 6.
1.1 Background
Cloud computing is a predominant phenomenon in telecommunication that allows sharing of hardware resources with scalability and flexibility, eliminating constraints of distance. According to NIST, the cloud model is classified into three service models based on the resources provided: Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS) and Infrastructure-as-a-Service (IaaS). In the SaaS model, an application or the software itself is offered to the customer as a service, while in PaaS a platform for building or developing the customer's applications is provided as a service. IaaS, on the other hand, renders pools of computational, storage or network resources and permits consumers to provision them as needed. These cloud solutions are utilized on a pay-per-use basis, thereby saving initial investment and maintenance costs for the customers. [1,2,3]
As in traditional systems, monitoring system performance with respect to computational resources and applications is important in cloud computing. Performance monitoring and resource management in cloud infrastructures involve a higher level of complexity due to the lack of standards in these service models, in which the customers do not have access to the underlying hardware machines. Cloud service providers (CSPs) monitor the resources to ensure quality in their services as well as to bill their customers.[4,5]
The costs faced by a CSP depend on CPU utilization, while the costs of the user are based on the lease time of the resources. Higher CPU utilization requires more electricity and cooling, which account for around 40% of datacenter costs. Although cloud services promise improved resource utilization, it is complicated to determine the adequate amount of resources needed to satisfy a variable workload. Load management techniques address this issue by managing computational resources according to the varying workloads, thus helping to avoid excess headroom or hotspots and to minimize costs. Our study focuses on CPU load as the metric for monitoring and analysis in cloud networking infrastructures.[6,7,8,9]
CPU load is the demand for computational resources; in other words, it is the number of processes running or waiting for the resources. CPU load is determined by adding CPU utilization and saturation. Utilization is the time a processor is busy, indicated as a percentage. Saturation is the number of processes waiting for the CPU once the CPU is 100% utilized.[10,11]
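On Linux, the load averages underlying this definition are exposed in /proc/loadavg and reported by tools such as uptime. The following minimal Python sketch (not part of the thesis tooling; the sample string and CPU count are hypothetical) illustrates how the three load averages can be parsed and how saturation can be derived from them:

```python
def parse_loadavg(text):
    """Parse the first three fields of a /proc/loadavg line: the 1-, 5-
    and 15-minute load averages also reported by uptime."""
    fields = text.split()
    return tuple(float(f) for f in fields[:3])

def saturation(load_1min, n_cpus):
    """Load above the number of CPUs indicates saturation: processes
    waiting for a CPU while all CPUs are fully utilized."""
    return max(0.0, load_1min - n_cpus)

# Hypothetical sample in the format of /proc/loadavg
sample = "4.50 3.80 2.10 2/613 12345"
one, five, fifteen = parse_loadavg(sample)
print(one, five, fifteen)          # 4.5 3.8 2.1
print(saturation(one, n_cpus=4))   # 0.5 process-worth of waiting
```

A load average equal to the CPU count thus corresponds to full utilization with no saturation; values above it indicate queued work.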
Cloud computing is built on virtualization technologies that provide a structure in which multiple instances can run on one PM. Since the customers cannot access the hardware, the responsibility of monitoring the resources to meet the SLAs lies with the service provider. This is, however, complex, since neither the service provider nor the user can easily verify whether resources are overused or underused, which may constitute a violation of the SLAs [5,12]. In such a case, monitoring Virtual Machine (VM) data is a challenge, since the CSP and its customer have different perspectives of system performance. In this thesis, we aim at investigating CPU load as viewed from both the CSP and the customer perspective in order to identify the relationship between the CPU load on the host and that on the guests. This relationship can help the CSP bill its customers based on resource usage as well as time, and can also be applied to initiate load prediction for load management techniques.
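The thesis assesses this host-guest relationship visually; as a complementary numerical measure, a Pearson correlation coefficient could quantify the strength of the linear association between host and guest load series. A minimal sketch with hypothetical samples (the data below is illustrative, not measured):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length series,
    e.g. host and guest load averages sampled at the same instants."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical 1-minute load averages from host and one guest
host  = [0.9, 1.8, 2.7, 3.9, 4.8]
guest = [1.0, 2.0, 3.0, 4.0, 5.0]
print(round(pearson_r(host, guest), 3))  # close to 1.0 -> strong linear association
```

A coefficient near 1 would correspond to the near-linear scatter plots the thesis reports for 100% workload, while distorted scatter plots at lower workloads would yield weaker values.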
The thesis mainly focuses on modeling a framework for CPU load generation, collection and evaluation and on analyzing the guest and host CPU load relationships. The experiments are conducted on an OpenStack compute node and an on-premise virtualization device using a set of standard Linux tools.
This research is carried out in collaboration with a second master thesis, “An investigation of CPU utilization relationship between host and guests in cloud infrastructure”. While this thesis focuses solely on methodologies for obtaining, monitoring and analyzing CPU load relationships, the second thesis focuses on methodologies for obtaining CPU utilization. The experimental scenarios of the two theses coincide, yet the measurement tools used and the contributions made from the observation and analysis of the results differ. [13]
1.2 Aims and Objectives
The aim of the thesis is to establish a framework for CPU load characterization in a federated cloud environment. The experiments outline the behavior of load as reported by a set of standard Linux tools that assist in obtaining the CPU metrics. This is achieved by imposing well-known stress on the vCPUs and extracting the load values from the Physical Machine (PM) and the VMs, thereby identifying the relation between the PCPU and vCPU metrics through a visual correlation analysis. A study of the internal working mechanisms of these tools is beyond the scope of this thesis.
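Stress runs of this kind are typically parameterized by a vCPU count and a target load level. As a sketch of how such runs might be enumerated (the exact commands, counts, levels and durations used in the thesis are not reproduced here; --cpu, --cpu-load and --timeout are standard stress-ng options):

```python
def stress_cmd(n_vcpus, load_pct, seconds):
    """Compose a stress-ng invocation that spawns n_vcpus CPU workers,
    each loaded to load_pct percent, for the given duration."""
    return f"stress-ng --cpu {n_vcpus} --cpu-load {load_pct} --timeout {seconds}s"

# One run per (vCPU count, load level) pair; the counts, levels and the
# 600 s duration are illustrative assumptions, not the thesis settings.
runs = [stress_cmd(n, load, 600)
        for n in (1, 2, 3, 4)
        for load in (10, 20, 30, 40, 50, 100)]
print(len(runs))   # 24 runs
print(runs[0])     # stress-ng --cpu 1 --cpu-load 10 --timeout 600s
```

In an experiment, each composed command would be executed on a guest while load values are sampled simultaneously on both host and guest.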
Cloud operators need to regularly scrutinize the available resources in order to ensure proper load management and to meet the SLAs set with cloud customers. The main objectives of the thesis include:
• Study of commercial off-the-shelf performance evaluation tools
• Study of available tools or applications for generating workload
• Modeling of an experimental platform in OpenStack
• Modeling of a standalone virtualization test platform
• Implementation and experimental runs on both platforms
• Use of standard tools for CPU load collection
• Iterations of the experiments to ensure robust results
• Analysis of the results
• Observation of correlation values
• Visual correlation analysis of the obtained host and guest CPU load

1.3 Research Questions
As discussed in section 1.2, our goal is to design a model for load collection and evaluation and to investigate host and guest CPU load relationships for better load management. The following research questions were formulated:
1. How can the collection of physical and virtual CPU load from a cloud infrastructure be modeled?

2. How do the physical and virtual systems react when the load is changed?

3. How can the relationship between host CPU load and guest CPU load be identified?

4. In what way is the host and guest relationship useful in load management?
1.4 Expected Contribution
The expected outcome of this thesis comprises:
• Design, modeling and implementation of cloud as well as virtualization plat- forms for experiments.
• Data collection and observation of the measurements over iterations to en- sure robust results.
• Method for analyzing load relationships between PM and VM in cloud and virtualization environment.
• A detailed mathematical and visual correlation analysis.
• Identifying the association between physical and virtual CPU load for better load management.

Chapter 2 Related Work
Relevant research work associated with this thesis in multiple aspects is intro- duced in this section. A comparative study is conducted to identify the research gaps with proposals of other applicable methods and tools.
As mentioned in previous sections, monitoring resource usage for proper load management in the cloud is an important and ongoing topic of research. In [14], Mour et al. presented a novel approach towards load management in the cloud that migrates VMs from a burdened hardware machine to other, less loaded machines, helping to achieve substantial improvement in performance. In their model, they considered a cloud environment comprising a diversified range of physical as well as virtual machines. The model comprises several modules handling the individual tasks of load collection, load evaluation, load prediction, load balancing and VM migration. In order to manage the load, it is necessary to collect and evaluate the existing load and to make a proper prediction based on the current load as well as the load expected in the near future. Our thesis concentrates on load collection and evaluation in an OpenStack cloud and a virtualization system, aiming at finding a relationship between the load on VM and PM that can be of use in load management models.
Another paper, [15], characterized a dynamic resource management system for cloud computing with a focus on mapping VM CPUs to the physical CPUs, since PM capacity should be adequate to fulfill the resource needs of all VMs running on it. The authors tried to estimate future resource usage without looking inside the VMs, using trends in resource usage patterns. The cloud environment they used is based on the Xen hypervisor.
Some research works include empirical studies of VM performance in the cloud, i.e., deducing conclusions from practical observations rather than from theory alone [16].
One such work can be found in [17], where the authors characterized and analyzed server utilization with the diverse workloads encountered in a real cloud system. Their analysis states that workload variability across different time spans in a cloud environment is relatively high, and they further intended to carry out a more fine-grained analysis of the correlation and effects of workload types on the server. In [18], an empirical study of the OpenStack cloud's scheduling functionality is portrayed. The paper aimed at evaluating the behavior of the OpenStack scheduler with respect to the memory and CPU cores of PMs and VMs and their interactions. Their findings include that the number of CPU cores requested by instances is an important factor at the time of resource allocation to VMs in all types of OpenStack schedulers. Acquiring the concepts and research gaps from these papers, we set out to perform an empirical study applying a black-box approach similar to theirs, in OpenStack cloud networking infrastructures, to identify load relationships between host and guest.
The authors of [19] designed a system to measure CPU overhead in a virtual computing system, obtaining VM and physical CPU utilization in order to map a relationship between them. However, their work does not deal with the impact on the PCPU when a VM utilizes more than the allocated resources, i.e., vCPUs, which is addressed in our experiments.
Corradi et al., in their work “VM consolidation: A real case based on OpenStack Cloud”, conclude that power consumption can be substantially reduced with VM consolidation, and they wish to investigate further in the direction of workload effects on either CPU or network to understand VM consolidation and the role of SLAs in the service decision process. They also wished to deploy a larger OpenStack cloud for further testing [20]. This is related to our thesis work from the perspective of VM consolidation in real cloud infrastructures, which is experimented with and tested in our thesis on a real OpenStack cloud while multiple guests share resources.
In [21], the CPU utilization of a web service application running in a cloud and on a normal virtualization system is compared, which can help users specify server capacity based on their computational needs. The authors performed experiments in cloud and virtualization environments, and their results show that the web service CPU utilization in the cloud is higher than that on the on-site device. Their experimental scenarios are similar to ours, but our goal is to identify the load relationships of host and guest under variable workloads rather than while running a single application.
These excerpts from the related work show that load management and the workload impact on PCPUs and vCPUs are indeed important current topics of research. After careful review and comparison of these works and of the future targets that stand as motivation to our thesis in multiple ways, we proceed to perform our experiments, aiming at comparing and identifying the PCPU relation with respect to varying workload on the vCPUs, which could be advantageous in load prediction and management techniques.

Chapter 3 Methodology
This chapter provides a brief description of the underlying technologies along with the research methodology and measurement tools used in this thesis.
3.1 Introduction to Underlying Technologies
A concise description of the fundamental technologies required to grasp the main idea of the thesis is provided in this section. The fundamentals include virtualization, hypervisor and cloud computing principles and the standard measurement tools, which are detailed in the forthcoming sections.
3.1.1 Virtualization
Virtualization facilitates the abstraction of physical servers into virtual machines, each with its own OS. The virtual machines can share resources at the same time. Virtualization is efficient, as it reduces the need for physical resources by hosting multiple servers on a single physical machine. Two main techniques of virtualization are OS virtualization and hardware virtualization.[10]
OS virtualization – In OS virtualization, the operating system is partitioned into multiple instances, which behave like individual guests. These guests can be run, rebooted and administered independently of the host machine. These instances can serve as virtual servers of high performance to cloud customers and of high density to the operators. The disadvantage of this technique is that the guests cannot run different kernel versions, a limitation that is overcome by the hardware virtualization technique.[22]
Hardware virtualization – This technique of virtualization involves the creation of virtual machines with an entire operating system, including the kernel. This means that they can run different kernel versions, unlike OS virtualization. Hardware virtualization supplies an entire system of virtual hardware components on which an OS can be installed. It comes in the following types:
• Full virtualization – binary translation: The instructions passed to the guest kernel are translated at run time.

• Full virtualization – hardware assisted: The guest kernel instructions are not translated or modified and are operated on by a hypervisor running a Virtual Machine Monitor, VMM.

• Para-virtualization: This type of virtualization provides a virtual system with an interface through which the virtual OS uses physical resources via hypercalls. This is most prevalent in network interfaces and storage controllers.[10, 23]
3.1.2 Hypervisors
A hypervisor is computer software, hardware or firmware that creates, provisions, runs and monitors virtual machines. There are two types of hypervisors:

Type 1 – This hypervisor runs directly on the processor, not as kernel software. Supervision is taken care of by the first guest on the hypervisor, which runs in ring 0 and performs administration work such as creating and running new guests. This is also called a bare-metal hypervisor and provides scheduling for the VMs. E.g.: Xen.

Type 2 – The host OS kernel executes and supervises the hypervisor and the guests existing on it. This type does not come with a scheduler of its own but uses the host kernel scheduler. E.g.: KVM.[24]
KVM, Kernel-based Virtual Machine, is a type 2 open-source hypervisor widely used in cloud computing. This hypervisor, coupled with a user process called QEMU (Quick Emulator), creates hardware-assisted virtual instances [25]. It is also used in the Joyent public cloud and Google Compute Engine [26]. Guests are first provisioned by allocating CPU resources as vCPUs and are then scheduled by the hypervisor. vCPU allocation is limited by the physical CPU resources. When it comes to observability, physical resource usage cannot be observed from within the virtual instances.
Hardware support is limited to 8 virtual cores per physical core for a virtual machine, and once the maximum number of vCPUs is exceeded, QEMU provides software virtualization to KVM. In the case of multiple VMs hosted by one physical machine, better performance can be attained by assigning 1 virtual core per VM.[27]
3.1.3 Cloud Computing and OpenStack
As summarized in section 1.1, cloud computing is a popular technology supporting physical resource sharing by multiple tenant servers. Among the cloud service models, IaaS provides compute, storage and network resources, and consumers are allowed to provision them based on their needs. PaaS allows users to run their applications on a provided platform, and SaaS, on the other hand, allows the customer to utilize an application via a user interface.[1]
In Future Internet architectures, the federation of such public and private clouds is an interesting feature. One project working towards the goal of reaching a cloud federation is FI-PPP, the Future Internet Public Private Partnership framework. This framework consists of several smaller projects, and XiFi is the project that concentrates on building and providing smart infrastructures for this cloud federation. These infrastructures facilitate the deployment of several applications and instances in a unified marketplace, where business logic is instantiated as VMs. BTH is one of the currently operational nodes across Europe.[28]
The nodes of XiFi are interconnected through networking infrastructure. Such architectures are beneficial since they are not implemented at a single location and hence are resilient. If hardware at one location crashes or runs out of storage, the virtual instances can be moved to other locations, depending on the load already existing there.
The XiFi nodes are heterogeneous clouds built on OpenStack and provide tools and services, such as Generic Enablers, for the deployment of various applications. They encompass the cloud principles, namely on-demand self-service, resource pooling, scalability and security.[29]
Another example of such a federated platform is the infrastructure at City Network Hosting AB. City Network AB is a corporation that provides cloud domain services and hosting to its customers. Similar to XiFi, the services are delivered via an OpenStack user interface, where the users can create and provision their VMs and utilize the storage and other high-quality services offered by City Network. This web interface is called City Cloud and is similar to the FIWARE cloud lab of XiFi, which is built on the OpenStack dashboard. However, unlike XiFi, City Network upgrades regularly and keeps track of the latest OpenStack releases. Currently, they provide hosting in their datacenters in the UK, Stockholm and Karlskrona.[30]
City Network's infrastructure serves a considerable number of customers. Identifying and comparing the host and guest CPU load will be of great value in such operational clouds for better customer service and load balancing.
Cloud networking focuses on providing the VMs with static or dynamic IP addresses and firewalls to make them reachable from elsewhere. Cloud networking provides control and security for the network functions and services delivered as a service over a global cloud computing infrastructure. The word global here refers to the federation of local clouds through network infrastructures. Cloud networking can be of two types: Cloud Enabled Networking (CEN) and Cloud Based Networking (CBN). In CEN, the management and control characteristics are moved into the cloud while the network functions, such as routing, switching and security services, remain in the hardware. In the second principle, CBN, the requirement for physical hardware is abolished and the network functions are transferred to the cloud. Yet, the networking infrastructure needed for fulfilling the networking demands of physical devices remains in hardware.[31,32]
OpenStack is open-source cloud software that provides and manages IaaS. The infrastructures offered by OpenStack include compute, storage and networking resources. OpenStack comes with a combination of core and optional services that can be implemented in its cloud architecture. The minimal architecture of OpenStack consists of its core components, which can be realized in either a three-node or a two-node architecture. Figure 3.1 shows the core components and services in a three-node architecture. The two-node architecture is similar to figure 3.1 but eliminates the network node, whose services are moved to the compute node instead.[33,34]
• Controller – OpenStack comes with a cloud controller that controls or administers the other core and optional components or nodes of OpenStack. The controller node is built to serve as the central management system for OpenStack deployments. The controller can be deployed on a single node or on several nodes, depending on the requirement. The main services managed by the controller include authentication and authorization services for identity management, databases, the user dashboard and image services.[35]
• Compute – OpenStack's compute service is known by its project name, Nova. The compute node is the device that comprises the software to host virtual instances, thus providing the IaaS cloud platform. Nova does not come with virtualization software but comprises drivers to interact with the virtualization layer underneath. Storage services are provided by the object storage component, called Swift, and the block storage component, called Cinder.[36]
• Networking – This aims at providing network services to the instances and enables communication between the compute nodes and the virtual instances. It is not necessary to have a separate node for networking; one of the compute nodes can be utilized for this purpose. The project name of the “Networking” component in OpenStack is “Neutron”.[34]
• Dashboard – OpenStack provides a user interface for users to create and provision virtual instances as needed. It is formally named “Horizon” in OpenStack.[35]
• Telemetry – As shown in figure 3.1, Telemetry is an optional service in OpenStack. Telemetry, or Ceilometer in OpenStack, monitors the OpenStack cloud environment to provide billing services. Ceilometer has an agent to collect CPU and network metrics that are used for billing, auditing and capacity planning. It has meters to collect the duration of an instance, the CPU time used and the number of disk I/O requests. The CPU utilization reported by this agent is based on CPU ticks, not on the workload of the VMs.[37,38,39]
The hardware required for an OpenStack installation depends upon the number of virtual instances needed or the types of services provided. Table 3.1 displays the minimal requirements for a small cloud platform using OpenStack.[40]
It is evident that cloud networking aims at increasing efficiency. Efficiency can be defined as the ability to do something without wasting material, time or energy. Smart load management can improve efficiency in cloud infrastructures, with a focus on how best the resources can be shared. In the cloud, we place as many VMs as we can in the system, at different locations, to increase efficiency;