International Journal of Information Technology Convergence and Services (IJITCS) Vol.2, No.1, February 2012
DOI: 10.5121/ijitcs.2012.2108

AUTONOMIC COMPUTING IN VIRTUALIZED ENVIRONMENT

Anala M R 1 and Dr. Shobha G 2
1 Department of CSE, RVCE, Bangalore, [email protected]
2 Dean PG Studies (CSE & ISE), RVCE, Bangalore, [email protected]

ABSTRACT

The infrastructure of the IT sector is heterogeneous and complex, and it is difficult for a system administrator to manage such systems; the result is complex, unmanageable and insecure infrastructure. A new approach to this problem is automation of the IT infrastructure. Autonomic computing is a solution for managing complex infrastructure and provides "must have" answers to the problems above. Rising resource demands led enterprises to buy costly physical servers, which then remained under-utilized. Virtualization is a technology that enables efficient and better utilization of resources. Virtualization and autonomic computing are both of major importance for the IT sector and can be successfully combined to approach the ideals of autonomic computing. This paper presents a framework for automating live migration of virtual machines in a virtualized environment using autonomic computing. It also addresses the types of attacks encountered during live migration of virtual machines.

KEYWORDS

Virtualization, autonomic computing, self-management, live migration.

1. INTRODUCTION

1.1 Autonomic computing

Heterogeneous infrastructures of hardware, software applications, services and networks have led to complex, unmanageable and insecure systems. A system administrator alone cannot manage such a complex computing system; instead, these systems are managed by autonomic computing. Autonomic computing is the technique by which computing systems manage their own environment, based on policies decided by the system administrator.
Self-management is the core of autonomic computing [1] and is the process by which computing systems manage themselves. There are four aspects of self-management:

• Self-configuration: the property by which a computing system configures itself automatically according to high-level policies, sparing the system administrator. Any new components added to the system are incorporated seamlessly, and the system dynamically adapts to changing environments. Such changes may include the deployment of new components, the removal of existing ones, or dramatic changes in system characteristics.

• Self-optimization: the computing system continuously tries to improve its performance by changing its parameters at run-time. It can monitor, allocate and tune resources automatically. Tuning here means reallocating resources in response to dynamically changing workloads in order to improve overall utilization. Without self-optimizing functions it is difficult to use computing resources effectively when an application does not fully use its allocated resources.

• Self-healing: the property of a computing system to detect, diagnose and repair local problems caused by either software or hardware failures. Self-healing components can detect system malfunctions and initiate policy-based corrective action without disrupting the IT environment.

• Self-protecting: the ability of a computing system to secure itself against malicious attack. A self-protecting computing system can predict possible problems based on logs and reports, and can anticipate, detect, identify and protect against threats from anywhere.
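The self-optimization aspect above can be illustrated with a minimal sketch of an autonomic control loop (monitor, analyze, plan, execute). The threshold policy and the read_cpu_load() stub are illustrative assumptions for this sketch, not part of the framework presented in this paper.

```python
import random

# Administrator-defined policy thresholds (illustrative values).
HIGH, LOW = 0.80, 0.20

def read_cpu_load():
    """Stub sensor: a real autonomic manager would query the hypervisor."""
    return random.random()

def plan(load):
    """Analyze the measurement and pick a policy-based corrective action."""
    if load > HIGH:
        return "allocate-resources"   # self-optimization: add capacity
    if load < LOW:
        return "reclaim-resources"    # improve overall utilization
    return "no-op"

def mape_step():
    load = read_cpu_load()   # Monitor
    action = plan(load)      # Analyze + Plan
    return action            # Execute would apply the action to the system

print(mape_step())
```

Each call to mape_step() is one pass of the loop; a real manager would run it continuously and actually apply the planned action.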
1.2 Virtualization

Virtualization [2] is a technique of separating the resources of a computer into multiple execution environments. Virtualization creates multiple partitions, isolated from each other, called virtual machines. A virtual machine is a software implementation of a physical machine, in which an isolated operating system is installed within the operating system of the physical machine. The desire to run multiple operating systems was the original motivation for virtual machines. Virtualization enables efficient and better utilization of resources.

Techniques of virtualization: There are three approaches to server virtualization: the virtual machine model, the para-virtual machine model, and virtualization at the operating system (OS) layer.

Architecture of the virtual machine model: The virtual machine model uses the host/guest paradigm, as shown in figure 1. Each guest runs on a virtual imitation of the hardware layer, so this approach requires no alteration to the guest operating system, and the administrator can create guests that use different operating systems. The guest has no knowledge of the host's operating system, because it is not aware that it is not running on real hardware. Since the guests require real computing resources from the host, a hypervisor [3], also called a virtual machine monitor (VMM) [4], coordinates their instructions to the CPU. It validates all guest-issued CPU instructions and manages any executed code that requires additional privileges. VMware, VirtualBox and Microsoft Virtual Server all use the virtual machine model. At the bottom of the stack, the host OS manages the physical computer; the VMM manages all VMs running on that computer; the guest OSs and their applications run inside these virtual machines.
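As a rough illustration of how a VMM validates guest-issued CPU instructions, the sketch below models the trap-and-emulate idea: unprivileged instructions run directly, while privileged ones are intercepted and emulated. The instruction names and the PRIVILEGED set are hypothetical, not a real instruction set.

```python
# Hypothetical set of instructions that require additional privileges.
PRIVILEGED = {"set_page_table", "disable_interrupts", "io_write"}

class VMM:
    """Toy virtual machine monitor coordinating guest instructions."""

    def __init__(self):
        self.log = []  # record of (guest, instruction, how it was handled)

    def execute(self, guest_id, instr):
        if instr in PRIVILEGED:
            # The guest believes it runs on real hardware; the VMM traps
            # the privileged instruction and emulates it safely.
            handling = "emulated"
        else:
            # Unprivileged instructions can run directly on the CPU.
            handling = "direct"
        self.log.append((guest_id, instr, handling))
        return handling

vmm = VMM()
print(vmm.execute("guest-1", "add"))                 # → direct
print(vmm.execute("guest-1", "disable_interrupts"))  # → emulated
```

The log mirrors the validation role the text describes: every guest-issued instruction passes through the monitor before reaching the CPU.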
Architecture of the para-virtual machine model: As shown in figure 2, the para-virtual machine (PVM) model is also based on the host/guest paradigm and also uses a virtual machine monitor. Unlike the virtual machine model, in the para-virtual machine model the VMM actually modifies the guest operating system's code. Because the guest OS must be modified, this is possible only for open-source OSs such as Linux and BSD, not for Windows. Intel (Intel VT) and AMD (AMD-V) provide hardware functionality that enables unmodified OSs to be hosted by a para-virtualized hypervisor. Like virtual machines, para-virtual machines are capable of running multiple operating systems. Xen [2,5] and UML both use the para-virtual machine model.

Architecture of virtualization at the OS level: This technique differs from the virtual machine and para-virtual machine models in that it is not based on the host/guest paradigm. In the OS-level model, the host runs a single OS kernel as its core and exports operating system functionality to each of the guests, so guests must use the same operating system as the host. Each virtual environment has its own file system, process table, networking configuration and system libraries, as shown in figure 3. OS-level virtualization provides software emulation, and this architecture [6] eliminates system calls between layers, which reduces CPU usage overhead. Virtuozzo and Solaris Zones both use OS-level virtualization. The major limitation of this technique is the restricted choice of OS, since each container must be of the same type, version and patch level as the host.

Comparison: The three virtualization techniques differ in implementation complexity, the extent of OS support, performance relative to a standalone server, and the level of access to common resources. The virtual machine model has the widest scope of usage, but exhibits poor performance.
Para-VMs have better performance, but they support fewer OSs because they must modify the OS; hence para-virtual machine models preferably use open-source OSs such as Linux.

Migrating operating system instances across distinct physical hosts is a useful tool for administrators of data centers and clusters. It allows a clean separation between hardware and software, and facilitates fault management, load balancing, and low-level system maintenance. By carrying out the majority of the migration while the OS continues to run, we achieve impressive performance with minimal service downtime. Migration can be conducted offline (where the guest is suspended and then moved) or live (where the guest is moved without being suspended). An offline migration suspends the guest, then moves an image of the guest's memory to the destination host; the guest is resumed on the destination host and the memory the guest used on the source host is freed. The time an offline migration takes depends on the network bandwidth and latency; a guest with 2 GB of memory (about 16 Gbit of data) needs on the order of 16 seconds or more over a 1 Gbit Ethernet link. A live migration keeps the guest running on the source host and begins moving the memory without stopping the guest. Migration is useful for:

• Load balancing – guests can be moved from the source host to another host when they do not get sufficient resources on the source host.

• Hardware failover – when the source machine's hardware fails, guests can be safely moved to a new host so that services continue.

• Energy saving – during periods of low usage of a host system, guests can be reallocated to other hosts and the idle host systems powered off to save power.

• Geographic migration – guests can be moved to another location in serious circumstances, for example to minimize latency.

During live migration of the virtual machine
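The offline-migration timing discussed above can be approximated with a back-of-the-envelope sketch, assuming transfer time is dominated by moving the memory image over the network. The 0.9 link-efficiency factor is an illustrative assumption covering protocol overhead; latency and suspend/resume costs are ignored.

```python
def offline_migration_seconds(memory_gib, link_gbit_per_s, efficiency=0.9):
    """Estimate transfer time for a suspended guest's memory image.

    efficiency is the assumed fraction of raw link bandwidth actually
    achieved once protocol overhead and contention are accounted for.
    """
    bits = memory_gib * 8 * 2**30              # memory image size in bits
    usable = link_gbit_per_s * 1e9 * efficiency  # usable bits per second
    return bits / usable

# The example from the text: a 2 GB guest over a 1 Gbit Ethernet link.
print(round(offline_migration_seconds(2, 1), 1))  # → 19.1
```

This is why live migration matters: an offline move of even a modest guest suspends it for tens of seconds, while live migration copies memory in the background and suspends the guest only briefly at the end.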