Virtualization for Cloud Computing
Dr. Sanjay P. Ahuja, Ph.D.
2010-14 FIS Distinguished Professor of Computer Science
School of Computing, University of North Florida (UNF)

CLOUD COMPUTING
• On-demand provision of computational resources (Infrastructure, Platform, Software).
• Requires high availability of resources and optimum use.
• Virtualization is the enabling technology: it creates virtual machines that allow a single physical machine to act as if it were many machines.
• Benefits of virtualization for cloud computing: reduces capital expenses and maintenance costs through server consolidation, and reduces the physical space needed in data centers. Resource management, migration, maintainability, high availability, and fault tolerance are other benefits.
• Virtualization is implemented using hypervisors.

VIRTUALIZATION
• Creation of a virtual version of hardware using software.
• Runs several applications at the same time on a single physical server by hosting each of them inside its own virtual machine.
• By running multiple virtual machines simultaneously, a physical server can be utilized efficiently.
Primary approaches to virtualization:
• Platform virtualization, e.g. servers.
• Resource virtualization, e.g. storage and networks.
[Figure: machine stack showing virtualization opportunities - Application, Libraries, Operating System, Hardware]

HYPERVISOR
• The hypervisor plays the central role in virtualization by virtualizing the hardware: it supports running multiple operating systems concurrently in virtual servers created within a single physical server.
• The virtualization layer is the software responsible for hosting and managing all VMs; here, the virtualization layer is a hypervisor running directly on the hardware.
• Examples: VMware, Xen, KVM.

SERVER WITHOUT VIRTUALIZATION
• Only one OS can run at a time within a server.
• Underutilization of resources.
• Inflexible and costly infrastructure.
• Hardware changes require manual effort and access to the physical server.
[Figure: non-virtualized server stack - multiple software applications on one operating system, on the hardware (CPU, memory, NIC, disk)]

SERVER WITH VIRTUALIZATION
• Can run multiple OSes simultaneously.
• Each OS can have a different hardware configuration.
• Efficient utilization of hardware resources.
• Each virtual machine is independent.
• Saves electricity, the initial cost of buying servers, space, etc.
• Easy to manage and monitor virtual machines centrally.
[Figure: virtualized server stack - applications and an operating system in each of Virtual Server 1 and Virtual Server 2, on a hypervisor, on the hardware (CPU, memory, NIC, disk)]

HYPERVISOR TYPE
Full virtualization
• Enables the hypervisor to run an unmodified guest operating system (e.g. Windows 2003 or XP).
• The guest OS is not aware that it is being virtualized.
• E.g. VMware uses a combination of direct execution and binary translation techniques to achieve full virtualization of server systems.
[Figure: two fully virtualized guests (applications and operating system in Virtual Server 1 and Virtual Server 2) on a hypervisor, on the hardware (CPU, memory, NIC, disk)]
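To make the role of the virtualization layer concrete, the sketch below uses the libvirt Python bindings to connect to a hypervisor and list the virtual machines it is hosting. This is a minimal illustration only, not part of the original slides: it assumes the libvirt-python package is installed and that a libvirt-managed hypervisor (e.g. QEMU/KVM) is reachable at the qemu:///system URI; the same calls work against other libvirt drivers such as Xen.

    # List the virtual machines managed by a hypervisor via libvirt.
    # Assumes libvirt-python and a local QEMU/KVM (or Xen) host.
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')   # hypervisor connection URI
    try:
        print('Hypervisor type:', conn.getType())
        for dom in conn.listAllDomains():
            state = 'running' if dom.isActive() else 'shut off'
            print(dom.name(), '-', state)
    finally:
        conn.close()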
HYPERVISOR TYPE
Para virtualization
• Involves explicitly modifying the guest operating system (e.g. SUSE Linux Enterprise Server 11) so that it is aware of being virtualized, allowing near-native performance.
• Improves performance.
• Lower overhead.
• E.g. Xen supports both Hardware Assisted Virtualization (HVM) and Para-Virtualization (PV).
[Figure: two para-virtualized guests (applications and a para-virtualized guest operating system in Virtual Server 1 and Virtual Server 2) on a hypervisor/VMM, on the hardware (CPU, memory, NIC, disk)]

HYPERVISOR IMPLEMENTATION APPROACHES
Bare metal approach
• Type I hypervisor.
• Runs directly on the system hardware.
• May require hardware-assisted virtualization technology support by the CPU.
• Limited set of hardware drivers, provided by the hypervisor vendor.
• E.g. Xen, VMware ESXi.
[Figure: bare metal architecture - VMs on a hypervisor with its own kernel drivers, on the hardware]

HYPERVISOR IMPLEMENTATION APPROACHES
Hosted approach
• Type II hypervisor.
• Runs virtual machines on top of a host OS (Windows, Unix, etc.).
• Relies on the host OS for physical resource management.
• The host operating system provides the drivers for communicating with the server hardware.
• E.g. VirtualBox.
[Figure: hosted architecture - VMs on a hypervisor that runs, alongside other applications, on a host operating system, on the hardware]

VMWARE ESXI
• Bare metal approach.
• Full virtualization.
• Proven technology.
• Used for secure and robust virtualization solutions for virtual data centers and cloud infrastructures.
• Takes advantage of support for hardware-assisted virtualization of 64-bit OSes on Intel processors.
[Figure: architecture of VMware ESXi - VMs on the hypervisor, on the hardware]

CITRIX XEN SERVER
• Open source; bare metal.
• Offers both Hardware Assisted Virtualization (HVM) and Para-Virtualization (PV).
• Needs virtualization support in the CPU for HVM.
• Xen loads an initial OS which runs as a privileged guest called "domain 0".
• The domain 0 OS, typically a Linux or UNIX variant, can talk directly to the system hardware (whereas the other guests cannot) and also talk directly to the hypervisor itself. It allocates and maps hardware resources for the other guest domains.
[Figure: architecture of Xen - Domain Zero and the guest VMs on the hypervisor, on the hardware]

UBUNTU KVM
• Kernel-based Virtual Machine (KVM).
• Open source.
• Kernel-level extension to Linux.
• Full virtualization: runs unmodified guest operating systems.
• Requires hardware-assisted virtualization support (Intel VT-x or AMD-V) in the CPU.
[Figure: architecture of KVM - Linux applications and a KVM management console alongside guest VMs, with the KVM module inside the Linux kernel, on the hardware]
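As a closing illustration of how a KVM guest is created in practice, the sketch below defines and starts a domain through the libvirt Python API. It is a hypothetical minimal example, not taken from the slides: the guest name, memory size, and qcow2 disk image path are placeholders, and a real deployment would also configure networking, a console, and boot order.

    # Define and start a minimal KVM guest through libvirt.
    # The guest name and disk image path below are placeholders for illustration.
    import libvirt

    DOMAIN_XML = """
    <domain type='kvm'>
      <name>demo-guest</name>
      <memory unit='MiB'>1024</memory>
      <vcpu>1</vcpu>
      <os>
        <type arch='x86_64'>hvm</type>
      </os>
      <devices>
        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2'/>
          <source file='/var/lib/libvirt/images/demo-guest.qcow2'/>
          <target dev='vda' bus='virtio'/>  <!-- para-virtualized (virtio) disk -->
        </disk>
      </devices>
    </domain>
    """

    conn = libvirt.open('qemu:///system')   # read-write connection to the local KVM host
    try:
        dom = conn.defineXML(DOMAIN_XML)    # register the guest with the hypervisor
        dom.create()                        # boot the guest
        print('Started guest:', dom.name())
    finally:
        conn.close()

Note the virtio disk bus in the XML: even on a full-virtualization hypervisor such as KVM, para-virtualized (virtio) devices are commonly used for I/O because of the lower overhead discussed on the para-virtualization slide.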