Virtual Iron 3.1 Enterprise Edition data center software – for HP ProLiant blade servers

Executive summary
Introduction
Server virtualization
   Hypervisor implementation
   Hardware assisted virtualization
   Software emulation of hardware
   I/O virtualization
   Paravirtualization
   Native OS support
HP
   Adaptive Enterprise
   ProLiant servers
   The HP BladeSystem c-Class
   HP BladeSystem c-Class server blades – interconnects
   StorageWorks SB40c storage blade
   HP storage arrays
   EVA8000
   MSA1000
Virtual Iron 3.1 Enterprise Edition
   System architecture
   Virtualization Manager
   Virtualization Services
   Virtual Iron 3.1 tested solution architecture
   Efficiency
   Features
   High availability
   Live migration
   Virtual server guest operating systems support
   Virtual Iron Storage Management Framework
   Server node based direct attached storage
   Subpartitioning of MSA1000 LUNs
   Flexibility
Virtual Iron 3.1 installation and management
   Installing Virtual Iron 3.1
   Prepare the server
   Prepare the FC connected storage
   Configure the network
   Download Virtual Iron 3.1
   Install the software
   Connecting to the Virtualization Manager
   Boot a server node
   Discovering the hardware
   Configuring the network and the server node(s)
   Creating a virtual data center
   Assign a server to a virtual data center
   Creating a virtual server with Linux
   Creating a virtual server with Windows Server 2003
Hardware and software requirements
   Hardware requirements
   Software requirements
   Recommended configuration BOM
Virtual Iron 3.1 review
   Recommended steps
Summary
For more information

Executive summary

Virtual Iron 3.1 Enterprise Edition, when installed on the HP BladeSystem with an HP StorageWorks 8000 Enterprise Virtual Array (EVA8000), creates a virtualized data center model supporting both Linux and Microsoft® Windows® guest operating systems and enables live migration of executing operating systems between physical server nodes. Virtual Iron 3.1 software was tested by HP and Virtual Iron on HP ProLiant rack mount and blade server hardware.

Virtual Iron is a member of the HP Developer and Solution Partner Program (DSPP), http://www.hp.com/go/dspp. The DSPP program enables HP partners to access HP hardware and testing facilities to verify that the partner’s software executes on HP ProLiant servers. Virtual Iron, Inc. is responsible for all support of Virtual Iron software, including any bundled software components running on ProLiant or HP BladeSystem servers.

This document provides a public lab-validated proof point and a solution proof-of-concept. A hardware bill-of-materials and installation guide for the specific hardware and software tested are included. Several HP ProLiant rack mount and blade servers were tested, including the ProLiant BL460c, the ProLiant BL465c, the ProLiant BL480c, the ProLiant DL380 G5, the ProLiant DL385 G2, the ProLiant DL580 G3 dual-core, and the ProLiant DL585 G2 dual-core servers.

Virtual Iron 3.1 software leverages hardware assisted virtualization technology from both Intel® and AMD™:

• Intel Virtualization Technology (VT) for Intel 64 architecture (formerly known as Intel Extended Memory 64 Technology, or Intel EM64T) in Intel Xeon® processors
• AMD Virtualization™ (AMD-V™) for the AMD64 instruction set and architecture in AMD Opteron™ processors

Introduction

Server virtualization methods on industry standard processors are rapidly evolving as multiple hardware and software companies compete to deliver the components required to address the challenges of today’s enterprise data centers. This document specifically addresses x64 server virtualization, enabling the creation of a virtual data center model with Virtual Iron software on HP hardware. This document also contrasts Virtual Iron 3.1 with other virtualization technologies to convey the value and benefits of Virtual Iron 3.1 Enterprise Edition. Virtual Iron offers two editions of the product: a single server edition and the Enterprise Edition. The Enterprise Edition (EE) offers the broadest set of functionality and is the basis of this whitepaper.


Note: x64 is a generic industry term referring to both Intel 64 architecture and AMD64 processors. These provide execution of 32- and 64-bit operating systems and applications.

Server virtualization

As Moore’s law continues to prove true, the performance of an industry standard server often exceeds the requirements for a single application or set of applications. While the number of servers required for the data center continues to grow, the utilization of individual physical systems continues to diminish. Contemporary multi-core scaling of processors exacerbates the performance disparity, as many applications cannot make use of the large number of cores these systems provide. Through server virtualization, multiple operating systems may execute concurrently on a single server, addressing this disparity. The resulting virtualized data center may support several separate operating systems and applications while providing the required performance, security and reliability from a small subset of physical machines.

Hypervisor implementation
There are two general approaches for hypervisor implementation of server virtualization: hardware assisted virtualization and software emulation of hardware. Virtual Iron supports hardware assisted virtualization.

Hardware assisted virtualization
Intel and AMD have both released hardware-assisted virtualization technologies. While there are differences between their capabilities, each improves performance and reduces the complexity of the hypervisor software required to provide server virtualization.

Software emulation of hardware
Another approach to server virtualization is software emulation of hardware. This approach has enabled several software companies to produce products that allow multiple operating systems to execute concurrently on a single server. Depending on the application, this may be a more computationally intensive method than hardware assisted virtualization, and thus the virtualization overhead may be higher. An advantage of software emulation of hardware is that older processors that do not support hardware assisted virtualization may be utilized. Virtual Iron 3.1 requires a processor that supports hardware assisted virtualization.
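On a Linux host, one quick way to confirm that a candidate server exposes Intel VT or AMD-V is to look for the vmx or svm CPU flags. The following Python sketch is illustrative only and is not part of Virtual Iron; note that the flag indicates processor capability, and the feature must also be enabled in the server's ROM-Based Setup Utility.

    # Illustrative check for hardware-assisted virtualization support on Linux:
    # Intel VT reports the "vmx" flag, AMD-V reports the "svm" flag.
    def has_hardware_virtualization(cpuinfo_path="/proc/cpuinfo"):
        with open(cpuinfo_path) as f:
            for line in f:
                if line.startswith("flags"):
                    flags = line.split(":", 1)[1].split()
                    return "vmx" in flags or "svm" in flags
        return False

    if __name__ == "__main__":
        # A True result shows CPU capability only; virtualization must also be
        # enabled in the server BIOS (RBSU) for the hypervisor to use it.
        print(has_hardware_virtualization())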

I/O virtualization
Currently, I/O virtualization requires software implementation within the hypervisor. There are two general approaches to this software implementation: paravirtualization and native OS support. Virtual Iron offers a choice of both paravirtualization and native OS support. Virtual Iron’s paravirtualized drivers (VSTools) are included.

Paravirtualization
Paravirtualization requires virtualization vendor-written device drivers to be inserted into each guest and reduces the virtualization overhead of network and disk I/O. Enterprise support policies may not allow custom drivers to be installed into guest images; the use of vendor supplied drivers in guest operating systems may violate the support contracts of operating system and application vendors. Some advanced features, such as live migration and virtual symmetric multiprocessing (SMP) support, require paravirtualization. Paravirtualization may be the best choice because it provides the highest performance and enables advanced features such as multi-processor virtual server support and live migration of executing virtual servers between separate physical server nodes.

Native OS support
Server virtualization software supporting an unmodified guest operating system is said to provide native OS support. This enables one to install an unmodified or stock version of an operating system into the virtual server. This ensures minimal inhibitors to adoption within highly structured enterprise environments. Virtual Iron 3.1 supports native OS in virtual servers.

HP

HP provides a broad set of industry standard hardware and software components that may be combined to create an optimized system tailored to the specific requirements of the system architect. A guiding principle is that of the Adaptive Enterprise. As IT systems serve businesses in competitive markets, IT requirements have become more dynamic. New applications translate into a competitive advantage, and cost management creates a desire to minimize idle system capacity. Both initiatives drive the requirement for a dynamic, cost effective system infrastructure.

Adaptive Enterprise
The HP Adaptive Enterprise provides a vision for companies to align IT capabilities with changing business requirements. The specific architecture and configuration of a data center is still unique to an individual company, yet each company may seek to increase flexibility while providing better overall utilization of IT infrastructure resources. The vision of the HP Adaptive Enterprise is that each company benefits from a dynamic infrastructure that matches the competitive nature of its business, helping it control costs while leveraging technology as a strategic advantage.

ProLiant servers
HP offers ProLiant servers in several different form factors, including rack mount, tower, and blade servers. Various ProLiant servers are offered with one, two or four processor sockets, based on Intel and AMD processors. Dual-core and quad-core processors are available. While ProLiant servers are offered with various levels of RAS (Reliability, Availability and Serviceability) and density, this document focuses on the HP BladeSystem c-Class servers, which provide an optimum balance of reliability, efficient power management, server density, performance, and remote management features. Blade servers also simplify replacement of hardware and scaling for capacity within a virtualized infrastructure.

The HP BladeSystem c-Class
The HP BladeSystem provides an ideal server infrastructure with which an enterprise may consolidate up to sixteen physical servers into a single rack-mount enclosure. Power, SAN storage interconnects and Ethernet switches may all be integrated within the enclosure, reducing cabling costs. Keyboard, video and mouse connections are also consolidated through integrated remote console access via HP Integrated Lights-Out 2 (iLO 2). A virtual data center leveraging Virtual Iron 3.1 EE software and the HP BladeSystem requires only a few cables to connect the storage array and network backbone to the integrated SAN and Ethernet switches; just one or two cables connect as many as 16 server nodes to the traditional enterprise data center.

HP BladeSystem c-Class server blades – interconnects
To connect the BladeSystem to external networks and storage, HP offers a variety of options supporting standard interconnects such as Ethernet, Fibre Channel (FC) and InfiniBand (IB). The built-in HP Virtual Connect architecture includes a 5-terabit backplane supporting four redundant fabrics at once and eight high-performance interconnect bays. All interconnect options are hot-pluggable and can be installed in pairs for full redundancy. The tested configuration had two Ethernet switches and two Fibre Channel switches within the c-Class enclosure. You may install additional Ethernet mezzanine cards and Ethernet switches as required.

StorageWorks SB40c storage blade
The direct attached storage of the ProLiant blade servers may be expanded by the use of an SB40c storage blade, which occupies an adjacent slot in the HP BladeSystem c7000 enclosure. Six hot swap SFF (Small Form Factor) SATA (Serial ATA) or SAS (Serial Attached SCSI) drives provide up to 876 gigabytes of raw capacity in addition to the two hot swap drive slots on the ProLiant BL460c or ProLiant BL465c blade server. The SB40c has extensive fault prevention features, and the internal Smart Array P400 controller with 256MB DDR2 BBWC can be configured for RAID levels 0, 1, 1+0, 5, and 6 (RAID ADG). Full height blade servers require an optional mezzanine card for support of the SB40c.

HP storage arrays
Storage arrays consolidate disks within a storage enclosure and provide built-in reliability, availability and serviceability functions. SAN connected HP storage arrays provide a simple method to increase the utilization of storage resources. The virtual data center model separates storage from server nodes. In this model, one or more storage arrays provide the storage infrastructure of the virtual data center, extending the benefits of the storage array to all the virtual servers in the virtual data center. A SAN-based architecture leveraging an HP storage array ensures that all the advanced features of Virtual Iron 3.1 may be leveraged, including high availability and live migration. When using Virtual Iron 3.1 with VSTools and leveraging a storage array, virtual servers may be live migrated from one physical server to another.

EVA8000
The HP StorageWorks Enterprise Virtual Array family is the next generation of storage array products. The HP StorageWorks EVA8000 provides enterprise class storage array capabilities, including support for up to 1024 separate virtual disks. The design is fully redundant and provides a total of eight separate 4-gigabit Fibre Channel host connections for SAN interconnect. Options for the EVA8000 include wide area replication, providing high performance remote disaster recovery. A single EVA8000 may provide capacity and performance for several hundred virtual servers, depending on the specific I/O requirements. The Enterprise Virtual Arrays are designed for the data center where there is a critical need for improved storage utilization and scalability. They meet application-specific demands for transaction I/O performance for enterprise customers. They provide easy capacity expansion, instantaneous replication, and simplified storage administration. The Enterprise Virtual Arrays combined with HP StorageWorks Command View EVA software provide a comprehensive solution designed to simplify management and maximize performance.

MSA1000
The StorageWorks 1000 Modular Smart Array (MSA1000) is an entry-level storage array that can provide up to 32 separate virtual disks. A small business or proof of concept might leverage a single MSA1000, while larger systems may leverage several MSA1000 storage arrays to meet performance and capacity requirements. A 2-gigabit Fibre Channel interface provides the connection to the SAN switch. Best practice would generally dictate no more than eight server nodes accessing a single MSA1000; performance is heavily dependent on the actual number of end users and the specific applications running in the virtual servers.

Virtual Iron 3.1 Enterprise Edition

The Virtual Iron solution provides a foundation upon which companies may deploy a mix of Linux and Windows applications in a virtualized architecture. In the virtualized environment, system hardware is separated from specific operating system and application deployments. Virtual Iron logically manages the organization and virtualization. Virtual Iron 3.1 provides for the creation of one or more virtual data centers leveraging multiple physical servers.

System architecture
Virtual Iron builds the virtual data center from two basic components: the Virtual Iron management software, known as the Virtualization Manager, and the Virtualization Services, which include an open source hypervisor. An introduction to these components follows, after which the system architecture is described in more depth.

Virtualization Manager
The first component, the Virtual Iron Virtualization Manager, is loaded on a dedicated server within the data center. It manages all the physical server nodes providing the virtualization services. Virtualization administrators access the software through a web browser. Figure 1 shows the Virtualization Manager.

Figure 1. Virtual Iron Virtualization Manager

Virtualization Services
The second component is the Virtual Iron Virtualization Services, which include an extended open source hypervisor. This hypervisor leverages the hardware-assisted virtualization capabilities built into Intel and AMD processors to create an abstraction layer between physical hardware and virtual resources. Virtual Iron 3.1 leverages components of the open source code and extends the capabilities to enable 64-bit support, enhanced memory management, live migration, and enterprise class reliability and management. Figure 2 provides a logical representation of a Virtual Iron server node, which utilizes the Virtual Iron Virtualization Services and an extended open source hypervisor.


Figure 2. Virtual Iron Server Node

Virtual Iron enables administrators to leverage the resources from a collection of physical servers to power virtual servers. Currently, both four-socket dual-core and two-socket quad-core ProLiant servers contain up to eight processing cores; thus 8-way SMP virtual servers may be created.

System architecture, continued
The Virtual Iron architecture leverages Ethernet for communications, a SAN for storage access, and one or more storage arrays for data storage. Each physical server node contains a Fibre Channel HBA (Host Bus Adapter) and two Ethernet NICs. The integrated Storage Management Framework provides extended storage management features.

Virtual Iron 3.1 tested solution architecture
Figure 3 shows how the hardware components were connected for the lab validation.


Figure 3. Virtual Iron 3.1 Tested Solution Architecture

Each physical server network boots from the switched IP network, utilizing the standards-based Preboot Execution Environment (PXE) protocol. The PXE server is integrated into the Virtual Iron management software, which is also connected to the network Ethernet switch. All virtual server system images are stored in one or more storage arrays, which are available to any system through the SAN. An individual disk LUN or vdisk (virtual disk) in the storage array is utilized for each of the virtual servers.
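For illustration only: a PXE server identifies each booting node and hands it a boot configuration, commonly keyed by the node's MAC address. Virtual Iron's integrated PXE server handles this internally; the sketch below simply shows the widely used PXELINUX naming convention for per-node configuration files, an assumption about general PXE practice rather than a description of Virtual Iron's implementation.

    # Illustrative only: PXELINUX-style mapping from a NIC MAC address to a
    # per-node boot configuration filename (the "01-" prefix denotes Ethernet).
    def pxe_config_name(mac):
        return "pxelinux.cfg/01-" + mac.lower().replace(":", "-")

    print(pxe_config_name("00:17:A4:77:00:01"))
    # -> pxelinux.cfg/01-00-17-a4-77-00-01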

Efficiency
When seeking to consolidate multiple servers, virtualization provides for the operation of multiple separate operating systems running concurrently on a single physical server or group of servers. While the performance of industry standard servers has grown significantly, the efficiency of the virtualization platform is the key to maximizing the value of the resulting virtualized data center. The goal is to efficiently host the required virtual servers, which may include several highly powered SMP virtual servers. A configuration of multiple two-, four- and even eight-way virtualized servers is possible within a single server node. Consolidating several legacy servers into each single server node is completely reasonable.
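As a rough sizing illustration only (the utilization figures below are assumptions chosen for the example, not lab measurements), the consolidation arithmetic for a single eight-core server node might look like this:

    # Hypothetical consolidation estimate - all numbers are illustrative
    # assumptions, not results from the tested configuration.
    node_cores = 8                   # e.g. two quad-core or four dual-core sockets
    target_node_utilization = 0.70   # fraction of the node's capacity to commit
    legacy_server_core_equiv = 0.25  # average core-equivalents used per legacy server

    virtual_servers_per_node = int(
        (node_cores * target_node_utilization) / legacy_server_core_equiv)
    print(virtual_servers_per_node)  # 22 legacy servers per node in this example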

Features

High availability
Virtual servers may use Virtual Iron’s LiveRecovery policy to provide high availability without clustering software. If a primary server node fails, the virtual server immediately restarts on an alternate server node from the shared pool in the virtual data center. The alternate server node may or may not have an active workload of virtual servers executing during such an event. Policies arbitrate the allocation of available resources such that service level agreements (SLAs) are maintained.

Live migration
A virtual server may be live migrated from one physical computer to another without shutting down the executing virtual server. Migrations can occur manually or automatically based on policies such as time of day or resource utilization. A SAN-based storage architecture is required, and server nodes must have the same type of processor for live migration. Virtual servers must have the Virtual Iron VSTools installed to support live migration.

Virtual server guest operating systems support
Virtual Iron enables administrators to configure and manage multiple guest operating systems of different types and versions. Version 3.1 supports virtual servers running the following:

• Red Hat Enterprise Linux 4 U2 and U4, 32- and 64-bit
• SUSE Linux Enterprise Server 9 SP3, 32- and 64-bit
• Windows XP Professional, 32-bit
• Windows Server 2003, 32-bit

Virtual Iron Storage Management Framework
Virtual Iron provides the Storage Management Framework, which extends the storage management features of the hypervisor. This capability is based on Linux LVM (Logical Volume Manager) and enables:

• Support of direct attached storage (DAS)
• Sub partitioning of storage array based volumes
• Partitioning a single physical disk into multiple virtual disks
• Exporting vdisks to a vhd file on the management server
• Importing vdisks from a vhd file on the management server
• Cloning vdisks

Figure 4 provides a logical representation of a managed SAN volume.


Figure 4. Virtual Iron Storage Management

While each virtual server may directly access a SAN based LUN, leveraging hypervisor software-based storage management may be required to partition the LUNs of the storage array into a larger number of vdisks. The Storage Management Framework may be leveraged for both direct attached storage and storage array based LUNs.

Server node based direct attached storage
Optionally, Virtual Iron supports DAS (direct attached storage) within the server node. As the storage is local to each server node, the live migration and high availability features are unavailable when based on DAS storage. The Storage Management Framework is leveraged to manage the DAS of each server node and present it to the local server node hypervisor. For low cost server consolidation systems, virtual servers leveraging DAS within each blade or rack mount server might provide a reasonable compromise. For example, a single RAID 1 partition on the internal E200i controller of the ProLiant BL460c blade server could be partitioned into smaller vdisks that present block storage to each local virtual server running on the server node. Another approach might be to leverage an SB40c storage blade in the slot adjacent to a ProLiant BL460c server blade to extend the DAS to 584GB of RAID 6 (ADG). In a RAID 6 (ADG) configuration, even two of the six internal drives could fail within the storage blade and the data would remain protected and available.
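The 584GB figure follows from RAID 6 (ADG) reserving two drives' worth of capacity for parity across the SB40c's six drives; assuming 146GB SFF drives (6 x 146GB = 876GB raw, matching the raw capacity cited earlier), a quick check of the arithmetic:

    # Usable-capacity arithmetic for the SB40c example. The 146 GB drive size is
    # an assumption consistent with the 876 GB raw figure quoted in this paper.
    def raid_usable_gb(drive_count, drive_gb, level):
        if level == "RAID0":
            return drive_count * drive_gb
        if level == "RAID1+0":
            return (drive_count // 2) * drive_gb
        if level == "RAID5":
            return (drive_count - 1) * drive_gb
        if level == "RAID6":          # ADG: two drives' worth of parity
            return (drive_count - 2) * drive_gb
        raise ValueError("unknown RAID level")

    print(raid_usable_gb(6, 146, "RAID6"))   # 584 GB usable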


Note: When leveraging the Storage Management Framework, all the virtual servers assigned to vdisks within a specific Logical Volume must reside on the same server node. Thus, to maintain maximum flexibility for live migration of virtual servers, be sure to create a separate Logical Volume within the Logical Volume Group for each virtual server created.

Subpartitioning of MSA1000 LUNs
Likewise, the Storage Management Framework might be leveraged to partition a large number of smaller vdisks from a single LUN of an MSA1000. This would allow a single MSA1000, with its 32 LUN limit, to support many more virtual servers. For example, a single 500 gigabyte LUN might be subpartitioned into 40 smaller vdisks. One would typically create a separate Logical Volume for each virtual server to enable unrestricted virtual server migration between server nodes. Several vdisks might be created and leveraged within each Logical Volume for a specific virtual server.
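Because the Storage Management Framework is based on Linux LVM, the subpartitioning concept can be pictured in terms of ordinary LVM operations. The sketch below is illustrative only: in practice these steps are performed through the Virtualization Manager, and the device name, volume group name, and vdisk size are assumptions chosen to fit the 500 gigabyte example.

    # Illustrative only: an LVM-style layout for carving one 500 GB MSA1000 LUN
    # into per-virtual-server Logical Volumes (one LV per virtual server keeps
    # live migration unrestricted, as noted above). Names and sizes are assumed.
    lun_device = "/dev/sdb"          # assumed block device for the 500 GB LUN
    volume_group = "vi_msa1000_vg"   # assumed volume group name
    vdisk_count = 40
    vdisk_size_gb = 12               # 40 x 12 GB = 480 GB, leaving LVM headroom

    commands = ["pvcreate %s" % lun_device,
                "vgcreate %s %s" % (volume_group, lun_device)]
    for i in range(1, vdisk_count + 1):
        commands.append("lvcreate -L %dG -n vs%02d_disk0 %s"
                        % (vdisk_size_gb, i, volume_group))

    print("\n".join(commands))       # prints the equivalent LVM command sequence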

Flexibility
The flexibility of the virtual data center model enables several benefits that reduce total cost of ownership (TCO) while simplifying each of the following:

• Dynamic application scaling
  – A running virtual server may be live-migrated to a different server node providing higher performance.
  – With a configuration change and a virtual server reboot, one might quickly add or remove RAM or processor resources for any particular virtual server.

• Server consolidation
  – Many virtual servers can run on a single server node.

• Application resource management
  – Policy settings can arbitrate processor share between virtual servers.

• Server fault tolerance
  – Standby server nodes may be assigned for virtual servers.

• Server maintenance
  – A running virtual server may be live-migrated to another Server Node to facilitate server maintenance during normal business hours.

• Disaster recovery
  – Wide area EVA8000 array based storage replication allows for fast startup of a standby data center.

• Hardware serviceability and lifecycle management
  – Simple blade insertion and PXE boot eliminate life-cycle management issues and provide reduced MTTR (Mean Time-To-Repair), reducing server management costs.
  – Flexible service windows reduce contractor overtime, double-time, and weekend or night service charges.

Virtual Iron 3.1 installation and management

Basic management tasks include the following:

• Installing Virtual Iron 3.1
• Connecting to the Virtualization Manager
• Boot a server node
• Discovering the hardware
• Configuring the network and the server node(s)
• Creating a virtual data center
• Assign a server to a virtual data center
• Creating a virtual server
• Installing an operating system

A brief step-by-step review is provided for each task.

Installing Virtual Iron 3.1

Prepare the server
To create a Virtual Iron management server, a ProLiant BL460c blade server was prepared by installing Windows Server 2003. Connect the lab network to the second Ethernet switch (Public), on the right at the back of the c7000 enclosure. Connect a second Ethernet cable from the Onboard Administrator to the Public network segment. The Insight Display was used to identify the IP address of the Onboard Administrator. The default password of the Onboard Administrator is printed on a small tag attached to the Onboard Administrator. Use the Virtual Media function and start with the HP ProLiant SmartStart CD-ROM. After the Windows Server 2003 operating system installation was completed, the server was patched by the Windows update service. The firewall was disabled and the server was rebooted.

Prepare the FC connected storage
Both the MSA1000 storage array and the EVA8000 storage array were connected to the internal Brocade FC switches within the c7000 enclosure. The MSA1000 has a single 2-gigabit Fibre Channel connection, while two 4-gigabit FC connections from the EVA8000 were made, one to each SAN switch. The remaining six host ports of the EVA8000 may be connected to future c7000 enclosures.


Note: At this time, create one or more initial LUNs on the Fibre Channel connected storage array. It is reasonable to use the management server on which Virtual Iron is to be installed as the storage array management server as well. See the associated documentation of the storage array and install the HP storage management utilities. These may be remotely accessed from a web browser.

Configure the network
Rename the first Ethernet NIC, Network Connection, to Management. Rename the second NIC, Network Connection 2, to Public. Public requests an IP address via the lab DHCP server; configure the Management NIC with a static IP address of 10.99.0.1.

Download Virtual Iron 3.1
Use Internet Explorer to connect to the Virtual Iron website (http://www.VirtualIron.com) and click the Download Now icon. Follow the directions and download the Windows version of Virtual Iron 3.1 Enterprise Edition.

Install the software
Browse with Windows Explorer to the file that was downloaded and double-click it. Follow the simple graphical installation program: accept the defaults, browse to the license file, enter an Admin password, and at the Network Setup step select Separate Public and Dedicated Management Networks. See Figure 5 for guidance.


Figure 5. Virtualization Manager Install Network Setup

An example of the configuration for the Public and Management network interfaces is provided in Figure 6.

Note: If the Network Setup screen that follows does not offer the expected choices, cancel the installation and check the server network configuration. Production deployment best practices would dictate a static address for both interfaces of the management server.


Figure 6. Virtualization Manager Install Network Setup

Enable the DHCP server.

Figure 7. Virtualization Manager Install DHCP server

Complete the installation. The management server is now ready to accept both administrators connecting via web browsers to the public interface and server nodes PXE booting through the management network.

Connecting to the Virtualization Manager
Connect to the Virtualization Manager from a Windows or Linux workstation using a web browser with Java™ 5 installed. Enter the IP address of the Public NIC of the management server; this address is specific to the installation.

Steps:

1. Start a web browser.
2. Enter the URL http://10.101.0.102

Note: While no login is required at this time, subsequent links may require login. Use the password configured during setup.

Figure 8, below, shows options.


Figure 8. Welcome to the Virtual Iron Virtualization Manager

Boot a server node

1. Power on a Server Node and watch the console as the server boots.

Note: If the Server Node has DAS, change the boot order to PXE during the power-on self test.

2. Watch the hypervisor boot and make note of the Server Node IP address.

Note: It is recommended to boot the Server Nodes one at a time to simplify the identification and configuration.

Discovering the hardware

1. Click the Launch Virtualization Manager URL in the Virtualization Manager.

2. Enter the password configured during installation.

3. Clear the Tutorial Window by clicking the right-hand arrow just to the left of the TOC button.

4. Click the Hardware button at the left.

5. Click the Discover tab.

6. Click the OK button and the Commit button.

Newly discovered hardware will be displayed.

7. Highlight the new server node and rename it. A bay prefix with the associated bay number was adopted for each server blade booted.

Configuring the network and the server node(s)

1 In the Virtualization Manager, click the Hardware button at the left. Click the Networks tab.

2 Click the Add button and enter the Network name Public. Select All Nodes and the Network Public and click the Add button. Press the OK button. Click the Commit button.

3 Complete the configuration of each Server Node by selecting the correct Network for the Ethernet NICs of each Server Node. The first NIC of each half-height blade was assigned to the Management network (10.99.0.x) and the second was assigned to Public. Full-height blades have four NICs; by default, the first and third were assigned to the Management network (10.99.0.x) and the second and fourth were assigned to Public.

4 Click the Commit button.

5 Click the Fibre Channel item of the Server Node and then the SAN Disks tab. Check to see that the LUNs of the MSA1000 or the Vdisks of the EVA8000 are visible. Make a note of the World Wide ID (WWID) numbers of the HBAs for each server node. These may be leveraged from within the Selective Storage Presentation of the MSA1000 or in the Vdisk Presentation configuration menu of the EVA8000.

Within the StorageWorks Command View EVA management interface, click the Hosts folder. Click the Create folder button. Click the newly added folder and click the Add host button. Add both WWIDs for each server node HBA, using a new host entry for each server. As new Vdisks are created within the EVA8000, the list of hosts enables a single action to present the Vdisk to all the server nodes by leveraging the folder name.

Note: Configuration of Selective Storage Presentation of the MSA1000 is not required for a dedicated SAN leveraged for the virtual data center. For the EVA8000, assigning a list of WWIDs is required for each Vdisk.

Creating a virtual data center

1 Click the Resource Center button at the left.

2 Right-click the Resource Center item in the middle navigation pane and select New virtual data center. Change the name as desired. Click the Commit button.

Assign a server to a virtual data center

1 Click the Resource Center button at the left. Click the virtual data center in the middle navigation pane. Click the Assign Nodes icon above the Resource Center text.

2 Select the Server Node to be assigned and click the Add button. Click the OK button. Click the Commit button.

Creating a virtual server with Linux
There are several methods to install an operating system. You may also clone a virtual server or migrate a physical server to a virtual server. In this example, the Red Hat 4 Update 4 Advanced Server 64-bit operating system was installed into a virtual server by the “linux askmethod” approach with an NFS server from the lab network. You may choose to boot a virtual server from a CD-ROM or even from a network image.

1 The ISO file of the first disk of the Red Hat 4 operating system was copied to the c:\Program Files\VirtualIron\nbd directory.

2 Select View / Resource Center. Right-click the Unassigned virtual servers item and then select the New virtual server item.

3 Click the text New virtual server and select Rename; enter VS01 RH4. Press Enter and click the Commit button.

4 Click the Configuration tab.

5 In the Configuration and Boot Options window, click the Edit button. In the Boot Options window, click the Operating System dialog box and select Red Hat Enterprise Linux 4.

6 Select the Network (Image) Boot option and select the ISO image copied to the nbd directory. Click the OK button.

7 In the Network Adapters window, click the Edit button.

8 Click the red [unassigned] text item and select Public. Click the OK button.

Note: If the Public option is not available, return to the Hardware Resources and configure the Network Adapter of the specific Server Node. See the previous step of Configuring the network and the server node(s).

9 In the Storage window, click the Assign button. In the Storage Disk Mapping window, highlight the disk to be assigned and then click the Add>> button. Press the OK button.

10 Click the Commit button. Make a note of the assigned MAC address of the VNIC (virtual NIC) displayed in the Network Configuration window; later this will be used to identify the server IP address.

11 Click the Edit button in the Configuration window and select the amount of memory required for the virtual server.

Note: To configure multiple processor virtual servers, the virtual server tools must be installed. See the Virtual Iron documentation.

12 Click and drag the newly configured virtual server to a physical server in the virtual data center. Click the Commit button.

13 Right-click the virtual server and select Start. Right-click the virtual server and select Launch Console.

14 Click the mouse once in the screen of the virtual server and then type linux askmethod at the prompt.

Proceed with the balance of the installation as one would with a physical server. The firewall was disabled during installation. See Figure 9.


Figure 9. Virtual Iron virtual server console

After the installation has completed, and you have rebooted the server as prompted:

15 Search the lab DHCP server for the MAC address of the VNIC identified in step 10 to find the virtual server’s IP address. Log into the virtual server via SSH (Secure Shell). Change the run level to 3 by typing init 3. The inittab was edited to make run level 3 the default at startup.
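If the lab DHCP service keeps an ISC dhcpd-style leases file (an assumption made for illustration; a Windows DHCP server would be queried through its own console instead), the lookup of the lease granted to the VNIC's MAC address might be scripted as follows. The MAC address and file path shown are placeholders.

    # Illustrative lookup of the IP address leased to a given MAC address in an
    # ISC dhcpd-style leases file. Path, format, and MAC are assumptions; adapt
    # this to whatever DHCP service the lab actually runs.
    import re

    def find_lease(mac, leases_path="/var/lib/dhcpd/dhcpd.leases"):
        mac = mac.lower()
        found = None
        with open(leases_path) as f:
            text = f.read()
        for ip, body in re.findall(r"lease ([\d.]+) \{(.*?)\}", text, re.S):
            if "hardware ethernet %s;" % mac in body.lower():
                found = ip          # later entries supersede earlier ones
        return found

    print(find_lease("00:16:35:ab:cd:ef"))   # placeholder MAC from step 10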

Note: One may choose to close the console of the virtual server at any time. Doing so does not impact the processing of the virtual server.

The virtual server may be enhanced by installing the VSTools (paravirtualization drivers). These enable the virtual SMP and live migration features. As the servers will typically be accessed through the network, VNC, SSH, or remote X sessions may be leveraged. The console can also be leveraged in run level 5 after a server reboot, as in Figure 10.

Figure 10. Red Hat 4 virtual server console

Creating a virtual server with Windows Server 2003
A method similar to the Linux installation was tested; two ISO files were copied to the management server. After the basic installation of Windows, the R2 setup utility, R2SETUP2.exe, within the CMPNENTS subdirectory of the second ISO image, provided the means to install the R2 components after a reboot. Figure 11 provides a graphic. After rebooting the virtual server and installing all the Windows updates via the Windows Update utility, the Virtual Iron VSTools were installed.

Figure 11. Windows Server 2003 R2

In conclusion, many virtual servers were installed and tested with both Windows Server 2003 32-bit and Red Hat Linux 64-bit. The Hardware and software requirements section below provides specifics regarding software and hardware acquisition.

Hardware and software requirements

Hardware requirements
Several ProLiant servers were tested as Virtual Iron 3.1 management servers:

• DL360 G4
• DL385
• BL460c

Servers leveraged as Server Nodes must provide hardware assisted virtualization. Check the QuickSpecs of the server under consideration.

ProLiant servers tested as Server Nodes:

• DL380 G5
• DL385 G2
• DL580 G3
• BL460c
• BL465c
• BL480c

Software requirements
The following software was tested for the installation described in this guide:

• Microsoft Windows Server 2003 --- CD-ROM
• Red Hat Enterprise Linux Advanced Server 4.0 Update 4 64-bit --- CD-ROM
• Virtual Iron software 3.1 --- downloaded from Virtual Iron
• Virtual Iron license file --- downloaded from Virtual Iron
• Java 5 --- downloaded from Sun.com
• Acrobat Reader --- downloaded from Adobe.com
• ProLiant SmartStart --- downloaded from HP.com

A commercial Virtual Iron 3.1 Enterprise Edition software license is required for the management of more than one server node. Contact Virtual Iron for more information.

Recommended configuration BOM

Table 1. Hardware BOM

Quantity Description Part number

Management Server

1 HP ProLiant BL460c G1 5160 2G 1P Svr 416656-B21

2 HP 72GB 3G SAS 15K 3.5" SP HDD 431935-B21

(QLogic HBA leveraged for storage array management)

1 HP BLc QLogic QMH2462 FC HBA Opt Kit 403619-B21

MSA1000 Storage Array

1 HP StorageWorks Modular Smart Array 1000 201723-B22

1 HP 256MB Battery-Backed Cache Module 254786-B21

1 MSA SAN Switch 2/8 288247-B21

8 HP 36GB 15K U320 Pluggable Hard Drive 286776-B22

2 5m SW LC/LC FC Cable 221692-B22

c-Class HP BladeSystem

1 HP BLc7000 1 PH 2 PS 4 Fan Full ICDC Kit 403321-B21

2 Brocade BladeSystem 4/24 SAN Swt Powr Pk AE371A

2 HP GbE2c Ethernet Blade Switch for c-Class BladeSystem 410917-B21

4 HP BLc7000 Encl Pwr Sply IEC320 Option 412138-B21

6 HP BLc7000 Encl Single Fan Option 412140-B21

2 HP ProLiant BL460c G1 5160 2G 1P Svr 416656-B21

2 HP ProLiant BL480c G1 5160 4G 2P Svr 416669-B21

6 HP BLc QLogic QMH2462 FC HBA Opt Kit 403619-B21

2 HP ProLiant BL465cG1 2218 DC 1P 2G Svr 407235-B21

2 HP 40A HV Core Only Corded PDU 252663-D75

8 HP 4GB SW Single Pack SFP Transceiver A7446B

HP StorageWorks EVA 8000 storage array

1 1-terabyte EVA8000 with DL380 storage management appliance

Virtual Iron 3.1 review

The following steps are recommended as an introduction to the operation and capabilities of Virtual Iron 3.1 and provide a basis of a more complete proof of concept of the virtual data center. These steps were included in the joint lab validation performed in the HP labs.

Recommended steps
• Follow the documented installation
• Connect to the Virtualization Manager
• Create a virtual data center
• Create a virtual server
• Install an operating system
• Start and connect to the virtual server

Use standard networking protocols such as remote X, SSH, or VNC to connect to the virtual server, or use the console function integrated into the Virtualization Manager Web interface. After these steps are completed, install the required application software and test system performance with your specific applications. Virtual Iron provides highly efficient hosting of multiple instances of the Red Hat, SUSE and Windows Server 2003 operating systems. Virtual Iron is ideal for processing-, networking-, and disk I/O-intensive applications such as middleware. Test both small and large virtual servers, including two-, four- and even eight-way SMP virtual computers. Install a mix of lightweight and demanding Linux applications, as Virtual Iron handles both well.

Summary

This document provides installation guidance and a basic introduction to the use of the Virtual Iron 3.1 Linux and Windows hardware virtualization and virtual data center management system utilizing HP ProLiant rack mount and blade servers. The resulting system provides highly efficient virtual hosting of multiple Linux and Windows server instances. If blade servers are utilized, a simple blade swap may eliminate the physical reinstallation of operating systems and applications when managing server refreshes. Mean time to repair may also be greatly reduced, ensuring optimum response to SLAs and server maintenance requirements. HP would like to thank Virtual Iron for its contribution to the industry that HP serves.

For more information

Virtual Iron Software Home: http://www.virtualiron.com/
HP.com – ProLiant servers – Industry standard servers: http://www.hp.com/go/proliant

To help us improve our documents, please provide feedback at www.hp.com/solutions/feedback.

© 2007 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Intel and Xeon are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. AMD, AMD Opteron, AMD Virtualization, and AMD-V are trademarks of Advanced Micro Devices, Inc. Java is a US trademark of Sun Microsystems, Inc. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation.

4AA1-0844ENW, Rev. 1, March 2007