EMC Virtual Storage Appliance (VSA)

&

VMware Site Recovery Manager

Building a low-cost, self-contained, learning, testing and development environment using EMC Celerra Virtual Storage Appliances (VSAs) with VMware Virtual Infrastructure and VMware Site Recovery Manager (SRM)

Revision 2.0

July 15, 2009

Bernie Baker Sr. VMware Specialist

EMC Corporation VMware Affinity Team (339) 293-2320

[email protected]

EMC Celerra Virtual Storage Appliance (VSA) & VMware Site Recovery Manager

Source Material:

http://virtualgeek.typepad.com/virtual_geek/2009/04/new-celerra-vsa.html

http://virtualgeek.typepad.com/virtual_geek/2008/11/celerra-virtual-appliance-howto-301---replicating-between-two-celerra-vsas.html

Administering VMware Site Recovery Manager 1.0, Mike Laverick

VMware Site Recovery Manager 1.0 Evaluator Guide

VMware Site Recovery Manager 1.0 Update 1 Administration Guide

EMC® Celerra® NS Series iSCSI, EMC Celerra Replicator™ Adapter for VMware Site Recovery Manager, Version 1.1 Release Notes EMC P/N 300-007-023

Configuring iSCSI Targets on EMC Celerra EMC P/N 300-004-153

Using EMC® Celerra Replicator™ (V2) P/N 300-004-188

On-Demand Web Replay: EMC Celerra Replicator Delivers Advanced IP Storage Protection

http://www.userlocal.com/helpvi.php

THE CONTENTS OF THIS WORK ARE NOT MEANT TO BE A COMPLETE INSTALLATION AND CONFIGURATION GUIDE FOR THE EMC CELERRA OR VMWARE SRM. THIS DOCUMENT STRIVES TO PROVIDE INFORMATION REGARDING THE INITIAL CONFIGURATION OF THE AFOREMENTIONED PLATFORMS IN A MANNER TO SUPPORT LEARNING, TESTING AND DEVELOPMENT IN A NON-PRODUCTION VMWARE VIRTUAL INFRASTRUCTURE ENVIRONMENT WITH VMWARE SITE RECOVERY MANAGER (SRM).

NOTE: THIS CONFIGURATION, SPECIFICALLY RUNNING CELERRA VSAS ON VMWARE ESX, IS NOT SUPPORTED IN PRODUCTION ENVIRONMENTS BY EMC CORPORATION OR VMWARE, INCORPORATED. BEST EFFORT SUPPORT IS AVAILABLE FROM THE COMMUNITY OF USERS AND MAINTAINERS.

HTTP://FORUMS.EMC.COM

HTTP://VIRTUALGEEK.TYPEPAD.COM

ADDITIONAL SUPPORT IS PROVIDED BY THE VMWARE AFFINITY TEAM WITHIN EMC. CONTACT YOUR LOCAL EMC OFFICE FOR CONTACT INFORMATION.

ALSO, NOTE THAT YOU CANNOT USE THE EMC CELERRA VSA FOR ANYTHING BEYOND TESTING AND DEVELOPMENT WITHOUT VIOLATING THE LICENSE AGREEMENT WHICH CAN BE FOUND HERE:

FTP://FTP.DOCUMENTUM.COM/VMWARECHAMPION/VIRTUAL_MACHINE_LIBRARY/CELERRA/CELERRA_SIMULATOR_-_EVAL_EDITION_SOFTWARE_LICENSE_AGREEMENT__2_.PDF

Page 2 of 184 EMC Celerra Virtual Storage Appliance (VSA) & VMware Site Recovery Manager

Table of Contents

Overview ...... 5
Diagram ...... 5
Required Server and Network Hardware ...... 5
Required Software ...... 6
Required Infrastructure ...... 6
Required Commitment ...... 7
General Comment ...... 7
Small Request ...... 7
My Network ...... 8

Section 1: The Basics ...... 9
  Assumptions ...... 10
  Step 1: Download and Import Celerra Virtual Storage Appliance (VSA) ...... 11
  Step 2: Configuring the Celerra VSA VM ...... 15
  Step 3: Configuring the Celerra VSA TCP/IP addresses ...... 18
  Step 4: Configuring the Celerra VSA to be Unique ...... 22
  Step 5: Licensing ...... 35
  Step 6: Configuring cge IP addresses ...... 41

Section 2: Adding Physical Storage ...... 45
  Step 1: Add new “Hard Disk(s)” to your Celerra VSA VM ...... 45
  Step 2: Configure the Celerra VSA to use the new storage ...... 47
  Step 3: Adding “Physical” Disk to the Celerra VSA ...... 52

Section 3: Review ...... 55

Section 4: Configuring Celerra Replication ...... 56
  Step 1: Configuring NTP (Network Time Protocol) ...... 57
  Step 2: Correcting the Celerra VSA Replication Database ...... 60
  Step 3: Configuring Replication Using the Celerra Manager GUI ...... 65
  Step 4: Preparing ESX Servers for iSCSI Targets and LUNs ...... 89

Section 5: Site Recovery Manager Installation ...... 99
  Step 1: Site Recovery Manager Database Connectivity ...... 100
  Step 2: Copying Site Recovery Manager, the SRM patch and SRA ...... 108
  Step 3: Installing Site Recovery Manager ...... 108
  Step 3: Patching Site Recovery Manager ...... 111
  Step 4: Installing the Storage Replication Adapter ...... 111
  Step 5: Installing the SRM Plug-in for Virtual Center ...... 112

Section 6: Site Recovery Manager Configuration ...... 117
  Step 1: Configuring the Connection ...... 118
  Step 2: Configuring the Array Managers ...... 121
  Step 3: Configuring Inventory Mappings (Optional) ...... 126
  Step 4: Configuring Protection Groups ...... 128
  Step 5: Configuring Recovery Groups ...... 131
  Step 6: Testing ...... 135

Section 7: Automated SRM Failback ...... 139
  Step 1: Installing the Celerra Failback Wizard ...... 139
  Step 2: Configuring the plugin ...... 141
  Step 3: Preparing for SRM Failover ...... 144
  Step 4: SRM Failover ...... 145
  Step 5: Using the Celerra Failback Wizard ...... 150

Section 8: Wrap-up ...... 156

Appendix A: Configuring the Replication Target (Command Line Interface) ...... 157
  Configuring Replication ...... 160

Appendix B: iSCSI and NFS Discrete LUN Creation & Assignment ...... 162
  Step 1: Creating iSCSI Targets ...... 162
  Step 2: Configuring iSCSI LUNs and presenting them to ESX as VMFS Datastores ...... 164
  Step 3: Configuring NFS LUNs and presenting them to ESX as NFS Datastores ...... 175

Appendix C: Basic VI Commands ...... 182

Appendix D: Troubleshooting Tips ...... 184

Overview

There are two goals associated with this initiative. The first is the installation and configuration of “geographically separate” Celerra Virtual Storage Appliance (VSA) instances, complete with bidirectional replication. The second goal leverages these Celerra VSAs for the purpose of testing and developing a working knowledge of VMware’s Site Recovery Manager (SRM). The ultimate goal is to build an environment similar to the one depicted in the diagram below.

Diagram: VMware Demo Environment

[Diagram: the Site Recovery Manager (SRM) protected site and recovery site. Each site runs VirtualCenter 2.5 Update 3 (Build 119598) and SRM 1.0 Update 1 on VCPROD1 (protected) and VCDR1 (recovery), with site-to-site SRM communications via TCP/IP; the VirtualCenter VMs (VMW Virtual Center 1 and 2) are not replicated. Each site is a Dell PowerEdge 1850 (two 2.8GHz Intel CPUs, 4GB RAM, two GbE Broadcom adapters, PERC4 RAID controller, two 73GB SCSI disks in RAID1, Dell Remote Access Card) running ESX 3.5 U4 under an evaluation license, hosts ESX6 and ESX7; the DRAC IP addresses are redacted. The protected site hosts the Celerra VSA (csprod), one MS Active Directory VM, two MS Windows XP VMs and MS SQL Server 2005; the recovery site hosts the Celerra VSA (csdr1) and the protected-site VM staging area. The guest disks are provisioned from within the Celerra VSA VMs, while the disk used to set up the Celerra VSA VMs is provisioned from the storage internal to the physical server ESX is running on. Array replication between the sites is asynchronous, via Celerra Replicator.]

Required Server and Network Hardware

In order to accomplish this task, the following physical equipment is required:

• Two physical servers (in my lab environment I used x86_64 Intel-based boxes):
  o Intel or AMD class, single processor or better
  o 4GB RAM minimum, 6-8GB preferred (the more you have, the more testing you are able to accomplish, e.g., more virtual machines)
  o Internal disk with at least 100GB of free capacity, or access to an external shared storage device
  o Two physical GbE NICs per server
• GbE network (a simple GbE switch will suffice; flow control and trunking support will improve overall iSCSI performance)

Need to build servers on a budget? Check out the following link:
http://virtualgeek.typepad.com/virtual_geek/2008/06/building-a-home.html


Required Software

• VMware ESX 3.5 Update 4 (Evaluation License: 60 days by default from VMware download site)
• VMware Virtual Center 2.5 Update 4 (Evaluation License as per above)
• VMware Site Recovery Manager 1.0 Update 1 (Evaluation License as per above)
• EMC Celerra Failback Wizard (v1.0.1) for VMware Site Recovery Manager (PowerLink)
• The EMC Celerra Virtual Storage Appliance (VSA)
• Microsoft Windows XP or Windows 2003 Server R2 Enterprise Edition for use with the VMware Virtual Center Server
• Microsoft Windows XP for use as a utility device as required. It makes sense to use several of these for SRM testing
• VMware SCSI Floppy image for installing XP in a VM on ESX
• Windows 2003 Server R2 Standard Edition or Enterprise for various servers you may want to create and test against, e.g., Active Directory, DNS, SQL, Exchange, etc.
• Other Operating Systems such as Solaris x86, Red Hat Enterprise Linux, CentOS or Ubuntu
• EMC Celerra Replicator Adapter for VMware Site Recovery Manager
• Microsoft SQL Server 2005 Management Studio Express (SSMSEE)
• Microsoft SYSPREP (optional) – provides for customization of VMs deployed from a template

Required Infrastructure

There are several components in this lab that work better if you have Active Directory, Domain Name Server (DNS), Windows Internet Name Service (WINS) and DHCP enabled. These don’t have to be large, elaborate implementations; in fact, all of these services could be set up in a single VM on one of your ESX hosts. Think about simple domain names like lab.com or vsa-srmlab.com. Create a DHCP scope to hand out TCP/IP addresses, a default gateway, DNS and WINS information. Have the client devices register host information automatically with DNS; there is a checkbox for this in the TCP/IP properties of the Ethernet adapter for the defined virtual machines. For those devices with static IP addresses, add them to the DNS tables as “A” record (host) entries. Don’t forget to click the PTR (reverse lookup) check box.

A quick note about disk space: you should plan on allocating disk capacity for the primary and secondary site Celerra VSAs as follows:

• Primary site: as required/desired to support your environment. The default, required minimum is 40GB. Add space as required.
• Secondary site: plan to allocate a file system that is 2.5-3 times larger than the replication target LUN. Without this space you will have issues creating the snapshots required for Site Recovery Manager. A full failover will work, but the test function within SRM will likely fail without the ability to create array-based snapshots.
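As a back-of-the-envelope check of that sizing rule, here is a small shell sketch; the 40GB LUN size is just the default VSA disk used as an example, not a requirement:

```shell
# Sizing helper for the 2.5-3x secondary-site guidance above.
# LUN_GB is an example value (the 40GB VSA default).
LUN_GB=40
MIN_GB=$(( LUN_GB * 25 / 10 ))   # 2.5x lower bound
MAX_GB=$(( LUN_GB * 3 ))         # 3x upper bound
echo "Secondary site file system: ${MIN_GB}-${MAX_GB} GB for a ${LUN_GB} GB LUN"
```

So a 40GB replication target LUN calls for roughly a 100-120GB file system on the recovery-site VSA.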


Required Commitment

The task you are about to embark on will take some time to set up and configure. In my estimation, you should be able to accomplish the complete implementation in a day or two; truthfully, it depends on your experience and comfort with the tools. The first time I went through the task this document didn’t exist, and it took me and a colleague two days to complete all of the required tasks, leveraging a multitude of different documents and subject matter experts. Now that we have done it several times, we can have a full environment up and running within 8 hours. That assumes two people working from scratch: installing ESX, setting up Active Directory, test machines, the Celerra VSA installs and the SRM configuration. There is a lot of work to accomplish, but if you stick with it, the benefits will be instantly recognizable.

General Comment

If you need any assistance setting up your VMware ESX infrastructure I suggest reviewing these documents:

• Release Notes: ESX Server 3.5 Update 4 and VirtualCenter 2.5 Update 4
• Quick Start Guide
• Basic System Administration

Small Request

This document represents a great deal of effort on my behalf and that of several of my peers on EMC’s VMware Affinity Team. We appreciate your desire and willingness to test VMware and EMC in your environment. If you have any questions or comments, please forward them to my email address: [email protected] or [email protected]. Please let me know if you find any problems or glaring omissions; I have tried to ensure completeness and accuracy.

My Network

[Diagram: my EMC Celerra VSA and VMware Site Recovery Manager (SRM) lab environment, protected site and recovery site. In the original diagram, red text indicates the interfaces connected to and supporting iSCSI traffic. Each site runs VirtualCenter 2.5 Update 3 (Build 119598) with a License Server and SRM 1.0 Update 1 on VCPROD1 and VCDR1 (non-replicated), communicating site-to-site via TCP/IP. Each site uses Dell PowerEdge 1850 servers (qty 2; two 2.8GHz Intel CPUs, 8GB RAM, two GbE adapters, PERC4 RAID controller, two 73GB 15K SCSI disks in RAID1, Dell Remote Access Card DRAC4 on the 192.168.1.0 network) running ESX 3.5 U2 Build 82663 under evaluation licenses, in a VMware DRS/HA cluster with VMotion enabled. Each ESX host has eth0 on the 192.168.1.0 network and eth1 on a 10.100.x.0 iSCSI network. The protected site hosts the Celerra VSA (csprod1), two MS Active Directory VMs, two MS DNS VMs, one MS DHCP VM and four MS Windows XP clients; the recovery site hosts the Celerra VSA (csdr1) and the protected-site VM staging area. The disks for these VMs are provisioned from within the Celerra VSA VMs; the disk used to set up the Celerra VSA VMs is provisioned from the storage internal to a physical server on which ESX is running (ESX1, not pictured). Each Celerra VSA VM has eth0 (General/NFS/CIFS, 192 net), eth1 (iSCSI, 10 net), eth2 (Control Station, 192 net), cge0 (Data Mover NFS/CIFS target, 192 net) and cge1 (Data Mover iSCSI target, 10 net). Array replication between the VSAs is asynchronous, via Celerra Replicator (v2). Specific host IP addresses are redacted (xxx.xxx.xxx.xxx).]

All server and Celerra VSA TCP/IP addresses are static; the client TCP/IP assignments are accomplished via an MS DHCP server running as a VM on ESX1. DNS Domain: baker-iss.com

Section 1: The Basics

Important Note:

This document makes mention of a second (disaster recovery target) Celerra VSA for use with VMware’s Site Recovery Manager. All of Section 1 and Section 2 will need to be completed for two Celerra VSA instances on separate VMware ESX hosts. This document assumes the following names for these instances: csprod and csdr, where csprod represents a mock production environment (a.k.a. the protected site) and csdr is the disaster recovery partner (a.k.a. the recovery site).

I assume you, as the reader and implementer, will create the recovery site Celerra (csdr) using the same steps required to configure the protected site (csprod). This will be accomplished without specific instructions for the recovery site since they are identical to the protected site with name and network changes applied.

This text will cover the specific steps necessary to configure replication components required for the Celerra VSAs. Additionally, we will discuss, in detail, the steps required to setup and configure VMware Site Recovery Manager on both the protected and recovery sites. I will also cover the EMC Celerra Failback Wizard for VMware Site Recovery Manager. If you haven’t already seen it, the Celerra Failback Wizard is an automated tool, developed by the Celerra Solutions Engineering Team, that provides a mechanism for SRM recovery that is truly unique in the industry. At the time of this writing, there is not another storage company that has developed this type of solution or demonstrated this level of integration with SRM.


With the intention of streamlining the installation and configuration process, this document focuses on the core steps required to build out two EMC Celerra VSAs to support replication and disaster recovery using Site Recovery Manager (SRM) from VMware. As mentioned on page 2, some of the source material contained within was provided via the VirtualGeek blog; I’ll reference the blog on a few occasions throughout this document.

Please note the main URL for this site is: http://virtualgeek.typepad.com

Assumptions

• I have assumed some core competency with respect to Linux, VMware ESX and VMware vCenter. I have added tips on specific Linux commands and tried to remain consistent with those tips throughout the text
• Active Directory implementation to support the environment (I set up two AD virtual machines, one on each ESX host)
• Two preconfigured VMware ESX 3.5 Update 4 servers under the management of VMware vCenter 2.5 Update 4. (My vCenters were implemented as virtual machines on the ESX servers that each vCenter managed – not a best practice for production, but this is a lab.)
• You will need full, administrative access to all of the platforms discussed
• Need a little help with vi? See Appendix C – Basic vi Commands
• One last thing… The Celerra VSA is implemented in a service known as “blackbird” and is sometimes referred to as a Celerra Simulator.

So enough of the overview… Let’s get started with the Celerra VSAs!


Step 1: Download and Import Celerra Virtual Storage Appliance (VSA)

Ensure you have the latest version of the Celerra Virtual Storage Appliance (VSA). It is available at the following location:

• VirtualGeek, a blog maintained by Chad Sakac, Vice President, VMware Technology Alliance

THIS IS AN EXTREMELY LARGE FILE (approximately 1.5GB).

ALLOW ADEQUATE TIME FOR THE DOWNLOAD TO COMPLETE

Once downloaded:

• Unzip the file
• Make note of the location of the .OVF file
• Start the Virtual Infrastructure Client and attach to your “production” vCenter instance (in my case it was named vcprod1)
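The unpack-and-locate steps above can be sketched in shell. This is a hedged illustration: the archive name is a hypothetical stand-in, and the mkdir/touch lines merely simulate the unpacked download so the sketch runs anywhere:

```shell
# unzip celerra_vsa_download.zip -d celerra_vsa   # the real step; archive name is hypothetical
mkdir -p celerra_vsa && touch celerra_vsa/celerra.ovf   # stand-in for the unpacked files
# Make note of the .OVF location for the vCenter import:
find celerra_vsa -iname '*.ovf'
```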

Don’t forget to come back and perform these steps again for the “recovery” site. You will attach to the “recovery” site vCenter instance (mine was named vcdr1) and configure the Celerra VSA in the same manner as the “production” site


From vCenter, Click “File”, “Virtual Appliance”, “Import”

Click on the Import from File radio button, click “Browse”, browse to the .OVF file and click “Open”. Click “Next” when returned to the Import Location panel.


Identify the name of the VM (csprod or csdr) and the target datastore. In most cases, I put the Celerra VSA on the local datastore of the ESX host running the virtual machine; I do this for convenience and because it does not consume any of the limited network storage resources in my lab. If and when you add additional storage to the Celerra VSA, the source is inconsequential: it can come from a local source (internal server disk) or a network-based source such as a physical EMC Celerra or other storage device. If you are using internal, non-shared storage, you will want to exclude the Celerra VSA from DRS. This is not a requirement, but DRS requires access to shared storage to do its job.

Setup network mappings (leave as default for now)

• Summary, Finish, Import process started. Typically the import process takes about 15-20 minutes to complete.

Note: The OVF is approximately 1.5GB. When finished importing, the file will be in eager-zeroed thick VMDK format and approximately 40GB.

• Once imported Click Close

The Celerra VSA VM will be displayed in the Virtual Center Inventory.

Prior to configuring anything else; clone the Celerra VSA VM to Template from vCenter for future use.

Right Click on the Celerra VSA VM and Select Clone to Template.

Note: The Celerra VSA VM needs to be powered off in order to complete this step

Give it a name and datastore location; identify the ESX cluster on which to store the template.

Store it in Compact format.


Step 2: Configuring the Celerra VSA VM Edit the Virtual Machine (VM) settings

Right Click on the Celerra VSA VM and Select “Edit Settings”

Change the memory allocated from the default to 2048MB (the minimum for initial testing) or 3072MB (for replication or SRM). If you have the extra memory, set this to 4096MB as I did for my configuration.

Configure Networks to map appropriately to your ESX configuration – This configuration requires two physical NICs in your ESX host. If you have more than two that is ok but this document assumes only two.

eth0 – vSwitch0 (Connected to an IP LAN Segment for NFS access)
eth1 – vSwitch1 (Dedicated to iSCSI traffic)
eth2 – vSwitch0 (Maps to the Celerra Control Station Interface)

Click “OK” when you have finished making these changes.

A quick note about performance:

If your ESX server hardware supports hardware virtualization (Intel VT or AMD-V), I would recommend enabling it in the system BIOS.


Historically, the Celerra VSA’s graphical user interface (GUI) has been a slow performer. Enabling VT and setting the Celerra VSA VM to take advantage of this feature will increase GUI performance dramatically.

See the following link for additional information:
http://virtualgeek.typepad.com/virtual_geek/2009/05/using-vsphere-and-hw-offload-for-improved-celerra-vsa-performance.html

Configure the VM via vCenter by right-clicking on the VM and selecting “Edit Settings”. Select the Options tab, select the Virtualized MMU setting and click on the “Force use of these features where available” radio button.

Click “OK”


If you installed the Celerra VSA on a vSphere (ESX 4.0) server the Edit Settings screen would look like this:

Right Click on the Celerra VM and Select “Power On”

Note: During the initial boot of the Celerra VSA VM the “blackbird” service may hang. This only affects the initial boot and will not occur during subsequent restarts. Reset the Celerra VSA VM by powering it off and back on.


Step 3: Configuring the Celerra VSA TCP/IP addresses

The following steps are specific to the Celerra VSA only. A physical Celerra offers a configuration wizard for these steps

Right Click on the Celerra VSA VM. Select “Open Console”

Please note: You will need to click inside the VM console window in order to interact with the command line. If the screen is blank, click inside the window and hit the Enter key. If you need to interact with some other window, press Ctrl-Alt to release the cursor from the VM window. This is required, as the VMware Tools are not installed on the VM.

Login to the Control Station; user: root, password: nasadmin

Type the following command (do not type [root@localhost ~]#, as this is the system prompt):

[root@localhost ~]# ifconfig | less

The less command will allow for scrolling up and down within the command output for the purpose of viewing all of the interfaces.
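If you only want the assigned addresses rather than the full listing, a grep filter works too. This is a hedged sketch, run here against a canned sample in the old Linux ifconfig output format so it is safe to try anywhere; on the Control Station you would pipe the real ifconfig output instead:

```shell
# Sample mimicking the Control Station's ifconfig output (addresses/MACs are examples).
sample='eth0      Link encap:Ethernet  HWaddr 00:0C:29:AA:BB:CC
          inet addr:192.168.1.50  Bcast:192.168.1.255  Mask:255.255.255.0
lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0'
# On the VSA you would run:  ifconfig | grep -oE 'inet addr:[0-9.]+'
addrs=$(printf '%s\n' "$sample" | grep -oE 'inet addr:[0-9.]+')
echo "$addrs"
```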


The virtual NICs (vNICs) are as follows:

 dart-eth0: (Internal Interface for DART OS Data Mover – Celerra VSA service)

 dart-eth1: (Internal Interface for DART OS Data Mover – Celerra VSA service)

 eth0: connected to first vNIC

Note: By default this adapter maps to cge0 and cge1 on the Celerra VSA Manager but this needs to be changed – we’ll do that later and explain what cge0 and cge1 are used for.

 eth1: connected to the second vNIC

 eth2: management interface for the Celerra VSA Control Station

eth2 may not be listed at this point. If not, don’t worry, we’ll add it momentarily.

 lo: loopback

To exit the scrolled page view of the less command, type “q” without the quotes (Ctrl-Z will suspend, rather than exit, the viewer).


Configure the NICs

The following set of commands will configure the TCP/IP address information and reset the Virtual NICs accordingly.

Type the following command:

[root@localhost ~]# netconfig -d eth0

netconfig will launch a pseudo-GUI to accomplish this task. Enter the appropriate values for each interface: eth0 will act as the NFS interface, eth1 will handle the iSCSI traffic and eth2 will be the Celerra VSA Control Station interface. The Control Station interface (eth2) will also act as the target interface for Secure Shell (SSH) sessions and the Celerra Manager GUI.

Use the <Tab> key to move between the fields on the form.

Repeat the process for eth1 and eth2

[root@localhost ~]# netconfig -d eth1
[root@localhost ~]# netconfig -d eth2
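Behind the scenes, netconfig records what you enter in per-interface files under /etc/sysconfig/network-scripts/. As a hedged sketch of the resulting file (written to a scratch file here so it runs anywhere; the address values are examples from a 192.168.1.0 lab network, not requirements):

```shell
# Approximate ifcfg-eth0 contents after netconfig (values are illustrative).
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.50
NETMASK=255.255.255.0
ONBOOT=yes
EOF
cat "$CFG"
```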

Once completed enter the following commands in ordered succession.

[root@localhost ~]# ifdown eth0
[root@localhost ~]# ifdown eth1
[root@localhost ~]# ifdown eth2
[root@localhost ~]# ifup eth0
[root@localhost ~]# ifup eth1
[root@localhost ~]# ifup eth2
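The six-command bounce above can be collapsed into two loops. The sketch below only echoes the commands so it is harmless to run anywhere; on the Control Station itself, drop the echo:

```shell
# Generate the ifdown/ifup sequence for all three interfaces, in order.
cmds=$(for nic in eth0 eth1 eth2; do echo "ifdown $nic"; done
       for nic in eth0 eth1 eth2; do echo "ifup $nic"; done)
echo "$cmds"
```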


Alternatively, you could reboot the Celerra VSA (but that takes more time) using the command reboot -n from the system prompt: [root@localhost ~]#

Validate the TCP/IP address information has been assigned correctly:

[root@localhost ~]# ifconfig | less (check the IP addresses for proper configuration)

Ping each of the interface addresses including the DART interfaces and ensure a positive response:

[root@localhost ~]# ping <eth0 IP address>
[root@localhost ~]# ping <eth1 IP address>
[root@localhost ~]# ping <eth2 IP address>
[root@localhost ~]# ping 128.221.252.2 (Note: this is the simulated data mover)
[root@localhost ~]# ping 128.221.253.2 (Note: this is the simulated data mover)


Step 4: Configuring the Celerra VSA to be Unique

These steps are required for Celerra VSA Replication and VMware Site Recovery Manager to function properly.

In these steps we will change the host name and correct the fact that the MAC addresses have changed as a result of the .OVF import process.

You should still be logged in as root. If not, login to the Control Station (eth2 IP address) via SSH, or use the Open Console selection from the right-click menu on the Celerra VSA virtual machine. If you prefer an SSH client, one is freely available from http://www.putty.org

user: root password: nasadmin

[root@localhost ~]# cd /opt/blackbird/tools
[root@localhost tools]# ls -l (this displays the contents of the /opt/blackbird/tools directory)

Enter the following command:

[root@localhost tools]# ./init_storageID

This regenerates the Celerra VSA serial number to match the VM UUID. The Celerra VSA was created from a clone; this step removes the cloned serial number and replaces it with a unique identifier. This is a critical step, as we will set up replication later and each Celerra VSA instance in a replication partnership needs a unique ID. Note: you may get a message that states “No change needed…”

Once completed we need to update the host file using the following command:

[root@localhost tools]# vi /etc/hosts (see Appendix C for assistance with vi commands)

Insert an entry below the localhost entry as follows:

192.168.1.52 csprod csprod.mydomain.com csprod

Use the IP address you specified for the Celerra VSA Control Station interface (eth2)

Save the file by hitting <Esc> then typing :wq!
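The same /etc/hosts edit can also be made non-interactively and idempotently. A hedged sketch against a scratch copy, so it is safe to try anywhere; on the VSA you would swap in /etc/hosts and your own eth2 address:

```shell
HOSTS=$(mktemp)                                 # scratch stand-in for /etc/hosts
printf '127.0.0.1 localhost\n' > "$HOSTS"
add_host() {
  # Append the entry only if it is not already present.
  grep -q 'csprod' "$HOSTS" || printf '192.168.1.52 csprod.mydomain.com csprod\n' >> "$HOSTS"
}
add_host
add_host    # second call is a no-op, so reruns are safe
cat "$HOSTS"
```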


Now for the network file in /etc/sysconfig. This will ensure the host boots with the proper name.

[root@localhost tools]# vi /etc/sysconfig/network

Add the following lines to the bottom of the existing file:

DOMAINNAME=mydomain.com (in my case: baker-iss.com)
HOSTNAME=csprod

Save the file by hitting <Esc> then typing :wq!

In order to save a reboot, issue this command:

[root@localhost tools]# hostname csprod

This changes the host name immediately. The previous two file edits (/etc/hosts and /etc/sysconfig/network) will ensure host name retention over a reboot.
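Those two persistence edits can also be scripted. A hedged sketch against a scratch copy of the network file, so it runs anywhere; on the VSA, substitute the real /etc/sysconfig/network and your own domain:

```shell
NET=$(mktemp)                          # scratch stand-in for /etc/sysconfig/network
printf 'NETWORKING=yes\n' > "$NET"     # simulate the existing file contents
cat >> "$NET" <<'EOF'
DOMAINNAME=mydomain.com
HOSTNAME=csprod
EOF
grep 'HOSTNAME' "$NET"
```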

[root@localhost tools]# exit (you may need to repeat this a couple of times to get back to the login prompt)

csprod login:

Notice the host name has changed; it used to look like this: localhost login: (see the screen shot below)

We will now login with the nasadmin user account. This account is used for all activities related not to the configuration of the Linux operating environment, but to the actual Celerra VSA/DART environment.

Login to the Control Station (eth2 IP Address); user: nasadmin password: nasadmin


Adjustments required for the new NIC MAC addresses: The Celerra VSA service (blackbird) stores its interface MAC addresses, so we need to change them. This is due to the fact that we created this Celerra VSA instance from an .OVF file which was created from a physical EMC Celerra platform. The blackbird service is aware of the previous physical network interface MAC addresses, and we need to change them to match the MAC addresses of the Celerra VSA virtual machine’s virtual NICs (vNICs). We’ll accomplish this with the following set of commands:

[nasadmin@csprod ~]$ cd /opt/blackbird/tools
[nasadmin@csprod tools]$ ls -l (this displays the contents of the /opt/blackbird/tools directory)
[nasadmin@csprod tools]$ ./configure_nic server_2 -l

This lists the NICs in their current configuration

Notice that both cge0 and cge1 map to eth0; we will delete and re-add them to resolve this problem

[nasadmin@csprod tools]$ ./configure_nic server_2 -d cge0 (delete cge0)
[nasadmin@csprod tools]$ ./configure_nic server_2 -d cge1 (delete cge1)

As each of these commands executes, the output shows the remaining configured cge interfaces. Since we have run the command twice, the output from the second execution shows an empty list.


For additional confirmation, execute the same command with the “list” option:

[nasadmin@csprod tools]$ ./configure_nic server_2 -l

The output should confirm the deletion and non-existence of the previously defined cge interfaces.

[nasadmin@csprod tools]$ export NAS_DB=/nas

Now that the cge interfaces have been deleted and the NAS_DB has been exported, we need to reboot the Celerra VSA. When the system comes back up we will re-add these interfaces and map them to the appropriate eth interfaces.

The following is the Linux switch user command. By default, su assumes the current user is switching to the root user account. The “-”, as in su -, forces the system to load the new user’s profile information. In short, su - means “switch user to root and load root’s profile.”

[nasadmin@csprod tools]$ su -
Password: nasadmin


This command will shut down Linux, warm-reset the virtual machine and restart the Celerra VSA:

[root@csprod tools]# reboot -n


Post Reboot #1: Login to the Control Station (eth2 IP Address); user: nasadmin password: nasadmin

The overall boot process takes approximately 20 minutes to complete. The system will appear to be up, but the Celerra VSA (blackbird) service may not be fully operational.

Note: If you enabled BIOS VT support the getreason command below will typically return a “contacted” status for slot_2 within 3 minutes and the Web GUI should be available within 7-8 minutes.

To check the status of the service issue the following command:

[nasadmin@csprod ~]$ /nas/sbin/getreason

When slot_2 reports “contacted”, the Celerra VSA is up and fully operational. You may notice other items in the list that refer to slot_x, where “x” is a number from 4 to 9. Ignore these.

NOTE: If you don’t want to repeatedly type the “getreason” command, type the following:

[nasadmin@csprod ~]$ watch -n 5 /nas/sbin/getreason

This will issue the command and refresh the interface every 5 seconds.

Press Ctrl+C to exit the watch command.
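If you would rather the shell return automatically once the data mover is ready, the watch-and-wait step can be expressed as a small polling loop. This is a sketch, not a Celerra tool: on a real VSA you would pass /nas/sbin/getreason to the function; here it is stubbed with echo so the sketch runs anywhere.

```shell
#!/bin/sh
# Sketch: poll a status command until its output contains "contacted",
# then return -- an alternative to leaving `watch` running.
wait_for_contacted() {
  while :; do
    out=$("$@" 2>/dev/null)
    case "$out" in
      *contacted*) echo "data mover is up"; return 0 ;;
    esac
    sleep 5
  done
}

# Stubbed example -- replace `echo ...` with /nas/sbin/getreason on the VSA:
wait_for_contacted echo "5 - slot_2 contacted"
```

On the VSA itself you would invoke it as `wait_for_contacted /nas/sbin/getreason`.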

We now need to add the cge interfaces back to the configuration and map them to their appropriate eth interface.

[nasadmin@csprod ~]$ cd /opt/blackbird/tools
[nasadmin@csprod tools]$ ls -l (this displays the contents of the /opt/blackbird/tools directory)
[nasadmin@csprod tools]$ ./configure_nic server_2 -l

Notice the list is empty. It should look just as it did prior to the reboot.

[nasadmin@csprod tools]$ ./configure_nic server_2 -a eth0 (maps eth0 to cge0)
[nasadmin@csprod tools]$ ./configure_nic server_2 -a eth1 (maps eth1 to cge1)
[nasadmin@csprod tools]$ ./configure_nic server_2 -l

Notice the list now reflects the appropriate mapping between cge0/eth0 and cge1/eth1.


[nasadmin@csprod tools]$ export NAS_DB=/nas

We need to reboot:

[nasadmin@csprod tools]$ su -
Password: nasadmin

[root@csprod tools]# reboot -n


Post Reboot #2: Login to the Control Station (eth2 IP Address); user: nasadmin password: nasadmin

Remember to wait for the blackbird service to finish loading. To check use the following command:

[nasadmin@csprod ~]$ /nas/sbin/getreason

Once the service has completed loading, issue the following command:

[nasadmin@csprod ~]$ nas_cel -list

The output refers to the hostname (name) as “localhost” and not “csprod”, and the net_path may be pointing to the lo (loopback) interface as 127.0.0.1. We’ll correct this using the next series of commands.


The first thing we need to do is su to the root user, as we discussed a moment ago. What is different now is that we will not pass the “-” argument to the su command because we want to maintain the nasadmin profile, specifically for file system pathing.

We modified the cge interfaces, so they are now properly configured, and the proper host name is in place. With these items completed, we must update the Celerra VSA internal database. To do so, execute the next commands:

[nasadmin@csprod ~]$ su (This is not a typo. The command here is “su”, not “su -”)
Password: nasadmin

[root@csprod ~]# nas_cel -update id=0
operation in progress (not interruptible)…

Ignore the warning about the loopback interconnect and note that this step may take a minute or two to complete

[root@csprod ~]# nas_cel -list

This verifies the change was completed


As we continue these housekeeping tasks, we need to update the SSL certificates associated with the Celerra VSA to avoid certificate errors on the Celerra VSA's web management interface. To do this, enter the following commands:

[root@csprod ~]# /nas/sbin/nas_config -ssl
Do you want to proceed? [y/n]: y

[root@csprod ~]# /nas/sbin/js_fresh_restart

Reboot the VM. Since we executed the su command earlier, we don’t need to do it again here. Simply type the reboot command and press Enter:

[root@csprod ~]# reboot -n


Step 5: Licensing Login to the Control Station (eth2 IP Address); user: nasadmin password: nasadmin

Remember to wait for the Celerra VSA service to finish loading. To check use the following command:

[nasadmin@csprod ~]$ /nas/sbin/getreason

First we need to initialize the Celerra Licensing Database

[nasadmin@csprod ~]$ nas_license -init
done

Now we can enable the specific licenses available within the Celerra Manager. It is important to note that EMC does not require the purchase or installation of a license. The Celerra VSA is enabled for all protocols and advanced features. Please recall that using this platform for anything other than learning, testing or development is in violation of the license agreement.

Launch each of these commands in succession. Each command will signify a successful application of the feature with the message: done

[nasadmin@csprod ~]$ nas_license -l

The nas_license -l command will output the site_key number only. The next time we run this command it will list the site_key and all of the applied licenses.

[nasadmin@csprod ~]$ nas_license -c advancedmanager
[nasadmin@csprod ~]$ nas_license -c nfs
[nasadmin@csprod ~]$ nas_license -c cifs
[nasadmin@csprod ~]$ nas_license -c iscsi
[nasadmin@csprod ~]$ nas_license -c snapsure
[nasadmin@csprod ~]$ nas_license -c replicatorV2
[nasadmin@csprod ~]$ nas_license -c filelevelretention

See the graphic on the next page


[nasadmin@csprod ~]$ nas_license -l


As an option, you could have performed the licensing task from the Celerra Manager GUI.

If you are so inclined, the steps are as follows:

Best Practice: Add your Celerra VSA names to a DNS server in your environment. If DNS is not an option, define your machine names using both short and FQDN formats in your hosts file. Do the same for the Celerra VSA used as the disaster recovery target in the recovery site.

Open a web browser to the running Celerra VSA VM Management interface: https://csprod

The Celerra Manager requires Java. Java will be installed for you if it is not present on your machine. Of course this requires a connection to http://www.sun.com for the installation bits.

From my personal experience Internet Explorer 7 and 8 will always indicate an issue with the SSL certificates issued during connection even after the SSL configuration has been modified. Internet Explorer 6 works without issue as does Firefox as long as Java is installed properly and is operational. I used Firefox 3.0. A colleague tested Firefox 3.5 for PC and Safari on an Apple Mac. Thanks Michael!

Login with the following credentials; Username: nasadmin Password: nasadmin


If you get an HTTP 503 error: log out, wait a few moments, and log back in. This error is caused by incomplete initialization of web services within the Celerra VSA. Give it a couple of minutes to complete.

If you get a cookie error there may be a DNS issue. Resolve the DNS issues or use a Fully Qualified Domain Name (FQDN) or TCP/IP address.

Example:
FQDN: http://csprod.baker-iss.com
TCP/IP: http://192.168.1.50 (the TCP/IP address of the Celerra csprod)

If you receive a “secure connection failed” error while using the Firefox Web Browser you may need to delete a cached certificate. This may happen if you run thru this document more than once or connect to a Celerra VSA with the same serial number. (It happened in my lab and to at least one of the document testers.)

If this is the case you may choose to run the Celerra GUI from Internet Explorer. IE seems to be a bit more forgiving of the serial number/SSL conflicts.

Navigate to Tools, Options, Advanced, Encryption. Click View Certificates, Authorities. Scroll to the certificate in question, Select said certificate and Click “Delete”. Click “OK”


If prompted, acknowledge the web site certificate warning…

When presented with the Celerra Message of the Day (MOTD) Click “OK”. You have the option to dismiss additional MOTD pop-ups until the message changes. There is a check box on the panel.


From the Celerra Manager Main Page: Select the Licenses tab. Select the licenses one at a time. Click “Apply”, then Click “OK”.

Celerra Manager – Advanced Edition: Licensed
NFS: Licensed
CIFS: Licensed
iSCSI: Licensed
SnapSure: Licensed (CIFS, NFS, iSCSI Snapshots)
ReplicatorV2: Licensed (iSCSI and Fibre Channel Only – NFS not supported)
File-level Retention: Licensed (WORM Functionality – Formally delivered in this release of the Celerra)


Step 6: Configuring cge IP addresses This step configures the “real” data mover IP addresses (cge0/1) and configures the initial iSCSI target

FYI: cge breaks down as “c” (Copper) “ge” (Gigabit Ethernet). On a physical Celerra these are the actual interfaces into the data movers.

Using the Web Management interface: https://csprod

Login with the following credentials; Username: nasadmin Password: nasadmin

Navigate to and Click on the “Network” Folder (left side panel, middle of the list); you'll notice two el3x (where x = 0 or 1) interfaces. These interfaces facilitate Control Station to Data Mover communications.

We will configure cge0 and cge1 with valid TCP/IP addresses

Click “New”


In the presented panel, configure cge0: Select Data Mover (server_2), Device Name (cge0) and add the TCP/IP address information you want to assign. This will be the NFS export address on your test network should you choose to configure and enable NFS.

Note: If you do not provide a name, the system will default the name to a “dash divided” IP address. In the panel below, the Celerra would have named this interface 192-168-1-51. There is no difference in functionality if the name is system or user assigned. I use specific names for quick, easy recognition.

Click “OK”

Note: The cge0/1 IP addresses cannot be the same as eth0/1, even though they bind to the same physical interface.


Perform these steps again for cge1. (This will map to eth1; your iSCSI interface)

Note:

Once these interfaces are configured you can test them using the following command:

From the Celerra VSA Command Prompt:

[nasadmin@csprod ~]$ server_ping server_2

These interfaces, cge0 and cge1, cannot be pinged with the standard “ping” command.


Once completed the Network Interfaces panel should look something like this:

Section 2: Adding Physical Storage

This section discusses the steps required to add physical storage to the Celerra VSA. This includes modifying the VM to add new VMDKs, configuring the storage within the Celerra VSA host OS (partitioning, formatting, and mounting the new storage), and the steps required to “plug in” the “new” enclosures.

Note: This is not explicitly required, but most folks want to play with more storage than the 25GB available in the Celerra VSA by default. I’ve added these steps so you have the option if you choose to add capacity.

Of course, you must have available physical storage capacity to support the additional virtual storage capacity in the Celerra VSA. This storage can be in the form of Direct Attached/Internal Server Storage (DAS), Network Attached Storage (NAS) or iSCSI/Fibre Channel (SAN) based storage.

Step 1: Add new “Hard Disk(s)” to your Celerra VSA VM

• Power Down the Celerra VSA VM

• Snapshot the Celerra VSA VM from Virtual Center – Right Click, Snapshot, Take Snapshot

• Right Click on the Celerra VSA VM and Select “Edit Settings”

• Add a new hard disk to the Celerra VSA VM, make it an appropriate size

• Repeat the previous step for each disk you want to add. I added a total of 4 additional disks. Each disk was 256GB in size.


Best practice: Add more, smaller VMDKs vs. one large one. Each VMDK file will be associated with a physical disk partition. If you need to run a file system check (fsck) on one of these physical partitions the process will be less intrusive and will require less time.

In my environment I added 4 disks of identical size; each was 200GB. These drive sizes do not have to be identical but it seems to make more sense to do so.

Note: Maximum size of a physical partition/virtual disk file is 2,048 GB (2 TB)

• Power On Celerra VSA VM


Step 2: Configure the Celerra VSA to use the new storage

From the Celerra Console Login to the Control Station (eth2 IP Address); user: nasadmin password: nasadmin

Remember to wait for the Celerra VSA service to finish loading. To check use the following command:

[nasadmin@csprod ~]$ /nas/sbin/getreason
[nasadmin@csprod ~]$ su -

When prompted enter the root user’s password: nasadmin

[root@csprod ~]# dmesg | grep sd

This command searches the kernel device messages and provides an indication of the added storage device(s) and the associated address(es). I added 4 additional devices.


We need to create partitions on the newly added disk(s). To do this, enter the following command:

[root@csprod ~]# fdisk /dev/sdx (where x completes the device name e.g., /dev/sdb)

The device number/letter (/dev/sd#) will change based on the number of attached “disks”

In my case I added 4 disks therefore my devices names were:

 /dev/sdb
 /dev/sdc
 /dev/sdd
 /dev/sde

At the prompt, Command (m for help): type “n” for new partition
Enter “p” for primary partition (without the quotes)
Enter “1” for partition number
Take the defaults for First Cylinder and Last Cylinder

Command (m for help): w (write partition table to disk)
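The interactive fdisk answers above (n, p, 1, two defaults, w) can also be scripted for repeat runs. The sketch below is an assumption, not part of the original procedure: it defaults to a dry run that only prints what it would do, because feeding fdisk a real device is destructive, and the device names are just the examples from this guide.

```shell
#!/bin/sh
# Sketch: script the fdisk keystrokes for each added disk.
# DRY_RUN=1 (the default) prints the plan instead of partitioning.
DRY_RUN=${DRY_RUN:-1}

partition_disk() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: fdisk $1 (answers: n, p, 1, default, default, w)"
  else
    # Pipe the same answer sequence into fdisk non-interactively.
    printf 'n\np\n1\n\n\nw\n' | fdisk "$1"
  fi
}

for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
  partition_disk "$dev"
done
```

Set DRY_RUN=0 only after double-checking the device names against the dmesg output from the previous step.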


The next commands assume the existence of the directory /mount; if /mount doesn’t exist, issue the following command first: mkdir /mount

[root@csprod ~]# mkdir /mount/<directory name> e.g., mkdir /mount/csprod-diskX, where “X” represents a user-assigned disk number - see the graphic below. We will use these directories in a few moments as mount points for the file systems we are about to create.

[root@csprod ~]# mkfs.ext3 /dev/sdb1

Remember, this will change based on your device name. In this case it is /dev/sdb1.

For those not familiar with Linux/UNIX disk and partitions you might ask yourself how we moved from /dev/sdb to /dev/sdb1.

/dev/sdb represents the actual disk device. (i.e., the physical drive) /dev/sdb1 represents the partition created on the physical drive. Think of this as a d: or e: drive in a Windows environment.

The operating system is formatting the disk. This process will take a few minutes to complete.

Repeat the previous three steps for each new disk added to the virtual machine. Subsequent device addresses would be /dev/sdc, /dev/sdd, etc.
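The repeat-per-disk loop can be sketched in shell. This is an illustration under the assumptions used throughout this guide (four disks sdb..sde, mount points csprod-disk2..5); it prints the commands rather than running them, so you can review before piping the output to `sh` on the VSA.

```shell
#!/bin/sh
# Sketch: generate the mkdir + mkfs.ext3 command pair for every added
# disk. Device letters and mount-point names mirror this guide's
# examples; adjust both lists to match your environment.
format_cmds() {
  i=2
  for d in sdb sdc sdd sde; do
    echo "mkdir -p /mount/csprod-disk$i && mkfs.ext3 /dev/${d}1"
    i=$((i + 1))
  done
}

format_cmds
```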

Edit the fstab file:

[root@csprod ~]# vi /etc/fstab

Add the following line to the bottom of the existing file and save the file ( :wq!)

/dev/sdb1 /mount/csprod-disk2 auto defaults 0 0 (these are zeros)

These may also be added for other drives added to the virtual machine. The following are not needed if you only added one additional disk. Furthermore, only add one entry per added disk

/dev/sdc1 /mount/csprod-disk3 auto defaults 0 0
/dev/sdd1 /mount/csprod-disk4 auto defaults 0 0
/dev/sde1 /mount/csprod-disk5 auto defaults 0 0

Note: The fstab defines which disk partitions will be mounted to the file system and defines the parameters and characteristics of the mount point. For additional information on fstab please consult the Linux man (manual) pages on your Celerra VSA.

To accomplish this, type “man fstab” from the command prompt.
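If you prefer not to hand-type each fstab line, the entries above can be generated from the same device/mount pairing. A small sketch (same example device letters and mount-point names as this guide):

```shell
#!/bin/sh
# Sketch: emit the fstab lines shown above so the entries stay
# consistent however many disks you add.
fstab_lines() {
  i=2
  for d in sdb sdc sdd sde; do
    printf '/dev/%s1 /mount/csprod-disk%s auto defaults 0 0\n' "$d" "$i"
    i=$((i + 1))
  done
}

# Review the output, then append it to /etc/fstab on the VSA:
fstab_lines
```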

[root@csprod ~]# mount -a

[root@csprod ~]# df -h (verify the newly mounted file system(s) is/are listed)


Step 3: Adding “Physical” Disk to the Celerra VSA

From the Celerra Console

[root@csprod ~]# export NAS_DB=/nas
[root@csprod ~]# /nas/sbin/setup_clariion -init

The initial output from this command presents the serial number of the virtual backend storage array. Please note this number: the format starts with BB

Note your specific serial number here: ______

Select “q” to quit the setup_clariion command


Because the Celerra VSA was created from another virtual device we need to ensure the serial number and the associated log .xml files match this number.

To accomplish this execute the following commands:

Note:

In the commands below you will replace BBxxxxxxxxxxxx.xml with the actual Celerra VSA serial number currently in place. You may want to manually change to the /nas/log directory to see this value.

Alternatively, you could use the shell’s command completion to auto-complete this for you. As you are typing the command, press the Tab key once you have entered the following portion of the command:

/nas/log/backend_status.BB

Pressing Tab at this point will add the remaining numbers, represented in this text with xxxxxxxxxxxx.

The .xml file name used in the example below should match that of the serial number just noted above.

 [root@csprod ~]# cp /nas/log/backend_status.BBxxxxxxxxxxxx.xml /nas/log/backend_status.<noted serial>.xml

 [root@csprod ~]# cp /nas/log/backend_resume.BBxxxxxxxxxxxx.xml /nas/log/backend_resume.<noted serial>.xml

If the previous two copy commands are not executed, the following steps will fail. Tip: Use the up arrow to recall the first command from the CLI history and just change the word “status” to “resume”. Of course, leave the quotes off.


[root@csprod ~]# /nas/sbin/setup_clariion -init

Notice the initial configuration defaults include two virtualized disk storage shelves.

• Select Option “1”, Add New Enclosure
• Accept defaults for Enclosure [2_0], Physical Disk Size [146] and Physical Disk Type [FC]
• Select Option “3”, Continue to Diskgroup Template Menu
• Select Option “1”, CX_Standard_Raid_5
• Type “yes” to continue when asked: Do you want to continue and configure as shown?
• When prompted to enter a pathname for the disk storage location, enter the mount point you defined earlier: i.e., /mount/csprod-disk2
• At the next prompt, select “1. Create LUNs using available storage size in this path (xx MB in total)”
• Type “no” when asked about zeroing all LUNs

Note: There is a slight performance benefit when zeroing the LUNs up front but it takes a really long time to complete.

The Celerra VSA will now create the enclosures, hot spares, LUNs and disk groups

• Repeat this step for each of the physical disks you added to the Celerra VSA
• Reboot the Celerra VSA (reboot -n)

Section 3: Review

At this point you should have two fully configured Celerra VSAs installed and configured with additional disk, iSCSI and NFS LUNs. You may recall from the opening statements that we would not walk thru the DR site Celerra VSA implementation. The DR Celerra VSA is a mirror image of the primary site Celerra. The only differences relate to the networking configuration. Using the previous steps in Section 1 and Section 2 and replacing all references to csprod with csdr, you will successfully configure the DR Celerra VSA. Remember the DR Celerra VSA is installed on the ESX host designated for DR in your test environment.

In the next section, Configuring Celerra Replication, we will connect the two Celerra VSAs and replicate LUNs between them. Once we complete the replication configuration we will install and configure Site Recovery Manager, setup Protection and Recovery Groups and exercise the tool in test and failover scenarios in Section 5. We’ll wrap Section 5 with a review of the EMC Celerra Failback Wizard for VMware Site Recovery Manager.

If you have not already done so, please setup, install and configure the Celerra VSA for the recovery site.

Proceed to Section 4: Configuring Celerra Replication when complete

Section 4: Configuring Celerra Replication

Celerra Replicator is a very sophisticated remote replication facility. It supports the following features (this is not an all-inclusive list):

• 1:n and n:1 fan-in/fan-out replication relationships
• Cascading topologies (i.e., site 1 replicates to site 2 with a given frequency, and then site 2 replicates the data from site 1 to site 3 at a different frequency)
• Sophisticated QoS mechanisms (i.e., you can set up different bandwidth use for different parts of the day)
• Replication of all sorts of configurations - CIFS/NFS/iSCSI
• Full support for thin provisioning at the source or the target
• And, most interestingly of all, full integration with VMware Site Recovery Manager, so you can use this functionality to build your own SRM test bed

Site Recovery Manager (SRM) doesn't support NFS yet, so this section is focused on iSCSI - but it's very easy to see how you would support NFS by selecting the appropriate radio button during configuration. At the time of this writing, VMware has the next version of SRM under development and in beta testing with select customers and partners. The next version of SRM will support NFS.

Contained within this section are the following items:

• Configuring NTP
• Correcting the Celerra VSA Replication Database

The ID associated with the Replication Database needs to be unique. As a result of the import process, the database still maintains the original ID it had prior to cloning. Therefore, two Celerra VSAs replicating between each other could have the same ID, which will prohibit replication setup. Correcting/changing the ID avoids this problem.

• Configuring an iSCSI LUN that will be used as a replication target
• Configuring Celerra Replicator

My configuration was completed in a local datacenter. All connections and communications were limited to that location. If you want to perform these steps over your WAN or to a network protected by a firewall, you should be aware of the following:

 Port 8888 is used by Celerra Replicator for transferring data between Celerra VSAs
 The HTTPS connection between the source and destination Data Movers utilizes port 5085
 The HTTPS connection between the Control Stations requires access across port 443
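If a firewall sits between your sites, a quick reachability probe for these ports can save debugging time later. A sketch, not a Celerra tool: it assumes bash and coreutils `timeout` are available, and HOST is a placeholder for the peer Control Station or Data Mover address.

```shell
#!/bin/sh
# Sketch: check whether the replication ports listed above accept TCP
# connections, using bash's /dev/tcp pseudo-device (no nc required).
HOST=${HOST:-127.0.0.1}

check_ports() {
  for port in 8888 5085 443; do
    if timeout 2 bash -c "exec 3<>/dev/tcp/$1/$port" 2>/dev/null; then
      echo "$1:$port open"
    else
      echo "$1:$port closed"
    fi
  done
}

check_ports "$HOST"
```

Run it as `HOST=<remote address> sh check_ports.sh` from each side of the link.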


Step 1: Configuring NTP (Network Time Protocol)

Celerra Replication requires the system time on the source and target devices to be within 10 minutes of each other. If not, replication may fail, and you may also run into authentication and security issues.
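The 10-minute tolerance can be checked with simple epoch arithmetic. A sketch under stated assumptions: in practice the second timestamp would be parsed from the peer system's `date` or `server_date server_2` output (parsing omitted here; the offsets are examples).

```shell
#!/bin/sh
# Sketch: return success when two epoch timestamps are within the
# 10-minute (600-second) limit Celerra Replicator tolerates.
skew_ok() {
  d=$(( $1 - $2 ))
  [ "$d" -lt 0 ] && d=$(( -d ))
  [ "$d" -le 600 ]
}

now=$(date +%s)
skew_ok "$now" "$(( now + 120 ))" && echo "2 min skew: within limit"
skew_ok "$now" "$(( now + 900 ))" || echo "15 min skew: too large, fix NTP first"
```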

From the Celerra Manager GUI (Using the Web Management interface: https://csprod) Login as root PW: nasadmin

Click “Control Station Properties” tab

Modify the Current Date and Time, Current Time zone and NTP Servers: fields accordingly


Note: Changing the time zone may require a reboot of the Celerra VSA


Open the Data Movers Folder and Right Click, Properties on server_2 Verify the current date and time settings are in sync with the control station

If the time skew is too great you may have issues syncing with the NTP server you specified in the GUI. If this is the case, follow this sequence:

o Login to the Celerra VSA Console as root
o [root@csprod ~]# date (verify current date and time)
o [root@csprod ~]# pgrep ntpd (ensure the NTP daemon is running)
o [root@csprod ~]# chkconfig ntpd on (ensures NTP starts with the VSA)
o [root@csprod ~]# service ntpd stop
o [root@csprod ~]# ntpdate -u <NTP server address>

o Here are several Internet based NTP server addresses for reference:

 129.6.15.28 (time-a.nist.gov)
 129.6.15.29 (time-b.nist.gov)
 128.138.140.44 (utcnist.colorado.edu)
 132.246.168.148 (time.nrc.ca)

o [root@csprod ~]# service ntpd start
o [root@csprod ~]# date (verify current date and time are correct)
o [root@csprod ~]# logout

• Now perform the same process for the data mover

o Login to the Celerra VSA Console as nasadmin PW: nasadmin
o [nasadmin@csprod ~]$ server_date server_2 timesvc delete ntp
o [nasadmin@csprod ~]$ server_date server_2 timesvc start
o [nasadmin@csprod ~]$ server_date server_2
o [nasadmin@csprod ~]$ logout

If the times are still out of sync after following these steps reboot the entire Celerra VSA

Repeat these steps for the recovery site Celerra VSA (csdr)


Step 2: Correcting the Celerra VSA Replication Database

The Celerra VSA has several small databases for the maintenance of critical information. As a result of the cloning process, the replication database has the wrong serial number. The following process will correct this issue.

From the Celerra VSA Console Login as nasadmin: PW: nasadmin

[nasadmin@csprod ~]$ nas_cel -list (note the CMU may be incorrect)

[nasadmin@csprod ~]$ su (This is not a typo. The command here is “su”, not “su -”)
Password: nasadmin


[root@csprod ~]# cd /nas/dos/slot_2

[root@csprod slot_2]# vi boot.cfg

Scroll to bottom of the file and comment the dpinit line (comment character = #)

Use the Insert Command (i) from the beginning of the dpinit line and type the hash (#) key.

Save the file: ( :wq!)


• [root@csprod slot_2]# cd /nas/server/slot_2/ (notice this is a different directory off of /nas)

• [root@csprod slot_2]# vi eof

Scroll to the middle of the file and comment the dpinit line (comment character = #). Use the Insert command (i) from the beginning of the dpinit line and type the hash (#) key. Save the file (:wq!).

• [root@csprod slot_2]# server_cpu server_2 -reboot now

• [root@csprod slot_2]# watch -n 5 /nas/sbin/getreason (will deliver status on the data mover reboot)

Wait for slot_2 to be in a “contacted” state


• [root@csprod slot_2]# server_dbms server_2 -db -delete icon_db (deletes the existing replication database)

• [root@csprod slot_2]# .server_config server_2 -v "dpinit" (creates a fresh replication database - don’t forget the “.” at the beginning of this command; it is not a typo)

• [root@csprod slot_2]# nas_cel -list

• [root@csprod slot_2]# nas_cel -update id=0 (this is a zero)

(The CMU now correctly correlates to the Celerra VSA serial number)


• [root@csprod ~]# cd /nas/dos/slot_2
• [root@csprod slot_2]# vi boot.cfg

Scroll to the bottom of the file and uncomment the dpinit line (remove the #). Save the file (:wq!).

• [root@csprod slot_2]# cd /nas/server/slot_2/
• [root@csprod slot_2]# vi eof

Scroll to the middle of the file and uncomment the dpinit line (remove the #). Save the file (:wq!).


Step 3: Configuring Replication Using the Celerra Manager GUI Replication involves a source and a target - if you're replicating file systems, then they are file systems. If you are replicating LUNs then they are LUNs. This step shows how to configure an iSCSI LUN that will be a replication target. To be a replication target, it needs to be the same size as the source, it needs to be configured as “read only”, and the file system has to be bigger than the LUN itself. Therefore, you need a bit of "space reservation". This "space reservation" is common to many storage use cases involving snapshots. One nice thing is that EMC has always philosophically chosen that an "out of space condition" should cause snap/replica LUNs to fail, not production LUNs.

This step assumes the existence of a second Celerra VSA configured as per the contents of this document. That said, on the target Celerra VSA, there needs to be a LUN designated as read-only that the primary Celerra VSA will use as a replication target. This will be created as part of the configuration process. I am just pointing out the fact that it needs to be there at some point.

These next several pages will walk thru the GUI version of all these steps. The process is a little slower than the CLI but it combines several sub-steps in a single Wizard interface. If you want to see the CLI version of these steps please refer to Appendix A: Configuring the Replication Target (Command Line Interface).

From the Celerra Manager GUI These steps will configure the Celerra VSAs with the following components:

 iSCSI Target
 iSCSI LUN
 Replication Interconnects
 Replication


Create file system for use with the iSCSI LUN

Starting with CSPROD, Click on “Wizards”, “New File System”

Best Practice: The size of the remote file system should be at least two times the size of the source LUN you will be replicating. This will accommodate the creation of Snap LUNs during SRM testing and failover. If not you will run into issues including SRM test failure.

Click “Next” to Select server_2 (default), Click “Next” to Select Storage Pool (default)


Select the Storage Pool capable of providing capacity for the iSCSI LUN. In my case, I am using clar_r5_performance. Click “Next”

Enter a name for the file system. I used csprod_iscsi_fs1_replicated as I plan to replicate the LUNs on this file system to csdr. Enter the size of the file system in MB (1024MB = 1GB). Ensure the Slice Volumes check box is selected. Check the Deduplication Enabled box should you choose to enable this technology. Click “Next”

For the purposes of this exercise, take the defaults on the Enable Auto Extension Panel and the Default Quota Settings Panel by clicking “Next”

Click “Finish” to create the file system:

When completed, the panel will look similar to this one.

Click “Close”

Repeat this process for csdr, keeping in mind that the target file system should be at least 2.5-3 times larger than the replicated LUN it houses.

Once you have completed the file system creation on csdr, we’ll add an iSCSI LUN and iSCSI target on csprod. We’ll repeat the process on csdr as well.

Create the iSCSI target and LUN(s)

Click “Wizards”, Click “New iSCSI Lun”

During this step the Create iSCSI LUN wizard will create the iSCSI target and the iSCSI LUN associated with that target.

Click “Next” to select server_2 (default), on the Select/Create Target panel, Click “Create Target”

Enter a Target Alias Name (I am using csprod_target1), Ensure the Auto Generate Target Qualifier Name check box is checked. Click “Next”

Select the iSCSI interface created earlier (cge1), Click “Add”, Click “Next”

Click “Submit”, Click “Next”

Select the iSCSI target just created, Click “Next”

On the Select/Create File System panel; select the file system created for replication earlier. Click “Next”

Enter the LUN information. Click “Next”

Note: When configuring this step on csdr, the LUN must be defined as read-only. Also note that when defining LUN sizes you must leave a little room for the overhead associated with LUN formatting. If you need to use the maximum size, use 99% as a guide.
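The 99% guideline can be expressed as a small helper (illustrative only; the actual formatting overhead varies):

```python
def max_lun_mb(fs_mb: int, headroom: float = 0.99) -> int:
    """Largest LUN to carve from a file system of fs_mb megabytes,
    leaving roughly 1% for LUN formatting overhead per the note above."""
    return int(fs_mb * headroom)

# Example: a 20 GB (20480 MB) file system leaves room for a ~20275 MB LUN.
print(max_lun_mb(20480))  # 20275
```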

Here is the same screen for csdr. Notice the LUN sizes are identical.

Add the iSCSI initiators for the ESX servers that will access the replicated LUN

Enable Multiple Access if more than one ESX will access this LUN

Click “Add New”, Add the IQN from your “production” ESX server, Click “OK”

Repeat this for each ESX server accessing this LUN
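ESX software initiator IQNs typically look like iqn.1998-01.com.vmware:hostname-xxxxxxxx. A quick sanity check before pasting them into the mask can catch copy-and-paste mistakes; the pattern below is an illustrative assumption, not the iSCSI specification:

```python
import re

# Loose pattern for iqn-style names: "iqn.", a yyyy-mm date, a reversed
# domain, then an optional ":identifier" suffix. Illustrative only.
IQN_RE = re.compile(r"^iqn\.\d{4}-\d{2}\.[a-z0-9.-]+(:.+)?$", re.IGNORECASE)

def looks_like_iqn(name: str) -> bool:
    """True if the pasted initiator name roughly matches the iqn format."""
    return bool(IQN_RE.match(name.strip()))

print(looks_like_iqn("iqn.1998-01.com.vmware:esxprod1-4a7b22c1"))  # True
print(looks_like_iqn("eui.02004567A425678D"))                      # False
```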

When completed, Click “Next” on the main LUN Masking panel

Click “Next” on the CHAP Access (Optional) Panel

Assuming you have not added any iSCSI LUNs to this Celerra VSA you will be prompted with the following panel:

Ensure the iSCSI service check box is selected, Click “Next”

Review the Summary in the Overview / Results panel, Click “Finish”

Click “Close” once the task completes

Of course, this needs to be completed on csdr as well. Once completed, move on to Configuring Replication.

Configuring Replication

From CSPROD, Select “New Replication” from the “Wizards” folder

All of our conversation thus far has centered on iSCSI. To that end, we will select the iSCSI LUN radio button and Click “Next”

Click “New Destination”

Enter the name, IP address and a passphrase (your choice) of the destination Celerra

Enter the user credentials of csdr; the username is nasadmin; password is nasadmin

Click “Next”

Enter the name of the production Celerra VSA (csprod).

The IP address maps to the eth0 interface and should be pre-populated. Only change this address if it is not the IP address of your eth0.

Click “Next”, Click “Submit” and view the following screen…

Acknowledge the successful completion of Control Station configuration, Click “Next”

Select the Celerra VSA Destination just created. In my case, its network name was csdr.

Click “Next”, “New Interconnect”

Click “Next”

Name the data mover interconnect. I used csprod-to-csdr, Click “Next”

The next panel asks you to define an interconnect for the peer data mover. This is the interconnect that allows for reverse communication between the Celerra VSAs.

I used csdr-to-csprod. Your peer Celerra, in this case, is the production side Celerra VSA; csprod

Take the defaults on the next screen, Click “Next”

Ensure the time on the Celerra VSA control stations is within 10 minutes of each other. Double-check prior to clicking submit.

Are you sure the time is less than 10 minutes out of skew? Click “Submit”
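The 10-minute skew requirement amounts to this simple check (a sketch; in practice, compare `date` output from both Control Stations):

```python
from datetime import datetime, timedelta

def skew_ok(t1: datetime, t2: datetime, limit_min: int = 10) -> bool:
    """True if the two Control Station clocks are within limit_min
    minutes of each other, as required before creating the interconnect."""
    return abs(t1 - t2) <= timedelta(minutes=limit_min)

a = datetime(2009, 7, 15, 12, 0, 0)
print(skew_ok(a, a + timedelta(minutes=9)))   # True  -- safe to Submit
print(skew_ok(a, a + timedelta(minutes=11)))  # False -- fix NTP/clock first
```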

Review the results of the changes and Click “Next”

Select the newly created Data Mover Interconnect, Click “Next”

Select the iSCSI interfaces for each of your Celerra VSAs, Click “Next”

The next couple of items will name the replication session and select the specific LUNs involved in this session. Since this session will facilitate replication for VMware Site Recovery Manager, I named the session “srm_replication”. Notice the iSCSI target and iSCSI LUN selections. These were there by default. Double-check the entries for accuracy prior to clicking “Next”

If the Target Destination information is correct Click “Next”. Otherwise select the correct parameters and then Click “Next”

Take the defaults on the Update Policy Panel. Click “Next”

Click “Finish”

Review the results, Click “Close”

Once completed, navigate back to the Celerra Manager Main Panel. Click on the “Replications” folder. Check the status of the srm_replication session you just created.

Do this from both sides of the replication relationship. csprod:

csdr:

Step 4: Preparing ESX Servers for iSCSI Targets and LUNs

From Virtual Center: Click on the “Configuration” tab of the ESX host you want to configure. Click on “Networking”, then Click on “Properties” for the vSwitch configured for iSCSI.

Select the vSwitch, Click “Edit”

Click on the “Security” tab, change Promiscuous Mode to Accept

Note: This is NOT required for the Celerra Simulator, as it will run either way. This is a best practice as it relates to most simulators.

Click “OK”, Click “Close”

The following step is not required for ESXi implementations…

If you are running ESXi please skip to the next page

From the main panel: Click on the Security Profile. Ensure that the Software iSCSI Client is enabled

Move to the Storage Adapters Tab and scroll to the iSCSI Software Adapter.

Note two items: 1) the iSCSI Initiator IQN; 2) Targets: currently 0 (zero)

Click “Properties”, Click “Configure”, Select Enabled

Click “OK”, Click on the “Dynamic Discovery” tab, Click “Add”

Enter the TCP/IP address of the iSCSI target you created earlier. This is your cge1 interface address. Ensure the port number is 3260 (default)
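Before adding the portal, you can sanity-check that something is listening on the cge1 address at port 3260. This sketch uses a plain TCP connect; the example address is hypothetical — substitute your own:

```python
import socket

ISCSI_PORT = 3260  # iSCSI default; the wizard pre-fills this port

def portal_reachable(ip: str, port: int = ISCSI_PORT, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the target portal succeeds.
    A quick pre-check before entering the address under Dynamic Discovery."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical cge1 address):
# portal_reachable("192.168.1.50")
```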

Click “OK” (This may take a moment or two...)

Click “Close”

When prompted to Rescan the host Click “Yes”

Accept the defaults and Click “OK”

Notice the new Target.

Repeat this step for each ESX server masked to the replicated iSCSI Target

To this point all we have done is allow the ESX servers to see the iSCSI LUNs. We must establish a VMFS file system so they are usable within the Virtual Infrastructure.

The steps on the following two pages are for

the “Production” side ESX servers only!

Click on the “Configuration” tab, Select Storage, Click “Add Storage”

Select the “Disk/LUN” radio button, Click “Next”

Select the LUN just added

Click “Next”

Review the current disk layout and notice the message: “The hard disk is blank.”

Click “Next”

Enter a Name in the “Datastore Name” field: Use something obvious like “iscsi_datastore1_replicated”

Click “Next”

Review VMFS Parameters Click “Next”

Click “Finish”

Verify the LUN was added to the ESX host

Important Note:

Add or migrate at least one virtual machine you want to protect with SRM to the replicated datastore. If you do not, the SRM configuration will not display any summary information in the Display Replicated Datastores panel later in this configuration. Please add at least one VM to this new datastore.

Also, only one LUN is replicating between the Celerra VSAs. You can create additional LUNs following the process above.

Section 5: Site Recovery Manager Installation

In order to complete the implementation and configuration of SRM we need to ensure the following components are in place.

• ESX Server managed by Virtual Center for each location

Of course, these locations can be in the same data center or home office or wherever you want them. The key is a working TCP/IP connection between the two for the purposes of TCP/IP-based Celerra Replication and SRM failover.

ESX version must be 3.5 Update 2 or later because we are using SRM 1.0 Update 1

Virtual Center must be version 2.5 Update 3 or later

• You will need the following software components:

VMware SRM 1.0 Update 1

VMware SRM 1.0 Update 1 Patch 3

Microsoft SQL Server Management Studio Express (SQLServer2005_SSMSEE)

If you haven’t already done so, download the SSMSEE and install it on the Virtual Center Server. It is a straightforward Next, Next, Next, Finish type of installation.

EMC Celerra Storage Resource Adapter (SRA)

Step 1: Site Recovery Manager Database Connectivity

In this step we will perform a number of tasks required prior to the actual installation of the SRM server instance on the Virtual Center Server. Specifically, we will:

o Create a new Active Directory user for use with the SRM database
o Create the SRM database using the SSMSEE application
o Assign the new user to the database as the db_owner
o Set up and test an ODBC connection to the new database
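For reference, the database-side tasks above map to roughly the following T-SQL (built here as a Python string using this document's example names; a sketch of the wizard's effect, not a tested script for your environment):

```python
# Example names from this document; substitute your own database and
# domain\user. The T-SQL mirrors the SSMSEE wizard steps: create the
# database, create a Windows login, map it in, and grant db_owner.
DB, LOGIN = "srmprotected-db", r"BAKER-ISS\srmprotected-db"

script = f"""
CREATE DATABASE [{DB}];
CREATE LOGIN [{LOGIN}] FROM WINDOWS WITH DEFAULT_DATABASE = [{DB}];
USE [{DB}];
CREATE USER [{LOGIN}] FOR LOGIN [{LOGIN}];
EXEC sp_addrolemember N'db_owner', N'{LOGIN}';
"""
print(script)
```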

Within your Active Directory create a new user: I used srmprotected-db. Notice I selected the “User cannot change password” and “Password never expires” options.

Click “Create”

Once created launch the SSMSEE application:

Start > All Programs > Microsoft SQL Server 2005 > SQL Server Management Studio Express

The application may or may not display the following panel:

From the SQL Server 2005 Connect to Server Panel, Click “Connect”

Make note of the Server Name: in my case VMWARE-VC\SQLEXP_VIM.

You will need this information when you create an ODBC connection to the SRM database.

For some reason it will not be on the drop-down list, so I copied it to my clipboard.

In the main panel Right Click Databases and Select New Database. Name the Database as you wish. I used srmprotected-db. Click “OK”

Create a new login: Right Click on the “Logins” Folder under “Security”

Enter the login ID you created for this purpose. Enter the name in domain\user format: i.e., BAKER-ISS\srmprotected-db

Select the database you created from the Default Database Drop Down

Next select “User Mappings” from the “Select a Page” panel. Select the database your user will be mapped to (i.e., srmprotected-db), select db_owner, and leave public checked.

Click “OK”

Now that the database has been created and the user mapped to it, we need to create an ODBC connection to the new database.

Navigate to the Control Panel on the Virtual Center Server. Double Click on the Data Sources (ODBC) shortcut. Click “Add”

(If the ODBC setup icon is not in the Control Panel it will be in the Administrative Tools folder. You may need to enable this via the start menu properties.)

Select “SQL Native Client”, Click “Finish”

Give the ODBC connection to the SRM database a name and enter the server name in the Server: field as noted above (in my case, VMWARE-VC\SQLEXP_VIM). Click “Next”

Select the “With Integrated Windows authentication” radio button, Click “Next”

Ensure “Change the default database to:” selection is checked Select the SRM database you created earlier from the dropdown box. Keep the rest of the selections at their defaults. Click “Next”
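The settings chosen in this wizard are equivalent to a DSN-less connection string along these lines (a sketch; the driver name assumes the SQL Server 2005 native client installed earlier):

```python
def srm_odbc_conn_str(server: str, database: str) -> str:
    """DSN-less equivalent of the ODBC settings above: SQL Native Client,
    integrated Windows authentication, and the default database overridden
    to the SRM database."""
    return (f"Driver={{SQL Native Client}};Server={server};"
            f"Database={database};Trusted_Connection=yes;")

# Example using this document's server and database names:
print(srm_odbc_conn_str(r"VMWARE-VC\SQLEXP_VIM", "srmprotected-db"))
```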

On the next panel take the defaults and Click “Finish”

Once completed you will be presented with the following panel:

Click “Test Data Source”

If everything was configured properly you should see the following panel (a small reward for the work so far!). Click “OK”

When returned to the original panel Click “OK”

Click “OK” again when presented with the ODBC Data Source Administrator Panel

Notice the addition of the VMware SRM Data Source

At this point you should configure SQL on the recovery site Virtual Center Server.

Step 2: Copying Site Recovery Manager, the SRM patch and SRA

Copy the SRM and SRA installation media to your Virtual Center Desktop. The best way to accomplish this (if you are remote to the console) is to map a drive to the administrative share, C$, on the Virtual Center server. Start> Run> \\192.168.1.14\c$

Enter user credentials as required

Step 3: Installing Site Recovery Manager

Double Click on the VMware-srm-1.0.1-128004 installation icon

Accept the EULA and Select a Destination Folder, Fill in the appropriate information in the SRM registration panel.

Acknowledge the Security Warning and accept the certificate’s thumbprint by Clicking “Yes”

Select the “Automatically generate a certificate” radio button and Click “Next”

Enter the appropriate (anything you like) information in the Organization and Organizational Unit fields and Click “Next”

Fill in the requested information and Click Next. Do not modify the SOAP or HTTP ports

Enter the database specifics in the next panel and Click “Next”

Click “Install”

The installation will proceed without any additional input. Once completed Click “Finish” and proceed to the next step

Step 3: Patching Site Recovery Manager

After pushing the SRM 1.0 Update 1 Patch 3 package to the SRM server, double click the icon to launch the installation.

Click “Run”, “Next”. Accept the terms of the End User License Agreement and Click “Next”. Click “Install”, Click “Finish”

Step 4: Installing the Storage Replication Adapter

Please note if you are running an earlier version of the storage replication adapter please remove it prior to proceeding.

Double Click on the EMC_Celerra_Replicator_Adapter_for_VMware_Site_Recovery_Manager installation icon.

Click “Next”, Accept the License, Click “Install”, Click “Finish”

From the Services MMC, restart the VMware Site Recovery Manager service.

No need for screen shots!

Step 5: Installing the SRM Plug-in for Virtual Center

If not already running, launch the VI Client and attach to your Virtual Center Server.

Click “Plugins” from the Menu Bar

Click the “Download and Install” button for Site Recovery Manager

At this point a separate installer will launch to install the plug-in. This is a standard windows install similar to the installs completed in Steps 2 and 3 of this section.

Click “Next”, accept the license, “Install”, “Finish”

Shortly after the completion of the add-in install, you will be taken back to the plug-in manager for Virtual Center. Notice the plug-in has been installed but still needs to be enabled.

Note: Do not select the “Installed” tab until the VMware Site Recovery Manager status changes in the “Available” tab to read: The plugin has been installed. Please go to the installed tab to enable the plugin. Doing so prior to completion may require a restart of the VI client to see the plugin as installed.

Click on the “Installed” tab and Click “Enabled” in the Site Recovery Manager panel, Click “OK”

Click on the new “Site Recovery” icon

This panel offers us the ability to start configuring the actual linkages between SRM servers and their associated storage devices. Prior to that, however, the SRM installation needs to be completed on the recovery (disaster recovery) site. Please complete Section 5 on the recovery site Virtual Center server before proceeding to Section 6.

Section 6: Site Recovery Manager Configuration

Welcome Back! Hopefully the SRM implementation on the recovery site went well. At this point we have completed several tasks, each a critical component in the grand scheme of SRM. Here, however, is where the proverbial rubber hits the road.

In this section we will configure each of the SRM servers to interact. We will also create protection and recovery groups and test the failover capability of the SRM servers.

When we finished Section 5 we were left with the following screen shot. Notice the information in the Local Site panel. This information matches the information in the paired site, provided you are looking at this screen from the perspective of the protected site. Of course this will change when you open this panel from the recovery site SRM server. The Paired Site will always reflect the “replication partner.” That said, if logged into the protected site you will see the recovery site information in the Paired Site panel. If you are looking at this from the recovery site, the paired site will be that of the protected site. Crystal clear? It will all make sense as we work through the next several steps.

Click “Configure” in the “Protection Setup” panel

Step 1: Configuring the Connection

All of these tasks are completed from the protection site unless otherwise indicated.

Enter the TCP/IP Address of the remote (recovery) site Virtual Center Server, the Port number defaults to 80.

Acknowledge the Certificate Panel, Click “OK”

Enter the Administrative Credentials for the Virtual Center Server. You may be asked to authenticate to both the remote and the local servers.

Once Authenticated the system will establish a reciprocal connection between the Virtual Center Servers.

Acknowledge any Security Certificate warnings (you may not see any)

Click “Finish”

Check the status of the connection. It should be “Connected”

Step 2: Configuring the Array Managers

Now that the connection between the two sites has been established, the arrays can be configured. You recall that we installed the EMC Celerra SRA and the Solutions Enabler earlier. This step will not work if those components did not install properly.

Click “Configure” next to “Array Managers:” on the SRM main panel

Add the Protection Site Celerra VSAs, Click “Add”

Enter the requested information in the “Add Array Manager” Panel. The display name should be the same as your protection site Celerra VSA. Manager Type is Celerra iSCSI Native, Control Station IP is the csprod eth2 IP address, User Name and Password are nasadmin. Click “Connect”

Once added, the replicated LUNs on csprod should be displayed in the Protection Arrays Panel below. Click “OK”

The Add Array Manager process will return to the Configure Array Manager main panel. Notice the Protection Arrays: panel. It lists the Celerra VSA as “NS-Simulator”, shows the Peer array at the remote site and the number of LUN(s) configured for replication

Click “Next”, to configure the recovery side array, Click “Add”

Enter the requested information in the “Add Array Manager” Panel. The display name should be the same as your recovery site Celerra VSA. Manager Type is Celerra iSCSI Native, Control Station IP is the csdr eth2 IP address, User Name and Password are nasadmin. Click “Connect”, Click “OK”

Click “Next”

Review the Replicated Datastores, Click “Finish”

Remember: If you did not add at least one virtual machine to the replicated datastore this panel will be blank. That is OK! Don’t think something is wrong if the datastore summary is blank.

Step 3: Configuring Inventory Mappings (Optional)

Moving forward… the Inventory Mappings need to be completed. This puts us one step away from configuring the Protection and Recovery Groups. Note that these mappings set the general defaults for virtual machine networks on the recovery site. These can be overridden during the process of creating protection groups.

Click “Configure” next to the “Inventory Mappings:” entry in the “Protection Setup” Panel

Use this panel to map source site resources to recovery site resources. These are the recovery site resources used during testing and failover. Specifically, you are mapping networks, resource pools and data center locations.

Example:

Step 4: Configuring Protection Groups

There are only a few steps left prior to testing our SRM configuration. Specifically, we need to create Protection and Recovery Groups. Protection Groups, as their name implies, are created at the protection/“production” site and define which virtual machines are protected via the SRM facility. Recovery Groups wrap policy and procedure around the Protection Groups and become the basis of the recovery plan. The Recovery Groups are created on the recovery/DR site.

Click on Site Recovery in the left hand navigation pane.

Click “Create” next to “Protection Groups” in the “Protection Setup” Panel

Give the Protection Group a Name and Click “Next”

The name should give some insight into the VMs that will be contained within.

Select a Datastore Group (as discovered by the Array Manager) and Click “Next”

Select a recovery site datastore for the shadow/“placeholder” Virtual Machines, Click “Finish”

We now see the created Protection Group

Step 5: Configuring Recovery Groups

The final step in the configuration process centers on the Recovery Site and the configuration of Recovery Groups. The following steps will be executed from the recovery site Virtual Center server.

Click “Create” in the “Recovery Setup” Panel

Name the Recovery Plan – use something indicative of the Plan’s scope (entire site, windows machines, Linux machines, web servers, etc.), Click “Next”

In the “Create Recovery Plan - Protection Groups” dialog select the Protection Group(s) created in the previous step.

Click “Next”

In the next panel, “Response Times”, leave the defaults and Click “Next”

The “Configure Test Networks” dialog provides the ability to select a test network for use when testing your recovery plan. By default this is set to Auto. Auto will dynamically create a “bubble” network during a running test and tear it down when the test completes. Note: Any virtual machines dependent on non-protected machines running DNS, AD and DHCP may not be able to communicate with each other once on the bubble network. Make provisions for these services in the bubble or configure an isolated network with these services for testing purposes.

Make the appropriate selections.

Click “Next”

It is likely, during a test, that you may need to free up some of the available DR site resources (CPU and memory) until the test cycle is completed. SRM provides this capability via the recovery plan. If you choose to, SRM will suspend selected virtual machines. Click the check box next to the VMs you wish to suspend. I chose not to suspend any of my VMs, as each is providing a critical service to my lab environment.

Click “Finish”

Step 6: Testing

Provided everything to this point has been configured properly, we are ready to push the test button.

Let’s get right to it.

From the Recovery Site Virtual Center Server, Click the “Site Recovery” button on the ribbon menu. Expand the Recovery Plans, and Click on the Recovery Plan created earlier.

Click the “Recovery Steps” tab.

The steps listed here have been automatically created via the Create Recovery Plan.

Take a few moments to review the steps in the recovery plan. You have many options to tweak the plan but that is slightly outside the scope of this document.

When you are ready to proceed… move to the next page!

Click on the green “Test” button (the red “Run” button is reserved for the actual failover event!)

Acknowledge the following dialog

SRM will start by configuring the storage components. Without a long dissertation, SRM will ready the remote storage and present a writeable snapshot to the ESX server in the DR site.

Notice the status in the recent task panel at the bottom of the main Virtual Center window.

The next step in the process will suspend any virtual machines (if they were selected to do so during the creation of the Recovery Plan). If not, the recovery plan will skip to recovering virtual machines in order of priority.

(These are the place holder VMs created during the Protection Groups configuration)

Once the VMs have been recovered and all of the recovery plan steps have been completed, the plan will pause for user input. During this pause the virtual machines are running and “connected” to the network previously defined in the recovery plan. Remember, by default this network is referred to as the “bubble” network; it maintains no connectivity to other physical networks and does not provide for network-based services such as AD, DNS or DHCP, unless you have made provisions ahead of time.

Testing complete? Great! Click “Continue”

Site Recovery Manager will clean up the underlying storage, place the VMs back into a powered off state in the placeholder datastore and return to the Recovery Plan main panel.

You have just executed a successful Site Recovery Manager test using Celerra VSAs. How did it go?

Do you want to run an actual failover? If so, I’ll show you how to accomplish this task in conjunction with a successful failback using the EMC Celerra Failback Wizard. Interested? Proceed to Section 7.

Section 7: Automated SRM Failback

Welcome to the part of the document that only EMC could write. As of this writing, EMC is the only storage vendor to provide an automated failback tool for Site Recovery Manager. Officially known as the Celerra Failback Wizard, the EMC failback tool automates the process of recovering from a failover using SRM and the underlying replication technologies provided by Celerra Replicator.

The purpose of the plug-in is to reestablish a VMware vCenter environment that has been previously failed using VMware Site Recovery Manager. The tool uses the VMware Virtual Infrastructure (VI) software development kit (SDK) to manipulate the vCenter at both the protected and recovery sites.

In addition, the tool will connect to two EMC Celerra VSAs, one for each vCenter, in order to cross-reference storage information with vCenter data.

Step 1: Installing the Celerra Failback Wizard

To start the process we must first install the Celerra Failback Wizard. I assume you have pushed the installation files to the recovery side vCenter server.

The installation is a straightforward, standard installation.

 Double Click on the EMC Celerra SRM Failback Wizard v1.1 icon
 Click “Next”
 Accept the End User License Agreement, Click “Next”
 Click “Install”
 When prompted, fill in the EMC Celerra SRM Failback Wizard – Virtual Center Plugin Registration Panel

 Click “Register”

 Acknowledge the Registration Success Panel, Click “OK”

 Click “Exit” when returned to the EMC Celerra SRM Failback Wizard – Virtual Center Plugin Registration Panel

 Click “Finish” when returned to the installation panel


Step 2: Configuring the plug-in

Unlike most plug-ins, the Celerra Failback Wizard automatically registers itself with vCenter during the installation process. Note: as of this writing, the failback wizard does not support the vSphere (ESX4) client. You must use the VI client available for ESX 3.5. If for some reason you need to re-register the failback wizard, you may do so by following these steps:

First of all, you will need to ensure the EMC Celerra Failback Service is running. Launch the Services MMC to check (Click “Start”, “Run”, type services.msc, Click “OK”). Since the Failback Wizard is installed on the remote-site vCenter server, we must confirm from that perspective:

If the service is running, re-run the registration process. This is accomplished by double-clicking the executable .jar file in this directory on the vCenter server (vcdr):

 C:\Program Files\EMC\Celerra SRM Failback Wizard\RegisterVcPlugin\RegisterVcPlugin


After the installation and registration, the VI client will display a new button on the interface. This is the button we will use to access the EMC Celerra Failback Wizard.

Click on the button for a quick view of the interface.

Acknowledge the security alert by clicking “Yes” to the “Do you want to proceed?” prompt

The failback wizard will present the following panel:


Notice that the failback wizard interface is very straightforward. Each time you run a failback session you will be required to provide the TCP/IP address information along with administrative credentials for the vCenter servers and the Celerra storage devices (the VSAs in this case). By default, the vCenter username is “administrator”. The password is user-definable and is the same password you have been using throughout this document. The default administrative credentials for the Celerra VSAs are nasadmin/nasadmin.

Please note: When running these tools in your lab you need to know the following information.

 Most plug-ins for vCenter, and for the most part vCenter itself, run in the context of an Internet browser. As such, if you try to run the Celerra Failback Wizard more than once without first closing the VI Client and re-launching the failback wizard, you will likely run into minor issues due to cached information from the last session.

 Consider it a best practice to restart your VI client between tests.

 According to the development team at EMC responsible for this tool, future releases will incorporate changes to remedy this situation.

For the moment we’ll leave the failback wizard and return to SRM to facilitate an actual failover event.


Step 3: Preparing for SRM Failover

You should notice that the following steps will be a lot like the SRM “Test” run earlier. The primary difference here is that once we execute an SRM “Run” the changes made to the environment are more “permanent” in nature. I used quotes around permanent because this would be the case if we didn’t have a failback tool. Of course, the tool is automating the tasks required to “failback”, and a solid administrator with good storage and VMware skills could manually reset the environment with a good deal of time and effort. To that end, some of the changes I alluded to a moment ago include, but are not limited to:

 Fracturing the storage replication relationship
 Promoting the secondary (target) LUN (vs. a snapshot) to the target ESX host(s)
 Stopping and removing (if available) the VMs on the primary site
 Remapping of the storage LUNs to and from their assigned ESX hosts

The following steps will make these changes to your environment. Please proceed with caution and take a few steps to prepare your environment should something go wrong.

 Limit your initial testing to a single LUN
 Choose a small number of VMs to test
 Use simple VMs (Windows XP, no RDMs, etc.)
 Create clones of your test VMs and store them on non-replicated, non-SRM LUNs

You can add complexities to the mix once you are more comfortable with the tool and how it works.

A few additional items to take note of:

 Based on your configuration, there may be a handful of VMs that you are testing. Notice their state on the production vCenter (vcprod1, 192.168.1.14, in my case). These VMs are likely running on a particular ESX host supported by a replicated LUN provided by the Celerra VSA known as csprod.
 On the recovery-site vCenter (vcdr1, 192.168.1.24 in my lab), there are shadow VMs created by the configuration of the protection group, PG1 – Windows Desktops. These VMs should be powered off, connected to an ESX host but not attached to any storage provided by the Celerra VSA known as csdr. If you were to try to power one of these VMs on, you would notice the power-on option is dimmed and not available. For that matter, all of the power-related functions are disabled. This is the normal state for a shadow VM.
 Once the SRM failover process is complete, the VMs on the production vCenter server will be deleted and unregistered, and these shadow VMs will have been restarted after the SRM server maps the replicated LUN, which has been fractured from the original source LUN, and all of the networking changes have been implemented.

If you are demonstrating this functionality to others, point out some of these items.


Step 4: SRM Failover

Navigate to the SRM plugin on your destination vCenter server. This is the same interface we used for the SRM test earlier.

Authenticate to the SRM servers as prompted.

Expand the Recovery Plans, and Click on the Recovery Plan, RP1 – Windows Desktops.


Click the “Recovery Steps” tab.

The steps listed here have been automatically created via the Create Recovery Plan steps performed earlier.

Again, take a few moments to review the steps in the recovery plan. This time note that some steps are executed during recovery or test only. If there is no designation, the step runs regardless.

When you are ready to proceed, Click on the red “Run” button. (The green “Test” button, as we learned earlier, is used for testing the failover without impacting the production environment.)

Acknowledge the following dialog.

This is the last warning provided by the SRM GUI prior to starting the actual failover process.

The changes about to be made will require the use of the Celerra Failback Wizard to restore the environment to its original, pre-failover, state. That is, unless you perform these steps manually!

The steps of the failover will execute in a similar manner as the test failover we performed earlier.

When completed, the VMs on the production-side vCenter server will be powered down. In my case I shut these devices down ahead of time and powered down the vCenter (vcprod1) to simulate a real disaster recovery test. Since the SRM process executes from the recovery site, access to the production site is not required. Killing my production vCenter proves this point. If the production site were up and available, albeit in some state of compromise in the event of a real disaster, SRM would attempt to shut down any of the surviving production VMs to avoid conflicts between the original production VMs and their restarted DR counterparts.

The actual differences between a DR test and an actual run of the SRM failover were discussed earlier in the document. With regard to the GUI, the primary difference relates to how the process completes. Once the failover is started, it will execute to completion without any user intervention. Recall that you were required to hit the “Continue” button when testing. This “paused” the process until testing was completed. Once clicked, the test process would clean up the environment and restore the DR site to its previous state: a state exactly as it was prior to invoking the test.

This is what the SRM screen looks like after a successful failover: Notice that it is not much different compared to the screen prior to execution.

One clue here is the Test button: it is dimmed and no longer accessible. This is because the Recovery Plan is no longer valid. During the execution, SRM provided status for each of the steps in the Recovery Plan. For a complete view of the status of each individual step, click on the “History” tab and view or export the Recovery Report.

Here is a snippet of what that report looks like:

In addition to the History Report, all of the VMs you protected in the first place are now up and running, or at least available, as per your Recovery Plan. These VMs will be attached to one or more ESX servers in your cluster and will be supported by the Celerra VSA and a LUN now referred to in a snap-name format. You may choose to rename this datastore, removing the snap-xxxxxxxx- prefix assigned to this LUN by ESX when it was presented as part of the SRM process. For the purpose of our testing we’ll leave the snap-xxxxxxxx- prefix in place. See the graphic on the next page for reference to the snap-xxxxxxxx- LUN name. One point of note: the remainder of the name is the original LUN name (i.e., in snap-3f6ff555-csprod-iscsi-datastore0, csprod-iscsi-datastore0 is the original LUN name).
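The snap-name can be picked apart with a one-liner if you ever need to recover the original name, for example to rename the datastore back after a failback. This is just a string-handling sketch: the pattern assumes the prefix is “snap-” followed by eight hex characters, which matches the example above.

```shell
# Recover the original LUN name from the SRM-presented snapshot datastore name.
# The example name is the one shown in this section; the eight-hex-character
# prefix length is an assumption based on that example.
snap_name="snap-3f6ff555-csprod-iscsi-datastore0"
orig_name=$(echo "$snap_name" | sed 's/^snap-[0-9a-f]\{8\}-//')
echo "$orig_name"   # csprod-iscsi-datastore0
```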


Take a few moments to explore the different aspects of the DR vCenter server. Look at the networking and storage configuration pages, and explore the datastore. You will notice that you are now running your “production” site workloads on the DR site. You have successfully failed over using VMware Site Recovery Manager.


Step 5: Using the Celerra Failback Wizard

Several of the following screen shots were taken from a standard VI3 Virtual Infrastructure Client. As mentioned earlier, the Celerra Failback Wizard does not support the vSphere client. I have used the vSphere client for a majority of this document. The VI3 VI Client screen shots are purposely different. (Different theme with darker colors and different fonts)

Click on the EMC Celerra Failback Icon and acknowledge the Security Alert

On the next panel enter the following information:

 Protected Site:
o Celerra Control Station IP address
o vCenter User Credentials

 Recovery Site:
o Celerra Control Station IP address
o vCenter User Credentials


To accomplish this task click “Configure” next to each of the required items.

Start with the Protected Site Celerra VSA and proceed to the three remaining items.


When completed, the screen should look like the panel below. Notice that both the protected-site and recovery-site Celerra and vCenter fields have been configured with their corresponding IP addresses.

Click on the “Configure & Run Failback” link


The first task discovers available failback sessions, the LUNs associated with those sessions and the VMs resident on those LUNs. Notice the Status window on the bottom left of the panel.

Once the failback session(s) have been identified, the process stops and waits on user input. You must select a Failback Session to continue. If there are multiple sessions available, you have the option of selecting individual sessions or selecting all of them with the “Select All” check box under the Failback Sessions portion of the panel. The tool also provides the ability to power on VMs after failback. Unlike SRM, this tool, as of this writing, does not support power-on priorities. The best practice associated with this option dictates manual power-on after failback.

Select the Failback Session listed in the panel. (It starts with fs23_T1_LUN0_BB0050… and happens to be the only one listed.)

Click “Failback”


Watch the Recent Tasks window at the bottom of the VI Client in addition to the Status window as the process runs. It provides a good deal of information as to the steps in the process and the relative success of each.


Once completed, the tool will display “Failback Complete” in the Failback section of the panel.

Check on the success of the failback by checking your production and DR vCenter servers. They should look like they did prior to the SRM failover. Your test VMs should be back up and running (if you told the tool to power them up) and the shadow VMs on the DR vCenter should be gone.

Of course, when the SRM failover ran it invalidated the Protection Groups and the associated Recovery Plans. VMware is working to provide consistency of these plans in a future release of SRM. Until then the Protection Groups and Recovery Plans will need to be manually recreated.

In my humble opinion, this is considerably less work, and more accurate, than failing back using manual techniques or scripts. And compared to physical failover and failback, I can’t imagine using those methods unless I absolutely had to (i.e., for workloads that cannot leverage VMware or SRM).

Section 8: Wrap-up

As I wrap this up, I would like to thank the great community of virtualization geeks, gurus and testers that have used this document to perform their jobs and extend their knowledge. Thanks to all of you!

This document is the first major update since the initial release in December 2008. Since that time, this document has been downloaded by individuals from around the globe representing EMC, EMC partners, resellers, competitors, customers, prospects and enthusiasts. The responses and feedback have been fantastic and invaluable. I have tried to ensure accuracy throughout the process. If you find anything that needs clarification or correction, please be sure to let me know. My email addresses are [email protected] and [email protected].

It took a great deal of time, energy and effort to put this document together. I hope the community of users and testers continues to find value in the content. Please let me know if you found this document useful.

I strongly believe that VMware SRM in conjunction with the Celerra VSAs allows for the creation of an environment fully capable of testing SRM failover and failback in a manner that mimics a real-world implementation.

I will provide updates to this document as the technology changes and new releases become available.

Thanks for taking time to look at the Celerra VSA and testing VMware Site Recovery Manager using this document.

For those of you sitting on the edge of your seat waiting for what’s next:

I will be developing a new document (in conjunction with one of my compatriots at VMware) focused on VMware Site Recovery Manager at scale. Look for something in the Fall ’09 timeframe. (maybe in time for VMworld)


Appendix A: Configuring the Replication Target (Command Line Interface)

Replication involves a source and a target: if you're replicating file systems, then they are file systems; if you are replicating LUNs, then they are LUNs. This step shows how to configure an iSCSI LUN that will be the replication target. To be a replication target, it needs to be the same size as the source, it needs to be configured as read-only, and the file system has to be bigger than the LUN itself. Therefore, you need a bit of "space reservation". This "space reservation" is common to many storage use cases involving snapshots. One nice thing is that EMC has always philosophically chosen that an "out of space" condition should cause snaps/replicas to fail, not production.
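To make the reservation concrete, here is a quick arithmetic sketch using the 10 GB LUN and 45 GB file system sizes from the commands later in this appendix. The figures are simply the examples used in this document, not an EMC sizing rule.

```shell
# Space left for snapshot/replica reservation when a 10G LUN lives in a 45G
# file system (the sizes used in the nas_fs/server_iscsi commands below).
lun_gb=10
fs_gb=45
slack_gb=$((fs_gb - lun_gb))
echo "LUN ${lun_gb}G in a ${fs_gb}G file system leaves ${slack_gb}G of headroom"
```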

This step assumes the existence of a second Celerra VSA configured as per the contents of this document. That said, on the target VSA, there needs to be a LUN designated as a read-only LUN that the primary Celerra VSA will use as a replication target.

These next several pages will walk through the CLI version of all these steps. The process is streamlined and much faster than using the Celerra Manager GUI. If you want to see the GUI version of these steps, please refer to Section 4, Step 3, Configuring the Replication Target via the Celerra Manager GUI.

From the Celerra Console

These commands will configure the Celerra VSAs with the following components:

 iSCSI Target
 iSCSI LUN
 Replication Interconnects
 Replication

Create and Mount a file system for use with the iSCSI LUN

csprod:

[nasadmin@csprod ~]$ nas_fs -name csprod-fs1 -type uxfs -create size=45G pool=clar_r5_performance -option slice=y

[nasadmin@csprod ~]$ server_mount server_2 csprod-fs1 /nas/fs1

csdr:

[nasadmin@csdr ~]$ nas_fs -name csdr-fs1 -type uxfs -create size=45G pool=clar_r5_performance -option slice=y

[nasadmin@csdr ~]$ server_mount server_2 csdr-fs1 /nas/fs1

Start the iSCSI service

csprod:

[nasadmin@csprod ~]$ server_iscsi server_2 -service -start

csdr:

[nasadmin@csdr ~]$ server_iscsi server_2 -service -start

Create an iSCSI target that will be used to serve LUNs to each Celerra VSA

csprod:

[nasadmin@csprod ~]$ server_iscsi server_2 -target -alias csprod-target1 -create 1:np=10.100.10.51 (This is the iSCSI interface you created on eth2 in Section 1)

csdr:

[nasadmin@csdr ~]$ server_iscsi server_2 -target -alias csdr-target1 -create 1:np=10.100.10.56 (This is the iSCSI interface you created on eth2 in Section 1)

Create an iSCSI LUN

Note that the LUN on the secondary needs to be set as “read-only”

csprod: (Primary)

[nasadmin@csprod ~]$ server_iscsi server_2 -lun -number 1 -create csprod-target1 -size 10G -fs csprod-fs1

csdr: (Secondary)

[nasadmin@csdr ~]$ server_iscsi server_2 -lun -number 1 -create csdr-target1 -size 10G -fs csdr-fs1 -readonly yes


Note: In order to set up reverse replication, from secondary to primary, a second set of LUNs needs to be created. This time the target replication LUN on the primary must be set to read-only. The source LUN on the secondary is configured normally, as a read/write LUN.

Set the iSCSI LUN masking for each ESX host

csprod:

[nasadmin@csprod ~]$ server_iscsi server_2 -mask -set csprod-target1 -initiator iqn.1998-01.com.vmware:esx2-29d419e2 -grant 1

csdr:

[nasadmin@csdr ~]$ server_iscsi server_2 -mask -set csdr-target1 -initiator iqn.1998-01.com.vmware:esx3-017f4fc5 -grant 1

Note: The IQN is associated with the software iSCSI initiator on the ESX host. You will need to retrieve it from the ESX server via VirtualCenter, as noted earlier.
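The initiator IQN has a recognizable shape: iqn.1998-01.com.vmware:&lt;hostname&gt;-&lt;hex tag&gt;. A small sketch splitting the esx2 initiator used above — pure string handling, nothing Celerra-specific — shows which part identifies the host:

```shell
# Split the ESX software-initiator IQN into its naming authority and host tag.
# The IQN below is the esx2 example used in the masking command above.
iqn="iqn.1998-01.com.vmware:esx2-29d419e2"
authority="${iqn%%:*}"   # everything before the colon
host_tag="${iqn#*:}"     # everything after the colon
echo "$authority / $host_tag"
```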


Configuring Replication

Set up mutual trusts between Celerra VSAs

This trust needs to be set up in both directions: primary to secondary and secondary to primary. The passphrase must be the same in both cases:

csprod:

[nasadmin@csprod ~]$ nas_cel -create csprod -ip 192.168.1.52 -passphrase nasadmin

csdr:

[nasadmin@csdr ~]$ nas_cel -create csdr -ip 192.168.1.57 -passphrase nasadmin

Configure the Data Movers to converse and share replicated data

csprod:

[nasadmin@csprod ~]$ nas_cel -interconnect -create csprod-to-csdr -source_server server_2 -destination_system csdr -destination_server server_2 -source_interfaces ip=10.100.10.51 -destination_interfaces ip=10.100.10.56

csdr:

[nasadmin@csdr ~]$ nas_cel -interconnect -create csdr-to-csprod -source_server server_2 -destination_system csprod -destination_server server_2 -source_interfaces ip=10.100.10.56 -destination_interfaces ip=10.100.10.51

Note: Perform this interconnect setup in both directions: primary to secondary and secondary to primary. If you only set it from one side, replication will only work in one direction.

The interconnect must be the same in both cases.


Establish Replication from the primary Celerra VSA to the secondary Celerra VSA

csprod:

[nasadmin@csprod ~]$ server_iscsi server_2 -target -l

This will provide the Celerra VSA IQN number for the next command

csdr:

[nasadmin@csdr ~]$ server_iscsi server_2 -target -l

This will provide the Celerra VSA IQN number for the next command

csprod (only):

[nasadmin@csprod ~]$ nas_replicate -create srm-replication -source -lun 1 -target iqn.1992-05.com.emc:bb0050568a59e40000-3 -destination -lun 1 -target iqn.1992-05.com.emc:bb0050569a3d070000-3 -interconnect csprod-to-csdr -source_interface ip=10.100.10.51 -destination_interface ip=10.100.10.56 -overwrite_destination

csprod:

[nasadmin@csprod ~]$ nas_replicate -l

csdr:

[nasadmin@csdr ~]$ nas_replicate -l

Appendix B: iSCSI and NFS Discrete LUN Creation & Assignment

Some people may have downloaded this document hoping for an “I just need to add an iSCSI LUN or NFS export to my ESX server from my existing Celerra VSA or physical Celerra” type of document. This appendix discusses just that. It falls outside the scope of the SRM discussion earlier but nevertheless provides insight into some of the basic functions of the Celerra VSA.

This appendix assumes that all LUNs and file systems are created for non-replicated uses.

This appendix also assumes that you have completed all of the steps in Section 1 of this document. If you need to create LUNs greater than 20GB in capacity you may also want to complete Section 2.

Step 1: Creating iSCSI Targets

Login to your Celerra VSA as nasadmin, Password: nasadmin

Navigate to the Wizards Folder

Click “New iSCSI Target”, Click “Next” to configure an iSCSI target on server_2

Enter the Target Alias Name: csprod_target1 and select Auto Generate Target Qualified Name

Note: This will generate a Unique IQN number for this target.
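The auto-generated name follows the usual IQN layout: iqn.&lt;date&gt;.&lt;reversed domain&gt;:&lt;unique suffix&gt;. As a sketch, here is that layout assembled from the Celerra target IQN that appears in Appendix A; the suffix is generated by the Celerra itself, so treat the one below purely as an example:

```shell
# Assemble an IQN from its parts; values are the example target from Appendix A.
naming_auth="1992-05.com.emc"
suffix="bb0050568a59e40000-3"     # generated by the Celerra; example only
iqn="iqn.${naming_auth}:${suffix}"
echo "$iqn"   # iqn.1992-05.com.emc:bb0050568a59e40000-3
```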


Assign the target to cge1 (your iSCSI Interface), Click “Next”

Click “Finish”

This graphic represents the complete target configuration as viewed from the iSCSI folder, Targets tab.


Step 2: Configuring iSCSI LUNs and presenting them to ESX as VMFS Datastores

You may not need to complete this step (outlined on pages 118-121) if you already configured your environment for iSCSI replication; those steps have already been completed. If you did not set up replication, please continue.

From Virtual Center: Click on the “Configuration” tab of the ESX host you want to configure. Click on “Networking”. Click on “Properties” for the vSwitch configured for iSCSI.


Select the vSwitch, Click “Edit”

Click on the “Security” tab. Change Promiscuous Mode to Accept.

Note: This is NOT required for the Celerra Simulator, as it will run either way. It is a best practice as it relates to most simulators.

Click “OK”, Click “Close”

From the main panel: Click on the Security Profile. Ensure that the Software iSCSI Client is enabled


Move to the Storage Adapters Tab and scroll to the iSCSI Software Adapter.

Note two items: 1) the iSCSI Initiator IQN, 2) Targets: currently 0 (zero)

Click “Properties”, Click “Configure”, Select Enabled

Copy the iSCSI Name field to the clipboard. You may want to write it down as well, just in case you copy something else to the clipboard.

Click “OK”

Click on the “Dynamic Discovery” tab, Click “Add”

Enter the TCP/IP address of the iSCSI target you created earlier. Ensure the port number is 3260

Click “OK” (This may take a moment or two...)

Click “Close”

When prompted to Rescan the host Click “No”

Repeat this step for each ESX host that will attach to this target.


From the Celerra Manager GUI (Using the Web Management interface: https://csprod)

Navigate to the Wizards Folder

Click “New iSCSI LUN”, Click “Next” to configure an iSCSI LUN on server_2


Pick the iSCSI target created earlier (csprod_target1), Click “Next”

Click “Create File System” (remember, like all filers, iSCSI LUNs are files in file systems)

Select the Storage Pool radio button. On the next screen select the specific Storage Pool you want to use. Two pools are defined by default; one for Performance and one for Economy.

Click “Next”, Enter the name for the new file system: iscsifs1

Enter the size of the new file system: enter an appropriate value in MB

Click “Next”, Click “Submit” wait for the job to complete, review the summary results

Click “Next”
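The wizard's size field above is entered in MB, while most of this document talks in GB, so a quick conversion helps. This assumes binary units (1G = 1024MB); the 20G figure is just an example, not a size this document prescribes.

```shell
# Convert a desired file-system size in GB to the MB value the wizard expects.
size_gb=20            # example size; pick what your lab needs
size_mb=$((size_gb * 1024))
echo "${size_gb}G = ${size_mb}MB"   # 20G = 20480MB
```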


Pick the file system just created from the pick-list window (it may be highlighted), Click “Next”

There are several options presented on this page; execute the options required for your specific use case. For the purpose of this document we'll focus on creating a single LUN.

Enter the Number: (This is the LUN number assigned to the newly created LUN) i.e., 10

Enter the size of the new LUN


Click “Next” Click “Enable Multiple Access” check box so it is selected Click “Add New”

Paste the ESX Host IQN number you copied earlier.

Repeat for all ESX Hosts, Click “Next”,

Click “Next” on the CHAP Access Panel

Click “iSCSI Service” check box so it is selected.

Click “Next”

Click “Finish” Wait for the job to process, review the summary for errors

Click “Close”


From Virtual Center: Select one of the ESX host(s) just masked to the iSCSI target.

Click on the “Configuration” tab, Select Storage Adapters. Select the iSCSI software adapter

Click “Rescan”, Accept the defaults and Click “OK”

Notice the “Targets” count has incremented by the number of LUNs associated with the target and masked to this server. In this case, maybe one or two LUNs.

Perform these steps for all ESX hosts in the cluster.

To this point, all we have done is allow the ESX servers to see the iSCSI LUNs. We must establish a VMFS file system so they are usable within the Virtual Infrastructure.


Click on the “Configuration” tab, Select Storage, Click Add Storage

Select the “Disk/LUN” radio button, Click “Next”

Select the LUN just added (i.e., LUN10, which may be displayed as vmhba32:0:10)

Click “Next”

Review the current disk layout and notice the message: “The hard disk is blank.”

Click “Next”

Enter a Name in the “Datastore Name” field: Use something obvious like “csprod_iscsi_lun10”

Click “Next”

Review VMFS Parameters

Click “Next”

Click “Finish”

Verify the LUN was added to the ESX host.


Step 3: Configuring NFS exports and presenting them to ESX as NFS Datastores

Note: This step is not required for SRM but is included here for completeness of configuration and protocol options.

From Virtual Center: Click Security Profile for the ESX host(s). Enable the NFS Client

From the Celerra Manager GUI (Using the Web Management interface: https://csprod)

Click “Wizards”. Click “New File System”

Select Data Mover: server_2, Click “Next”


Select the Storage Pool radio button. On the next screen select the specific Storage Pool you want to use. Two pools are defined by default; one for Performance and one for Economy.

Click “Next”


Enter the name for the new file system: nfsfs1

Enter the size of the new file system: enter an appropriate value in MB

Click “Next”, “Next”, “Next”

Click “Close”


Back on the main panel:

Click “NFS Exports”, Click “New”


Select the appropriate file system; i.e., nfsfs1 (/nfsfs1)

Enter TCP/IP Addresses for each ESX host that will access this export in the “Root Hosts:” field

Note: this field accepts wildcards and CIDR-formatted entries

Click “OK”

Celerra Manager will finish the task and present a list of the NFS exports as confirmation.
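If you later script this export from the Control Station, the Celerra server_export CLI takes its root hosts as a single colon-separated list (the root= export option). The sketch below just assembles that list; the IP addresses are hypothetical lab ESX hosts, and you should verify the exact server_export syntax against the Celerra man pages.

```shell
# Join a space-separated set of ESX host IPs into the colon-separated form
# used by "server_export ... -option root=...". IPs here are placeholders.
esx_hosts="192.168.1.11 192.168.1.12"
root_list=$(echo "$esx_hosts" | tr ' ' ':')
echo "root=$root_list"   # root=192.168.1.11:192.168.1.12
```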


From Virtual Center: Click on the “Configuration” tab, Select Storage, Click “Add Storage”

Select the “Network File System” radio button, Click “Next”


In the Properties panel enter the Server: name or IP and the Folder: that the server is exporting. Example: Server: 192.168.1.247

(NOTE: this is a Data Mover (cge0) interface, NOT the Control Station)

Folder: /nfsfs1

Enter a Name in the Datastore Name field: Use something obvious like csprod_nfs1, Click “Next”

Review the properties, Click “Finish”

Verify the NFS datastore was added to the ESX host.


Appendix C: Basic VI Commands

Credits to: Douglas (Doug) Palovick Homepage: Palovick.com email: [email protected]

What is VI and is it for you?

VI is an extremely powerful text editor (not to be confused with a word processor). Here you will find basic VI commands for use within the VI, or vim, editor.

Ok, now on to some VI basics…

Opening a file/starting vi

• To start vi, all you need to do is type "vi" (minus the quotes, always minus the quotes from here on through)

• Let’s say I wanted to open a file named foo.txt that resides in the same directory I am currently in. I would type "vi foo.txt"

• Now let’s say I wanted to open inetd.conf, which lives in the /etc directory. I would type "vi /etc/inetd.conf". The directory path to inetd.conf is always the same, no matter what directory you are in.

Quitting VI

• First let’s make sure you are in what is called "command mode". We do this by pressing the Esc key.

• Now we press " : ". You should see a little prompt at the bottom of your page. Now press "q" and hit enter.

• If it will not let you quit, it is because you have made some changes to the file and you must press ":", then "q!", then enter to force vi to quit (warning, this will disregard any changes you have currently made).

• If you would like to save the changes you have made within your vi session, press ":", then "wq", now press enter. If vi ever complains that you're trying to make changes to a read-only file, try "w!" or "wq!".

• If you ever have any problems with doing this or other "command mode" commands hit the Esc key once or twice to make sure you are in command mode and try again.


Basic command mode commands for inserting text

• First make sure you are in command mode by pressing the Esc key before you try any of the commands below. Note that these commands are typed directly in command mode (no ":" prefix is needed). Most of them put you into insert mode, which enables you to enter new text into the file; to get back to command mode, press the Esc key again.

c - Change text. This is a commonly used command for editing a file with vi. "c" is an operator that combines with a movement command; for example, "cw" changes from the cursor to the end of the current word. The affected text is removed and you are placed in insert mode to type the replacement; press Esc when done.

a - Append text after the cursor.

A - Append text at the end of the line.

d - Delete text; for example, "dw" deletes a word and "dd" deletes the entire line.

i - Insert text before the cursor.

I - Insert text at the beginning of the line.

R - Overwrite (replace) existing text as you type.

Appendix D: Troubleshooting Tips

HTTPS Issues

To check whether the HTTPD daemons are running on the Control Station, type:

$ ps -e | grep httpd

If the HTTPD daemons are not running, restart Celerra Manager by switching to root and typing:

/nas/http/nas_ezadm/etc/script restart
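These two steps can be combined into a small check-and-restart sequence; this is a sketch only, to be run as root on the Control Station:

```shell
# Restart Celerra Manager only if no httpd process is found.
# The [h] bracket trick keeps grep from matching its own command line
# in the ps output.
if ! ps -e | grep -q '[h]ttpd'; then
    /nas/http/nas_ezadm/etc/script restart
fi
```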

Are all of the Celerra VSA processes running?

To list the Celerra VSA daemons running on the Control Station, type:

[nasadmin@csprod ~]$ ps -e | grep nas | awk '{ print $4 }' | sort | uniq
nas_boxmonitor
nas_eventcollec
nas_eventlog
nas_mcd
nas_replicate
nas_watchdog

The complete list of daemons is shown in the output above; the list on your server might differ. If the daemons are not running, restart them by typing:

/etc/rc.d/init.d/nas stop
/etc/rc.d/init.d/nas start
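The pipeline above simply extracts the fourth column (the command name) from the ps listing, then sorts it and drops duplicates. A quick illustration against canned ps-style output (the sample lines are fabricated for the demonstration):

```shell
# Three fabricated 'ps -e'-style lines; column 4 is the command name
sample='  101 ?        00:00:01 nas_mcd
  102 ?        00:00:00 nas_watchdog
  103 ?        00:00:02 nas_mcd'

# Same filter chain as used on the Control Station:
# keep "nas" lines, print column 4, sort, drop duplicates
echo "$sample" | grep nas | awk '{ print $4 }' | sort | uniq
# prints:
# nas_mcd
# nas_watchdog
```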

Restarting a Data Mover

To restart (reboot) a Data Mover, type:

server_cpu server_2 -reboot -monitor now

You can verify when the system is back online by using /nas/sbin/getreason

Code 5 indicates the Data Mover or blade is available.

You might also see the following codes as the Data Mover or blade restarts:

• 0 (reset) - Data Mover or blade performing a power-on self-test (POST)
• 1 (DOS booted) - Data Mover or blade booting to DOS
• 3 (loaded) - operating system loaded and initializing
• 4 (ready) - operating system initialized
• 5 (contacted) - Data Mover or blade available
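For scripts that poll getreason during a reboot, the code-to-state mapping can be captured in a small helper. This is an illustrative sketch; the function name and wording are my own, only the code values come from the list above:

```shell
# Translate a getreason boot code into a human-readable state.
# Code values and meanings are those listed above.
reason_text() {
    case "$1" in
        0) echo "reset: performing POST" ;;
        1) echo "DOS booted" ;;
        3) echo "loaded: OS loaded and initializing" ;;
        4) echo "ready: OS initialized" ;;
        5) echo "contacted: Data Mover available" ;;
        *) echo "unknown code: $1" ;;
    esac
}

reason_text 5   # prints: contacted: Data Mover available
```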
