Deploying a Cisco UCS based Converged Infrastructure Solution in an Enterprise Data Center

LTRDCN-2010

Speakers: Haseeb Niazi, Sreeni Edula


Table of Contents

LEARNING OBJECTIVES
LAB TOPOLOGY
ACCESSING THE CISCO LIVE LAB
    CONNECTIVITY TO CISCO LIVE LAB
    ACCESS AND CONFIGURATION INFORMATION FOR LAB (LAB ACCESS GUIDE)
    VERIFY IP CONNECTIVITY TO KEY COMPONENTS AND TOOLS
LAB 0: CONNECT TO THE LAB TESTBED
    TASK 1: VPN INTO CISCO LAB
    TASK 2: RDP TO JUMP-SERVER
    TASK 3: VERIFY ACCESS TO FTP SERVER
    TASK 4: OPEN WEB BROWSER AND NOTEPAD++
LAB 1: INTRODUCTION TO CISCO UCS AND NEXUS PROGRAMMABILITY
    TASK 1: INTERACTING WITH UCS MANAGER USING POWERSHELL
    TASK 2: GENERATE CISCO UCSM CREDENTIALS FOR NON-INTERACTIVE LOGIN
    TASK 3: CREATE A POWERSHELL SCRIPT TO CONNECT TO CISCO UCSM
    TASK 4: CONFIGURE CISCO UCS MANAGER USING POWERSHELL – ADD VLANS
        Step 1: Login to Cisco UCS GUI
        Step 2: Enable XML Recording from the GUI
        Step 3: Configure VLANs
        Step 4: Stop XML recording and save file
        Step 5: Use UCS PowerTool to Convert XML to PowerShell commands
        Step 6: Integrate PowerShell configlet into main PowerShell script
        Step 7: Configure UCS using the PowerShell script
    TASK 5: CONFIGURE CISCO NEXUS 9000 SERIES SWITCHES – ADD VLANS
        Step 1: Login to NX-API Developer Sandbox Tool for each Nexus Switch
        Step 2: Enter CLI Configuration in the Sandbox Tool for the first Switch
        Step 3: Generate JSON Configuration for the Nexus Configuration
        Step 4: Generate Python script for the Configuration
LAB 2: DEPLOYING A CONVERGED INFRASTRUCTURE
    SETUP CISCO UCS COMPUTE AND STORAGE ACCESS
        Task 1: Generate a Service Profile from Service Profile Template
        Task 2: Deploy a UCS server using the Service Profile
        Task 3: Load ESXi image and Power Up server
    SETUP VIRTUALIZATION LAYER
        Task 1: Configure ESXi Host
        Task 2: Add ESXi Host to vCenter
APPENDIX A: LAB ACCESS INFORMATION
APPENDIX B: CONVERGED INFRASTRUCTURE LAB – STEP BY STEP CONFIGURATION
    SETUP COMPUTE AND STORAGE ACCESS
        Configure Cisco UCS Server and Storage Access
        Task 1: Review Base Setup
        Task 2: LAN Configuration
        Task 3: SAN Configuration
        Task 4: Server Configuration
        Task 5: Create Service Profile Template
        Task 6: Deploy Cisco UCS Service Profile
    VCENTER – DETAILED CONFIGURATIONS
        Setup VMware vCenter
    NETWORK SETUP – CONFIGURE NEXUS 9000 SWITCHES
        Login to NX-API Developer Sandbox Tool for each Nexus Switch
        Configure VLANs on each Switch using Sandbox Tool

Learning Objectives

Upon completion of this lab, you will be able to:

• Configure Cisco UCS compute and networking for setting up the Converged Infrastructure
• Configure and set up the VMware infrastructure for hosting application VMs

Note: While the storage system is an important consideration when setting up a Converged Infrastructure, this lab does not cover storage provisioning. The storage system has been pre-provisioned with the appropriate boot LUNs and datastores and is ready to use.

Lab Topology

This lab is based on a Cisco UCS based Converged Infrastructure solution. The physical topology of the testbed is shown in the figure below.

Figure 1 Physical Topology

The lab is divided into several user PODs to support multiple users simultaneously as outlined below:

• Each attendee will have their own POD.
• Each POD is assigned a dedicated UCS blade server to configure.
• Servers in each POD connect to Cisco UCS Fabric Interconnects (FI) that make up the Cisco UCS domain. The Fabric Interconnects are a shared resource.
• To avoid accidental overlap with other users in the same UCS domain (FI pair), each POD is part of a dedicated organization, and configuration will, for the most part, be contained within your POD's organization. Some UCS configuration is global (for example, VLANs).
• Each POD is assigned its own dedicated pre-configured storage LUN and NFS datastore volumes on a storage system.
• The Nexus switches that provide connectivity between the Cisco UCS domain and the storage network are shared by all PODs.
• Each POD has a dedicated Jump-Server.
• All PODs share a common VMware vCenter.

Accessing the Cisco Live Lab

In this section, you will find all the information necessary to access the lab, including:

• Connectivity info from the Cisco Live room to the remote lab
• How to connect to the remote Cisco Live Lab
• How to access tools necessary to complete the lab
• Access and Configuration Information for the lab (Lab Access Guide)

Connectivity to Cisco Live Lab

The topology below shows the connectivity from your workstation in the Cisco Live room to the remote Cisco Live Lab, hosted in Cisco's DMZ network.

Figure 2 Lab Connectivity

To connect to the remote Cisco Live lab, establish a VPN session from the Cisco Live workstation to the Lab ASA (see above topology). After establishing a VPN session, attendees will use an RDP client on their Cisco Live workstation to remote desktop into a dedicated Jump-Server VM for their assigned POD. The Jump-Server VM for each POD is hosted inside the remote Cisco lab environment and can only be accessed

after a VPN session is established. From the Jump-Server, you should have access to all the tools and services necessary to complete the lab.

Access and Configuration Information for Lab (Lab Access Guide)

In addition to this document, a separate PDF document (the Lab Access Guide) is provided with all the relevant access and configuration parameters, such as IP addresses of key components in the infrastructure, account information, access credentials, IP addressing, and VLANs. This information is also included in Appendix A of this document. However, we recommend that you keep the separate PDF open so you can quickly find the information you need as you step through the various tasks in this lab.

Verify IP Connectivity to Key Components and Tools

Verify that you can ping the DNS server, NTP server, and other components shown in the figure above. You will not be able to ping the default gateway because it resides on a firewall. See Table 1 in the Lab Access Guide (or Appendix A of this document) for IP and access information.
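If you prefer to script the check, the following sketch (run from the Jump-Server's PowerShell prompt) pings the shared components using the addresses listed in Table 1; adjust the list if your Lab Access Guide differs.

# Ping the key lab components listed in Table 1 (Appendix A)
$components = @{
    'DNS'         = '192.168.155.14'
    'NTP'         = '192.168.155.254'
    'UCS Manager' = '192.168.155.20'
    'Nexus9000-1' = '192.168.155.3'
    'Nexus9000-2' = '192.168.155.4'
}
foreach ($name in $components.Keys) {
    $reachable = Test-Connection -ComputerName $components[$name] -Count 2 -Quiet
    Write-Host ("{0,-12} {1,-16} reachable: {2}" -f $name, $components[$name], $reachable)
}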

Lab 0: Connect to the Lab Testbed

The first lab will cover how to connect to the remote Cisco Live Lab and access the tools necessary to complete the lab.

Task 1: VPN into Cisco Lab

To VPN to the remote Cisco Lab, complete the following steps:
1. On the Cisco Live (CL) workstation, bring up the Cisco AnyConnect VPN client.
2. Enter the following VPN Server IP address: 64.100.248.78 and click Connect.
3. Ignore any certificate warnings.
4. Use the VPN login credentials provided by the lab instructors and click Ok.
5. Click Accept when the banner appears.
6. Wait for the VPN tunnel to be established (a lock symbol appears in the system tray).

Task 2: RDP to Jump-Server

Each POD is assigned a Jump-Server hosted in the remote Cisco Lab. You will primarily use this Jump-Server to configure the lab. To RDP to the Jump-Server for your POD, complete the following steps:
1. Launch RDP from your CL workstation.
2. Collect the Jump-Server IP address and login credentials – see Table 2 in the Lab Access Guide or Appendix A of this document.
3. RDP to the IP address of the Jump-Server and login.


Task 3: Verify Access to FTP Server

To complete the lab, you will need to access a shared storage space on the lab FTP server where we will maintain all configuration scripts. This shared storage space allows the lab instructors to push scripts and review user scripts in case of any issues.

To verify access to this common repository (Z:), complete the following steps.
1. From your POD's Jump-Server, navigate to Start > Computer > Open.
2. Verify there is a network mount on the Z: drive for the FTP server (192.168.155.150). The POD number should match your assigned POD number. The figure below shows the mapping for POD1.

3. If the drive is disconnected, double-click the drive to connect. Enter the credentials – see Table 1 in the Lab Access Guide.

Task 4: Open Web Browser and Notepad++

For the upcoming labs, you will need a web browser to access Cisco UCS Manager and the Nexus 9000 switches. You will also need Notepad++ for viewing and editing scripts. Complete the following steps to get these tools ready for the upcoming labs.
1. From your POD's Jump-Server, open 3 tabs in a web browser.
2. Login to Cisco UCSM, Nexus9k-1, and Nexus9k-2 – see Table 1 in the Lab Access Guide for credentials.
3. Launch Notepad++.


Lab 1: Introduction to Cisco UCS and Nexus Programmability

In this section, you will be introduced to the Cisco UCS and Cisco Nexus programmability interfaces and will learn to do the following:
• Develop a PowerShell script to configure a single feature on a Cisco UCS server.
• Develop a Python script to configure a single feature on Cisco Nexus switches.

The high-level steps involved in this task are as follows.
• From your POD's Jump-Server, use the pre-installed UCS PowerTool (UPT) application to create and execute PowerShell (PS) scripts.
• From a web browser on your POD's Jump-Server, use the Nexus Sandbox Tool and NX-API to configure the Nexus 9000 switches.

Task 1: Interacting with UCS Manager using PowerShell

Customers most often interact with UCS Manager through its GUI. This task shows how quickly the same information can be gathered using Command Line Interface (CLI) commands.

In this task, you will connect to UCS Manager using PowerShell and issue a single command to gather the number, type and serial numbers of all the blades (spread across multiple chassis) in the UCS system.

Complete the following steps to collect inventory information from a Cisco UCS system.

1. From your POD's Jump-Server, launch Cisco UCS Manager PowerTool.
2. At the prompt: PowerTool :\>, enter the following to connect to UCSM.

Connect-Ucs 192.168.155.20

3. Provide the username and password for your POD from Table 3 of the Lab Access Guide.
4. When the login is complete, you will see output similar to that shown below:


5. Issue the following command to list details about all the blades in the system. You can append "| more" to the command to scroll through the output.

Get-UcsBlade | more

6. Now, to filter the output to list only the chassis/blade numbers, types, and serial numbers, use the following command:

Get-UcsBlade | select dn, serial, model

7. The output of the command is similar to the figure shown below:
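Beyond simple selection, the output of Get-UcsBlade can be filtered and exported like any other PowerShell object. The following is a sketch, not a lab step; the C:\UCS directory is only created in the next task, so substitute any writable path.

# Sketch: filter blades by model, then export the inventory to a CSV file
Get-UcsBlade | Where-Object { $_.Model -like 'UCSB*' } | Select-Object Dn, Serial, Model
Get-UcsBlade | Select-Object Dn, Serial, Model | Export-Csv -Path C:\UCS\blade-inventory.csv -NoTypeInformation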

Task 2: Generate Cisco UCSM Credentials for non-interactive login

In the previous task, we learned how to use the PowerTool CLI to gather various parameters from Cisco UCS. We also used PowerTool interactively to access Cisco UCS. However, for configuration ease and re-use, customers typically write scripts to programmatically access and configure Cisco UCS.

For a programmatic or non-interactive login to Cisco UCS Manager, the following methods can be used:

• Use the -Credential parameter when connecting with Connect-Ucs.

• Use credentials exported from a previous Connect-Ucs session as input.

In this task, we will use the second method and export credentials from an existing session. By exporting an existing session's Cisco UCS Manager (UCSM) credentials, they can be reused in future scripts to connect non-interactively to Cisco UCSM. The credentials are unique to each POD.

To export an existing UCS session's credentials, complete the following steps. All commands should be entered at the PowerTool C:\> prompt.
1. Check if a C:\UCS directory already exists – use ls or dir.
2. If a C:\UCS\ directory does not exist, enter the following command to create it.

New-Item -ItemType Directory UCS
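Steps 1 and 2 can also be combined into a single guarded command, a small sketch:

# Create C:\UCS only if it does not already exist
if (-not (Test-Path C:\UCS)) { New-Item -ItemType Directory C:\UCS }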

3. Enter the following command to export the current UCS session credentials.

Export-UcsPSSession -Path C:\UCS\BB01-UCSM-cred.xml

4. When prompted for a Key:, enter BB01-UCSM. This key will be included in the PowerShell scripts so you will not need to type it when running them.
5. Verify that the BB01-UCSM-cred.xml file is created in the C:\UCS directory.

6. View the contents of the file using the following command.
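For example, the built-in Get-Content cmdlet prints the exported XML to the console:

Get-Content C:\UCS\BB01-UCSM-cred.xml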

7. The newly created credentials file can now be used to log into Cisco UCSM non-interactively to run scripts.
8. To verify connectivity using the file we just created:
a. Close the current PowerShell session:

Disconnect-Ucs

b. Reconnect to UCSM using the newly generated credentials file:

Connect-Ucs -Path C:\UCS\BB01-UCSM-cred.xml

c. Enter BB01-UCSM when prompted for the key.

d. Disconnect the session before proceeding to the next Task.

Disconnect-Ucs

Task 3: Create a PowerShell Script to connect to Cisco UCSM

In this task, you will create a basic script that programmatically connects to, logs into, and disconnects from Cisco UCSM using the previously created XML credentials file. This script will be the foundation for all future scripts.

Complete the following steps to create the PowerShell script.
1. From your POD's Jump-Server, navigate to Start > Computer.
2. Select the Network Location for your POD<#> (\\192.168.155.50) (Z:).
3. Right-click the script CL-ConnectToUCS.ps1 and select Edit with Notepad++.


4. From the Notepad++ menu, create a new empty file by selecting File > New.
5. Save this new empty file in the PS-SCRIPTS directory of the Z: share using the name shown below. Select Windows PowerShell for the file type – the .ps1 extension will then be added to the filename automatically.

Z:\PS-SCRIPTS\ConfigureUCS.ps1

6. You should now have two tabs open in Notepad++, one for each file.

7. Copy the contents of the first file (CL-ConnectToUCS.ps1) into the second one (ConfigureUCS.ps1). The second file will become the script we use to configure UCS; this step takes care of connecting to UCS.
8. Review the script as you copy it to get an understanding of what it is doing. When complete, save the second file and close both files. The script, with explanations of the various commands, is reproduced here for your reference.


# This script connects to a Cisco UCSM non-interactively using the
# XML credentials file that was created with an earlier PowerShell step.

# START OF SCRIPT; prints the script start time
Write-Host "Start Time is " (Get-Date).DateTime

# DEFINE VARIABLES
Write-Host "Setting Variables"

# The system name and key are assigned to variables; this key was used
# during credential file creation
$UCSName = "BB01-UCSM"
$CredentialKey = "BB01-UCSM"

# Connect to UCS using the credential key as part of the command. The
# plain-text key cannot be used in the Connect-Ucs command, so the key
# is converted to a secure string, assigned to a new variable $FPKey,
# and then used in the Connect-Ucs command
Write-Host "Connecting to UCS"
$FPKey = $CredentialKey | ConvertTo-SecureString -AsPlainText -Force
Connect-Ucs -LiteralPath c:\UCS\"$UCSName"-cred.xml -Key $FPKey

# DISCONNECT FROM UCSM
Write-Host "Disconnect from UCS Manager"
Disconnect-Ucs

# END OF SCRIPT
Write-Host "End Time is " (Get-Date).DateTime

9. Switch to the UCS PowerTool window and execute the newly created script using the command line shown below. Verify that the script executes correctly and generates output similar to the screen capture below.

Z:\PS-SCRIPTS\ConfigureUCS.ps1

If the script does not execute correctly and you get the following output:

You will need to remove the restriction by issuing the following commands in the PowerTool window, at the PowerTool C:\> prompt:

Set-ExecutionPolicy -Scope Process -ExecutionPolicy Unrestricted


Unblock-File Z:\PS-SCRIPTS\ConfigureUCS.ps1

10. Execute the script again and it should complete successfully.

11. Now you’re ready to start configuring the UCS.

Task 4: Configure Cisco UCS Manager using PowerShell – Add VLANs

In this task, you will add VLAN-creation functionality to the script from the previous task. The resulting script will be able to connect to, log into, create VLANs on, and disconnect from Cisco UCSM.

The high-level steps involved in this task are as follows.

1. Login to the Cisco UCSM GUI and use the Record XML tool to record the steps involved in configuring VLANs on Cisco UCSM. The tool records the XML requests generated by the GUI configuration into a log file.
2. Use UCS PowerTool cmdlets to convert the XML into PowerShell commands.
3. Integrate the PowerShell commands into the main UCS configuration script to produce a complete script that connects to UCSM, configures VLANs, and then exits.

Complete the following steps to create a PowerShell script that creates VLANs on Cisco UCSM:

Step 1: Login to Cisco UCS GUI
1. From your POD's Jump-Server, open a web browser.
2. Navigate to the IP address of Cisco UCSM (192.168.155.20).
3. Launch UCS Manager.
4. Login using the credentials for your POD from Table 3 of the Lab Access Guide.

Step 2: Enable XML Recording from the GUI
1. Start XML recording using the key sequence Ctrl-Alt-Q. Make sure you first click in the browser window to make it active before using the key sequence.
2. You should now see Record XML on the top navigation menu.

3. Click on Record XML and it should now change to Stop XML Recording.
4. Any configuration changes from now onwards will be recorded.

Step 3: Configure VLANs
1. Click the LAN icon on the left navigation menu.
2. Navigate to LAN > LAN Cloud > VLANs.
3. Click VLANs and, in the right pane, click the Add button.

15 | Page

4. In the Create VLANs pop-up window, specify the VLAN Name and VLAN ID for your POD from Table 4.
5. Leave everything else at its default.
6. Click OK twice to accept and confirm the changes.
7. If this VLAN already exists and you get the following error, exit out of the Create VLANs pop-up window.

8. Delete the existing VLAN by right-clicking it under LAN > LAN Cloud > VLANs. Re-add the VLAN using steps 3-6 to capture the XML requests in the recording.

Note that the XML recording will only include the addition of the VLAN (which is what we want), not the deletion.

Step 4: Stop XML recording and save file

1. Click on Stop XML Recording to stop the recording.
2. In the pop-up window, for the Log File, enter the file name AddUplinkVLAN and click OK. This log file has the XML requests associated with the GUI configuration changes.
3. In the pop-up window, specify the directory to save the file in. Navigate to the XML-REQUESTS directory in your POD's Scripts folder. If the browser does not prompt you for the save location, the file is probably saved automatically in the Download folder; move it to the Z:\XML-REQUESTS folder.
4. Verify the file is saved in the correct folder and the file name is correct.


Step 5: Use UCS PowerTool to Convert XML to PowerShell commands

1. Switch to the UCS PowerTool window and execute the following UCS cmdlet to convert the XML requests into PowerShell commands. Use the same file name for both the XMLRequests File and the PSConfiglets File (in this case: AddUplinkVLAN).

ConvertTo-UcsCmdlet -Xml -LiteralPath Z:\XML-REQUESTS\<XMLRequests File> > Z:\PS-CONFIGLETS\<PSConfiglets File>

• XMLRequests File → Full path to the recorded XML log file in the XML-REQUESTS folder. This file was created in the previous step.
• PSConfiglets File → Full path to a new file in the PS-CONFIGLETS directory. This file does not need to be created ahead of time, but the directory should already be present in your POD's Scripts directory.

2. Navigate to the Z:\PS-CONFIGLETS folder, select the PowerShell configlet created in the previous step, right-click, and select Edit with NotePad++ to open the newly created file. Select View > Word Wrap from the menu if needed.
3. The output should be similar to the following.

Get-UcsLanCloud | Add-UcsVlan -CompressionType "included" -DefaultNet "no" -Id 917 -McastPolicyName "" -Name "CL-POD17-VLAN" -PolicyOwner "local" -PubNwName "" -Sharing "none"

4. To prevent a script failure due to an existing configuration that matches this one, add -ModifyPresent to the above output as shown below. This does not delete the existing configuration – it simply suppresses the duplicate-object error messages that would otherwise be written to the console when the configuration already exists.

Get-UcsLanCloud | Add-UcsVlan -ModifyPresent -CompressionType "included" -DefaultNet "no" -Id 917 -McastPolicyName "" -Name "CL-POD17-VLAN" -PolicyOwner "local" -PubNwName "" -Sharing "none"
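If you want the configlet to work for any POD without editing the recorded values, a sketch such as the following parameterizes the VLAN ID and name ($VlanId and $VlanName are illustrative variable names; the POD17 values from Table 4 are shown):

# Sketch: parameterize the recorded configlet; set these two values from Table 4
$VlanId   = 917
$VlanName = "CL-POD17-VLAN"
Get-UcsLanCloud | Add-UcsVlan -ModifyPresent -CompressionType "included" -DefaultNet "no" -Id $VlanId -McastPolicyName "" -Name $VlanName -PolicyOwner "local" -PubNwName "" -Sharing "none"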

Step 6: Integrate PowerShell configlet into main PowerShell script

1. In NotePad++, open the previously created UCS configuration script "Z:\PS-SCRIPTS\ConfigureUCS.ps1".
2. Copy the contents of the configlet "Z:\PS-CONFIGLETS\AddUplinkVLAN.txt" and paste them into the UCS configuration script before the Disconnect from UCSM section, as shown below:


3. Save the file.

Step 7: Configure UCS using the PowerShell script

In this step, we will delete the VLAN created using the UCS GUI and then add it back using the PowerShell script.
1. Use a web browser to log into Cisco UCSM. Click the LAN icon on the left navigation menu.
2. Navigate to LAN > LAN Cloud > VLANs.
3. From the right window pane, right-click the VLAN for your POD (the one you created in an earlier step) and select Delete. Click Yes and OK to confirm the deletion.
4. Switch to the UCS PowerTool window and execute the script Z:\PS-SCRIPTS\ConfigureUCS.ps1.


5. From the Cisco UCSM GUI, navigate to LAN > LAN Cloud > VLANs and verify the VLAN has been created.

Note: With the completion of this step, you have learned how to obtain UCS PowerShell commands by recording and converting XML requests, and you have successfully incorporated configuration commands into a master PowerShell script.

Task 5: Configure Cisco Nexus 9000 Series switches – Add VLANs

In the previous task, you created VLANs on the UCS side – specifically on the Fabric Interconnect uplinks going to the Nexus 9000 series switches.


In this task, you will add the VLANs to the Nexus links going to the Fabric Interconnects. The VLANs are added to the Port-Channels that are part of the Virtual Port Channel (vPC) configuration between Cisco UCS Fabric Interconnects and Nexus 9000 Series switches. You will also create a basic script to login to Cisco Nexus 9000 series switches and create VLANs.

The high-level steps involved in this task are as follows.
1. Use a web browser to log into the NX-API Developer Sandbox tool by browsing to the IP address of each Nexus switch.
2. Use the Sandbox tool from the browser to configure the VLANs using Nexus CLI commands and POST the configuration to the switches.
3. Use the Sandbox tool to generate Python code from the CLI commands.

Complete the following steps to create a Python script to configure VLANs on Nexus switches.

Step 1: Login to NX-API Developer Sandbox Tool for each Nexus Switch

1. Collect the access IPs and logins for the two Nexus switches from Table 1 of the Lab Access Guide.
2. From the Jump-Server, open a web browser.
3. Navigate to the IP address of the Nexus-1 switch and login.
4. Open a second tab in the browser.
5. Navigate to the IP address of the Nexus-2 switch and login.

Note: If you are using Chrome and receive the following warning: "Flash is not available in the browser", open chrome://settings/content in the address bar, click Flash, and under "Allow", add the IP addresses of the two Nexus switches. Alternatively, click the padlock icon and allow Flash to run for the site.


Step 2: Enter CLI Configuration in the Sandbox Tool for the first Switch

1. From Table 4, identify the VLAN Name and ID for your POD (the same as in the previous task).
2. In the Sandbox tool window (top left) for Nexus-1, enter the CLI configuration to create the VLAN and add it to the three port-channels (13, 14, 155). Insert the VLAN ID and VLAN name for your POD.

vlan <VLAN-ID>
 name <VLAN-Name>
interface port-channel 13-14, port-channel 155
 switchport trunk allowed vlan add <VLAN-ID>

3. Since this is a shared environment, verify the VLAN name and ID against Table 4 and confirm that the keyword add is present in the switchport trunk statement – without add, the command would overwrite the allowed-VLAN list shared by all PODs.


4. Click POST to apply the configuration to the Cisco Nexus-1 switch.

Step 3: Generate JSON Configuration for the Nexus Configuration

In this step, you will format the Nexus-1 configuration as JSON data using the Sandbox tool.
1. In the Sandbox tool for Nexus-1, select json as the API message format for the Nexus-1 configuration.
2. Select cli_conf for the Command type.

3. In the REQUEST window, click the Copy button and paste the generated API message into a new tab in Notepad++.
4. The JSON output should be similar to that shown below. Parse through the JSON to familiarize yourself with the relationship between the CLI and the JSON API.

{ "ins_api": { "version": "1.0", "type": "cli_conf", "chunk": "0", "sid": "1", "input": "vlan 917 ;name CL-POD17-VLAN ;interface port-channel 13-14, port-channel 155 ;switchport trunk allowed vlan add 917", "output_format": "json" } }

Step 4: Generate Python script for the Configuration

In this step, you will create a Python script that will be executed to configure the second Nexus switch (Nexus-2) with the same VLAN configuration.
1. Click the Python button in the REQUEST window for Nexus 9K-1 in the Chrome browser, then click the Copy button and paste the generated Python code into a new tab in Notepad++.
2. The generated script is shown below. Modify the highlighted lines using the values from Table 1. Also, add a print statement at the end of the script to print the response from the switch.

import requests
import json

"""
Modify these please
"""
url = 'http://YOURIP/ins'
switchuser = 'USERID'
switchpassword = 'PASSWORD'

myheaders = {'content-type': 'application/json'}
payload = {
    "ins_api": {
        "version": "1.0",
        "type": "cli_conf",
        "chunk": "0",
        "sid": "1",
        "input": "vlan 917 ;name CL-POD17-VLAN ;interface port-channel 13-14, port-channel 155 ;switchport trunk allowed vlan add 917",
        "output_format": "json"
    }
}

response = requests.post(url, data=json.dumps(payload), headers=myheaders, auth=(switchuser, switchpassword)).json()

# Print the response from the switch (Python 2 print statement; the lab runs C:\Python27)
print response

3. Save the file in the Z:\ directory with the file name: SwitchConfig and set the type as Python file (*.py; *.pyw).


4. Edit the file and insert the IP address, username, and password of the second switch (Nexus-2, from Table 1) in the url, switchuser, and switchpassword fields.

"""
Modify these please
"""
url = 'http://YOURIP/ins'
switchuser = 'USERID'
switchpassword = 'PASSWORD'

The modified file should look like the following.

5. Save the file.
6. Bring up a command prompt and change directory to C:\Python27.
7. Execute the script using the following command:


python.exe z:\SwitchConfig.py

8. Log into Nexus-2 using the web browser and enter the following command to verify the VLAN was configured correctly. The command type should be cli_show. Click Post.

show vlan id <VLAN-ID>

9. Verify the Response Output shows the VLAN is active and enabled on the three port-channels.
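For reference, the cli_show request that the Sandbox builds for this verification looks like the following (VLAN 917 is shown for the POD17 example; substitute your POD's VLAN ID):

{
  "ins_api": {
    "version": "1.0",
    "type": "cli_show",
    "chunk": "0",
    "sid": "1",
    "input": "show vlan id 917",
    "output_format": "json"
  }
}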


Lab 2: Deploying a Converged Infrastructure

Setup Cisco UCS Compute and Storage Access

In this lab, you will configure a Cisco UCS server using a service profile template (SPT). A service profile template captures all of the configuration for a server as a template, which can then be used to deploy hundreds of servers. The configuration captured in a template takes the form of pools and policies that allow multiple servers to be configured by allocating from the pools. Pools include UUID pools, MAC pools, VLAN pools, and IP address pools for KVM management, iSCSI SAN boot, etc.

For this lab, you will be provided a pre-configured service profile template. You will use this template to instantiate a service profile, which is the configuration for one server. The service profile is then associated with a physical server, which applies the configuration to that server – until association, the configuration is not tied to any hardware.

You will also capture the XML requests associated with deploying a service profile from an SPT, similar to the XML requests captured for the VLAN configuration in the introductory lab. The XML requests will then be converted to PowerShell configlets and incorporated into the larger UCS configuration script, ConfigureUCS.ps1, to automate the above configuration steps.

To better understand service profile templates, Appendix B of this document provides a complete lab that creates a service profile template, including the pools, policies, and other configuration that make up the SPT.

Task 1: Generate a Service Profile from Service Profile Template

In this task, you will deploy a Service Profile that consolidates all of the configuration associated with a single Cisco UCS server. This includes LAN, SAN and Server aspects of the server’s configuration. The Service Profile is generated from a Service Profile Template named: CL-POD<#>-SPT-ESXi.

The high-level steps involved in this task are as follows.
1. Login to the Cisco UCSM GUI to create the Service Profile.
2. Use the Record XML tool to start recording the XML requests generated when a Service Profile is deployed from the Cisco UCSM GUI.
3. Deploy the Service Profile using the pre-configured Service Profile Template.
4. Stop the XML recording.
5. Convert the XML requests to PowerShell configlets using UCS PowerTool cmdlets.
6. Integrate the PS configlets into the main UCS configuration script using NotePad++.

Complete the following steps to deploy a service profile for a Cisco UCS server.

Step 0: Login to Cisco UCS GUI
1. From your POD's Jump-Server, open a web browser.
2. Navigate to the IP address of Cisco UCSM (192.168.155.20).
3. Launch UCS Manager.
4. Login using the credentials for your POD from Table 3 of the Lab Access Guide.

Step 1: Enable XML Recording from the GUI
1. Start XML recording using the key sequence Ctrl-Alt-Q.


2. You should now see Record XML on the top navigation menu.

3. Click on Record XML and it should now change to Stop XML Recording.
4. Any configuration changes from now onwards will be recorded.

NOTE: The configuration done in the next few steps will be recorded. If you make a mistake, you can either stop the recording and restart, or continue the capture while correcting the changes. If you do the latter, you may not need to add -ModifyPresent to the PowerShell commands.

Step 2: Create Service Profile (SP) to configure UCS servers in your POD

1. From Cisco UCS Manager, click on the Servers icon in the left navigation menu.
2. Expand Servers > Service Profile Templates > root > Sub-Organizations.
3. Select your POD's Organization.
4. Right-click and select Create Service Profile From Template.
5. In the Create Service Profiles From Template wizard, configure a single Service Profile for your POD using the information from Table 10.
6. For the "Service Profile Template", select the service profile template for your POD under the correct organization.


7. Click OK.

Step 3: Stop XML recording and save file

1. Click on Stop XML Recording to stop the recording.
2. In the pop-up window, enter a file name for the XML log file. The log file has the XML requests generated by the configuration changes. The filename should have the following format: CL-POD<#>-ESXi-Host-1. Click OK.
3. In the Save As pop-up window, specify the directory to save the file in. Navigate to the XML-REQUESTS directory and double-click to open it. Click Save.
4. Verify the file is saved in the Z:\XML-REQUESTS folder. If the browser does not prompt you for the save location, the file is probably saved automatically in the Download folder; move it to the Z:\XML-REQUESTS folder.

Step 4: Use UCS PowerTool to Convert XML to PowerShell commands

1. To convert the XML requests into PowerShell commands, switch to the UCS PowerTool window and execute the following UCS cmdlet. Use the same file name for both the XMLRequests File and the PSConfiglets File (in this case: CL-POD<#>-ESXi-Host-1).

ConvertTo-UcsCmdlet -Xml -LiteralPath Z:\XML-REQUESTS\CL-POD<#>-ESXi-Host-1 > Z:\PS-CONFIGLETS\CL-POD<#>-ESXi-Host-1.txt

2. Navigate to the Z:\PS-CONFIGLETS folder in your POD's Scripts folder. Select the newly created PowerShell configlet, right-click, and select Edit with NotePad++ to open the file. Select View > Word Wrap from the menu if needed.
3. The output should be similar to the following.


Get-UcsOrg -Level root | Get-UcsOrg -Name "ORG-CL-POD17" -LimitScope | Get-UcsServiceProfile -Name "CL-POD17-SPT-ESXi" -LimitScope | Add-UcsServiceProfileFromTemplate -NewName @("CL-POD17-ESXi-Host-1") -DestinationOrg "org-root/org-ORG-CL-POD17"

Step 5: Modify PowerShell Configlet

1. Add comments, each preceded by '#', for script readability. For example…

# Deploying a Service Profile from Service Profile Template

2. Use the Write-Host command to send messages to the console during script execution. Add the following statements to the script.

Write-Host "Create Service Profiles from Service Profile Template"

3. The resulting output should be as follows.

# Deploying a Service Profile from Service Profile Template

Write-Host "Create Service Profiles from Service Profile Template"

Get-UcsOrg -Level root | Get-UcsOrg -Name "ORG-CL-POD17" -LimitScope | Get-UcsServiceProfile -Name "CL-POD17-SPT-ESXi" -LimitScope | Add-UcsServiceProfileFromTemplate -NewName @("CL-POD17-ESXi-Host-1") -DestinationOrg "org-root/org-ORG-CL-POD17"

4. Save the file but keep it open.

Step 6: Integrate PowerShell configlet into main script
1. In NotePad++, open the previously created UCS configuration script from:

Z:\PS-SCRIPTS\ConfigureUCS.ps1

2. Copy the contents of the configlet file CL-POD<#>-ESXi-Host-1.txt into the above script. Paste the contents right before the Disconnect from UCSM section of the script.
3. Save the files. Close the configlet from the PS-CONFIGLETS directory.

Step 7 (Optional): Run the PowerShell script to create the Service Profile

In this step, you can delete the previously created VLAN and Service Profile and run the consolidated script (created in the last step) to recreate these entities.
1. In the UCSM GUI, click the LAN icon on the left, navigate to LAN > LAN Cloud > VLANs, and delete the VLAN for your POD (Table 4).
2. Click on the Servers icon in the left navigation menu.
3. Expand Servers > Service Profiles > root > Sub-Organizations.
4. Select your POD's Organization and delete the recently deployed Service Profile.
5. Run the following script from the PowerShell window:

Z:\PS-SCRIPTS\ConfigureUCS.ps1

6. Verify the VLAN and service profile creation from the UCSM GUI.
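You can also verify from UCS PowerTool instead of the GUI; a sketch assuming an active Connect-Ucs session and the POD17 names used in the examples above:

# Sketch: confirm the VLAN and Service Profile exist (empty output means not found)
Get-UcsVlan -Name "CL-POD17-VLAN"
Get-UcsOrg -Name "ORG-CL-POD17" | Get-UcsServiceProfile -Name "CL-POD17-ESXi-Host-1" -LimitScope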


Task 2: Deploy a UCS server using the Service Profile

For this lab, the service profile template has a server pool defined that contains the physical server assigned to your POD. As a result, instantiating a service profile from the template in the previous step results in the service profile being automatically associated with your POD's server. If no pool were defined, additional steps would be required to associate the configuration with the physical hardware.
1. From Cisco UCS Manager, click on the Servers icon in the left navigation menu.
2. Expand Servers > Service Profiles > root > Sub-Organizations.
3. Select your POD's Organization.
4. Select the previously configured Service Profile.
5. Monitor the Service Profile status. It might take a few minutes for the UCS blade to become fully associated with the Service Profile due to a firmware upgrade. You can keep an eye on the Overall Status window:

6. You can also keep an eye on the FSM by clicking ">>" on the right and selecting FSM to see detailed information about the configuration progress.
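The association progress can also be polled from UCS PowerTool; a sketch assuming an active session and the POD17 naming from the earlier examples:

# Sketch: check assignment/association state of the new Service Profile
Get-UcsOrg -Name "ORG-CL-POD17" | Get-UcsServiceProfile -Name "CL-POD17-ESXi-Host-1" -LimitScope | Select-Object Name, AssignState, AssocState, OperState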


Task 3: Load ESXi image and Power Up server

In this task, you will launch a KVM session to the server associated with the Service Profile deployed in the previous task and map the ESXi installation ISO as a CD/DVD.

Step 1: Launch KVM from UCS Manager and map ESXi ISO image

1. From Cisco UCS Manager, click on the Servers icon in the left navigation menu.
2. Expand Servers > Service Profiles > root > Sub-Organizations.
3. Select your POD's Organization.
4. Select the Service Profile name from the previous task and right-click to select KVM Console.
5. Accept all the warnings. The KVM will launch in a new tab in your browser – you may need to disable pop-up blocking for this site.
6. Select the Virtual Media icon on the top right and click Activate Virtual Devices.

7. Click the Virtual Media icon again and select CD/DVD.

8. Click Choose File and select the ESXi ISO: VMware-VMvisor-Installer-201704001-5310538.x86_64.iso from the VMware directory on your Jump-Server desktop.
9. Click Map Drive.

Step 2: Install ESXi image and verify storage access on the Server

In this step, you will reboot the server, verify storage access, and install ESXi.
1. Select the Server Actions icon from the top right and select Reset or Boot Server. Click OK to continue rebooting the server. Select Power Cycle in the Reset Server pop-up window. Click OK.


2. Observe the boot process to verify the server can access the Boot LUN. It takes about a minute and it does not stay on this screen for very long. You should see the storage system accessible over multiple paths using iSCSI.

3. The ESXi installer should load from the CD/DVD mapped in the previous step. It will automatically start booting the selected image.

4. Run the ESXi installer and let the system load; it might take a while (~10 min) before you see the welcome screen.


5. Press Enter on the Welcome screen to Continue.

6. Press F11 to accept the EULA and continue.
7. When the disk scan is complete, you should see the NetApp boot LUN (15GB) as an install option. Press Enter to Continue.


8. Select US Default keyboard. Press Enter to Continue.

9. Type (and re-type) the root password as shown in Table 12. Press Enter to Continue.
10. Press F11 to start the install process.


11. When the installation is complete (approx. 5-10 mins), the following window will appear on the console:

12. Click the Virtual Media icon on the top right again and select CD/DVD–Mapped.

13. Click UnMap Drive.
14. Press Enter to reboot the server. When the boot up is complete, the following screen should appear:

The blade now has ESXi fully installed and should be booting ESXi from the Boot LUN using iSCSI.


Setup Virtualization Layer

In this section of the lab, you will:
• Set up ESXi management networking
• Add the ESXi host to the pre-installed vCenter

Task 1: Configure ESXi Host

Step 1: Setup ESXi Management Settings

In this task, you will set up a static management IP address, DNS, and a host name for the ESXi server, and enable SSH and Shell access to the ESXi host.
1. On the KVM console, press F2 to Customize System and enter root as the username with the password set in the previous task. Press Enter.
2. On the System Customization screen, select Troubleshooting Options to enable Shell and SSH access to the ESXi server.
3. Press Enter to toggle Enable ESXi Shell and Enable SSH.

4. Press Escape to exit the screen.
5. On the System Customization screen, select Configure Management Network.


6. Select Network Adapters. On the Network Adapters screen, use arrow keys to select both vmnic0 and vmnic1 corresponding to MGMT vNICs (use space bar to toggle the selection).

7. Press Enter to continue.
8. On the Configure Management Network screen, select VLAN (optional).
9. Enter the IB-MGMT-VLAN ID from Table 5 of the Lab Access Guide (i.e. 12).

10. Press Enter to Continue.
11. On the Configure Management Network screen, select IPv4 Configuration.
12. Use the arrow keys to highlight Set static IPv4 address and network configuration and use the space bar to select the option.
13. Use Table 12 of the Lab Access Guide to assign the IP address and network mask. The Gateway IP address is 192.168.155.1 (from Table 1). Press Enter to continue.


14. On the Configure Management Network screen, select DNS Configuration.
15. Enter the DNS server from Table 1 and the Hostname from Table 12 of the Lab Access Guide. Leave the Alternate DNS Server blank. Press Enter to continue.

16. Press the ESC key to exit out of the Configure Management Network screen.
17. Press "Y" to apply the changes.
18. The system now has the new IP address set.

Task 2: Add ESXi Host to vCenter

In this task, you will add the recently configured ESXi host to vCenter to finalize the virtualization setup.

Note: All the groups share the same vCenter; therefore, use caution and add the ESXi host only to the Data Center and cluster for your POD.

Step 1: Add Host to vCenter

In this step, you will add the new ESXi host to a pre-deployed VMware vCenter.
1. Collect the vCenter IP address and login information from Table 1 of the Lab Access Guide.
2. Using a web browser, access the vCenter using its IP address (192.168.155.81) and ignore/accept all the warnings (the vCenter should be bookmarked in Chrome).
3. Launch the vSphere Web Client (Flash).
4. If you get the "Get ADOBE FLASH PLAYER" icon, click the icon and click Allow when Chrome asks for permission to run Flash.
5. Use the credentials from Table 1 to log into vCenter.
6. Navigate to Hosts and Clusters and expand the Data Center for your POD:


7. Right-click the Cluster and select Add Host…
8. In the Add Host wizard, configure the following in the appropriate setup screens:
• Hostname/IP of the host (from Table 12).
• Root account and password (configured using the KVM console).
• Ignore the security alert.
• Select Evaluation license.
• Select the default for all options not listed here.
9. Review the settings and click Finish to add the host.
10. When the ESXi host addition is complete, click on the host in the left column and Summary in the main window.
11. Click Suppress Warning for SSH:

The ESXi host addition to vCenter is now complete.

Step 2: vSphere vSwitch Setup for NFS

This design uses a dedicated vSwitch for vMotion and NFS with two uplink vNICs. Traffic from these vNICs will use different paths across the fabric to provide redundancy and load balancing.

In this step, you will configure the following.
• Create a vSwitch.
• Add a VMkernel port for NFS on the vSwitch.
• Modify the MTU of the VMkernel port used for NFS to 9000.

Complete the following steps to configure the vSphere vSwitch for NFS.
1. From the VMware vSphere web client, go to Hosts and Clusters.
2. Navigate to your POD's Datacenter and the cluster where the host resides.
3. Select the host.


4. On the right window pane, click the Configure tab.
5. Navigate to Networking > Virtual switches.
6. You will see two pre-configured vSwitches: one for the management network and a second for the iSCSI network.

Note: In a real customer deployment, two iSCSI vSwitches are configured for two paths. In this lab, we will stick to a single path, and no additional vSwitches will be set up.

7. Click Add host networking and select VMkernel Network Adapter.

8. In the Add Networking configuration wizard, specify the following.
• Select the target device: New standard switch.
• Add VMNIC2 and VMNIC3 as Active Adapters.
• Specify the Network label as NFS-PG and the NFS VLAN ID (33xx) for your POD from Table 16. Click Next.
• For Connection Settings > IPv4 settings, specify the IP address (10.30.[x].11/24) from Table 16, as shown below. Click Next.
• Review the settings. Click Finish to complete.

9. Edit the newly created vSwitch1 and set its MTU to 9000.
10. Click on the NFS VMkernel port and click Edit. Go to NIC settings and change the MTU to 9000.
11. Click OK.
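For reference only (the lab uses the vSphere Web Client), the equivalent vSwitch, port group, and VMkernel adapter could be scripted with VMware PowerCLI; a hedged sketch that assumes PowerCLI is installed, a Connect-VIServer session to the vCenter from Table 1, and the POD1 values from Tables 12 and 16:

# Sketch: build the NFS vSwitch with PowerCLI (POD1 values from Tables 12 and 16)
Connect-VIServer -Server 192.168.155.81
$esxHost = Get-VMHost -Name "192.168.155.161"
$vSwitch = New-VirtualSwitch -VMHost $esxHost -Name "vSwitch1" -Nic "vmnic2","vmnic3" -Mtu 9000
# Create the NFS VMkernel adapter (this also creates the NFS-PG port group), then tag the VLAN
New-VMHostNetworkAdapter -VMHost $esxHost -VirtualSwitch $vSwitch -PortGroup "NFS-PG" -IP 10.30.1.11 -SubnetMask 255.255.255.0 -Mtu 9000
Get-VirtualPortGroup -VMHost $esxHost -Name "NFS-PG" | Set-VirtualPortGroup -VLanId 3301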


Step 3: Mount NFS Datastores

To mount NFS datastores to the ESXi host, use the following steps.
1. From the VMware vSphere web client, go to Hosts and Clusters.
2. Navigate to your POD's Datacenter and the cluster where the host resides.
3. Select the host.
4. On the right window pane, click the Configure tab.
5. Navigate to Storage > Datastores. Click on the first icon to Add Datastore.
6. In the New Datastore pop-up window, select the NFS radio button. Click Next.
7. On the Select NFS Version screen, select NFS 3. Click Next.
8. Enter the Datastore name, Folder (Path), and NFS Server IP address from Table 13.

9. Click Next, review changes. Click Finish.

Step 4: Deploy a VM on the VMware Setup

To deploy a VM from a template on the newly mounted NFS datastore, use the following steps.
1. From the VMware vSphere web client, go to Hosts and Clusters.
2. Navigate to your POD's Datacenter and the cluster where the host resides.


3. Right-click the host and select Deploy OVF Template.
4. Select Local file and click Browse…
5. Browse to Desktop > VMware > VM Template – Win2k8-CL, multi-select all three files, and click Open.
6. Name the VM POD-W2k8 and select your POD's datacenter. Click Next.
7. Verify the correct cluster and host are selected for your POD and click Next.
8. Note down the Local Administrator Password under Description and click Next.
9. Make sure the virtual disk format is Thin Provision and the datastore selected is the recently mounted NFS datastore CL-DS-1.
10. Keep the Destination Network as VM Network and click Next.
11. Review the settings and click Finish. It will take a while to deploy the VM.

At the end of this step, you have successfully deployed a VM on the VMware infrastructure. We have not set up all of the networking configuration, such as the vMotion port group, but those configuration steps are similar to what has been covered in this lab so far. The VM deployment requires a considerable amount of time since a large HDD is being copied from the jump-server to the NFS datastore.


Appendix A: Lab Access Information

Table 1 Lab Subnet Details

Service                          IP Address        Credentials
Common Repo. Shares (Z: drive)   192.168.155.150   Username: \ftpuser  Password: CL19Sandiego
Gateway                          192.168.155.1     N/A
DNS                              192.168.155.14    N/A
NTP                              192.168.155.254   N/A
UCS Manager                      192.168.155.20    See Table 3
Nexus9000-1                      192.168.155.3     Username: cladmin  Password: CL19Sandiego
Nexus9000-2                      192.168.155.4     Username: cladmin  Password: CL19Sandiego
VMware vCenter                   192.168.155.81    Username: [email protected]  Password: CL19Sandiego!

Table 2 Jump-Server Login Info and credentials

POD Name    Jump-Server IP Address    Username    Password

CL-POD1 192.168.155.61 Administrator CL19Sandiego1 CL-POD2 192.168.155.62 Administrator CL19Sandiego2 CL-POD3 192.168.155.63 Administrator CL19Sandiego3 CL-POD4 192.168.155.64 Administrator CL19Sandiego4 CL-POD5 192.168.155.65 Administrator CL19Sandiego5 CL-POD6 192.168.155.66 Administrator CL19Sandiego6 CL-POD7 192.168.155.67 Administrator CL19Sandiego7 CL-POD8 192.168.155.68 Administrator CL19Sandiego8 CL-POD9 192.168.155.69 Administrator CL19Sandiego9 CL-POD10 192.168.155.70 Administrator CL19Sandiego10 CL-POD11 192.168.155.71 Administrator CL19Sandiego11 CL-POD12 192.168.155.72 Administrator CL19Sandiego12 CL-POD13 192.168.155.73 Administrator CL19Sandiego13 CL-POD14 192.168.155.74 Administrator CL19Sandiego14 CL-POD15 192.168.155.75 Administrator CL19Sandiego15 CL-POD16 192.168.155.76 Administrator CL19Sandiego16


Table 3 Cisco UCS Login Info and Credentials

POD Name Username Password

CL-POD1 POD1-ADMIN CL19Sandiego1 CL-POD2 POD2-ADMIN CL19Sandiego2 CL-POD3 POD3-ADMIN CL19Sandiego3 CL-POD4 POD4-ADMIN CL19Sandiego4 CL-POD5 POD5-ADMIN CL19Sandiego5 CL-POD6 POD6-ADMIN CL19Sandiego6 CL-POD7 POD7-ADMIN CL19Sandiego7 CL-POD8 POD8-ADMIN CL19Sandiego8 CL-POD9 POD9-ADMIN CL19Sandiego9 CL-POD10 POD10-ADMIN CL19Sandiego10 CL-POD11 POD11-ADMIN CL19Sandiego11 CL-POD12 POD12-ADMIN CL19Sandiego12 CL-POD13 POD13-ADMIN CL19Sandiego13

CL-POD14 POD14-ADMIN CL19Sandiego14 CL-POD15 POD15-ADMIN CL19Sandiego15 CL-POD16 POD16-ADMIN CL19Sandiego16

Table 4 VLAN Allocation

POD Name VLAN Name VLAN ID

CL-POD1 CL-POD1-VLAN 901

CL-POD2 CL-POD2-VLAN 902 CL-POD3 CL-POD3-VLAN 903 CL-POD4 CL-POD4-VLAN 904

CL-POD5 CL-POD5-VLAN 905 CL-POD6 CL-POD6-VLAN 906 CL-POD7 CL-POD7-VLAN 907 CL-POD8 CL-POD8-VLAN 908 CL-POD9 CL-POD9-VLAN 909 CL-POD10 CL-POD10-VLAN 910 CL-POD11 CL-POD11-VLAN 911 CL-POD12 CL-POD12-VLAN 912 CL-POD13 CL-POD13-VLAN 913


CL-POD14 CL-POD14-VLAN 914 CL-POD15 CL-POD15-VLAN 915 CL-POD16 CL-POD16-VLAN 916

Table 5 VLAN Allocation for the Deploying Converged Infrastructure Lab

VLAN Names           VLANs   Subnet             Create VLAN Parameters

NATIVE-VLAN          2       N/A                Common: Do NOT configure

IB-MGMT-VLAN         12      192.168.155.0/24   Common: Do NOT configure

CL-POD[xx]-VLAN      9[xx]   N/A                Previously configured: Do NOT configure; xx=POD# (01-16)

CL-POD[xx]-iSCSI-A   31[xx]  10.10.[xx].0/24    xx=POD# (01-16)

CL-POD[xx]-iSCSI-B   32[xx]  10.20.[xx].0/24    xx=POD# (01-16)

CL-POD[xx]-NFS       33[xx]  10.30.[xx].0/24    xx=POD# (01-16)

CL-POD[xx]-vMotion   34[xx]  10.40.[xx].0/24    xx=POD# (01-16)

Table 6 MAC-Address Pools for UCS

POD Name    MAC Pool Name    MAC Pool Range

CL-POD1 CL-POD1-MAC-Pool-A 00:25:B5:12:01:00-00:25:B5:12:01:0F

CL-POD1-MAC-Pool-B 00:25:B5:12:21:00-00:25:B5:12:21:0F

CL-POD2 CL-POD2-MAC-Pool-A 00:25:B5:12:02:00-00:25:B5:12:02:0F

CL-POD2-MAC-Pool-B 00:25:B5:12:22:00-00:25:B5:12:22:0F

CL-POD3 CL-POD3-MAC-Pool-A 00:25:B5:12:03:00-00:25:B5:12:03:0F

CL-POD3-MAC-Pool-B 00:25:B5:12:23:00-00:25:B5:12:23:0F

CL-POD4 CL-POD4-MAC-Pool-A 00:25:B5:12:04:00-00:25:B5:12:04:0F

CL-POD4-MAC-Pool-B 00:25:B5:12:24:00-00:25:B5:12:24:0F

CL-POD5 CL-POD5-MAC-Pool-A 00:25:B5:12:05:00-00:25:B5:12:05:0F

CL-POD5-MAC-Pool-B 00:25:B5:12:25:00-00:25:B5:12:25:0F

CL-POD6 CL-POD6-MAC-Pool-A 00:25:B5:12:06:00-00:25:B5:12:06:0F

CL-POD6-MAC-Pool-B 00:25:B5:12:26:00-00:25:B5:12:26:0F

CL-POD7 CL-POD7-MAC-Pool-A 00:25:B5:12:07:00-00:25:B5:12:07:0F

CL-POD7-MAC-Pool-B 00:25:B5:12:27:00-00:25:B5:12:27:0F

CL-POD8 CL-POD8-MAC-Pool-A 00:25:B5:12:08:00-00:25:B5:12:08:0F

CL-POD8-MAC-Pool-B 00:25:B5:12:28:00-00:25:B5:12:28:0F


CL-POD9 CL-POD9-MAC-Pool-A 00:25:B5:12:09:00-00:25:B5:12:09:0F

CL-POD9-MAC-Pool-B 00:25:B5:12:29:00-00:25:B5:12:29:0F

CL-POD10 CL-POD10-MAC-Pool-A 00:25:B5:12:10:00-00:25:B5:12:10:0F

CL-POD10-MAC-Pool-B 00:25:B5:12:30:00-00:25:B5:12:30:0F

CL-POD11 CL-POD11-MAC-Pool-A 00:25:B5:12:11:00-00:25:B5:12:11:0F

CL-POD11-MAC-Pool-B 00:25:B5:12:31:00-00:25:B5:12:31:0F

CL-POD12 CL-POD12-MAC-Pool-A 00:25:B5:12:12:00-00:25:B5:12:12:0F

CL-POD12-MAC-Pool-B 00:25:B5:12:32:00-00:25:B5:12:32:0F

CL-POD13 CL-POD13-MAC-Pool-A 00:25:B5:12:13:00-00:25:B5:12:13:0F

CL-POD13-MAC-Pool-B 00:25:B5:12:33:00-00:25:B5:12:33:0F

CL-POD14 CL-POD14-MAC-Pool-A 00:25:B5:12:14:00-00:25:B5:12:14:0F

CL-POD14-MAC-Pool-B 00:25:B5:12:34:00-00:25:B5:12:34:0F

CL-POD15 CL-POD15-MAC-Pool-A 00:25:B5:12:15:00-00:25:B5:12:15:0F

CL-POD15-MAC-Pool-B 00:25:B5:12:35:00-00:25:B5:12:35:0F

CL-POD16 CL-POD16-MAC-Pool-A 00:25:B5:12:16:00-00:25:B5:12:16:0F

CL-POD16-MAC-Pool-B 00:25:B5:12:36:00-00:25:B5:12:36:0F

Table 7 KVM IP Address Pools

POD Name    IP Pool Name for KVM    IP Address Pool for KVM

CL-POD1 CL-POD1-KVM-POOL 192.168.155.181-181

CL-POD2 CL-POD2-KVM-POOL 192.168.155.182-182

CL-POD3 CL-POD3-KVM-POOL 192.168.155.183-183

CL-POD4 CL-POD4-KVM-POOL 192.168.155.184-184

CL-POD5 CL-POD5-KVM-POOL 192.168.155.185-185

CL-POD6 CL-POD6-KVM-POOL 192.168.155.186-186

CL-POD7 CL-POD7-KVM-POOL 192.168.155.187-187

CL-POD8 CL-POD8-KVM-POOL 192.168.155.188-188

CL-POD9 CL-POD9-KVM-POOL 192.168.155.189-189

CL-POD10 CL-POD10-KVM-POOL 192.168.155.190-190

CL-POD11 CL-POD11-KVM-POOL 192.168.155.211-211

CL-POD12 CL-POD12-KVM-POOL 192.168.155.212-212

CL-POD13 CL-POD13-KVM-POOL 192.168.155.213-213

CL-POD14 CL-POD14-KVM-POOL 192.168.155.214-214

CL-POD15 CL-POD15-KVM-POOL 192.168.155.215-215


CL-POD16 CL-POD16-KVM-POOL 192.168.155.216-216

Table 8 iSCSI IP Address Pools for the main Compute Setup Lab

POD Name    IP Pool Names for iSCSI Fabric A and Fabric B    IP Address Pool for iSCSI-A    IP Address Pool for iSCSI-B

CL-POD1 CL-POD1-iSCSI-A | CL-POD1-iSCSI-B 10.10.1.[11-20]/24 10.20.1.[11-20]/24

CL-POD2 CL-POD2-iSCSI-A | CL-POD2-iSCSI-B 10.10.2.[11-20]/24 10.20.2.[11-20]/24

CL-POD3 CL-POD3-iSCSI-A | CL-POD3-iSCSI-B 10.10.3.[11-20]/24 10.20.3.[11-20]/24

CL-POD4 CL-POD4-iSCSI-A | CL-POD4-iSCSI-B 10.10.4.[11-20]/24 10.20.4.[11-20]/24

CL-POD5 CL-POD5-iSCSI-A | CL-POD5-iSCSI-B 10.10.5.[11-20]/24 10.20.5.[11-20]/24

CL-POD6 CL-POD6-iSCSI-A | CL-POD6-iSCSI-B 10.10.6.[11-20]/24 10.20.6.[11-20]/24

CL-POD7 CL-POD7-iSCSI-A | CL-POD7-iSCSI-B 10.10.7.[11-20]/24 10.20.7.[11-20]/24

CL-POD8 CL-POD8-iSCSI-A | CL-POD8-iSCSI-B 10.10.8.[11-20]/24 10.20.8.[11-20]/24

CL-POD9 CL-POD9-iSCSI-A | CL-POD9-iSCSI-B 10.10.9.[11-20]/24 10.20.9.[11-20]/24

CL-POD10 CL-POD10-iSCSI-A | CL-POD10-iSCSI-B 10.10.10.[11-20]/24 10.20.10.[11-20]/24

CL-POD11 CL-POD11-iSCSI-A | CL-POD11-iSCSI-B 10.10.11.[11-20]/24 10.20.11.[11-20]/24

CL-POD12 CL-POD12-iSCSI-A | CL-POD12-iSCSI-B 10.10.12.[11-20]/24 10.20.12.[11-20]/24

CL-POD13 CL-POD13-iSCSI-A | CL-POD13-iSCSI-B 10.10.13.[11-20]/24 10.20.13.[11-20]/24

CL-POD14 CL-POD14-iSCSI-A | CL-POD14-iSCSI-B 10.10.14.[11-20]/24 10.20.14.[11-20]/24

CL-POD15 CL-POD15-iSCSI-A | CL-POD15-iSCSI-B 10.10.15.[11-20]/24 10.20.15.[11-20]/24

CL-POD16 CL-POD16-iSCSI-A | CL-POD16-iSCSI-B 10.10.16.[11-20]/24 10.20.16.[11-20]/24

Table 9 IQN Suffix Pool – UCS Server Side

POD Name IQN Suffix Pool Name IQN Prefix IQN Suffix

CL-POD[x]    CL-POD[x]-IQN-POOL    iqn.2010-11.com-flexpod    cl-pod[x]    (x=POD# 01-16)

The IQN pool should start at 1 and should only contain one entry (Size 1)


Table 10 Service Profile Configuration

POD Name    Service Profile Template Name    Service Profile Name Suffix    Name Suffix Start    Number of Instances

CL-POD1 CL-POD1-SPT-ESXi CL-POD1-ESXi-Host- 1 1

CL-POD2 CL-POD2-SPT-ESXi CL-POD2-ESXi-Host- 1 1

CL-POD3 CL-POD3-SPT-ESXi CL-POD3-ESXi-Host- 1 1

CL-POD4 CL-POD4-SPT-ESXi CL-POD4-ESXi-Host- 1 1

CL-POD5 CL-POD5-SPT-ESXi CL-POD5-ESXi-Host- 1 1

CL-POD6 CL-POD6-SPT-ESXi CL-POD6-ESXi-Host- 1 1

CL-POD7 CL-POD7-SPT-ESXi CL-POD7-ESXi-Host- 1 1

CL-POD8 CL-POD8-SPT-ESXi CL-POD8-ESXi-Host- 1 1

CL-POD9 CL-POD9-SPT-ESXi CL-POD9-ESXi-Host- 1 1

CL-POD10 CL-POD10-SPT-ESXi CL-POD10-ESXi-Host- 1 1

CL-POD11 CL-POD11-SPT-ESXi CL-POD11-ESXi-Host- 1 1

CL-POD12 CL-POD12-SPT-ESXi CL-POD12-ESXi-Host- 1 1

CL-POD13 CL-POD13-SPT-ESXi CL-POD13-ESXi-Host- 1 1

CL-POD14 CL-POD14-SPT-ESXi CL-POD14-ESXi-Host- 1 1

CL-POD15 CL-POD15-SPT-ESXi CL-POD15-ESXi-Host- 1 1

CL-POD16 CL-POD16-SPT-ESXi CL-POD16-ESXi-Host- 1 1

Table 11 Storage System IQN Information

POD Name Storage IQN

CL-POD1 iqn.1992-08.com.netapp:sn.7ff30c0e547711e7881100a09855df56:vs.25

CL-POD2 iqn.1992-08.com.netapp:sn.5c03d41e547711e7881100a09855df56:vs.24

CL-POD3 iqn.1992-08.com.netapp:sn.1b1a621c547611e7881100a09855df56:vs.23

CL-POD4 iqn.1992-08.com.netapp:sn.faac8316547511e7881100a09855df56:vs.22

CL-POD5 iqn.1992-08.com.netapp:sn.c522ecff547511e7881100a09855df56:vs.21

CL-POD6 iqn.1992-08.com.netapp:sn.9fda8a3d547411e7881100a09855df56:vs.20

CL-POD7 iqn.1992-08.com.netapp:sn.7d630e53547411e7881100a09855df56:vs.19

CL-POD8 iqn.1992-08.com.netapp:sn.3971a6c5547411e7881100a09855df56:vs.18

CL-POD9 iqn.1992-08.com.netapp:sn.17227865546f11e7881100a09855df56:vs.16

CL-POD10 iqn.1992-08.com.netapp:sn.28b105ee546f11e7881100a09855df56:vs.17

CL-POD11 iqn.1992-08.com.netapp:sn.3dca9f8d69c511e8813d00a09855df56:vs.35


CL-POD12 iqn.1992-08.com.netapp:sn.705a9f8269cc11e8813d00a09855df56:vs.36

CL-POD13 iqn.1992-08.com.netapp:sn.c4862be769cc11e8813d00a09855df56:vs.37

CL-POD14 iqn.1992-08.com.netapp:sn.d2c00bd569cc11e8813d00a09855df56:vs.38

CL-POD15 iqn.1992-08.com.netapp:sn.e065a03269cc11e8813d00a09855df56:vs.39

CL-POD16 iqn.1992-08.com.netapp:sn.ef40dda769cc11e8813d00a09855df56:vs.40

Table 12 ESXi Host - IP Address/Hostname/Credentials

POD Name    ESXi Management IP Address    ESXi Server Host Name    User    Password

CL-POD1 192.168.155.161/24 CL-POD1-ESXi-1 root CL19Sandiego1 CL-POD2 192.168.155.162/24 CL-POD2-ESXi-1 root CL19Sandiego2 CL-POD3 192.168.155.163/24 CL-POD3-ESXi-1 root CL19Sandiego3 CL-POD4 192.168.155.164/24 CL-POD4-ESXi-1 root CL19Sandiego4 CL-POD5 192.168.155.165/24 CL-POD5-ESXi-1 root CL19Sandiego5 CL-POD6 192.168.155.166/24 CL-POD6-ESXi-1 root CL19Sandiego6 CL-POD7 192.168.155.167/24 CL-POD7-ESXi-1 root CL19Sandiego7 CL-POD8 192.168.155.168/24 CL-POD8-ESXi-1 root CL19Sandiego8

CL-POD9 192.168.155.169/24 CL-POD9-ESXi-1 root CL19Sandiego9 CL-POD10 192.168.155.170/24 CL-POD10-ESXi-1 root CL19Sandiego10 CL-POD11 192.168.155.171/24 CL-POD11-ESXi-1 root CL19Sandiego11

CL-POD12 192.168.155.172/24 CL-POD12-ESXi-1 root CL19Sandiego12 CL-POD13 192.168.155.173/24 CL-POD13-ESXi-1 root CL19Sandiego13

CL-POD14 192.168.155.174/24 CL-POD14-ESXi-1 root CL19Sandiego14 CL-POD15 192.168.155.175/24 CL-POD15-ESXi-1 root CL19Sandiego15 CL-POD16 192.168.155.176/24 CL-POD16-ESXi-1 root CL19Sandiego16

Table 13 Storage System NFS Information

Datastore Name | Datastore Path | NFS Server IP | Size

CL_DS_1 | /CL_DS_1 | 10.30.[x].2 | 500GB

Table 14 VMware vCenter – IP Address/Hostname/Credentials

vCenter IP Address | Username | Password

192.168.155.81/24 | [email protected] | CL18Sandiego!


Table 15 vCenter Configuration

POD Name | vCenter Datacenter Name | vCenter Cluster Name | VDS Name

CL-POD[x] | CL-POD[x] | POD[x]-Cluster | CL-POD[x]-vDS (x = POD# 01-16)

Table 16 ESXi Host – NFS and vMotion VMkernel IP Addresses

POD Name | NFS VLAN | NFS IP Address | vMotion VLAN | vMotion IP Address

CL-POD1 | 3301 | 10.30.1.11 | 3401 | 10.40.1.11
CL-POD2 | 3302 | 10.30.2.11 | 3402 | 10.40.2.11
CL-POD3 | 3303 | 10.30.3.11 | 3403 | 10.40.3.11
CL-POD4 | 3304 | 10.30.4.11 | 3404 | 10.40.4.11
CL-POD5 | 3305 | 10.30.5.11 | 3405 | 10.40.5.11
CL-POD6 | 3306 | 10.30.6.11 | 3406 | 10.40.6.11
CL-POD7 | 3307 | 10.30.7.11 | 3407 | 10.40.7.11
CL-POD8 | 3308 | 10.30.8.11 | 3408 | 10.40.8.11
CL-POD9 | 3309 | 10.30.9.11 | 3409 | 10.40.9.11
CL-POD10 | 3310 | 10.30.10.11 | 3410 | 10.40.10.11
CL-POD11 | 3311 | 10.30.11.11 | 3411 | 10.40.11.11
CL-POD12 | 3312 | 10.30.12.11 | 3412 | 10.40.12.11
CL-POD13 | 3313 | 10.30.13.11 | 3413 | 10.40.13.11
CL-POD14 | 3314 | 10.30.14.11 | 3414 | 10.40.14.11
CL-POD15 | 3315 | 10.30.15.11 | 3415 | 10.40.15.11
CL-POD16 | 3316 | 10.30.16.11 | 3416 | 10.40.16.11


Appendix B: Converged Infrastructure Lab – Step by Step Configuration

Setup Compute and Storage Access

Configure Cisco UCS Server and Storage Access

In this lab, you will create a service profile template that includes all of the configuration required to deploy a Cisco UCS server as shown in the figure below:

The configuration templates, once created, can be used to quickly configure and deploy new Cisco UCS servers. The deployment of Cisco UCS servers is done through Cisco UCS Manager running on the Cisco UCS Fabric Interconnects (FI). A high-level workflow for creating a template configuration in a Cisco UCS Manager environment to deploy new servers is shown in the figure below.


Task 1: Review Base Setup

This task involves the initial setup of the Cisco UCS Fabric Interconnects and configuration common to all servers (uplinks, chassis discovery, etc.) in a given Cisco UCS domain. A single Cisco UCS domain consists of a pair of Cisco FIs with embedded Cisco UCS Manager that can manage up to 160 servers with unified access for management, LAN and storage traffic through the fabric. The figure below shows the workflow for completing a base setup of a Cisco UCS domain.

The base setup above is already completed for your lab UCS domain since this involves elements that are shared across multiple user PODs. A summary of the configuration steps is provided here so you have a complete view of the setup required to deploy a Cisco UCS based converged infrastructure.

Note: You can refer to any Converged Infrastructure CVD in the Design Zone for the detailed configuration steps: http://www.cisco.com/c/en/us/solutions/enterprise/design-zone-data-centers/index.html


Task 2: LAN Configuration

The LAN configuration workflow for a Cisco UCS server is shown in the figure below. The completed LAN configuration will be used to create the Service Profile Template that will serve as a template for quickly configuring and deploying new Cisco UCS servers.

In this task, you will follow the workflow steps above to complete all the LAN-specific configuration for a Cisco UCS server. The configuration can be completed using the Cisco UCS GUI or by using PowerShell commands. In earlier sections of this lab, you learned how to configure UCS using the GUI while capturing the equivalent PowerShell commands. In the interest of time, this part of the lab will be completed using the PowerShell commands provided in the lab guide.

Note: The PowerShell commands provided in this guide were captured by the lab instructors using XML recording of the UCSM GUI configuration and then converting the XML to PowerShell using the UCS PowerTool.

You can also record the associated XML requests yourself and use them to automate the configuration via a PowerShell script.

Completing this task involves the following high-level steps:
1. Login to the Cisco UCSM GUI to verify the configuration while using PowerShell commands.
2. Complete the LAN configuration as outlined in the above configuration workflow.
3. (Optional) Integrate PS configlets into the main UCS configuration script using Notepad++.

Step 0: Login to UCS Manager using Web Browser and PowerShell Tool

1. Open a web browser.


2. Navigate to the IP address of Cisco UCSM.
3. Login using the credentials for your POD (Table 3).
4. Launch the UCS Manager PowerTool using the icon on the desktop of the Jump server.
5. Issue the following command to connect to UCS:

Connect-Ucs 192.168.155.20

6. Provide the username and password from Table 3.

Note: If you make mistakes while issuing PowerShell commands, you can delete the configuration from the GUI and re-run the PowerShell commands. In most cases, you should also be able to add "-ModifyPresent" to the command and re-run it to modify the existing configuration.
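For example (a minimal sketch; the VLAN ID and name below are illustrative), re-running a command with -ModifyPresent is idempotent — it creates the object if missing and updates it if it already exists:

# Safe to re-run: creates the VLAN if missing, updates it if it already exists
Get-UcsLanCloud | Add-UcsVlan -ModifyPresent -Id 3301 -Name "CL-POD-NFS" -DefaultNet "no" -Sharing "none"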

Step 1: Create VLANs

In this step, you will create multiple VLANs for carrying infrastructure and application traffic. These VLANs provide connectivity through the UCS fabric, i.e. from UCS Blade Server → FEX → FI → Nexus switch.

1. Find the VLAN configuration info for your POD from Table 5 in the Lab Access and Configuration Details document.
2. From Cisco UCS Manager, click on the LAN icon in the left navigation menu.
3. Navigate to LAN > LAN Cloud > VLANs and use the GUI to verify the addition of the VLANs when you execute the PowerShell commands.
4. The iSCSI-A VLANs will be visible under LAN > LAN Cloud > Fabric A > VLANs and the iSCSI-B VLANs under LAN > LAN Cloud > Fabric B > VLANs. The vMotion and NFS VLANs will appear under LAN > LAN Cloud > VLANs.

Note: The screen capture below is provided as a reference for you to match the GUI configuration information with the PowerShell command.

Get-UcsFiLanCloud -Id "A" | Add-UcsVlan -CompressionType "included" -DefaultNet "no" -Id <31xx> -McastPolicyName "" -Name "CL-POD-iSCSI-A" -PolicyOwner "local" -PubNwName "" -Sharing "none"


Get-UcsFiLanCloud -Id "B" | Add-UcsVlan -CompressionType "included" -DefaultNet "no" -Id <32xx> -McastPolicyName "" -Name "CL-POD-iSCSI-B" -PolicyOwner "local" -PubNwName "" -Sharing "none"

Get-UcsLanCloud | Add-UcsVlan -CompressionType "included" -DefaultNet "no" -Id <33xx> -McastPolicyName "" -Name "CL-POD-NFS" -PolicyOwner "local" -PubNwName "" -Sharing "none"

Get-UcsLanCloud | Add-UcsVlan -CompressionType "included" -DefaultNet "no" -Id <34xx> -McastPolicyName "" -Name "CL-POD-vMotion" -PolicyOwner "local" -PubNwName "" -Sharing "none"

Note: The iSCSI-A and iSCSI-B VLANs are only defined on Fabric A and Fabric B respectively; the Get-UcsFiLanCloud command combined with the [-Id "A"] or [-Id "B"] flag is used for this configuration. The vMotion and NFS VLANs are created on both fabrics (common/global) and therefore use the Get-UcsLanCloud command.
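To confirm where each VLAN was created, a read-only verification sketch (safe to run at any time):

# Fabric-specific iSCSI VLANs (repeat with -Id "B" for Fabric B)
Get-UcsFiLanCloud -Id "A" | Get-UcsVlan | Select-Object Name, Id
# Global (dual-fabric) VLANs such as NFS and vMotion
Get-UcsLanCloud | Get-UcsVlan | Select-Object Name, Id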

Step 2: Create MAC Address Pools

In this step, you will create MAC address pools for your UCSM domain. These addresses will be assigned to the vNIC interfaces for your blade (service profile – to be configured later).
1. Find the MAC address ranges for your POD from Table 6 in the Lab Access and Config detail document and add the values to the commands below.
2. From Cisco UCS Manager, click on the LAN icon in the left navigation menu.
3. Navigate to LAN > Pools > root > Sub-Organizations.
4. Select your POD's Sub-Organization.
5. Select the Pools Tab > MAC Pools and use the GUI to verify the addition of the MAC Pools when you execute the PowerShell commands.
6. Verify the MAC Pool configuration.

Start-UcsTransaction

$mo = Get-UcsOrg -Level root | Get-UcsOrg -Name "ORG-CL-POD" -LimitScope | Add-UcsMacPool -AssignmentOrder "sequential" -Name "CL-POD-MAC-Pool-A"

$mo_1 = $mo | Add-UcsMacMemberBlock -From "00:25:B5:12:--:--" -To "00:25:B5:12:--:--"

Complete-UcsTransaction

Start-UcsTransaction

$mo = Get-UcsOrg -Level root | Get-UcsOrg -Name "ORG-CL-POD" -LimitScope | Add-UcsMacPool -AssignmentOrder "sequential" -Name "CL-POD-MAC-Pool-B"

$mo_1 = $mo | Add-UcsMacMemberBlock -From "00:25:B5:12:--:--" -To "00:25:B5:12:--:--"

Complete-UcsTransaction
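To confirm both pools were created, a read-only sketch that follows the same Get-UcsOrg pattern as the commands above:

# List the MAC pools in your POD's organization and their utilization
Get-UcsOrg -Level root | Get-UcsOrg -Name "ORG-CL-POD" -LimitScope | Get-UcsMacPool | Select-Object Name, Size, Assigned
# Show the address ranges defined in each pool
Get-UcsOrg -Level root | Get-UcsOrg -Name "ORG-CL-POD" -LimitScope | Get-UcsMacPool | Get-UcsMacMemberBlock | Select-Object From, To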


Step 3: Create IP Pools for KVM and iSCSI

In this step, you will create IP pools for your UCSM domain. The KVM IP address pool is used to assign a routable management IP address for accessing the KVM console of the UCS blade. Since each group is using a single blade, this pool will consist of a single IP address. Similarly, IP addresses from the iSCSI-A and iSCSI-B pools are assigned to the iSCSI vNICs. We only need a single IP address in the iSCSI pools (since we are using a single blade per group); however, we will define a range of 10 IP addresses to mimic real-life customer deployments.
1. For this step, refer to Table 7 for KVM IP Pool information and Table 8 for iSCSI-A and iSCSI-B IP Pool information. The default GW for the KVM pool is 192.168.155.1 and the DNS is 192.168.155.14. We do not need to define a default GW for the iSCSI pools.
2. From Cisco UCS Manager, click on the LAN icon in the left navigation menu.
3. Navigate to LAN > Pools > root > Sub-Organizations.
4. Select your POD's Organization.
5. Select the Pools Tab > IP Pools to verify the IP Pool creation.
6. Use the PowerShell commands below to add the IP Pools.

Start-UcsTransaction

$mo = Get-UcsOrg -Level root | Get-UcsOrg -Name "ORG-CL-POD" -LimitScope | Add-UcsIpPool -AssignmentOrder "sequential" -Name "CL-POD-KVM-Pool"

$mo_1 = $mo | Add-UcsIpPoolBlock -DefGw "192.168.155.1" -From "192.168.155.--" -PrimDns "192.168.155.14" -To "192.168.155.--"

Complete-UcsTransaction

Start-UcsTransaction

$mo = Get-UcsOrg -Level root | Get-UcsOrg -Name "ORG-CL-POD" -LimitScope | Add-UcsIpPool -AssignmentOrder "sequential" -Name "CL-POD-iSCSI-A"

$mo_1 = $mo | Add-UcsIpPoolBlock -From "10.10.--.11" -To "10.10.--.20"

Complete-UcsTransaction

Start-UcsTransaction

$mo = Get-UcsOrg -Level root | Get-UcsOrg -Name "ORG-CL-POD" -LimitScope | Add-UcsIpPool -AssignmentOrder "sequential" -Name "CL-POD-iSCSI-B"

$mo_1 = $mo | Add-UcsIpPoolBlock -From "10.20.--.11" -To "10.20.--.20"

Complete-UcsTransaction
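A read-only check that the three pools exist with the expected sizes (1 for KVM, 10 each for iSCSI-A/B):

# Each pool should show the expected Size and (once assigned) Assigned count
Get-UcsOrg -Level root | Get-UcsOrg -Name "ORG-CL-POD" -LimitScope | Get-UcsIpPool | Select-Object Name, Size, Assigned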


Step 4: Create Network Control Policy

In this step, you will create a Network Control Policy to enable CDP. The same policy can also be used to configure LLDP (not covered in this lab).
1. From Cisco UCS Manager, click on the LAN icon in the left navigation menu.
2. Navigate to LAN > Policies > root > Sub-Organizations.
3. Select your POD's Organization and on the right window pane, select the tab for Policies > Network Control Policy.
4. Navigate to root > Sub-Organizations and select your POD's Organization to observe the creation of the policy.
5. The name of the policy used in this step will be CL-POD-NCP (e.g. CL-POD17-NCP). Use the PowerShell command below to add the Network Control Policy.
6. Use the screen shot below to compare the PowerShell command with the UCS Manager GUI:

Start-UcsTransaction

$mo = Get-UcsOrg -Level root | Get-UcsOrg -Name "ORG-CL-POD" -LimitScope | Add-UcsNetworkControlPolicy -Cdp "enabled" -Name "CL-POD-NCP"

$mo_1 = $mo | Add-UcsPortSecurityConfig -ModifyPresent -Descr "" -Forge "allow" -Name "" -PolicyOwner "local"

Complete-UcsTransaction

Step 5: Create vNIC Templates for Management Traffic

In this step, you will create two vNIC Templates for Management Traffic through Fabric A and Fabric B.

1. For this step, we will add the pre-configured NATIVE-VLAN and IB-MGMT-VLAN to the vNIC template.
2. From Cisco UCS Manager, click on the LAN icon in the left navigation menu.
3. Navigate to LAN > Policies > root > Sub-Organizations.
4. Select your POD's Organization.
5. On the right window pane, select the tab for Policies > vNIC Templates to verify the correct creation of the vNIC template.


6. Use the PowerShell commands below to add the vNIC templates. Use the screen captures below to match various fields in the PowerShell commands with the UCSM GUI.

Start-UcsTransaction

$mo = Get-UcsOrg -Level root | Get-UcsOrg -Name "ORG-CL-POD" -LimitScope | Add-UcsVnicTemplate -IdentPoolName "CL-POD-MAC-Pool-A" -Name "Mgmt-A" -NwCtrlPolicyName "CL-POD-NCP" -TemplType "updating-template"

$mo_1 = $mo | Add-UcsVnicInterface -ModifyPresent -DefaultNet "yes" -Name "NATIVE-VLAN"


$mo_2 = $mo | Add-UcsVnicInterface -ModifyPresent -DefaultNet "no" -Name "IB-MGMT-VLAN"

Complete-UcsTransaction

Start-UcsTransaction

$mo = Get-UcsOrg -Level root | Get-UcsOrg -Name "ORG-CL-POD" -LimitScope | Add-UcsVnicTemplate -IdentPoolName "CL-POD-MAC-Pool-B" -Name "Mgmt-B" -NwCtrlPolicyName "CL-POD-NCP" -SwitchId "B" -TemplType "updating-template"

$mo_1 = $mo | Add-UcsVnicInterface -ModifyPresent -DefaultNet "no" -Name "IB-MGMT-VLAN"

$mo_2 = $mo | Add-UcsVnicInterface -ModifyPresent -DefaultNet "yes" -Name "NATIVE-VLAN"

Complete-UcsTransaction
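A quick read-only sketch to confirm both templates and their VLAN membership (Mgmt-A shown):

# List the vNIC templates and confirm fabric and MAC pool assignment
Get-UcsOrg -Level root | Get-UcsOrg -Name "ORG-CL-POD" -LimitScope | Get-UcsVnicTemplate | Select-Object Name, SwitchId, IdentPoolName
# Show the VLANs carried by a template and which one is native
Get-UcsVnicTemplate -Name "Mgmt-A" | Get-UcsVnicInterface | Select-Object Name, DefaultNet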

Step 6: Create Data, iSCSI and APP vNICs and LAN Connectivity Policy

In this step, you will take the configlet command file from the script repository (to save time and effort), update the file for your POD, and create a PowerShell script to configure the 6 remaining vNIC templates.
1. From Cisco UCS Manager, click on the LAN icon in the left navigation menu.
2. Navigate to LAN > Policies > root > Sub-Organizations and select your POD's Organization.
3. On the right window pane, select the tab for Policies > vNIC Templates to verify the correct creation of the vNIC templates after you execute the PowerShell script.
4. Using the File Manager, go to the folder Z:\Script-Repo and open the file Task2-Step6.txt in Notepad or Notepad++, then search and replace all occurrences of "XX" with your POD ID.

5. Using the File Manager, go to the folder Z:\ and copy the file CL-ConnectToUCS.ps1 to Z:\PS-SCRIPTS\Configure-LAN.ps1.
6. Right-click Configure-LAN.ps1 and edit with Notepad++. This file should have the code to log into the UCS and authenticate using the credentials previously generated.
7. Copy the modified (after updating the POD ID) contents of Task2-Step6.txt into the PowerShell file Configure-LAN.ps1 between the Connect and Disconnect blocks of the script.


8. Save the file and bring up the UCS PowerTool. Disconnect from the UCS by issuing:

Disconnect-Ucs

9. Execute the Script:

C:\> Z:\PS-SCRIPTS\Configure-LAN.ps1

If the script does not execute correctly, you will need to issue the following command (as outlined earlier) in your PowerShell tool:

PowerTool C:\>Set-ExecutionPolicy -Scope Process -ExecutionPolicy Unrestricted

The LAN portion of the UCS configuration is now complete. All the required VLANs, IP Pools, MAC address Pools and vNIC templates are configured.
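For reference, the assembled Configure-LAN.ps1 follows this general shape (a sketch only — the actual connect and disconnect lines come from the CL-ConnectToUCS.ps1 file you copied, which uses the saved credentials generated in Lab 1; the interactive Get-Credential and module name shown here are stand-ins that vary by PowerTool version):

# -- Connect block (from CL-ConnectToUCS.ps1) --
Import-Module Cisco.UCSManager          # module name varies by PowerTool release
$handle = Connect-Ucs 192.168.155.20 -Credential (Get-Credential)

# -- Configlet: the POD-adjusted contents of Task2-Step6.txt go here --
# Start-UcsTransaction ... Complete-UcsTransaction blocks that create the
# remaining vNIC templates and the LAN connectivity policy

# -- Disconnect block --
Disconnect-Ucs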


Task 3: SAN Configuration

The SAN configuration workflow for a Cisco UCS server is shown in the figure below. The completed SAN configuration will be used to create the Service Profile Template that will serve as a template for quickly configuring and deploying new Cisco UCS servers.

In this task, you will use the provided PowerShell commands to configure the iSCSI IQN Suffix Pool. Since there is a single blade for each POD, the IQN Suffix Pool will only contain a single suffix.

Step 1: Create iSCSI IQN Suffix Pool

1. For this step, find the IQN Suffix information for your POD from Table 9 in the Lab Access and Configuration Details document.
2. From Cisco UCS Manager, click on the SAN icon in the left navigation menu.
3. Navigate to SAN > Pools > root > Sub-Organizations and select your POD's Organization.
4. On the right window pane, select the tab for Pools and select IQN Pools to verify the IQN Pool when the PowerShell script is executed.

Start-UcsTransaction

$mo = Get-UcsOrg -Level root | Get-UcsOrg -Name "ORG-CL-POD" -LimitScope | Add-UcsIqnPoolPool -AssignmentOrder "sequential" -Name "CL-POD" -Prefix "iqn.2010-11.com-flexpod"

$mo_1 = $mo | Add-UcsIqnPoolBlock -From 1 -Suffix "cl-pod" -To 1

Complete-UcsTransaction
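A read-only check of the pool; since UCS forms initiator IQNs as prefix:suffix:number, the resulting IQN for POD 17 should resemble iqn.2010-11.com-flexpod:cl-pod17:1 (shown as an illustration):

# Verify the IQN pool block created above
Get-UcsOrg -Level root | Get-UcsOrg -Name "ORG-CL-POD" -LimitScope | Get-UcsIqnPoolPool | Get-UcsIqnPoolBlock | Select-Object Suffix, From, To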


Task 4: Server Configuration

The server configuration workflow for a Cisco UCS server is shown in the figure below. The completed server configuration will be used to create the Service Profile Template that will serve as a template for quickly configuring and deploying new Cisco UCS servers.

In this task, you will execute the provided script to complete all the server specific configuration for a Cisco UCS server. For this step, the Server pools and UUID pools have been pre-configured. The commands are shown here for your reference:

Start-UcsTransaction
$mo = Get-UcsOrg -Level root | Get-UcsOrg -Name "ORG-CL-POD17" -LimitScope | Add-UcsServerPool -Name "CL-POD17-SERVER-POOL"
$mo_1 = $mo | Add-UcsComputePooledSlot -ModifyPresent -ChassisId "7" -SlotId 1
Complete-UcsTransaction

Start-UcsTransaction
$mo = Get-UcsOrg -Level root | Get-UcsOrg -Name "ORG-CL-POD17" -LimitScope | Add-UcsUuidSuffixPool -AssignmentOrder "sequential" -Name "CL-POD17-UUID-POOL"
$mo_1 = $mo | Add-UcsUuidSuffixBlock -From "0000-000000001701" -To "0000-00000000170A"
Complete-UcsTransaction
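Since these pools are pre-configured, you can confirm them read-only from the PowerTool (POD17 shown, matching the reference commands above):

# Confirm the server pool and UUID pool exist and show their utilization
Get-UcsOrg -Level root | Get-UcsOrg -Name "ORG-CL-POD17" -LimitScope | Get-UcsServerPool | Select-Object Name, Size, Assigned
Get-UcsOrg -Level root | Get-UcsOrg -Name "ORG-CL-POD17" -LimitScope | Get-UcsUuidSuffixPool | Select-Object Name, Size, Assigned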


Step 1: Create Various Server Policies with Recommended Best Practices

In this step, you will take the configlet command file from the script repository (to save time and effort), update the file for your POD, and create a PowerShell script to configure the required server policies.
1. From Cisco UCS Manager, click on the Server icon in the left navigation menu.
2. Navigate to Server > Policies > root > Sub-Organizations and select your POD's Organization.
3. Use the various policies (BIOS, Boot, Host Firmware, etc.) to verify the policies are correctly created after you execute the PowerShell script.
4. Using the File Manager, go to the folder Z:\Script-Repo and open the file Task4-Step1.txt in Notepad or Notepad++, then search and replace all occurrences of "XX" with your POD ID.

5. Using the File Manager, go to the folder Z:\ and copy the file CL-ConnectToUCS.ps1 to Z:\PS-SCRIPTS\Configure-Server.ps1.
6. Right-click Configure-Server.ps1 and edit with Notepad++. This file should have the code to log into the UCS and authenticate using the credentials previously generated.
7. Copy the modified (after replacing XX with your POD ID) contents of Task4-Step1.txt into the PowerShell file Configure-Server.ps1 between the Connect and Disconnect blocks of the script.

8. Save the file and bring up the UCS PowerTool. Disconnect from the UCS by issuing:

Disconnect-Ucs

9. Execute the Script:

C:\> Z:\PS-SCRIPTS\Configure-Server.ps1

10. Using the GUI, verify that the various policies were configured for your POD.


If the script does not execute correctly, you will need to issue the following command (as outlined earlier) in your PowerShell tool:

PowerTool C:\>Set-ExecutionPolicy -Scope Process -ExecutionPolicy Unrestricted

The Server Policy portion of the UCS configuration is now complete. The various policies should now be configured and ready to be used in the next section.


Task 5: Create Service Profile Template

In this task, you will create a Service Profile Template (SPT) using the provided script.

Completing this task involves the following high-level steps:
1. Copying and executing the PowerShell script to create the Service Profile Template.
2. Using the Cisco UCSM GUI to verify the Service Profile Template creation.

Step 1: Create Service Profile Template

In this step, you will take the configlet command file from the script repository (to save time and effort), update the file with your POD information, and create a PowerShell script to configure the Service Profile Template.
1. From Cisco UCS Manager, click on the Server icon in the left navigation menu.
2. Navigate to Server > Service Profile Template > root > Sub-Organizations and select your POD's Organization. After executing the PowerShell script, this is where you will verify the Service Profile Template.
3. Using the File Manager, go to the folder Z:\Script-Repo and open the file Task5-Step1.txt in Notepad or Notepad++.
4. Enter your POD number within the quotes for the variable PodID.
5. Use Table 11 to find the storage system IQN and add the value within quotes for the variable StorageIQN. The following figure shows a sample of the changes:

6. Using the File Manager, go to the folder Z:\ and copy the file CL-ConnectToUCS.ps1 to Z:\PS-SCRIPTS\Configure-SPT.ps1.
7. Right-click Configure-SPT.ps1 and edit with Notepad++. This file should have the code to log into the UCS and authenticate using the credentials previously generated.
8. Copy the modified (after entering the PodID and StorageIQN values) contents of Task5-Step1.txt into the PowerShell file Configure-SPT.ps1 between the Connect and Disconnect blocks of the script.

9. Save the file and bring up the UCS PowerTool. Disconnect from the UCS by issuing:

Disconnect-Ucs

10. Execute the Script:


C:\> Z:\PS-SCRIPTS\Configure-SPT.ps1

11. Using the GUI, verify that the Service Profile Template was configured for your POD.

If the script does not execute correctly, you will need to issue the following command (as outlined earlier) in your PowerShell tool:

PowerTool C:\>Set-ExecutionPolicy -Scope Process -ExecutionPolicy Unrestricted


Task 6: Deploy Cisco UCS Service Profile

In this task, you will deploy a Service Profile that consolidates all of the configuration associated with configuring a single Cisco UCS server. This includes LAN, SAN and Server specific aspects of the configuration for a server. The Service Profile is generated from the Service Profile Template created in the previous task.

For this task, you will do the following:
1. Login to the Cisco UCSM GUI to create the Service Profile.
2. Open a KVM console to the server to verify storage connectivity.

Complete the following steps to deploy a service profile for a Cisco UCS server.

Step 1: Create Service Profile (SP) to configure servers in your POD

1. From Cisco UCS Manager, click on the Servers icon in the left navigation menu.
2. Navigate to Servers > Service Profile Template > root > Sub-Organizations.
3. Select your POD's Organization.
4. Right-click and select Create Service Profile From Template.
5. In the Create Service Profiles From Template wizard, configure a single Service Profile using the information from Table 10.
6. Select the Service Profile Template that you created in the last step by clicking in the box next to Service Profile Template.

7. Click OK.
8. It might take a few minutes for the UCS blade to become fully associated with the Service Profile due to a firmware upgrade. You can keep an eye on the Overall Status window:


9. You can also keep an eye on the FSM by clicking ">>" on the right and selecting FSM to see detailed information about the configuration:

10. When the Service Profile is associated and the server is assigned, right-click on the Service Profile and select KVM Console. If you are prompted about a blocked pop-up, allow pop-ups for the UCS website. Click on the KVM URL that the system presents.
11. Power cycle the blade by clicking Server Actions and selecting Reset:

12. Observe the boot process to verify the server can access the Boot LUN. You should see the storage system accessible over multiple paths using iSCSI:


At this point, the Cisco UCS server is configured and ready, and the storage system is providing a Boot LUN for the ESXi installation.

vCenter – Detailed Configurations

Setup VMware vCenter

Step 1: Configure Datacenter and Cluster on vCenter

In this step, you will create a Datacenter and an HA/DRS cluster on your POD's vCenter. The newly deployed ESXi host will later be added to this cluster.
1. For this step, find the configuration info for your POD from the table in the Lab Information → Virtualization → vCenter section.
2. Use a web browser to navigate to https://. If you get a warning saying "Your connection is not private" (Chrome) or "not secure" (Firefox), click on Advanced, followed by either (a) Proceed to 192.168.155… (Chrome) or (b) Add Exception & Confirm Security Exception (Firefox).
3. Click on vSphere Web Client (Flash) and login to vCenter. For any browser warnings, repeat the previous step's actions.
4. Navigate to Global Inventory Lists > Resources > vCenter Servers. Select your POD's vCenter instance. On the right window pane, from the top menu bar, select Actions > New Datacenter.
5. In the New Datacenter pop-up window, configure a datacenter for your POD as shown below.

6. From top Home icon, navigate to Hosts and Clusters and select the newly created datacenter. On the right window pane, from the top menu bar, select Actions > New Cluster.


7. In the New Cluster pop-up window, configure a cluster for your POD as shown below.
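This lab uses the vSphere Web Client, but the same datacenter and cluster can be created with VMware PowerCLI if you prefer scripting — a minimal sketch, assuming PowerCLI is installed and using POD17 names for illustration:

# Connect to your POD's vCenter (credentials from Table 14)
Connect-VIServer -Server 192.168.155.81 -User [email protected] -Password 'CL18Sandiego!'
# Create the datacenter at the root folder, then an HA/DRS-enabled cluster inside it
$dc = New-Datacenter -Location (Get-Folder -NoRecursion) -Name "CL-POD17"
New-Cluster -Location $dc -Name "POD17-Cluster" -HAEnabled -DrsEnabled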

Step 2: Specify Location of Virtual Machine (VM) Swap File – Cluster Level

1. Use a web browser to navigate to your POD's vCenter. Use the vSphere Web Client (Flash) and login to vCenter.
2. Navigate to Hosts and Clusters.
3. Select the newly created datacenter and cluster. Click on the Configure tab.
4. Navigate to Configuration > General and click on the Edit button. In the Edit Cluster Settings pop-up window, select the radio button for Datastore specified by host.

Step 3: Enable ESXi Dump Collector

Follow the steps below to verify the ESXi Dump Collector is running.
1. Use a web browser to navigate to your POD's vCenter. Use the vSphere Web Client (Flash) and login to vCenter.
2. Navigate to Administration > System Configuration > Nodes and select the vCenter instance from the list.
3. Click the Related Objects tab to see a list of services running on the vCenter.
4. Filter on Dump Collector. Select VMware vSphere ESXi Dump Collector.
5. Right-click and select Edit Startup Type…
6. In the VMware vSphere ESXi Dump Collector pop-up window, select the radio button for Automatic.
7. Right-click and select Start.

Step 4: Create vSphere Distributed Switch

The distributed switch for Application VM traffic will use two uplinks or vNICs previously created. The traffic from these vNICs will take different paths across the fabric to provide redundancy and load balancing. To create and set up the distributed virtual switch for Application VM traffic, complete the following steps.
1. Use a web browser to navigate to your POD's vCenter. Use the vSphere Web Client (Flash) and login to vCenter.


2. Navigate to Hosts and Clusters and select datacenter. Right-click and select Distributed Switch > New Distributed Switch as shown below.

3. Configure the New Distributed Switch as shown below. Specify a Name. Click Next.


4. For Select version, select the distributed switch version to use. Click Next.

5. For Edit settings, specify the number of uplinks and the Default Port Group as shown below.


6. Review settings and click Finish to complete.

Modify Port Groups for Applications

1. Navigate to Home > Networking and select the datacenter, followed by the newly created distributed switch. Right-click and select Distributed Port group > Manage Distributed Port Groups.


2. In the Manage Distributed Port Groups window, for Select port group policies, select VLAN policy. Click Next.

3. In the Manage Distributed Port Groups window, for Select port groups, click on the icon for Select distributed port groups and add the port group to be modified. Click Next.


4. Under Configure settings, specify the VLAN type and ID for your POD. Use default settings for everything else. Click Next.

5. Review settings. Click Finish to complete.

Add Port Groups for Applications

1. Navigate to Home > Networking and select the datacenter, followed by the newly created distributed switch. Right-click and select Distributed Port group > New Distributed Port Group.


2. In the New Distributed Port Group window, for Select name and location, specify a Name for the new Application VM port group. Click Next.

3. Under Configure settings, specify the VLAN type and ID for your POD. Use default settings for everything else. Click Next.


4. Review settings. Click Finish to complete.
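As with the datacenter and cluster, the distributed switch work can also be scripted with PowerCLI — a sketch under the same assumptions (POD17 names; the application VLAN ID 3501 is illustrative, use the values for your POD):

# Create the vDS with two uplinks, then add an application port group
$vds = New-VDSwitch -Name "CL-POD17-vDS" -Location (Get-Datacenter -Name "CL-POD17") -NumUplinkPorts 2
New-VDPortgroup -VDSwitch $vds -Name "CL-POD17-APP-PG" -VlanId 3501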

Edit Uplinks

1. Select the distributed switch and right-click on Settings > Edit Settings…
2. In the - Edit Settings window, click on Edit Uplink Names. In the Edit Uplink Names window, specify the new names. Click OK twice to complete.

Step 5: Enable/Verify NTP Setup

In this step, you will enable/verify NTP on the newly added host in vCenter.
1. For this step, find the configuration info for your POD from the table in the Lab Information → Cisco UCS Server → LAN Configuration → MAC Address Pools for the Main Compute Setup section.
2. Using the vSphere Web Client, go to Hosts and Clusters and select the newly added host.
3. Navigate to Configure > System > Time Configuration and verify the NTP status.


4. If NTP is not enabled, enable it by clicking Edit and configuring NTP for your POD as shown below.
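The NTP setup can also be done from PowerCLI — a sketch assuming the POD1 host name from Table 12 and an illustrative NTP server address:

# Add an NTP server, then set the ntpd service to start with the host and start it now
$vmh = Get-VMHost -Name CL-POD1-ESXi-1
Add-VMHostNtpServer -VMHost $vmh -NtpServer 192.168.155.1   # NTP server IP is illustrative
Get-VMHostService -VMHost $vmh | Where-Object { $_.Key -eq "ntpd" } | Set-VMHostService -Policy On
Get-VMHostService -VMHost $vmh | Where-Object { $_.Key -eq "ntpd" } | Start-VMHostService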


Step 6: Add vSphere vSwitch Networking for Management, vMotion and NFS Traffic

This section covers the virtual switch (vSwitch) setup for Management, vMotion and NFS storage traffic. The configuration workflow is shown in the figure below.

[Figure: Workflow for vSphere vSwitch Networking Setup – update the Management vSwitch0 configuration (add/verify uplinks; modify/verify vSwitch settings: MTU, failover; modify Management port group settings: label, failover; modify/verify Management VMkernel port settings), then set up vSwitch1 for vMotion and NFS (modify/verify vSwitch settings: MTU, failover; modify/verify port group settings: label, failover; add/modify/verify VMkernel port settings for vMotion and NFS).]

Update Management vSwitch0 Configuration

This design uses a dedicated vSwitch for management with two uplink vNICs, Active/Passive failover for the port group, and routing based on originating port ID for load balancing at the vSwitch level. Traffic from each vNIC takes a different path across the fabric to provide redundancy and load balancing.

Add/Verify vSwitch Uplinks

vSwitch0 is the default virtual switch on the host. The vNICs configured on the host through the assigned UCSM Service Profile appear as vmnics in ESXi. When a host boots up after the ESXi install, one vmnic is assigned to the default virtual switch. The second vNIC/vmnic was also added as vSwitch0's uplink through the KVM Console configuration in the Setup ESXi Management Settings section. In this step, you will verify the uplinks on vSwitch0 and the redundancy and load balancing configuration.
1. From the VMware vSphere Web Client, navigate to Hosts and Clusters.
2. Select the datacenter > cluster > host.
3. On the right window pane, click the Configure tab. Navigate to Networking > Virtual switches and select vSwitch0 from the Virtual Switches list.
4. Verify both vmnic0 and vmnic1 are listed as uplinks for vSwitch0 as shown below.

Modify MTU of vSwitch0 to 9000

An end-to-end MTU of 9000 is recommended for the management vSwitch so that a host reboot can be avoided if an MTU change is ever needed. To change the default MTU, complete the following steps.
1. Click on the Edit Settings icon (5th icon) above the list of Virtual Switches to open the Edit settings window.
2. Change the MTU to 9000 as shown below. Click OK.

85 | Page

Verify NIC Teaming and Failover Settings on vSwitch0

In the vSwitch0 - Edit Settings window, under Teaming and failover, verify the load balancing and failover configuration is as shown below. Use the blue up/down arrow keys to move adapters as needed.

Modify the Network Label for the Management Port Group

Change the network label for the Management port group from the default.
1. In the Standard Switch: vSwitch0 section of the window, select the management port group and click on Edit Settings (pencil icon) right above it – not the one at the top of the page.
2. In the Edit Settings pop-up window, select Properties and change the network label to MGMT-PG and the VLAN ID to 12 (if tagged) as shown below.

Verify NIC Teaming and Failover Settings for the Management Port Group

Change the failover policy to Active/Passive. In the Management Network: Edit Settings pop-up window, for Teaming and Failover, verify the load balancing and failover configuration is as shown below. Use the blue up/down arrow keys to move adapters as needed.

Modify/Verify Management VMkernel Port Settings

1. In the Standard Switch: vSwitch0 section of the window, select the VMkernel port in the management port group and click the Edit Settings (pencil) icon right above it – not the one at the top.
2. In the VMkernel port Edit Settings window, for Port properties, under Enable services, verify that Management Traffic is enabled. Click OK.
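For completeness, the vSwitch MTU change and an NFS VMkernel port can likewise be scripted with PowerCLI — a sketch using the POD1 values from Table 16 (the port group name is illustrative):

# Raise vSwitch0 MTU to 9000, then add an NFS VMkernel port on vSwitch1
$vmh = Get-VMHost -Name CL-POD1-ESXi-1
Get-VirtualSwitch -VMHost $vmh -Name vSwitch0 | Set-VirtualSwitch -Mtu 9000 -Confirm:$false
New-VMHostNetworkAdapter -VMHost $vmh -VirtualSwitch vSwitch1 -PortGroup "NFS-PG" -IP 10.30.1.11 -SubnetMask 255.255.255.0 -Mtu 9000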


Network Setup – Configure Nexus 9000 Switches

In this section of the lab, you will use the Sandbox tool to configure the remaining VLANs that need to be enabled on the Nexus switches to provide network connectivity to the newly deployed Cisco UCS servers.

The figure below shows the high-level connectivity used in the setup.

To configure VLANs to provide connectivity to newly deployed servers, complete the following steps.

Login to NX-API Developer Sandbox Tool for each Nexus Switch

1. For this step, find the configuration info for your POD from the table in the Lab Information → Network → Cisco Nexus 9000 Switches – Access Information section.
2. Open a web browser.
3. Navigate to the IP address of the Nexus-A switch and login.
4. Open another tab in the browser.
5. Navigate to the IP address of the Nexus-B switch and login.

Configure VLANs on each Switch using Sandbox Tool

1. For this step, find the configuration info for your POD from the table in the Lab Information → Network → Cisco Nexus 9000 Switches – VLANs Information for Main Lab section.
2. In the Sandbox tool window (top, left), enter the configuration to create the VLANs globally and add the VLANs for your POD to the port-channels that are part of the Virtual Port Channel (vPC) configuration between the Cisco UCS Fabric Interconnects and the Nexus 9000 Series switches.
3. Verify once more that the VLANs are the ones for your POD. Click POST to apply the configuration to the Cisco Nexus-A switch.
4. Repeat the above steps for the Cisco Nexus-B switch (a scripted alternative is sketched below).
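The Sandbox tool performs this POST for you, but the same call can be issued directly; a minimal sketch using PowerShell's Invoke-RestMethod against the NX-API endpoint (switch IP, credentials, VLAN and port-channel numbers are all illustrative):

# NX-API accepts CLI configuration commands as a JSON payload posted to http(s)://<switch>/ins;
# individual commands in the "input" string are separated by " ;"
$body = @{
    ins_api = @{
        version       = "1.0"
        type          = "cli_conf"
        chunk         = "0"
        sid           = "1"
        input         = "vlan 3301 ;name CL-POD-NFS ;interface port-channel 13 ;switchport trunk allowed vlan add 3301"
        output_format = "json"
    }
} | ConvertTo-Json
$cred = Get-Credential   # Nexus admin credentials for your POD
Invoke-RestMethod -Uri "http://192.168.155.3/ins" -Method Post -Body $body -ContentType "application/json" -Credential $cred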
