ScaleIO 14G Ready Node Deployment with Dell Networking S5048F-ON 25 GbE

Dell EMC Networking Infrastructure Solutions March 2018

Revisions

Date           Rev.  Description      Authors
February 2018  1.0   Initial release  Curtis Bunch, Jordan Wilson, Andrew Waranowski

THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.

Copyright © 2018 Dell Inc. All rights reserved. Dell and the Dell EMC logo are trademarks of Dell Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies.

Table of contents

Revisions
Executive summary
1. Introduction
1.1. Dell EMC ScaleIO
1.2. Typographical conventions
2. Hardware overview
2.1. Dell EMC Networking S3048-ON
2.2. Dell EMC Networking S5048F-ON
2.3. Dell EMC Networking Z9100-ON
2.4. Dell EMC PowerEdge R640
2.5. Dell EMC ScaleIO Ready Node R740XD-SIO-HC-F
3. Networking
3.1. Network topology
3.2. Network connectivity
3.3. IP addressing
4. Configure physical switches
4.1. Factory Default Settings
4.2. Z9100-ON spine switch configuration
4.3. S5048F-ON leaf switch configuration
4.4. S3048-ON management switch configuration
4.5. Verify switch configuration
4.5.1. show vlt brief
4.5.2. show vlt detail
4.5.3. show vlt mismatch
4.5.4. show vrrp brief
4.5.5. show lldp neighbors
4.5.6. show spanning-tree rstp brief
5. Deploying ScaleIO with AMS
5.1. Deploy the AMS server
5.1.1. Configure an IP address
5.2. Install and log into the ScaleIO GUI
5.3. Perform initial AMS setup
5.4. Discover and deploy ScaleIO Ready Nodes
6. ESXi compute environment
6.1. Create an IP pool for vMotion VMkernel
6.2. Create a compute environment
A. Troubleshooting SDS connectivity
B. Validated hardware and components
B.1. Dell EMC Networking switches
B.2. Dell EMC ScaleIO Ready Nodes
C. Validated software
D. Product manuals and technical guides
D.1. Dell EMC
D.2. VMware
E. Support and feedback

Executive summary

Dell EMC ScaleIO Ready Node brings together next-generation, rack-optimized Dell EMC PowerEdge servers with Dell EMC ScaleIO software to deliver an agile, scalable, and high-performance software-defined storage (SDS) solution for block storage. It fits within existing infrastructure and is well suited to organizations looking to reduce the time needed to plan and deploy a new architecture. Having the software and hardware from a single vendor, with a single point of support, dramatically simplifies the process of procuring and deploying a fully architected SDS solution. The ability to select from different deployment models and configurations, including All-Flash, provides the flexibility to choose pre-configured Ready Nodes that best meet your business objectives.

This document details the deployment of the Dell EMC ScaleIO Ready Node using the Dell EMC Networking S5048F-ON 25 GbE switch. The S5048F-ON is the latest in the S-series of switches, providing the bandwidth and low latency required for a scalable storage architecture. This document also:

• Assists administrators in selecting the best hardware and topology for their Dell EMC ScaleIO network
• Delivers detailed instructions and working examples for the deployment and configuration of Dell EMC S-series switches
• Delivers step-by-step instructions and working examples for the deployment and configuration of a sample Dell EMC ScaleIO Ready Node
• Shows conceptual, physical, and logical diagram examples for various networking topologies

1. Introduction

ScaleIO is a software-only solution that uses existing servers' local disks and LAN to create a virtual SAN with all the benefits of external storage, at a fraction of the cost and complexity. ScaleIO turns existing local storage devices into shared block storage. For many workloads, ScaleIO storage is comparable to, or better than, external shared block storage. The lightweight ScaleIO software components are installed on the application servers and communicate via a standard LAN to handle the application I/O requests sent to ScaleIO block volumes. An extremely efficient decentralized block I/O flow, combined with a distributed, sliced volume layout, results in a massively parallel I/O system.

ScaleIO is designed and implemented with enterprise-grade resilience. The software features an efficient distributed self-healing process that overcomes media and server failures without requiring administrator involvement.

A ScaleIO Ready Node is the combination of ScaleIO software-defined block storage and Dell EMC PowerEdge servers, optimized to run ScaleIO, enabling customers to quickly deploy a fully architected, software-defined, scale-out server SAN. The solution is managed by the Automated Management Service (AMS), which enables a straightforward deployment process from bare metal, with no IP state, to a fully configured system: IP address assignment, ScaleIO deployment and configuration, and vCenter configuration.

In the modern data center, 100 GbE Ethernet is now affordable enough to meet the increasing bandwidth demands of storage and compute. The Dell EMC S5048F-ON offers 25 GbE and 100 GbE ports and is designed to be used as a data center Top of Rack (ToR) switch.

This document covers the network deployment requirements for an AMS-based solution, including interface configuration and adding the ScaleIO Ready Nodes to an existing VMware vCenter server. The topology is a layer 2, Virtual Link Trunking (VLT) enabled topology.

Note: For step-by-step instructions on deploying a spine-leaf architecture using Dell EMC Networking, see the Dell EMC Leaf-Spine Deployment Guide. For steps on deploying and configuring VMware vSphere, see the vSphere Networking Guide for vSphere 6.5, ESXi 6.5, and vCenter Server 6.5.

1.1. Dell EMC ScaleIO

A ScaleIO virtual SAN consists of the following software components:

• Meta Data Manager (MDM) – Configures and monitors the ScaleIO system. The MDM can be configured in redundant cluster mode, with three members on three servers or five members on five servers, or in single mode on a single server. In this guide, four ScaleIO Ready Nodes are used, resulting in a three-member MDM configuration.
• ScaleIO Data Server (SDS) – Manages the capacity of a single server and acts as a back end for data access.
• ScaleIO Data Client (SDC) – A lightweight device driver that exposes ScaleIO volumes as block devices.
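The cluster-sizing rule above can be expressed as a short helper. This is an illustrative sketch only; the function name and the pick-the-largest-supported-size rule are assumptions for this example, not part of the ScaleIO software:

```python
def mdm_cluster_mode(nodes: int) -> int:
    """Pick an MDM cluster size for a given number of ScaleIO nodes.

    ScaleIO supports a single MDM, a three-member cluster, or a
    five-member cluster. This sketch simply selects the largest
    supported size the node count can host.
    """
    if nodes >= 5:
        return 5
    if nodes >= 3:
        return 3
    return 1

# The four Ready Nodes in this guide yield a three-member MDM cluster.
print(mdm_cluster_mode(4))  # 3
```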

The figure below illustrates the communication between these components.

Figure 1. ScaleIO component communication

In a VMware environment, like the one built in this document, the MDM and SDS components are installed on a dedicated ScaleIO Virtual Machine (SVM), whereas the SDC is installed directly on the ESXi host. Figure 2 illustrates this.

Figure 2. ScaleIO implementation on ESXi

1.2. Typographical conventions

This document uses the following typographical conventions:

Monospaced text Command Line Interface (CLI) examples

Bold monospaced text Commands entered at the CLI prompt

Italic monospaced text Variables in CLI examples

Bold text Graphical User Interface (GUI) fields and information entered in the GUI

2. Hardware overview

This section briefly describes the hardware used to validate the deployment example in this guide. A complete listing of hardware and components is found in Appendix B.

2.1. Dell EMC Networking S3048-ON

The Dell EMC Networking S3048-ON is a 1-Rack Unit (RU) switch with forty-eight 1 GbE Base-T ports and four 10 GbE SFP+ ports. In this guide, one S3048-ON supports ESXi management, AMS management and discovery, as well as iDRAC management.

Figure 3. Dell EMC Networking S3048-ON

2.2. Dell EMC Networking S5048F-ON

The S5048F-ON is a 1-RU switch with forty-eight 25 GbE SFP28 ports and six 100 GbE ports. In this guide, this switch is deployed as a leaf switch performing basic access switch functionality for the attached ScaleIO Ready Nodes.

Figure 4. Dell EMC Networking S5048F-ON

2.3. Dell EMC Networking Z9100-ON

The Dell EMC Networking Z9100-ON is a 1-RU, multilayer switch with thirty-two ports supporting 10/25/40/50/100 GbE plus two 10 GbE ports. The Z9100-ON is used as a spine switch in this guide.

Figure 5. Dell EMC Networking Z9100-ON

2.4. Dell EMC PowerEdge R640

The R640 is a 2-socket, 1-RU server. In this environment, it runs the Automated Management Service (AMS) server.

Dell EMC PowerEdge R640

2.5. Dell EMC ScaleIO Ready Node R740XD-SIO-HC-F

This 2U1N ScaleIO Ready Node is based on the Dell EMC PowerEdge R740xd high-capacity server. The R740XD-SIO-HC-F is a 2-socket, 2-RU server with 24 or 32 drive slots. Four R740XD-SIO-HC-F nodes, using all-flash storage, form the ScaleIO cluster for this deployment.

R740XD-SIO-HC-F 2U1N single node

3. Networking

In ScaleIO, inter-node communication (for the purposes of managing data locations, rebuild and rebalance, and for application access to stored data) can be done on one IP network, or on separate IP networks. VLANs are also supported. Management (via any of the management interfaces) can be done in the following ways:

• Via a separate network with access to the other ScaleIO components • On the same network

In this environment management is deployed using the S3048-ON management switch. Each ScaleIO Ready Node has two 1 GbE ports, one for ESXi/AMS management and the other for iDRAC connectivity. Both of these interfaces are connected to the switch.

3.1. Network topology

This section provides an overview of the network topology and the physical connections used in this deployment.

Production network

A non-blocking network design allows the use of all switch ports concurrently. Such a design is needed to accommodate various traffic patterns in a ScaleIO deployment and to optimize the additional traffic generated in the HCI environment. Figure 6 shows a leaf-spine topology providing access to the existing infrastructure found in a typical data center. The ScaleIO components, MDM, SDS, and SDC, reside on the ScaleIO Ready Nodes while ESXi is managed through vCenter in the management environment.

Figure 6. Production network. The S5048F-ON leaf switches connect through eVLT to the existing infrastructure network (172.16.8.0/21). ScaleIO Ready Nodes RN-01 through RN-04 each run an ESXi host with the MDM, SDS, and SDC components.

Existing Infrastructure

In the figure above, the existing infrastructure cloud represents the existing components found in a data center. The components and IP addresses used in this deployment are outlined in the following table.

Table 1. Existing infrastructure component list

Component               DNS or IP Address          Purpose
vCenter                 atx01m01vc01.dell.local    VMware vCenter server for cluster management
NTP                     ntp0.dell.local            Time server
DNS                     172.16.11.4, 172.16.11.5   DNS services
Management workstation  172.16.11.10               Configuration workstation

Management Network

A single S3048-ON switch provides iDRAC, ESXi, and SVM connectivity to the ScaleIO Ready Nodes. Figure 7 shows the S3048-ON connected to the S5048F-ON switches through a port channel. The two sets of ports (iDRAC and ESXi/SVM/AMS management) are assigned to two separate VLAN IDs. AMS must be in the same broadcast domain as the ESXi/SVM interfaces on the ScaleIO Ready Nodes. Both VLANs are tagged on the upstream port channel.

Figure 7. Management network. The S3048-ON management switch uplinks to the S5048F-ON leaf switches through port channel 101; each Ready Node and the AMS server connect both an iDRAC port and an ESXi/SVM management port to the S3048-ON.

3.2. Network connectivity

In this guide, the Z9100-ON spine layer and the S5048F-ON leaf layer are configured as two VLT domains joined together. This is known as enhanced VLT (eVLT).

The Z9100-ON switches, comprising the spine layer, are configured in a VLT domain. Each Z9100-ON connects to both S5048F-ON switches in the leaf layer. These connections are shown in Figure 8, represented by the blue and red lines. The spine layer behaves as the distribution layer and contains the default gateway addresses for the subnets created in this environment. The S3048-ON is a management switch used to connect iDRAC, AMS, ESXi management, and SVM management traffic. It is connected by a port channel (green lines) to the S5048F-ON VLT domain.

Figure 8. eVLT wiring diagram

Part of a successful VLT deployment is the use of a VLT backup link. The backup link monitors connectivity between the VLT peer switches by sending configurable, periodic keepalive messages. Figure 9 and Figure 10 show that the out-of-band (OOB) management interface (ma1/1) is configured as a point-to-point link to fulfill this requirement. Administrative access is performed in-band, on an isolated VLAN, and does not require this port.

Figure 9. Z9100-ON OOB management interface used for VLT backup destination

Figure 10. S5048F-ON OOB management interface used for VLT backup destination

Figure 11 shows one ScaleIO Ready Node connected to the two S5048F-ON switches via two Mellanox ConnectX-4 LX PCIe cards installed in PCIe slots one and two. The leaf switches are VLT peers and one port from each PCIe card connects to each leaf switch. The blue links represent ScaleIO data1 and data2, while the red links carry all other traffic and are used when deploying the compute environment (see Section 6). The connections for R740xd-SIO-HC-2 through R740xd-SIO-HC-4 (not shown) are made in the same manner.

Figure 11. R740XD-SIO-HC-1 (RN-1) wiring to production network

Figure 12 shows one ScaleIO Ready Node (RN-1) iDRAC connected to the S3048-ON switch via the onboard iDRAC port, shown in green. Also connected is a single 1 GbE port, shown in red, from the network daughter card (NDC). This connection is used to manage ESXi as well as the SVMs. The S3048-ON management switch is connected via a port channel to the S5048F-ON VLT pair. The connections for R740xd-SIO-HC-2 through R740xd-SIO-HC-4 (not shown) are made in the same manner.

Figure 12. R740xd-SIO-HC-1 iDRAC wiring to management switch

For the single-node R740xd server used in this deployment, the cables and configuration of the system are shown in the table below.

Table 2. R740xd single-node server configuration

Description            iDRAC       Management  Data 1       Data 2       Client 1     Client 2
Port                   iDRAC       vmnic2      vmnic5       vmnic7       vmnic4       vmnic6
PCI slot               n/a         rNDC        1            2            1            2
Physical port on node  iDRAC       3           1            1            2            2
Speed                  1 GbE       1 GbE       25 GbE       25 GbE       25 GbE       25 GbE
Physical switch/port   S3048-Mgmt  S3048-Mgmt  S5048-Leaf1  S5048-Leaf2  S5048-Leaf1  S5048-Leaf2
                       gi1/1       gi1/3       tw1/24       tw1/24       tw1/23       tw1/23
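One reading of the table above can be captured as data and sanity-checked, for example to confirm that the two 25 GbE data links land on different leaf switches. The dictionary below is hypothetical helper tooling, not part of the deployment; switch and port names follow this guide's conventions:

```python
# One node's cabling, per the table above (hypothetical helper data).
node_ports = {
    "iDRAC":      {"switch": "S3048-Mgmt",  "port": "gi1/1",  "speed_gbe": 1},
    "Management": {"switch": "S3048-Mgmt",  "port": "gi1/3",  "speed_gbe": 1},
    "Data 1":     {"switch": "S5048-Leaf1", "port": "tw1/24", "speed_gbe": 25},
    "Data 2":     {"switch": "S5048-Leaf2", "port": "tw1/24", "speed_gbe": 25},
    "Client 1":   {"switch": "S5048-Leaf1", "port": "tw1/23", "speed_gbe": 25},
    "Client 2":   {"switch": "S5048-Leaf2", "port": "tw1/23", "speed_gbe": 25},
}

# The data links must be spread across both leaves so a single leaf
# failure cannot isolate ScaleIO data traffic.
data_leaves = {v["switch"] for k, v in node_ports.items() if k.startswith("Data")}
assert data_leaves == {"S5048-Leaf1", "S5048-Leaf2"}
```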

AMS must be on the same layer 2 network as the management ports. This allows broadcasts to be used for initial IP address configuration. The iDRAC connection from the AMS management server is not required for a successful deployment, but is shown as the green link in Figure 13 to illustrate a complete deployment.

Figure 13. Dell EMC PowerEdge R640 server wiring to management switch

3.3. IP addressing

Dell EMC ScaleIO MDMs, SDSs, and SDCs can have multiple IP addresses and can reside on more than one network. Multiple IP addresses provide options for load balancing and redundancy. ScaleIO natively provides redundancy and load balancing across physical network links when an MDM or SDS is configured to send traffic across multiple links. In this configuration, each interface available to the MDM or SDS is assigned an IP address, each in a different subnet. In this deployment guide, two networks are used for MDM and SDS traffic. When each MDM or SDS has multiple IP addresses, ScaleIO load balances more efficiently due to its awareness of the traffic pattern.

Accounting for the management networks and two data networks, each ScaleIO node needs seven IP addresses, or eight when the optional VMware vMotion network is used:

• iDRAC Management – Used for administrative management of the R740xd through the iDRAC interface.
• ESXi Management – Used to connect the host to vCenter and for management.
• ESXi vMotion – Used to migrate virtual machine states between ESXi hosts (optional).
• SVM Management – Used for administrative management of the SVM installed on each host.
• Data Network 1 SVM – Used by the SVM to access Data Network 1.
• Data Network 1 SDC – Used by ESXi as a VMkernel IP for data access to Data Network 1.
• Data Network 2 SVM – Used by the SVM to access Data Network 2.
• Data Network 2 SDC – Used by ESXi as a VMkernel IP for data access to Data Network 2.
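The per-node requirements above can be tallied programmatically. This is a minimal illustrative sketch; the list constant is an assumption of this example, not part of any ScaleIO tooling:

```python
# Per-node IP address requirements; the vMotion VMkernel address is optional.
PER_NODE_IPS = [
    "iDRAC Management",
    "ESXi Management",
    "ESXi vMotion",        # optional
    "SVM Management",
    "Data Network 1 SVM",
    "Data Network 1 SDC",
    "Data Network 2 SVM",
    "Data Network 2 SDC",
]

required = [name for name in PER_NODE_IPS if name != "ESXi vMotion"]
print(len(required), len(PER_NODE_IPS))  # 7 mandatory, 8 with vMotion
```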

Table 3 and Table 4 show how to calculate the IP address requirements.

Table 3. VMware and management IP network calculations

Item: N
Description: Number of nodes.

Item: IP address pools
Description: The pools of IP addresses used for the following groups:
- ESXI_MGMT_IP = Management IP addresses
- ESXI_VMOTION_IP = ESXi vMotion IP addresses
- SVM_MGMT_IP = SVM management IP addresses
- IDRAC_MGMT_IP = iDRAC IP address for each node
Comments: SVM IP addresses are relevant for ESXi nodes only.

Item: AMS/Proxy server IP address
Description: The IP address used by the AMS server/proxy to manage the ScaleIO nodes and logical entities.
Comments: Recommended to reside on the same subnet/broadcast domain, unless the proxy is used for discovery operations.

The formula to calculate the IP address pool or subnet needs is:
N * ESXI_MGMT_IP + N * ESXI_VMOTION_IP + N * SVM_MGMT_IP + N * IDRAC_MGMT_IP + AMS server IP address

Table 4. ScaleIO data IP network calculations

Item: N
Description: Number of nodes.

Item: Data network 1
Description: The pools of IP addresses used for static allocation for the following groups:
1. Node_DATA1_IP = ScaleIO node internal (interconnect) IP addresses
2. SVM_DATA1_IP = SVM IP addresses in the Data1 network
3. MDM_Cluster_Virtual_IP_DATA1 = The virtual IP of the MDM cluster in the Data1 network
Comments: For clarity, the first subnet is referred to as "Data1".
The formula to calculate the subnet pool or subnet needs is:
N * Node_DATA1_IP + N * SVM_DATA1_IP + MDM_Cluster_Virtual_IP_DATA1

Item: Data network 2
Description: The pools of IP addresses used for static allocation for the following groups:
1. Node_DATA2_IP = ScaleIO node internal (interconnect) IP addresses
2. SVM_DATA2_IP = SVM IP addresses in the Data2 network
3. MDM_Cluster_Virtual_IP_DATA2 = The virtual IP of the MDM cluster in the Data2 network
Comments: For clarity, the second subnet is referred to as "Data2".
The formula to calculate the subnet pool or subnet needs is:
N * Node_DATA2_IP + N * SVM_DATA2_IP + MDM_Cluster_Virtual_IP_DATA2
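The pool-sizing formulas above reduce to simple arithmetic. The sketch below (the function names are illustrative only) evaluates them for the four-node deployment in this guide:

```python
def mgmt_pool_size(n_nodes: int) -> int:
    # N * ESXI_MGMT_IP + N * ESXI_VMOTION_IP + N * SVM_MGMT_IP
    # + N * IDRAC_MGMT_IP + one AMS server address
    return 4 * n_nodes + 1

def data_pool_size(n_nodes: int) -> int:
    # N * Node_DATA_IP + N * SVM_DATA_IP + one MDM cluster virtual IP;
    # the same formula applies to the Data1 and Data2 networks.
    return 2 * n_nodes + 1

print(mgmt_pool_size(4))  # 17 management-side addresses
print(data_pool_size(4))  # 9 addresses per data network
```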

Table 5 defines the IP addresses used for the AMS server and the virtual IP addresses in both data networks.

Table 5. AMS and virtual IP addresses

Object                        VIP Address
AMS server                    172.16.41.250
MDM_Cluster_Virtual_IP_DATA1  172.16.44.250
MDM_Cluster_Virtual_IP_DATA2  172.16.45.250

The IP ranges and default gateway for each IP subnet are shown in Table 6. The ESXI_VMOTION_IP network and both the Data 1 and Data 2 networks are non-routed in this environment and do not require a gateway address to function.

Table 6. IP ranges

IP Pool                       IP Subnet       Default gateway
IDRAC_MGMT_IP                 172.16.40.0/24  172.16.40.254
ESXI_MGMT_IP                  172.16.41.0/24  172.16.41.254
ESXI_VMOTION_IP               172.16.42.0/24  n/a
SVM_MGMT_IP                   172.16.43.0/24  172.16.43.254
Node_DATA1_IP & SVM_DATA1_IP  172.16.44.0/24  n/a
Node_DATA2_IP & SVM_DATA2_IP  172.16.45.0/24  n/a
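Before configuring the switches, it is worth confirming that each routed subnet actually contains its gateway address. A small check using Python's standard ipaddress module; the ip_ranges dictionary simply restates the plan above and is not part of any deployment tool:

```python
import ipaddress

# Subnets and gateways for this deployment; vMotion and both data
# networks are non-routed and have no gateway.
ip_ranges = {
    "IDRAC_MGMT_IP":   ("172.16.40.0/24", "172.16.40.254"),
    "ESXI_MGMT_IP":    ("172.16.41.0/24", "172.16.41.254"),
    "ESXI_VMOTION_IP": ("172.16.42.0/24", None),
    "SVM_MGMT_IP":     ("172.16.43.0/24", "172.16.43.254"),
    "DATA1":           ("172.16.44.0/24", None),
    "DATA2":           ("172.16.45.0/24", None),
}

for pool, (subnet, gateway) in ip_ranges.items():
    network = ipaddress.ip_network(subnet)
    if gateway is not None:
        # Every default gateway must fall inside its own subnet.
        assert ipaddress.ip_address(gateway) in network, pool
```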

Each data and management network can reside on a different VLAN, enabling separation at a much higher granularity. In this deployment guide, VLANs are used to separate each subnet as shown in Table 7. Note that because the AMS server must be reachable by both the ESXi management and SVM management networks through a broadcast message, these two networks share the same VLAN.

Table 7. VLAN and subnet association

IP Pool                       VLAN ID
IDRAC_MGMT_IP                 1640
ESXI_MGMT_IP                  1641
ESXI_VMOTION_IP               1642
SVM_MGMT_IP                   1641
Node_DATA1_IP & SVM_DATA1_IP  1644
Node_DATA2_IP & SVM_DATA2_IP  1645
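The VLAN plan above can likewise be expressed as data; the check below confirms the one deliberate overlap, the shared VLAN 1641 required for AMS broadcast discovery. The dictionary is illustrative only:

```python
# IP pool to VLAN ID mapping for this deployment.
vlan_of = {
    "IDRAC_MGMT_IP":   1640,
    "ESXI_MGMT_IP":    1641,
    "ESXI_VMOTION_IP": 1642,
    "SVM_MGMT_IP":     1641,
    "DATA1":           1644,
    "DATA2":           1645,
}

# ESXi and SVM management intentionally share VLAN 1641 so the AMS
# server can reach both networks via broadcast.
shared = sorted(pool for pool, vlan in vlan_of.items() if vlan == 1641)
print(shared)  # ['ESXI_MGMT_IP', 'SVM_MGMT_IP']
```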

4. Configure physical switches

This section contains switch configuration details, with explanations, for one switch of each type. The configuration of the remaining switch in each pair is very similar. Complete configuration files for the switches are provided as attachments.

4.1. Factory Default Settings

The configuration commands in the sections below assume switches are at their factory default settings. All switches in this guide can be reset to factory defaults as follows:

switch# restore factory-defaults stack-unit unit# clear-all

Proceed with factory settings? Confirm [yes/no]:yes

Factory settings are restored, and the switch reloads. After reload, enter A at the [A/C/L/S] prompt as shown below to exit Bare Metal Provisioning mode.

This device is in Bare Metal Provisioning (BMP) mode. To continue with the standard manual interactive mode, it is necessary to abort BMP.

Press A to abort BMP now.

Press C to continue with BMP. Press L to toggle BMP syslog and console messages.

Press S to display the BMP status.

[A/C/L/S]:A

% Warning: The bmp process will stop ...

Dell>

The switch is now ready for configuration.

4.2. Z9100-ON spine switch configuration

The following sections outline the configuration commands issued to the Z9100-ON spine switches to build the topology in Section 3.1.

Note: The following configuration details are specific to Z9100-Spine1. The remaining spine switch is similar. Complete configuration details for both spine switches are provided in the attachments named Spine1.txt and Spine2.txt.

Initial configuration involves setting the hostname and enabling LLDP, which is useful for troubleshooting. The VLT backup interface is configured with an IP address. Finally, RSTP is enabled with a bridge priority of 0, making this switch the preferred spanning tree root.

enable
configure

hostname Z9100-Spine1

protocol lldp
advertise management-tlv management-address system-description system-name
advertise interface-port-desc

interface ManagementEthernet 1/1
ip address 192.168.255.1/30
no shutdown

protocol spanning-tree rstp
no disable
bridge-priority 0

Next, the VLT interconnect (VLTi) between Z9100-Spine1 and Z9100-Spine2 is configured. In this configuration, interfaces hundredGigE 1/31-32 are added to static port channel 127 for the VLT interconnect. The backup destination is the management IP address of the VLT peer switch, Z9100-Spine2. This switch is given the unit ID of zero.

interface port-channel 127
description VLTi
mtu 9216
channel-member hundredGigE 1/31 - 1/32
no shutdown

interface range hundredGigE 1/31 - 1/32
description VLTi
mtu 9216
no shutdown

vlt domain 127
peer-link port-channel 127
back-up destination 192.168.255.2
unit-id 0

Interfaces hundredGigE 1/1-2 connect downstream to the leaf switches via an LACP port channel. Port channel 126 contains both interfaces and is configured as a VLT peer LAG.

interface hundredGigE 1/1
description S5048-Leaf1
port-channel-protocol LACP
port-channel 126 mode active
no shutdown

interface hundredGigE 1/2
description S5048-Leaf2
port-channel-protocol LACP
port-channel 126 mode active
no shutdown

interface Port-channel 126
description S5048F-Leaf1 & Leaf2
portmode hybrid
switchport
vlt-peer-lag port-channel 126
no shutdown

Six VLAN interfaces are created, and port channel 126 is tagged in each VLAN. The management VLANs (1640 and 1641) as well as the DevOps VLAN (see Section 6) are assigned to VRRP groups with virtual addresses. The VRRP priority is set to 254 to make this switch the preferred VRRP master (on the VRRP peer switch, the priority is set to 1). VLAN 1641 is assigned two IP addresses corresponding to the two subnets contained in the VLAN, and two VRRP groups are assigned to this VLAN. The remaining VLANs are not routed and do not require an IP address on the VLAN interface.

interface Vlan 1640
description IDRAC_MGMT_IP & SW_MGMT
ip address 172.16.40.251/24
tagged Port-channel 126
vrrp-group 40
description IDRAC_MGMT_IP & SW_MGMT
priority 254
virtual-address 172.16.40.254
no shutdown

interface Vlan 1641
description ESXI_MGMT & SVM_MGMT
ip address 172.16.41.251/24
ip address 172.16.43.251/24 secondary
tagged Port-channel 126
vrrp-group 41
description ESXI_MGMT
priority 254
virtual-address 172.16.41.253
vrrp-group 43
description SVM_MGMT_IP
priority 254
virtual-address 172.16.43.253
no shutdown

interface Vlan 1642
description ESXI_VMOTION_IP
mtu 9216
tagged Port-channel 126
no shutdown

interface Vlan 1644
description Node_DATA1_IP & SVM_DATA1_IP
tagged Port-channel 126
no shutdown

interface Vlan 1645
description Node_DATA2_IP & SVM_DATA2_IP
tagged Port-channel 126
no shutdown

interface Vlan 500
description DevOps Group
ip address 100.64.20.2/22
vrrp-group 100
description DevOps-VLAN
priority 254
virtual-address 100.64.20.1
no shutdown
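The priority values used above follow standard VRRP election: the router with the highest priority becomes master. The sketch below is a deliberate simplification (it ignores VRRP's tie-breaking on interface address and the priority-255 address-owner case):

```python
def vrrp_master(priorities: dict) -> str:
    """Return the name of the router with the highest VRRP priority."""
    return max(priorities, key=priorities.get)

# Priorities as configured in this guide: 254 on the preferred switch,
# 1 on its VRRP peer.
peers = {"Z9100-Spine1": 254, "Z9100-Spine2": 1}
print(vrrp_master(peers))  # Z9100-Spine1
```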

4.3. S5048F-ON leaf switch configuration

The following section outlines the configuration commands issued to the S5048F-ON leaf switches. The switches start at their factory default settings per Section 4.1.

Note: The following configuration details are specific to S5048F-Leaf1. The remaining leaf switch is similar. Complete configuration details for both leaf switches are provided in the attachments named S5048-Leaf1.txt and S5048-Leaf2.txt.

Initial configuration involves setting the hostname and enabling LLDP, which is useful for troubleshooting. The VLT backup interface is configured with an IP address. Finally, RSTP is enabled.

enable
configure

hostname S5048F-Leaf1

protocol lldp
advertise management-tlv management-address system-description system-name
advertise interface-port-desc

interface ManagementEthernet 1/1
ip address 192.168.255.1/30
no shutdown

protocol spanning-tree rstp
no disable

Next, the VLTi between S5048F-Leaf1 and S5048F-Leaf2 is configured. In this configuration, interfaces hundredGigE 1/53-54 are added to static port channel 127 for the VLT interconnect. The backup destination is the management IP address of the VLT peer switch, S5048F-Leaf2. Finally, VLT peer-routing is enabled, providing forwarding redundancy in the event of a switch failure.

interface port-channel 127
description VLTi
mtu 9216
channel-member hundredGigE 1/53 - 1/54
no shutdown

interface range hundredGigE 1/53 - 1/54
description VLTi
mtu 9216
no shutdown

vlt domain 127
peer-link port-channel 127
back-up destination 192.168.255.2
unit-id 0

Interfaces hundredGigE 1/49-50 connect upstream to the spine switches via an LACP port channel. Port channel 126 contains both interfaces and is enabled to participate in VLT.

interface hundredGigE 1/49
description Z9100-Spine1
port-channel-protocol LACP
port-channel 126 mode active
no shutdown

interface hundredGigE 1/50
description Z9100-Spine2
port-channel-protocol LACP
port-channel 126 mode active
no shutdown

interface Port-channel 126
description Z9100-Spine1 & Spine2
portmode hybrid
switchport
mtu 9216
vlt-peer-lag port-channel 126
no shutdown

The downstream interfaces to the ScaleIO Ready Nodes are configured in the next set of commands. The interface to the S3048-ON management switch is also configured.

interface twentyFiveGigE 1/17 description Ready Node 1_NIC-1_vmnic5

switchport

spanning-tree rstp edge-port no shutdown

interface twentyFiveGigE 1/18 description Ready Node 1_NIC-1_vmnic4

mtu 9216

switchport spanning-tree rstp edge-port

no shutdown

interface twentyFiveGigE 1/19

description Ready Node 2_NIC-1_vmnic5

switchport spanning-tree rstp edge-porft

no shutdown

interface twentyFiveGigE 1/20

description Ready Node 2_NIC-1_vmnic4 mtu 9216

switchport

spanning-tree rstp edge-port no shutdown

interface twentyFiveGigE 1/21

!30 ScaleIO 14G Ready Node Deployment with Dell Networking S5048F-ON 25 GbE | DRAFT Version 1.0 Dell - Internal Use - Confidential description Ready Node 3_NIC-1_vmnic5

switchport spanning-tree rstp edge-port

no shutdown

interface twentyFiveGigE 1/22

description Ready Node 3_NIC-1_vmnic4

mtu 9216 switchport

spanning-tree rstp edge-port no shutdown

interface twentyFiveGigE 1/23
description Ready Node 4_NIC-1_vmnic5
switchport
spanning-tree rstp edge-port
no shutdown

interface twentyFiveGigE 1/24
description Ready Node 4_NIC-1_vmnic4
mtu 9216
switchport
spanning-tree rstp edge-port
no shutdown
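The eight server-facing stanzas above follow a fixed pattern: odd-numbered ports (vmnic5) carry data traffic, and even-numbered ports (vmnic4) carry vMotion with jumbo frames. A short Python sketch can generate these stanzas to avoid copy-paste errors such as transposed port numbers; the port-to-vmnic mapping below is taken from this example deployment.

```python
# Generate the server-facing interface stanzas for the S5048F-ON leaf.
# Port mapping from this deployment: Ready Node N uses a consecutive
# port pair starting at 1/17 (vmnic5 first, then vmnic4 with mtu 9216).

def iface_stanza(port: int, node: int, vmnic: int, jumbo: bool) -> str:
    lines = [
        f"interface twentyFiveGigE 1/{port}",
        f"description Ready Node {node}_NIC-1_vmnic{vmnic}",
    ]
    if jumbo:
        lines.append("mtu 9216")
    lines += ["switchport", "spanning-tree rstp edge-port", "no shutdown"]
    return "\n".join(lines)

config = "\n\n".join(
    iface_stanza(port, node, vmnic, jumbo=(vmnic == 4))
    for node, first_port in enumerate(range(17, 25, 2), start=1)
    for port, vmnic in ((first_port, 5), (first_port + 1, 4))
)
print(config)
```

Running the sketch prints all eight stanzas; only the four vmnic4 ports receive mtu 9216, matching the configuration shown above.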

interface twentyFiveGigE 1/48
description uplink from S3048-Management
port-channel-protocol LACP
port-channel 101 mode active
no shutdown

interface Port-channel 101
description uplink from S3048-Management
portmode hybrid
switchport
vlt-peer-lag port-channel 101
no shutdown

Four VLAN interfaces are created next. All server-facing downstream interfaces are tagged on the appropriate VLANs, while VLANs 1640 and 1641 are tagged only on port channel 101. Only the VLAN related to Data 1 (1644) is created on this leaf switch; the VLAN for Data 2 (1646) is created on the other leaf switch.

interface Vlan 1640
description IDRAC_MGMT_IP & SW_MGMT
ip address 172.16.40.248/24
tagged Port-channel 101,126
no shutdown

interface Vlan 1641
description AMS_Node_&_SVM_MGMT
tagged Port-channel 101,126
no shutdown

interface Vlan 1642
description ESXI_VMOTION_IP
mtu 9216
tagged twentyFiveGigE 1/18,1/20,1/22,1/24
tagged Port-channel 126
no shutdown

interface Vlan 1644
description Data1
tagged twentyFiveGigE 1/17,1/19,1/21,1/23
no shutdown

A gateway of last resort is configured pointing to the virtual IP address configured for VLAN 1640 on the Z9100-ON spine pair (see section 4.2). Next, a user account with privilege level 15 is created, and the SSH server is enabled to allow remote login. Finally, an ACL is created to allow SSH login only from the management subnet (see section 3.1), and the configuration is saved.

ip route 0.0.0.0 0.0.0.0 172.16.40.254

ip ssh server enable
username admin secret 5 xxxxx privilege 15

ip access-list standard ALLOW-SSH
description Allow SSH from the management network
seq 5 permit 172.16.11.0/24
seq 20 deny any log

line vty 0 9
access-class ALLOW-SSH ipv4

Save the configuration.

end

write
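The effect of the ALLOW-SSH ACL can be sanity-checked offline before it is applied. The sketch below models only the match logic of the two ACL entries with Python's standard ipaddress module; it is an illustration, not switch behavior.

```python
import ipaddress

# Model of the ACL above: seq 5 permit 172.16.11.0/24, seq 20 deny any.
PERMITTED = ipaddress.ip_network("172.16.11.0/24")

def ssh_allowed(src: str) -> bool:
    """Return True if the source address matches the permit entry."""
    return ipaddress.ip_address(src) in PERMITTED

# A management workstation is permitted; a host on the iDRAC VLAN is not.
print(ssh_allowed("172.16.11.242"))  # True
print(ssh_allowed("172.16.40.10"))   # False
```

This makes it easy to confirm, for example, that the workstation used later for the ScaleIO GUI (172.16.11.242) retains SSH access to the switches.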

4.4. S3048-ON management switch configuration

The following section outlines the configuration commands issued to the S3048-ON management switch. The switch starts at its factory default settings per section 4.1.

Initial configuration involves setting the hostname and enabling LLDP.

enable

configure

hostname S3048-Management

protocol lldp

advertise management-tlv management-address system-description system-name

advertise interface-port-desc

The upstream interfaces to the S5048F-ON leaf switches are configured in the next set of commands. Each interface is added to a port channel.

interface TenGigabitEthernet 1/49
description uplink to S5048F_Leaf1
port-channel-protocol LACP
port-channel 101 mode active
no shutdown

interface TenGigabitEthernet 1/50
description uplink to S5048F_Leaf2
port-channel-protocol LACP
port-channel 101 mode active
no shutdown

interface Port-channel 101
description uplink to S5048F_Leaf1 & 2
portmode hybrid
switchport
no shutdown

The downstream interfaces to the servers' iDRAC and ESX/SVM ports are configured in the next set of commands. Each interface is configured as a switch port.

interface GigabitEthernet 1/2
description Ready Node 1 iDRAC
switchport
no shutdown

interface GigabitEthernet 1/4
description Ready Node 1 ESX/SVM MGMT
switchport
no shutdown

interface GigabitEthernet 1/6
description Ready Node 2 iDRAC
switchport
no shutdown

interface GigabitEthernet 1/8
description Ready Node 2 ESX/SVM MGMT
switchport
no shutdown

interface GigabitEthernet 1/10
description Ready Node 3 iDRAC
switchport
no shutdown

interface GigabitEthernet 1/12
description Ready Node 3 ESX/SVM MGMT
switchport
no shutdown

interface GigabitEthernet 1/14
description Ready Node 4 iDRAC
switchport
no shutdown

interface GigabitEthernet 1/16
description Ready Node 4 ESX/SVM MGMT
switchport
no shutdown

interface GigabitEthernet 1/38
description AMS server iDRAC
switchport
no shutdown

interface GigabitEthernet 1/40
description AMS server ESX/SVM MGMT
switchport
no shutdown

Two VLANs, 1640 and 1641, are created next. VLAN 1640 is used for iDRAC and switch management. VLAN 1641 is used for ScaleIO Ready Node discovery as well as ESXi management. The upstream port channel to the S5048F-ON leaf switches is tagged on both VLANs.

interface Vlan 1640
description IDRAC_MGMT_IP & SW_MGMT
ip address 172.16.40.250/24
untagged GigabitEthernet 1/2,1/6,1/10,1/14,1/38
tagged Port-channel 101
no shutdown

interface Vlan 1641
description AMS_Node_&_SVM_MGMT
untagged GigabitEthernet 1/4,1/8,1/12,1/16,1/40
tagged Port-channel 101
no shutdown

A gateway of last resort is configured pointing to the virtual IP address configured for VLAN 1640 on the Z9100-ON spine pair (see section 4.2). Next, a user account with privilege level 15 is created, and the SSH server is enabled to allow remote login. Finally, an ACL is created to allow SSH login only from the management subnet (see section 3.1), and the configuration is saved.

ip route 0.0.0.0 0.0.0.0 172.16.40.254

ip ssh server enable
username admin secret 5 xxxxx privilege 15

ip access-list standard ALLOW-SSH
description Allow SSH from the management network
seq 5 permit 172.16.11.0/24
seq 20 deny any log

line vty 0 9
access-class ALLOW-SSH ipv4

Save the configuration.

end
write

4.5. Verify switch configuration

The following sections show the commands and output used to verify that the switches are configured and connected correctly. Except where there are fundamental differences, output is shown only for the S5048F-Leaf1 and Z9100-Spine1 switches. The output from the remaining devices is similar.

4.5.1. show vlt brief

The Inter-chassis link (ICL) Link Status, Heart Beat Status, and VLT Peer Status must all be up. The role of one switch in the VLT pair is Primary, and its peer switch (not shown) is assigned the Secondary role. Peer routing is disabled in this deployment, and timers are left at system defaults.

S5048F-Leaf-1#show vlt brief
VLT Domain Brief
------------------
Domain ID:                      127
Role:                           Primary
Role Priority:                  32768
ICL Link Status:                Up
HeartBeat Status:               Up
VLT Peer Status:                Up
Local Unit Id:                  0
Version:                        6(7)
Local System MAC address:       34:17:eb:37:21:00
Remote System MAC address:      34:17:eb:37:1c:00
Remote system version:          6(7)
Delay-Restore timer:            90 seconds
Delay-Restore Abort Threshold:  60 seconds
Peer-Routing:                   Disabled
Peer-Routing-Timeout timer:     0 seconds
Multicast peer-routing timeout: 150 seconds

4.5.2. show vlt detail

Port channel 101 is up with active VLANs 1640-1641. Port channel 126 is up with all configured VLANs active.

S5048F-Leaf1#show vlt detail
Local LAG Id  Peer LAG Id  Local Status  Peer Status  Active VLANs
------------------------------------------------------------------
101           101          UP            UP           1, 1640-1641
126           126          UP            UP           1, 1640-1642, 1644
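When many switches need to be checked, tabular output like this can be verified programmatically instead of by eye. The sketch below is a minimal parser for the row portion of show vlt detail, assuming the whitespace-separated column layout shown above.

```python
def parse_vlt_detail(output: str) -> dict:
    """Parse rows of 'show vlt detail' into dicts keyed by local LAG id."""
    rows = {}
    for line in output.splitlines():
        parts = line.split()
        # Data rows start with a numeric local LAG id.
        if len(parts) >= 5 and parts[0].isdigit():
            rows[int(parts[0])] = {
                "peer_lag": int(parts[1]),
                "local_status": parts[2],
                "peer_status": parts[3],
                "active_vlans": " ".join(parts[4:]),
            }
    return rows

sample = """101   101   UP   UP   1, 1640-1641
126   126   UP   UP   1, 1640-1642, 1644"""
lags = parse_vlt_detail(sample)
assert all(r["local_status"] == "UP" for r in lags.values())
```

A check like this could be driven over SSH against every leaf pair to confirm both port channels are up with the expected VLANs.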

4.5.3. show vlt mismatch

The show vlt mismatch command lists VLANs configured on only one switch in the VLT domain. The output shows the two VLANs associated with Data 1 and Data 2, as expected.

S5048F-Leaf1#show vlt mismatch

Domain
------
Parameters    Local    Peer
---------------------------

Vlan-config
-----------
Vlan-ID    Local Mode    Peer Mode
----------------------------------
1644       L2            --
1646       --            L2

4.5.4. show vrrp brief

The output from the show vrrp brief command should be similar to that shown below. The priority (Pri column) of the master router in the pair is 254, and the backup router (not shown) is assigned priority 1. Note that some of the descriptions have been truncated to fit the document.

Z9100-Spine1#show vrrp brief
Interface  Group     Pri  Pre  State   Master addr    Virtual addr(s)  Description
----------------------------------------------------------------------------------
Vl 500     IPv4 100  254  Y    Master  100.64.20.2    100.64.20.1      DevOps-VLAN-VIP
Vl 1640    IPv4 40   254  Y    Master  172.16.40.251  172.16.40.253    IDRAC_MGMT_IP…
Vl 1641    IPv4 41   254  Y    Master  172.16.41.251  172.16.41.253    ESXI_MGMT
Vl 1643    IPv4 43   254  Y    Master  172.16.43.251  172.16.43.253    SVM_MGMT_IP

4.5.5. show lldp neighbors

The show lldp neighbors command can be used to verify that each switch port is connected to the expected device.

S5048F-Leaf1#show lldp neighbors

Loc PortID  Rem Host Name  Rem Port Id              Rem Chassis Id
------------------------------------------------------------------
Tf 1/17     -              ec:0d:9a:9e:81:0f        ec:0d:9a:9e:81:11
Tf 1/18     -              ec:0d:9a:9e:81:0e        ec:0d:9a:9e:81:10
Tf 1/19     -              ec:0d:9a:9e:81:07        ec:0d:9a:9e:81:09
Tf 1/20     -              ec:0d:9a:9e:81:06        ec:0d:9a:9e:81:08
Tf 1/21     -              ec:0d:9a:9e:80:07        ec:0d:9a:9e:80:09
Tf 1/22     -              ec:0d:9a:9e:80:06        ec:0d:9a:9e:80:08
Tf 1/23     -              ec:0d:9a:9e:82:77        ec:0d:9a:9e:82:79
Tf 1/24     -              ec:0d:9a:9e:82:76
Tf 1/48     S3048-Mana…    TenGigabitEthernet 1/49  64:00:6a:d8:30:a0
Hu 1/49     Z9100-Spine1   hundredGigE 1/1          4c:76:25:e5:94:40
Hu 1/50     Z9100-Spine2   hundredGigE 1/1          4c:76:25:e8:e5:40
Ma 1/1      S5048F-Leaf2   ManagementEthernet 1/1   34:17:eb:37:1c:00

4.5.6. show spanning-tree rstp brief

The command show spanning-tree rstp brief shows that the Z9100-Spine1 switch is configured as the root bridge. Both port channels, 126 and 127, are in a forwarding state.

Z9100-Spine1#show spanning-tree rstp brief
Executing IEEE compatible Spanning Tree Protocol
 Root ID    Priority 0, Address 3417.eb37.2100
 Root Bridge hello time 2, max age 20, forward delay 15
 Bridge ID  Priority 0, Address 4c76.25e5.9440
 Configured hello time 2, max age 20, forward delay 15

Interface                                            Designated
Name    PortID   Prio  Cost  Sts        Cost  Bridge ID             PortID
---------------------------------------------------------------------------
Po 126  128.127  128   140   FWD(vlt)   320   4096  3417.eb37.1c00  128.127
Po 127  128.128  128   180   FWD(vltI)  320   0     4c76.25e5.9440  128.128

Interface
Name    Role  PortID   Prio  Cost  Sts  Cost  Link-type  Edge
--------------------------------------------------------------
Po 126  Root  128.127  128   140   FWD  320   (vlt) P2P  No
Po 127  Desg  128.128  128   180   FWD  320   (vltI)P2P  No

5. Deploying ScaleIO with AMS

Deploying ScaleIO Ready Nodes is performed using the deployment wizard, which is accessed from the ScaleIO GUI. The GUI is the client interface to the AMS. The wizard discovers nodes in the ScaleIO Ready Node system and deploys all necessary software components to the selected nodes.

Deployment of the ScaleIO Ready Nodes is covered in the following section and includes these four tasks:

1. Deploy the AMS server
2. Install the ScaleIO GUI
3. Perform initial AMS setup
4. Discover and deploy ScaleIO Ready Nodes

5.1. Deploy the AMS server

The AMS server must be installed on a server with access to the ScaleIO Ready Nodes, but not on a ScaleIO Ready Node itself. In this deployment a dedicated ScaleIO management server is used as the AMS server. The server is connected to the S3048-ON management switch to meet the requirement of being on the same local network (see section 3).

The AMS server OS is part of the ScaleIO 2.5 AMS downloads. The file name used in this deployment is AMS_Kylin.x86_64.2.5.0..install.iso. This bootable ISO automatically installs Kylin Linux with the appropriate AMS package pre-installed.

Note: For detailed instructions on deploying the AMS server, see https://community.emc.com/docs/DOC-57945

5.1.1. Configure an IP address

Once the OS is deployed, an IP address must be assigned. To assign an IP address to the AMS server, log in to the server using the default credentials: the username root and the password Scaleio123. Then edit two files:

1. /etc/sysconfig/network/ifcfg-eth0
2. /etc/sysconfig/network/routes

The contents of the two files used in this deployment are shown below and can be used as a template.

linux-pcex:~ # cat /etc/sysconfig/network/ifcfg-eth0
BOOTPROTO='static'
BROADCAST=''
ETHTOOL_OPTIONS=''
IPADDR='172.16.41.250'
NETMASK='255.255.255.0'
STARTMODE='auto'
USERCONTROL='no'
MTU='1500'

linux-pcex:~ # cat /etc/sysconfig/network/routes

default 172.16.41.253

Once the IP address is configured, use ifdown eth0 && ifup eth0 to apply it. To validate, ping from the AMS server (172.16.41.250) to the workstation where the ScaleIO GUI will be installed next (172.16.11.242).
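A quick consistency check on the values in these files — in particular, that the default gateway from the routes file lies inside the interface's subnet — can be done with Python's standard ipaddress module. This is an illustrative sketch using the addresses from this deployment.

```python
import ipaddress

# Values from ifcfg-eth0 and routes in this deployment.
ipaddr, netmask = "172.16.41.250", "255.255.255.0"
gateway = "172.16.41.253"

iface = ipaddress.ip_interface(f"{ipaddr}/{netmask}")

# The default gateway must be reachable on the directly connected subnet.
assert ipaddress.ip_address(gateway) in iface.network
print(iface.network)  # 172.16.41.0/24
```

The same check catches a common mistake where the gateway is entered from the wrong VLAN's subnet.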

linux-pcex:~ # ping -c 5 172.16.11.242

PING 172.16.11.242 (172.16.11.242) 56(84) bytes of data.

64 bytes from 172.16.11.242: icmp_seq=1 ttl=125 time=0.159 ms
64 bytes from 172.16.11.242: icmp_seq=2 ttl=125 time=0.244 ms
64 bytes from 172.16.11.242: icmp_seq=3 ttl=125 time=0.199 ms
64 bytes from 172.16.11.242: icmp_seq=4 ttl=125 time=0.180 ms
64 bytes from 172.16.11.242: icmp_seq=5 ttl=125 time=0.235 ms

5.2. Install and log into the ScaleIO GUI

The ScaleIO installation file can be downloaded from the product ISO or the EMC Support site. The file name is EMC-ScaleIO-gui-2.5-.msi. Log in to the management system and install the MSI file; there are no configuration options to set during installation. Once complete, log in to the AMS server IP (see section 5.1.1) with the default credentials: the username admin and the password Scaleio123. The password must be changed on first use. Once logged in, the AMS Initial Setup appears.

5.3. Perform initial AMS setup

The initial AMS setup collects information about the environment before any ScaleIO Ready Nodes are discovered. This one-time setup defines the IP ranges (see section 3.3) as well as DNS, NTP, and vCenter information. For detailed instructions, refer to the ScaleIO Ready Node 2.5 AMS Deployment Guide. The tables below contain the values used in this deployment; each table represents its respective tab in the setup process.

8. System

Parameter                 Value
Name                      AMS-ScaleIO
Type                      Virtual host (ESXi)
Use Auto Discovery proxy  n/a

9. Security

Parameter                      Value
Host Operating System
ScaleIO Virtual Machine (SVM)
BMC (iDRAC)
AMS Recovery Key

10. Management Networks

Parameter                      IP Address     IP Address     Subnet Mask    Default        VLAN
                               Range Start    Range End                     Gateway        ID
Host                           172.16.41.1    172.16.41.249  255.255.255.0  172.16.41.253  n/a
ScaleIO Virtual Machine (SVM)  172.16.43.1    172.16.43.249  255.255.255.0  172.16.43.253  n/a
BMC (iDRAC)                    172.16.40.1    172.16.40.249  255.255.255.0  172.16.40.253  n/a

11. Data Networks

Parameter       IP Address     IP Address     Subnet Mask    VLAN
                Range Start    Range End                     ID
Data Network A                                               1644
  Virtual IP    172.16.45.250
  Host          172.16.44.1    172.16.44.249  255.255.254.0
  SVM           172.16.45.1    172.16.45.249  255.255.254.0
Data Network B                                               1646
  Virtual IP    172.16.47.250
  Host          172.16.46.1    172.16.46.249  255.255.254.0
  SVM           172.16.47.1    172.16.47.249  255.255.254.0
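Each of these ranges provides 249 usable addresses, and the host and SVM ranges for a data network share one 255.255.254.0 (/23) subnet together with its virtual IP. This can be verified with a short sketch using Python's stdlib ipaddress module (the addresses below are taken from this deployment's tables).

```python
import ipaddress

def pool_size(start: str, end: str) -> int:
    """Number of addresses in an inclusive IP range."""
    return int(ipaddress.ip_address(end)) - int(ipaddress.ip_address(start)) + 1

# Every AMS pool in this deployment spans .1 through .249 of its subnet.
assert pool_size("172.16.41.1", "172.16.41.249") == 249

# Data Network A: host range, SVM range, and virtual IP all fit in one /23.
net_a = ipaddress.ip_network("172.16.44.0/23")
for addr in ("172.16.44.1", "172.16.44.249",
             "172.16.45.1", "172.16.45.249", "172.16.45.250"):
    assert ipaddress.ip_address(addr) in net_a
print("pools validated")
```

The same arithmetic confirms that the four Ready Nodes plus the AMS server fit comfortably within each pool.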

12. DNS and NTP

Parameter  IP Address
DNS 1      172.16.11.4
DNS 2      172.16.11.5
NTP 1      172.16.11.4
NTP 2      172.16.11.5

Note: In the deployment environment, DNS is provided by a Windows Server 2016 server, while NTP services are managed by Meinberg.

13. Virtualization

Parameter           Value
vCenter IP Address  172.16.11.62
Username            [email protected]
Password
Data Center         ScaleIO

5.4. Discover and deploy ScaleIO Ready Nodes

Once the initial setup is complete, node discovery can begin. The deployment wizard discovers nodes in the system and deploys all necessary software components on the selected nodes. Deployment can be done using one of two methods:

• Simple - The AMS deploys all available nodes in the system using IP addresses in the IP pool. All steps are completed with their default values.
• Advanced - The AMS presents a system using IP addresses in the IP pool. The option to edit the node settings before deployment is available, as well as the following:
  - Define MDM and Tie-Breaker roles for the nodes.
  - Configure the disk role (storage or caching) and the kind of caching.
  - Create and assign nodes to Protection Domains (PD) and Fault Sets.
  - Create and assign devices to Storage Pools (SP).

In this deployment the simple mode is used. During the first step, node discovery, the AMS sends out a broadcast message. This message is received by the pre-deployed virtual machine running on each ScaleIO Ready Node. Once a Ready Node has been discovered and configured by AMS, it cannot be discovered by another AMS server; a factory reset of the node is required to perform discovery again. For steps on resetting a node to factory defaults, see the ScaleIO Ready Node 2.5 AMS Deployment Guide.

Once discovery is completed, the wizard automatically configures all software components, including the MDM, SDS, and SDC roles, on the selected hosts. All local storage devices are brought together in a single storage pool and placed in a single protection domain.

The final screen of the deployment wizard enables you to perform the most common post-deployment tasks. Some tasks are mandatory, while others are recommended. See the post-deployment checklist in the ScaleIO Ready Node 2.5 AMS Deployment Guide for further information.

To view the details of a managed ScaleIO Ready Node from the ScaleIO GUI, navigate to the hardware screen. Figure 14 shows the node properties of the MDM (Master).

! 14. ScaleIO Post-deployment configuration

From VMware vCenter, Figure 15 shows the four nodes added to the data center object. The hosts are not added to a cluster at this point; cluster creation is covered as part of the frontend compute environment wizard.

! 15. Initial VMware vCenter configuration

6. ESXi compute environment

The GUI and the REST API can be used to administer and orchestrate compute cluster creation and allocation of nodes. These tools can be leveraged to create compute clusters on ESXi servers, configure the application network and ports, map storage, and then deploy virtual machine workloads.

This feature simplifies the configuration of the application network and ports, and provides an interface for monitoring, editing, and removing compute nodes and clusters that have been configured using the GUI or REST API.

Volume creation and ESXi mapping processes include the ability to format the mapped volume with a VMFS datastore on ESXi with the VMFS5 format.

Cluster and network configuration can be performed in simple or advanced modes. For some settings, auto- select options allow the system to recommend the most suitable configurations. In addition, default or customized profiles speed up the configuration activities.

In this section, a VMware vSphere compute cluster is deployed using the ScaleIO GUI. This cluster provides the space to deploy virtual machines that in turn use the ScaleIO virtual SAN for storage. The environment uses a single port group for the virtual machines, called "DevOps Team", associated with VLAN ID 500. Figure 16 shows the virtual environment after creation but before any virtual machine workloads are deployed.

! 16. DevOps team environment

Before creating an ESXi compute cluster, an IP pool for VMkernel interfaces should be created.

6.1. Create an IP pool for vMotion VMkernel

To define an IP pool to be used for VMkernel interfaces:

1. From the ScaleIO GUI, click the System Settings icon and select System Settings.
2. In the navigation pane, click VMkernel IP Pools.
3. Click the Add IP Pool icon and provide the following details:
   a. Name: ESXI_VMOTION_IP
   b. IP Address Ranges: 172.16.42.1 – 172.16.42.249
   c. Subnet Mask: 255.255.255.0
4. Click Save and then Close.

6.2. Create a compute environment

To deploy the compute cluster:

1. From the ScaleIO GUI, click Frontend > Compute > Environment.
2. In the Environment window, add a new profile.
3. Click the save icon and type atx01-w04-ScaleIO.
4. In the environment name field, type atx01-w04-ScaleIO.
5. Under Compute, click the edit icon and select all four hosts.
6. Expand Advanced and set the following:
   a. Clusters to 1.
   b. In the Cluster Prefix field, type the name for the vSphere cluster (atx01-w04-ScaleIO).
   c. Check HA and DRS.
   d. Check Overcommit and leave the default of 100%.
   e. Click the save icon and, in the Save Profile window:
      i. Choose Create a New Profile.
      ii. In the profile name field, type atx01-w04-ScaleIO-cluster_settings.
      iii. Click OK.
7. Click Select Storage Pool and make sure the appropriate storage pool is selected (SSD_1).
8. Expand Advanced and set the following:
   a. Set size to 1TB.
   b. In the Datastore Prefix field, type atx01w04scaleio.
   c. Check Overcommit and leave the default of 100%.
9. In the Network section, set the # of Switches to 1.
10. Expand Advanced and set the following:
    a. Click the save icon and, in the Save Profile window:
       i. Choose Create a New Profile.
       ii. In the profile name field, type atx01-w04-ScaleIO-single_switch.
       iii. Click OK.
    b. Under Virtual switches > Name, type VSS.
    c. Check Migration.
    d. In the VM Networks dropdown, set the number to 1.
    e. In the Prefix field, type DevOps.
    f. Click the add icon under NIC and select vmnic4 and vmnic6 for each ScaleIO Ready Node.
11. Expand VM Network and Switch Settings and set the following:

    a. For VM Net Name DevOps_01, set the MTU to 9000 and VLAN to 500.
    b. Set the Switch Type to vSwitch.
    c. Set Link Aggregation to Active-Standby.
    d. Under VMkernel, in the IP Address Pool drop-down, select ESXI_VMOTION_IP.
    e. Set the MTU to 9000 and VLAN to 1642.
12. Click Create Environment.

AMS shows a log of the tasks being performed to create the compute environment. Once successful, the job shows as complete. From the vSphere Web Client, navigate to Hosts and Clusters, expand the created cluster atx01-w04-ScaleIO, and select a ScaleIO Ready Node. In the center pane, select Configure > Networking > Virtual switches. Figure 17 shows the AMS-created environment.

! 17. AMS created environment

At this point, virtual machines can be created through AMS using the workload wizard. That effort is beyond the scope of this document.

A. Troubleshooting SDS connectivity

SDS connectivity problems will affect ScaleIO performance. ScaleIO has a built-in tool to verify that all SDS nodes in a given protection domain have connectivity. From the ScaleIO CLI (SCLI), run the ScaleIO internal network test to verify the network speed between all the SDS nodes in the protection domain.

The command below will test all SDS nodes with a payload of 10 GB, using 8 parallel threads.

JD6X0M2-svm:~ # scli --start_all_sds_network_test --protection_domain_name Default_PD --parallel_messages 8 --network_test_size 10

Started test on SDS_172.16.43.2 172.16.45.2 at 17:11:37...... Test finished.
Started test on SDS_172.16.43.3 172.16.45.3 at 17:11:47...... Test finished.
Started test on SDS_172.16.43.4 172.16.45.4 at 17:11:57...... Test finished.
Started test on SDS_172.16.43.1 172.16.45.1 at 17:12:07...... Test finished.
Protection Domain Default_PD contains 4 SDSs

JD6X0M2-svm:~ # scli --query_all_sds_network_test_results --protection_domain_name Default_PD

Protection Domain Default_PD

Connection: SDS_172.16.43.2 172.16.45.2 <---> SDS_172.16.43.1 172.16.45.1
 --> 3.0 GB (3065 MB) per-second  <-- 3.2 GB (3261 MB) per-second
Connection: SDS_172.16.43.3 172.16.45.3 <---> SDS_172.16.43.1 172.16.45.1
 --> 3.2 GB (3230 MB) per-second  <-- 3.0 GB (3103 MB) per-second
Connection: SDS_172.16.43.2 172.16.45.2 <---> SDS_172.16.43.3 172.16.45.3
 --> 3.1 GB (3210 MB) per-second  <-- 3.1 GB (3150 MB) per-second
Connection: SDS_172.16.43.3 172.16.45.3 <---> SDS_172.16.43.4 172.16.45.4
 --> 3.3 GB (3368 MB) per-second  <-- 3.1 GB (3210 MB) per-second
Connection: SDS_172.16.43.2 172.16.45.2 <---> SDS_172.16.43.4 172.16.45.4
 --> 3.2 GB (3282 MB) per-second  <-- 3.2 GB (3230 MB) per-second
Connection: SDS_172.16.43.4 172.16.45.4 <---> SDS_172.16.43.1 172.16.45.1
 --> 3.2 GB (3313 MB) per-second  <-- 3.2 GB (3240 MB) per-second
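On larger systems, the per-direction rates in this output can be scraped to flag underperforming links automatically. The sketch below assumes only the "(NNNN MB) per-second" pattern shown above; the sample text is a two-line excerpt from the results.

```python
import re

sample = """--> 3.0 GB (3065 MB) per-second <-- 3.2 GB (3261 MB) per-second
--> 3.2 GB (3230 MB) per-second <-- 3.0 GB (3103 MB) per-second"""

# Extract every per-direction rate in MB/s from the test results.
rates = [int(m) for m in re.findall(r"\((\d+) MB\) per-second", sample)]
print(min(rates), max(rates))  # 3065 3261
```

A simple threshold (for example, flagging any rate below some fraction of the median) would then highlight a cabling or negotiation problem on a specific SDS pair.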

B. Validated hardware and components

The following tables list the hardware and components used to configure and validate the example configurations in this guide.

B.1. Dell EMC Networking switches

Qty  Item                          OS/Firmware version
1    S3048-ON - Management switch  OS: DNOS 9.13.0.0, System CPLD: 9
2    S5048F-ON - Leaf switch       OS: DNOS 9.12(1.0), CPLD: 1.0
2    Z9100-ON - Spine switch       OS: DNOS 9.13(0.0), System CPLD: 6

B.2. Dell EMC ScaleIO Ready Nodes

Qty per server  Item                                        Firmware version
2               Intel Xeon Gold 6126 CPU 2.60GHz, 12 cores  N/A
12              16GB DDR4 DIMM (192 GB RAM total)           N/A
5               1TB MZ7KM960HMJP0D3 SAS SSD                 GD53
1               HBA330                                      13.17.03.05
1               Mellanox ConnectX-4 LX 25 GbE SFP Adapter   18.3.6
2               Mellanox ConnectX-4 LX 25 GbE DP            18.3.6
-               BIOS                                        1.2.11
-               iDRAC with Lifecycle Controller             3.15.15.15

C. Validated software

The Software table lists the software components used to validate the example configurations in this guide.

Item                                       Version
Dell EMC ScaleIO                           2.5 build 254
VMware ESXi ScaleIO Ready Node custom ISO  6.5 U1 build 5969303
VMware vCenter Server Appliance            6.5 build 5973321

D. Product manuals and technical guides

D.1. Dell EMC

Dell EMC TechCenter - An online technical community where IT professionals have access to numerous resources for Dell EMC software, hardware and services.

Dell EMC TechCenter Networking Guides

Manuals and documentation for Dell Networking S5048F-ON

Manuals and documentation for Dell Networking S3048-ON

ScaleIO Document Library

ScaleIO Ready Node: AMS (Dell) – This link requires a username and password to access. Please contact your sales representative for login credentials.

D.2. VMware

VMware vSphere Documentation

vSphere Installation and Setup – This document includes ESXi 6.5 and vCenter Server 6.5

VMware Compatibility Guide

E. Support and feedback

Contacting Technical Support

Support Contact Information
Web: http://support.dell.com/

Telephone: USA: 1-800-945-3355

Feedback for this document

We encourage readers to provide feedback on the quality and usefulness of this publication by sending an email to [email protected].
