Deploying MXL and PowerEdge I/O Aggregator in a Cisco Nexus Environment

Dell Networking Solutions Engineering November 2014

A Dell EMC Deployment and Configuration Guide

Revisions

Date – Version – Description – Authors

November 2014 – Version 1.3 – Fixed duplicate images – Ed Blazek, Kevin Locklear, Curtis Bunch, Mike Matthews

October 2014 – Version 1.2 – Added vPC/VLT switch configuration; updated existing configurations – Ed Blazek, Kevin Locklear, Curtis Bunch, Mike Matthews

November 2013 – Version 1.0 – Release – Ed Blazek, Kevin Locklear

Copyright © 2014-2016 Dell Inc. or its subsidiaries. All Rights Reserved. Except as stated below, no part of this document may be reproduced, distributed or transmitted in any form or by any means, without express permission of Dell.

You may distribute this document within your company or organization only, without alteration of its contents.

THIS DOCUMENT IS PROVIDED “AS-IS”, AND WITHOUT ANY WARRANTY, EXPRESS OR IMPLIED. IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE SPECIFICALLY DISCLAIMED. PRODUCT WARRANTIES APPLICABLE TO THE DELL PRODUCTS DESCRIBED IN THIS DOCUMENT MAY BE FOUND AT: http://www.dell.com/learn/us/en/vn/terms-of-sale-commercial-and-public-sector-warranties

Performance of network reference architectures discussed in this document may vary with differing deployment conditions, network loads, and the like. Third party products may be included in reference architectures for the convenience of the reader. Inclusion of such third party products does not necessarily constitute Dell’s recommendation of those products. Please consult your Dell representative for additional information.

Trademarks used in this text: Dell™, the Dell logo, Dell Boomi™, PowerEdge™, PowerVault™, PowerConnect™, OpenManage™, EqualLogic™, Compellent™, KACE™, FlexAddress™ and Vostro™ are trademarks of Dell Inc. EMC VNX® and EMC Unisphere® are registered trademarks of Dell. Other Dell trademarks may be used in this document. Cisco Nexus®, Cisco MDS®, Cisco NX-OS® and Cisco Catalyst® are registered trademarks of Cisco Systems Inc. Intel®, Pentium®, Xeon®, Core® and Celeron® are registered trademarks of Intel Corporation in the U.S. and other countries. AMD® is a registered trademark and AMD Opteron™, AMD Phenom™ and AMD Sempron™ are trademarks of Advanced Micro Devices, Inc. Microsoft®, Windows®, Windows Server®, Internet Explorer®, MS-DOS®, Windows Vista® and Active Directory® are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries. Red Hat® and Red Hat® Enterprise Linux® are registered trademarks of Red Hat, Inc. in the United States and/or other countries. Novell® and SUSE® are registered trademarks of Novell Inc. in the United States and other countries. Oracle® is a registered trademark of Oracle Corporation and/or its affiliates. VMware®, Virtual SMP®, vMotion®, vCenter® and vSphere® are registered trademarks or trademarks of VMware, Inc. in the United States or other countries. IBM® is a registered trademark of International Business Machines Corporation. Broadcom® and NetXtreme® are registered trademarks of Broadcom Corporation. QLogic® is a registered trademark of QLogic Corporation. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and/or names or their products and are the property of their respective owners. Dell disclaims proprietary interest in the marks and names of others.


Table of Contents

Revisions
1 Introduction
1.1 Configuration Overviews
1.1.1 Overview of Configuration One
1.1.2 Overview of Configuration Two
1.1.3 Overview of Configuration Three
2 Technology used in this Deployment Guide
2.1 Fibre Channel over Ethernet
2.2 Data Center Bridging
2.3 N_Port ID Virtualization and N_Port Virtualization
2.4 Cisco vPC and Dell Networking FTOS Multichassis EtherChannel Technology
2.5 Multi-Path I/O
3 Hardware Used in this Deployment Guide
3.1 Dell PowerEdge M1000e Blade Enclosure Overview
3.2 Server – PowerEdge M620 Blade Server
3.3 M1000e I/O Modules
3.3.1 Dell Networking MXL Overview
3.3.2 Dell PowerEdge M I/O Aggregator Overview
3.3.3 FlexIO Expansion Modules
3.4 Cisco Nexus 5548UP Overview
3.5 EMC VNX 5300 Overview
4 Preparation
4.1.1 WWN/MAC Addresses
4.1.2 Virtual SAN (VSAN) and Virtual Fibre Channel (VFC)
4.1.3 Configuration Table
4.1.4 Component Information
5 Configuration One – Dell MXL or IOAs in Nexus Fabric Mode
5.1 Cisco Nexus 5548UP Setup
5.2 Dell Networking MXL Setup
6 Configuration Two – Dell MXL or IOA in Nexus NPV Mode with Cisco MDS 9148
6.1 Cisco Nexus 5548UP Setup
6.2 Dell Networking MXL Setup
6.3 Cisco MDS 9148 Setup
7 Configuration Three – Nexus Fabric Mode with Brand Varied MC-LAG Architecture
7.1 Cisco Nexus 5548UP Setup
7.2 Dell Networking IOA Setup
8 Configuration and Troubleshooting
8.1 Dell PowerEdge MXL or M I/O Aggregator
8.2 Cisco Nexus 5548UP and MDS 9148 Validation
A Basic Terminology
B References
C Attachments
Support and Feedback


1 Introduction

This deployment guide covers configuring two blade server chassis I/O Modules (IOMs) in a Fibre Channel over Ethernet (FCoE) single-hop topology with the blade IOMs in FIP Snooping Bridge (FSB) mode. FSB capabilities allow the bridge (the switch, in this case) to snoop the packets coming across its ports, process the FCoE packets appropriately and send them to the intended Fibre Channel Forwarder (FCF). This is a simplified explanation of the process, as several other things occur, such as installing Access Control Lists (ACLs) that permit FCoE traffic that has logged in (FLOGI'ed). While some of these more advanced topics are touched on in this document, for the most part the document is purposefully kept at a high level.

This document focuses on a few of the many possible network configurations containing FCoE topologies. Similar products are used in the configurations to reduce the amount of overlapping content while still covering numerous customer environments.

While not covered in this document, additional configuration is necessary before a switch is deployed in a production environment (e.g., security, Inter-Switch Links (ISLs), Virtual Port Channels (vPCs), etc.). In addition, due to the varied nature of storage offerings, configuring the storage is not covered in any detail in this document.

1.1 Configuration Overviews

This section covers the three configurations that are built in this deployment guide.

Note: In a typical production environment, most configurations will include several additional connections between servers, networking and storage devices.


1.1.1 Overview of Configuration One
Dell MXL or IOA in Nexus Fabric Mode

Configuration One consists of a pair of two-port LAG connections configured between two Cisco Nexus 5500s and two Dell Networking MXLs or PowerEdge M I/O Aggregators (IOAs), which act as FSBs. As illustrated in Figure 1, the I/O modules are in slots A1 and A2 of the M1000e chassis. N_Port ID Virtualization (NPIV) is enabled on the Nexus switches and FC-capable storage is attached directly to the Nexus switches.

Configuration One - Dell MXLs or IOAs in Nexus Fabric Mode (figure shows two Cisco Nexus 7000 Series core switches; SAN A and SAN B on two Cisco Nexus 5500 Series switches; Dell Networking MXL or Dell PowerEdge I/O Aggregator modules in a Dell PowerEdge M1000e Blade Server Chassis; link types: FCoE, Ethernet, FC and vPC)


1.1.2 Overview of Configuration Two
Dell MXL or IOA in Nexus NPV Mode with Cisco MDS 9148

In Configuration Two (Figure 2), a two-port connection is configured between a Cisco 5548UP and either an MXL or an IOA. This is similar to the previous example, but in this configuration the Cisco 5548UP is running in NPV mode with Inter-Switch Links (ISLs) to Cisco MDS devices.

Configuration Two - Dell MXL or IOA in Nexus NPV Mode with Cisco MDS 9148 (figure shows two Cisco Nexus 7000 Series core switches; two Cisco Nexus 5500 switches; SAN A and SAN B behind two Cisco MDS 9000 switches; Dell Networking MXL or Dell PowerEdge I/O Aggregator modules in a Dell PowerEdge M1000e Blade Server Chassis; link types: FCoE, Ethernet, FC and vPC)


1.1.3 Overview of Configuration Three
Dell MXL or IOA in Nexus Fabric Mode with Brand Varied MC-LAG Architecture

In Configuration Three (Figure 3), a two-port connection is configured between a Cisco 5548UP and either an MXL or an IOA. This is similar to Configuration One except that the I/O modules are placed in PMUX mode and a VLTi peer link is built connecting the two I/O modules together. For further details on the benefits of this approach, see the Technology used in this Deployment Guide section.

Configuration Three - Dell MXL or IOA in a Nexus Fabric Mode with Brand Varied MC-LAG Architecture (figure shows SAN A and SAN B on two Cisco Nexus 5500 Series switches; Dell Networking MXL or Dell PowerEdge I/O Aggregator modules in a Dell PowerEdge M1000e Blade Server Chassis; link types: FCoE, Ethernet, FC and VLT or vPC)


2 Technology used in this Deployment Guide

2.1 Fibre Channel over Ethernet

Fibre Channel over Ethernet (FCoE) is a networking protocol that encapsulates Fibre Channel frames over Ethernet networks. This allows Fibre Channel to use 10, 40 or even 100 Gigabit Ethernet networks while preserving the Fibre Channel protocol. The FCoE protocol specification replaces the FC0 and FC1 layers of the Fibre Channel stack with Ethernet. By retaining the native Fibre Channel constructs, FCoE can integrate with existing Fibre Channel fabrics and management solutions.

Note: FCoE (which is referenced as FC-BB_E in the FC-BB-5 specifications) achieved standard status in June 2009, and is documented in the T11 publication. You can access this publication at http://www.t11.org/ftp/t11/pub/fc/bb-5/09-056v5.pdf.

FCoE operates directly above Ethernet in the network protocol stack, in contrast to iSCSI, which runs on top of TCP and IP. As a consequence, FCoE cannot be routed across IP networks. In addition, traditional Ethernet has no priority-based flow control, unlike Fibre Channel. As a result, FCoE requires modifications to the Ethernet standard to support priority-based flow control mechanisms that reduce frame loss from congestion; the IEEE added these mechanisms via Data Center Bridging (DCB). The three primary extensions are:

 Encapsulation of native Fibre Channel frames into Ethernet frames
 Extensions to the Ethernet protocol itself to enable lossless Ethernet links
 Mapping between Fibre Channel N_Port IDs (aka FCIDs) and Ethernet MAC addresses

The primary use of the FCoE protocol in the data center is for Storage Area Networks (SANs). FCoE enables cable reduction through converged networking. To achieve these goals, three hardware components must be in place:

 Converged Network Adapters (CNAs)
 Lossless Ethernet links via DCB extensions
 An FCoE capable switch, typically referred to as a Fibre Channel Forwarder (FCF)

A FIP Snooping Bridge (FSB) is a fourth, optional component that can be introduced while still allowing full FCoE functionality. In traditional Fibre Channel networks, FC switches are considered trusted, and other FC devices must log directly into the switch before they can communicate with the rest of the fabric. This login process is accomplished through the FCoE Initialization Protocol (FIP), which operates at Layer 2 for endpoint discovery and fabric association. With FCoE, an Ethernet bridge typically exists between the End Node (ENode) and the FCF, which prevents a FIP session from properly establishing. To allow ENodes to log in to the FCF, FSB is enabled on the Ethernet bridge. By snooping the FIP packets during the discovery and login process, the intermediate bridge can enforce data integrity using ACLs that permit valid FCoE traffic between the ENode and FCF.

Note: In this document, both the Dell Networking MXL and the Dell PowerEdge IOA can behave as an FSB if the appropriate features are enabled.
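Once an FSB is in service, the snooped logins can be checked from the bridge itself. A minimal, illustrative check on an MXL or IOA running FTOS (command set assumed from FTOS 9.x; output varies by environment):

show fip-snooping fcf
show fip-snooping enode
show fip-snooping sessions

The FCF table should list the upstream Nexus FCF MAC address, the ENode table the server CNAs, and the session table one entry per successful FLOGI or FDISC.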


2.2 Data Center Bridging

Data Center Bridging (DCB) is a collection of mechanisms that have been added to the existing Ethernet protocol. These mechanisms allow Ethernet to become lossless, which is a prerequisite for FCoE. The four additions made to the existing Ethernet protocol are:

 Priority-based Flow Control (PFC) (IEEE 802.1Qbb)
 Enhanced Transmission Selection (ETS) (IEEE P802.1Qaz)
 Congestion Notification (CN) (IEEE P802.1Qau)
 Data Center Bridging Capability Exchange Protocol (DCBX)
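To confirm that these mechanisms were actually negotiated on a converged link, the DCBX state can be inspected per interface. A minimal example on an FTOS-based MXL or IOA (the port number is illustrative):

show interfaces tengigabitethernet 0/51 dcbx detail
show interfaces tengigabitethernet 0/51 pfc detail

The output should show an operational DCBX peer and the PFC priority agreed for FCoE (typically priority 3).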

2.3 N_Port ID Virtualization and N_Port Virtualization

N_Port ID Virtualization (NPIV) allows an N_Port to have multiple World Wide Port Names (WWPNs) associated with it. In traditional FC fabrics, an N_Port is associated with a single WWPN. After the initial FLOGI process, an NPIV-enabled physical N_Port can register additional WWPNs. NPIV is required when dealing with numerous servers that sit behind a single switch, as is the case in an M1000e blade enclosure.

The purpose of N_Port Virtualization (NPV) is different from NPIV. NPV provides simplified management and increased interoperability in large SAN deployments. Each edge FC switch normally requires its own domain ID, and a SAN or VSAN is limited to 239 domain IDs. This number can be kept manageable by having some of the edge devices act as N_Port proxies, that is, by running them in NPV mode.

NPV introduces a new Fibre Channel port type, the NP_Port. This port connects to an F_Port and acts as a proxy for the N_Ports behind the NPV-enabled switch. The NPV-enabled switch then registers WWPNs upstream via NPIV.
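The distinction shows up directly in the CLI. On a Nexus in default fabric mode, feature npiv is enabled and end devices appear in the FLOGI table; on a switch converted to NPV mode, logins are proxied and tracked with the NPV commands instead. An illustrative check (NX-OS; the npv commands apply only after feature npv is enabled):

show feature | include npiv
show flogi database
show npv status
show npv flogi-table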

2.4 Cisco vPC and Dell Networking FTOS Multichassis EtherChannel Technology

Cisco vPC and Dell Networking FTOS VLT are separate but similar Layer 2 solutions. vPC and VLT are virtualization technologies that present a pair of identical switches as a single logical Layer 2 node to access-layer switches and servers. In other words, this technology allows links that are physically connected to two different switches to appear as a single port channel to a third device. This device can be a switch, server or any other networking device that supports link aggregation.

The primary benefit of deploying these technologies is the elimination of Spanning Tree Protocol (STP) blocked ports. By eliminating STP blocked ports, all available uplink bandwidth can be utilized. These benefits lead to a simplified network design while growing the Layer 2 network in a controlled manner.
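A quick way to confirm that the MC-LAG is healthy on each side of a brand-varied deployment is the respective status command. Illustrative examples (NX-OS on the Cisco side, FTOS on the Dell side):

show vpc brief (Cisco NX-OS)
show vlt brief (Dell FTOS)

Both should report the peer link up and the peer keepalive/heartbeat alive before any downstream port channels are trusted to forward traffic.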

2.5 Multi-Path I/O

There are generally two types of multi-path access methods for communicating from a host to an external device. For general networking communications, the preferred method of redundant connections is teaming multiple NICs into a single, virtual network connection entity. For storage, the preferred method is the use of Multi-Path I/O (MPIO).


3 Hardware Used in this Deployment Guide

The following section highlights the hardware used in this document.

3.1 Dell PowerEdge M1000e Blade Enclosure Overview

Powerful management tools
The PowerEdge M1000e Blade enclosure allows you to focus more on growing your business or managing your organization and less on managing computing resources by using an array of blade management tools that help make your job easier. These tools include:

 Centralized management controllers that provide redundant and secure access paths for you to manage multiple enclosures and dozens of blades from a single console.
 Dynamic power management that enables you to set high and low power thresholds to help ensure that blades operate efficiently within your power envelope.

Flexible remote management
Manage the blades in the M1000e chassis individually or as groups, in single or multiple enclosures, and within a data center or in remote locations around the world with the Dell Chassis Management Controller (CMC). It provides:

 A single secure interface for inventory and configuration, as well as monitoring and alerting, for the enclosure and all installed components.
 Multi-chassis management from a single, embedded, agentless interface spanning nine enclosures and up to 288 servers.
 Real-time power and thermal monitoring and management, including AC power consumption with resettable peak and minimum values.
 System-level power limiting and slot-based power prioritization.

Outstanding efficiency
The M1000e blade enclosure allows you to take advantage of the thermal design efficiencies of Dell's Energy Smart technology, including:

 Up to six hot-swap ultra-efficient power supplies.
 Nine hot-swap redundant fan modules with dynamic power-efficient fans.
 Optimized airflow design to efficiently cool the enclosure and enable exceptional performance in a low power envelope.


3.2 Server – PowerEdge M620 Blade Server

The Dell PowerEdge M620 blade server (Figure 4) is a feature-rich, 2-socket blade server designed for maximum performance with extreme density.

M620 Blade Server

Designed for taxing workloads, such as email, database and virtual environments, the M620 blade server is an ideal blend of density, performance, efficiency and scalability. The M620 delivers unprecedented memory density and superb performance with no compromise on enterprise-class features.

 Intel Xeon processor E5-2600 and E5-2600 v2 product families, supporting up to twelve cores per processor.
 Memory:
- Up to 768GB (24 DIMM slots): 2GB/4GB/8GB/16GB/32GB DDR3 up to 1866MT/s.
- Up to 1.5TB (24 DIMM slots): 64GB DDR3 LRDIMM up to 1600MT/s (with the Intel Xeon processor E5-2600 v2 product family only).
 Support for a failsafe hypervisor. Protect against hardware failure and maximize virtualization uptime by running the hypervisor on an optional SD card and installing a backup copy on the other mirrored SD card.
 The M620 blade server takes advantage of the shared power, cooling and networking infrastructure of the M1000e blade enclosure, coupled with the Dell Chassis Management Controller to manage individual or groups of M620 blade servers.

3.3 M1000e I/O Modules

The Dell I/O Modules used in this document are the Dell Networking MXL and the PowerEdge M I/O Aggregator. Both of these modules were designed with ease of use in mind and support interchangeable FlexIO Expansion Modules.

12 Deploying Dell Networking MXL and PowerEdge I/O Aggregator in a Cisco Nexus Environment | Version 1.3

3.3.1 Dell Networking MXL Overview

The MXL 10/40GbE Switch (Figure 5) is a layer 2/3 blade switch with two fixed 40GbE ports on the base module and support for two optional plug-in modules (FlexIO Expansion Modules). The MXL 10/40GbE switch runs the Dell Networking Operating System, providing switching, bridging and routing functionality for transmitting data, storage and server traffic.

Dell Networking MXL (figure shows Expansion Slot 1, Expansion Slot 0 and the fixed 40GbE QSFP+ ports)

3.3.2 Dell PowerEdge M I/O Aggregator Overview

The IOA (Figure 6) is a zero-touch blade switch with two fixed 40GbE ports on the base module and support for two optional plug-in modules (FlexIO Expansion Modules). The Aggregator runs the Dell Networking Operating System and can auto-configure as an unmanaged switch with bridging and multiplexing functionality. In the automated modes (Standalone (SMUX), VLT or Stacking), all VLANs are allowed, as are any DCBx, iSCSI or FCoE settings. In addition, the external ports are all part of the same LAG, which obviates the need for the Spanning Tree Protocol (STP) on the IOA.
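The active mode can be confirmed from the IOA CLI. A minimal check, assuming FTOS 9.x on the Aggregator (command availability may vary by release):

show system stack-unit 0 iom-mode

Changing the mode (for example, to programmable MUX for Configuration Three) requires a reload before it takes effect.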

Dell PowerEdge M I/O Aggregator (figure shows Expansion Slot 1, Expansion Slot 0 and the fixed 40GbE QSFP+ ports)


3.3.3 FlexIO Expansion Modules

The Dell Networking MXL and PowerEdge M I/O Aggregator support a combination of FlexIO expansion modules (Figure 7). The four types of FlexIO expansion modules are:

 4-port 10GBase-T FlexIO module (only one 10GBase-T module can be used)
 4-port 10Gb SFP+ FlexIO module
 2-port 40Gb QSFP+ FlexIO module
 4-port 8Gb Fibre Channel FlexIO module

FlexIO expansion modules

Note: Using the FC FlexIO module that provides 8Gb Fibre Channel interfaces is not covered in this deployment guide.


3.3.3.1 I/O Module Port Mapping

The internal connections are 10 Gigabit Ethernet connections used for basic Ethernet traffic, iSCSI storage traffic or FCoE storage traffic. In a typical M1000e configuration of 16 half-height blade servers, ports 1-16 are used and ports 17-32 are disabled. However, if quad-port adapters or quarter-height blade servers are used, ports 17-32 are enabled.

Table 1 lists the port mapping for the two expansion slots on the Dell Networking MXLs and Dell PowerEdge IOAs as well as the internal 10/1 GbE interfaces on the blade servers installed in the M1000e chassis.

Port-Mapping for the M1000e Blade Enclosure

Dell Networking MXL and Dell PowerEdge M I/O Aggregator – Port Mapping

 Fixed QSFP+ ports (base module), ports 33-40: 2 x 40GbE QSFP+ uses ports 33 and 37; 8 x 10GbE QSFP+ (breakout) uses ports 33-40.
 Expansion Slot 0, ports 41-48: 2 x 40Gb QSFP+ uses ports 41 and 45; 8 x 10Gb QSFP+ (breakout) uses ports 41-48; 4 x 10Gb 10GBase-T, 4 x 10Gb SFP+ and 4 x FC8 use ports 41-44.
 Expansion Slot 1, ports 49-56: 2 x 40Gb QSFP+ uses ports 49 and 53; 8 x 10Gb QSFP+ (breakout) uses ports 49-56; 4 x 10Gb 10GBase-T, 4 x 10Gb SFP+ and 4 x FC8 use ports 49-52.
 Internal 10/1 GbE interfaces to the blade servers: ports 1-32.


3.4 Cisco Nexus 5548UP Overview

The Cisco Nexus 5548UP is a 1RU 10 Gigabit Ethernet, Fibre Channel and FCoE capable switch offering up to 48 ports. The switch has 32 unified ports and a single expansion slot. The switch operates in NPIV mode by default, and NPV mode can be enabled if required.

Note: This document utilizes command line interface (CLI) commands to configure the devices. Cisco supplies various graphical interfaces for managing their equipment. These interfaces may make it easier to configure the Cisco switches.

3.5 EMC VNX 5300 Overview

The VNX5300, the introductory model for the VNX unified platform, is designed for the mid-range entry space. This model provides block and file services, file-only services or block-only services, and uses a Disk-Processor Enclosure (DPE).

The VNX5300 uses a 1.6 GHz, four-core Xeon 5600 processor with 8 GB RAM and supports a maximum of 125 drives, with the following block-based host connectivity options: FC, FCoE and iSCSI.


4 Preparation

The following sections contain information on gathering and verifying the required FCoE components' addresses and numbers. They also list the firmware versions of the components used to validate the configurations.

4.1.1 WWN/MAC Addresses

Obtain the MAC addresses of the network adapters in the blade servers and convert them to FIP MAC addresses by performing the following steps.

1. Log in to the Chassis Management Controller (CMC).
2. In the left pane, select Server Overview (Figure 8).

M1000e Chassis Management Controller WWN/MAC Screen (Server in Slot 1)

3. Once the Server Overview page populates, select WWN/MAC in the top pane. The screen shows all of the server's addresses.
4. Scroll down to Slot 1; in the Filter drop-down select Fabric B, and in the next drop-down select Fibre Channel.
5. Record the WWPNs (Server-Assigned or Chassis-Assigned). In this example, the first Chassis-Assigned B1 WWPN is 20:01:5C:F9:DD:16:EF:07 and the first B2 WWPN is 20:01:5C:F9:DD:16:F0:10.
6. Next, derive the FIP MAC address from the WWPN by dropping the first two pairs of numbers. For example, for Server 1 the WWPN is 20:01:5C:F9:DD:16:EF:07; dropping the first two pairs (20:01) leaves the FIP MAC address 5C:F9:DD:16:EF:07.


4.1.2 Virtual SAN (VSAN) and Virtual Fibre Channel (VFC)

Once the Fibre Channel related addresses have been gathered, the VSAN and VFC numbering can be planned. In the configurations contained in this paper, VSAN 2 and VFC 101 are assigned to SAN A, and VSAN 3 and VFC 201 are assigned to SAN B. Keep in mind that the VSAN number cannot be the same on SAN A and SAN B, must be between 1 and 4094, and should be chosen to be easy to manage and to facilitate troubleshooting.

4.1.3 Configuration Table

The following table (Table 2) shows the configuration information for the devices (servers, switches, network adapters) used in the scenarios covered in this document.

Configuration Information

Storage
 Storage Processor WWPN: SAN A 50:06:01:6F:3E:E0:18:70 / SAN B 50:06:01:6F:3E:E0:18:70
 Boot LUN: 0 (both SANs)

Server 1
 VSAN Number: SAN A 2 / SAN B 3
 FCoE VLAN: SAN A 1000 / SAN B 1001
 VFC Number: SAN A 101 / SAN B 201
 Binding method: MAC (both SANs)
 Physical Port: 1/1-2 (both SANs)
 Network Adapter WWPN (20:01 + FIP MAC): SAN A 20:01:5C:F9:DD:16:EF:07 / SAN B 20:01:5C:F9:DD:16:F0:10
 FIP MAC: SAN A 5C:F9:DD:16:EF:07 / SAN B 5C:F9:DD:16:F0:10
 WWNN (20:00 + FIP MAC): SAN A 20:00:5C:F9:DD:16:EF:07 / SAN B 20:00:5C:F9:DD:16:F0:10

Cisco 5548UP
 Physical Ports (Fibre Channel): FC 2/1-2 (both SANs)
 Physical Ports (vPC): 1/17-18 (both SANs)

Cisco MDS 9148
 Physical Ports: FC 1/13-14 (both SANs)


4.1.4 Component Information

The following table (Table 3) lists the components and firmware revisions used in the scenarios covered in this document.

Component Information

Chassis / Server
 M1000e Chassis Management Controller: 4.45
 Dell PowerEdge M I/O Aggregator: 9.6
 Dell Networking MXL: 9.6
 Dell PowerEdge M620 Blade Server BIOS: 2.4.3
 Lifecycle Controller: 1.4.2.12
 Broadcom 10Gb 2P 57810S-k Mezzanine Card: 7.10.18
 QLogic 10Gb 2P QME8262-k Mezzanine Card: 02.10.07
 Intel 10Gb 2P X520-k blade Network Daughter Card: 01.03.10

Storage
 EMC VNX 5300: 05.32.000.5.008

Network
 Cisco Nexus 7004 (system and kickstart): 6.2(8)
 Cisco Nexus 5548UP (system and kickstart): 7.0(2)N1(1)
 Cisco MDS 9148 (system and kickstart): 6.2(9)

Cables
 SFP+ optical transceivers (SR or LR) with 5-meter fiber cables


5 Configuration One – Dell MXL or IOAs in Nexus Fabric Mode

Configuration One - Dell MXL or IOAs in Nexus Fabric Mode (figure repeats the Configuration One topology: two Cisco Nexus 7000 Series core switches; SAN A and SAN B on two Cisco Nexus 5500 Series switches; Dell Networking MXL or Dell PowerEdge I/O Aggregator modules in a Dell PowerEdge M1000e Blade Server Chassis; link types: FCoE, Ethernet, FC and vPC)

In Configuration One (Figure 9), the Cisco Nexus 5500 Series top-of-rack switch is left in the default fabric mode, which allows the Nexus switch to perform as a fabric services provider and a Fibre Channel switch. For the storage fabric, the following configuration is a default FCoE single-hop configuration with FSBs in a converged network environment. Configurations for both SAN A and SAN B are provided. For upstream Ethernet connectivity to the spine or core, a vPC domain is created allowing all available bandwidth to be utilized.

5.1 Cisco Nexus 5548UP Setup

In this configuration, the Cisco Nexus 5548UP switch is the primary configuration point for the rest of the solution. The M1000e I/O module switches pass DCB information from the Nexus 5548UP switch down to the servers' CNAs. The steps required to configure the Nexus 5548UP switches are shown on the following pages.

Note: The following instructions have been included as an attachment (Fabric_Mode-Config_Sheets.pdf) to this document.

In this first section, the required features are enabled (Figure 10). Then the interfaces are created, and finally the FIP MAC address is bound to the Virtual Fibre Channel (VFC) interface.

Enable the required features and the management interface for vPC.
 Enable the LACP, FCoE, NPIV and vPC features

Nexus_5548-1 and Nexus_5548-2:
feature lacp
feature fcoe
feature npiv
feature vpc

 Create the interfaces and VSAN used

Nexus_5548-1:
vsan database
vsan 2
vlan 20,30-32,88
vlan 1000
fcoe vsan 2
interface port-channel 8
interface port-channel 20

Nexus_5548-2:
vsan database
vsan 3
vlan 21,30-32,88
vlan 1001
fcoe vsan 3
interface port-channel 8
interface port-channel 21

 Create the VFC interfaces and bind the FIP MAC addresses
 Bring the VFC interfaces out of administrative shutdown

Nexus_5548-1:
interface vfc101
bind mac-address 5C:F9:DD:16:EF:03
no shutdown

Nexus_5548-2:
interface vfc201
bind mac-address 5C:F9:DD:16:F0:10
no shutdown

Enable Global Switch Features and configure Interfaces


Next, the created VSAN is populated with the appropriate interfaces (Figure 11). In production environments, additional VFCs would be created for each server occupying the M1000e enclosure and added to the appropriate VSAN. The port channels are then configured, and the appropriate physical interfaces are added to the corresponding upstream and downstream port-channel groups.

Associate the interfaces created earlier with the appropriate VSAN.

5548-1:
vsan database
vsan 2 interface vfc101
vsan 2 interface fc2/1
vsan 2 interface fc2/2

5548-2:
vsan database
vsan 3 interface vfc201
vsan 3 interface fc2/1
vsan 3 interface fc2/2

 Add the downstream interfaces to the appropriate port channel
 Add the upstream interfaces to the appropriate port channel

5548-1:
interface ethernet 1/21-22
channel-group 20 mode active
desc FCoE_downlink_to_IOA-MXL
interface ethernet 1/9-10
channel-group 8 mode active
desc Ethernet_uplink_to_7K

5548-2:
interface ethernet 1/21-22
channel-group 21 mode active
desc FCoE_downlink_to_IOA-MXL
interface ethernet 1/9-10
channel-group 8 mode active
desc Ethernet_uplink_to_7K

 Configure the port channels created previously with applicable settings

5548-1:
interface port-channel 8
desc port-channel_eth9+10_to_7k
switchport mode trunk
switchport trunk allowed vlan 30-32,88
interface port-channel 20
desc port-channel_eth1+2_to_IOA-MXL
switchport mode trunk
switchport trunk native vlan 20
switchport trunk allowed vlan 20,1000

5548-2:
interface port-channel 8
desc port-channel_eth9+10_to_7k
switchport mode trunk
switchport trunk allowed vlan 30-32,88
interface port-channel 21
desc port-channel_eth1+2_to_IOA-MXL
switchport mode trunk
switchport trunk native vlan 21
switchport trunk allowed vlan 21,1001

Configure VSAN Database and Upstream/Downstream Port Channels


In Figure 12 the fibre channel interfaces leading to the storage array are brought out of administrative shutdown and the FC fabric is built and activated.

At this time, the command show flogi database can be run to verify that both the storage array and the server CNAs have completed successful Fabric Logins (FLOGIs).

Bring the Fibre Channel interfaces out of administrative shutdown.

5548-1 and 5548-2:
interface fc2/1-2
no shutdown

 Create the zone and add all participating members

5548-1:
zone name zone1SAN_A vsan 2
member pwwn <20:01:5c:f9:dd:16:ef:03>
member interface fc2/1
member interface fc2/2

5548-2:
zone name zone1SAN_B vsan 3
member pwwn <20:01:5c:f9:dd:16:f0:10>
member interface fc2/1
member interface fc2/2

 Create the zoneset, add the zone as a member and activate the zoneset

5548-1:
zoneset name set1SAN_A vsan 2
member zone1SAN_A
zoneset activate name set1SAN_A vsan 2

5548-2:
zoneset name set1SAN_B vsan 3
member zone1SAN_B
zoneset activate name set1SAN_B vsan 3

Bring Fibre Channel Ports Online and Configure FC Fabric
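With the zoning active, the login check mentioned above can be performed on each Nexus. An illustrative example (the FCIDs and WWPNs returned depend on the attached devices; use vsan 3 on 5548-2):

show flogi database
show flogi database vsan 2

An entry should exist for each storage FC port and one for each server VFC; a missing VFC entry usually points to a DCBX or FIP problem on the converged link.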


Next, a vPC peer link is created (Figure 13). First, the vPC feature is enabled on both switches and a management IP is assigned. Next, the vPC domain is configured using a domain ID of 55 and the keep-alive address of the peer switch.

Enable vPC by configuring the management interface and creating a vPC domain ID.

5548-1:
configure
interface mgmt 0
ip address 172.25.188.60 255.255.0.0
no shutdown
end

5548-2:
configure
interface mgmt 0
ip address 172.25.189.60 255.255.0.0
no shutdown
end

 Create a vPC domain
 Assign the role priority
 Assign the keepalive management IP of the peer switch

5548-1:
configure
vpc domain 55
role priority 1
peer-keepalive dest 172.25.189.60
end

5548-2:
configure
vpc domain 55
role priority 65535
peer-keepalive dest 172.25.188.60
end

Configure vPC domain and keep alive address


Finally, a port channel with the same ID as the vPC domain is created (Figure 14). It is important that the VLANs selected for FCoE traffic are not allowed to traverse this trunk.

Configure the port channel and port channel members for the vPC peer-link.
 Create a port channel
 Enable switchport mode trunk
 Assign as a vPC peer-link

5548-1 and 5548-2:
configure
interface port-channel 55
description "vPC Peer-Link"
switchport mode trunk
switchport trunk allowed vlan except 1000-1001
no shutdown
vpc peer-link
end

 Assign the interfaces to the port channel and enable LACP

5548-1 and 5548-2:
configure
interface ethernet 1/16-17
description "vPC Peer-Link"
switchport mode trunk
channel-group 55 mode active
no shutdown
end

vPC Port Channel Configuration
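Before moving on, the vPC state can be verified on both switches. A brief, illustrative check (NX-OS):

show vpc peer-keepalive
show vpc brief
show vpc consistency-parameters global

The keepalive should report the peer as alive, and the consistency parameters must match on both peers or the vPC will not come up.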


5.2 Dell Networking MXL Setup

The steps required to configure the Dell Networking MXL are shown in this section.

First, enable FIP snooping and change the default VLAN. The downstream and upstream interfaces are then configured for DCBx (Figure 15). In this case, all DCBx settings are adopted from the Cisco 5548UP ToR.

Enable features, and configure all pre-planned VLANs and other global commands.
 Enable the FIP snooping feature
 Enable the LLDP protocol
 Configure service-class dynamic dot1p
 Set the global default VLAN

MXL_IOA_1:
feature fip-snooping
protocol lldp
exit
service-class dynamic dot1p
default vlan-id 20

MXL_IOA_2:
feature fip-snooping
protocol lldp
exit
service-class dynamic dot1p
default vlan-id 21

Configure the downstream, server-facing ports.

MXL_IOA_1 and MXL_IOA_2:
interface range te 0/1
portmode hybrid
switchport
protocol lldp
dcbx port-role auto-downstream
exit
no shutdown

Configure the upstream, FCF-switch-facing external ports to be part of a port channel.

MXL_IOA_1 and MXL_IOA_2:
interface range te 0/51 - 52
port-channel-protocol LACP
port-channel 1 mode active
exit
protocol lldp
advertise management-tlv system-name
no advertise dcbx-tlv ets-reco
dcbx port-role auto-upstream
exit
no shutdown

Dell Networking MXL Configuration for FIP Snooping


Next, the upstream port channel is configured, and the appropriate FCoE designated VLAN is set on the corresponding interfaces (Figure 16).

Configure the upstream port channel, then add all interfaces to the FCoE VLAN.
 Enable FIP snooping on the FCoE VLAN

MXL_IOA_1:
interface port-channel 1
portmode hybrid
switchport
fip-snooping port-mode fcf
no shutdown
exit
interface vlan 1000
tagged TenGigabitEthernet 0/1
tagged Port-channel 1
fip-snooping enable
no shutdown

MXL_IOA_2:
interface port-channel 1
portmode hybrid
switchport
fip-snooping port-mode fcf
no shutdown
exit
interface vlan 1001
tagged TenGigabitEthernet 0/1
tagged Port-channel 1
fip-snooping enable
no shutdown

Dell Networking MXL Enabling Uplinks for FCoE FIP Snooping
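At this point the MXL should see the Nexus as both an LLDP/DCBX peer and an FCF. An illustrative verification from the FTOS CLI:

show lldp neighbors
show fip-snooping fcf
show fip-snooping sessions

If the FCF table is empty, recheck that VLAN 1000 (or 1001) is tagged on the port channel and that fip-snooping port-mode fcf is applied.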


6 Configuration Two – Dell MXL or IOA in Nexus NPV Mode with Cisco MDS 9148

Usually, the Cisco Nexus 5548UP top-of-rack switch is configured in NPV mode to pass FC traffic out to another terminating switch, in this example the Cisco MDS 9148. Figure 17 and the examples that follow describe a two-link LAG from an MXL or IOA to the Cisco 5548UP ToR switch configured in NPV mode.

Configuration Two – NPV with Cisco MDS (figure shows two Cisco Nexus 7000 Series core switches; two Cisco Nexus 5500 switches in NPV mode; SAN A and SAN B behind two Cisco MDS 9000 switches; Dell Networking MXL or Dell PowerEdge I/O Aggregator modules in a Dell PowerEdge M1000e Blade Server Chassis; link types: FCoE, Ethernet, FC and vPC)


6.1 Cisco Nexus 5548UP Setup

By default, the Cisco Nexus 5000 series switches operate in NPIV mode. A disadvantage of running in this mode in a large data center with many edge FC switches is the limited number of domain IDs. With the Cisco Nexus configured for NPV mode, the switch does not provide the essential fabric services itself; instead, it passes these services from an upstream fabric-services core/aggregation device through to end devices. Typically, in a Cisco environment this upstream device is a Cisco MDS multilayer fabric switch operating in default fabric mode.
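Once the switch has reloaded in NPV mode, its uplinks toward the MDS and the proxied server logins can be checked. A minimal, illustrative NX-OS check (command names assumed from the NPV feature set):

show npv status
show npv flogi-table

The status output lists the external (NP) uplink interfaces and their state; the FLOGI table shows which server login was pinned to which uplink.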

Note: The following instructions have been included as an attachment (NPV_Mode-Config_Sheets.pdf) to this document.

The following figures (Figure 18 through Figure 21) show the steps required to configure the Nexus 5548UP switches.

For NPV configurations with FC SAN switches, NPV must be enabled. Once the command is issued, the switch reloads.

5548-1 and 5548-2:
feature npv

Enable the remaining required features and the management interface for vPC.
 Enable the LACP, FCoE, NPIV and vPC features

5548-1 and 5548-2:
feature lacp
feature fcoe
feature npiv
feature vpc

 Create the interfaces and VSAN used

5548-1:
vsan database
vsan 2
vlan 20,30-32,88
vlan 1000
fcoe vsan 2
interface port-channel 8
interface port-channel 20

5548-2:
vsan database
vsan 3
vlan 21,30-32,88
vlan 1001
fcoe vsan 3
interface port-channel 8
interface port-channel 21

Enabling Global Switch Features and Interfaces


Create the VFC interfaces and bind the FIP MAC addresses.
 Bring the VFC interfaces out of administrative shutdown

5548-1:
interface vfc101
bind mac-address 5C:F9:DD:16:EF:03
no shutdown

5548-2:
interface vfc201
bind mac-address 5C:F9:DD:16:F0:10
no shutdown

 Associate the interfaces created earlier with the appropriate VSAN ID

5548-1:
vsan database
vsan 2 interface vfc101
vsan 2 interface fc2/1
vsan 2 interface fc2/2

5548-2:
vsan database
vsan 3 interface vfc201
vsan 3 interface fc2/1
vsan 3 interface fc2/2

 Add the downstream interfaces to the appropriate port channel
 Add the upstream interfaces to the appropriate port channel

5548-1:
interface ethernet 1/1-2
channel-group 20 mode active
desc FCoE_downlink_to_IOA-MXL
interface ethernet 1/9-10
channel-group 8 mode active
desc Ethernet_uplink_to_7K

5548-2:
interface ethernet 1/1-2
channel-group 21 mode active
desc FCoE_downlink_to_IOA-MXL
interface ethernet 1/9-10
channel-group 8 mode active
desc Ethernet_uplink_to_7K

VFC Configuration and VSAN Database Configurations


Bring the Fibre Channel interfaces out of administrative shutdown.

5548-1 and 5548-2:
interface fc2/1-2
no shutdown

 Configure the port channels created previously with applicable settings

5548-1:
interface port-channel 8
desc port-channel_eth9+10_to_7k
switchport mode trunk
switchport trunk allowed vlan 30-32,88
interface port-channel 20
desc port-channel_eth1+2_to_IOA-MXL
switchport mode trunk
switchport trunk native vlan 20
switchport trunk allowed vlan 20,1000

5548-2:
interface port-channel 8
desc port-channel_eth9+10_to_7k
switchport mode trunk
switchport trunk allowed vlan 30-32,88
interface port-channel 21
desc port-channel_eth1+2_to_IOA-MXL
switchport mode trunk
switchport trunk native vlan 21
switchport trunk allowed vlan 21,1001

Enable Fibre Channel Interfaces and Upstream/Downstream Port Channels


Next, a vPC peer link is created (Figure 21). First, vPC is enabled on both switches and a management IP is assigned. Then the vPC domain is configured using a domain ID of 55 with the keep-alive address of the peer switch.

Enable vPC by configuring the management interface and creating a vPC domain ID.

5548-1:
configure
interface mgmt 0
ip address 172.25.188.60 255.255.0.0
no shutdown
end

5548-2:
configure
interface mgmt 0
ip address 172.25.189.60 255.255.0.0
no shutdown
end

 Create a vPC domain
 Assign the role priority
 Assign the keepalive management IP of the peer switch

5548-1:
configure
vpc domain 55
role priority 1
peer-keepalive dest 172.25.189.60
end

5548-2:
configure
vpc domain 55
role priority 65535
peer-keepalive dest 172.25.188.60
end

Configure vPC domain and keep alive address


As a final step, a port channel with the same ID as the vPC domain is created (Figure 22). The designated FCoE VLANs should not be allowed to traverse this vPC peer link.

Configure the port channel and port channel members for the vPC peer-link.
 Create a port channel
 Enable switchport mode trunk
 Assign as a vPC peer-link

5548-1 and 5548-2:
configure
interface port-channel 55
description "vPC Peer-Link"
switchport mode trunk
switchport trunk allowed vlan except 1000-1001
no shutdown
vpc peer-link
end

 Assign the interfaces to the port channel and enable LACP

5548-1 and 5548-2:
configure
interface ethernet 1/16-17
description "vPC Peer-Link"
switchport mode trunk
channel-group 55 mode active
no shutdown
end

vPC Port Channel Configuration


6.2 Dell Networking MXL Setup

The following pages show the steps required to configure the Dell Networking MXL (Figure 23 and Figure 24). First, enable FIP snooping and change the default VLAN. The downstream and upstream interfaces are then configured for DCBx. In this case, all DCBx settings are adopted from the Cisco 5548UP ToR.

Enable features, and configure all pre-planned VLANs and other global commands.
 Enable the FIP snooping feature
 Enable the LLDP protocol
 Configure service-class dynamic dot1p
 Set the global default VLAN

MXL_IOA_1:
feature fip-snooping
protocol lldp
exit
service-class dynamic dot1p
default vlan-id 20

MXL_IOA_2:
feature fip-snooping
protocol lldp
exit
service-class dynamic dot1p
default vlan-id 21

Configure the downstream, server-facing ports.

MXL_IOA_1 and MXL_IOA_2:
interface range te 0/1
portmode hybrid
switchport
protocol lldp
dcbx port-role auto-downstream
no shutdown

Dell Networking MXL Setup (Pt. 1)


Configure the upstream port channel, then add all interfaces to the FCoE VLAN.
 Enable FIP snooping on the FCoE VLAN

MXL_IOA_1:
interface port-channel 1
portmode hybrid
switchport
fip-snooping port-mode fcf
no shutdown
exit
interface vlan 1000
tagged TenGigabitEthernet 0/1
tagged Port-channel 1
fip-snooping enable
no shutdown

MXL_IOA_2:
interface port-channel 1
portmode hybrid
switchport
fip-snooping port-mode fcf
no shutdown
exit
interface vlan 1001
tagged TenGigabitEthernet 0/1
tagged Port-channel 1
fip-snooping enable
no shutdown

Configure the upstream, FCF-switch-facing external ports to be part of a port channel.

MXL_IOA_1 and MXL_IOA_2:
interface range te 0/51 - 52
port-channel-protocol LACP
port-channel 1 mode active
protocol lldp
no advertise dcbx-tlv ets-reco
dcbx port-role auto-upstream
no shutdown

Dell Networking MXL Setup (Pt. 2)


6.3 Cisco MDS 9148 Setup

The Cisco MDS 9148 is configured in this section. This configuration requires NPIV to allow the necessary number of WWPNs to be assigned through the two downstream ports to the Nexus 5548UP.

Enable the NPIV feature.

MDS_9000_1 and MDS_9000_2:
feature npiv

Create the relevant entries in the VSAN database.

MDS_9000_1:
vsan database
vsan 2
vsan 2 interface fc1/1-2
vsan 2 interface fc1/13-14

MDS_9000_2:
vsan database
vsan 3
vsan 3 interface fc1/1-2
vsan 3 interface fc1/13-14

Create the zone and zoneset, then activate the zoneset.

MDS_9000_1:
zone name Blade1And2-SAN_A vsan 2
member interface fc1/1-2
member interface fc1/13-14
zoneset name set1-SAN_A vsan 2
member Blade1And2-SAN_A
exit
zoneset activate name set1-SAN_A vsan 2

MDS_9000_2:
zone name Blade1And2-SAN_B vsan 3
member interface fc1/1-2
member interface fc1/13-14
zoneset name set1-SAN_B vsan 3
member Blade1And2-SAN_B
exit
zoneset activate name set1-SAN_B vsan 3

Cisco MDS 9148 Configuration Steps
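Because the MDS now provides the fabric services, the zoning results are verified there. An illustrative check on each MDS (vsan 2 on MDS_9000_1, vsan 3 on MDS_9000_2):

show zoneset active vsan 2
show fcns database

The active zoneset should contain the configured members, and the name server (FCNS) database should list both the storage ports and the NPV-proxied server WWPNs.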


7 Configuration Three – Nexus Fabric Mode with Brand Varied MC-LAG Architecture

The following sections contain the CLI to configure a Dell PowerEdge IOA and Cisco Nexus 5548UP in a configuration that allows a fully functional VLT and vPC Ethernet fabric while using separate FCoE links between the IOAs and the Nexus switches.

Configuration Three - Dell MXL or IOA in a Nexus Fabric Mode with Brand Varied MC-LAG Architecture (figure shows SAN A and SAN B on two Cisco Nexus 5500 Series switches; Dell Networking MXL or Dell PowerEdge I/O Aggregator modules in a Dell PowerEdge M1000e Blade Server Chassis; link types: FCoE, Ethernet, FC and VLT or vPC)


7.1 Cisco Nexus 5548UP Setup

All required switch features are enabled, a hostname is specified and a management address is put in place (Figure 27). Finally, a vPC domain is created with the peer switch management IP address. This vPC domain is used for vPC heartbeat monitoring to prevent a split-brain situation.

Enable the required features and the management interface for vPC.
 Enable the FCoE, LACP, vPC and NPIV features

5548-1 and 5548-2:
feature fcoe
feature lacp
feature vpc
feature npiv

 Configure the hostname and assign an IP address to the management interface

5548-1:
configure
hostname 5548-1
interface mgmt 0
ip address 172.25.188.60 255.255.0.0
no shutdown
end

5548-2:
configure
hostname 5548-2
interface mgmt 0
ip address 172.25.189.60 255.255.0.0
no shutdown
end

 Create a vPC domain
 Assign the role priority
 Assign the keepalive management IP of the peer switch

5548-1:
configure
vpc domain 55
role priority 1
peer-keepalive dest 172.25.189.60
end

5548-2:
configure
vpc domain 55
role priority 65535
peer-keepalive dest 172.25.188.60
end

Initial Nexus 5548 Setup


Once the vPC domain has been created, a port channel for the switch-to-switch vPC peer-link is created. This is a normal trunk and it is considered a best practice to exclude FCoE designated VLANs from traversing the trunk (Figure 28).

Configure the port channel and port channel members for the vPC peer-link.
 Create a port channel
 Enable switchport mode trunk
 Assign as a vPC peer-link

5548-1 and 5548-2:
configure
interface port-channel 55
description vPC Peer-Link
switchport mode trunk
switchport trunk allowed vlan except 1000-1001
no shutdown
vpc peer-link
end

 Assign the interfaces to the port channel and enable LACP

5548-1 and 5548-2:
configure
interface ethernet 1/16-17
description "vPC Peer-Link"
switchport mode trunk
channel-group 55 mode active
no shutdown
end

vPC Peer-Link and Port Channel Configuration
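Before moving on to the downstream port channels, it can be worth confirming that the peer-link members actually bundled. The standard NX-OS check below is a suggested verification step, not part of the original configuration sheets; the peer-link member ports should show as up and bundled in the port channel.

5548-1# show port-channel summary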


Next, the port channel that will only carry Ethernet traffic is created (Figure 29). Note that the vpc command must be included to ensure that the vPC configuration is aware of both sides of the port channel. It is considered a best practice to set the vPC ID to the same value as the port channel ID to simplify troubleshooting.

5548-1 and 5548-2 (identical on both switches)
Configure the port channel and port channel members for IOA connectivity.
- Create the port channel.
- Enable switchport mode trunk.
- Specify the vPC ID.

configure
interface port-channel 1
description vPC/VLT enabled Eth to IOA
switchport mode trunk
switchport trunk allowed vlan 30-32,88
vpc 1
no shutdown
end

- Assign interfaces to the port channel and enable LACP.

configure
interface ethernet 1/1-2
description PO1 Member
switchport mode trunk
channel-group 1 mode active
no shutdown
end

Figure 29 Configure Downstream vPC Enabled Port Channel for Ethernet Traffic
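Because vPC requires matching settings on both peers, a mismatch on port-channel 1 will keep the vPC from coming up. The NX-OS consistency check below is a suggested verification step, not part of the original configuration sheets; any parameter flagged as inconsistent (for example, a differing allowed VLAN list) must be corrected on one of the peers.

5548-1# show vpc consistency-parameters interface port-channel 1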


In the next set of commands (Figure 30), the designated VLAN for FCoE traffic is created and the VSAN database is populated with the corresponding interfaces.

5548-1
Create a VLAN for the appropriate VSAN and add the VSAN to the database.
- Create a VLAN.
- Add a VSAN ID to the VLAN.

configure
vlan 1000
description "VSAN 2 VLAN"
fcoe vsan 2
no shutdown
end

- Add the VSAN ID to the VSAN database.
- Create the VFC interface and bind the FIP MAC address of the CNA recorded earlier.
- Enable the designated Fibre Channel interfaces.
- Add the VFC and FC interfaces to the VSAN database, binding them to the VSAN ID created earlier.

configure
vsan database
vsan 2
exit
interface vfc101
bind mac-address 5c:f9:dd:16:ef:03
no shutdown
exit
interface fc2/1-2
no shutdown
exit
vsan database
vsan 2 interface vfc101
vsan 2 interface fc2/1-2
end

5548-2
The same steps apply, using VLAN 1001, VSAN 3, interface vfc201 and the second CNA MAC address.

configure
vlan 1001
description "VSAN 3 VLAN"
fcoe vsan 3
no shutdown
end

configure
vsan database
vsan 3
exit
interface vfc201
bind mac-address 5c:f9:dd:16:f0:10
no shutdown
exit
interface fc2/1-2
no shutdown
exit
vsan database
vsan 3 interface vfc201
vsan 3 interface fc2/1-2
end

Figure 30 Initial Interface and VSAN Configuration
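To confirm the VLAN-to-VSAN plumbing before moving on, the interface-to-VSAN assignments can be reviewed. This is a suggested NX-OS check rather than part of the original configuration sheets; on 5548-1 the output should list vfc101 and fc2/1-2 under VSAN 2 (vfc201 and fc2/1-2 under VSAN 3 on 5548-2).

5548-1# show vsan membership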


The port channel dedicated to carrying FCoE traffic is then configured (Figure 31). The shutdown lan command prevents any VLAN not associated with a VSAN ID from traversing the trunk to the Dell IOAs.

5548-1
Configure the port channel and port channel members for IOA connectivity.
- Create the port channel.
- Enable switchport mode trunk.
- Shut down LAN traffic on the trunk.

configure
interface port-channel 10
description FCoE enabled Eth to IOA
switchport mode trunk
shutdown lan
no shutdown
end

- Assign interfaces to the port channel and enable LACP.

configure
interface ethernet 1/23-24
description PO10 Member
switchport mode trunk
channel-group 10 mode active
no shutdown
end

5548-2
The same steps apply, using port-channel 20.

configure
interface port-channel 20
description FCoE enabled Eth to IOA
switchport mode trunk
shutdown lan
no shutdown
end

configure
interface ethernet 1/23-24
description PO20 Member
switchport mode trunk
channel-group 20 mode active
no shutdown
end

Figure 31 Configure Downstream Port Channel for FCoE Traffic
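Since the shutdown lan state is easy to overlook, reading back the running configuration of the FCoE port channel is a quick way to confirm that LAN traffic is blocked while the port channel itself remains up. This is a suggested check, not part of the original configuration sheets.

5548-1# show running-config interface port-channel 10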

Finally, the zone is created, and all the related PWWNs and FC interfaces are added to the zone (Figure 32). The zoneset containing the zone is then activated.

5548-1
Create the zone name and add the interfaces.
- Create a zoneset, place the zone in the container and, finally, activate the zoneset.

configure
zone name zone1SAN_A vsan 2
member pwwn 20:01:5c:f9:dd:16:ef:03
member interface fc2/1-2
exit
zoneset name set1SAN_A vsan 2
member zone1SAN_A
exit
zoneset activate name set1SAN_A vsan 2
end

5548-2
The same steps apply, using the SAN B names and VSAN 3.

configure
zone name zone1SAN_B vsan 3
member pwwn 20:01:5c:f9:dd:16:f0:10
member interface fc2/1-2
exit
zoneset name set1SAN_B vsan 3
member zone1SAN_B
exit
zoneset activate name set1SAN_B vsan 3
end

Figure 32 Configure and Enable Zone Fabric
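The zone contents can be reviewed before any device logs in; activation status is covered later with show zoneset active (section 8.2). The command below is a suggested NX-OS check, not part of the original configuration sheets.

5548-1# show zone vsan 2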


7.2 Dell Networking IOA Setup

Before configuring the IOAs, it is strongly suggested that the switches be returned to their factory default settings and then set to programmable MUX (PMUX) mode (Figure 33). This mode allows the IOA to behave much like the MXL. For additional information on the IOA modes, see the Dell PowerEdge Configuration Guide for the M I/O Aggregator.

IOA-1 and IOA-2 (identical on both switches)
In this environment the IOA is used in PMUX mode.
- Restore the factory defaults to place the switch in standalone mode.

restore factory-defaults stack-unit 0 clear-all

- Once the switch has reloaded, configure it for PMUX mode.

configure
stack-unit 0 iom-mode programmable-mux
end
reload

Figure 33 Restoring Factory Defaults Before Configuration
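On firmware that supports it, the IOA's current and next-boot mode can be confirmed after the reload. The command below is a suggested check and may vary by release; treat it as an assumption to verify against your IOA firmware documentation.

IOA-1# show system stack-unit 0 iom-mode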


The first step in configuring the IOAs is to configure the out-of-band management interface for VLT heartbeats (Figure 34), which are used later in the configuration. Once this is done, the FIP snooping feature is enabled globally and the designated FCoE VLAN ID is created and enabled.

IOA-1
Management configuration.
- Set the hostname.
- Set the IP for management.
- Set the default route for management.
- Enable LLDP hostname advertisement globally.

enable
configure
hostname IOA-1
interface managementethernet 0/0
ip address 172.25.189.29/16
yes
exit
management route 0.0.0.0/0 172.25.189.254
protocol lldp
advertise management-tlv management-address system-name
end

- Turn on the fip-snooping feature.
- Enable fip-snooping globally and set the default VLAN.

configure
feature fip-snooping
fip-snooping enable
default vlan-id 20
end

- Create the FCoE VLAN and enable fip-snooping on it. The default FC map (0x0EFC00) matches the FCF switch in this example.

configure
interface vlan 1000
fip-snooping enable
no shutdown
end

IOA-2
The same steps apply, using the IOA-2 hostname, management address, default VLAN 21 and FCoE VLAN 1001.

enable
configure
hostname IOA-2
interface managementethernet 0/0
ip address 172.25.189.30/16
yes
exit
management route 0.0.0.0/0 172.25.189.254
protocol lldp
advertise management-tlv management-address system-name
end

configure
feature fip-snooping
fip-snooping enable
default vlan-id 21
end

configure
interface vlan 1001
fip-snooping enable
no shutdown
end

Figure 34 Configure Global Settings and Create the FCoE VLAN
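At this point, the global FIP snooping state can be spot-checked. The command below is covered in detail, with sample output, in section 8.1.

IOA-1# show fip-snooping config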


Next, the FCoE-specific upstream port channel is created (Figure 35). The interface is also set for FIP snooping, and DCBX is set to automatically accept DCB settings from the Nexus 5548UP ToR. Finally, the internal interface attached to slot 1 is tagged with the designated FCoE VLAN ID.

IOA-1
Configure the upstream LAG for FCoE.
- Remove the switchport before enabling the port as hybrid.
- Tag the interface with the FCoE VLAN.
- Set the fip-snooping port mode to FCF.
- Set the DCB port role.

configure
interface po10
no switchport
portmode hybrid
switchport
vlan tagged 1000
fip-snooping port-mode fcf
no shutdown
exit
interface range te 0/49-50
description FCoE Po Members to 5548-1
port-channel-protocol lacp
port-channel 10 mode active
no shutdown
protocol lldp
advertise management-tlv management-address system-name
no advertise dcbx-tlv ets-reco
dcbx port-role auto-upstream
end

- Configure the internal port facing the server.
- Remove the switchport before enabling the port as hybrid.
- Tag the interface with the FCoE VLAN.
- Set the DCB port role.

configure
interface te 0/1
no switchport
portmode hybrid
switchport
vlan tagged 1000
protocol lldp
advertise management-tlv management-address system-name
no advertise dcbx-tlv ets-reco
dcbx port-role auto-downstream
end

IOA-2
The same steps apply, using po20, FCoE VLAN 1001 and the links to 5548-2.

configure
interface po20
no switchport
portmode hybrid
switchport
vlan tagged 1001
fip-snooping port-mode fcf
no shutdown
exit
interface range te 0/49-50
description FCoE Po Members to 5548-2
port-channel-protocol lacp
port-channel 20 mode active
no shutdown
protocol lldp
advertise management-tlv management-address system-name
no advertise dcbx-tlv ets-reco
dcbx port-role auto-upstream
end

configure
interface te 0/1
no switchport
portmode hybrid
switchport
vlan tagged 1001
protocol lldp
advertise management-tlv management-address system-name
no advertise dcbx-tlv ets-reco
dcbx port-role auto-downstream
end

Figure 35 Configure Upstream LAG and Downstream Internal Server Connections
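A quick way to confirm that the hybrid port mode, VLAN tagging and DCBX role were accepted is to read back the interface configuration. This is a suggested FTOS check, not part of the original configuration sheets.

IOA-1# show running-config interface tengigabitethernet 0/49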


Next, the components and interfaces that comprise the VLTi peer-link, which allows the pair of IOAs to appear as a single switch to the upstream Nexus switches, are created (Figure 36). In this environment, the eight 10GbE ports (te 0/33-40) that make up the two 40 Gigabit Ethernet ports are selected. For more information, see the Dell Networking OS Configuration Guide.

IOA-1
Enable VLT and configure the VLTi peer-link.
- Create the VLAN for VLT traffic.
- Create the VLAN for Ethernet traffic.
- Create a port channel interface for the VLT peer-link.
- Create the VLT domain and set the backup destination.

configure
interface vlan 55
no shut
exit
interface po55
no shut
exit
vlt domain 55
peer-link port-channel 55
back-up destination 172.25.189.30
unit-id 0
end

- Tag the VLTi port channel with the VLT VLAN.
- Add both 40GbE interfaces (as eight 10GbE ports) to the VLTi port channel.

configure
interface po55
vlan tagged 55
exit
interface range te 0/33-40
port-channel-protocol lacp
port-channel 55 mode active
no shut
end

IOA-2
The same steps apply, with the backup destination and unit ID reversed.

configure
interface vlan 55
no shut
exit
interface po55
no shut
exit
vlt domain 55
peer-link port-channel 55
back-up destination 172.25.189.29
unit-id 1
end

configure
interface po55
vlan tagged 55
exit
interface range te 0/33-40
port-channel-protocol lacp
port-channel 55 mode active
no shut
end

Figure 36 Create VLT LAG, VLAN and other Ethernet Designated VLANs
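Once both IOAs are configured, the VLT state can be confirmed; show vlt brief is covered with sample output in section 8.1, and the backup (heartbeat) link over the management network can be checked separately. These are suggested checks, not part of the original configuration sheets.

IOA-1# show vlt brief
IOA-1# show vlt backup-link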


Finally, a typical port channel with the VLT peer LAG enabled is created (Figure 37). The upstream port channels are tagged with all the required Ethernet VLANs (30-32 and 88). Alternatively, all VLANs except the designated FCoE VLANs (1000 and 1001) can be allowed, and the Nexus switches can be used to prune the allowed VLANs. Lastly, the internal server-facing interface is tagged with the Ethernet VLAN allowed to slot 1 of the M1000e enclosure.

IOA-1 and IOA-2 (identical on both switches)
Enable the Ethernet VLT member ports facing upstream.
- Add ports to Po30.
- Tag Po30 with the LAN VLANs.
- Specify the VLT peer interface.

configure
interface range te 0/51-52
port-channel-protocol lacp
port-channel 30 mode active
no shut
exit
interface port-channel 30
portmode hybrid
switchport
vlan tagged 30-32,88
vlt-peer-lag po30
no shut
end

- Tag the server-facing interface with the LAN VLAN.
- Save the configuration.

configure
interface te 0/1
vlan tagged 31
end
copy run start

Figure 37 Configure Upstream Connectivity to the Nexus 5548UP Pair
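To confirm that the upstream LACP bundle formed on the IOA side, the port channel state can be read back. This is a suggested FTOS check, not part of the original configuration sheets.

IOA-1# show interfaces port-channel brief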


8 Configuration and Troubleshooting

The following section contains commands that can be used to validate the configuration covered in this document.

Note: When using these validation/debug commands, keep in mind that both sides of the configuration need to be in place for many of these commands to show the expected results. In other words, if the Cisco Nexus 5548 has been configured based on the CLI examples in this document but the Dell PowerEdge M I/O Aggregator has not been connected properly, many of the results shown here will not match. The storage device and SAN switches will also need to be configured if using NPV mode.

Note: The output of many of the following commands has been edited to highlight the most pertinent information for this document.

8.1 Dell PowerEdge MXL or M I/O Aggregator

With all cables in place between the switches, the FC/FCoE SAN configured, and all switch settings in place, use the following commands to validate the newly created configuration.

show interfaces status

Check the general status of the ports and links by using the show interfaces status command (Figure 38).

MXL_IOA_1#show interfaces status

Port     Description  Status  Speed       Duplex  Vlan
Te 0/1                Up      10000 Mbit  Full    20,1000

- section removed for sizing -

Te 0/51               Up      10000 Mbit  Full    --
Te 0/52               Up      10000 Mbit  Full    --

Figure 38 The show interfaces status command


show lldp neighbors

Verify all cables are mapped correctly by issuing the show lldp neighbors command. The output below (Figure 39) was captured after connecting IOA-1 to the Cisco Nexus pair in Configuration Three.

MXL_IOA_1#show lldp neighbors

Loc PortID  Rem Host Name  Rem Port Id  Rem Chassis Id
-------------------------------------------------------
Te 0/1      -                           5c:f9:dd:16:ef:03
Te 0/51     Nexus_5548-1   Eth1/1       54:7f:ee:53:3e:88
Te 0/52     Nexus_5548-1   Eth1/2       54:7f:ee:53:3e:89

Figure 39 The show lldp neighbors command

show fip-snooping config

On the PowerEdge IOA, the FIP snooping configuration can be checked by using the show fip-snooping config command. This shows that the fip-snooping feature is enabled and that the global configuration is set (Figure 40). It also displays the current default or configured FC-MAP value.

MXL_IOA_1#show fip-snooping config
FIP Snooping Feature enabled Status: Enabled
FIP Snooping Global enabled Status: Enabled
Global FC-MAP Value: 0X0EFC00
Maximum Sessions Per ENode Mac: 32

FIP Snooping enabled VLANs
VLAN    Enabled    FC-MAP
----    -------    --------
1000    TRUE       0X0EFC00

Figure 40 The show fip-snooping config command

show fip-snooping fcf

On the IOA, the FIP snooping FCF connection information can be checked by using the show fip-snooping fcf command. This command displays the FCF's MAC address, the connecting interface and the applicable VLAN configured for the connection (Figure 41).

MXL_IOA_1# show fip-snooping fcf
FCF MAC            FCF Interface  VLAN  FC-MAP    FKA_ADV_PERIOD  No. of ENodes
--------------------------------------------------------------------------------
00:05:73:dc:04:89  Po 1           1000  0e:fc:00  8000            1

Figure 41 The show fip-snooping fcf command


show fip-snooping enode

The FCoE-enabled connection to an ENode, or network adapter in a host server, can be checked by using the show fip-snooping enode command. This command displays the ENode MAC address, which port the ENode is connected to, the FCF's MAC address, the applicable VLAN configured for the connection and the FC-ID (Figure 42).

MXL_IOA_1# show fip-snooping enode
ENode MAC          ENode Interface  FCF MAC            VLAN  FC-ID
--------------------------------------------------------------------
5c:f9:dd:16:ef:03  Te 0/1           00:05:73:dc:04:89  1000  85:02:01

Figure 42 The show fip-snooping enode command

show fip-snooping sessions

On the Dell PowerEdge IOA, the FIP snooping sessions can be validated using the show fip-snooping sessions command. This command displays the following items for each FCoE session: the ENode MAC address and interface, the FCF MAC address and connecting interface, the FCoE VLAN in use, the FCoE MAC address, the FC-ID, and the Port WWPN and Port WWNN being used for the session (Figure 43). Note that the output wraps: the FCoE MAC, FC-ID, Port WWPN and Port WWNN columns appear on the second line.

MXL_IOA_1# show fip-snooping sessions
Enode MAC          Enode Intf  FCF MAC            FCF Intf  VLAN
------------------------------------------------------------------
5c:f9:dd:16:ef:03  Te 0/1      54:7f:ee:53:3e:ab  Po 1      1000

FCoE MAC           FC-ID     Port WWPN                Port WWNN
------------------------------------------------------------------------------
0e:fc:00:ed:00:20  ed:00:20  20:01:5c:f9:dd:16:ef:03  20:00:5c:f9:dd:16:ef:03

Figure 43 The show fip-snooping sessions results
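The FCoE MAC in this output is a fabric-provided MAC address (FPMA): the switch builds it by concatenating the FC-MAP with the FC-ID assigned at fabric login. The values from Figures 40 and 43 line up as follows.

FC-MAP (from show fip-snooping config):  0e:fc:00
FC-ID (assigned at login):               ed:00:20
FCoE MAC = FC-MAP + FC-ID:               0e:fc:00:ed:00:20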

show fip-snooping statistics

On the IOA, FIP snooping statistics can be reviewed using the show fip-snooping statistics command (Figure 44). This command displays the various counters applicable to the FIP snooping feature; specific VLANs and ports can be specified. If the configuration is complete but a FIP session is not established, check the statistics on the internal-facing interface. If only VLAN requests are increasing, this likely indicates a misconfiguration of the FCF MAC or Port WWPN setting in the upstream FCF ToR configuration. In a healthy, functioning FIP session, the ENode Keep Alive and VN Port Keep Alive counters increase periodically.

IOA-1# show fip-snooping statistics interface vlan 1000
Number of Vlan Requests                      :0
Number of Vlan Notifications                 :0
Number of Multicast Discovery Solicits       :7
Number of Unicast Discovery Solicits         :0
Number of FLOGI                              :5
Number of FDISC                              :0
Number of FLOGO                              :2
Number of ENode Keep Alive                   :30963
Number of VN Port Keep Alive                 :2757
Number of Multicast Discovery Advertisement  :39420
Number of Unicast Discovery Advertisement    :5


Number of FLOGI Accepts                      :5
Number of FLOGI Rejects                      :0
Number of FDISC Accepts                      :0
Number of FDISC Rejects                      :0
Number of FLOGO Accepts                      :2
Number of FLOGO Rejects                      :0
Number of CVL                                :1
Number of FCF Discovery Timeouts             :0
Number of ENode Mac Timeouts                 :2
Number of VN Port Session Timeouts           :0
Number of Session failures due to Hardware Config :0

IOA-1# show fip-snooping statistics interface tengigabitethernet 0/1
Number of Vlan Requests                      :3
Number of Vlan Notifications                 :0
Number of Multicast Discovery Solicits       :4
Number of Unicast Discovery Solicits         :0
Number of FLOGI                              :3
Number of FDISC                              :0
Number of FLOGO                              :2
Number of ENode Keep Alive                   :27197
Number of VN Port Keep Alive                 :2421
Number of Multicast Discovery Advertisement  :0
Number of Unicast Discovery Advertisement    :0
Number of FLOGI Accepts                      :0
Number of FLOGI Rejects                      :0
Number of FDISC Accepts                      :0
Number of FDISC Rejects                      :0
Number of FLOGO Accepts                      :0
Number of FLOGO Rejects                      :0
Number of CVL                                :0
Number of FCF Discovery Timeouts             :0
Number of ENode Mac Timeouts                 :0
Number of VN Port Session Timeouts           :0
Number of Session failures due to Hardware Config :0

Figure 44 The show fip-snooping statistics command

show interfaces dcbx detail

To show vital information on the DCBX configuration, type show interfaces dcbx detail.

This command (Figure 45) shows specific detailed information about the configuration that has been negotiated between the devices. In this example, interface TenGigabitEthernet 0/1 is used as a downlink to a host server and the network adapter installed in that system. Interfaces TenGigabitEthernet 0/51 and 0/52 are the uplinks into the top-of-rack FCF switch.

Note some key items in these results: "Is Configuration Source?" (TRUE or FALSE), "Local DCBX Compatibility mode" and "Peer Operating version". These values are important in understanding the negotiations happening between the switches and the network adapter.


IOA-1# show interfaces dcbx detail

E-ETS Configuration TLV enabled            e-ETS Configuration TLV disabled
R-ETS Recommendation TLV enabled           r-ETS Recommendation TLV disabled
P-PFC Configuration TLV enabled            p-PFC Configuration TLV disabled
F-Application priority for FCOE enabled    f-Application Priority for FCOE disabled
I-Application priority for iSCSI enabled   i-Application Priority for iSCSI disabled

------

Interface TenGigabitEthernet 0/1
Remote Mac Address 5c:f9:dd:16:ef:01
Port Role is Auto-Downstream
DCBX Operational Status is Enabled
Is Configuration Source? FALSE
Local DCBX Compatibility mode is AUTO
Local DCBX Configured mode is AUTO
Peer Operating version is CEE
Local DCBX TLVs Transmitted: ErPFi

Interface TenGigabitEthernet 0/51
Remote Mac Address 54:7f:ee:53:3e:88
Port Role is Auto-Upstream
DCBX Operational Status is Enabled
Is Configuration Source? TRUE
Local DCBX Compatibility mode is CIN
Local DCBX Configured mode is AUTO
Peer Operating version is CIN
Local DCBX TLVs Transmitted: ErPFi

Local DCBX Status
-----------------
DCBX Operational Version is 0
DCBX Max Version Supported is 0
Sequence Number: 3
Acknowledgment Number: 1
Protocol State: In-Sync

Peer DCBX Status:
-----------------
DCBX Operational Version is 0
DCBX Max Version Supported is 0
Sequence Number: 1
Acknowledgment Number: 3
13 Input PFC TLV pkts, 24 Output PFC TLV pkts, 0 Error PFC pkts
0 PFC Pause Tx pkts, 0 Pause Rx pkts
13 Input PG TLV Pkts, 24 Output PG TLV Pkts, 0 Error PG TLV Pkts
13 Input Appln Priority TLV pkts, 22 Output Appln Priority TLV pkts, 0 Error Appln Priority TLV Pkts
Total DCBX Frames transmitted 27


Total DCBX Frames received 13
Total DCBX Frame errors 0
Total DCBX Frames unrecognized 39

Interface TenGigabitEthernet 0/52
Remote Mac Address 54:7f:ee:53:3e:88
Port Role is Auto-Upstream
DCBX Operational Status is Enabled
Is Configuration Source? FALSE
Local DCBX Compatibility mode is AUTO
Local DCBX Configured mode is AUTO
Peer Operating version is AUTO
Local DCBX TLVs Transmitted: ErPFi

Figure 45 The show interfaces dcbx detail command

show interfaces pfc detail

The next command shows vital information on the DCBX-established PFC configuration. Type show interfaces pfc detail. This command (Figure 46) shows specific detailed information about the configuration that is negotiated between the devices. In this example, interface TenGigabitEthernet 0/1 is used as a downlink to a host server and the network adapter installed in that system. Interface TenGigabitEthernet 0/52 is the uplink into the top-of-rack FCF switch. Key items in these results are the priority list (3 in this deployment) and the Remote Willing Status, which should be enabled for the host network adapter and disabled for the FCF Nexus switch. Other important items are the PFC DCBX Oper status (should be Up) and the FCOE TLV Tx Status (should be enabled).

IOA-1# show interfaces pfc detail

Interface TenGigabitEthernet 0/1
Admin mode is on
Admin is enabled
Remote is enabled, Priority list is 3
Remote Willing Status is enabled
Local is enabled, Priority list is 3
Oper status is internally propagated
PFC DCBX Oper status is Up
State Machine Type is Symmetric
TLV Tx Status is enabled
PFC Link Delay 45556 pause quntams
Application Priority TLV Parameters:
------------------------------------
FCOE TLV Tx Status is enabled
ISCSI TLV Tx Status is disabled
Local FCOE PriorityMap is 0x8
Local ISCSI PriorityMap is 0x0
Remote FCOE PriorityMap is 0x8
Remote ISCSI PriorityMap is 0x10

11 Input TLV pkts, 28 Output TLV pkts, 5 Error pkts, 1202 Pause Tx pkts, 0 Pause Rx pkts

Interface TenGigabitEthernet 0/52
Admin mode is on
Admin is enabled
Remote is enabled, Priority list is 3


Remote Willing Status is disabled
Local is enabled, Priority list is 3
Oper status is recommended
PFC DCBX Oper status is Up
State Machine Type is Feature
TLV Tx Status is enabled
PFC Link Delay 45556 pause quntams
Application Priority TLV Parameters:
------------------------------------
FCOE TLV Tx Status is enabled
Local FCOE PriorityMap is 0x8
Remote FCOE PriorityMap is 0x8

0 Input TLV pkts, 1 Output TLV pkts, 0 Error pkts, 0 Pause Tx pkts, 12748 Pause Rx pkts

Figure 46 The show interfaces pfc detail command

show interfaces ets detail

The show interfaces ets detail command shows vital information on the ETS settings in the configuration, including specific detailed information about the ETS configuration that has been negotiated between the devices (Figure 47). In this example, interface TenGigabitEthernet 0/1 is used as a downlink to a host server and the network adapter installed in that system. Interface TenGigabitEthernet 0/52 is the uplink into the top-of-rack FCF switch.

Key items in these results are TC-grp, Priority#, Bandwidth and TSA. Note that only these two interfaces are shown here; typically this command lists all interfaces.

IOA-1# show interfaces ets detail

Interface TenGigabitEthernet 0/1
Max Supported TC Groups is 4
Number of Traffic Classes is 8
Admin mode is on

Admin Parameters:
-----------------
Admin is enabled

TC-grp  Priority#        Bandwidth  TSA
----------------------------------------
0       0,1,2,3,4,5,6,7  100%       ETS
1       -                -
2       -                -

Remote Parameters:
------------------
Remote is enabled

TC-grp  Priority#        Bandwidth  TSA
----------------------------------------
0       0,1,2,4,5,6,7    50%        ETS
1       3                50%        ETS
2       -                -
3       -                -


Remote Willing Status is enabled
Local Parameters:
-----------------
Local is enabled

TC-grp  Priority#        Bandwidth  TSA
----------------------------------------
0       0,1,2,4,5,6,7    50%        ETS
1       3                50%        ETS
2       -                -
3       -                -
4       -                -

Oper status is internally propagated
ETS DCBX Oper status is Up
State Machine Type is Feature
Conf TLV Tx Status is enabled

1 Input Conf TLV Pkts, 1 Output Conf TLV Pkts, 1 Error Conf TLV Pkts

Interface TenGigabitEthernet 0/52
Max Supported TC Groups is 4
Number of Traffic Classes is 8
Admin mode is on

Admin Parameters:
-----------------
Admin is enabled

TC-grp  Priority#        Bandwidth  TSA
----------------------------------------
0       0,1,2,3,4,5,6,7  100%       ETS
1       -                -
2       -                -
3       -                -

Remote Parameters:
------------------
Remote is enabled

TC-grp  Priority#        Bandwidth  TSA
----------------------------------------
0       0,1,2,4,5,6,7    50%        ETS
1       3                50%        ETS
2       -                -
3       -                -
4       -                -
5       -                -
6       -                -
7       -                -
15      -                -


Remote Willing Status is disabled
Local Parameters:
-----------------
Local is enabled

TC-grp  Priority#        Bandwidth  TSA
----------------------------------------
0       0,1,2,3,4,5,6,7  100%       ETS
1       -                -
2       -                -
3       -                -
4       -                -
5       -                -
6       -                -
7       -                -
15      -                -

Oper status is init
ETS DCBX Oper status is Up with Mismatch
State Machine Type is Feature
Conf TLV Tx Status is enabled

23 Input Conf TLV Pkts, 7 Output Conf TLV Pkts, 0 Error Conf TLV Pkts

Figure 47 The show interfaces ets detail command

After these validation steps, go into the disk management interface of the server and verify the SAN is configured appropriately. The server should have an available LUN to use for storage.


show vlt brief

This command gives a brief overview of the status of the VLT between the Dell MXLs or IOAs. The three key items to verify are ICL Link Status, HeartBeat Status and VLT Peer Status; all three should show an 'Up' status (Figure 48).

IOA-1#show vlt brief
VLT Domain Brief
------------------
Domain ID: 55
Role: Primary
Role Priority: 32768
ICL Link Status: Up
HeartBeat Status: Up
VLT Peer Status: Up
Local Unit Id: 1
Version: 6(3)
Local System MAC address: d0:67:e5:ac:ac:04
Remote System MAC address: 00:00:00:00:00:00
Remote system version: 0(0)
Delay-Restore timer: 90 seconds
Peer-Routing: Disabled
Peer-Routing-Timeout timer: 0 seconds
Multicast peer-routing timeout: 150 seconds

Figure 48 The show vlt brief command


8.2 Cisco Nexus 5548UP and MDS 9148 Validation

Note: In order to have a fully supported configuration, the cables or SFP+ transceivers used with the Cisco Nexus must be Cisco-branded products.

show lldp neighbors

To show information about the physical interfaces and the interfaces they are connected to, use the show lldp neighbors command. The output for this command is shown below (Figure 49).

5548-1# show lldp neighbors
Capability codes:
  (R) Router, (B) Bridge, (T) Telephone, (C) DOCSIS Cable Device
  (W) WLAN Access Point, (P) Repeater, (S) Station, (O) Other

Device ID     Local Intf  Hold-time  Capability  Port ID
Rack188-5524  mgmt0       120        BR          gi1/0/17
MXL_IOA_1     Eth1/1      120                    te0/51
MXL_IOA_1     Eth1/2      120                    te0/52
5548-2        Eth1/16     120        B           Eth1/16
5548-2        Eth1/17     120        B           Eth1/17
Total entries displayed: 5

Figure 49 The show lldp neighbors command

show interface vfc

Use the show interface vfc vfc-id command (where vfc-id is the VFC ID, for example 101) to show information about a virtual Fibre Channel interface (VFC). The output for show interface vfc 101 is shown in Figure 50.

5548-1#show interface vfc 101
vfc101 is trunking
Bound MAC is 5c:f9:dd:16:ef:07
Hardware is Ethernet
Port WWN is 20:64:54:7f:ee:53:3e:bf
Admin port mode is F, trunk mode is on
snmp link state traps are enabled
Port mode is TF
Port vsan is 2
Trunk vsans (admin allowed and active) (1-2)
Trunk vsans (up)                       (2)
Trunk vsans (isolated)                 ()
Trunk vsans (initializing)             (1)
1 minute input rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
1 minute output rate 0 bits/sec, 0 bytes/sec, 0 frames/sec
0 frames input, 0 bytes
0 discards, 0 errors
0 frames output, 0 bytes
0 discards, 0 errors
last clearing of "show interface" counters never

Figure 50 The show interface vfc 101 command


show interface brief

Use the show interface brief command to display the active ports and VFC interfaces (Figure 51). Verify that the ports that are expected to have links show up correctly.

NX5548K#show interface brief

--------------------------------------------------------------------------------
Interface  Vsan  Admin  Admin  Status  SFP  Oper  Oper    Port
                 Mode   Trunk               Mode  Speed   Channel
                        Mode                      (Gbps)
--------------------------------------------------------------------------------
fc2/1      2     NP     off    up      swl  NP    8
--------------------------------------------------------------------------------
Ethernet   VLAN  Type  Mode   Status  Reason  Speed   Port
Interface                                             Ch #
--------------------------------------------------------------------------------
Eth1/1     20    eth   trunk  up      none    10G(D)  20
Eth1/2     20    eth   trunk  up      none    10G(D)  20
--------------------------------------------------------------------------------
Port-channel  VLAN  Type  Mode   Status  Reason  Speed     Protocol
Interface
--------------------------------------------------------------------------------
Po20          20    eth   trunk  up      none    a-10G(D)  lacp
--------------------------------------------------------------------------------
Interface  Vsan  Admin  Admin  Status    Bind               Oper  Oper
                 Mode   Trunk            Info               Mode  Speed
                        Mode                                      (Gbps)
--------------------------------------------------------------------------------
vfc101     2     F      on     trunking  5c:f9:dd:16:ef:03  TF    auto

Figure 51 The show interface brief command


show npv status

Enter the show npv status command (Figure 52) to display the NP uplink interfaces. This command is used for verification of Configuration Two.

5548-1#show npv status
npiv is enabled
disruptive load balancing is disabled
External Interfaces:
====================
Interface: fc2/1, VSAN: 2, FCID: 0x850000, State: Up
Interface: fc2/2, VSAN: 2, FCID: 0x850200, State: Up
Number of External Interfaces: 2

Server Interfaces:
==================
Interface: vfc101, VSAN: 2, State: Up
Number of Server Interfaces: 1

Figure 52 The show npv status command

show npv flogi-table

The devices, server interfaces and their NP uplinks can be displayed with the show npv flogi-table command. The output shown in Figure 53 shows the devices logged into vsan 2. This command is used for verification of Configuration Two.

5548-1#show npv flogi-table
--------------------------------------------------------------------------------
SERVER                                                               EXTERNAL
INTERFACE  VSAN  FCID      PORT NAME                NODE NAME        INTERFACE
--------------------------------------------------------------------------------
fc2/1      2     0xed00ef  50:06:01:66:3e:e0:18:70  50:06:01:60:be:e0:18:70
fc2/2      2     0xed01ef  50:07:01:66:3e:e0:18:70  50:07:01:60:be:e0:18:70
vfc101     2     0x850001  20:01:5c:f9:dd:16:ef:03  20:00:5c:f9:dd:16:ef:03
Total number of flogi = 3.

Figure 53 The show npv flogi-table command

show fcns database

The devices, server interfaces and their NP uplinks can be displayed with the show fcns database command. The output shown in Figure 54 shows the initiator (scsi-fcp:init) and target logged into vsan 2. This command is run from either the Nexus 5548 in Configurations One and Three or from the Cisco MDS in Configuration Two.

MDS9148-1#show fcns database

VSAN 2:
--------------------------------------------------------------------------
FCID      TYPE  PWWN                     (VENDOR)    FC4-TYPE:FEATURE
--------------------------------------------------------------------------
0xed0020  N     20:01:5c:f9:dd:16:ef:03              scsi-fcp:init
0xed00ef  N     50:06:01:66:3e:e0:18:70  (Clariion)  scsi-fcp:target

Figure 54 The show fcns database command


show flogi database

Next, verify that negotiations have happened properly between the FCF and end devices (Figure 55). In this case, the Cisco Nexus 5548 is the FCF. Type show flogi database. This command is run from either the Nexus 5548 in Configurations One and Three or from the Cisco MDS in Configuration Two.

5548-1# show flogi database
-----------------------------------------------------------------------------
INTERFACE  VSAN  FCID      PORT NAME                NODE NAME
-----------------------------------------------------------------------------
fc2/1      2     0xed00ef  50:06:01:66:3e:e0:18:70  50:06:01:60:be:e0:18:70
vfc101     2     0xed0020  20:01:5c:f9:dd:16:ef:03  20:00:5c:f9:dd:16:ef:03

Figure 55 The show flogi database command

At this point, the VFC and FC interfaces should be populated in the FLOGI database. This command shows the devices that have done a valid FLOGI (fabric login) to the Cisco Nexus switch. The VFC should show the expected port and node WWN of the network adapter being used in the server.
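A convenient cross-check in this deployment is that the CNA derives its WWNs from its MAC address, so the FLOGI entries can be tied directly back to the adapter MAC recorded earlier. (The 20:01/20:00 prefixing below reflects this particular adapter's derivation scheme, not a universal rule.)

CNA FIP MAC:      5c:f9:dd:16:ef:03
Port WWN (WWPN):  20:01:5c:f9:dd:16:ef:03
Node WWN (WWNN):  20:00:5c:f9:dd:16:ef:03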

show zoneset active

The show zoneset active command shows the status of the zones that have been put into place, including the currently activated zoneset and all the participating zones with their individual members (Figure 56).

5548-1# show zoneset active
zoneset name set1SAN_A vsan 2
  zone name zone1SAN_A vsan 2
  * fcid 0xed0020 [pwwn 20:01:5c:f9:dd:16:ef:03]
  * fcid 0xed00ef [interface fc2/1 swwn 20:00:54:7f:ee:53:3e:80]

Figure 56 The show zoneset active command


show spanning-tree summary

Use the show spanning-tree summary command to check the spanning tree configuration (Figure 57).

Any blocking ports reported in the output of this command should be recognized. An unrecognized blocking port could be an indication that an unintentional cable loop has been created, which will need to be resolved.

5548-1#show spanning-tree summary
Switch is in rapid-pvst mode
Root bridge for: VLAN0001, VLAN0020, VLAN0030-VLAN0032, VLAN0088, VLAN1000
Port Type Default                        is disable
Edge Port [PortFast] BPDU Guard Default  is disabled
Edge Port [PortFast] BPDU Filter Default is disabled
Bridge Assurance                         is enabled
Loopguard Default                        is disabled
Pathcost method used                     is short
STP-Lite                                 is enabled

Name      Blocking  Listening  Learning  Forwarding  STP Active
--------  --------  ---------  --------  ----------  ----------
VLAN0001  0         0          0         1           1
VLAN0020  0         0          0         1           1
VLAN0030  0         0          0         1           1
VLAN0031  0         0          0         1           1
VLAN0032  0         0          0         1           1
VLAN0088  0         0          0         1           1
VLAN1000  0         0          0         1           1
--------  --------  ---------  --------  ----------  ----------
7 vlans   0         0          0         7           7

Figure 57 The show spanning-tree summary command


show vpc brief

Use the show vpc brief command to verify that the vPC connection is functioning (Figure 58). The three key lines are Configuration consistency status, Per-vlan consistency status and Type-2 consistency status. You are looking for success on all three.

5548-1#show vpc brief
Legend:
  (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                    : 55
Peer status                      : peer adjacency formed ok
vPC keep-alive status            : peer is alive
Configuration consistency status : success
Per-vlan consistency status      : success
Type-2 consistency status        : success
vPC role                         : primary
Number of vPCs configured        : 0
Peer Gateway                     : Disabled
Dual-active excluded VLANs       : -
Graceful Consistency Check       : Enabled
Auto-recovery status             : Enabled (timeout = 240 seconds)

vPC Peer-link status
-------------------------------------------------
id  Port  Status  Active vlans
--  ----  ------  -------------
1   Po55  up      1,20,30-32,88

Figure 58 The show vpc brief command


A Basic Terminology

CLI: Command line interface (CLI) is the text-based telnet, SSH, or serial interface that is used for entering commands into the Dell Networking MXL model switch. At release, the MXL is only configurable via the CLI.

CMC: The chassis management controller (CMC) is the module that controls the Dell PowerEdge™ M1000e blade server chassis. Through this controller, a telnet, SSH or serial based connection can be used to manage the MXL switch.

ETS: Enhanced Transmission Selection (ETS) is defined in the IEEE 802.1Qaz standard (IEEE, 2011). ETS supports allocation of bandwidth amongst traffic classes. It then allows for sharing of bandwidth when a particular traffic class does not fully utilize the allocated bandwidth. The management of the bandwidth allocations is done with bandwidth-allocation priorities, which coexist with strict priorities (IEEE, 2011).

FIP-Snooping: With FIP-Snooping enabled on the Dell Networking MXL model switch, FIP logins, solicitations, and advertisements are monitored. In this monitoring or snooping process, the switch gathers information pertaining to the ENode and FCF addresses. With this information, the switch will then place filters that only allow access to ENode devices that have logged-in successfully. This enables the FCoE VLAN to deny all other traffic except this lossless FCoE storage traffic.

The filtering process also secures the end-to-end path between the ENode device and the FCF. The ENode will only be able to talk with the FCF to which it has logged in.

FIP-Snooping Bridge (FSB): The industry term for a switch configured to perform FIP-Snooping, as described in the previous entry, is FSB or FIP-Snooping Bridge.

FCF: FCoE forwarders (FCFs) act as an Ethernet and FC switch combined. All typical termination functions that would occur on an FC switch occur on the FCF. FCFs provide VF_Ports and VE_Ports for their virtual FC interfaces.

FlexAddress: A virtual, unique, chassis assigned, slot persistent address that is substituted for the factory programmed protocol specific address on Ethernet and Fibre Channel devices within a FlexAddress enabled chassis.

IOM: I/O Module (IOM) refers to the modules on the rear of the Dell PowerEdge M1000e chassis that will receive and transmit I/O (Ethernet, FC, InfiniBand, etc.) from the blade servers. The Dell Networking MXL and Dell PowerEdge IOA are two switches for the M1000e blade server chassis.

MAC Address: A Media Access Control address (MAC address) is a layer-2 node identifier. In Ethernet bridging, MAC addresses are used for source and destination identification. They can also be used as system identifiers, since vendor-assigned (or burned-in) MAC addresses are globally unique. An Ethernet MAC address is 48 bits long and generally written in groupings of two hexadecimal digits separated by colons or hyphens, like this: 00:1e:c9:00:cb:01, but is sometimes written in groupings of four hexadecimal digits separated by periods, like this: 001e.c900.cb01.


NPIV: N-port identifier virtualization enables multiple N-port fabric logins at the same time on the same physical FC link (Cisco Systems, Inc., 2011). This term is in reference to the Cisco Nexus 5000 series switches' implementation of NPIV. NPIV must be enabled to share multiple logins across a single port/link or a port channel/multiple-link connection.

NPV: N-port Virtualizer is a FC aggregation method, which passes traffic through to end devices, while eliminating the need to use a domain ID for this device (Cisco Systems, Inc., 2011). This term is also in reference to configuration settings on the Cisco Nexus 5000 series switches.

PFC: Priority Flow Control (PFC) or Per-Priority Pause is defined in the IEEE 802.1Qbb standard. PFC is flow control based on priority settings and adds additional information to the standard pause frame. The additional fields, which are added to the pause frame, allow devices to pause traffic on a specific priority instead of pausing all traffic. (IEEE, 2009) Pause frames will be initiated by the FCF in most cases when its receive buffers are starting to reach a congested point. With PFC, traffic is paused instead of dropped and retransmitted. This provides the lossless network behavior necessary for FC packets to be encapsulated and passed along the Ethernet paths.

ToR: Top of Rack (ToR) is a term for a switch that is actually positioned at the top of a server rack in a data center.

VLAN: Virtual Local Area Network (VLAN) is a single layer-2 network (also called a broadcast domain, as broadcast traffic does not escape a VLAN on its own). Multiple VLANs can be passed between switches using switchport trunk interfaces. When passed across trunk links, frames in a VLAN are prefixed with the number of the VLAN that they belong to—a twelve-bit value that allows just over 4000 differently numbered VLANs.

vPC: vPC refers to the combined PortChannel between the vPC peer devices and the downstream device.

vPC peer switch: The vPC peer switch is one of a pair of switches that are connected to the special PortChannel known as the vPC peer link. One device will be selected as the primary device, and the other will be the secondary device.

vPC peer link: The vPC peer link is the link used to synchronize states between the vPC peer devices. The vPC peer link carries control traffic between the two vPC switches, as well as multicast and broadcast data traffic. In some link failure scenarios, it also carries unicast traffic. You should have at least two 10 Gigabit Ethernet interfaces for peer links.

vPC domain: This domain includes both vPC peer devices, the vPC peer keepalive link, and all the PortChannels in the vPC connected to the downstream devices. It is also associated with the configuration mode that you must use to assign vPC global parameters.

vPC peer keep-alive link: The peer keepalive link monitors the vitality of a vPC peer switch. The peer keepalive link sends periodic keepalive messages between vPC peer devices. The vPC peer keepalive link can be a management interface or switched virtual interface (SVI). No data or synchronization traffic moves over the vPC peer keepalive link; the only traffic on this link is a message that indicates that the originating switch is operating and running vPC.

vPC member port: vPC member ports are interfaces that belong to the vPCs.


VSAN: Virtual SAN is a logical partitioning of physical connections to provide for fabric or SAN separation. VSAN is a term that is particular to the Cisco Nexus series switches.

WWNN (World Wide Node Name): A unique identifier that is assigned to a manufacturer and hard-coded into an FC switch. The WWNN is used to identify a switch.

WWPN (World Wide Port Name): A unique identifier that is assigned to a manufacturer and hard-coded into an FC switch. The WWPN is used to identify an individual port on a switch.


B References

Cisco Systems, Inc. (2011). Cisco Nexus 5000 Series NX-OS SAN Switching Configuration Guide, Release 5.1(3)N1(1). San Jose: Cisco Systems, Inc. http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/san_switching/513_n1_1/b_Cisco_n5k_nxos_sanswitching_config_guide_rel513_n1_1.html

Cisco Systems, Inc. (2010, 01). Cisco 4000 Series Design Guide. Retrieved 06 19, 2012, from Cisco.com: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps10596/deployment_guide_c07-574724.html

Fibre Channel over Ethernet Initialization Protocol, Cisco, http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/white_paper_c11-560403.html

Cisco Systems, Inc. (2009). Configuring SAN Port Channels. Retrieved 7 15, 2013, from Cisco.com - Cisco Nexus 5000 Series NX-OS SAN Switching Configuration Guide: http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/san_switching/Cisco_Nexus_5000_Series_NX-OS_SAN_Switching_Configuration_Guide_chapter7.html

Cisco Systems, Inc. (2009, 07). Virtual PortChannel Quick Configuration Guide. Retrieved Jun 18, 2012, from Cisco.com: http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9670/configuration_guide_c07-543563.html

http://www.cisco.com/c/en/us/products/collateral/switches/nexus-5000-series-switches/data_sheet_c78-618603.html

Cisco Nexus 5000 Series NX-OS SAN Switching Configuration Guide, Release 5.2(1)N1(1). http://www.cisco.com/en/US/docs/switches/datacenter/nexus5000/sw/san_switching/521n11/b_5k_SAN_Switching_Config_521N11.html

IEEE. (2008, November 24). 802.1Qaz/D0.2. Draft Standard for Local and Metropolitan Area Networks – Virtual.

IEEE. (2009, Feb 9). 802.1Qbb/D1.0. Draft Standard for Local and Metropolitan Area Networks – Virtual.

IEEE. (2011, Sep 15). 802.1Qaz - Enhanced Transmission Selection. Retrieved May 30, 2012, from IEEE802.org: http://www.ieee802.org/1/pages/802.1az.html

IEEE Data Center Bridging Task Group http://www.ieee802.org/1/pages/dcbridges.html

DCB Capability Exchange Protocol Base Specification Rev 1.01, IEEE Data Center Bridging Task Group. http://www.ieee802.org/1/files/public/docs2008/az-wadekar-dcbx-capability-exchange-discovery-protocol-1108-v1.01.pdf

DCBX TLV Summary http://www.ieee802.org/1/files/public/docs2009/az-pelissier-dcbxtlvs-0309.pdf


T11/08-264v0 FCoE: Increasing FCoE Robustness using FIP-Snooping and FPMA, T11, http://www.t11.org/ftp/t11/pub/fc/bb-5/08-264v0.pdf

T11/09-291v0 FIP VLAN discovery updates, T11, http://www.t11.org/ftp/t11/pub/fc/bb-5/09-291v1.pdf


C Attachments

This document contains the following attachments:

Nexus_Fabric_Mode_with_Brand_Varied_MC-LAG_Architecture.pdf Configuration sheets for the Nexus Fabric Mode with Brand Varied MC-LAG Architecture configuration.

Dell_MXL_or_IOAs_in_Nexus_Fabric_Mode.pdf Configuration sheets for the Dell MXL or IOAs in Nexus Fabric Mode configuration.

Dell_MXL_or_IOA_in_Nexus_NPV_Mode_with_Cisco_MDS_9148.pdf Configuration sheets for the Dell MXL or IOA in Nexus NPV Mode with Cisco MDS 9148 configuration.

Configuring_BCM57810_With_Ctrl+S.pdf Configuring a Broadcom BCM57810 Network Adapter using the Ctrl+S utility.

Configuring_QME8262_With_Ctrl+Q.pdf Configuring a QLogic QME8262 Network Adapter using the Ctrl+Q utility.

Configuring_QME8262_With_LifeCycle_Controller.pdf Configuring a QLogic QME8262 Network Adapter using the LifeCycle Controller.

VSAN_VFC_Port_Zone_Workbook.xlsx A spreadsheet to record VSANs, VLANs, ports, zones and other settings required when using this deployment guide.

Support and Feedback

Contacting Technical Support

Support Contact Information Web: http://Support.Dell.com/

Telephone: USA: 1-800-945-3355

Feedback for this document

We encourage readers of this publication to provide feedback on the quality and usefulness of this deployment guide by sending an email to [email protected].
