We’re ready. Are you? — FCoE for Small and Mid-size Enterprise
Hernan Vukovic, Consulting Systems Engineer

BRKSAN-2101

The Session Objectives:

• Provide a refresh of FCoE and DCBX
• Understand the basic FCoE implementation on Nexus 5K
• FCoE design options for small and mid-size enterprise
• FCoE deployment with Cisco Unified architecture
• Step-by-step configuration examples

The Session Non-Objectives:

• Nexus hardware architecture deep dive
• UCS storage architecture
• SAN distance extension using FCoE
• FCoE or iSCSI: which is better for small and mid-size customers?

Related Sessions

• BRKCOM-2007 - UCS Storage Integration, Technologies, and Topologies

Traditional Data Center Design: Ethernet LAN and FC SAN
[Diagram: two separate networks - an L3/L2 Ethernet LAN reached via NIC, and dual FC SAN fabrics 'A' and 'B' reached via HBA]

Agenda

• Introduction to FCoE Technology
• FCoE SAN Design for Small and Mid-size Enterprise
• Basic FCoE Configuration and Troubleshooting
• Conclusion

Block Storage Protocols (FC/FCoE/iSCSI)

What is a SAN?

LAN (TCP/IP)

SAN (Fibre Channel, iSCSI, FCoE)

A dedicated network that provides access to consolidated, block-level data storage.

The SCSI I/O Transaction

• The SCSI protocol defines a bus-based system used to carry block-based storage commands
• The channel provides connectivity between server and storage

The following shows two sample SCSI exchanges:

[Diagram: SCSI READ - the host (initiator) issues a READ over the SCSI I/O channel; the disk (target) returns DATA frames followed by STATUS]
[Diagram: SCSI WRITE - the host issues a WRITE; the host then sends DATA frames and the target returns STATUS]

SAN Protocols: Network Stack Comparison

  SCSI:          SCSI directly over the physical wire
  iSCSI:         SCSI / iSCSI / TCP / IP / Ethernet
  FCoE:          SCSI / FCP / FC / FCoE / Lossless Ethernet
  Fibre Channel: SCSI / FCP / FC

Block Storage Networking Protocols

FC
• SCSI transport protocol that operates over Fibre Channel
• FC frames with the SCSI CDB payload are transported over Fibre Channel Protocol
• Works on Fibre Channel switches
• OXID/RXID generated for every I_T pair conversation
• Needs zoning
• Runs on dedicated lossless Fibre Channel networks
• Limited by distance
• Well suited for latency-sensitive and high-I/O applications

FCoE
• Mapping of Fibre Channel frames over Ethernet
• Fibre Channel is enabled to run on a lossless Ethernet network
• Works on Ethernet switches with FCF capability
• OXID/RXID generated for every I_T pair conversation
• Needs zoning
• Uses Ethernet and needs a lossless network
• Limited by distance
• Reduces the TCO of the fabrics by preserving the advantages of FC networks

iSCSI
• SCSI transport protocol that operates over TCP
• Encapsulation of SCSI command descriptor blocks and data in TCP/IP byte streams
• Works on any Ethernet switch
• ISID/TSID generated for every I_T pair conversation
• Zoning not required
• Works on TCP and is subject to losses in the network
• No distance limitations
• Well suited for applications with lower I/O requirements, while reducing TCO

Unified Fabric and FCoE: All Data Accessed over a Common Fabric
[Diagram: host stacks for iSCSI appliance/gateway, NAS appliance/gateway, FC SAN, and FCoE SAN - each running from application through file system, volume manager or I/O redirector, SCSI device driver, and the protocol driver (iSCSI, NFS/CIFS, FC, or FCoE) down to NIC, FC HBA, or CNA, all converging on a unified fabric carrying both block I/O and file I/O]

Basics of FCoE: What is Fibre Channel over Ethernet (FCoE)?

What is FCoE:
• Mapping of FC frames over Ethernet
• Enables FC on a lossless Ethernet network
• Interoperates with existing Fibre Channel; management of SANs remains constant
• No gateway

FCoE Benefits:
• Fewer cables: both block I/O and Ethernet traffic co-exist on the same cable
• Fewer adapters needed
• Overall less power

FCoE Protocol Fundamentals: Protocol Organization - Data and Control Plane

FC-BB-5 defines two protocols required for an FCoE enabled Fabric
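As an illustration of this data/control split, the two protocols are demultiplexed purely by Ethernet Ethertype. A minimal Python sketch (the function name is illustrative; the Ethertype values are the IEEE assignments listed on the following slide):

```python
FCOE_ETHERTYPE = 0x8906  # data plane: encapsulated FC frames
FIP_ETHERTYPE = 0x8914   # control plane: FCoE Initialization Protocol

def classify(ethertype: int) -> str:
    """Demultiplex an incoming frame into one of the two FC-BB-5 protocols."""
    if ethertype == FCOE_ETHERTYPE:
        return "FCoE data"
    if ethertype == FIP_ETHERTYPE:
        return "FIP control"
    return "other"

print(classify(0x8906))  # FCoE data
print(classify(0x8914))  # FIP control
```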

FCoE
• Data plane protocol
• Used to carry most of the FC frames and all the SCSI traffic
• Uses a Fabric Provided MAC Address (dynamic): FPMA
• IEEE-assigned Ethertype for FCoE traffic is 0x8906

FIP (FCoE Initialization Protocol)
• Control plane protocol
• Used to discover the FC entities connected to an Ethernet cloud
• Also used to log in to and log out from the FC fabric
• Uses the unique BIA on the CNA for its MAC address
• IEEE-assigned Ethertype for FIP traffic is 0x8914

FCoE Protocol: FCoE 'is' Fibre Channel
• FCoE is a standard: on June 3rd 2009, the FC-BB-5 working group of T11 completed its work and unanimously approved a final standard for FCoE
• Fibre Channel over Ethernet provides a high-capacity, lower-cost transport option for block-based storage
• Two protocols defined in the standard: FCoE (data plane) and FIP (control plane)

FCoE Protocol Fundamentals: FCoE Frame Format

[Diagram: FCoE frame layout - Ethernet header (destination MAC address, source MAC address, IEEE 802.1Q tag, Ethertype = FCoE), FCoE header (version plus reserved bits, SOF), the encapsulated FC frame with its CRC, EOF, reserved padding, and the Ethernet FCS]

FCoE Protocol Fundamentals: FCoE Initialization Protocol (FIP)

• Neighbor discovery and configuration (VN to VF, and VE to VE) between the Enode (initiator) and the FCoE switch (FCF)

Step 1: FCoE VLAN Discovery
• FIP sends out a multicast to the ALL_FCF_MAC address looking for the FCoE VLAN
• FIP VLAN discovery frames use the native VLAN

Step 2: FCF Discovery
• FIP sends out a multicast to the ALL_FCF_MAC address on the FCoE VLAN to find the FCFs answering for that FCoE VLAN
• FCFs respond back with their MAC address

Step 3: Fabric Login (FLOGI/FDISC and Accept)
• FIP sends a FLOGI request to the FCF_MAC found in Step 2
• Establishes a virtual link between host and FCF
• The FCF assigns the host an Enode MAC address to be used for FCoE forwarding
• FC commands and responses then flow over the FCoE protocol

** FIP does not carry any Fibre Channel data frames

What Happens After FLOGI

• The FCoE protocol carries the FC model
• Name server registration/query
• PLOGI from initiator to target (accepted), then PRLI (accepted)
• SCSI commands and data transmission

[Diagram: after Report LUNs and a SCSI Inquiry per LUN, a read is a SCSI Command (Read) answered with SCSI_FCP_DATA and a good STATUS; a write is a SCSI Command (Write) followed by SCSI_FCP_DATA from the initiator and a good STATUS from the target]

Recap of Fibre Channel Concepts

[Diagram: a Fibre Channel fabric connecting a host system (SCSI over an FC HBA, the initiator) to a disk array (the target); fabric services include the name server (WWNs), fabric controller (FCIDs), FSPF, and zone server]

Fibre Channel Addressing

FC addressing:
• 64-bit WWNs are burnt-in unique addresses assigned to fabric switches, ports, and nodes by the manufacturer
• These addresses are registered in the fabric and mapped to a 24-bit FC_ID (8 bits Domain, 8 bits Area, 8 bits Device)

FCoE Addressing Scheme:
• A fabric-provided MAC address (FPMA) is assigned to each Enode
• The Enode MAC is composed of an FC-MAP and the FCID
• FCoE forwarding decisions are still made based on FSPF and the FCID within the Enode MAC
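The FPMA composition described above can be sketched in Python; the helper name is illustrative, and 0E-FC-00 is used as the FC-MAP as in the slide's example:

```python
def fpma(fc_map: int, fc_id: int) -> str:
    """Fabric-Provided MAC Address: the upper 24 bits are the FC-MAP,
    the lower 24 bits are the FC_ID assigned at fabric login."""
    mac = (fc_map << 24) | fc_id
    return ":".join(f"{(mac >> shift) & 0xFF:02x}" for shift in range(40, -1, -8))

# FC-MAP 0E-FC-00 combined with FC_ID 10.00.01
print(fpma(0x0EFC00, 0x100001))  # 0e:fc:00:10:00:01
```

Because the FC_ID sits in the low bits of the MAC, an FCF can make its FSPF-based forwarding decision directly from the destination MAC of an FCoE frame.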

• FC switches assign FC_ID addresses to N_Ports
• FSPF forwarding decisions are made on the domain ID
[Diagram: FCoE MAC = FC-MAP (0E-FC-xx, upper 24 bits) | FC-ID (e.g., 10.00.01, lower 24 bits)]

Basic Fibre Channel Port Types

Native Fibre Channel:
• FC switch to FC switch: E_Port to E_Port
• FC switch to NPV switch: F_Port to NP_Port
• FC switch to node: F_Port to N_Port

FCoE:
• FCF to FCF: VE_Port to VE_Port
• FCF to FCoE_NPV switch: VF_Port to VNP_Port
• FCoE switch (FCF) to end node: VF_Port to VN_Port

Virtual SANs (VSANs) on the Cisco MDS 9000 Family
• A VSAN provides a method to allocate ports within a physical fabric to create virtual fabrics
• Analogous to VLANs in Ethernet
• Virtual fabrics are created from a larger, cost-effective, redundant physical fabric
• Physical SAN islands are virtualized onto a common SAN infrastructure
• Reduces the wasted ports of the island approach
• Fabric events are isolated per VSAN, which maintains isolation for HA (e.g., RSCNs)

Fibre Channel Zoning

• Zones are the basic form of data path security in the FC/FCoE fabric
• Zone members can only "see" and talk to other members of the zone
• Devices can be members of more than one zone
• Default zoning is "deny"
• Zones belong to a zoneset
• A zoneset must be "active" to enforce zoning
• Only one active zoneset per fabric, or per VSAN
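The deny-by-default rule above can be sketched as a membership check over the active zoneset. The pwwns are taken from the slide's example; the data model and helper name are illustrative:

```python
# Active zoneset modeled as: zone name -> set of member pwwns
active_zoneset = {
    "Z_FC1_b1_FC1_e1_V1": {
        "10:00:00:00:c9:76:fd:31",  # initiator
        "50:06:01:61:3c:e0:1a:f6",  # target
    },
}

def can_talk(pwwn_a: str, pwwn_b: str, zoneset: dict) -> bool:
    """Default zoning is deny: two devices may communicate only if
    some zone in the active zoneset contains both of them."""
    return any(pwwn_a in zone and pwwn_b in zone for zone in zoneset.values())

print(can_talk("10:00:00:00:c9:76:fd:31", "50:06:01:61:3c:e0:1a:f6", active_zoneset))  # True
```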

[Diagram: a SAN with an FCF (domain ID 10); hosts Host1 and Host2 and disks Disk1-Disk4 are grouped into ZoneA, ZoneB, and ZoneC; the initiator pwwn is 10:00:00:00:c9:76:fd:31 and the target pwwn is 50:06:01:61:3c:e0:1a:f6]

zoneset name ZONESET_V1 vsan 1
  zone name Z_FC1_b1_FC1_e1_V1 vsan 1
    fcid 0x10.00.01 [pwwn 10:00:00:00:c9:76:fd:31] [initiator]
    fcid 0x11.00.01 [pwwn 50:06:01:61:3c:e0:1a:f6] [target]

Fibre Channel Flow Control

• Buffer-to-buffer (B2B) credits are used to ensure that FC transport is lossless
• The number of credits is negotiated between ports when the link is brought up
• Each side informs the other of the number of buffer credits it has: F_Ports in the Fabric Login (FLOGI), E_Ports in the Exchange Link Parameters (ELP)
• The credit count is decremented with each packet placed on the wire, independent of packet size
• If the credit count reaches 0, packet transmission stops
• The credit count is incremented with each R_RDY received
• B2B credits need to be taken into consideration as distance and/or bandwidth increases
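A minimal sketch of the credit counter semantics described above (the class name is illustrative):

```python
class B2BCredit:
    """Buffer-to-buffer credit counter: decremented once per frame sent
    (regardless of frame size), replenished by each R_RDY received."""

    def __init__(self, credits: int):
        self.credits = credits

    def send_frame(self) -> bool:
        if self.credits == 0:
            return False  # transmitter stalls until an R_RDY arrives
        self.credits -= 1
        return True

    def receive_r_rdy(self) -> None:
        self.credits += 1

link = B2BCredit(credits=2)
assert link.send_frame() and link.send_frame()
assert not link.send_frame()  # out of credits: no loss, just no transmission
link.receive_r_rdy()
assert link.send_frame()
```

The stall at zero credits is what makes the transport lossless: a frame is never sent unless the receiver has advertised a buffer to hold it.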

DCB and QoS Standards for FCoE

FCoE is fully defined in the FC-BB-5 standard. FCoE works alongside additional technologies to make I/O consolidation a reality:

• FCoE (T11 FC-BB-5) - FC on lossless Ethernet and other network media: technically stable October 2008, completed June 2009, published May 2010
• PFC - lossless Ethernet (IEEE 802.1Qbb): sponsor ballot July 2010, published Fall 2011
• ETS - priority grouping (IEEE 802.1Qaz): sponsor ballot October 2010, published Fall 2011
• DCBX - configuration verification (IEEE 802.1Qaz): sponsor ballot October 2010, published Fall 2011

Data Center Bridging (DCB): Can Ethernet Be Lossless?

• DCB features extend Ethernet capabilities to the data center by ensuring delivery over lossless fabrics and I/O convergence.

• The three main features of the DCB architecture are:

Priority Flow Control (PFC): a priority pause mechanism that can be controlled independently for each class of network service on a shared multiprotocol link

Enhanced Transmission Selection (ETS): defines a common management framework to assign bandwidth to each class of network service on shared links

Data Center Bridging Exchange (DCBX): discovery and exchange of capabilities to ensure consistent configuration between network neighbors

DCBX: Data Center Bridging eXchange

IEEE 802.1Qaz

• Allows network devices to advertise their identities and capabilities over the network
• Enables hosts to pick up the proper configuration from the network
• Enables switches to verify proper configuration
• Provides support for: PFC, ETS, and applications (e.g., FCoE)

PFC: Priority Flow Control (IEEE 802.1Qbb)

• The VLAN tag (802.1p) enables 8 priorities for Ethernet traffic
• PFC enables flow control on a per-priority basis using PAUSE frames, so lossless and lossy priorities can coexist at the same time on the same wire
• Allows FCoE to operate over a lossless priority, independent of the other priorities

ETS: Enhanced Transmission Selection (IEEE 802.1Qaz)

• Allows you to create priority groups
• Can guarantee bandwidth
• Can assign bandwidth percentages to groups
• Not all priorities need to be used or placed in groups

[Diagram: an 80%/20% bandwidth split between FCoE and other Ethernet traffic on the same wire]
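Under congestion, ETS divides the link according to the configured group percentages. A sketch of that arithmetic (the function name is illustrative, and borrowing of unused bandwidth by other groups is not modeled):

```python
def ets_share(groups: dict, link_gbps: float) -> dict:
    """Guaranteed bandwidth per priority group when the link is congested."""
    total = sum(groups.values())
    return {name: link_gbps * pct / total for name, pct in groups.items()}

# The 80/20 split from the diagram, on a 10G link
print(ets_share({"fcoe": 80, "ethernet": 20}, 10.0))  # {'fcoe': 8.0, 'ethernet': 2.0}
```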

DCBX Basics

• DCBX negotiates priority-based flow control (PFC) and the priorities for Layer 2 and Layer 4 applications such as FCoE, iSCSI, and RoCE.

. Via DCBX exchange, the switch can:

o Discover the DCB capabilities of peers.

o Detect DCB feature misconfiguration or mismatches between peers.

o Configure DCB features on peers if the peer is configured as “willing” to learn the configuration from other peers.
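The "willing" behavior just described can be sketched as a simple resolution rule (the function name is illustrative; per-TLV negotiation details are omitted):

```python
def resolve_dcbx(local_cfg, local_willing, peer_cfg, peer_willing):
    """A device advertising 'willing' adopts its peer's configuration
    when the peer is not willing; otherwise it keeps its own."""
    if local_willing and not peer_willing:
        return peer_cfg
    return local_cfg

# A host CNA (willing) learns the switch's PFC setting for priority 3
print(resolve_dcbx({"pfc": []}, True, {"pfc": [3]}, False))  # {'pfc': [3]}
```

This is why a CNA usually needs no storage-specific QoS configuration of its own: it simply learns the no-drop class from the access switch.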

• To enable DCBX negotiation for applications, the applications must be configured and mapped to IEEE 802.1p code points in an application map, and the application map applied to interfaces.

Cisco Nexus 5K QoS Processing Flow

[Diagram: Nexus 5K QoS pipeline - at the ingress UPC, traffic classification trusts CoS/DSCP and matches L2/L3/L4 info with ACLs, followed by marking, policing, MTU checking (packets are truncated or dropped if the MTU is violated), and per-class buffer usage monitoring; if buffer usage crosses the threshold, drop classes tail-drop while no-drop classes assert a PAUSE signal to the MAC; a central scheduler serves VoQs for unicast (8 per ingress port) and multicast queues across the crossbar fabric; at the egress UPC, marking, policing, ECN, and strict-priority plus DWRR scheduling feed the egress queues]

Nexus QoS: Priority Flow Control and No-Drop Queues

• Default queuing buffer for PFC generation:

Configs for the no-drop class:

  Platform   Buffer size     Pause threshold (XOFF)   Resume threshold (XON)
  N5500      79360 bytes     40320 bytes              20480 bytes
  N5600      165120 bytes    88320 bytes              62720 bytes

• Supports tuning the no-drop distance for switch-to-switch ISLs between FCoE Forwarders
• Tuning of the lossless queues to support a variety of use cases
• Extended switch-to-switch no-drop traffic lanes
• Support for FCoE long distance
• Increased number of no-drop service lanes (4) for RDMA and other multi-queue HPC and compute applications
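The buffer numbers scale with the traffic in flight during a pause round trip. A rough back-of-the-envelope sketch (assuming roughly 5 µs/km propagation in fiber; frame and processing overheads ignored):

```python
def bytes_in_flight(distance_km: float, rate_gbps: float = 10.0) -> float:
    """Bytes that must be absorbed after sending a PFC pause: the pause
    must propagate to the far end, and traffic already on the wire keeps
    arriving, so size the buffer for a full round trip."""
    rtt_seconds = 2 * distance_km * 5e-6  # ~5 microseconds per km in fiber
    return rate_gbps * 1e9 / 8 * rtt_seconds

# 3 km at 10 Gbps: ~37.5 KB already on the wire, before overheads
print(round(bytes_in_flight(3.0)))  # 37500
```

This is consistent with the larger buffer configured for the 3 km no-drop class in the example that follows.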

5548-FCoE(config)# policy-map type network-qos 3km-FCoE
5548-FCoE(config-pmap-nq)# class type network-qos 3km-FCoE
5548-FCoE(config-pmap-nq-c)# pause no-drop buffer-size 152000 pause-threshold 103360 resume-threshold 83520

Nexus QoS: Enhanced Transmission Selection Bandwidth Management

• When FCoE is configured, by default each class is given 50% of the available bandwidth
• Can be changed through QoS settings when higher demands for certain traffic exist (e.g., HPC traffic, more Ethernet NICs)
[Diagram: CNAs presenting a 5 Gig FC vHBA and a 5 Gig Ethernet vNIC]

N5k-1# show queuing interface Ethernet 1/18
Ethernet1/18 queuing information:
  TX Queuing
    qos-group  sched-type  oper-bandwidth
        0         WRR           50
        1         WRR           50

Nexus QoS: QoS Policy Types

• There are three QoS policy types used to define system behavior: qos, queuing, and network-qos
• There are three policy attachment points: the ingress interface, the system as a whole (defines global behavior), and the egress interface

  Policy Type   Function                                              Attach Points
  qos           Define traffic classification rules                   system qos, ingress interface
  queuing       Strict priority queue, Deficit Weighted Round Robin   system qos, egress interface, ingress interface
  network-qos   System class characteristics (drop or no-drop, MTU),  system qos
                buffer size, marking

System Default QoS Setting for FCoE

tme-5548up-1# show running-config ipqos

!Command: show running-config ipqos
!Time: Thu May 28 18:11:53 2015

version 7.0(1)N1(1)
system qos
  service-policy type queuing input fcoe-default-in-policy
  service-policy type queuing output fcoe-default-out-policy
  service-policy type qos input fcoe-default-in-policy
  service-policy type network-qos fcoe-default-nq-policy

System Default QoS Setting for FCoE

tme-5548up-1# show policy-map system type network-qos

  Type network-qos policy-maps
  ============================
  policy-map type network-qos fcoe-default-nq-policy
    class type network-qos class-fcoe
      match qos-group 1
      pause no-drop
      mtu 2158
    class type network-qos class-default
      match qos-group 0
      mtu 1500
      multicast-optimize

tme-5548up-1# show policy-map system type queuing input

  Service-policy (queuing) input: fcoe-default-in-policy
    policy statistics status: disabled
    Class-map (queuing): class-fcoe (match-any)
      Match: qos-group 1
      bandwidth percent 50
    Class-map (queuing): class-default (match-any)
      Match: qos-group 0
      bandwidth percent 50

System Default QoS Setting for FCoE

tme-5548up-1# show policy-map system type queuing output

  Service-policy (queuing) output: fcoe-default-out-policy
    policy statistics status: disabled
    Class-map (queuing): class-fcoe (match-any)
      Match: qos-group 1
      bandwidth percent 50
    Class-map (queuing): class-default (match-any)
      Match: qos-group 0
      bandwidth percent 50

tme-5548up-1# show policy-map system type qos

  Service-policy (qos) input: fcoe-default-in-policy
    policy statistics status: disabled
    Class-map (qos): class-fcoe (match-any)
      Match: cos 3
      set qos-group 1
    Class-map (qos): class-default (match-any)
      Match: any
      set qos-group 0

Agenda

• Introduction to FCoE Technology
• FCoE SAN Design for Small and Mid-size Enterprise
• Basic FCoE Configuration and Troubleshooting
• Conclusion

Requirements of Small and Mid-size SANs

• I/O consolidation: the need for converged networks that use the same links for SAN and LAN requirements

• Performance and flexibility: use the existing Ethernet network infrastructure for a high-performance SAN

• Cost: price might be a factor; cost efficiency compared with a dedicated SAN

• Growth with scale: scalability must be handled without compromising performance or adding much cost

• Data security: high availability and resiliency

Think About FCoE SAN

Convergence: multi-protocol support to provide the flexibility for I/O consolidation
Bandwidth: resource sharing amongst multiple application workloads
Scalability: meet the increasing demands of network and data without compromising performance
Fabric Management: a unified management tool to maintain, troubleshoot, and manage the fabric
Latency: low-latency, high-throughput networks for sensitive workloads
High Availability: hardware and software redundancy at the server, fabric, and storage levels

FCoE on Cisco Nexus Switches

• High convergence: multi-protocol support, flexible for I/O consolidation
• Scalable networks: low latency and predictable performance
• QoS, DCB, FCoE TLV: easy segregation and prioritization of storage traffic
• Highly secure networks: RBAC, AAA, ACL, TrustSec
• 10/40/100Gb high-speed interconnects: Ethernet, FC, FCoE
• Common operating system (NX-OS) and common management tool (DCNM)
• High availability and data resiliency: VDC, hardware redundancy, vPC, FabricPath

FCoE Design Considerations

Traditional Data Center Design: Ethernet LAN and Fibre Channel SAN

• Physical and logical separation of LAN and SAN traffic
• Additional physical and logical separation of SAN fabrics
[Diagram: an L3/L2 Ethernet LAN alongside FC Fabric 'A' and Fabric 'B']

Data Center Design with E-SAN: Ethernet LAN and Ethernet SAN
• Same topologies as existing networks, but using Nexus Unified Fabric Ethernet switches for the SANs
• Physical and logical separation of LAN and SAN traffic
• Additional physical and logical separation of SAN fabrics
• The Ethernet SAN fabric carries FC/FCoE and IP-based storage (iSCSI, NAS, ...)
• Common components: Ethernet NIC or CNA
[Diagram: an L3/L2 LAN plus FCoE Fabric 'A' and Fabric 'B' built from Ethernet switches]

Converged Network: Unified Access Layer
• Network convergence occurs at the access layer
• Consolidate I/O on 10G links
• Drastically reduced CapEx and OpEx
• Multiprotocol connectivity eased purchasing decisions for server refreshes
• Prepared data centers for VM mobility requirements: any VM could connect to FC storage if necessary, not just the ones with HBAs pre-installed
[Diagram: servers with CNAs on converged FCoE links into the access layer; dedicated links continue up to the L3/L2 LAN and to SAN Fabric 'A' and Fabric 'B']

Converged Network with FCoE NPV: Expanding Existing SAN Infrastructure
• FCoE CNAs can be in active-standby or active-active to provide redundancy along with bandwidth utilization
• Nexus 5000 switches in the access layer make use of both FCoE NPV and FC NPV to leverage the advantages of NPV (prevents domain ID sprawl, better manageability)
• FCoE NPV and FC NPV connectivity helps easy migration of servers from a legacy FC network to an FCoE network
• Maintains SAN-A/SAN-B isolation for the SAN while providing vPC connectivity with the existing Ethernet network
• The SAN can utilize higher-performance, higher-density, lower-cost Ethernet switches for the aggregation/core
[Diagram: access switches in FCoE NPV mode with VNP uplinks to VF ports on the FCoE core and NP uplinks to F ports on the FC core; the L3/L2 LAN core runs alongside]

LAN/SAN Converged Network: Unified Fabric

• LAN and SAN traffic share physical switches
• LAN and SAN traffic share consolidated links between switches
• Supports spine/leaf topology
[Diagram: FCFs carrying VSANs 10, 20, and 30 over shared L3/L2 links; hosts CNA1 and CNA2 and FCoE arrays Array1 and Array2 attach to Fabric 'A' and Fabric 'B']

Deploying FCoE SAN using Cisco UCS, Nexus and MDS

Industry's Broadest FCoE Portfolio
40G FCoE has 50% greater data rate than 32G FC

Cisco Nexus: Nexus 5696Q, 5648Q, 5624Q, 56128P, 5672UP, 6296UP, 6248UP, 7718, 7710, 7706, 7018, 7010, 7006, 2348UPQ, 2348TQ, 2332TQ
Cisco MDS: MDS 9710, 9706, 9250i
Cisco UCS: UCS FI

Density:

  Platform     10G FCoE          40G FCoE
  Nexus 7700   768               384 (June '15)
  Nexus 7000   768               192 (June '15)
  Nexus 5600   384 (breakout)    96
  Nexus 2300   48                Future
  MDS 9000     384               6
  UCS FI       96                Future

Converge at the host edge with 10G FCoE, use 40G FCoE for ISLs, and deploy 16G FC at the storage core.

FCoE Deployment: Nexus 5K as Unified ToR

• Ethernet and FC I/O is carried over the same physical link from the server
• Servers use Converged Network Adapters to combine traffic (Ethernet and FC)
• Significantly reduces the amount of cabling required
• Reduces the number of ports needed, as Ethernet and FC traffic use the same port and cable
• The access layer switch acts as a protocol splitter
[Diagram: FCoE hosts into Nexus 5K unified access; Ethernet uplinks to the Nexus 5K/7K LAN (L3/L2) and FC uplinks to the MDS SAN Fabric 'A' and Fabric 'B']

FCoE Deployment Choices: Nexus 5K Breakout to SAN

[Diagram: four ways a Nexus 5K connects FCoE hosts to SAN A and SAN B on MDS]
• FC Switch Mode: E ports on the Nexus 5K to E ports on the MDS
• FC NPV Mode: NP ports on the Nexus 5K to F ports on the MDS
• FCoE Switch Mode: VE ports on the Nexus 5K to VE ports on the MDS
• FCoE NPV Mode: VNP ports on the Nexus 5K to VF ports on the MDS

FCoE Deployment

FEX as Unified ToR
• The fabric extenders (FEX) can be used at the ToR
• A FEX cannot be an FCF
• ToR FEXes connect to the end-of-row switches, which manage the FEX
• FEXes work like external line cards to the end-of-row switches
• The EoR switches split the Ethernet and FC traffic
[Diagram: FCoE hosts into a Nexus 2K FEX, uplinked to a Nexus 5K that splits traffic to the LAN and the FC/FCoE SAN]

B22 FEX options: B22 Dell FEX (1/10G FEX for Dell blade servers), B22F FEX (1/10G FEX for FTS blade servers), B22 HP FEX (1/10G FEX for HP blade servers), B22 IBM FEX (1/10G FEX for IBM blade servers)

FCoE Deployment: Nexus 5000/6000 + Nexus 2000 Connectivity Models
FCoE over FEX: Single-homed or Dual-homed

[Diagram: two models - FCoE with a single-homed FEX (each Nexus 2000 FEX attaches to one N5K FCF, with native FC or dedicated FCoE uplinks into SAN A and SAN B) and FCoE with a dual-homed FEX (consider the FCoE traffic path when a 10GE FEX is dual-homed to both FCFs)]

FCoE Deployment

UCS FI Connecting to Nexus 5K ToR
• The FI uses the default End Host Mode; it works as an NPV switch in the SAN network
• The NPIV feature needs to be enabled on the N5K
• The FI can have separate FC uplinks to the N5K, or FCoE uplinks to the N5K
[Diagram: UCS servers into 61xx/62xx UCS FIs and Nexus 2232, uplinked to a Nexus 5K that connects to the LAN and the FC/FCoE SAN]

FCoE Deployment: UCS FI as Unified ToR
• The FI connects to the MDS SAN via FC links, or via FCoE links
[Diagram: UCS servers into Nexus 2232 and 61xx/62xx UCS FIs; Ethernet uplinks to the Nexus 5K/7K LAN (L3/L2) and FC or FCoE uplinks to the MDS SAN Fabric 'A' and Fabric 'B']

FCoE Deployment: UCS FI with Direct Attach Storage

• LAN/SAN I/O converges by default inside UCS servers by using converged adapters
• The UCS FI is in FC Switch Mode
• The UCS FI supports unified ports, which can connect direct-attached FC or FCoE storage devices
• Use different FCoE VLANs/VSANs for SAN A and SAN B separation
[Diagram: SAN A and SAN B storage attached directly to the 61xx/62xx UCS FIs; Ethernet uplinks to the Nexus 5K/7K LAN (L3/L2)]

FCoE Deployment: Nexus 5K as Unified Access with Direct Attach Storage
• LAN/SAN convergence happens at the access layer
• The Nexus 5K as the access switch is also the FCoE Forwarder
• The Nexus 5K supports unified ports, which can connect direct-attached FC or FCoE storage devices
• Use different FCoE VLANs/VSANs for SAN A and SAN B separation
• Shared wires connecting to hosts must be configured as trunk ports
[Diagram: FCoE hosts into the Nexus 5K; SAN A and SAN B storage attached directly; Ethernet uplinks to the Nexus 5K/7K LAN (L3/L2)]

FCoE Deployment: Multi-hop FCoE Extending FCoE Through Aggregation

• VE links between access and aggregation switches
• Dedicated links to the Storage VDC in the N7K
• Can be converged wires between N5K and N5K
[Diagram: FCoE hosts into Nexus 5K unified access; VE uplinks to Nexus 5K/7K aggregation carrying the L3/L2 LAN and the MDS SAN Fabric 'A' and Fabric 'B']

FCoE Deployment: End-to-End Multi-hop FCoE with Nexus Platforms
• End-to-end FCoE ISLs between Nexus switches
• N7K as the storage core, and N7K or N5K as the storage edge
• FCoE SAN edge/core or edge/core/edge design
[Diagram: FCoE hosts into Nexus 5K unified access; FCoE uplinks through the Nexus LAN/SAN to Fabric 'A' and Fabric 'B']

FCoE Deployment: Ethernet Fabric Connecting to SAN

[Diagram: a FabricPath (or standalone) spine/leaf Ethernet fabric of Nexus 5K/7K switches; leaf switches uplink FCoE or native FC into FC/FCoE SAN Fabric 'A' and Fabric 'B'; hosts CNA1 and CNA2 attach at the leaves]

FCoE Deployment: Dynamic FCoE Enables Full Convergence in the Unified Fabric

• All leaf switches are FCoE FCF switches; the spine is FCoE transparent
• Multi-hop convergence: one hop away from any leaf to any leaf
• VE instantiation is created dynamically
• The adjacency between FCF leaves is established dynamically
• Complete, load-balanced ISLs are dynamically created between FCF leaves
• No Storage VDC; all Ethernet VDCs
• Dynamic FCoE enables full convergence in the FabricPath Ethernet fabric

Dynamic FCoE using FabricPath simplifies the network infrastructure and achieves multiprotocol convergence in the cloud data center.

Logical Separation of SAN A/B in the Dynamic FCoE Unified Fabric: What Fibre Channel Sees

• Server/storage nodes reside on the leaves
• Storage sees an edge-core topology equivalence
• SAN A/B separation occurs at the most vulnerable part of the network: the server to the access layer switch (i.e., the leaf)
[Diagram: SAN A and SAN B logically separated across the converged fabric carrying FC, iSCSI, and NFS]

Agenda

• Introduction to FCoE Technology
• FCoE SAN Design for Small and Mid-size Enterprise
• Basic FCoE Configuration and Troubleshooting
• Conclusion

FC/FCoE Configuration on Nexus 5K

What basic steps are needed:

1) Install license
   • Storage Services License, or FCoE NPV License
2) Enable features
   • LLDP, LACP, FEX (optional), FCoE or FCoE-NPV, NPIV (optional), NPV (optional)
3) Enable system QoS and policy for FCoE
   * This step is only required on Nexus 55xx platforms prior to release 5.1(3)N1(1)
4) Bring up FEX connections to the parent switch (optional)
5) Create and configure native FC interfaces if needed
6) Create VSAN and assign native FC interfaces to the VSAN
7) Create FCoE VLAN and map it to the VSAN
   * Best practice: reserve a range with the same number of VSANs expected to be used
8) Configure host Ethernet interfaces for FCoE traffic
9) Create VFC interfaces and bind them to the host ports
   * Best practice: formulate a method to assign vfc numbers from the bound Ethernet interface
10) Configure VFC interfaces and assign the interfaces into the VSAN
11) Zoning configuration

Nexus 5K Configuration Example 1: FC/FCoE

[Diagram: dual Nexus 5548UP switches with a vPC peer-link on E1/7-E1/8; FC1/32 uplinks into SAN A and SAN B; E1/25 converged FCoE target ports; E1/3 and E1/4 fabric links down to Nexus 2232 FEX 100 and FEX 101; the FCoE host attaches on FEX port 1/25]

Nexus 5548 code version: 7.0(1)N1(1) FCoE Host Configuration Example 1 – on Nexus 5K 1. Install FCoE License tme-5548up-1# install license bootflash:n5548-1.lic Tme-5548up-1# show license usage Feature Ins Lic Status Expiry Date Comments Count ------… … FC_FEATURES_PKG Yes - In use Never - … ….

2. Enable FEX feature tme-5548up-1(config)# feature fex tme-5548up-1(config)# feature vpc 3. Enable VPC feature tme-5548up-1(config)# feature lacp tme-5548up-1(config)# feature fcoe 4. Enable LACP feature tme-5548up-1(config)# show feature | incl ena 5. Enable FCoE feature fcoe 1 enabled fex 1 enabled lacp 1 enabled lldp 1 enabled  by default lldp feature is enabled vpc 1 enabled … …

tme-5548up-1(config)# show running-config ipqos … … 6. After FCoE feature is enabled, the system system qos enables the default FCoE QoS Policy service-policy type queuing input fcoe-default-in-policy service-policy type queuing output fcoe-default-out-policy service-policy type qos input fcoe-default-in-policy service-policy type network-qos fcoe-default-nq-policy Configuration Example 1 – on Nexus 5K 7. Configure VPC 8. Bring up FEX

tme-5548up-1(config)# vpc domain 1 tme-5548up-1(config)# fex 100 tme-5548up-1(config-vpc-domain)# peer-keepalive tme-5548up-1(config-fex)# pinning max-links 1 destination 192.168.10.65 tme-5548up-1(config-fex)# description "FEX0100" tme-5548up-1(config-fex)# fcoe tme-5548up-1(config)# interface Ethernet1/7 tme-5548up-1(config)# fex 101 tme-5548up-1(config-if)# channel-group 1 mode active tme-5548up-1(config-fex)# pinning max-links 1 tme-5548up-1(config-if)# no shutdown tme-5548up-1(config-fex)# description "FEX0101” tme-5548up-1(config)# interface Ethernet1/8 tme-5548up-1(config-if)# channel-group 1 mode active tme-5548up-1(config)# interface Ethernet1/3 tme-5548up-1(config-if)# no shutdown tme-5548up-1(config-if)# channel-group 100 tme-5548up-1(config-if)# no shutdown tme-5548up-1(config)# interface port-channel1 tme-5548up-1(config)# interface Ethernet1/4 tme-5548up-1(config-if)# switchport mode trunk tme-5548up-1(config-if)# channel-group 101 tme-5548up-1(config-if)# switchport trunk allowed vlan tme-5548up-1(config-if)# no shutdown 1,1001 tme-5548up-1(config)# interface port-channel100 tme-5548up-1(config-if)# vpc peer-link tme-5548up-1(config-if)# switchport mode fex-fabric tme-5548up-1(config-if)# vpc 100 tme-5548up-1(config-if)# fex associate 100 tme-5548up-1(config)# interface port-channel101 tme-5548up-1(config-if)# switchport mode fex-fabric tme-5548up-1(config-if)# vpc 101 tme-5548up-1(config-if)# fex associate 101 Configuration Example 1 – on Nexus 5K 9. Create native FC interfaces 12. Create FCoE VLAN and map to VSAN

tme-5548up-1(config)# slot 1 tme-5548up-1(config)# vlan 100 tme-5548up-1(config-slot)# port 27-32 type fc tme-5548up-1(config-vlan)# fcoe vsan 100 Port type is changed. Please reload the switch 13. Configure FCoE host interface 10. Configure native FC interface tme-5548up-1(config)# interface Ethernet100/1/25 tme-5548up-1(config)# interface fc1/32 tme-5548up-1(config-if)# switchport mode F tme-5548up-1(config-if)# switchport mode trunk tme-5548up-1(config-if)# no shutdown tme-5548up-1(config-if)# switchport trunk allowed vlan 100 tme-5548up-1(config-if)# spanning-tree port type edge trunk tme-5548up-1(config-if)# channel-group 102 mode active 11. Create VSAN and assign native FC tme-5548up-1(config)# interface vfc125 interface to VSAN tme-5548up-1(config-if)# bind interface Ethernet100/1/25 tme-5548up-1(config-if)# switchport trunk allowed vsan 100 tme-5548up-1(config)# vsan database tme-5548up-1(config-if)# no shutdown tme-5548up-1(config-vsan-db)# vsan 100 tme-5548up-1(config)# vsan database tme-5548up-1(config-vsan-db)# vsan 100 interface tme-5548up-1(config-vsan-db)# vsan 100 interface vfc125 fc1/32 Configuration Example 1 – on Nexus 5K 14. Configure FCoE target interface 16. Configure Zoning

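Before binding the VFCs, it is worth sanity-checking the FCoE VLAN-to-VSAN plan. The following is a small illustrative sketch (not a Cisco tool): it encodes two common best practices, namely that the FCoE VLAN should not be the native/default VLAN and that each VSAN should map to exactly one VLAN. The function name and the dict-based input format are assumptions for illustration.

```python
def check_fcoe_vlan_plan(mapping, native_vlan=1):
    """Return a list of problems with a proposed FCoE VLAN -> VSAN mapping.

    mapping: dict of {vlan_id: vsan_id}, e.g. {100: 100} as in Example 1.
    """
    problems = []
    owner = {}  # vsan -> first vlan that claimed it
    for vlan, vsan in sorted(mapping.items()):
        if vlan == native_vlan:
            # FCoE traffic should ride a dedicated, tagged VLAN
            problems.append(f"VLAN {vlan}: do not carry FCoE on the native/default VLAN")
        if vsan in owner:
            # A VSAN must map one-to-one with an FCoE VLAN
            problems.append(f"VSAN {vsan}: mapped from both VLAN {owner[vsan]} and VLAN {vlan}")
        else:
            owner[vsan] = vlan
    return problems
```

For the mapping used in this example (`{100: 100}`) the check returns an empty list; mapping two VLANs to the same VSAN, or using VLAN 1, would each produce one finding.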
Configuration Example 1 – on Nexus 5K

14. Configure FCoE target interface

tme-5548up-1(config)# interface Ethernet1/25
tme-5548up-1(config-if)# switchport mode trunk
tme-5548up-1(config-if)# switchport trunk allowed vlan 100
tme-5548up-1(config-if)# spanning-tree port type edge trunk
tme-5548up-1(config)# interface vfc25
tme-5548up-1(config-if)# bind interface Ethernet1/25
tme-5548up-1(config-if)# switchport trunk allowed vsan 100
tme-5548up-1(config-if)# no shutdown
tme-5548up-1(config)# vsan database
tme-5548up-1(config-vsan-db)# vsan 100 interface vfc25

15. Check PWWNs of the end nodes

tme-5548up-1(config)# show flogi database vsan 100
--------------------------------------------------------------------------------
INTERFACE  VSAN  FCID      PORT NAME                NODE NAME
--------------------------------------------------------------------------------
fc1/32     100   0x3500ef  50:06:01:64:3e:a0:33:27  50:06:01:60:be:a0:33:27
vfc25      100   0x350001  50:06:01:68:3e:a4:33:27  50:06:01:60:be:a0:33:27
vfc125     100   0x350000  20:00:6c:20:56:a4:75:9c  10:00:6c:20:56:a4:75:9c

Total number of flogi = 3.

16. Configure Zoning

tme-5548up-1(config)# zone name demo-fcoe-host-1 vsan 100
tme-5548up-1(config-zone)# member pwwn 20:00:6c:20:56:a4:75:9c
tme-5548up-1(config-zone)# member pwwn 50:06:01:64:3e:a0:33:27
tme-5548up-1(config)# zone name demo-fcoe-host-2 vsan 100
tme-5548up-1(config-zone)# member pwwn 20:00:6c:20:56:a4:75:9c
tme-5548up-1(config-zone)# member pwwn 50:06:01:64:3e:a4:33:27
tme-5548up-1(config)# zoneset name demo-fcoe vsan 100
tme-5548up-1(config-zoneset)# member demo-fcoe-host-1
tme-5548up-1(config-zoneset)# member demo-fcoe-host-2

17. Activate Zoneset

tme-5548up-1(config)# zoneset activate name demo-fcoe vsan 100
tme-5548up-1# show zoneset active vsan 100
zoneset name demo-fcoe vsan 100
  zone name demo-fcoe-host-1 vsan 100
  * fcid 0x3500ef [pwwn 50:06:01:64:3e:a0:33:27]
  * fcid 0x350000 [pwwn 20:00:6c:20:56:a4:75:9c]
  zone name demo-fcoe-host-2 vsan 100
  * fcid 0x350000 [pwwn 20:00:6c:20:56:a4:75:9c]
  * fcid 0x350001 [pwwn 50:06:01:64:3e:a4:33:27]

Configuration Example 2 – FCoE-NPV

[Topology diagram: FCoE hosts connect through a Nexus 2K FEX to a Nexus 5696Q running FCoE-NPV; a VNP port on the 5696Q uplinks over port channel PO1001 to a VF port on an MDS 9710, which fronts SAN A and SAN B. Legend: Ethernet, FC, FCoE, Converged links.]

Configuration Example 2 – on Nexus 5696Q

1. The FCOE_NPV license is required to enable the FCoE-NPV feature.

TME-N5696Q# show license usage
Feature          Ins  Lic Count  Status  Expiry Date  Comments
--------------------------------------------------------------
FCOE_NPV_PKG     Yes  -          Unused  Never        -
FM_SERVER_PKG    No   -          Unused  -
… …

2. Enable the FCoE-NPV feature. Enabling FCoE-NPV does not require a reboot of the switch or module, and it applies the default QoS settings for FCoE.

TME-N5696Q(config)# feature fcoe-npv

3. Enable the LACP feature.

TME-N5696Q(config)# feature lacp

4. Create the VSAN/VLAN mapping.

TME-N5696Q(config)# vsan database
TME-N5696Q(config-vsan-db)# vsan 1001
TME-N5696Q(config-vsan-db)# exit
TME-N5696Q(config)# vlan 1001
TME-N5696Q(config-vlan)# fcoe vsan 1001

5. Create the Ethernet port channel.

TME-N5696Q(config)# interface e1/1, e1/11, e2/1, e2/11
TME-N5696Q(config-if-range)# switchport mode trunk
TME-N5696Q(config-if-range)# switchport trunk allowed vlan 1001
TME-N5696Q(config-if-range)# channel-group 1001 mode active
TME-N5696Q(config-if-range)# exit
TME-N5696Q(config)# interface port-channel 1001
TME-N5696Q(config-if)# no shutdown

6. Create a VFC for port channel 1001. Configure the VFC in NP mode for the NPV uplink.

TME-N5696Q(config)# interface vfc1001
TME-N5696Q(config-if)# bind interface po1001
TME-N5696Q(config-if)# switchport mode np
TME-N5696Q(config-if)# switchport trunk allowed vsan 1001
TME-N5696Q(config-if)# no shutdown

Configuration Example 2 – on MDS 9710

1. On the MDS 9710, the FCoE feature-set is enabled by default, without requiring an FCoE-capable module to be inserted in the system.

MDS9710-A(config)# show feature-set
Feature Set Name      ID   State
--------------------------------
fcoe                  1    enabled

2. Enable the NPIV feature, since the MDS will act as the NPIV core.

MDS9710-A(config)# feature npiv

3. Enable the LACP feature.

MDS9710-A(config)# feature lacp

4. Enable the F-port port-channel trunking feature.

MDS9710-A(config)# feature fport-channel-trunk

5. Enable the system default FCoE QoS setting.

MDS9710-A(config)# system qos
MDS9710-A(config-sys-qos)# service-policy type network-qos default-nq-7e-1q1q-policy

6. Create the VSAN/VLAN mapping.

MDS9710-A(config)# vsan database
MDS9710-A(config-vsan-db)# vsan 1001
MDS9710-A(config-vsan-db)# exit
MDS9710-A(config)# vlan 1001
MDS9710-A(config-vlan)# fcoe vsan 1001

7. Create the Ethernet port channel.

MDS9710-A(config)# interface e7/25, e7/37, e8/25, e8/37
MDS9710-A(config-if-range)# switchport mode trunk
MDS9710-A(config-if-range)# switchport trunk allowed vlan 1001
MDS9710-A(config-if-range)# channel-group 1001 mode active
MDS9710-A(config-if-range)# exit
MDS9710-A(config)# interface ethernet-port-channel 1001
MDS9710-A(config-if)# no shutdown

8. Create a VFC for port channel 1001. Configure the VFC in F mode.

MDS9710-A(config)# interface vfc-port-channel 1001
MDS9710-A(config-if)# switchport mode F
MDS9710-A(config-if)# switchport trunk allowed vsan 1001
MDS9710-A(config-if)# no shutdown

9. Verify the configuration on the NPV switch:

TME-N5696Q(config)# show fcoe database
-------------------------------------------------------------------------
INTERFACE  FCID      PORT NAME                MAC ADDRESS
-------------------------------------------------------------------------
vfc1001    0x150000  25:fa:54:7f:ee:ea:55:00  54:7f:ee:ea:55:00
vfc1011    0x150001  20:00:10:05:ca:71:78:cc  10:05:ca:71:78:cc
vfc1012    0x150002  20:00:10:05:ca:71:78:cb  10:05:ca:71:78:cb

Total number of flogi count from FCoE devices = 3.

TME-N5696Q(config)# show npv flogi-table
--------------------------------------------------------------------------------------
SERVER                                                                        EXTERNAL
INTERFACE  VSAN  FCID      PORT NAME                NODE NAME                 INTERFACE
--------------------------------------------------------------------------------------
vfc1011    1001  0x150001  20:00:10:05:ca:71:78:cc  10:00:10:05:ca:71:78:cc   vfc1001
vfc1012    1001  0x150002  20:00:10:05:ca:71:78:cb  10:00:10:05:ca:71:78:cb   vfc1001

Total number of flogi = 2.

TME-N5696Q(config)# show npv status
npiv is disabled
disruptive load balancing is disabled

External Interfaces:
====================
  Interface: vfc1001, State: Trunking
        VSAN: 1001, State: Up, FCID: 0x150000
  Number of External Interfaces: 1

Server Interfaces:
==================
  Interface: vfc1011, VSAN: 1001, State: Up
  Interface: vfc1012, VSAN: 1001, State: Up
  Number of Server Interfaces: 2

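A note on the MAC ADDRESS column in `show fcoe database` above: it shows the CNA's ENode MAC, not the address FCoE data frames are sent with. Per FC-BB-5, the fabric assigns each login a Fabric Provided MAC Address (FPMA): the 24-bit FC-MAP prefix (default 0x0EFC00) concatenated with the 24-bit FCID granted at FLOGI. A minimal illustrative sketch of the derivation:

```python
def fpma(fcid, fc_map=0x0EFC00):
    """Return the FPMA for a 24-bit FCID as a colon-separated MAC string.

    FPMA = FC-MAP (upper 24 bits) || FCID (lower 24 bits), per FC-BB-5.
    """
    raw = (fc_map << 24) | (fcid & 0xFFFFFF)
    # Emit the six bytes from most to least significant.
    return ":".join(f"{(raw >> shift) & 0xFF:02x}" for shift in (40, 32, 24, 16, 8, 0))
```

So the host on vfc1011 above (FCID 0x150001) would source its FCoE frames from 0e:fc:00:15:00:01, assuming the default FC-MAP.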
Basics of FCoE Troubleshooting on Nexus 5K

Which symptom best describes your problem?

• Ethernet interface down: verify the cable and SFP.
• VFC interface not trunking: check the Ethernet interface status and whether a VFC is bound to it; verify the VLAN configuration and the VLAN-VSAN mapping; verify that the VFC interface VSAN and the VSAN allow list are correct.
• FIP installation failure: check fcoe_mgr events for FIP transitions; check DCBX (LLDP); check the CNA for proper setup.
• VFC VSAN goes down due to missing FKA: check QoS/PFC.
• VFC VSAN initializing (VSAN not up): check for FIP instantiation failure.
• Performance problems, timeouts, drops: verify the Ethernet and VFC status ("show interface e1/1 fcoe"); check the Ethernet interface for discards and errors; monitor PFC; check queuing.
• None of the above: "Network is good!"

Additional FCoE Troubleshooting Tips

• Check license and feature installation, and the VLAN configuration.
• For FEX topologies, check the FEX FCoE configuration.

Check DCBX Status: CNA Connection

tme-5672up-1(config)# show lldp dcbx interface e1/25
Local DCBXP Control information:
    Operation version: 00  Max version: 00  Seq no: 1  Ack no: 1
Type/
Subtype    Version  En/Will/Adv  Config
003/000    000      Y/N/Y        0808
004/000    000      Y/N/Y        8906001b2108
002/000    000      Y/N/Y        0001000032 32000000 00000002
Peer's DCBXP Control information:
    Operation version: 00  Max version: 00  Seq no: 1  Ack no: 1
Type/      Max/Oper
Subtype    Version   En/Will/Err  Config
004/000    000/000   Y/Y/N        8906001b2108
003/000    000/000   Y/Y/N        ff08
002/000    000/000   Y/Y/N        ffffffff00 00000000 00000008

• The configured TLV values on the switch and on the CNA can be different.
• TLV types 2-4 are enabled on both sides.
• The switch is not Willing to compromise its TLV values, but it advertises all of its configured values.
• The end node is usually Willing to compromise its TLV values; because of that, the Error bit will not be set.

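Several of the Config fields above are CoS priority bitmaps. In my reading of this sample (hedged, not a normative decode): the switch's PFC config `0808` begins with the bitmap 0x08, i.e. bit 3 set, meaning PFC on priority 3 (the FCoE CoS); the App TLV `8906001b2108` ends with the same 0x08 map after the FCoE EtherType 0x8906; and the Willing peer advertises `ff08`, all priorities. The bitmap expansion itself is simple:

```python
def priorities(bitmap):
    """Expand an 8-bit CoS bitmap into the sorted list of priorities it selects."""
    return [p for p in range(8) if bitmap & (1 << p)]
```

For example, `priorities(0x08)` gives `[3]` and `priorities(0xFF)` gives all eight priorities, which matches the switch vs. peer configs shown above.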
Check DCBX Status: CNA Connection (cont.)

tme-5672up-1(config)# show system internal dcbx info interface e1/25
Interface info for if_index: 0x1a018000(Eth1/25)
tx_enabled: TRUE
rx_enabled: TRUE
dcbx_enabled: TRUE
DCX Protocol: CEE
… …
6 Features on this intf for Protocol DCX CIN(0)
[DCX CIN = 0: not the CIN version, ignore the following info]
… …
3 Features on this intf for Protocol DCX CEE(1)
[DCX CEE = 1: CEE version, the following features are negotiated]

Feature type PFC (3)
feature type 3(DCX CEE-PFC)sub_type 0
Feature State Variables: oper_version 0 error 0 local error 0 oper_mode 1
  feature_seq_no 0 remote_feature_tlv_present 1 remote_tlv_aged_out 0
  remote_tlv_not_present_notification_sent 0
Feature Register Params: max_version 0, enable 1, willing 0 advertise 1
  disruptive_error 0 mts_addr_node 0x101 mts_addr_sap 0x179
Desired config cfg length: 2 data bytes:08 08
Operating config cfg length: 2 data bytes:08 08
Peer config cfg length: 0 data bytes:

Feature type PriGrp (2)
feature type 2(DCX CEE-PriGrp)sub_type 0
Feature State Variables: oper_version 0 error 0 local error 0 oper_mode 1
  feature_seq_no 0 remote_feature_tlv_present 1 remote_tlv_aged_out 0
  remote_tlv_not_present_notification_sent 0
Feature Register Params: max_version 0, enable 1, willing 0 advertise 1
  disruptive_error 0 mts_addr_node 0x101 mts_addr_sap 0x179
Desired config cfg length: 13 data bytes:00 01 00 00 32 32 00 00 00 00 00 00 02
Operating config cfg length: 13 data bytes:00 01 00 00 32 32 00 00 00 00 00 00 02
Peer config cfg length: 0 data bytes:

Feature type App (4) sub_type FCoE/iSCSI (0)
[iSCSI TLV is not enabled for negotiation (value=0)]
feature type 4(DCX CEE-App)sub_type 0
Feature State Variables: oper_version 0 error 0 local error 0 oper_mode 1
  feature_seq_no 0 remote_feature_tlv_present 1 remote_tlv_aged_out 0
  remote_tlv_not_present_notification_sent 0
Feature Register Params: max_version 0, enable 1, willing 0 advertise 1
  disruptive_error 0 mts_addr_node 0x101 mts_addr_sap 0x179
Desired config cfg length: 6 data bytes:89 06 00 1b 21 08
Operating config cfg length: 6 data bytes:89 06 00 1b 21 08
Peer config cfg length: 0 data bytes:

[oper_mode = 1 for PFC/APP/ETS: operations are in active mode; the operating config has been negotiated to follow the switch's config.]

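The PriGrp (ETS) desired and operating bytes above, `00 01 00 00 32 32 00 00 00 00 00 00 02`, can be unpacked by hand. Read as the CEE priority-groups sub-TLV (my interpretation of this sample, hedged): eight 4-bit PGIDs, one per priority 0-7; then eight bandwidth-percentage bytes, one per priority group; then the number of traffic classes supported. Since 0x32 = 50, PG0 and PG1 each get 50%, which matches the 50/50 WRR split shown later in `show queuing`, and priority 3 (FCoE) is the only priority in PG1.

```python
def decode_prigrp(data):
    """Decode a CEE PriGrp config blob (layout assumed from this sample).

    Returns (pgids, bandwidth, num_tcs):
      pgids     - priority-group ID for each of priorities 0..7
      bandwidth - percent of link bandwidth per priority group
      num_tcs   - number of traffic classes supported
    """
    pgids = []
    for byte in data[:4]:                 # 4 bytes hold 8 four-bit PGIDs
        pgids += [byte >> 4, byte & 0x0F]
    bandwidth = list(data[4:12])          # one percentage byte per group
    num_tcs = data[12]
    return pgids, bandwidth, num_tcs
```

Decoding the captured bytes yields priority 3 in PG1, all others in PG0, bandwidths 50/50, and 2 traffic classes.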
Check FCoE Interface Status: CNA Connection

tme-5548up-1# show interface e1/13 fcoe                   [check FCoE interface status]
Ethernet1/13 is FCoE UP
    vfc10 is Up
        FCID is 0x5a0000
        PWWN is 20:00:b0:fa:eb:ac:0f:1a
        MAC addr is b0:fa:eb:ac:0f:1a

tme-5548up-1# show interface e1/13 priority-flow-control  [monitor PFC pause statistics]
============================================================
Port            Mode  Oper(VL bmap)  RxPPP      TxPPP
============================================================
Ethernet1/13    Auto  On  (8)        0          0

tme-5548up-1# show interface vfc10                        [check VFC interface status]
vfc10 is trunking
    Bound interface is Ethernet1/13
    Hardware is Ethernet
    Port WWN is 20:09:00:05:73:de:cd:7f
    snmp link state traps are enabled
    Admin port mode is F, trunk mode is on
    Port mode is TF
    Port vsan is 10
    Trunk vsans (admin allowed and active) (10)
    Trunk vsans (up)                       (10)    <- VSAN up
    Trunk vsans (isolated)                 ()
    Trunk vsans (initializing)             ()
    1 minute input rate 128 bits/sec, 16 bytes/sec, 0 frames/sec
    1 minute output rate 464 bits/sec, 58 bytes/sec, 0 frames/sec
    3359659 frames input, 335966304 bytes
      0 discards, 0 errors                         <- always verify no discards or errors
    6202416 frames output, 836301556 bytes
      0 discards, 0 errors

tme-5548up-1# show queuing interface e1/13                [verify queuing on the interface]
Ethernet1/13 queuing information:
    TX Queuing
        qos-group  sched-type  oper-bandwidth
            0       WRR            50             <- ETS bandwidth settings for the drop
            1       WRR            50                group and the no-drop (FCoE) group
    RX Queuing
        qos-group 0
        q-size: 360640, HW MTU: 1500 (1500 configured)
        drop-type: drop, xon: 0, xoff: 360640
        Statistics:
            Pkts received over the port             : 282389
            Ucast pkts sent to the cross-bar        : 4
            Mcast pkts sent to the cross-bar        : 282385
            Ucast pkts received from the cross-bar  : 0
            Pkts sent to the port                   : 0
            Pkts discarded on ingress               : 0
        Per-priority-pause status : Rx (Inactive), Tx (Inactive)

        qos-group 1
        q-size: 79360, HW MTU: 2158 (2158 configured)
        drop-type: no-drop, xon: 20480, xoff: 40320
        Statistics:
            Pkts received over the port             : 3359776
            Ucast pkts sent to the cross-bar        : 3359776
            Mcast pkts sent to the cross-bar        : 0
            Ucast pkts received from the cross-bar  : 6202632
            Pkts sent to the port                   : 6202632
            Pkts discarded on ingress               : 0    <- drops upon buffer overflow
        Per-priority-pause status : Rx (Inactive), Tx (Inactive)

    Total Multicast crossbar statistics:
        Mcast pkts received from the cross-bar  : 0

Agenda

• Introduction to FCoE Technology
• FCoE SAN Design for Small and Mid-size Enterprise
• Basic FCoE Configuration and Troubleshooting
• Conclusion

Takeaways

• FCoE helps achieve I/O consolidation in data center networking for small and mid-size enterprise customers
• Different FCoE designs can satisfy different convergence needs in your environment:
  - Access layer only
  - End-to-end FCoE with dedicated VE links
  - Fully converged Unified Fabric
• Cisco is the leader in the SAN switching (FC + FCoE) market
• You can easily deploy FCoE SAN into production with Cisco UCS, Nexus, and MDS products

Q&A

Call to Action

• Visit the Cisco Campus at the World of Solutions to experience the following demos/solutions in action: Cisco Unified Fabric/Multiprotocol: End-to-End LAN and SAN
• Meet the Engineer: open for discussions; schedule a meeting with the local storage engineers (Mark Allen, Hui Chen, Craig Ashapa)
• Discuss your project's challenges at the Technical Solutions Clinics
• Check out other storage networking sessions

Complete Your Online Session Evaluation

• Give us your feedback to be entered into a Daily Survey Drawing. A daily winner will receive a $750 Amazon gift card.
• Complete your session surveys through the Cisco Live mobile app or on your computer via Cisco Live Connect.

Don't forget: Cisco Live sessions will be available for on-demand viewing after the event at CiscoLive.com/Online.

Thank you