#CLUS Fibre Channel Networking for the IP Network Engineer and SAN Core-Edge Design Best Practices

Chad Hintz - Principal SE

BRKDCN-1121

#CLUS Session BRKDCN-1121 Abstract

Fibre Channel Networking for the IP Network Engineer and SAN Core-Edge Design Best Practices

This session gives non-storage-networking professionals the fundamentals needed to understand and implement storage area networks (SANs). The curriculum prepares attendees for involvement in SAN projects and in the I/O consolidation of Ethernet and Fibre Channel networking, and introduces storage networking terminology and designs. Specific topics covered include Fibre Channel (FC), FCoE, FC services, FC addressing, fabric routing, zoning, and virtual SANs (VSANs). The session also covers designing core-edge Fibre Channel networks and the best-practice recommendations around them. This is an introductory session; attendees are encouraged to follow up with other SAN breakout sessions and labs to learn more about specific advanced topics.

Goals of This Session…

Agenda

• History of Storage Area Networks

• Fibre Channel Technology

• Designing Fibre Channel Networks

• Storage Topologies

• Introduction to NPIV/NPV

• Core-Edge Design Review

• Recommended Core-Edge Designs for Scale and Availability

• Fibre Channel over Ethernet

• Converged Core-Edge Designs

History of Storage Area Networks

It starts with the relationship... the #1 rule

Not all storage is local

• Must maintain that 1:1 relationship, especially if there's a network in-between

• All storage networking is based on this fundamental principle

SCSI: When things go wrong...

Must maintain that 1:1 relationship, especially if there's a network in-between

• Otherwise, you get data loss, and Very Bad Things™ happen

This Is Why Storage People Are Paranoid

• Losing data means:
  • Losing your job
  • Running afoul of government regulations
  • Possibly SEC filings
  • Lost revenue

Direct Attached Storage (DAS)

• Hosts directly access storage as block-level devices

• As hosts need more disk, externally connected storage gets added

• Storage is captive behind the server, limited mobility

• Limited scalability due to limited devices

• No efficient storage sharing possible

• Costly to scale; complex to manage

Network Attached Storage (NAS)

• Clients connect to a network-capable storage server

• Serves files to other devices on the network

• Called "file-based" storage

• SMB – Microsoft's successor to the CIFS protocol, with enhanced functionality

• NFS – Network File System

• Different from DAS, which uses block-based storage (a block may only contain a part of a single file)

• TCP or UDP based

• Almost all NFS clients perform local caching of both file and directory data

Storage Area Networks: Extending the SCSI Relationship Across a Network

• Same SCSI protocol carried over a network transport via serial implementation

• Transport must not jeopardize the SCSI payload (security, integrity, latency)

• Two primary transports to choose from today: IP and Fibre Channel
  • Fibre Channel provides high-speed transport for the SCSI payload via a Host Bus Adapter (HBA)
  • Fibre Channel overcomes many shortcomings of parallel I/O and combines the best attributes of a channel and a network

Storage Area Networks: iSCSI

• A SCSI transport protocol that operates on top of TCP

• Encapsulates SCSI commands and data into TCP/IP byte streams

(Diagram: host software stack for iSCSI – Applications → File System → Block Device → SCSI Generic → iSCSI → TCP/IP Stack → NIC Driver → NIC, compared with the traditional Adapter Driver → SCSI Adapter path, with an Ethernet switch carrying the iSCSI traffic.)

SAN Protocols for Block I/O

• Fibre Channel – a gigabit-speed network technology primarily used for storage networking

• Fibre Channel over Ethernet (FCoE) – an encapsulation of Fibre Channel frames over Ethernet networks. This allows Fibre Channel to use 10 Gigabit Ethernet networks while preserving the Fibre Channel protocol

• iSCSI – a TCP/IP-based protocol for establishing and managing connections between IP-based storage devices, hosts, and clients

• FCIP – provides a standard way of encapsulating FC frames within TCP/IP, allowing islands of FC SANs to be interconnected over an IP-based network

Network Stack Comparison

(Diagram: protocol stacks over the physical wire – parallel SCSI carries SCSI directly; iSCSI carries SCSI over TCP/IP over Ethernet; FCIP carries SCSI/FCP/FC over TCP/IP over Ethernet; FCoE carries SCSI/FCP over FCoE over Ethernet; native FC carries SCSI/FCP over FC.)

Agenda

✓ History of Storage Area Networks
✓ Fibre Channel Technology
• Designing Fibre Channel Networks
• Storage Topologies
• Introduction to NPIV/NPV
• Core-Edge Design Review
• Recommended Core-Edge Designs for Scale and Availability
• Fibre Channel over Ethernet
• Converged Core-Edge Designs

Fibre Channel Technology

Fibre Channel Basic Building Blocks

• Components

• Naming Conventions

• Connection Fundamentals

Fibre Channel SAN Components

• Servers with host bus adapters

• Storage systems
  • RAID
  • JBOD
  • Tape

• Switches

• SAN management software

Cisco Multi-Protocol Product Portfolio: LAN/SAN, SAN, Compute

(Diagram: portfolio overview – Cisco Nexus 9000 (9300/9500, 9710/9706, 9396S), Nexus 7000, Nexus 5500/5600/5672UP-16G, Nexus 2000/2348UPQ, and Nexus 3000 switches; Cisco MDS 9718, 9250i, and 9148S SAN switches; Cisco UCS Fabric Interconnect 6200/6300 Series with UCS B-Series blade servers and UCS C-Series rack servers – common NX-OS, common management.)

FC Switch Port Types

• "N" for Node Port – host or target port facing a switch

• "F" for Fabric Port – switch port facing the end nodes

• "E" for Expansion Port – switch port facing another switch

These names are very important! (we will be talking about them a lot)

(Diagram: nodes attach through their N_Ports to switch F_Ports; switches interconnect through E_Ports.)

Fabric Name and Addressing, Part 1: WWN – Like a MAC Address

• Every Fibre Channel port and node has a hard-coded address called World Wide Name (WWN)

• During FLOGI the switch identifies the WWN in the service parameters of the accept frame and assigns a Fibre Channel ID (FCID)

• The switch name server maps WWNs to FCIDs
  • WWNNs uniquely identify devices
  • WWPNs uniquely identify each port in a device

Fibre Channel Addressing, Part 2: Domain ID

• Domain ID is native to a single FC switch

• There is a limitation on the number of domain IDs in a single fabric

• Forwarding decisions are made on the domain ID found in the first 8 bits of the FCID

(Diagram: two switches in an FC fabric with domain IDs 10 and 11; their attached devices receive FCIDs 10.00.01 and 11.00.01. Switch topology model: the FCID is structured as Domain | Area | Device.)
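As a minimal worked example (values taken from the diagram above, not from any particular switch): FCID 0x110001 breaks down as

  Domain 0x11 | Area 0x00 | Device 0x01

so a frame destined for 0x110001 is routed by FSPF toward the switch that owns domain 0x11, which then delivers it to the local area/device 00.01.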

Start from the Beginning…

• Start with a host and a target that need to communicate

• The host has 2 HBAs (one per fabric), each with a WWN

• The target has multiple ports to connect to the fabric

• Connect to an FC switch
  • Port type negotiation
  • Speed negotiation

• The FC switch is part of the SAN "fabric"

• Most commonly, dual fabrics are deployed for redundancy

(Diagram: the initiator's HBA connects at the edge of Fabric A; the target connects at the core.)

My Port Is Up… Can I Talk Now? (FLOGIs/PLOGIs)

• Step 1: Fabric Login (FLOGI)
  • Determines the presence or absence of a fabric
  • Exchanges service parameters with the fabric
  • The switch identifies the WWN in the service parameters of the accept frame and assigns a Fibre Channel ID (FCID)
  • Initializes the buffer-to-buffer credits

• Step 2: Port Login (PLOGI)
  • Required between nodes that want to communicate
  • Similar to FLOGI – transports a PLOGI frame to the FCID of the destination node port
  • In a point-to-point topology (no fabric present), initializes the buffer-to-buffer credits

(Diagram: the initiator HBA's N_Port sends a FLOGI to the F_Port of the FC fabric; E_Ports connect the fabric switches toward the target.)

Buffer-to-Buffer Credits: Fibre Channel Flow Control

• B2B credits are used to ensure that FC transport is lossless

• The number of credits is negotiated between ports when the link is brought up

• The credit count is decremented with each packet placed on the wire, independent of packet size

• If the credit count reaches 0, no more packets are transmitted

• The credit count is incremented with each "receiver ready" (R_RDY) received

• B2B credits need to be taken into consideration as distance and/or bandwidth increases

Fibre Channel ID (FCID) – Like an IP Address

Fibre Channel Addressing Scheme

• An FCID is assigned to every WWPN corresponding to an N_Port

• The FCID is made up of switch domain, area, and device

• Domain ID is native to a single FC switch

• There is a limitation on the number of domain IDs in a single fabric

• Forwarding decisions are made on the domain ID found in the first 8 bits of the FCID

(Diagram: switch topology model – the FCID is structured as Domain | Area | Device.)

Agenda

✓ History of Storage Area Networks
✓ Fibre Channel Technology
✓ Designing Fibre Channel Networks
• Storage Topologies
• Introduction to NPIV/NPV
• Core-Edge Design Review
• Recommended Core-Edge Designs for Scale and Availability
• Fibre Channel over Ethernet
• Converged Core-Edge Designs

Designing Fibre Channel Networks

Storage Criteria Themes

• Fibre Channel Fabric Name Server

• Frame Forwarding

• Security/Segmentation

• Load Balancing

• Bandwidth Considerations

• HA and Reliability

Fabric Shortest Path First (FSPF) – Like OSPF

Fibre Channel Forwarding

• FSPF "routes" traffic based on the destination domain ID found in the destination FCID

• For FSPF, a domain ID identifies a single switch
  • The number of domain IDs is limited to 239 (theoretical limit) / 75 (tested and qualified) within the same fabric (VSAN)

• FSPF performs hop-by-hop routing

• FSPF uses total cost as the metric to determine most efficient path

• FSPF supports equal cost load balancing across links
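A minimal NX-OS sketch of how FSPF is typically inspected and tuned on MDS/Nexus switches (the VSAN and interface numbers here are hypothetical placeholders):

show fspf vsan 10
show fspf database vsan 10
interface fc1/1
  fspf cost 200 vsan 10

The first two commands display the FSPF process state and link-state database for the VSAN; the interface-level fspf cost command raises the cost of that ISL so FSPF prefers other paths of equal bandwidth.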

Directory Server / Name Server (Like DNS)

• Repository of information regarding the components that make up the Fibre Channel network

• Located at well-known address FFFFFC (some readings call this the name server)

• Components can register their characteristics with the directory server

• An N_Port can query the directory server for specific information
  • The query can return the address identifier, WWN, and volume names for all SCSI targets

Example name server output (show fcns database):

FCID      TYPE  PWWN (VENDOR)                     FC4-TYPE:FEATURE
0xc70000  N     20:47:00:0d:ec:f0:85:40 (Cisco)   npv
0xc70001  N     20:fa:00:25:b5:aa:00:01           scsi-fcp:init fc-gs
0xc70002  N     20:fa:00:25:b5:aa:00:00           scsi-fcp:init fc-gs
0xc70008  N     20:fe:00:25:b5:aa:00:01           scsi-fcp:init fc-gs
0xc7000f  N     20:fe:00:25:b5:aa:00:00           scsi-fcp:init fc-gs
0xc70010  N     20:fe:00:25:b5:aa:00:02           scsi-fcp:init fc-gs
0xc70029  N     20:00:00:25:b5:20:00:0e           scsi-fcp:init fc-gs
0xc70100  N     50:06:0b:00:00:65:e2:0e (HP)      scsi-fcp:target
0xc70200  N     20:20:54:7f:ee:2a:14:40 (Cisco)   npv
0xc70201  N     20:00:54:78:1a:86:c6:65           scsi-fcp:init fc-gs

Login Complete… Almost There: Fabric Zoning

• Zones are the basic form of data path security

• Zone members can only "see" and talk to other members of the zone

• Devices can be members of more than one zone

• Default zoning is "deny"

• Zones belong to a zoneset

• A zoneset must be "active" to enforce zoning

• Only one active zoneset per fabric or per VSAN

Example zone (initiator pwwn 10:00:00:00:c9:76:fd:31, target pwwn 50:06:01:61:3c:e0:1a:f6):

zone name EXAMPLE_Zone
  fcid 0x10.00.01 [pwwn 10:00:00:00:c9:76:fd:31] [initiator]
  fcid 0x11.00.01 [pwwn 50:06:01:61:3c:e0:1a:f6] [target]
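A minimal NX-OS configuration sketch showing how such a zone would be built and activated (the zone/zoneset names and VSAN number are hypothetical; the pWWNs are the ones from the example above):

zone name EXAMPLE_Zone vsan 10
  member pwwn 10:00:00:00:c9:76:fd:31
  member pwwn 50:06:01:61:3c:e0:1a:f6
zoneset name EXAMPLE_ZS vsan 10
  member EXAMPLE_Zone
zoneset activate name EXAMPLE_ZS vsan 10

Once the zoneset is activated, only the listed initiator and target can see each other in VSAN 10; everything else follows the default "deny".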

SAN Islands

(Diagram: separate physical SAN islands – a Production SAN, a Tape SAN, and a Test SAN – each built from its own switch pairs: SAN A/B with domain IDs 1-2, SAN C/D with domain IDs 3-4, SAN E/F with domain IDs 5-6, plus further switches with domain IDs 7-8.)

Virtual SANs (VSANs) – Where Have I Heard This Before?

• Analogous to VLANs in Ethernet; VSANs are supported on the MDS and Nexus product lines

• Virtual fabrics created from a larger, cost-effective, redundant physical fabric

• Reduces the wasted ports of the SAN-island approach – physical SAN islands are virtualized onto a common SAN infrastructure

• Fabric events are isolated per VSAN, which gives further isolation for high availability

• Statistics can be gathered per VSAN

• Each VSAN provides separate fabric services
  • FSPF, zones/zonesets, DNS, RSCN
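A minimal NX-OS sketch of creating a VSAN and assigning a port to it (the VSAN number, name, and interface are hypothetical placeholders):

vsan database
  vsan 20 name PRODUCTION
  vsan 20 interface fc1/5
interface fc1/5
  no shutdown

show vsan membership can then be used to confirm which interfaces belong to which virtual fabric.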

SAN Islands – with Virtual SANs

(Diagram: the Production, Tape, and Test SAN islands carried as VSANs over a common physical infrastructure – SAN A/B with domain IDs 1-2, SAN C/D with domain IDs 3-4, SAN E/F with domain IDs 5-6.)

VSANs and Zones – Complementary

Virtual SANs and fabric zoning are very complementary.

Relationship of VSANs to zones:

• Hierarchical relationship
  • First assign physical ports to VSANs
  • Then configure independent zones per VSAN

• VSANs divide the physical infrastructure

• Zones provide added security and allow sharing of device ports

• VSANs provide traffic statistics

• VSANs are only changed when ports are needed per virtual fabric

• Zones can change frequently (e.g. backup)

• Ports are added and removed non-disruptively to VSANs

(Diagram: one physical topology carrying VSAN 2 and VSAN 3; VSAN 2 contains ZoneA, ZoneB, and ZoneC across Host1, Host2, and Disk1-Disk4, while VSAN 3 contains ZoneA and ZoneD across Host3, Host4, Disk5, and Disk6.)

Storage Oversubscription

• Represents the number of hosts connected to a single port of a storage array

• Needs to be maintained across the whole SAN design

• Defines the SAN oversubscription; oversubscription is introduced at multiple points

• Considerations
  • Switches are rarely the bottleneck in SAN implementations
  • Device capabilities (peak and sustained) must be considered along with network oversubscription
  • Must consider oversubscription during a network failure event

• Remember, all traffic flows towards the targets – the main bottlenecks

Disk oversubscription: disks do not sustain wire-rate I/O with 'realistic' I/O mixtures and have low sustained I/O rates; a major vendor promotes a 12:1 maximum host-to-disk fan-out.

Tape oversubscription: all tape technologies currently have 'theoretical' native transfer rates well below wire-speed FC (LTO, SDLT, etc.).

ISL oversubscription: typical oversubscription in a two-tier design can approach 8:1 (common), some even higher.

Host oversubscription: most hosts suffer from PCI bus, OS, and application limitations, thereby limiting maximum I/O and bandwidth rates.

Port Channels

Distribute inter-switch links across:
• Different line cards
• Different ASICs
• Different port-groups
• Up to 16 links per port channel

Advantages:
• High availability – smaller failure domain
• Preserves in-order delivery
• Interfaces can be added and removed non-disruptively in production environments

PortChannel vs. Trunking

• ISL = inter-switch link

• PortChannel = E_Ports and ISLs

• Trunk = ISLs that support VSANs

• Trunking = TE_Ports and EISLs
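A minimal NX-OS sketch of bundling two ISLs into a trunking port channel (interface, channel-group, and VSAN numbers are hypothetical placeholders):

interface fc1/1
  channel-group 10 force
  no shutdown
interface fc1/2
  channel-group 10 force
  no shutdown
interface port-channel 10
  switchport mode E
  switchport trunk allowed vsan 10
  switchport trunk allowed vsan add 20

With trunking enabled, the resulting port-channel comes up as an EISL (TE_Ports) carrying VSANs 10 and 20 while preserving in-order delivery within each exchange.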

Port Channeling Inter-Switch Links (ISLs): Load Balancing – SRC/DST/OXID vs. SRC/DST

• SRC/DST/OXID: exchanges between the same two FCIDs take different paths, achieving a better load balance across the port channel

• SRC/DST: exchanges between the same two FCIDs take the same path across the port channel, as requested by the SAN administrator (FICON, for instance)

Storage Multipathing

• Multipathing: multiple hardware paths to a single drive (LUN)

• Active/Active: balanced I/O over both paths (implementation specific); multipathing software balances I/O over the available paths using different controller WWPNs

• Active/Passive: I/O over the primary path, switching to the standby path upon failure; multipathing software monitors the active path and the standby (failover) path

(Diagram: a host application with an iSCSI driver reaching a LUN that is mapped over multiple interfaces, WWPN a and WWPN b.)

Agenda

✓ History of Storage Area Networks
✓ Fibre Channel Technology
✓ Designing Fibre Channel Networks
✓ Storage Topologies
• Introduction to NPIV/NPV
• Core-Edge Design Review
• Recommended Core-Edge Designs for Scale and Availability
• Fibre Channel over Ethernet
• Converged Core-Edge Designs

Storage Topologies: The Design Requirements – Classical Fibre Channel

• Fibre Channel SAN
  • Transport and services are on the same layer, in the same devices
  • Well-defined end-device relationships (initiators and targets)
  • Does not tolerate packet drop – requires lossless transport
  • Only north-south traffic; east-west traffic is mostly irrelevant

• Network designs optimized for scale and availability
  • High availability of network services provided through dual-fabric architecture
  • Edge/Core vs. Edge/Core/Edge
  • Service deployment

• Fabric topology, services, and traffic flows are structured; client/server (initiator-to-target) relationships are pre-defined

(Diagram: switches running the fabric services – DNS, FSPF, Zone, RSCN – connecting initiators I0-I5 to targets T0-T2.)

SAN Design – Single-Tier Topology (Collapsed Core Design)

• Servers connect to the core switches

• Storage devices connect to one or more core FC switches

• Core switches provide storage services

• Large number of blades (line cards) to support initiator (host) and target (storage) ports

• Single Management per Fabric

• Normal for Small SAN environments

• HA achieved in two physically separate, but identical, redundant SAN fabrics

How Do We Avoid This?

SAN Design – Two-Tier Topology

"Core-Edge" Topology – the Most Common

• Servers connect to the edge switches

• Storage devices connect to one or more core switches

• Core switches provide storage services to one or more edge switches, thus servicing more servers in the fabric

• ISLs have to be designed so that the overall fan-in ratio of servers to storage and the overall end-to-end oversubscription are maintained

• HA achieved in two physically separate, but identical, redundant SAN fabrics

SAN Design – Three-Tier Topology (Edge-Core-Edge)

• Servers connect to the edge switches

• Storage devices connect to one or more edge switches

• Core switches provide storage services to one or more edge switches, thus servicing more servers and storage in the fabric

• ISLs have to be designed so that the overall fan-in ratio of servers to storage and the overall end-to-end oversubscription are maintained

• HA achieved in two physically separate, but identical, redundant SAN fabrics

Agenda

✓ History of Storage Area Networks
✓ Fibre Channel Technology
✓ Designing Fibre Channel Networks
✓ Storage Topologies
✓ Introduction to NPIV/NPV
• Core-Edge Design Review
• Recommended Core-Edge Designs for Scale and Availability
• Fibre Channel over Ethernet
• Converged Core-Edge Designs

Introduction to NPIV/NPV

What Is NPIV? And Why?

• N-Port ID Virtualization (NPIV) provides a means to assign multiple FCIDs to a single N_Port
  • A limitation exists in FC where only a single FCID can be handed out per F_Port; therefore an F_Port can otherwise only accept a single FLOGI

• Allows multiple applications to share the same Fibre Channel adapter port

• Usage applies to applications such as virtualization

(Diagram: an application server's Email, Web, and File Services I/O share one physical N_Port; the F_Port on the FC NPIV core switch assigns N_Port_ID 1, 2, and 3. Note: you may hear it referred to as the "NPIV upstream" switch.)
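A minimal NX-OS sketch of the core side: NPIV only needs to be enabled as a feature, after which an F_Port will accept multiple FLOGIs/FDISCs:

feature npiv
show flogi database

With NPIV enabled, the flogi database can show several FCIDs and pWWNs logged in behind the same physical F_Port.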

What Is NPV? And Why?

• N-Port Virtualizer (NPV) utilizes NPIV functionality to allow a "switch" to act like a server, performing multiple logins through a single physical link

• Physical servers connected to the NPV switch log in to the upstream NPIV core switch

• No local switching is done on an FC switch in NPV mode

• An FC edge switch in NPV mode does not take up a domain ID
  • Helps to alleviate domain ID exhaustion in large fabrics

(Diagram: Server1-Server3 attach to F_Ports fc1/1-fc1/3 of the NPV switch; its NP_Port logs in to the F_Port of the FC NPIV core switch, which assigns N_Port_ID 1, 2, and 3.)
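A minimal NX-OS sketch of the edge side (interface numbers are hypothetical; note that switching a device into NPV mode is disruptive on most platforms, as it erases the configuration and reloads the switch):

feature npv
interface fc1/10
  switchport mode NP
  no shutdown
interface fc1/1
  switchport mode F
  no shutdown

show npv status and show npv flogi-table can then be used to verify the NP uplinks and the server logins being proxied to the NPIV core.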

NPV Auto Load Balancing

• Uniform balancing of server loads on the NP links

• Server loads are not tied to any uplink

• Benefit
  • Optimal uplink bandwidth utilization

(Diagram: a blade server chassis with Blade1-Blade4 whose logins are balanced across NP uplinks 1-4 toward the SAN.)

NPV Auto Load Balancing (Failover)

• Automatically moves the failed servers' logins to other available NP links

• Servers are forced to re-login immediately, after experiencing a "short" traffic disruption

• Benefits
  • Downtime greatly reduced
  • Automatic failover of loads on the NP links

(Diagram: when one NP uplink fails, the disrupted servers re-login over the remaining uplinks toward the SAN.)

F-Port Port Channel

• F-Port PortChannels enhance NPV uplink resiliency
  • Bundle multiple ports into one logical link
  • Similar to ISL port channels in FC and EtherChannels in Ethernet

• Benefits
  • High availability – no disruption if a cable, port, or line card fails
  • Optimal bandwidth utilization and higher aggregate bandwidth with load balancing

(Diagram: a BladeSystem's NP_Port uplinks bundled into an F-Port port channel toward the F_Port of the core director, which connects to storage in the SAN.)

Example configuration:

interface port-channel 1
  no shut
interface fc1/1
  channel-group 1
interface fc1/2
  channel-group 1

NPV F-Port Port Channel: No Traffic Disruption

• Link failures do not affect the blade server connectivity

• No application disruption

(Diagram: one member link of the F-port port channel between the blade server chassis and the SAN fails; traffic continues over the remaining members.)

F-Port Trunking

• F-Port trunking
  • Uplinks carry multiple VSANs

• Benefits
  • Extend VSAN benefits to blade servers
  • Separate management domains
  • Traffic isolation and the ability to host differentiated services on blades

(Diagram: blades in VSAN 1, 2, and 3 behind a BladeSystem; the NP_Port F-port channel toward the core director trunks all three VSANs to the SAN and storage.)

Example configuration:

interface fc1/1
  switchport trunk mode on
  switchport trunk allowed vsan 1-3

interface port-channel 1
  switchport trunk mode on
  switchport trunk allowed vsan 1-3

Agenda

✓ History of Storage Area Networks
✓ Fibre Channel Technology
✓ Designing Fibre Channel Networks
✓ Storage Topologies
✓ Introduction to NPIV/NPV
✓ Core-Edge Design Review
✓ Recommended Core-Edge Designs for Scale and Availability
• Fibre Channel over Ethernet
• Converged Core-Edge Designs

Core-Edge Design Review

SAN Design – Two-Tier Topology

"Edge-Core" Topology – the Most Common

• Servers connect to the edge switches

• Storage devices connect to one or more core switches

• Core switches provide storage services to one or more edge switches, thus servicing more servers in the fabric

• ISLs have to be designed so that the overall fan-in ratio of servers to storage and the overall end-to-end oversubscription are maintained

• HA achieved in two physically separate, but identical, redundant SAN fabrics

Blade Switch Explosion Issues

• Scalability
  • Each blade switch uses a single domain ID
  • Theoretical maximum number of domain IDs is 239 per VSAN
  • Supported number of domains is quite a bit smaller (depends on the OSM)
  • Cisco tested: 90

• Manageability
  • More switches to manage
  • Shared management of blade switches between storage and server administrators

Server Consolidation with Top-of-Rack Fabric Switches

Top-of-Rack design (MDS 91xx fabric switches, dual fabrics A and B): 96 storage ports at 2 G, 28 ISLs to the edge at 10 G, 2× ISL from each top-of-rack switch to the core at 10 G, 32 host ports at 4 G per switch; 14 racks with 32 dual-attached servers per rack.

  Ports deployed: 1200
  Storage ports (4 G dedicated): 192
  Host ports (4 G shared): 896
  Disk oversubscription (ports): 9.3 : 1
  Number of FC switches in the fabric: 30

Server Consolidation with Blade Servers

Blade server design (dual fabrics A and B), using 2 × 4 G ISL per blade switch: 120 storage ports at 2 G, 60 ISLs to the edge at 4 G, 2× ISL from each blade switch to the core at 4 G, 16 host ports at 4 G per blade switch; five racks with 96 dual-attached blade servers per rack.

  Ports deployed: 1608
  Storage ports (4 G dedicated): 240
  Host ports (4 G shared): 480
  Disk oversubscription (ports): 8 : 1
  Number of FC switches in the fabric: 62

Agenda

✓ History of Storage Area Networks
✓ Fibre Channel Technology
✓ Designing Fibre Channel Networks
✓ Storage Topologies
✓ Introduction to NPIV/NPV
✓ Core-Edge Design Review
✓ Recommended Core-Edge Designs for Scale and Availability
• Fibre Channel over Ethernet
• Converged Core-Edge Designs

Recommended Core-Edge Designs for Scale and Availability

SAN Design Key Requirements

• High availability – providing a dual fabric (current best practice)

• Fabric scalability (FLOGI and domain ID scaling)

• Providing connectivity for diverse server placement and form factors

• Meeting oversubscription ratios established by disk vendors

• Effective zoning

• Providing business function / operating system fabric segmentation and security

• Providing connectivity for virtualized servers

N-Port Virtualizer (NPV) Reduces the Number of FC Domain IDs

Top-of-Rack design with the fabric switches (MDS 91xx) in NPV mode and an NPIV-enabled core (dual fabrics A and B): 96 storage ports at 2 G, 28 ISLs to the edge at 10 G, 2 ISLs from each ToR switch to the core at 10 G, 32 host ports at 4 G per switch; 14 racks with 32 dual-attached servers per rack.

  Ports deployed: 1200
  Storage ports (4 G dedicated): 192
  Host ports (4 G shared): 896
  Disk oversubscription (ports): 9.3 : 1
  Number of FC switches in the fabric: 2

NPV Blade Switch

Blade server design with the blade switches (MDS 91xx) in NPV mode and an NPIV-enabled core (dual fabrics A and B), using 2 × 4 G ISL per blade switch: 120 storage ports at 2 G, 60 ISLs to the edge at 4 G, 2 ISLs from each blade switch to the core at 4 G, 16 host ports at 4 G per blade switch; five racks with 96 dual-attached blade servers per rack.

  Ports deployed: 1608
  Storage ports (4 G dedicated): 240
  Host ports (4 G shared): 480
  Disk oversubscription (ports): 8 : 1
  Number of fabric switches to manage: 2

F-Port Port Channel: Enhance NPV Uplink Resiliency

• F-Port PortChannels
  • Bundle multiple ports into one logical link
  • Similar to ISL port channels in FC and EtherChannels in Ethernet

• Benefits
  • High availability – no disruption if a cable, port, or line card fails
  • Optimal bandwidth utilization and higher aggregate bandwidth with load balancing

(Diagram: a BladeSystem's N_Port uplinks bundled into an F-Port port channel toward the F_Port of the core director and storage in the SAN.)

Example configuration:

interface port-channel 1
  no shut
interface fc1/1
  channel-group 1
interface fc1/2
  channel-group 1

Summary of Recommendations

• High Availability - Provide a Dual Fabric

• Use of Port-Channels and F-Port Channels with NPIV to provide the bandwidth to meet oversubscription ratios

• Use NPIV/NPV to provide Domain ID scaling and ease of management

Agenda

✓ History of Storage Area Networks
✓ Fibre Channel Technology
✓ Designing Fibre Channel Networks
✓ Storage Topologies
✓ Introduction to NPIV/NPV
✓ Core-Edge Design Review
✓ Recommended Core-Edge Designs for Scale and Availability
✓ Fibre Channel over Ethernet
• Converged Core-Edge Designs

FCoE

Traditional Data Center Design: Ethernet LAN and Fibre Channel SAN

• Physical and logical separation of Ethernet LAN and SAN traffic

• Additional physical and logical separation of the SAN fabrics

(Diagram: a server with a separate NIC and HBA – the NIC connects into the L2/L3 Ethernet LAN, while the HBA connects through native FC switches into SAN Fabric 'A' and Fabric 'B'.)

"Fabric vs. Network" or "Fabric & Network": SAN Dual Fabric Design

• SAN and LAN have different design assumptions
  • SAN leverages a dual fabric with multipathing failover initiated by the client
  • LAN leverages a single, fully meshed fabric with higher levels of component redundancy

(Diagram: two physically separate FC core/edge fabrics, Fabric A and Fabric B, raising the question of how this maps onto the converged network.)

Fibre Channel over Ethernet: What Enables It?

• 10 Gbps Ethernet

• Lossless Ethernet
  • Matches the lossless behavior guaranteed in FC by B2B credits

• Ethernet jumbo frames
  • Max FC frame payload = 2112 bytes

(Diagram: FCoE encapsulation – a normal Ethernet frame with ethertype = FCoE carries the Ethernet header, FCoE header (control information: version, ordered sets SOF/EOF), FC header, FC payload, CRC, EOF, and the Ethernet FCS; the encapsulated content is the same as a physical FC frame.)

Unified Fabric: Why?

• Fewer CNAs (Converged Network Adapters) instead of NICs, HBAs, and HCAs

• Limited number of interfaces for blade servers

(Diagram: before – separate FC HBAs for FC traffic, NICs for LAN, management, and backup traffic, and an HCA for IPC traffic; after – all traffic goes over 10GE through a pair of CNAs.)

Reference Slide: So… How Do We Do This? Standards for Unified I/O with FCoE

• T11 FC-BB-5 – FCoE (FC on other network media)

• IEEE 802.1 DCB:
  • 802.1BR – PE (Port Extender)
  • 802.1Qbb – PFC (Priority Flow Control, lossless Ethernet)
  • 802.1Qaz – ETS (Enhanced Transmission Selection, priority grouping)
  • 802.1Qaz – DCBX (configuration verification)
  • 802.1Qbg – EVB (Edge Virtual Bridging)

Data Center Bridging eXchange (DCBX), IEEE 802.1Qaz

• Auto-negotiation of capability and configuration
  • Priority Flow Control, ETS capability, and the associated CoS values

• Allows one link peer to push configuration to the other link peer ("I'll show you mine if you show me yours")

• Simplifies management: configuration and distribution of parameters from one node to another

• DCBX is LLDP with new TLV fields

• Discovers lossless Ethernet capabilities

• Responsible for logical link up/down signaling of Ethernet and FC

• DCBX negotiation failures will result in:
  • Per-priority pause not enabled on CoS values with a PFC configuration
  • The vfc not coming up, when DCBX is being used in an FCoE environment

IEEE 802.1Qbb Priority Flow Control

• PFC enables Flow Control on a Per- Priority basis

• Therefore, we have the ability to have lossless and lossy priorities at the same time on the same wire

• Allows FCoE to operate over a lossless priority independent of other priorities

• Other traffic assigned to other CoS will continue to transmit and rely on upper-layer protocols for retransmission

• Not only for FCoE traffic

Enhanced Transmission Selection (ETS), IEEE 802.1Qaz

• Required when consolidating I/O – it's a QoS problem

• Prevents a single traffic class from "hogging" all the bandwidth and starving other classes

• When a given load doesn't fully utilize its allocated bandwidth, it is available to other classes

• Helps accommodate classes of a "bursty" nature

(Diagram: bandwidth on the Ethernet wire shared 50% / 50% between FCoE and other traffic classes.)

FCoE Behind the Scenes

• Data Center Bridging eXchange (DCBX):
  • DCBX is a protocol that extends the Link Layer Discovery Protocol (LLDP), defined in IEEE 802.1Qaz. DCBX allows the FCF to provide link layer configuration information to the CNA and allows both the CNA and FCF to exchange status.

• FCoE protocols:
  • FCoE data plane protocol – requires lossless Ethernet, is typically implemented in hardware, and is used to carry the FC frames with SCSI
  • FIP (FCoE Initialization Protocol) – used to discover FCoE-capable devices connected to an Ethernet network and to negotiate capabilities

Introducing FCoE Initialization Protocol

• "FIP" discovers other FCoE capable devices within the lossless Ethernet Network

• Enables FCoE adapters (CNAs) to discover FCoE switches (FCFs) on the FCoE VLAN

• Establishes a virtual link between the adapter and FCF or between two FCFs

• "FIP" uses a different Ethertype from FCoE: allows for "FIP"- Snooping by DCB capable Ethernet bridges

• Building foundation for future Multitier FCoE topologies

"FIP" – Login Flow Ladder

(Diagram: login flow ladder between an ENode and an FCoE switch – FIP, the FCoE Initialization Protocol, handles VLAN discovery and FCF discovery, followed by FLOGI/FDISC and its accept; FC commands and responses then flow using the normal SAN protocols.)

FCoE, Same Model as FC

• Same host-to-target communication

• Host has 2 CNAs (one per fabric)

• Target has multiple ports to connect to the fabric

• Connect to a DCB-capable switch
  • Port type negotiation
  • Speed negotiation
  • DCBX negotiation

• The access switch is a Fibre Channel over Ethernet Forwarder (FCF) in this directly connected model

• Dual fabrics are still deployed for redundancy

(Diagram: an ENode with a CNA connects to a DCB-capable switch acting as an FCF, which connects into the Ethernet fabric and the FC fabric toward the target.)

Fibre Channel over Ethernet Forwarder (FCF)

• Fibre Channel over Ethernet Forwarders (FCFs) allow switching of FCoE frames across multiple hops

• VE_Ports create a standards-based FCoE ISL, which is necessary for multi-hop FCoE

• Nothing further is required (no TRILL or Spanning Tree)

(Diagram: E_Ports interconnect switches with native FC; VE_Ports interconnect FCFs with FCoE.)

Agenda

✓ History of Storage Area Networks
✓ Fibre Channel Technology
✓ Designing Fibre Channel Networks
✓ Storage Topologies
✓ Introduction to NPIV/NPV
✓ Core-Edge Design Review
✓ Recommended Core-Edge Designs for Scale and Availability
✓ Fibre Channel over Ethernet
✓ Converged Core-Edge Designs

Converged Core-Edge Designs

Single-Hop FCoE Design

• Single-hop (directly connected) FCoE from the CNA to the FCF, then broken out to native FC

• A VLAN can be dedicated to every virtual fabric in the SAN

• FIP discovers the FCoE VLAN and signals it to the hosts

• Trunking is not required on the host driver – all FCoE frames are tagged by the CNA

• FCoE VLANs can be pruned from Ethernet links that are not designated for FCoE

• Maintains isolated edge switches for SAN 'A' and 'B' and separate LAN switches for NIC 1 and NIC 2 (standard NIC teaming or link aggregation)

(Diagram: FCF-A and FCF-B at the edge, using NPV toward Virtual Fabric 2 and Virtual Fabric 3 of SAN Fabric A and B; the host links carry VLANs 10,20 to FCF-A and VLANs 10,30 to FCF-B, with the LAN fabric above.)
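A minimal NX-OS sketch of the FCF side of such a single-hop design (the VLAN/VSAN numbers and interfaces are hypothetical placeholders):

feature fcoe
vsan database
  vsan 20
vlan 20
  fcoe vsan 20
interface ethernet 1/1
  switchport mode trunk
  switchport trunk allowed vlan 10,20
interface vfc 1
  bind interface ethernet 1/1
  no shutdown
vsan database
  vsan 20 interface vfc 1

Here VLAN 20 is dedicated to VSAN 20 (one fabric); the CNA-facing Ethernet port trunks the LAN VLAN 10 plus the FCoE VLAN 20, and the vfc interface carries the host's FLOGI into the fabric.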

FCoE and vPC Together

• vPC with FCoE is ONLY supported between hosts and FCF pairs… AND they must follow specific rules
  • A 'vfc' interface can only be associated with a single-link channel bundle
  • While the link aggregation configurations are the same on FCF-A and FCF-B, the FCoE VLANs can be different

• FCoE VLANs are not carried on the vPC peer-link

• FCoE and FIP ethertypes are not forwarded over the vPC peer-link either

• A vPC carrying FCoE between two FCFs is NOT recommended, to keep SAN A/B separation

(Diagram: a host dual-homed by a vPC of only 2 × 10GE links, one to each FCF, configured as an STP edge trunk; VLAN 20 = Fabric A, VLAN 30 = Fabric B, and only VLAN 10 appears on the vPC peer-link.)

UCS Core-Edge Design: N-Port Virtualization Forwarding with MDS

• F_Port channeling and trunking from the MDS to UCS
  • The FC port channel can carry all VSANs (trunk)

• The UCS Fabric Interconnects remain in NPV end-host mode

• A server vHBA is pinned to an FC port channel

• A server vHBA has access to bandwidth on any member link of the FC port channel

• Load balancing is based on the FC Exchange_ID (per flow)

• Loss of a port channel member link has no effect on the server vHBA (it hides the failure)
  • Affected flows move to the remaining member links
  • No FLOGI required

(Diagram: 6100-A and 6100-B Fabric Interconnects in NPV mode, each with an F_Port channel and trunk – VSANs 1,2 and VSANs 1,3 – toward the NPIV-enabled MDS cores of SAN A and SAN B; Server 1 and Server 2 present vHBA 0/vHBA 1 on vfc interfaces in VSAN 1 and VSAN 2.)

Fibre Channel Aware Device: FCoE NPV

• An "FCoE NPV bridge" improves over a "FIP snooping bridge" by intelligently proxying FIP functions between a CNA and an FCF

• Active Fibre Channel forwarding and security element

• FCoE NPV load-balances logins from the CNAs evenly across the available FCF uplink ports

• FCoE NPV takes the VSAN into account when mapping or 'pinning' logins from a CNA to an FCoE FCF uplink

• Proxies FCoE VLAN discovery and FCoE FCF discovery

• Emulates the existing Fibre Channel topology (same management, security, HA)

(Diagram: Fibre Channel configuration and control applied at the edge port – the FCoE NPV device's VNP port faces the FCF's VF port.)
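A minimal NX-OS sketch of enabling this mode on an edge switch (a sketch assuming a Nexus-style FCoE NPV implementation; the VLAN/VSAN and interface numbers are hypothetical placeholders):

feature fcoe-npv
vsan database
  vsan 20
vlan 20
  fcoe vsan 20
interface ethernet 1/49
  switchport mode trunk
interface vfc 149
  switchport mode NP
  bind interface ethernet 1/49
  no shutdown

The vfc in NP mode is the virtual uplink toward the FCF; host-facing vfc interfaces remain in F mode.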

Multi-Tier with FCoE NPV: Considerations

• Considered multi-hop because of the intelligence of the edge

• Designed the same way as FC NPV

• No local switching on an FCoE NPV edge device

• The usual traffic is north-south from initiator to target, but east-west traffic could be an issue in this design

• Still want to use FCoE-dedicated uplinks to meet or allocate enough bandwidth for the oversubscription rates at 10G

(Diagram: FCFs interconnected by VE_Ports at the core; the FCoE NPV edge device's VNP port faces a core FCF's VF port, and its own VF ports face the hosts' VN ports.)

Recommended Reading

• NX-OS and Cisco Nexus Switching, Second Edition (ISBN 1-58714-304-6), by David Jansen, Ron Fuller, Matthew McPherson. Cisco Press, 2013.

• Storage Networking Fundamentals (ISBN-10 1-58705-162-1; ISBN-13 978-1-58705-162-3), by Marc Farley. Cisco Press, 2007.

• Storage Networking Protocol Fundamentals (ISBN 1-58705-160-5), by James Long. Cisco Press, 2006.

• CCNA Data Center DCICN 200-150 Official Cert Guide (ISBN-10 1-58720-596-3; ISBN-13 978-1-58720-596-5), by Chad Hintz, Cesar Obediente, Ozden Karakok. Cisco Press, 2017.

Available onsite at the Cisco Company Store.

Cisco Webex Teams

Questions? Use Cisco Webex Teams (formerly Cisco Spark) to chat with the speaker after the session.

How:
1. Find this session in the Cisco Events App
2. Click "Join the Discussion"
3. Install Webex Teams or go directly to the team space
4. Enter messages/questions in the team space

Webex Teams will be moderated by the speaker until June 18, 2018. cs.co/ciscolivebot#BRKDCN-1121

Complete your online session evaluation

Give us your feedback to be entered into a Daily Survey Drawing. Complete your session surveys through the Cisco Live mobile app or on www.CiscoLive.com/us.

Don’t forget: Cisco Live sessions will be available for viewing on demand after the event at www.CiscoLive.com/Online.

Continue your education: demos in the Cisco campus, walk-in self-paced labs, meet-the-engineer 1:1 meetings, and related sessions.

Thank you

#CLUS