Storage

Kh. Rashedul Arefin

Outlines

• Types of Storage
• Accessing Data
• Storage Area Network (SAN)
• Storage Virtualization

Types of Storage

• Primary Storage
  – Volatile storage that is directly accessible by the computer's CPU
  – Small capacity
  – Very fast
• Secondary Storage
  – Nonvolatile
  – Not directly accessible by the CPU
  – Requires an I/O channel
  – Slower
• Tertiary Storage
  – Removable mass storage
  – Slower than the other two types
  – Lower cost

Accessing Data

• Block-Based Access
  – SCSI
  – Mainframe storage access
  – Advanced Technology Attachment (ATA)
• File Access
  – Network-Attached Storage (NAS)
  – Network File System (NFS)
  – Common Internet File System (CIFS)
• Record Access
  – Open DataBase Connectivity (ODBC)
  – Java DataBase Connectivity (JDBC)
  – Structured Query Language (SQL)

Storage Area Network (SAN)

■ Arbitrated loop: Permits up to 127 devices to communicate with each other in a looped connection. Hubs were designed to improve reliability in these topologies, but with the higher adoption of switched fabrics, Fibre Channel loop interfaces are more likely to be found on legacy storage devices such as JBODs or older tape libraries.
■ Switched fabric: Comprises Fibre Channel devices that exchange data through Fibre Channel switches and theoretically supports up to 16 million devices in a single fabric.

Figure 8-16 illustrates these topologies, where each arrow represents a single fiber connection.

Figure 8-16 Fibre Channel Topologies (Point-to-Point, Arbitrated Loop, and Switched Fabric)

Figure 8-16 also introduces the following Fibre Channel port types:

■ Node Port (N_Port): Interface on a Fibre Channel end host in a point-to-point or switched fabric topology.
■ Node Loop Port (NL_Port): Interface that is installed in a Fibre Channel end host to allow connections through an arbitrated loop topology.
■ Fabric Port (F_Port): Fibre Channel switch interface that is connected to an N_Port.
■ Fabric Loop Port (FL_Port): Fibre Channel switch interface that is connected to a public loop. A fabric can have multiple FL_Ports connected to public loops, but, per definition, a private loop does not have a fabric connection.
■ Expansion Port (E_Port): Interface that connects to another E_Port in order to create an Inter-Switch Link (ISL) between switches.

Fibre Channel Addresses

Fibre Channel uses two types of addresses to identify and locate devices in a switched fabric: World Wide Names (WWNs) and Fibre Channel Identifiers (FCIDs).

In theory, a WWN is a fixed 8-byte identifier that is unique per Fibre Channel entity. Following the format used in Cisco Fibre Channel devices, this writing depicts WWNs as colon-separated bytes (10:00:00:00:c9:76:fd:31, for example).


A Fibre Channel device can have multiple WWNs, where each address may represent a part of the device, such as:

■ Port WWN (pWWN): Singles out one interface from a Fibre Channel node (or a Fibre Channel host bus adapter [HBA] port) and characterizes an N_Port
■ Node WWN (nWWN): Represents the node (or HBA) that contains at least one port
■ Switch WWN (sWWN): Uniquely represents a Fibre Channel switch
■ Fabric WWN (fWWN): Identifies a switch Fibre Channel interface and distinguishes an F_Port

Figure 8-17 displays how these different WWNs are assigned to distinct components of a duplicated host connection to a Fibre Channel switch.

Figure 8-17 Fibre Channel World Wide Names

In opposition, FCIDs are administratively assigned addresses that are inserted on Fibre Channel frame headers and represent the location of a Fibre Channel N_Port in a switched topology. An FCID consists of 3 bytes (Domain ID, Area ID, and Port ID, 8 bits each), which are detailed in Figure 8-18.

Figure 8-18 Fibre Channel Identifier Format

Each byte has a specific meaning in an FCID, as follows:

■ Domain ID: Identifies the switch where this device is connected
■ Area ID: May represent a subset of devices connected to a switch or all NL_Ports connected to an FL_Port
■ Port ID: Uniquely characterizes a device within an area or domain ID

To maintain consistency with Cisco MDS 9000 and Nexus switch commands, this writing describes FCIDs as contiguous hexadecimal bytes preceded by the "0x" symbol (0x01ab9e, for example).

Figure 8-21 Fibre Channel Logins

Figure 8-21 shows an HBA performing a Fabric Login (FLOGI) with a switch, as well as the subsequent PLOGI and PRLI processes between the same HBA and a storage array port. After all these negotiations, both devices are ready to proceed with their upper-layer protocol communication using Fibre Channel frames.

Zoning

A zone is defined as a subset of N_Ports from a fabric that are aware of each other, but not of devices outside the zone. Each zone member can be specified by a port on a switch, WWN, FCID, or human-readable alias (also known as FC-Alias).

Zones are configured in Fibre Channel switched fabrics to increase network security, introduce storage access control, and prevent data loss. By using zones, a SAN administrator can avoid a scenario where multiple servers can access the same storage resource, ruining the stored data for all of them. A fabric can deploy two methods of zoning:

■ Soft zoning: Zone members are made visible to each other through name server queries. With this method, unauthorized frames are capable of traversing the fabric.
■ Hard zoning: Frame permission and blockage is enforced as a hardware function on the fabric, which in turn will only forward frames among members of a zone. Cisco Fibre Channel switches only deploy this method.
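Because zone members are commonly specified by pWWN or FCID, a minimal Python sketch may help tie the two notations together. The helper names below are invented for illustration and are not part of any Cisco tool; the sketch only mirrors the byte layouts described above.

# Illustrative sketch only: models the WWN and FCID notations described above.

def parse_wwn(text: str) -> bytes:
    """Parse a colon-separated 8-byte WWN such as 10:00:00:00:c9:76:fd:31."""
    parts = text.split(":")
    if len(parts) != 8:
        raise ValueError("a WWN has exactly 8 bytes")
    return bytes(int(p, 16) for p in parts)

def parse_fcid(text: str) -> tuple[int, int, int]:
    """Split a 3-byte FCID such as 0x01ab9e into Domain ID, Area ID, and Port ID."""
    value = int(text, 16)
    domain_id = (value >> 16) & 0xFF   # identifies the switch
    area_id = (value >> 8) & 0xFF      # subset of devices (or the NL_Ports on an FL_Port)
    port_id = value & 0xFF             # one device within the area/domain
    return domain_id, area_id, port_id

if __name__ == "__main__":
    print(parse_wwn("10:00:00:00:c9:76:fd:31").hex(":"))
    print(parse_fcid("0x01ab9e"))      # -> (1, 171, 158)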

TIP In Cisco devices, you can also configure the switch behavior to handle traffic between unzoned members. To avoid unauthorized storage access, blocking is the recommended default behavior for N_Ports that do not belong to any zone.

A zone set consists of a group of one or more zones that can be activated or deactivated with a single operation. Although a fabric can store multiple zone sets, only one can be active at a time. The active zone set is present in all switches on a fabric, and only after a zone set is successfully activated can the N_Ports contained in each member zone perform PLOGIs and PRLIs between them.

NOTE The Zone Server service is used to manage zones and zone sets. Implicitly, an active zone set includes all the well-known addresses from Table 8-5 in every zone.

Figure 8-22 illustrates how zones and an active zone set can be represented in a Fibre Channel fabric.

Figure 8-22 Zones and Zone Sets

Although each zone in Figure 8-22 (A, B, and C) contains two or three members, more hosts could be inserted in them. When performing a name service query ("dear fabric, whom can I communicate with?"), each device receives the FCID addresses from members in the same zone and begins subsequent processes, such as PLOGI and PRLI. Additionally, Figure 8-22 displays the following self-explanatory types of zones:

■ Single-initiator, single-target (Zone A)
■ Multi-initiator, single-target (Zone B)
■ Single-initiator, multi-target (Zone C)

TIP Because not all members within a zone should communicate, single-initiator, single-target zones are considered best practice in Fibre Channel SANs.
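As a purely illustrative sketch of that best practice (the function and aliases are hypothetical, and it assumes every listed server really needs to reach every listed array), one could expand initiator and target pWWN lists into single-initiator, single-target zones collected in one zone set:

# Hypothetical illustration of the single-initiator, single-target practice:
# every (initiator, target) pair that must communicate gets its own two-member zone.

from itertools import product

def build_zoneset(name: str, initiators: dict, targets: dict) -> dict:
    """Return a zone set with one zone per initiator/target pair.

    initiators and targets map a friendly alias to a pWWN string.
    """
    zones = {}
    for (ini_alias, ini_pwwn), (tgt_alias, tgt_pwwn) in product(
            initiators.items(), targets.items()):
        zone_name = f"Z_{ini_alias}_{tgt_alias}"
        zones[zone_name] = [ini_pwwn, tgt_pwwn]   # exactly two members
    return {"name": name, "zones": zones}

if __name__ == "__main__":
    servers = {"Server1": "10:00:00:00:c9:76:fd:31"}
    arrays = {"Array1": "50:00:40:21:03:fc:6d:28"}
    zoneset = build_zoneset("ZS_FABRIC_A", servers, arrays)
    for zone, members in zoneset["zones"].items():
        print(zone, members)

In practice an administrator would enumerate only the pairs that actually need to communicate rather than the full product of the two lists.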

SAN Designs

In real-world SAN designs, it is very typical to deploy two isolated physical fabrics, with servers and storage devices connecting at least one N_Port to each fabric. Undoubtedly, such a best practice increases storage access availability (because there are two independent paths between each server and a storage device) and bandwidth (if multipath I/O software is installed on the servers, they may use both fabrics simultaneously to access a single storage device).


There are, of course, some exceptions to this practice. In many data centers, I have seen SANs with only one fabric being used for the connection between dedicated HBA ports in each server, tape libraries, and other backup-related devices. Another key aspect of SAN design is oversubscription, which generically defines the ratio between the maximum potential consumption of a resource and the actual resource allocated in a communication system. In the specific case of Fibre Channel fabrics, oversubscription is naturally derived from the comparison between the number of HBAs and storage ports in a single fabric.
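As a back-of-the-envelope illustration of that ratio (a sketch under the simplifying assumption that every port could run at its full line rate), the fan-out can be estimated by dividing aggregate HBA bandwidth by aggregate storage-port bandwidth:

# Simple, assumption-laden sketch: oversubscription (fan-out) estimated as the
# ratio between aggregate host bandwidth and aggregate storage-port bandwidth.

def fan_out(hba_count: int, hba_gbps: float,
            storage_port_count: int, storage_port_gbps: float) -> float:
    host_bw = hba_count * hba_gbps
    storage_bw = storage_port_count * storage_port_gbps
    return host_bw / storage_bw

if __name__ == "__main__":
    # 64 server HBAs at 8 Gbps sharing 16 storage ports at 8 Gbps -> 4.0 (a 4:1 fan-out)
    print(fan_out(64, 8, 16, 8))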

Because storage ports are essentially a shared resource among multiple servers (which rarely use all available bandwidth in their HBAs), the large majority of SAN topologies are expected to present some level of oversubscription. The only completely nonoversubscribed SAN topology is DAS, where every initiator HBA port has its own dedicated target port.

In classic SAN designs, an expected oversubscription between initiators and targets must be obeyed when deciding how many ports will be dedicated for HBAs, storage ports, and ISLs. Typically, these designs use oversubscriptions from 4:1 (four HBAs for each storage port, if they share the same speed) to 8:1.

TIP Such expected oversubscription is also known as fan-out.

SAN design doesn't have to be rocket science. Modern SAN design is about deploying ports and switches in a configuration that provides flexibility and scalability. It is also about making sure the network design and topology look as clean and functional one, two, or five years later as the day they were first deployed. In the traditional FC SAN design, each host and storage device is dual-attached to the network. This is primarily motivated by a desire to achieve 99.999% availability. To achieve 99.999% availability, the network should be built with redundant switches that are not interconnected. The FC network is built on two separate networks (commonly called path A and path B), and each end node (host or storage) is connected to both networks. Some companies take the same approach with their traditional IP/Ethernet networks, but most do not for reasons of cost. Because the traditional FC SAN design doubles the cost of network implementation, many companies are actively seeking alternatives. Some companies are looking to iSCSI and FCoE as the answer, and others are considering single-path FC SANs. Figure 22-14 illustrates a typical dual-path FC-SAN design.

With these concepts in mind, we will explore three common SAN topologies, which are depicted in Figure 8-23.


Figure 22-14 SAN A and SAN B FC Networks (each initiator HBA and each target connects to both FC fabric "A" and FC fabric "B")

Figure 8-23 Common SAN Topologies (Single-Layer, Core-Edge, and Edge-Core-Edge)

Infrastructures for data access necessarily include options for redundant server systems.

Storage Virtualization

• RAID
• Virtualizing Storage Devices
• Virtualizing LUNs
• Virtualizing File Systems
• File / Record Virtualization
• Tape Storage Virtualization
• Virtualizing SANs
  – FCIP for SAN Extension and Traffic Engineering
  – IVR for Transit VSAN
  – NPV for Blade Server Hosting Data Center
• N-Port ID Virtualization
• FCoE and SAN Extension for LAN and SAN Management Separation
• iSCSI

Figure 9-6 portrays some popular RAID levels, each one representing a different block aggregation scheme for the involved disk drives.

Figure 9-6 RAID Levels (block layouts for RAID 0, RAID 1, RAID 5, and RAID 1+0)

In a group of disks deploying RAID level 0, sequential blocks of data are written across them, in an operation called "striping." Figure 9-6 depicts a sequence of ten blocks being striped between two disks, which is the minimum quantity of devices for this level. RAID level 0 does not deploy any data redundancy, so a single disk failure results in total data loss. However, when compared to a single disk drive with similar capacity, this RAID level improves I/O performance because it supports simultaneous reads or writes on all disks.

RAID level 1, also known as "mirroring," requires at least two disks simply because every write operation at one device must be duplicated to another. Hence, if one of the disks fails, data can be completely recovered from its mirrored pair. This RAID level adds latency to write operations (because they must be executed on both disks) and reduces the overall capacity of the disk group by 50 percent.

RAID level 5 is a very popular method mainly because it nicely balances capacity and I/O performance when compared with other RAID levels. In summary, it deploys data block striping over a group of disks (minimum of three) and builds additional parity blocks that can be used to recover an entire sequence of blocks in the absence of a disk. Contrary to other parity-based methods, RAID 5 distributes the parity blocks evenly among the disks, minimizing I/O bottlenecks (because the write operation generates a change in its corresponding parity block).
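The block layouts of Figure 9-6 can be summarized with a short, conceptual sketch; it only models block placement and XOR parity and is in no way a storage implementation:

# Illustrative mapping of logical blocks for RAID 0, RAID 1, and RAID 5.

def raid0_place(block: int, disks: int) -> tuple[int, int]:
    """Return (disk index, stripe index) for a logical block under striping."""
    return block % disks, block // disks

def raid1_place(block: int, disks: int) -> list[tuple[int, int]]:
    """Mirroring: the same block is written to every disk in the group."""
    return [(d, block) for d in range(disks)]

def raid5_parity(stripe_blocks: list[bytes]) -> bytes:
    """Parity block = XOR of the data blocks in one stripe (recovers any one loss)."""
    parity = bytearray(len(stripe_blocks[0]))
    for blk in stripe_blocks:
        for i, byte in enumerate(blk):
            parity[i] ^= byte
    return bytes(parity)

if __name__ == "__main__":
    print([raid0_place(b, 2) for b in range(6)])   # blocks alternate between 2 disks
    print(raid1_place(4, 2))                       # block 4 mirrored on both disks
    stripe = [b"\x0f\x00", b"\xf0\x00", b"\x03\x0c"]
    parity = raid5_parity(stripe)
    # If one block (say stripe[1]) is lost, it can be rebuilt from parity and the rest:
    rebuilt = raid5_parity([stripe[0], stripe[2], parity])
    print(rebuilt == stripe[1])                    # True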


Volumes allow a more efficient and dynamic way to supply storage capacity. As a basis for this discussion, Figure 8-8 exhibits the creation of three volumes within two aggregation groups (RAID groups or disk pools).


Figure 8-8 Volumes Defined in Aggregation Groups

In Figure 8-8, three volumes of 6 TB, 3 TB, and 2 TB are assigned, respectively, to Server1, Server2, and Server3. In this scenario, each server has the perception of a dedicated HDD and, commonly, uses a software piece called a Logical Volume Manager (LVM) to create local partitions (subvolumes) and perform I/O operations on the volume on behalf of the server applications. The advantages of this intricate arrangement are

■ The volumes inherit high availability, performance, and aggregate capacity from a RAID group (or disk pool) that a single physical drive cannot achieve.
■ As purely logical entities, volumes can be dynamically resized to better fit the needs of the servers that are consuming array resources.

There are two ways a storage device can provision storage capacity. Demonstrating the provision method called thick provisioning, Figure 8-9 details a 6-TB volume being offered to Server1.

Figure 8-9 Thick Provisioning

In Figure 8-9, the array spreads the 6-TB volume over members of Aggregation Group 1 (RAID group or disk pool). Even if Server1 is only effectively using 1 TB of data, the array controllers dedicate 6 TB of actual data capacity for the volume and leave 5 TB completely unused. As you may infer, this practice may generate a huge waste of array capacity. For that reason, another method of storage provisioning, thin provisioning, was created, as Figure 8-10 shows.


Figure 8-10 Thin Provisioning

In Figure 8-10, a storage virtualizer provides the perception of a 6-TB volume to Server1, but only stores in the aggregation group what the server is actually using, thereby avoiding waste of array resources due to unused blocks.
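A toy model of the difference between the two provisioning methods (hypothetical classes, not any vendor's implementation): a thick volume reserves its full size at creation time, while a thin volume consumes capacity from the aggregation group only when blocks are actually written:

# Toy model contrasting thick and thin provisioning. Sizes are in GB and the
# "pool" simply tracks how much real capacity the aggregation group has left.

class Pool:
    def __init__(self, capacity_gb: int):
        self.free_gb = capacity_gb

    def take(self, gb: int) -> None:
        if gb > self.free_gb:
            raise RuntimeError("aggregation group exhausted")
        self.free_gb -= gb

class ThickVolume:
    def __init__(self, pool: Pool, size_gb: int):
        pool.take(size_gb)            # all capacity reserved up front

class ThinVolume:
    def __init__(self, pool: Pool, size_gb: int):
        self.pool, self.size_gb, self.used_gb = pool, size_gb, 0

    def write(self, gb: int) -> None:
        self.pool.take(gb)            # capacity consumed only when written
        self.used_gb += gb

if __name__ == "__main__":
    pool = Pool(capacity_gb=8000)
    thin = ThinVolume(pool, size_gb=6000)   # Server1 perceives a "6-TB" volume
    thin.write(1000)                        # but only 1 TB is really consumed
    print(pool.free_gb)                     # 7000 GB still free for other volumes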

NOTE Although a complete explanation of storage virtualization techniques is beyond the scope of this book, I would like to point out that such technologies can be deployed on storage devices, on servers, or even on dedicated network appliances.

From the previous sections, you have learned the basic concepts behind storing data in HDDs, RAID groups, disk pools, and volumes. Now it is time to delve into the variety of styles a server may deploy to access data blocks in these storage devices and constructs.

Accessing Blocks

Per definition, block storage devices offer to computer systems direct access and complete control of data blocks. And with the popularization of x86 platforms and HDDs, multiple methods of accessing data on these storage devices were created. These approaches vary widely depending on the chosen components of a computer system in a particular scenario and can be based on direct connections or storage-area network (SAN) technologies. Nonetheless, it is important that you realize that all of these arrangements share a common characteristic: they consistently present to servers the abstraction of a single HDD exchanging data blocks through read and write operations. And as a major benefit from block storage technologies, such well-meaning deception drastically increases data portability in data centers as well as cloud computing deployments.


In the next sections, I will outline some of these virtualization solutions, their characteristics, and the benefits they bring to the daily tasks of storage professionals. And because each vendor has different implementations for these solutions, in the next three sections I will aggregate their most common features using the level of data abstraction they act upon.

Note As a useful exercise, I recommend that you use the virtualization taxonomy explained in Chapter 1 to categorize each storage virtualization technology at the end of each section.

Virtualizing Storage Devices

Disk array virtualization can currently be deployed by partitioning a single device or by grouping several of them. Through partitioning, a physical disk array can be subdivided into logical devices, with assigned resources such as disks, cache, memory, and ports. Using the array as a pool of these resources, each virtual array partition can create exclusive LUNs or file systems for different departments, customers, or applications. By protecting data access between partitions and controlling hardware resources for each one of them, this style of virtualization encourages storage consolidation and resource optimization in multitenant data centers.

Storage virtualization also allows multiple physical arrays to work together as a single system, bringing advantages such as data redundancy and management consolidation. For example, array-based data replication illustrates how two distinct disk arrays can work in coordination to provide fault tolerance for the stored data transparently to the hosts that might be accessing them. There are two basic methods to deploy array-based replication, as Figure 9-7 illustrates.


Figure 9-7 Synchronous Versus Asynchronous Replication

On the left side of Figure 9-7, both disk arrays are implementing synchronous replication. As the figure shows, with this method of storage redundancy a write is acknowledged to the server only after it has been committed to both the primary and the secondary array, whereas asynchronous replication (on the right side) acknowledges the write as soon as the primary array commits it and copies the data to the secondary array afterward.

Figure 9-8 illustrates a generic tape library virtualization design.

Figure 9-8 Virtual Tape Library

In the figure, a backup server is retrieving and sending data to the disk array, using the same processes and protocols it would deploy with a traditional tape library. Consequently, the disk array imports and exports streams of data to the "real" tape library, acting as a "cache" mechanism for the latter. The most common virtual tape library (VTL) vendors are IBM, FalconStor, and Oracle.

Some of the available VTLs can reduce the amount of stored data through a compression technique called data deduplication, which eliminates duplicate copies of the same data sequence. Through this feature, a redundant chunk of data is replaced by a small reference that points to the only-once stored chunk. In real-world scenarios, deduplication can obliterate up to 95 percent of stored data on both the VTL and tape library.
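The deduplication idea can be sketched in a few lines. This is a conceptual illustration only; real VTLs use far more sophisticated chunking, fingerprinting, and indexing than the fixed-size chunks and SHA-256 hashes assumed here:

# Conceptual sketch of data deduplication: identical chunks are stored once and
# referenced thereafter. Chunk size and hashing scheme are arbitrary choices here.

import hashlib

class DedupStore:
    def __init__(self, chunk_size: int = 4096):
        self.chunk_size = chunk_size
        self.chunks: dict[str, bytes] = {}      # fingerprint -> unique chunk

    def write(self, data: bytes) -> list[str]:
        """Store data, returning the list of chunk references (fingerprints)."""
        refs = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            ref = hashlib.sha256(chunk).hexdigest()
            self.chunks.setdefault(ref, chunk)  # stored only the first time
            refs.append(ref)
        return refs

    def read(self, refs: list[str]) -> bytes:
        return b"".join(self.chunks[r] for r in refs)

if __name__ == "__main__":
    store = DedupStore()
    backup = b"A" * 8192 + b"B" * 4096          # two identical "A" chunks, one "B" chunk
    refs = store.write(backup)
    print(len(refs), "references,", len(store.chunks), "chunks actually stored")
    assert store.read(refs) == backup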

Virtualizing LUNs

Chances are that a discussion about storage virtualization will invariably bring up LUN virtualization as the key topic. Because LUNs usually represent the final deliverable element from the storage team to its main consumer (the server team), their management has been under rigorous scrutiny since the origins of SCSI. Here are some examples of the LUN management challenges faced by array administrators:

■ LUNs are statically defined in a single disk array.

■ The migration of LUNs to another disk array is a rather complicated task, and it generally demands application interruption.

■ Servers rarely utilize the requested size of their LUNs, raising capital and operational costs and severely decreasing device utilization.

■ The resizing of a LUN is usually a disruptive operation.

In most LUN virtualization deployments, a virtualizer element is positioned between a host and its associated target disk array (in-path I/O interception). This virtualizer presents virtual LUNs to the host and maps the I/O operations it intercepts to the corresponding physical LUNs on the back-end arrays.
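A hypothetical sketch of that in-path role (the class and field names are invented and do not reflect any specific product): the virtualizer keeps a table that maps a virtual LUN onto extents of physical LUNs, and every intercepted I/O is redirected through that table:

# Illustrative in-path LUN virtualizer: a virtual LUN is a concatenation of
# extents taken from physical LUNs, and reads/writes are redirected through a map.

from dataclasses import dataclass

@dataclass
class Extent:
    array: str        # back-end disk array name
    lun: int          # physical LUN on that array
    start_block: int  # first physical block of the extent
    length: int       # extent size in blocks

class VirtualLUN:
    def __init__(self, extents: list[Extent]):
        self.extents = extents

    def resolve(self, virtual_block: int) -> tuple[str, int, int]:
        """Translate a virtual block into (array, physical LUN, physical block)."""
        offset = virtual_block
        for ext in self.extents:
            if offset < ext.length:
                return ext.array, ext.lun, ext.start_block + offset
            offset -= ext.length
        raise ValueError("block beyond the virtual LUN size")

if __name__ == "__main__":
    vlun = VirtualLUN([Extent("ArrayA", 3, 0, 1000),
                       Extent("ArrayB", 7, 500, 2000)])
    print(vlun.resolve(10))     # ('ArrayA', 3, 10)
    print(vlun.resolve(1500))   # ('ArrayB', 7, 1000)

Because only the mapping table changes, such an element can, at least conceptually, move or resize a virtual LUN without the host noticing, which addresses several of the challenges listed above.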

Virtualizing File Systems

Figure 9-10 File System Virtualization (clients reach file services over the LAN through NFS and CIFS, while file servers, metadata servers, and a shared storage pool are connected through the SAN)

Note Figure 9-10 depicts an out-of-band virtualization system because only control information is handled by the virtualization device (metadata servers). Conversely, virtualization systems that directly act upon the exchanged data are known as inband virtualizers.

Besides extending a consolidated file system to multiple different clients, a virtualized file system can also leverage a performance boost through SAN block access. IBM TotalStorage SAN File System is an example of a virtual file system solution.

Virtualizing SANs

Considering that storage area networks are critical components in modern storage environments, it is no surprise that they can also benefit from virtualization features.

Cisco entered the Fibre Channel switch market in 2003 with its MDS 9000 series of directors and fabric switches. Since then, the company has applied its experience and resources to enable multiple virtualization features within intelligent SANs. In truth, this very subject characterizes the core content of the remaining chapters of Part III, "Virtualization in Storage Technologies."

Traffic Management

Are there any differing performance requirements for different application servers? Should bandwidth be reserved or preference be given to traffic in the case of congestion? Given two alternate traffic paths between data centers with differing distances, should traffic use one path in preference to the other? For some SAN designs, it makes sense to implement traffic management policies that influence traffic flow and relative traffic priorities.

Fault Isolation

Consolidating multiple areas of storage into a single physical fabric both increases storage utilization and reduces the administrative overhead associated with centralized storage management. The major drawback is that faults are no longer isolated within individual storage areas. Many organizations would like to consolidate their storage infrastructure into a single physical fabric, but both technical and business challenges make this difficult.

Technology such as virtual SANs (VSANs, see Figure 22-18) enables this consolidation while increasing the security and stability of Fibre Channel fabrics by logically isolating devices that are physically connected to the same set of switches. Faults within one fabric are contained within a single fabric (VSAN) and are not propagated to other fabrics.
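A toy model of that isolation property (purely illustrative; real VSAN tagging is performed by switch hardware on every frame) assigns each port a VSAN ID and only delivers frames between ports carrying the same tag:

# Toy VSAN model: ports on the same physical switches are logically isolated
# by their VSAN ID, so traffic and faults stay inside one virtual fabric.

class ConsolidatedFabric:
    def __init__(self):
        self.port_vsan: dict[str, int] = {}

    def attach(self, pwwn: str, vsan: int) -> None:
        self.port_vsan[pwwn] = vsan

    def can_deliver(self, src_pwwn: str, dst_pwwn: str) -> bool:
        """Frames are only forwarded between ports in the same VSAN."""
        return self.port_vsan[src_pwwn] == self.port_vsan[dst_pwwn]

if __name__ == "__main__":
    fabric = ConsolidatedFabric()
    fabric.attach("10:00:00:00:c9:73:9c:2d", vsan=30)   # a server HBA
    fabric.attach("50:00:40:21:03:fc:6d:28", vsan=10)   # a storage port
    print(fabric.can_deliver("10:00:00:00:c9:73:9c:2d",
                             "50:00:40:21:03:fc:6d:28"))  # False: different VSANs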

Figure 22-18 Fault Isolation with VSANs (physical SAN islands, Fabric #1 through Fabric #3, are virtualized onto a common SAN infrastructure)

Figure 22-20 Sample FC SAN Topologies (end-of-row, top-of-rack, and blade server variations of core-edge and edge-core-edge designs)

SAN designs that can be built using a single physical switch are commonly referred to as collapsed core designs (see Figure 22-21). This terminology refers to the fact that a design is conceptually a core/edge design, making use of core ports (non-oversubscribed) and edge ports (oversubscribed), but that it has been collapsed into a single physical switch. Traditionally, a collapsed core design on Cisco MDS 9000 family switches would utilize both non-oversubscribed (storage) and oversubscribed (host-optimized) line cards.

Figure 22-21 Sample Collapsed Core FC SAN Design

Inter-VSAN Routing (IVR)

Figure 11-6 Inter-VSAN Routing Example (an IVR-created zone lets Server B and Array A, which sit in VSAN A and VSAN B, reach each other through a virtual initiator and a virtual target)

Note IVR only uses port world wide name (pWWN) addresses to select devices from different VSANs and is completely compliant with the Fibre Channel standards.

To maintain the illusion to the devices that they are connected to the same fabric, the IVR-enabled switch proxies all communication received on a virtual device to its real counterpart from another VSAN. And as you will learn in the next sections, it can also adapt these frames before forwarding them.

IVR Infrastructure

Exploring the innards of IVR, I will use a configuration example. Our workbench is the topology shown in Figure 11-7, where the VSANs are already created on both MDS 9000 switches and all depicted interfaces are configured accordingly. In this topology, MDS-CORE will deploy IVR to transport Fibre Channel frames between Server30 (which is in VSAN 30) and Array10 (unsurprisingly, in VSAN 10). MDS-CORE can also be referred to as a border switch, because it will be configured as an IVR-enabled switch that is a member of more than one VSAN.

Figure 11-7 IVR Topology (Server30 in VSAN 30 and Array10 in VSAN 10 connect through switches MDS-A and MDS-CORE, which are joined by a 2-Gbps trunk carrying VSANs 10, 20, and 30; pWWNs 10:00:00:00:c9:73:9c:2d and 50:00:40:21:03:fc:6d:28)

Notwithstanding, before any IVR zoning operation, a SAN administrator must first configure the IVR infrastructure of a physical fabric. There are four steps involved with this procedure:

1. Enable the IVR processes on all border switches.
2. Enable Cisco Fabric Services (CFS) configuration distribution for IVR (this step is optional if there is only one border switch in the fabric).
3. Enable the IVR Network Address Translation (NAT) to avoid routing problems because of domain ID overlapping between different VSANs.
4. Create the IVR VSAN topology. A VSAN topology defines the switches that provide a "meeting point" for the VSANs. This mapping permits the correct exchange of FSPF information about IVR-zoned nodes.

All four steps are detailed in Example 11-5, where MDS-CORE is configured as the only IVR-enabled switch in the fabric. Note that the example also exhibits the resulting VSAN topology.

Tip If you recall from Chapter 6, "Fooling Spanning Tree," CFS is also used in the configuration of virtual PortChannels (vPC). Actually, this protocol was designed to provide configuration synchronization for selected features on a SAN with multiple MDS 9000 switches. With CFS enabled on every IVR-enabled switch, any additional IVR configuration must only be executed on a single device.
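Extending the toy VSAN model shown earlier, the following conceptual sketch suggests what IVR adds; it is not how the feature is implemented or configured (that is what the four steps and Example 11-5 cover), and the pairing of the pWWNs from Figure 11-7 with Server30 and Array10 is assumed here for illustration:

# Conceptual IVR sketch: an IVR zone names pWWNs that live in different VSANs,
# and the border switch proxies traffic between them as if they shared a fabric.

class BorderSwitch:
    def __init__(self):
        self.ivr_zones: list[set[tuple[str, int]]] = []   # each member is (pWWN, VSAN)

    def add_ivr_zone(self, members: set[tuple[str, int]]) -> None:
        self.ivr_zones.append(members)

    def can_deliver(self, src: tuple[str, int], dst: tuple[str, int]) -> bool:
        """Allow traffic across VSANs only for members of a common IVR zone."""
        if src[1] == dst[1]:
            return True                      # same VSAN: normal zoning applies
        return any(src in zone and dst in zone for zone in self.ivr_zones)

if __name__ == "__main__":
    mds_core = BorderSwitch()
    server30 = ("10:00:00:00:c9:73:9c:2d", 30)   # Server30 in VSAN 30 (pairing assumed)
    array10 = ("50:00:40:21:03:fc:6d:28", 10)    # Array10 in VSAN 10 (pairing assumed)
    mds_core.add_ivr_zone({server30, array10})
    print(mds_core.can_deliver(server30, array10))   # True: proxied by IVR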