A Sample Configuration with Design Guidelines for Link Aggregation Between Avaya™ P580/P882 Gigabit Ethernet Switch Hunt Groups and Cisco EtherChannel - Issue 1.0

Abstract

These Application Notes describe a sample Hunt Group/EtherChannel Link Aggregation Group (LAG) configuration between an Avaya™ P882 Gigabit Ethernet switch and a Cisco Catalyst 6509 switch. Design guidelines for deploying LAGs in a mixed Avaya/Cisco infrastructure are included as an aid for network designers, along with a sample configuration diagram and provisioning notes. These Application Notes were created in response to field requests for information on interoperability between Avaya P580/P882 hunt group trunks and Cisco EtherChannel.


1. Introduction

The Avaya™ P580/P882 Gigabit Ethernet Switch Hunt Group feature aggregates multiple switch ports together, combining their bandwidth into a single logical connection. This feature is normally deployed between switches to provide added bandwidth and fault tolerance. If one segment in a hunt group fails, the remaining active members service the traffic for that segment. The Hunt Group Load-Sharing feature (enabled by default) distributes the traffic load among the hunt group members for improved throughput. Hunt group member ports can be configured using various trunk modes, including IEEE 802.1Q, Multi-layer, 3Com, and Clear. Hunt group ports may also be assigned a router IP interface for Layer 3 forwarding.

The Avaya™ Hunt Group feature is a manual (or static) implementation of link aggregation: it does not support dynamic LAG configuration or binding via a standard or proprietary protocol. Examples of such protocols include the Link Aggregation Control Protocol (LACP) for dynamic 802.3ad and Cisco's Port Aggregation Protocol (PAgP) for dynamic EtherChannel negotiation. Avaya™ Hunt Groups can nevertheless interoperate with third-party devices; forming the LAG statically on both sides, without dynamic protocol negotiation, is the normal approach for such interoperability.
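For reference, the CatOS side of such a static LAG must have PAgP negotiation disabled. A minimal sketch, using the port numbers from Figure 1 (the full procedure appears in Section 5.2), contrasts the static mode used throughout these Application Notes with a PAgP mode that would not form a channel against a P580/P882:

6509> (enable) set port channel 4/8-9 mode on (static channel, no PAgP - interoperates with Avaya Hunt Groups)
6509> (enable) set port channel 4/8-9 mode desirable (PAgP negotiation - shown for contrast only; a P580/P882 will not respond, so no channel forms)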

These Application Notes specifically address a sample configuration using a static 1000BaseT Hunt Group-to-EtherChannel trunk between an Avaya™ P882 Gigabit Ethernet switch and a Cisco Catalyst 6509 switch (Figure 1). Design guidelines for deploying hunt group/EtherChannel LAG trunks in a mixed Avaya/Cisco converged infrastructure environment have been included as an aid.

[Diagram: An Avaya™ P882 Gigabit Ethernet switch connected to a Cisco Catalyst 6509 by a two-port 1000BaseT LAG (P882 ports 4/4 and 5/4 to 6509 ports 4/8 and 4/9). Host 1 (P882 port 2/7) and Host 2 (P882 port 2/17) attach to the P882; Host 3 (6509 port 3/7) and Host 4 (6509 port 3/13) attach to the 6509. The hosts reside in VLANs 100 and 200, and the P882 provides the router IP interfaces for both VLANs.]

Figure 1: Sample Avaya/Cisco LAG Configuration


2. Equipment and Software Validated

The following equipment and software were used for the sample configuration provided:

Equipment                                    Software
Avaya™ P882 Gigabit Ethernet Switch          Gigabit Ethernet Switch Software Version 5.4
  1 – M8000R-SUP
  1 – M8024R-100TX
  2 – M8008R-1000T
Cisco Catalyst 6509 Switch                   CatOS Version 7.4(2)
  1 – WS-X6K-SUP1A-2GE
  1 – WS-X6548-RJ-45
  1 – WS-X6316-GE-TX
4 – PCs with 100BaseTX Adapters              Microsoft Windows 2000 Professional

2.1. Typical Mixed Deployments In mixed Avaya/Cisco infrastructure environments, network engineers may decide to implement Link Aggregation Groups (LAGs) between Avaya P580/P882 Gigabit Ethernet switches and Cisco Catalyst switches. The environment will typically be either an Avaya Core/Distribution Layer with Cisco at the Access Layer or vice versa (Figure 2). In either case, the simple guidelines in section 2.2 can be used as an aid for designing such deployments.

[Diagram: Two typical deployments. Left: an Avaya P580/P882 Gigabit Ethernet core/distribution layer connected by LAG trunks to Cisco Catalyst switches at the access layer. Right: a Cisco Catalyst core/distribution layer connected by LAG trunks to Avaya P580/P882 Gigabit Ethernet switches at the access layer. In both cases the trunks are Fast Ethernet or Gigabit Ethernet, and passive OSPF or RIP interfaces are used at the core/distribution layer.]

Figure 2: Typical LAG Deployments for Mixed Avaya/Cisco Infrastructure


2.2. Guidelines for EtherChannel/Hunt Group Interoperability

• All EtherChannel/Hunt Group member ports must be assigned to the same native/port VLAN, or they must all be configured as IEEE 802.1Q trunk ports

• All EtherChannel/Hunt Group member ports must operate at the same speed and duplex

• All EtherChannel/Hunt Group member ports must be enabled in order for each LAG segment to forward traffic

• The Cisco EtherChannel Frame Distribution should be set to IP Both [1]

• Static VLANs should be used for all EtherChannel/Hunt Group trunk ports [2]

• The Cisco EtherChannel must be set to channel mode on
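Taken together, these guidelines reduce to a short, symmetric configuration on the two switches. The following minimal sketch uses the port numbers from Figure 1; the full step-by-step procedures are in Sections 4 and 5:

P882(configure)# set port vlan 4/4 100 (port VLAN on the Base Port; other members inherit it)
P882(configure)# set port trunking-format 4/4 ieee-802.1Q (802.1Q trunking on the Base Port)
P882(configure)# set port huntgroup 4/4,5/4 1000 (static hunt group on both members)
6509> (enable) set vlan 100 4/8-9 (same native VLAN on all channel ports)
6509> (enable) set trunk 4/8 on dot1q (matching 802.1Q trunking)
6509> (enable) set trunk 4/9 on dot1q
6509> (enable) set port channel 4/8-9 1000 (same admin group for all channel ports)
6509> (enable) set port channel 4/8-9 mode on (channel mode on - static, no PAgP)
6509> (enable) set port channel distribution ip both (frame distribution set to IP Both)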

2.3. How to Optimize Hunt Group Performance and Survivability

The following are suggestions for improving hunt group performance and survivability. It may be difficult to follow all of these guidelines simultaneously in a cost-effective manner. In general, when conflicts arise, the suggestions higher on the list should be followed before those lower on the list.

• Client endpoint ports should be distributed across as many module ports as possible on the switch. The Hunt Group load-sharing distribution algorithm balances traffic more effectively when clients are spread widely across the switching fabric. The sample configuration demonstrates this by keeping half the hosts on ports 1-12 of an M8024R-100TX module and the other half on ports 13-24.

• Hunt group ports should be provisioned evenly across the extreme left and right sides of a module (if possible) in order to better distribute traffic across the switching fabric. This improves throughput performance and improves survivability in the event of a fabric port failure.

• The number of hunt group ports deployed on a switch should be kept to the minimum needed to satisfy the configuration. If the switch has many more client ports than hunt group ports, its ability to load-share is greater.

• Hunt group ports should reside on separate modules from client endpoints whenever possible. The sample configuration provided demonstrates this by keeping client ports on the M8024R-100TX module and hunt group ports on two M8008R-1000T modules.

[1] Recommended in Best Practices for Catalyst 4000, 5000, and 6000 Series Switches - Cisco Systems.
[2] Recommended in Catalyst 6000 Series Switches: Configuring EtherChannel - Cisco Systems.


• Hunt group ports should be distributed across two or more media modules to improve survivability in the event of a module failure. The sample configuration provided demonstrates this by spreading the hunt group ports across two M8008R-1000T modules.

For further technical details on the Hunt Group feature and its hardware dependencies, please see the Appendix at the end of this document.

3. Avaya P882 Hunt Group Web Agent Administration

The following steps describe how to configure the hunt group and router IP interface for the P882 depicted in Figure 1. Please consult the Avaya switch documentation for details on configuring the client ports if needed. These Application Notes describe two different ways to provision the Avaya™ P882 switch:

• Section 3 describes the setup using the Web Agent.
• Section 4 describes the setup using the Command Line Interface (CLI).

3.1. Create a VLAN for the Hunt Group

1. Select Cajun Router > L2 Switching > VLANs > Configuration from the Web Agent. The VLAN Configuration window opens.

2. Click the CREATE button. The Create VLAN window opens (Figure 3).

Figure 3: Create VLAN

3. Enter a unique VLAN name (e.g. vlan100) in the Name field.

4. Enter the VLAN ID 100 in the ID field.

5. Click the APPLY button.

3.2. Create the Hunt Group

1. Select Hunt Groups under the Cajun Router > L2 Switching folder. The Hunt Group configuration window opens.


2. Select CREATE. The Create Hunt Group window opens (Figure 4).

Figure 4: Create Hunt Group

3. Enter a unique hunt group name (e.g. 1000) in the Name field.

Note: The hunt group name can be a string consisting of numbers or letters. In this example, 1000 was used to match the Cisco EtherChannel admin group ID 1000 on the Catalyst for easier identification.

4. Click the APPLY button.

3.3. Assign the VLAN and Hunt Group to Member Ports

Attention: Manually disable all hunt group ports using the Web Agent or disconnect the physical cables used by the hunt group ports before executing the steps provided below.

1. Select Cajun Router > Modules & Ports > Configuration from the Web Agent. The Module Information window opens.

2. Select the switch ports for Module 4 under the Switch Ports column. The Switch Ports window opens.

3. Select port name Port 4.4 from the Name column. The Switch Port Configuration for Port 4.4 window opens (Figure 5).


Figure 5: Switch Port Configuration for Port 4.4

4. Select vlan100 from the Port VLAN drop-down menu.

5. Select IEEE 802.1Q from the Trunk Mode drop-down menu.

Note: If Cisco ISL tagging is desired, select Multilayer instead.

6. Select 1000 from the Hunt Group drop-down menu.

7. Click the APPLY button to assign the “Base/Root Port”.

8. Navigate to port 5/4, select 1000 from the Hunt Group drop-down menu, and then click APPLY. Port 5/4 will automatically be assigned vlan100 with IEEE 802.1Q trunk mode. Once all member ports have been added, reconnect the hunt group physical cables or enable the ports from the Web Agent to activate the hunt group trunk.

Note: It is only necessary to configure VLAN information for the first port in the hunt group, known as the “Base Port”. All remaining hunt group member ports assume the identity of the base port automatically.


3.4. Create and Assign an IP Interface to the Hunt Group VLAN

1. Select Interfaces under the Cajun Router > Routing > IP > Configuration folder. The Interfaces window opens.

2. Click the CREATE button. The Add IP Interface window opens (Figure 6).

Figure 6: Add IP Interface

3. Enter a unique interface name (e.g., vlan100) in the Name field.

4. If RIP or no routing protocol is to be used, leave the Administrative Status drop-down menu set to Up. However, if OSPF routing is to be used, this field must be set to Down; after the interface instance has been created, change it back to Up.

5. Select vlan100 from the VLAN drop-down menu.

6. Enter the IP address 100.100.100.1 in the Network Address field.

7. (OPTIONAL) Select Enable from the RIP or OSPF drop-down menus.


Note: If RIP or OSPF is to be used on a client-facing router IP interface that has no upstream routers, the administrator should modify the RIP or OSPF interface to be passive. This avoids flooding unnecessary route updates on the LAN.

4. Avaya P882 Hunt Group CLI Administration

4.1. Create a VLAN for the Hunt Group

P882> enable (enter privileged mode)
P882# configure (enter global configuration mode)
P882(configure)# set vlan 100 name vlan100 (create the VLAN)

4.2. Create the Hunt Group

P882(configure)# set huntgroup 1000 (create a hunt group called 1000)

4.3. Assign the VLAN and Hunt Group to Member Ports

Attention: Manually disable all hunt group member ports before executing these commands or physically disconnect the hunt group ports until these commands have been executed.

Note: If Cisco ISL tagging is desired use the keyword multi-layer instead of ieee-802.1Q in the command string shown below.

P882(configure)# set port vlan 4/4 100 (assign the port VLAN to the Base Port)
P882(configure)# set port trunking-format 4/4 ieee-802.1Q (enable 802.1Q trunking on the Base Port)
P882(configure)# set port huntgroup 4/4,5/4 1000 (assign the hunt group to ports 4/4 and 5/4, making 4/4 the Base/Root port)

4.4. Create and Assign an IP Interface to the Hunt Group VLAN

P882(configure)# interface vlan100 (create the interface)
P882(config-if:vlan100)# ip address 100.100.100.1 255.255.255.0 (assign the interface an IP address and mask)
P882(config-if:vlan100)# ip vlan name vlan100 (assign the interface to the VLAN)
P882(config-if:vlan100)# copy run start (save the configuration)


5. Cisco 6509 EtherChannel Administration

5.1. Create a VLAN and Assign it to the EtherChannel Ports

Note: In order for Cisco native VLAN egress traffic to be tagged, the set dot1q-all-tagged feature must be enabled. Otherwise, a separate “Clear” native VLAN will need to be used between the P882 and the Cisco Catalyst for STP and clear traffic only. If the P882 port is configured as an 802.1Q trunk, the native/port VLAN is tagged on egress. The exception is 802.1D STP BPDUs, which are sent untagged.

6509> (enable) set vlan 100 name vlan100 (create the VLAN)
6509> (enable) set dot1q-all-tagged enable (allow native VLAN tagging)
6509> (enable) set port membership 4/8 static (set port VLAN membership to static)
6509> (enable) set port membership 4/9 static (set port VLAN membership to static)
6509> (enable) set vlan 100 4/8-9 (set VLAN 100 as the native VLAN)
6509> (enable) set trunk 4/8 on dot1q (enable 802.1Q trunking on port 4/8)
6509> (enable) set trunk 4/9 on dot1q (enable 802.1Q trunking on port 4/9)
6509> (enable) clear trunk 4/8-9 1-99,101-1005,1025-4094 (optional - remove all VLANs allowed on the trunk that pose a potential security risk)

5.2. Create the EtherChannel and Assign its Ports

6509> (enable) set port channel 4/8-9 1000 (assign ports to admin group 1000)
6509> (enable) set port channel 4/8-9 mode on (static channel with no PAgP)
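Once the ports are enabled, the channel state can be checked from the CatOS prompt. The following commands are a suggested verification (output varies by CatOS version and is not reproduced here):

6509> (enable) show port channel (display channel status, mode, and admin group for the member ports)
6509> (enable) show trunk 4/8 (display 802.1Q trunking status for a member port)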

5.3. Modify the EtherChannel Distribution Algorithm for Performance

Note: For the configuration depicted in these Application Notes, the EtherChannel distribution algorithm hashes on “IP Both”, meaning source and destination IP address. If source and destination IP addresses are not available in a pure Layer 2 frame, Cisco's algorithm hashes on the MAC layer instead. Cisco offers several options for EtherChannel distribution. By default, Catalyst 6000 Series switches hash on destination MAC address, which in some configurations may lead to excessive traffic on one EtherChannel segment over another, as discussed in Cisco's document “Configuring EtherChannel Between a Catalyst Switch Running CatOS and a Workstation or Server.” Consult Cisco documentation for further details on the available distribution methods.

6509> (enable) set port channel distribution ip both (set EtherChannel frame distribution based on source and destination IP address)
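To make the effect of the distribution hash concrete, the following Python sketch models, in a deliberately simplified way (this is not Cisco's actual hardware hash), how reducing a source/destination IP pair to a member index spreads conversations across the channel:

def channel_member(src_ip, dst_ip, num_links):
    # Illustrative only: real Catalyst hardware hashes differently,
    # but the idea of reducing (src, dst) to a member index is the same.
    src = int(src_ip.split(".")[-1])  # low-order byte of source IP
    dst = int(dst_ip.split(".")[-1])  # low-order byte of destination IP
    return (src ^ dst) % num_links    # XOR, then reduce to a link index

# Hypothetical hosts sending to the same server over a 2-port channel:
print(channel_member("100.100.100.11", "100.100.100.4", 2))  # -> 1
print(channel_member("100.100.100.12", "100.100.100.4", 2))  # -> 0

Because the hash is deterministic per address pair, a given conversation always uses the same member link; load sharing comes from having many distinct address pairs.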


6. Verification Steps

1. Generate a constant PING request from all hosts and verify that PING traffic load is shared on each LAG member. Check the switch statistics to verify load sharing.
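For example (a suggested approach: the ping syntax is standard on the Windows 2000 hosts, and show channel traffic is assumed available in this CatOS release), a continuous PING can be run against the P882 router interface from Section 3.4 while the per-member traffic distribution is checked on the Catalyst:

C:\> ping -t 100.100.100.1 (continuous PING from a host; stop with Ctrl-C)
6509> (enable) show channel traffic (display per-port rx/tx traffic distribution for the channel)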

2. Verify fail-over by disconnecting one member from the LAG and verifying that traffic continues to flow between all hosts.

7. Conclusion

Connectivity between Avaya™ P882 Gigabit Ethernet switches using hunt groups and Cisco Catalyst 6509 switches using EtherChannel can be achieved by following the guidelines demonstrated in these Application Notes.

8. Additional References

The following reference documents can be obtained online from Avaya Support:

• Avaya P550R, P580, P880 and P882 MultiService Switch User Guide

9. Appendix - Hunt Group Technical Design Notes

9.1. Hunt Group Feature Terminology

Base Port/Flood Port – When a hunt group is configured, one port is designated the “Base Port”. All ports in the hunt group assume the identity of the base port. The base port passes all flood frames (broadcast, destination-unknown unicast, and multicast frames) for VLANs associated with the hunt group. Spanning Tree treats all ports in the hunt group as one port; the base port sends and receives Bridge Protocol Data Units (BPDUs).

Member port - A port that is a member of the hunt group. Sometimes referred to as a “Participating Port”.

Non-member port - A port that is not a member of a hunt group. Sometimes referred to as a “Non-Participating Port”.

Forwarding Engine (FE) - A generic name for major hardware components that make forwarding decisions for Layer 2 bridging and Layer 3 routing.

Participating Forwarding Engine – A Forwarding Engine that has one or more ports that are designated as hunt group member ports.

Non-Participating Forwarding Engine – A Forwarding Engine that has no ports designated as hunt group member ports (i.e. all ports associated with the FE are non-member ports).


9.2. Hunt Group Load Sharing Considerations

Avaya™ 80-Series Media Module hardware architecture must be considered when implementing hunt groups, especially when the Hunt Group Load Sharing feature is enabled. Each 80-series media module has at least two Forwarding Engines, and as many as eight. The Avaya™ P580/P882 switch has two backplane connections, known as Fabric Ports, dedicated to each media module slot. Each module can use one or both fabric ports in its slot, and the number of Forwarding Engines accessing each fabric port varies with the type of media module. Consider the two 80-series media modules depicted in Figure 7.

M8024R-100TX module: 2 Forwarding Engines, 2 Fabric Ports, 1 Forwarding Engine per fabric port. FE #1 is dedicated to physical ports 1-12; FE #2 is dedicated to physical ports 13-24.

M8008R-1000T module: 8 Forwarding Engines, 2 Fabric Ports, 4 Forwarding Engines per fabric port. Each physical port has a dedicated Forwarding Engine.

Figure 7: Forwarding Engine Considerations on 80-Series Modules
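As a quick illustration of why the client-port placement guideline in Section 2.3 matters, the following Python sketch (an assumed mapping, taken directly from Figure 7) shows how M8024R-100TX ports map to Forwarding Engines: hosts split between ports 1-12 and 13-24 ingress through different FEs and can therefore be balanced onto different hunt group members:

def m8024r_fe(port):
    # Ports 1-12 share FE #1; ports 13-24 share FE #2 (per Figure 7).
    return 1 if port <= 12 else 2

print(m8024r_fe(7))   # Host 1 on port 2/7  -> FE #1
print(m8024r_fe(17))  # Host 2 on port 2/17 -> FE #2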

In order to understand the Hunt Group Load-Sharing feature, consider the configuration depicted in Figure 8. All ports, including the hunt group ports, reside in the same VLAN. For a given Destination MAC Address, each Ingress Forwarding Engine (FE) is paired with a hunt group member. FEs are numbered starting with the Supervisor, which always uses FE #1 and FE #2, and so on down the chassis, bypassing empty slots.

[Diagram: Two P580 switches connected by a two-member hunt group on ports 2/1 (member #1) and 2/4 (member #2) of a 4-port GbE GBIC module in slot 2. In each chassis, slot 1 holds the supervisor (FE #1, FE #2), slot 2 the GBIC module (FE #3-#6), and slots 4 and 5 each hold a 24-port 100BaseTX module (FE #7-#8 and FE #9-#10); slots 3, 6, and 7 are empty. PC1-PC4 attach to ports 5/1, 5/2, 5/13, and 5/14 of switch #1 and send traffic toward destination MAC 00:00:00:00:00:01, which resides on switch #2.]

Note: The M8004R-GBIC module has 4 Forwarding Engines, 2 Fabric Ports, and 1 FE per physical port.

Figure 8: Forwarding Engine Hunt Group Member Port Pairing


Once destination MAC 00:00:00:00:00:01 is learned by switch #1 from switch #2, each Forwarding Engine is assigned a hunt group member in round-robin fashion based on the number of hunt group member ports available. In this example, there are two member ports.

For traffic destined for 00:00:00:00:00:01, FE #1 will use member #1, FE #2 will use member #2, FE #4 will use member #1, FE #5 will use member #2, FE #7 will use member #1, FE #8 will use member #2, FE #9 will use member #1, and FE #10 will use member #2. This assignment remains fixed until the Address Forwarding Table (AFT) entry times out or a hunt group member port fails. If a link failure occurs, redistribution (rehashing) is performed based on the remaining hunt group member ports and the available ingress Forwarding Engines.

Note: Forwarding Engines #3 and #6 for ports 2/1 and 2/4 are considered Participating Forwarding Engines. The load-sharing distribution for them is outside the scope of these Application Notes.
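The round-robin pairing above can be reproduced with a short sketch. The following Python fragment is a minimal model of the behavior described above, not switch code; the FE layout is taken from Figure 8, and the participating FEs are skipped per the note above:

# Chassis-wide FE numbers from Figure 8 (empty slots 3, 6, 7 are skipped).
fes = [1, 2,        # slot 1: supervisor
       3, 4, 5, 6,  # slot 2: 4-port GbE GBIC, one FE per port
       7, 8,        # slot 4: 24-port 100BaseTX
       9, 10]       # slot 5: 24-port 100BaseTX

participating = {3, 6}  # FEs owning hunt group ports 2/1 and 2/4
members = ["member #1", "member #2"]

assignment, i = {}, 0
for fe in fes:
    if fe in participating:
        continue  # distribution for participating FEs is out of scope
    assignment[fe] = members[i % len(members)]  # round-robin pairing
    i += 1

for fe in sorted(assignment):
    print(f"FE #{fe} -> {assignment[fe]}")
# Prints FE #1 -> member #1, FE #2 -> member #2, FE #4 -> member #1, ...

This also makes the failure behavior easy to see: removing an entry from the members list and re-running the loop models the rehash across the remaining ports.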

Example #1: Hosts 1 and 2 are connected to a common M8024R-100TX module. The supervisor module is automatically assigned Forwarding Engines #1 and #2 internally, which are not shown. The switch forwarding table has learned Host 4's MAC address from the trunk. All non-participating Forwarding Engines, including FE #3 and FE #4 on the M8024R-100TX module, have been assigned a hunt group member port using round-robin. All ingress traffic flows destined for Host 4 from hosts residing on FE #3 will traverse one member port, and all flows from hosts residing on FE #4 will traverse another member port (Figure 9).

[Diagram: Host 1 (MAC 00:00:00:00:00:01, ports 1-12, FE #3) and Host 2 (MAC 00:00:00:00:00:02, ports 13-24, FE #4) send traffic through the hunt group toward destination Host 4 (MAC 00:00:00:00:00:04); each FE's traffic traverses a different hunt group member port.]

Figure 9: Traffic Flows from Different Forwarding Engines


Example #2: As in Figure 9, all ingress traffic flows destined for Host 4 from hosts residing on FE #3 traverse one member port, and all flows from hosts residing on FE #4 traverse another. The difference in this example is that the Host 2 and Host 3 traffic flows both traverse the same member port because they hash identically: both send traffic to the same destination MAC address through Forwarding Engine #4 (Figure 10).

[Diagram: Host 1 (MAC 00:00:00:00:00:01, ports 1-12, FE #3) sends through one hunt group member port, while Host 2 (MAC 00:00:00:00:00:02) and Host 3 (MAC 00:00:00:00:00:03), both on ports 13-24 behind FE #4, share the other member port toward destination Host 4 (MAC 00:00:00:00:00:04).]

Figure 10: Common Ingress Forwarding Engine Traffic Flows


©2003 Avaya Inc. All Rights Reserved. Avaya and the Avaya Logo are trademarks of Avaya Inc. All trademarks identified by ® and ™ are registered trademarks or trademarks, respectively, of Avaya Inc. All other trademarks are the property of their respective owners. The information provided in these Application Notes is subject to change without notice. The configurations, technical data, and recommendations provided in these Application Notes are believed to be accurate and dependable, but are presented without express or implied warranty. Users are responsible for their application of any products specified in these Application Notes.

Please e-mail any questions or comments pertaining to these Application Notes along with the full title name and filename, located in the lower right corner, directly to the Avaya Solution & Interoperability Test Lab at [email protected]
