Nortel Multimedia Communication Server 5100 MCS Planning and Engineering

NN42020-200
Document status: Standard
Document version: 04.02
Document date: 8 October 2009

Copyright © 2007-2009, Nortel Networks. All Rights Reserved.

Sourced in Canada

The information in this document is subject to change without notice. The statements, configurations, technical data, and recommendations in this document are believed to be accurate and reliable, but are presented without express or implied warranty. Users must take full responsibility for their applications of any products specified in this document. The information in this document is proprietary to Nortel Networks.

Nortel, Nortel (Logo), and the Globemark are trademarks of Nortel Networks.

IBM, Lotus, Lotus Notes, BladeCenter, and BladeCenter T are trademarks of International Business Machines. Microsoft and Windows are trademarks of Microsoft. Oracle is a trademark of Oracle Corporation. Red Hat is a registered trademark of Red Hat, Inc. RIM, BlackBerry, and BlackBerry Enterprise Server are trademarks of Research in Motion, Inc.

All other trademarks are the property of their respective owners.

Contents

New in this release 11
   Feature changes 11
   New hardware 11
   New operating system 12
   New deployment options 12
   Updated system components 12
   New and updated clients 12
   Easier installation, upgrades and commissioning 13
   IP Phone 1535 Phase 2 13
   Mobile Communication 3100 13
   Mobile Personal Agent 13
   Override Route 13
   Other changes 14
How to get help 15
   Finding the latest updates on the Nortel web site 15
   Getting help from the Nortel web site 15
   Getting help over the phone from a Nortel Solutions Center 15
   Getting help from a specialist by using an Express Routing Code 16
   Getting help through a Nortel distributor or reseller 16
Regulatory and license information 17
   Red Hat Software 17
About this document 19
   Audience for this document 19
   Organization of this document 19
   Indication of hypertext links 21
   Indication of trademarks 21
   Related documents 21
   Acronyms 23
Introduction 29
   MCS 5100 overview 29
   MCS 5100 network topology 30
   MCS 5100 functional components 31
   Call processing engine 32

Nortel Multimedia Communication Server 5100 MCS Planning and Engineering NN42020-200 04.02 Standard Release 4.0 8 October 2009 Copyright © 2007-2009, Nortel Networks


   Signaling gateways 32
   Media servers 33
   Media gateways 33
   Media clients 34
   Operation, Administration, Maintenance, and Provisioning 36
   Provisioning tools 37
Deployment options and functional components 39
   Deployment Options 39
   Core components 39
   Other components 40
   Small system (1 server) 40
   Medium nonredundant system (4 servers) 41
   Medium redundant system (8 servers) 42
   MCS hardware platform 44
   Physical server features 44
   Software deployment on hardware components 45
   Functional components 46
   Session Manager 46
   Database Manager 54
   System Manager 56
   Accounting Manager 57
   Provisioning Manager 59
   IP Client Manager 63
   Wireless Client Manager 65
   Border Control Point 67
   Media Gateway 3200 71
   Media Application Server 73
   IP Phone 2007 73
   IP Phone 2004 (Phase 1 or Phase 2) 77
   IP Phone 2004 with SIP firmware 80
   IP Phone 2002 81
   IP Phone 1120E and IP Phone 1140E 82
   Multimedia PC Client 82
   Multimedia Web Client 85
Deployment considerations 87
   MCS logical hierarchy 87
   Level 5: Network Control Center (NCC) 89
   Level 4: Network Signaling Center (NSC) 89
   Level 3: Media Concentration Center (MCC) 89
   Level 2: Local Point-of-Presence (PoP) 90
   Level 1: MCS Client Site 90
   Location infrastructure 90
   MCS virtual network topology 92


   Location precedence rules 95
   MCS virtual network for Border Control Point 97
   Border Control Point selection process 99
   Border Control Point deployment considerations 106
   MCS logical hierarchy design 107
   Phase 1: MCS domain trees planning 107
   Phase 2: MCS location trees planning 113
   Phase 3: Routability Groups planning 116
   Phase 4: Border Control Point Groups planning 117
   Phase 5: MCS component capacity planning 118
   Phase 6: IP network capacity planning 122
   IP architecture components 124
   Virtual Local Area Network (VLAN) and 802.1p 124
   Differentiated Services (DiffServ) 126
   Queuing 129
   IP routing 131
   Virtual Redundancy Protocol (VRRP) 132
   Multi-Link Trunking (MLT) and Split Multi-Link Trunking (SMLT) 133
   IP addressing 135
   Packet filters and firewalls 136
   Network Address and Port Translation (NAPT) 137
   IP Virtual Private Network (VPN) 138
   IP reference networks 140
   Enterprise reference network 140
   MCS 5100 deployment scenario 141
   MCS logical hierarchy 142
   Network architecture 142
IP network recommendations 145
   MCS IP network topology 145
   Protected MCS network 146
   Enterprise Network 146
   Service quality requirements and IP network performance requirements 147
   Voice quality rating: MOS 148
   Voice quality rating: R-values 148
   QoS parameters 151
   IP network qualification 160
   QoS recommendations 163
   Layer 2: Ethernet 802.1Q/P 164
   Layer 3: IP Differentiated Services (DiffServ) 164
   Nortel Networks Service Classes (NNSC) 166
   NNSC queue mappings 167
   MCS 5100 QoS support 170
   QoS markings comparison 173


   Global IP Sound (GIPS) NetEQ integration 175
   IP Phone network considerations 177
   Single Ethernet drop with wall power supply 177
   Single Ethernet drop with power over Ethernet (POE) 178
   Second Ethernet drop for IP Phones with AC power 179
   Server-side network considerations 181
   MCS 5100 LAN 181
   IP network routing recommendations and considerations 182
   IP addressing 182
   IP routing 182
   IP over WAN 183
   Low-speed access link considerations 184
   IP address requirements 185
   IP network services requirements 186
   Dynamic Host Configuration Protocol (DHCP) 186
   Domain Name System (DNS) 186
   Network Time Protocol (NTP) 187
Interworking with other networks 191
   Interworking with circuit-switched networks 191
   MCS 5100 support of circuit-switched signaling 192
   Interworking with PBX 197
   Interworking with PSTN 199
   SIP Trunking 200
Traffic considerations 201
   Traffic flow overview 201
   Voice traffic 202
   Overview 202
   Enterprise deployment scenarios 203
   Traffic within an enterprise site 204
   Voice traffic in multi-site enterprise deployment 205
   Voice media bandwidth 210
   Video traffic 212
   Instant Messaging traffic 212
   Collaboration traffic 213
   Converged Desktop traffic 213
   Converged Desktop I service 214
   Features for Converged Desktop I 214
   Converged Desktop II service 219
   Converged Desktop subscriber configuration 220
   Features available in Converged Desktop II 221
   Interfaces supported by Converged Desktop services 221
   Session Manager cluster interworking 222
   Multi-site interworking 222


   IP network configuration 222
   Network configuration 222
   Converged Desktop II call flows 223
   Converged Desktop media traffic 228
Security 231
   Security strategies overview 231
   Protecting MCS servers and gateways 232
   MCS 5100 network topology 232
   MCS network public interfaces 234
   MCS network public interface protection 237
   Access to Protected MCS Network 238
   Access to MCS Public Accessible Network 241
   Communication between MCS servers and gateways 242
   Remote access security 263
   SIP security options 263
   Protecting end user traffic 264
   Clients 264
   Signaling 264
   Media 264
   Packet filter rules 264
   Internal threat security 265
   MCS operating system hardening measures 266
   Update to latest patch 267
   Linux operating system hardening 267
   Windows 2000 operating system hardening measures 269
Guidelines for reliability and survivability 271
   General guidelines for high availability design 271
   Component reliability 271
   Session Manager 271
   System Manager 274
   Accounting Manager 274
   Database Manager 275
   Provisioning Manager 275
   Border Control Point 276
   Media Application Server 278
   Media Gateway 3200 278
   IP Client Manager 278
   IP Phones 280
   Network Considerations 281
   SIP-PSTN gateway reliability strategy 281
Domain considerations 283
   Domain and subdomain 283
   MCS telephony routes 285


   Class of Services 286
   Enhanced 911 services 287
   Overview 287
   Emergency number provisioning 291
   E911 translation and routing 291
Interworking with third-party components 295
   Gateways 295
   Vegastream 295
   Mediatrix FXO and FXS Gateways 296
   Voice mail servers 298
   CAS interoperability 298
OAMP 301
   Management system components 301
   Fault management 304
   Integrating with an OSS 305
   Notifications 306
   NE enrol notifications 306
   NE de-enrol notifications 307
   NE OSI state change notifications 308
   Alarm notifications 308
   Polling management data 309
   Audit 310
   Data synchronization 310
   Active alarm status 311
   State Information 312
   Configuration management 312
   Provisioning management 313
   Accounting management 314
   Accounting records 314
   Accounting record transport 314
   Accounting file rotation 316
   Accounting file naming convention 316
   Accounting alarms 316
   IM related accounting 317
   Performance management 317
   OM data to be monitored for network growth and planning 317
   Security management 317
   Backup and restore strategy 318
Appendix A IP functional components 319
   Routing 319
   IP addressing 320
   Firewall types 322
   Network Address and Port Translation (NAPT) 323


   IP Virtual Private Network (VPN) 327
Appendix B MCS deployment checklist 335
Appendix C DSCP code values 341
Appendix D Customer specific information datasheets 343
Appendix E Presence impact on system capacity 351
   Overview 351
   Weighted SIP transaction cost 351
   Presence state update 352
   Building call pattern models 353
   Calculating cost of NOTIFY transaction 354
   Calculating cost of NOTIFY and REGISTER transactions 358
   Calculation cost of NOTIFY, REGISTER and authentication transactions 359
   Additional considerations 360
Appendix F Constructing a reference model for system capacity planning 363
   SIP transaction overview 363
   SIP transaction cost calculation 365
   Instant Messages 366
   Basic SIP-SIP calls 366
   Client registrations 366
   Presence-state-change generated REGISTER transactions 367
   Presence subscription transactions 368
   Presence notification transactions 368
   Address book transactions 369
   Service package subscription transactions 370
   Gateway call transactions 370
   Calculating the total transaction cost 370
   Additional considerations 372


New in this release

WARNING Do not contact Red Hat for technical support on your Nortel version of the Linux base operating system. If technical support is required for the Nortel version of the Linux base operating system, contact Nortel technical support through your regular channels.

This section describes what is new in this document for Multimedia Communication Server (MCS) 5100 Release 4.0.

Feature changes
The following are the main new features for MCS 5100 Release 4.0:
• "New hardware" (page 11)
• "New operating system" (page 12)
• "New deployment options" (page 12)
• "Updated system components" (page 12)
• "New and updated clients" (page 12)
• "Easier installation, upgrades and commissioning" (page 13)
• "IP Phone 1535 Phase 2" (page 13)
• "Mobile Communication 3100" (page 13)

For more information about all the MCS 5100 Release 4.0 features, see MCS 5100 New in this Release (NN42020-404).

New hardware
The following are the main new hardware components for MCS 5100 Release 4.0:
• IBM eServer x306m
• IBM BladeCenter T (BCT) for MAS and BCP
• MAS and BCP residency on BCT


New operating system
MCS 5100 Release 4.0 runs on Red Hat Enterprise Linux ES3. This platform provides the following:
• Only 1 installation CD is required for the operating system.
• Installation time is reduced to 15 minutes.
• The operating system is open source and reliable.

New deployment options
The following new deployment options are available:
• Small system (1 server)
• Medium nonredundant system (4 servers)
• Medium redundant system (8 servers)

Updated system components
The following are the main system components for MCS 5100 Release 4.0:
• Database Manager
• System Manager (Fault Performance Manager)
• Accounting Manager
• Session Manager
• Provisioning Manager
• IP Client Manager (optional)
• Wireless Client Manager (optional)
• Media Application Server (optional)
• Border Control Point (BCP), formerly RTP Media Portal
• Media Gateway 3200, formerly MCP Trunking Gateway (optional)
• Web Collaboration - Web Dialogs 7.1 (optional)

New and updated clients
The following are new or updated clients for MCS 5100 Release 4.0:
• Nortel Multimedia PC Client
• Nortel Multimedia Web Client
• Nortel Multimedia Wireless Client
• Nortel Multimedia Office Client
• Nortel Multimedia PC Client for IBM Lotus Notes
• Nortel next-generation IP Phone 1120E and IP Phone 1140E


Easier installation, upgrades and commissioning
The following are new features for MCS 5100 Release 4.0 installation, upgrades, and commissioning:
• one CD for the operating system
• one DVD for Oracle
• one DVD for MCS Core software
• simplified operating system and Oracle patching
• easier Maintenance Release installation

IP Phone 1535 Phase 2
This feature introduces enhancements to the Nortel IP Phone 1535. The enhancements include:
• improved video
• a universal power supply
• support in MCS 5100 Release 4.0

Mobile Communication 3100
This feature provides limited support for the Mobile Communication 3100 Series Portfolio (MC 3100). MCS 5100 Release 4.0 supports the Mobile Communication Client 3100 for Windows Mobile (MCC 3100).

The MCC 3100 for Windows Mobile is a Session Initiation Protocol (SIP) Soft Phone that provides consolidated voice and messaging services on supported dual-mode (Wireless Fidelity [WiFi] and cellular) Windows Mobile handheld devices.

Mobile Personal Agent
The Mobile Personal Agent provides a web interface through which users can access a limited set of Personal Agent functionality from the browser on a mobile device. Users can access the Mobile Personal Agent from mobile devices that support WML or xHTML MP. If a mobile browser supports both WML and xHTML MP, the Mobile Personal Agent uses xHTML MP.
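The markup-selection rule above can be sketched as follows. This is a minimal illustration only, not the product's server-side logic: the function name and the use of an HTTP Accept header with the standard WML and XHTML MP media types are assumptions for the sake of the example.

```python
def select_markup(accept_header: str) -> str:
    """Pick the markup for a Mobile Personal Agent page (hypothetical helper).

    Mirrors the documented preference: when a browser advertises both
    WML and xHTML MP, xHTML MP is used.
    """
    accept = accept_header.lower()
    # Standard media types: xHTML MP and plain XHTML, then WML.
    supports_xhtml_mp = ("application/vnd.wap.xhtml+xml" in accept
                         or "application/xhtml+xml" in accept)
    supports_wml = "text/vnd.wap.wml" in accept
    if supports_xhtml_mp:
        return "xhtml-mp"   # preferred when both are available
    if supports_wml:
        return "wml"
    raise ValueError("browser supports neither WML nor xHTML MP")
```

For example, a browser sending `text/vnd.wap.wml, application/vnd.wap.xhtml+xml` would be served xHTML MP.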

Override Route
The Override Route is a unique route that, when enabled, is applied to incoming calls before any other route, and redirects all incoming calls to a single number or Session Initiation Protocol (SIP) URL (the override number). You can configure the Override Route using the Personal Agent, the Mobile Personal Agent, or the Bulk Provisioning Tool.
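The precedence rule is the essential point: an enabled Override Route is consulted before any other route. The sketch below illustrates that ordering under stated assumptions; the `Route` class, prefix matching, and function names are hypothetical and do not reflect the product's internal route-evaluation API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Route:
    prefix: str       # hypothetical matching rule: caller-number prefix
    destination: str  # number or SIP URL the route sends the call to

def route_incoming_call(caller: str, routes: list,
                        override_number: Optional[str] = None) -> Optional[str]:
    # An enabled Override Route is applied before any other route:
    # every incoming call goes to the single override number.
    if override_number is not None:
        return override_number
    # Otherwise fall back to ordinary route screening (illustrative only).
    for route in routes:
        if caller.startswith(route.prefix):
            return route.destination
    return None  # no route matched
```

With an override number configured, the normal routes are never consulted; clearing it restores ordinary routing.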


Other changes
This document is renamed and renumbered from Network Deployment and Engineering Guide (NN10313-191) to MCS Planning and Engineering (NN42020-200).

Citrix Application Gateway support is removed for MCS 5100 Release 4.0.

Table 1 Revision History

October 2009   Standard 04.02. This document is up-issued to support MCS 5100 Release 4.0.
October 2008   Standard 04.01. Added support for Mobile Personal Agent and Override Route.
December 2007  Standard 01.05. This document is up-issued to support Multimedia Communication Server 5100 Release 4.0. This document addresses CR Q01537753.
March 2007     Standard 01.04. This document is up-issued to support Multimedia Communication Server 5100 Release 4.0. This document addresses CRs Q01543719 and Q01542900.
January 2007   Standard 01.01. This document is issued to support Multimedia Communication Server 5100 Release 4.0. This document contains information previously contained in the following legacy document, now retired: Network Deployment and Engineering Guide (NN10313-191).
July 2006      Standard 5.0. This document is up-issued to support MCS 5100 Release 3.5. It addresses CR Q01348948.
January 2006   Standard 4.0. This document is up-issued to support MCS 5100 Release 3.5.
November 2005  Standard 3.0. This document is up-issued to support MCS 5100 Release 3.5.
October 2005   Standard 2.0. This document is up-issued to support MCS 5100 Release 3.5.
October 2005   Standard 1.0. This document is up-issued to support MCS 5100 Release 3.5.


How to get help

This chapter explains how to get help for Nortel products and services.

Finding the latest updates on the Nortel web site
The content of this documentation was current at the time the product was released. To check for updates to the latest documentation for Multimedia Communication Server (MCS) 5100, go to www.nortel.com and navigate to the Technical Documentation page for MCS 5100.

Getting help from the Nortel web site
The best way to get technical support for Nortel products is from the Nortel Technical Support web site:

www.nortel.com/support

This site provides access to software, documentation, bulletins, and tools to address issues with Nortel products. From this site, you can:
• download software, documentation, and product bulletins
• search the Technical Support web site and the Nortel Knowledge Base for answers to technical issues
• arrange for automatic notification of new software and documentation for Nortel equipment
• open and manage technical support cases

Getting help over the phone from a Nortel Solutions Center
If you do not find the information you require on the Nortel Technical Support web site, and you have a Nortel support contract, you can also get help over the telephone from a Nortel Solutions Center.

In North America, call 1-800-4NORTEL (1-800-466-7835).

Outside North America, go to the following web site to obtain the telephone number for your region:

www.nortel.com/callus


Getting help from a specialist by using an Express Routing Code
To access some Nortel Technical Solutions Centers, you can use an Express Routing Code (ERC) to quickly route your call to a specialist in your Nortel product or service. To locate the ERC for your product or service, go to:

www.nortel.com/erc

Getting help through a Nortel distributor or reseller
If you purchased a service contract for your Nortel product from a distributor or authorized reseller, contact the technical support staff for that distributor or reseller.


Regulatory and license information

This chapter contains regulatory and license information.

Red Hat Software

WARNING Do not contact Red Hat for technical support on your Nortel version of the Linux base operating system. If technical support is required for the Nortel version of the Linux base operating system, contact Nortel technical support through your regular channels.

Passthrough End User License Agreement

This section governs the use of the Red Hat Software and any updates to the Red Hat Software, regardless of the delivery mechanism, and is governed by the laws of the state of New York in the U.S.A. The Red Hat Software is a collective work under U.S. Copyright Law. Subject to the following terms, Red Hat, Inc. ("Red Hat") grants to the user ("Customer") a license to this collective work pursuant to the GNU General Public License. Red Hat Enterprise Linux (the "Red Hat Software") is a modular operating system consisting of hundreds of software components. The end user license agreement for each component is located in the component’s source code. With the exception of certain image files identified below, the license terms for the components permit Customer to copy, modify, and redistribute the component, in both source code and binary code forms. This agreement does not limit Customer’s rights under, or grant Customer rights that supersede, the license terms of any particular component. The Red Hat Software and each of its components, including the source code, documentation, appearance, structure and organization are owned by Red Hat and others and are protected under copyright and other laws. Title to the Red Hat Software and any component, or to any copy, modification, or merged portion shall remain with the aforementioned, subject to the applicable license. The "Red Hat" trademark and the "Shadowman" logo are registered trademarks of Red Hat in the U.S. and other countries. This agreement does not permit Customer to distribute the Red Hat Software


using Red Hat’s trademarks. If Customer makes a commercial redistribution of the Red Hat Software, unless a separate agreement with Red Hat is executed or other permission granted, then Customer must modify any files identified as "REDHAT-LOGOS" and "anaconda-images" to remove all images containing the "Red Hat" trademark or the "Shadowman" logo. As required by U.S. law, Customer represents and warrants that it: (a) understands that the Software is subject to export controls under the U.S. Commerce Department’s Export Administration Regulations ("EAR"); (b) is not located in a prohibited destination country under the EAR or U.S. sanctions regulations (currently Cuba, Iran, Iraq, Libya, North Korea, Sudan and Syria); (c) will not export, reexport, or transfer the Software to any prohibited destination, entity, or individual without the necessary export license(s) or authorization(s) from the U.S. Government; (d) will not use or transfer the Red Hat Software for use in any sensitive nuclear, chemical or biological weapons, or missile technology end-uses unless authorized by the U.S. Government by regulation or specific license; (e) understands and agrees that if it is in the United States and exports or transfers the Software to eligible end users, it will, as required by EAR Section 740.17(e), submit semi-annual reports to the Commerce Department’s Bureau of Industry & Security (BIS), which include the name and address (including country) of each transferee; and (f) understands that countries other than the United States may restrict the import, use, or export of encryption products and that it shall be solely responsible for compliance with any such import, use, or export restrictions. Red Hat may distribute third party software programs with the Red Hat Software that are not part of the Red Hat Software. These third party programs are subject to their own license terms.
The license terms either accompany the programs or can be viewed at http://www.redhat.com/licenses/. If Customer does not agree to abide by the applicable license terms for such programs, then Customer may not install them. If Customer wishes to install the programs on more than one system or transfer the programs to another party, then Customer must contact the licensor of the programs. If any provision of this agreement is held to be unenforceable, that shall not affect the enforceability of the remaining provisions.

Copyright © 2003 Red Hat, Inc. All rights reserved.

"Red Hat" and the Red Hat "Shadowman" logo are registered trademarks of Red Hat, Inc. "Linux" is a registered trademark of Linus Torvalds. All other trademarks are the property of their respective owners.


About this document

This document provides recommendations and guidelines for Multimedia Communication Server (MCS) 5100 system design and deployment. The MCS 5100 system is an enterprise solution within the Nortel Multimedia Communication Portfolio (MCP).

The document describes the functional components and explains the major deployment scenarios and the IP network infrastructure required to support them. For complete information, this document should be used in conjunction with the full suite of MCS 5100 documentation.

This document is intended to be useful both for planning a large-scale deployment and for understanding the issues and requirements of deploying the MCS system in an IP network. The focus is on providing performance and capacity information for the MCS 5100 system so that the system can be properly planned and scaled to meet requirements.

Audience for this document
This document is written to assist Nortel Channel Partners, Nortel customers, engineers, and other technical professionals in deploying the MCS 5100 system over IP networks.

Organization of this document
This document is divided into the following sections and appendices. Each section serves to further the understanding of the MCS system and to offer practical recommendations and perspectives.
• "Introduction" (page 29): This section provides a high-level overview of the MCS 5100 system, its components, and network elements.
• "Deployment options and functional components" (page 39): This section describes different system configuration models. This section also provides a description of the functional components provided by Nortel. The description includes function overview, hardware platform, scalability, redundancy, and customer-visible capacity.
• "Deployment considerations" (page 87): This section describes the MCS logical hierarchy, location infrastructure, MCS virtual network for Border


Control Point, logical hierarchy design, IP architecture components, IP reference networks, and different deployment scenarios.
• "IP network recommendations" (page 145): This section describes service quality, IP network performance requirements, IP network qualification, MCS QoS support, and recommendations on the IP and lower-layer network to support the MCS system.
• "Interworking with other networks" (page 191): This section describes scenarios and issues regarding interworking with circuit-switched networks, such as the PSTN and PBXs.
• "Traffic considerations" (page 201): This section explains the media, signaling, and management flows between different components distributed at different geographical locations. This section covers the impact of different types of traffic loads on the MCS system, including voice, video, Instant Messaging, collaboration, and Converged Desktop services.
• "Security" (page 231): This section describes the MCS security strategies. The focus is on measures that protect the MCS servers and gateways as well as the end-user networks. This section also provides information about the operating system hardening measures.
• "Guidelines for reliability and survivability" (page 271): This section describes network design and engineering principles that must be observed to meet high availability requirements.
• "Domain considerations" (page 283): This section explains important deployment issues such as translations, domain assignment, emergency calls, multi-site deployment considerations, and PBX with Converged Desktop.
• "Interworking with third-party components" (page 295): This section describes requirements for interworking with third-party components.
• "OAMP" (page 301): This section describes the management components, operations, and provisioning architecture for various deployment scenarios. It also covers operations regarding the hierarchy of provisioning agents, accounting record uploading, and storage.
• Appendix "IP functional components" (page 319): This appendix provides background information related to IP routing, IP addressing, firewalls, NAT, NAPT, and VPNs.
• Appendix "MCS deployment checklist" (page 335): This appendix provides a checklist for network readiness before deploying the MCS system. The checklist contains tables with items that must be checked for each phase of the deployment.


• Appendix "DSCP code values" (page 341): This appendix provides the corresponding binary, hex, and decimal values of DSCP codes.
• Appendix "Customer specific information datasheets" (page 343): This appendix provides configuration datasheets for assembling customer-specific IP network configuration information before MCS 5100 network deployment.
• Appendix "Presence impact on system capacity" (page 351): This appendix describes the impact of presence state changes on the MCS system. It also provides call pattern models that can be used to estimate system capacity based on the mix of transaction types.
• Appendix "Constructing a reference model for system capacity planning" (page 363): This appendix provides information on the calculation of SIP transactions. The calculation is used to build a reference model for system capacity planning.

Indication of hypertext links
Hypertext links in this document are indicated in blue. When viewing a PDF version of this document, click a hyperlink to jump to the associated section or page.

Indication of trademarks
An asterisk after a word, such as Microsoft*, indicates that the word is a trademark.

Related documents
The following MCS 5100 documents are related to this document:
• MCS 5100 Overview (NN42020-143)
• MCS Installation and Commissioning (NN42020-308)
• MCS Upgrades—Maintenance Releases (NN42020-303)
• MCS Upgrades—Release 3.x to Release 4.0 (NN42020-405)
• High Availability Fundamentals (NN42020-152)
• Database Manager Fundamentals (NN42020-142)
• IP Client Manager Fundamentals (NN42020-106)
• Provisioning Manager Fundamentals (NN42020-111)
• Session Manager Fundamentals (NN42020-107)
• System Manager Fundamentals (NN42020-109)
• Accounting Manager Fundamentals (NN42020-144)
• Border Control Point Fundamentals (NN42020-108)


• Wireless Client Manager Fundamentals (NN42020-118)
• MCS Feature Description Guide (NN42020-125)
• Interworking Fundamentals (NN42020-127)
• Media Application Server Planning and Engineering (NN42020-201)
• Routine Maintenance (NN42020-502)
• SIP Phone Commissioning (NN42020-302)
• IP Phone 2002 User Guide (NN42020-126)
• IP Phone 2004 User Guide (NN42020-103)
• IP Phone 2007 User Guide (NN42020-104)
• Multimedia PC Client User Guide (NN42020-102)
• Multimedia Office Client User Guide (NN42020-139)
• Multimedia PC Client for IBM Lotus Notes User Guide (NN42020-148)
• SIP Firmware for IP Phone 1120E Implementation Guide (NN43112-300)
• SIP Firmware for IP Phone 1140E Implementation Guide (NN43114-300)
• IBM eServer x306m documentation
• Nortel Mobile Communication Client 3100 for Windows Mobile Administration (NN42030-601)
• Nortel Mobile Communication Gateway 3100 Administration (NN42030-600)
• Nortel Mobile Communication Gateway 3100 Release Notes (NN42030-403)
• Nortel Mobile Communication Client 3100 Release Notes (NN42030-400)
• Nortel Mobile Communication Gateway 3100 Installation (NN42030-300)
• Nortel Mobile Communication 3100 Planning and Engineering (NN42030-200)
• Nortel Mobile Communication Client 3100 for Windows Mobile User Guide (NN42030-100)
• Nortel IP Phone 1535 Installation and Commissioning (NN43160-103)
• Nortel IP Phone 1535 User Guide (NN43160-101)
• Nortel IP Phone 1535 New in this Release (NN43160-400)

For the list of all the documents in the MCS 5100 documentation suite, see MCS 5100 New in this Release (NN42020-404).

Acronyms
The following acronyms are used in this document.

Acronym     Expansion
ALG         Application Level Gateway
ALI         Automatic Location Information
ARP         Address Resolution Protocol
ATM         Asynchronous Transfer Mode
BBUA        Back-to-Back User Agent
BCM         Business Communication Manager
BGP         Border Gateway Protocol
BHCA        Busy Hour Call Attempt
BHHCA       Busy Hour Half Call Attempt
BHSTA       Busy Hour SIP Transaction Attempt
BIP         Breaker Interface Panel
BPS         Business Policy Switch
BPT         Bulk Provisioning Tool
BSN         Broadband Service Node
CAD/CAM     Computer Aided Design/Computer Aided Manufacturing
CAM         Central Accounting Manager
CBR         Constant Bit Rate
CCS         Centum Call Seconds
CDP         Coordinated Dialing Plan
CIR         Committed Information Rate
CLI         Command Line Interface
CODEC       COmpression/DECompression or COder/DECoder
COS         Class of Service
CPE         Customer Premises Equipment
CPL         Call Processing Language
CPU         Central Processing Unit
CS1000      Communication Server 1000
CSV         Comma Separated Value
DDoS        Distributed Denial of Service
DHCP        Dynamic Host Configuration Protocol
DID         Direct Inward Dialed
DiffServ    Differentiated Services

DMLT        Distributed Multi-Link Trunking
DNS         Domain Name Server
DoS         Denial of Service
DSCP        DiffServ Code Point
DSL         Digital Subscriber Line
DSP         Digital Signal Processor
DSPI        Dynamic Stateful Packet Inspection
DTMF        Dual-Tone Multi-Frequency
EF          Expedited Forwarding
ECMP        Equal Cost Multi-Path
E&M         Ear and Mouth
ERL         Emergency Response Location
FEFI        Far End Fault Indication
FIFO        First-In First-Out
FOIP        Fax over Internet Protocol
FXO         Foreign Exchange Office
FXS         Foreign Exchange Station
GB          Gigabyte
GPS         Global Positioning System
HTTP        Hypertext Transfer Protocol
HSC         Hot Swap Controller
IANA        Internet Assigned Numbers Authority
ICMP        Internet Control Message Protocol
IGP         Interior Gateway Protocol
IM          Instant Messaging
IP          Internet Protocol
IPCM        Internet Protocol Client Manager
IPDR        Internet Protocol Detail Record
ISDN        Integrated Services Digital Network
IST         Inter-Switch Trunk
JDBC        Java Database Connectivity
Kbps        Kilobits per second
KVM         Keyboard, Video, and Mouse
LDAP        Lightweight Directory Access Protocol

LAM         Local Accounting Manager
LAN         Local Area Network
LCD         Liquid Crystal Display
LOM         Lights Out Management
MAN         Metro Area Network
MAS         Media Application Server
Mbps        Megabits per second
MCP         Multimedia Communication Portfolio
MCS         Multimedia Communication Server
ME          Managed Element
MGCP        Media Gateway Control Protocol
MHz         Megahertz
MIB         Management Information Base
MLT         Multi-Link Trunking
MOS         Mean Opinion Score
MP-BGP      Multi-Protocol Border Gateway Protocol
MPCP        Media Portal Control Protocol
MPLS        Multi-Protocol Label Switching
MWI         Message Waiting Indicator
NAPT        Network Address and Port Translation
NAT         Network Address Translation
NES         Network Energy Source
NIC         Network Interface Card
NMS         Network Management System
NNSC        Nortel Networks Service Classes
NPI         Numbering Plan Indication
NSSA        Not So Stubby Area
NTP         Network Time Protocol
OAM&P       Operation, Administration, Maintenance and Provisioning
OEM         Oracle Enterprise Manager
OM          Operation Measurement
OPI         Open Provisioning Interface
OS          Operating System
OSI         Open Systems Interconnection

OSPF        Open Shortest Path First
OSS         Operation Support System
PAT         Port Address Translation
PBX         Private Branch eXchange
PCM         Pulse Code Modulation
PHB         Per Hop Behavior
PIR         Peak Information Rate
PoP         Point of Presence
PRI         Primary Rate Interface
PSAP        Public Safety Answering Point
PSTN        Public Switched Telephone Network
PTE         Physical Telecommunications Environment
QoS         Quality of Service
RIP         Routing Information Protocol
RSIP        ReStart In Progress
RTP         Real-time Transport Protocol
RTCP        Real-time Transport Control Protocol
SimRing     Simultaneous Ringing
SDP         Session Description Protocol
SIP         Session Initiation Protocol
SLA         Service Level Agreement
SMDI        Station Message Desk Interface
SMLT        Split Multi-Link Trunking
SNMP        Simple Network Management Protocol
SSH         Secure Shell
SSL         Secure Socket Layer
STP         Spanning Tree Protocol
TAF         Transparent Application Failover
TCP         Transmission Control Protocol
TDM         Time Division Multiplexing
TFTP        Trivial File Transfer Protocol
TLS         Transport Layer Security
TON         Type of Number
ToS         Type of Service

UA          User Agent
UDP         User Datagram Protocol
UDP         Universal Dialing Plan
UFTP        UNIStim File Transfer Protocol
UNIStim     Unified Networks IP Stimulus Protocol
UPS         Uninterruptible Power Supply
URI         Uniform Resource Identifier
URL         Uniform Resource Locator
USB         Universal Serial Bus
UTC         Universal Time Coordinated
UTP         Unshielded Twisted Pair
VAD         Voice Activity Detection
VBR         Variable Bit Rate
VLAN        Virtual Local Area Network
VLSM        Variable Length Subnet Masks
VoIP        Voice over Internet Protocol
VPN         Virtual Private Network
VR          Virtual Router
VRF         Virtual Routing and Forwarding
VRRP        Virtual Router Redundancy Protocol
WAN         Wide Area Network
WiCM        Wireless Client Manager
WML         Wireless Markup Language
WRED        Weighted Random Early Discard
XML         Extensible Markup Language
xHTML       Extensible HyperText Markup Language
xHTML MP    xHTML Mobile Profile

Introduction

WARNING
Do not contact Red Hat for technical support on your Nortel version of the Linux base operating system. If technical support is required for the Nortel version of the Linux base operating system, contact Nortel technical support through your regular channels.

This section contains the following:
• "MCS 5100 overview" (page 29)
• "MCS 5100 network topology" (page 30)
• "MCS 5100 functional components" (page 31)

MCS 5100 overview
The Public Switched Telephone Network (PSTN) and Internet Protocol (IP) networks are the most widely used networks in the world. The PSTN is good for real-time voice communication, but it is not built for next-generation multimedia communication. The IP network is built for transaction-based data applications, but it is not built for real-time applications such as voice and video. The solution lies in the next-generation network.

One of the goals of the next-generation network is to build a standards-based multiservice network over IP to support converged voice, video, and multimedia applications. The next-generation network provides a fully integrated communication experience that enables users, for example, to click a button in Microsoft Outlook to place a call in response to an e-mail, to send an instant message to colleagues while talking in a conference call, or to send a file for project collaboration.

Another important aspect of the future network is to offer users seamless, location-independent services. Users define their communication preferences, while the network maintains these communication profiles. Furthermore, services and communications continue without interruption no matter where users are.

The Nortel Multimedia Communication Portfolio (MCP) is a solution based on the Session Initiation Protocol (SIP). The solution offers a wide range of next-generation multimedia services in a variety of network configurations.

The MCP solution provides a powerful platform for hosting a full suite of SIP features, a diverse range of IP-based clients, gateways, and media servers for SIP interworking. It is positioned to:
• facilitate Voice over IP (VoIP) with the convergence of voice and data onto a single network
• bring multimedia and location-independent services to subscribers over an IP network
• integrate the communication experience with other familiar devices such as Universal Serial Bus (USB) devices and PC applications
• integrate IP network resources with real-time oriented conversation
• enable subscribers to converse and collaborate over an IP network
• provide enhanced terminal devices and applications to enrich and simplify subscriber experiences
• provide a set of services to meet multimedia communications needs

The MCP solution offers two types of systems: the MCS 5100 for the enterprise market and the MCS 5200 for the carrier market. This document describes the MCS 5100 system only.

The MCS system uses standards-based protocols and provides an open platform for interworking with the PSTN and PBXs. The use of standard protocols enables the MCS system to interoperate with telephones, proxy servers, voice mail servers, conference servers, and gateways provided by third-party vendors that participate in the Nortel Interoperability Program.

The MCS system also enables subscribers to modify their services using a Web-based Personal Agent. The MCS system maintains information about the actual locations and the presence of subscribers in the network. When an incoming communication request arrives, the MCS system forwards the request to the subscriber's current location without subscriber intervention.

MCS 5100 network topology
Figure 1 "MCS 5100 network topology" (page 31) shows a high-level logical network topology of the MCS 5100 system.

Figure 1 MCS 5100 network topology

In the network topology shown in Figure 1 "MCS 5100 network topology" (page 31), all MCS 5100 clients, gateways, and Media Application Servers are located in the enterprise network. The MCS core servers are located in the protected MCS network to prevent unauthorized access and to avoid system-level service disruption from other parts of the enterprise network. The MCS network is protected by enforcing network policies through the IP network infrastructure. For more details, see "Security" (page 231).

MCS 5100 functional components
The MCS 5100 system is a collection of functional components that are combined in different ways for different SIP network solutions. The functional components are grouped into the following categories:
• Call processing engine: Session Manager
• Signaling gateways: IP Client Manager
• Media servers: Media Application Server

• Media gateways: Media Gateway 3200, Border Control Point, SIP FXS Gateway, SIP FXO Gateway
• Media clients: IP Phone 2002, IP Phone 2004, IP Phone 2007, IP Phone 1120E, IP Phone 1140E, Multimedia PC Client, Multimedia Web Client, Multimedia Client Set, Multimedia Office Client, Multimedia PC Client for IBM Lotus Notes
• OAMP components: System Management Console, System Manager, Accounting Manager, Database Manager, Provisioning Manager, Media Gateway 3200 Graphical User Interface (GUI), Media Application Server Console
• Provisioning tools: Personal Agent, Provisioning Client, Bulk Provisioning Tool

Note: The term IP Phones is used in this document to mean the IP Phone 2002, IP Phone 2004, IP Phone 2007, IP Phone 1120E, and IP Phone 1140E telephones.

Call processing engine

The Session Manager processes SIP signaling messages, handles SIP sessions, and provides core services that facilitate communication between SIP clients. The Session Manager communicates with the endpoints through SIP. It identifies the SIP clients and their method of connecting through authentication and registration. It also supports presence and Call Processing Language (CPL) scripting for call screening, including the execution of the scripts. All services are executed on the Session Manager.

The Session Manager and other components rely on the Database Manager, which holds all the subscriber information, CPL scripts, service data, and component configuration data.
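As an illustration of the registration signaling described above, the sketch below builds the kind of SIP REGISTER request a client sends to a registrar such as the Session Manager. All header values (domain, addresses, branch and tag values) are hypothetical examples, not actual MCS configuration.

```python
# Illustrative sketch only: a minimal SIP REGISTER request, assembled as
# plain text per the general SIP message format. Values are hypothetical.
def build_register(user: str, domain: str, contact_host: str, cseq: int = 1) -> str:
    """Return a minimal SIP REGISTER request as a string."""
    return (
        f"REGISTER sip:{domain} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP {contact_host};branch=z9hG4bK776asdhds\r\n"
        f"From: <sip:{user}@{domain}>;tag=1928301774\r\n"
        f"To: <sip:{user}@{domain}>\r\n"
        f"Call-ID: a84b4c76e66710@{contact_host}\r\n"
        f"CSeq: {cseq} REGISTER\r\n"
        f"Contact: <sip:{user}@{contact_host}>\r\n"
        f"Expires: 3600\r\n"
        f"Content-Length: 0\r\n\r\n"
    )

# The Contact header is what lets the registrar map the user's address of
# record to the client's actual network location.
msg = build_register("alice", "example.com", "192.168.1.10")
print(msg.splitlines()[0])  # REGISTER sip:example.com SIP/2.0
```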

Signaling gateways

The IP Client Manager (IPCM) performs SIP to Unified Networks IP Stimulus (UNIStim) protocol conversion for the IP Phone 2002, IP Phone 2004, and IP Phone 2007. The IP Client Manager sends all outgoing SIP messages, such as INVITE and BYE, to the Session Manager. The Session Manager forwards the information to the appropriate destination. The IP Client Manager provides all services required by the IP Phones, including instructing the IP Phones to play a tone and initiating requests. The IP Client Manager also provides all the call features for the IP Phones

such as transfer, second call indication, call hold, and mute. The soft keys on the IP Phone are controlled by the IP Client Manager.

Media servers

The Media Application Server (MAS) provides Ad Hoc and Meet Me audio conferencing services for two or more parties that use the MCS SIP clients, such as the Multimedia PC Client, Multimedia Web Client, and IP Phones. The MAS provides functions that manage the creation, modification, addition, and removal of parties for conferencing calls. The conferencing service is enabled through software applications and requires no Digital Signal Processing (DSP) or other specialized server hardware.

The MAS platform also supports these services:
• The IM Chat service enables subscribers to exchange text messages online.
• The Music on Hold service enables the subscriber receiving a call to play music for a caller on hold.
• The Announcement service enables administrators to provide various notifications to callers, such as call status, network situation, or greetings.

Media gateways

The Media Gateway 3200 provides an interface between the SIP-based MCS system and systems that use the ISDN Primary Rate Interface (PRI) protocol or Channel Associated Signaling (CAS) protocol for E1/T1 spans. The Media Gateway 3200 converts packet-based voice streams to and from circuit-based voice streams. The Media Gateway 3200 integrates fully with the Session Manager. The gateway supports all SIP and PSTN call features, Ad Hoc and Meet Me audio conferencing services, IM Chat, Music on Hold, and Announcement services.

The Border Control Point acts as a media proxy through which the media flows between endpoints. The Border Control Point provides a meeting place in the network that enables enterprise Network Address Translation (NAT) and firewall traversal. It also provides a point where endpoints can proxy media. The Border Control Point hides and protects the IP addresses of the respective endpoints. The Border Control Point handles Real-time Transport Protocol (RTP), Real-time Transport Control Protocol (RTCP), and User Datagram Protocol (UDP) streams for collaboration such as file transfer. The Border Control Point is controlled

by the Session Manager. The Border Control Point performs media Network Address and Port Translation (NAPT) functions that enable it to relay packets between two endpoints located in different networks that use different or the same IP address spaces. The Border Control Point can perform NAPT on both the source and destination IP address and port for every media packet permitted to traverse it. If the IP network over which the MCS is deployed contains no NAT or NAPT devices, a Border Control Point is not required.

The SIP Foreign eXchange Office (FXO) Gateway provides an FXO interface from the SIP-based IP network to a TDM-based Central Office switch. The SIP FXO Gateway acts as a SIP signaling agent and media gateway between the SIP VoIP domain and a system using the FXO interface. Sitting at the edge between the packet- and circuit-based worlds, the SIP FXO Gateway converts packet-based voice streams to and from circuit-based voice streams.

The SIP Foreign eXchange Station (FXS) Gateway provides an FXS interface from the SIP-based IP network to legacy telephony end-user terminals such as phones and fax machines. The SIP FXS Gateway acts as a SIP signaling agent and media gateway between the SIP VoIP domain and end-user terminals using the FXS interfaces. Sitting at the edge between the packet- and circuit-based worlds, the SIP FXS Gateway converts packet-based voice streams to and from circuit-based voice streams.
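The media NAPT relay idea can be sketched in a few lines: the media proxy allocates its own (address, port) pair for each endpoint and rewrites packet addressing so endpoints only ever see the proxy. This is a simplified illustration of the concept, not the Border Control Point's actual implementation; addresses and ports are hypothetical.

```python
# Minimal sketch of a media NAPT relay (hypothetical addresses/ports).
class MediaNaptRelay:
    def __init__(self, proxy_ip: str):
        self.proxy_ip = proxy_ip
        self._next_port = 20000
        self._map = {}  # proxy port -> far-end (ip, port)

    def allocate(self, far_end: tuple) -> tuple:
        """Reserve a proxy (ip, port) that relays packets to far_end."""
        port = self._next_port
        self._next_port += 2  # RTP conventionally uses even ports; RTCP takes port+1
        self._map[port] = far_end
        return (self.proxy_ip, port)

    def rewrite(self, dst_port: int, payload: bytes) -> tuple:
        """Return (forward_to, payload) for a packet arriving on dst_port."""
        return (self._map[dst_port], payload)

relay = MediaNaptRelay("10.0.0.5")
a_side = relay.allocate(("192.168.1.20", 16384))  # endpoint A's RTP address
print(a_side)                             # ('10.0.0.5', 20000)
print(relay.rewrite(20000, b"rtp")[0])    # ('192.168.1.20', 16384)
```

Because each endpoint sends media to the proxy address it was given, neither endpoint learns the other's real IP address, which is the hiding property described above.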

Media clients

The IP Phone 2002, IP Phone 2004, IP Phone 2007, IP Phone 1120E, and IP Phone 1140E are Nortel IP-based telephones. These IP Phones provide advanced multimedia features over a standard 10/100-Mbps twisted pair Ethernet connection. The IP Phone 2002, IP Phone 2004, and IP Phone 2007 communicate with the IP Client Manager or Multimedia PC Client using the UNIStim protocol. The IP Client Manager or Multimedia PC Client communicates with the Session Manager using SIP. The IP Phone 2004 can also communicate directly with the Session Manager when it is upgraded with SIP firmware. The IP Phone 1120E and IP Phone 1140E are SIP telephones.

The IP Phone 1535 Phase 2 is a Nortel SIP telephone that provides advanced multimedia features over a standard 10/100-Mbps twisted pair Ethernet connection. Phase 2 introduces improved video, a universal power supply, and support for MCS 5100 Release 4.0.

The Multimedia PC Client is a Windows-based application installed on a personal computer. It is an intelligent SIP endpoint that provides advanced IP telephony features, many of which are unavailable in a traditional Public Switched Telephone Network (PSTN). With the Multimedia PC Client, subscribers can access the services provided by the MCS system over an IP network using a PC. The Multimedia PC Client supports Microsoft Windows 98 and Windows XP.

Using the existing Multimedia PC Client software as the base, the MCS system also offers Converged Desktop services. A Converged Desktop enables end users to use their PCs for the multimedia portion of their communication and continue to use their existing telephony system for voice. The switching systems are connected to the MCS through SIP or PRI gateways. The signaling and media of the calls are directed to the MCS system for processing.

The Multimedia Client Set uses the Multimedia PC Client to control IP Phones. In this mode of operation, the IP Phones do not require the IP Client Manager for services. The IP Phones provide the audio capabilities, while the Multimedia PC Client provides multimedia services such as video, collaboration, and instant messaging.

The Multimedia Web Client is a Web-based client that uses a Web browser on a Microsoft Windows PC platform. Because the Multimedia Web Client is Web-based, it is easy to add and deploy new services as they become available.

The Multimedia Office Client is a Microsoft Outlook plug-in that enables Microsoft Outlook to access the features of the Multimedia PC Client. This transforms the e-mail application into a complete communications center that includes advanced IP telephony features.

The Multimedia PC Client for IBM Lotus Notes is an IBM Lotus Notes plug-in that enables IBM Lotus Notes to access the features of the Multimedia PC Client. This transforms the e-mail application into a complete communications center that includes advanced IP telephony features.

Mobile Communication (MC) 3100
This feature provides limited support for the Mobile Communication 3100 Series Portfolio (MC 3100). MCS 5100 Release 4.0 supports the Mobile Communication Client 3100 for Windows Mobile (MCC 3100). The MCC 3100 for Windows Mobile is a Session Initiation Protocol (SIP) soft phone that provides consolidated voice and messaging services on supported dual-mode (Wireless Fidelity [WiFi] and cellular) Windows Mobile handheld devices. The MCC 3100 for Windows Mobile provides a wide range of telephony and messaging features, which extend enterprise Private Branch Exchange (PBX) functionality to mobile handheld devices.

Operation, Administration, Maintenance, and Provisioning

The MCS management system consists of the System Management Console, System Manager, and Database Manager. The System Management Console, a Java application that operates on a Windows-based PC, provides the interface to the key MCS management functions. The System Manager provides operational, administrative, and maintenance access to all MCS core components and services. The System Management Console is used to deploy software loads and to configure MCS components. The System Manager collects operation, administration,

and maintenance (OAM) information from MCS components and displays the information on the System Management Console.

The Database Manager uses an Oracle database and interfacing software to store subscriber, provisioning, and configuration data associated with the MCS system. Network-based call logs are also stored in the database. The primary function of the Database Manager is to provide data storage and retrieval capability to support subscriber services.

The Accounting Manager provides a framework that enables the transport of accounting information from the MCS system to a back-end billing system. It provides storage and enables flexible and extensible formatting of accounting files into the Internet Protocol Detail Record (IPDR) format.

The Provisioning Manager provides a secure and robust interface to the Database Manager. It provides a logical view of service data that hides complex internal relational database structures. Logical views simplify provisioning tasks for administrators.

The Provisioning Manager provides a Web interface through the Provisioning Client for administrators to provision subscribers, services and service packages, and translation information. The Provisioning Manager provides another Web interface through the Personal Agent that enables subscribers to update or provision their personal configuration. The Provisioning Manager performs data validation before committing data to the Database Manager.
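The validate-before-commit pattern mentioned above can be illustrated with a small sketch: candidate subscriber data is checked against simple rules, and only an empty error list allows a database write. The field names and rules here are hypothetical examples, not the Provisioning Manager's actual schema.

```python
# Illustrative sketch of "validate before commit" (hypothetical rules).
import re

VALID_PACKAGES = {"basic", "premium"}  # hypothetical service packages

def validate_subscriber(record: dict) -> list:
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    if not re.fullmatch(r"[a-z0-9._-]+", record.get("user", "")):
        errors.append("invalid user name")
    if not re.fullmatch(r"[a-z0-9.-]+\.[a-z]{2,}", record.get("domain", "")):
        errors.append("invalid domain")
    if record.get("service_package") not in VALID_PACKAGES:
        errors.append("unknown service package")
    return errors

good = {"user": "alice", "domain": "example.com", "service_package": "basic"}
print(validate_subscriber(good))  # []
print(validate_subscriber({"user": "Bad User!", "domain": "", "service_package": "gold"}))
```

Rejecting bad records at the provisioning interface keeps invalid data out of the shared database that the Session Manager and other components depend on.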

Provisioning tools

The Personal Agent is a web-based tool that subscribers use to configure personal preferences, store pictures, sign up for multimedia services, manage passwords, and start the Multimedia Web Client. The Web interface enables users to access the tool from any location with no software installation required. In addition, administrators can announce new features and run banner advertisements within the Personal Agent window.

Mobile Personal Agent
The Mobile Personal Agent provides access to a limited set of Personal Agent functionality using the web browser

on your mobile device. You can access the following functions using the Mobile Personal Agent:
• Enable/disable routes
• Edit the number for the override route
• Display the route description

No other Personal Agent functions are supported on the Mobile Personal Agent. For more information about the Personal Agent, see Personal Agent User Guide (NN42020-100).

The Provisioning Client is a web-based tool that administrators use to perform common administrative tasks. Administrators can configure security, access, and permissions for components and roles in the MCS user framework.

Bulk Provisioning Tool
In addition to the Provisioning Client, the Bulk Provisioning Tool (BPT) provides an alternate interface for administrators to provision MCS services. The BPT can be used to add a large number of subscribers by importing data created in other applications. The BPT can also be used to perform bulk modifications on existing subscribers, such as adding a new feature to all subscribers. The BPT contains an export function that enables provisioning data for a large number of subscribers to be exported. The BPT is built on the Open Provisioning Interface (OPI), and all OPI commands are accessible through it. The BPT communicates with the Provisioning Manager through OPI.

For more information about the BPT, see MCS 5100 Application Programming Interfaces Reference (NN42020-146).
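The bulk-import idea can be sketched as follows: subscriber rows exported from another application are read from CSV and turned into per-subscriber provisioning commands. The CSV columns and the command text are hypothetical illustrations; the real BPT drives the Provisioning Manager through OPI commands documented in the reference above.

```python
# Hedged sketch of CSV-driven bulk provisioning (hypothetical format).
import csv
import io

def rows_to_commands(csv_text: str) -> list:
    """Turn exported subscriber rows into one provisioning command each."""
    commands = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        commands.append(
            f"AddSubscriber user={row['user']} domain={row['domain']} "
            f"package={row['service_package']}"
        )
    return commands

sample = (
    "user,domain,service_package\n"
    "alice,example.com,basic\n"
    "bob,example.com,premium\n"
)
for cmd in rows_to_commands(sample):
    print(cmd)
```

The same loop inverted (writing one CSV row per subscriber) gives the export direction described above.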

Deployment options and functional components

The MCS system is a SIP-based service platform created to offer a wide range of next-generation multimedia services in a variety of network configurations. The MCS system provides a powerful platform that enables a full set of SIP features for a diverse range of IP-based clients, media servers, and gateways.

This section describes MCS 5100 deployment options. Each section provides a description of the component function, hardware platform, scalability, redundancy, and capacity.

This section contains the following:
• "Deployment Options" (page 39)
• "Functional components" (page 46)

Deployment Options
The MCS 5100 supports the following base configurations:
• Small system (1 server)
• Medium nonredundant system (4 servers)
• Medium redundant system (8 servers)

These configurations use IBM eServer x306m servers.

Core components
All configurations contain the following core functional components:
• Session Manager
• Accounting Manager
• System Manager
• Provisioning Manager
• Database Manager

All core components reside on IBM eServer x306m servers.

Other components
The following noncore MCS components, with the exception of the Wireless Client Manager (WiCM), do not reside on the IBM eServer x306m servers:
• MRV terminal server (required)
• PC for System Management Console (required): provided by customers
• Border Control Point (optional)
• Media Gateway 3200 (optional)
• Media Application Server for Ad Hoc audio conferencing service (optional)
• Media Application Server for Meet Me audio conferencing service (optional)
• Media Application Server for IM Chat service (optional)
• Media Application Server for Music on Hold service (optional)
• Media Application Server for Announcement service (optional)
• Nortel Business Policy Switch (BPS) 2000, an Ethernet switch (optional)
• Nortel Ethernet Switch 470-48T (optional)
• Enhanced Breaker Interface Panel (BIP): required for DC applications only (provided by customers)
• WiCM (optional): deployed on a single IBM eServer x306m server or a pair of servers

Small system (1 server)
Base configuration
The small system is a nonredundant configuration that consists of one core IBM eServer x306m server. Optional components include an IBM x336 for the Media Application Server Ad Hoc audio conferencing service, and an IBM eServer x306 for the WiCM. Additional components such as gateways and Border Control Points can be added to the single server system. However, no additional servers can be added to the base configuration. Any Media Application Server-based services in addition to Ad Hoc audio conferencing are installed on IBM x336 servers.

The single IBM eServer x306m server contains the following software modules:
• Database Manager
• System Manager
• Accounting Manager
• Oracle Database Monitor
• Session Manager

• IP Client Manager (IPCM)
• Provisioning Manager
• UFTP server for updating IP Phone firmware loads

Maximum configuration
The small system supports the following maximums:
• Database Managers: 1
• Management and Accounting Managers: 1
• Session Managers: 1
• IP Client Managers: 1
• Media Gateway 3200s: 5
• Media Application Servers: 6
• Border Control Points: 1
• Provisioned subscribers: 400
• Domains: 25
• SIP BHCA for SIP-to-SIP basic calls: 2000
• SIP PRI DS0 trunks: 100

Features such as presence, address books, and lists of friends, and call types such as conferencing and call forking (multiple terminations), affect the system performance figures stated above.

Contact your Nortel channel partner for information about special engineering requirements that exceed the maximum capacities described above.
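A first-pass sanity check against the small-system limits can be done with simple arithmetic: the planned subscriber count must stay within 400, and the planned busy-hour traffic (subscribers times calls per subscriber in the busy hour) must stay within the 2000 SIP-to-SIP BHCA figure. The sketch below encodes that check; the planned-deployment inputs are hypothetical, and real engineering must also account for the feature impacts noted above.

```python
# Back-of-the-envelope capacity check against the small-system limits
# from this document (400 subscribers, 2000 SIP-to-SIP BHCA).
MAX_SUBSCRIBERS = 400
MAX_BHCA = 2000

def fits_small_system(subscribers: int, calls_per_sub_busy_hour: float) -> bool:
    """True if the planned load stays within the small-system limits."""
    planned_bhca = subscribers * calls_per_sub_busy_hour
    return subscribers <= MAX_SUBSCRIBERS and planned_bhca <= MAX_BHCA

print(fits_small_system(350, 5))  # 1750 BHCA -> True
print(fits_small_system(350, 6))  # 2100 BHCA -> False
```

If either check fails, the deployment needs the medium system or the special-engineering path described above.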

Medium nonredundant system (4 servers)
Base configuration
The medium nonredundant system consists of four IBM eServer x306m servers for core components. Optional components include the IBM x336 or IBM BladeCenter T for the Media Application Server and an IBM eServer x306m for the WiCM. Additional components such as gateways and Border Control Points can be added to the 4-server system.

The following diagram shows the allocation of core functional components on four IBM eServer x306m servers.

Figure 2 Core component allocation for 4-server configuration

Maximum configuration
Additional servers can be added to the base configuration to support the following totals:
• Active Database Managers: 1
• Primary Management and Accounting Managers: 1
• Active Session Managers: 6
• IP Client Managers: 26 (13 load-sharing pairs)
• Media Gateway 3200s: 18
• Media Application Servers: 24
• Border Control Points: 4
• Additional Personal Agent Managers: 5
• Provisioned subscribers: 3333 per active Session Manager
• Domains: 1000

Features such as presence, address books, and lists of friends, and call types such as conferencing and call forking (multiple terminations), affect the system performance figures stated above.

Contact your Nortel channel partner for information about special engineering requirements that exceed the maximum capacities described above.

Medium redundant system (8 servers)

Base configuration
The medium redundant system consists of eight IBM eServer x306m servers for core components.


Figure 3 Core component allocation for 8-server configuration

Optional components include the IBM x336 or IBM BladeCenter T for the Media Application Server and an IBM eServer x306m for the WiCM. Additional components such as gateways and Border Control Points can also be added.

Maximum configuration
Additional servers can be added to the base configuration to support the following totals:
• Databases: 2 (1 primary, 1 secondary)
• System Manager (FPM) and Accounting Manager: 2 (1 active, 1 standby)
• Additional Fault Performance Manager and Accounting Managers: 4
• Session Managers: 12 (6 active, 6 standby)
• Provisioning Managers (PA): 2 (1 active, 1 standby)
• Additional Personal Agent Managers: 5
• IP Client Managers: 26 (13 load-sharing pairs)
• Media Gateway 3200s: 40 (2-span devices)
• Media Application Servers: 24
• Border Control Points: 4
• Provisioned subscribers: 3333 per active Session Manager
• Domains: 1000

Features such as presence, address book, and friends lists, and call types such as conferencing and call forking (multiple terminations), affect the system performance figures stated above.


Contact your Nortel channel partner for information about special engineering requirements that exceed the maximum capacities described above.

MCS hardware platform
The MCS 5100 system uses the IBM eServer x306m as the hardware platform for core MCS components.

IBM eServer x306m
The IBM eServer x306m is equipped with the following:
• Intel Pentium 4 3.6 GHz processor
• 4 GB PC4200 DDR II memory
• 2 x 80 GB SATA HDD
• DVD drive
• 2 x 100 Base-T Ethernet ports
• RoHS compliance
• AC power supply

Components shipped loose
MCS 5100 components are shipped to customers as loose parts. There are no frames with the servers and gateways premounted; the servers, gateways, software, and documentation are shipped as individual parts.

Other hardware platforms
In addition to the IBM servers, the MCS system uses other physical platforms: the IBM BladeCenter T for the Border Control Point, the IBM x336 or BladeCenter T for the Media Application Server, and the AudioCodes Mediant 2000 (TP-1610 CompactPCI board) in a 19-inch 1 U chassis for the Media Gateway 3200.

Information about the hardware platform used by the Media Application Server (MAS) is found in Media Application Server Planning and Engineering (NN42020-201).

Physical server features
The MCS product is rack-mountable. Each piece of equipment has a fixed position in the frame to ease upgrades. The MRV terminal server (20 serial ports with modem) supports remote management functions and provides dial-up access to network elements when the network is down. For IBM eServer x306m servers, a Terminal Server IP Address and Terminal Server Port can be specified to identify where the Lights Out Management (LOM) connection is located.


Note: The MCS system uses the MRV LX-4008 and LX-4016 of the MRV LX-4000 series as its terminal servers. The MRV IN-REACH (IR) 8020 of the MRV IR 8000 series, used in previous releases, is still supported in the current release.

The LOM is a software add-on to the server that enables remote power management. The power management functions, which include reboot, power-on, and power-off, are issued from the System Management Console. These commands require an LOM port on the server. If an LOM connection is not provided, the System Management Console cannot perform control operations on the server. With LOM functionality, all commands can be performed out-of-band, regardless of the running status of the operating system.

Nortel offers the following dense, QoS-enabled Ethernet switches:
• BPS 2000: a high-density stackable intelligent 10/100 Ethernet Layer 2 switch that provides Layer 2, 3, and 4 packet classification, prioritization, Quality of Service (QoS) policy, Virtual Local Area Network (VLAN) support, and Gigabit Ethernet uplinks to the network infrastructure. Other Layer 2 and 3 10/100 Mbps Ethernet switches that support packet classification and QoS policy can also be used. For more information about the BPS 2000, go to the Nortel product Web site at: http://www.nortel.com/products/02/bstk/switches/bps/index.html
• Ethernet 470: provides QoS capabilities similar to the BPS 2000. The Ethernet 470 is a 48-port 10/100BaseT Ethernet switch. For more information about the Ethernet 470, go to the Nortel product Web site at: http://www.nortel.com/products/02/bstk/switches/baystack_470/index.html
• Passport 8600: a QoS-based Layer 2 and 3 Ethernet routing switch that offers high performance, high availability, and scalability. The Passport 8600 supports up to 48-port 10/100 BT Ethernet connectivity and also offers a wide variety of interfaces, including 10 Gigabit Ethernet, Gigabit Ethernet, Packet over SONET (PoS), and Asynchronous Transfer Mode (ATM). For more information about the Passport 8600, go to the Nortel product Web site at: http://www.nortel.com/products/01/passport/8600com/index.html

Software deployment on hardware components
The following table shows how the different software modules are deployed on the physical servers.


Table 2 Software module to server mapping for 4-server and 8-server configuration models

Software module | Servers for Database Manager | Servers for System Manager/Accounting Manager | Servers for Session Manager | Servers for IP Client Manager
Database Manager | X | | |
System Manager | | X | |
Accounting Manager | | X | |
Session Manager | | | X |
Provisioning Manager | | | X |
IP Client Manager | | | | X

Functional components
This section provides information on MCS 5100 components. The information includes a function overview, platform, redundancy, scalability, and capacity.

Session Manager

Function overview
The Session Manager provides the following functions:
• JAVA-based application platform that handles SIP sessions and core services that facilitate communication between SIP clients
• Registrar Server, Redirect Server, Proxy Server, Back-to-Back User Agent (BBUA) functions, and DNS lookup support

Note: The proxy, redirect, location, BBUA, and registrar servers are logical entities. This implementation combines them into a single application.

• identification of the user agent (UA) method of access and support for UA registration with authentication (which can also apply to all requests)
• SIP session setup signaling; maintains the states of the entire call and uses subscriber, group, and domain profile data stored in the database to execute the MCS services for active sessions
• dialed digits translation


• SIP user location through database lookup
• Call Processing Language (CPL) user script execution environment
• control of MCS system resources such as the Border Control Point
• communication with third-party gateways and PSTN interconnections using SIP
• direct registration of SIP-enabled IP Phones

The Session Manager provides the Presence management engine for presence authentication, subscription, and notification service.

As the Session Manager performs services for subscribers, it reports faults, alarms, logs, and performance information to the System Manager.

The Session Manager contains the Local Accounting Manager (LAM), which collects raw accounting data for the Central Accounting Manager (CAM). The Session Manager is engineered, based on customer requirements, to store up to 24 contiguous hours' worth of raw accounting data in the event of a communication loss with the Central Accounting Manager. Customer requirements can include calls processed for each hour, busy hours in the day, types of calls (such as conference and gateway calls), and number of calls.

Platform
The Session Manager is deployed on IBM eServer x306m servers. For specifications of the servers, see "MCS hardware platform" (page 44).

Redundancy
The nonredundant single server and 4-server configurations do not support redundancy for the Session Manager.

In the 8-server redundant configuration, the MCS supports an autofailover mechanism.

Additional information on this subject is available in the "Guidelines for reliability and survivability" (page 271) section.

Scalability
Both the base 4-server system and the base 8-server system can be scaled to support a maximum of six active Session Managers.

Capacity
A Session Manager supports the following capacities on an IBM eServer x306m server:
— 3300 subscribers
— 590 000 weighted SIP transactions per hour

The Session Manager is the service execution engine of the MCS system. The module relies on interfaces into other modules such as the Database Manager and, if configured in a given network, the IP Client Manager.

The MCS system provides services and functionality that legacy telephony does not, so a new paradigm is adopted to calculate system capacity. The most common metric for legacy telephony traffic is BHCA (Busy Hour Call Attempts) or BHHCA (Busy Hour Half-Call Attempts). For SIP-based systems such as MCS, BHSTA (Busy Hour SIP Transaction Attempts) is the most relevant metric. A SIP transaction is defined as a Request followed by a Response; for example, a REGISTER request followed by a 200 OK response is one SIP transaction. Different MCS call and service types comprise one to many SIP transactions. The amount of SIP transaction traffic that can be sustained depends heavily on the SIP transaction type, which also dictates how much processing the Session Manager must perform to complete the transaction. For this reason, capacity and transaction costs are stated using a Weighted SIP Transaction metric.

The limiting factors for a single Session Manager include:
• SIP Weighted Transactions for each hour
• Number of registered User Agents (SIP terminals), either direct or through the IP Client Manager
• Regular reregistration period of SIP User Agents
• Periodic pinging by clients to keep firewall (FW) and NAT pinholes open if configured behind a firewall or NAT device
• Number of presence friends in a watch list

Note: Even though the maximum number of friends can be as high as 10 000, Nortel recommends that the list be kept to 20 to conserve Central Processing Unit (CPU) resources for keeping members on the list frequently updated. The maximum number of friends should be configured to 50 when there is a large population of subscribers.

• Number of presence watchers for a given subscriber
• Number of presence subscriptions by a given subscriber

Note: The MCS system can enable a subscriber to make up to 1000 presence subscriptions through provisioning in the subscriber's service package. However, more than 100 presence subscriptions can cause serious system performance issues. Nortel recommends that a default value of 50 be configured in the subscriber's service package. The formula for calculating the maximum number of presence subscriptions is as follows:

Maximum number of presence subscriptions generated by a subscriber = (maximum number of clients a subscriber has registered under their user name) x (maximum number of friends based on the service package limitation)
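The formula can be expressed as a short calculation. This is a sketch only: the function name and sample values are illustrative, not part of the product.

```python
def max_presence_subscriptions(registered_clients, max_friends):
    """Upper bound on presence subscriptions generated by one subscriber:
    the number of clients registered under the user name multiplied by
    the friend limit in the service package."""
    return registered_clients * max_friends

# A subscriber with 3 registered clients and the recommended
# service-package limit of 50 friends:
print(max_presence_subscriptions(3, 50))  # 150
```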

• Number of presence state changes for each hour (on-the-phone, manual, active, or inactive)
• Session average hold time
• DNS lookup support, which can have a 3-5% impact on Session Manager resources

Table 3 "Weighted SIP transaction cost for an IBM eServer x306m server" (page 49) shows the weighted SIP transaction cost for an IBM eServer x306m server.

Table 3 Weighted SIP transaction cost for an IBM eServer x306m server

SIP transaction type | Description | Weighted SIP transaction cost | Additional cost for authentication
Instant Message | Basic IM from SIP client to SIP client | 1.00 | 0.14
Basic SIP-SIP calls | Basic call from SIP client to SIP client (non-E.164) | 2.30 | 0.63
Client registrations | Uses REGISTER messages. This type of transaction accounts for a client registration with the Session Manager. | 0.54 | 0.31
Presence state change | Uses REGISTER messages. This type of transaction does not include corresponding NOTIFY messages and requires less processing than the client registration REGISTER message. | 0.58 | 0.28
SIP Ping | A PING message in SIP (not to be confused with ICMP PING) | 0.06 | NA


Table 3 (continued)

SIP transaction type | Description | Weighted SIP transaction cost | Additional cost for authentication
Call forking | One subscriber has multiple clients registered; all registered clients are alerted. Forking should not be confused with Personal Agent-based sequential ringing or simultaneous ringing, where translation logic must be traversed for each address. Note: The cost of this transaction is the incremental cost per additional registered client that is alerted. To calculate the overall cost, add the basic call weight (SIP-SIP or E.164 call) to the weight given in this row multiplied by the number of additional registered clients; for example, SIP-SIP call weight + 0.80 x (number of terminating clients - 1). | 0.80 | 0.63
Presence subscription | Uses SUBSCRIBE messages. This type of transaction does not include the corresponding NOTIFY messages that are sent from the Session Manager to the client. | 0.57 | 0.31
Service notification | Uses NOTIFY messages. This type of transaction is usually sent by the Session Manager. | 0.31 | NA
Address book subscription | Uses SUBSCRIBE messages. This type of transaction subscribes to the Address Book service. After subscribing to the Address Book service, clients download the address book from the Provisioning Manager. | 0.54 | 0.31
Service package subscription | Uses SUBSCRIBE messages. This type of transaction accounts for service package subscriptions. After subscribing to their service package, clients download the service package from the Provisioning Manager. | 0.51 | 0.29
Gateway calls (E.164 translations) | Calls that require the Session Manager to find a route using translations for the requested destination address. | 2.53 | 0.67

NA = not applicable
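As an illustration of how the table weights combine, the sketch below applies the call forking rule and checks a busy-hour traffic mix against the 590 000 weighted-transactions-per-hour rating of the x306m. The traffic mix and all names are assumptions for illustration only; the weights are taken from Table 3.

```python
# Weighted SIP transaction costs from Table 3 (without authentication).
WEIGHTS = {
    "im": 1.00,            # Instant Message, SIP client to SIP client
    "sip_sip_call": 2.30,  # basic SIP-to-SIP call
    "registration": 0.54,  # client REGISTER
    "gateway_call": 2.53,  # E.164 translated gateway call
}
FORKING_INCREMENT = 0.80   # per additional registered client alerted

def forked_call_cost(basic_weight, terminating_clients):
    """Forked call cost: basic call weight plus the per-client increment
    for each additional registered client that is alerted."""
    return basic_weight + FORKING_INCREMENT * (terminating_clients - 1)

# A SIP-to-SIP call forked to 3 registered clients:
print(round(forked_call_cost(WEIGHTS["sip_sip_call"], 3), 2))  # 3.9

# Hypothetical busy-hour mix checked against the hourly rating:
hourly_load = (100_000 * WEIGHTS["im"]
               + 150_000 * WEIGHTS["sip_sip_call"]
               + 20_000 * WEIGHTS["gateway_call"])
print(hourly_load <= 590_000)  # True
```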

The SUBSCRIBE messages support caching of authentication information. Authentication credentials are sent automatically with the SUBSCRIBE messages. The cache timeout for authentication is 5 minutes. This credential caching also reduces authentication transaction costs for interactive sessions such as Instant Messaging between individuals. If a client sends six instant messages in a 5-minute window, only the first instant message is forced to authenticate; subsequent instant messages are sent with credentials.

A typical client registration handshake for a subscriber with presence and address book enabled involves the following transactions:
• 2 REGISTER messages initiated by the client:
— 1 for REGISTER
— 1 for REGISTER with password (if authentication is enabled)

Note: Calculation of transaction cost of a client registration with authentication includes the cost of a REGISTER and the authentication overhead. Example: 1.33 + 0.46 is the cost for both REGISTER messages.

• SUBSCRIBE transactions initiated by the client:
— 1 SUBSCRIBE message for service package download
— 1 SUBSCRIBE message for address book download
— 1 SUBSCRIBE message for each friend (plus 1 for self) the subscriber is watching

Note: This subscribe cost applies to IP Phone 2004 with SIP Firmware (Phase 2), IP Phone 1120E and IP Phone 1140E telephones, but does not apply to the other IP Phones or the IPCM acting on their behalf.

• 2 NOTIFY transactions (1 for the service package, 1 for the address book), initiated by the Session Manager in response to the SUBSCRIBE requests
• n+1 NOTIFY transactions for each registered client, initiated by the Session Manager for each friend (n) in the subscriber's presence watch list

Note: This notify cost applies to IP Phone 2004 with SIP Firmware (Phase 2), IP Phone 1120E, and IP Phone 1140E telephones, but does not apply to the other IP Phones or the IPCM acting on their behalf.

A typical de-registration from the network includes the same message sequence as a registration sequence with one exception: the timeout value specified in the REGISTER and SUBSCRIBE messages is 0 for de-registration.
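The registration handshake above can be tallied as a quick sanity check. This is a sketch only: the function name is illustrative, and the counts follow the message sequence listed above (2 REGISTER messages, SUBSCRIBE messages for the service package, address book, and each friend plus self, and the corresponding NOTIFY transactions).

```python
def registration_handshake_messages(friends, registered_clients=1):
    """Message count for a typical client registration with presence and
    address book enabled, per the sequence described above."""
    registers = 2                    # REGISTER, then REGISTER with password
    subscribes = 2 + (friends + 1)   # service package, address book,
                                     # one per friend plus self
    notifies = 2 + (friends + 1) * registered_clients
    return registers + subscribes + notifies

# A subscriber with 5 friends and one registered client:
print(registration_handshake_messages(friends=5))  # 18
```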


A presence NOTIFY message is sent when a user changes their presence state. Presence states supported include Active, Connected, and Unavailable. When an error is detected, the Unknown state is displayed. For more information, see Presence Fundamentals (NN42020-141).

Subscriber information is cached by the Session Manager. Transactions where the subscriber information is present in the cache do not require a fetch to the Database Manager. In this way the workload is reduced and performance improved.

The subscriber cache on the Session Manager is cleared on restart or failover. New subscriber entries result in a cache miss on the next transaction involving the new subscriber. Additional work is required on the Session Manager to fetch subscriber data and place them into the cache.

Each Session Manager has sufficient capacity to store its rated number of subscribers' data in the cache. The rated transaction capacity of the module assumes steady-state operation, with none of the transactions requiring fetches from the Database Manager. A Session Manager failure or restart occurring while running at rated capacity results in a high cache miss rate. The corresponding additional work on the Session Manager to fetch subscriber data from the Database Manager can drive the Session Manager into overload. This is an expected condition.

Overload conditions manifest as denial of new transactions, increased processing delay, or retransmissions. When the transaction rate diminishes, or when sufficient data is populated into the cache, the overload due to the cold cache condition abates.

Transaction model notes
Note 1: Presence: Presence subscriptions are made individually. A Presence subscription message is sent to the Session Manager for each friend a subscriber is watching, so the cost of a Presence subscription is based on the number of friends subscribed to. Additionally, for every friend subscribed to, a notification is sent; the cost of the notifications must therefore also be factored into the relative cost. The actual cost of any given presence subscription cannot be determined ahead of time because factors such as caching, timing, and use case are not calculable. The formula given here represents empirical measurements based on controlled traffic runs. These subscriptions are refreshed by the MCS clients once per day.

Where a user has included a friend that is not homed on the particular Session Manager processing the user's presence subscription, additional costs are incurred. This also applies where more than one Session Manager is present in a 2+1 configuration. In such a scenario, if the subscriber is homed to another Session Manager in the 2+1 cluster, or on a foreign system, the Session Manager processing the subscription request must proxy that portion of the subscription to the other system.

In addition to the cost of making subscriptions, there is a cost each time a friend being watched changes their presence state. The cost varies depending on how many people are watching the subscriber at the time the state change is made. The maximum number of subscribers who can be watching another subscriber at a given point in time is impossible to determine; the system configuration limits provide upper bounds on the maximum number of watchers.

Because service notifications are server generated, they are not subject to authentication costs. However, the registration message that triggers the state change is still subject to authentication overhead. If a subscriber changes their state with one client, all of the subscriber's other clients that are currently logged in, including the client where the state change takes place, receive a notification of the state change. The notification keeps the clients synchronized.

The Maximum Subscriptions Allowed parameter, found at the Domain level of the Provisioning Client, limits the maximum number of watchers of any given subscriber. The default value is 50. This limit enables administrators to control bursts of NOTIFY messages. For example, every subscriber may want to watch the presence status of the CEO of the company; with a large number of watchers, a single presence state change could cause system overload controls to be invoked. Subscriptions are accepted on a first-come, first-served basis. After the maximum number of subscribe messages to watch a given user has been reached, all other subscription requests receive an Unknown presence state response, and any state changes made by the subscriber being watched are not sent to them.
Note 2: Address Book and Service Package Subscription: Whenever an MCS client logs in, a separate address book subscription and a separate service package subscription are made to the system. These subscriptions are refreshed once per day. In addition to the cost of the subscription, whenever the address book or service package is updated, notifications are sent to the affected clients.

See Appendix "Presence impact on system capacity" (page 351) and Appendix "Constructing a reference model for system capacity planning" (page 363) for additional information and formulas for calculating capacities on the Session Manager.


Database Manager

Function overview
The Database Manager uses the Oracle* database to store subscriber, provisioning, and configuration data, as well as network-based call logs, associated with the MCS system, site, servers, components, and services. The primary function of the Database Manager is to provide data storage and retrieval capability to support MCS services.

The Database Manager stores subscriber and provisioning information as well as subscriber location information and registration status based on information received from SIP registrations.

In a redundant MCS system deployment, two Oracle master databases (primary and secondary) operate in replicated mode to reliably support the MCS system. Replication enables changes made at one master site to be applied at the other master site. In the MCS database, only database tables are replicated; the Oracle stored procedures and functional views are not. Therefore, any change to nonreplicated database objects must be made at both master sites independently, which is handled by database upgrade scripts. Changes to replicated objects must be made at the master site only.

Because the MCS database runs in replicated mode, it is very important that the two databases involved in replication remain in sync. During database replication, it is possible for a transaction to be successfully propagated to the backup master site without being successfully applied at that site. Such an error can result from a database problem, such as a lack of available space in a table, and can cause the databases to drop out of sync. It is good practice to look for error transactions at least twice a day using Oracle Enterprise Manager (OEM) tools. To ensure the Oracle database is properly replicated, the operator must monitor the replication jobs on a daily basis.

The OEM Console and Oracle Monitor Application, launched from the System Management Console, are used for most of the maintenance operations. OEM enables the database administrator to perform database configuration, backup and recovery, and database fault management. Using Oracle SNMP Agents, the Oracle Monitor Application gathers information on the operational status of the database (for example, disk and table space utilization) and sends alarms and operational measurements (OMs) to the System Management Console. It is essential that the MCS database operator is a trained person who has sufficient knowledge of Oracle and the management databases.

Platform
The Database Manager is deployed on IBM eServer x306m servers. For information about the servers, see "MCS hardware platform" (page 44).


Redundancy
In a nonredundant configuration, such as the single server and 4-server configurations, the Database Manager is deployed on a single server. There is a single primary Oracle database and no secondary database. The subscriber and component data are not replicated.

In a redundant 8-server configuration, the MCS database is fully protected using internally mirrored disk drives for physical data store and using Master-Master database replication of domain, subscriber, service, routing, provisioning, and system configuration data. Additional information is available in "Guidelines for reliability and survivability" (page 271).

Servers for the Database Manager connect to the LAN using redundant Ethernet links.

Scalability
The MCS system can be scaled up to one pair of servers for the Database Managers. The following are the estimated numbers of provisioned subscribers supported by the Database Manager in different MCS systems:
• Small system (1 server): 50 to 400 subscribers
• Medium nonredundant system: 20 000 subscribers
• Medium redundant system: 20 000 subscribers

Capacity
The Database Manager capacities are characterized by subscriber count.

The Session Manager caches subscriber information. The Database Manager is only used for call processing when the information is not found in the cache. Information in the Session Manager does not expire. This mechanism improves the overall performance of the MCS system and reduces the workload on the Database Manager.

In the steady state, the Database Manager has few, if any, subscriber data fetches, so engineering of the Database Manager workload is not required. Reserve capacity exists on the Database Manager to account for recovery or restart of a Session Manager, where significant numbers of transactions can require fetching subscriber data from the Database Manager. However, this capacity can be exceeded, affecting recovery time and transaction delay both for the recovering Session Manager and for other system elements' transactions to the Database Manager.


Failure or restart of multiple Session Managers, simultaneously or in short succession, during periods of heavy load may exceed the capacity of the Database Manager, because the combined workload of multiple Session Managers fetching subscriber data can be too great.

The primary capacity factor during the steady-state operation is the subscriber count.

For more information, see "Capacity" (page 47).

System Manager

Function overview
The MCS Management System consists of the System Management Console and the System Manager. The System Manager sits between the MCS system components and the System Management Console. The console, a JAVA application that runs on a Windows PC, provides an interface to all management functions. The System Manager provides the following functions:
• interfaces for deployment of software loads for service components and for configuration of properties of the system, site, server, component, and services
• collection of operations, administration, and maintenance (OAM) information from MCS servers for display on the System Management Console
• monitoring of memory, CPU, IO, and disk utilization, and capture of various alarm events in log files stored on the server running the System Manager, using the Simple Network Management Protocol (SNMP). The System Manager can also provide SNMP traps to northbound operational support systems. The System Manager monitors a subset of the SYSLOG logs.
• monitoring of MCS system performance through Operational Measurements (OMs). Every 15 minutes, OM report information is captured and archived in Comma Separated Value (CSV) format to disk on the server running the System Manager. OMs are usually represented in terms of Groups, which contain related performance data. For example, one group can provide data such as the number of successful calls, the number of rejected calls, and unauthorized attempts, while another group can provide data such as average call holding time and call duration. A snapshot of the report is viewable through the OM browser from the System Management Console.
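Because the OM archives are plain CSV files captured every 15 minutes, they can be post-processed with ordinary tooling. The sketch below rolls snapshots up into hourly totals; it is an illustration only, and the `timestamp` column and counter name are assumptions, since actual OM group and field names depend on the OM group being archived.

```python
import csv
from collections import defaultdict

def hourly_om_totals(csv_path, counter="SuccessfulCalls"):
    """Sum a counter across the 15-minute OM snapshots in each hour.
    The 'timestamp' and counter column names are hypothetical; substitute
    the field names of the OM group actually being archived."""
    totals = defaultdict(int)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            hour = row["timestamp"][:13]   # e.g. "2009-10-08 14"
            totals[hour] += int(row[counter])
    return dict(totals)
```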


The System Manager is not responsible for monitoring third-party components and gateways. The Media Application Server and applications are monitored through the Media Application Server Console.

Platform
The System Manager is deployed on IBM eServer x306m servers. The System Manager shares the same server platform with the Accounting Manager. For information about the servers, see "MCS hardware platform" (page 44).

Redundancy
In a redundant configuration, the System Manager operates on one server while the Accounting Manager operates on the other. In the event of a failure, the System Manager application must be restarted manually on the backup server.

Servers for the System Manager connect to the LAN using redundant Ethernet links.

For a detailed discussion of redundancy and reliability of the System Manager, see "Deployment options and functional components" (page 39).

Scalability
In a redundant architecture, the System Manager and Accounting Manager require one pair of servers per system to provide network reliability.

Capacity
The following list shows the number of System Management Console clients supported by the System Manager for different MCS systems:
• single server: 2
• 4-server or 8-server: 6

The storage capacities on the System Manager server are:
• 7 days of formatted OM files
• 7 days of log files

Accounting Manager
Function overview
The MCS system provides an accounting system that contains the following key logical entities:
• the Central Accounting Manager (CAM), which resides on the Accounting Manager. The CAM receives raw accounting data from the Local Accounting Manager (LAM) and formats it into an IPDR-based (IP Detailed Record) accounting record. The formatted output is always stored on disk and can be stored in a compressed format.
• the Local Accounting Manager, which resides on the Session Manager. The LAM collects raw (unformatted XML) accounting data from active sessions on the Session Manager. It also transports the raw accounting data to the Accounting Manager, and stores that data locally in the event of communication problems with the Accounting Manager. The LAM uses the Data Transport Protocol (DTP) over TCP to transport data reliably to the Accounting Manager.

The Accounting Manager provides a framework that enables the transport of accounting information from the MCS system to a back-end billing system. The Accounting Manager provides the following functions:
• reliable transport of accounting information
• persistent storage of accounting information
• flexible and extensible formatting of accounting fields into an Internet Protocol Detail Record (IPDR)-like format

Platform
The System Manager and Accounting Manager are deployed on IBM eServer x306m servers. For information about the servers, see "MCS hardware platform" (page 44).

Redundancy
The redundant configuration requires that the primary System Manager and the secondary Accounting Manager be installed on one server, and that the primary Accounting Manager and the secondary System Manager be installed on the other server.

Servers for the Accounting Manager connect to the LAN using redundant Ethernet links.

Additional information is available in "Guidelines for reliability and survivability" (page 271).

In a redundant architecture, the System Manager and Accounting Manager require one pair of servers per system to provide network reliability.

Scalability
The CAM can simultaneously receive accounting data from multiple LAMs. For each connected LAM (one for each Session Manager), the CAM supports reception of both a primary and a recovery accounting data stream.


Capacity
The average billing record size is approximately 1.2 KB per session. The LAM on the Session Manager server can store an estimated 24 hours of billing records locally under normal load. The Accounting Manager server can store up to seven days of call billing records under normal load. The seven-day storage capacity is only available when compression is used.

Note: Enabling accounting records for Instant Messages (IM) can cause one record to be generated for each IM. This will have a performance impact on the Session Managers as well as the Accounting Manager. In order to store seven days of accounting records with the IM feature enabled, compression must be enabled on the Accounting Manager.
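As a rough planning aid, the storage figures above can be turned into arithmetic. Only the 1.2 KB average record size comes from this document; the sessions-per-day load and the compression ratio in the sketch below are hypothetical inputs you would replace with your own.

```python
def accounting_storage_gb(sessions_per_day, days=7, record_kb=1.2,
                          compression_ratio=1.0):
    """Estimate Accounting Manager disk usage in GB:
    records/day x retention days x average record size x compression.

    compression_ratio is an assumed factor (1.0 = uncompressed); the
    seven-day retention figure in the text assumes compression is on.
    """
    return sessions_per_day * days * record_kb * compression_ratio / 1e6

# Hypothetical load: 100 000 sessions/day, uncompressed, 7 days of records.
print(round(accounting_storage_gb(100_000), 3))  # 0.84 (GB)
```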

Provisioning Manager
Function overview
The Provisioning Manager presents a logical view of complex database structures through a user-friendly Web interface. The logical view greatly simplifies provisioning tasks by combining some or all columns of one or more underlying tables into a single entity for provisioning. The Provisioning Manager offers the following functions:
• provisioning of subscribers, service profiles, and translation information in an open and protected manner
• data validation

The Provisioning Manager offers the following functions through the Provisioning Client interface:
• delivery of service information to client devices
• delivery of service data to the Session Manager and clients

The Provisioning Client and the Personal Agent provide the provisioning interfaces for the MCS system. The interfaces communicate using HTTP/TCP (port 80) to the Provisioning Manager, which includes a Web server. The Provisioning Manager communicates with the Database Manager for Meet Me conferencing service. The Provisioning Manager communicates through SIP with the Session Manager for the Click-to-Call application available in the Personal Agent.

By default, Mobile Personal Agent is accessible through HTTPS only, on port 8444. However, you can also enable HTTP access. In either case, you must ensure that the appropriate port is accessible from outside the trusted network. For more information about the ports used by Mobile Personal Agent, and the format of the path you must use to access it, see Table 4 "Example access paths" (page 60).

Table 4
Example access paths

Protocol   Port   Example access path
https      8444   https://mcs5100:8444/mpa/
http       8081   http://mcs5100:8081/mpa/

Multimedia PC Clients communicate with the Provisioning Manager to retrieve address book and service package information using SOAP (Simple Object Access Protocol) over Transmission Control Protocol (TCP) for management purposes. Other clients receive this information from their respective client managers.

The Provisioning Manager communicates with the System Manager using TCP interfaces. The System Manager sends management and configuration data to the Provisioning Manager. The Provisioning Manager sends performance data, logs, and alarms to the System Manager.

The Provisioning Manager communicates with the Media Application Server for services such as Meet Me audio conferencing. This interface is used to synchronize subscriber information such as chairperson and bridge numbers.

Data access can be protected by three different mechanisms: authentication, secure transport, and data access restrictions. The authentication mechanism requires that all users log in before their data access requests are processed. Data access restrictions can be defined to ensure that users can access only data for which they have the proper privileges. These privileges are based on user roles and responsibilities that are datafilled by an administrator.

The Web server used by the Provisioning Manager is also used for the automatic client update feature on the Multimedia PC Client. This feature assists end users in keeping their client software application current.

Manage HTTP access for Mobile Personal Agent

Nortel recommends that you allow only HTTPS access to Mobile Personal Agent. HTTPS is the default access method. Use the procedures in this section to enable or disable HTTP access, or to view the current HTTP configuration.


Enable HTTP access for Mobile Personal Agent

If you require HTTP access, use the following procedure to enable it.

Prerequisites
• Ensure that you have deployed MCS 5100 4.0.27 or later.

Enabling HTTP access for Mobile Personal Agent
Step  Action

1 Log on to the server command line using the root user account.

2 Enter the command:
cd /var/mcp/run/MCP_9.1//bin
For example:
cd /var/mcp/run/MCP_9.1/PROV1_0/bin

3 Enter the command:
./mpaProtocol.pl http enable
HTTP port 8081 is mapped to the Mobile Personal Agent feature. For an example of the path used for HTTP access, see Table 4 "Example access paths" (page 60).

—End—

Table 5
Variable definitions

Variable   Definition
           The name assigned to the Provisioning server or Personal Agent Manager server.

View status of HTTP access for Mobile Personal Agent
Use the following procedure to view information about whether HTTP access for Mobile Personal Agent is enabled or disabled.

Viewing HTTP access status for Mobile Personal Agent
Step  Action

1 Log on to the server command line using the root user account.

2 Enter the command:
cd /var/mcp/run/MCP_9.1//bin
For example:
cd /var/mcp/run/MCP_9.1/PROV1_0/bin


3 Enter the command:
./mpaProtocol.pl show

—End—

Disable HTTP access for Mobile Personal Agent
If HTTP access for Mobile Personal Agent is enabled, you can use the following procedure to disable it.

Disabling HTTP access for Mobile Personal Agent
Step  Action

1 Log on to the server command line using the root user account.

2 Enter the command:
cd /var/mcp/run/MCP_9.1//bin
For example:
cd /var/mcp/run/MCP_9.1/PROV1_0/bin

3 Enter the command:
./mpaProtocol.pl http disable

—End—

Platform
The Provisioning Manager is deployed on an IBM eServer x306m server. For information about the servers, see "MCS hardware platform" (page 44).

The Provisioning Manager is coresident with the IP Client Manager on the same server.

Redundancy
The Provisioning Manager requires manual intervention in the event of failure.

Servers for the Provisioning Manager connect to the LAN using redundant Ethernet links.

Scalability
One Provisioning Manager is used for communication with all Multimedia PC Clients. Multiple Provisioning Managers can be installed for use by the Personal Agent.


Capacity
The Provisioning Manager supports up to 1000 connections.

IP Client Manager
Function overview
IP Phones that are not SIP-enabled are simple endpoints that must be controlled by a Multimedia PC Client or an IP Client Manager.

The IP Client Manager initiates SIP requests to the Session Manager in response to stimuli from the IP Phones. The IP Client Manager also provides the IP Phones with access to all of the multimedia services provided by the MCS system, including call forward, ignore, decline, transfer, waiting, hold, and mute. The softkeys of the telephone are also controlled by the IP Client Manager.

The IP Client Manager uses the proprietary Nortel UNIStim (Unified Network IP Stimulus) protocol to communicate with the IP Phones. The IP Client Manager acts as the UNIStim-to-SIP signaling gateway for the IP Phones. The IP Client Manager is a SIP endpoint that implements both SIP server and client applications: it is a user agent server when receiving requests such as INVITE, and a user agent client when sending such requests.

The IP Client Manager uses SIP to communicate with the Session Manager. The IP Client Manager performs SIP-UNIStim conversion for IP Phone clients, which enables the interworking of IP Phones with the Session Manager. Using its interface at the enterprise public network, the IP Client Manager sends all outgoing SIP messages (INVITE, BYE) to the Session Manager, which then forwards the information to the correct destination.

The IP Client Manager is deployed using the MCS System Management Console. The System Manager server stays in contact with the IP Client Manager through the Open Management Interface (OMI), an XML-based protocol that runs over TCP/IP using the Transport Layer Security (TLS) and Secure Sockets Layer (SSL) transport mechanisms. TLS is a protocol for secure, encrypted transmission of information.

The IP Client Manager communicates with the IP Phones only in the signaling plane. Voice media is not sent between the IP Client Manager and the IP Phones; the IP Phones exchange voice media directly with the other endpoints or with the Border Control Point. Instant Messaging functions are provided through the signaling plane and are therefore communicated through the IP Client Manager to the Session Manager.


The IP Phones contact the IP Client Manager to register to receive service. In the first message from the IP Phones, the MAC (Media Access Control) address and IP address of the phone are sent to the assigned IP Client Manager. The MAC address is included in the payload of the IP packet.

The IP Client Manager identifies IP Phones by their MAC addresses rather than by their IP addresses. Because the IP Client Manager associates device information with the MAC address of the phone rather than with its IP address, the IP address of the phone can change, as it often does in a DHCP (Dynamic Host Configuration Protocol) enabled network, and the IP Client Manager is still able to associate the user information with the appropriate phone.

When IP Phones move from location to location, they require no provisioning. The device parameters are already present in the same IP Client Manager. When the phone is reconnected into the network, it finds the IP Client Manager and requests service. The IP Client Manager notes the new IP address of the phone, associates the device information with the MAC address, which has not changed, and service is granted.
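The MAC-keyed association described above can be sketched in a few lines. The class and method names here are invented for illustration; this is not the IPCM's actual data model.

```python
class PhoneRegistry:
    """Sketch of an IPCM-style registrar that keys device state by MAC
    address, so a DHCP-driven IP change does not lose the user
    association. (Illustrative names, not the real IPCM API.)"""

    def __init__(self):
        self._by_mac = {}

    def register(self, mac, ip):
        # Create or update the entry; the IP may change on every
        # registration, but the MAC-keyed record survives.
        entry = self._by_mac.setdefault(mac, {"user": None})
        entry["ip"] = ip
        return entry

    def assign_user(self, mac, user):
        self._by_mac.setdefault(mac, {})["user"] = user

    def lookup(self, mac):
        return self._by_mac.get(mac)

reg = PhoneRegistry()
reg.register("00:1B:25:AA:BB:CC", "10.0.0.15")
reg.assign_user("00:1B:25:AA:BB:CC", "alice")
reg.register("00:1B:25:AA:BB:CC", "10.0.3.42")  # phone moved, new DHCP lease
print(reg.lookup("00:1B:25:AA:BB:CC"))
# {'user': 'alice', 'ip': '10.0.3.42'} -- user association survives the move
```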

All programmed configurations such as call forwarding, call reject, redirect for IP Phones are stored in the Database Manager. Since this information is stored in the database, the data is persistent even if the IP Client Manager is restarted.

Platform
The IP Client Manager is deployed on an IBM eServer x306m server. For information about the servers, see "MCS hardware platform" (page 44).

Redundancy
Redundancy of the IP Client Manager is supported in 8-server systems.

The IP Client Manager is equipped with internal mirrored disks to protect software and local persistent configuration.

For the 8-server system, the IP Client Manager also connects to the LAN using redundant Ethernet links.

For more information on redundancy and reliability aspects, see "Guidelines for reliability and survivability" (page 271).

Scalability
IP Client Managers can be added as required. Scalability is limited only by the maximum number of subscribers supported by each MCS system.


Capacity
The IPCM module on the small system can support up to 400 non-SIP-enabled IP Phone clients. For the medium system, one active IPCM can support up to 3333 non-SIP-enabled IP Phone clients.

On the medium system, it is good practice to distribute the 3333 non-SIP-enabled IP Phone clients across a pair of servers to minimize the impact of a service outage.

The IPCM supports a maximum of 1,500 address book entries for each subscriber.

Wireless Client Manager
Function overview
The Wireless Client Manager (WiCM) is the application that converts HTTP requests from the wireless client into SIP and SOAP. It communicates with the MCS Session Manager using SIP. WiCM uses SIP on behalf of the wireless clients to interact with the MCS system to send and receive instant messages, make and receive calls, and query presence states. It communicates with the MCS Provisioning Server using SOAP to access a subscriber's address book and routes.

Figure 4 MCS 5100 network with WiCM and wireless access devices


The major WiCM to Wireless Client functionality offered in this release includes:
• Instant Messaging
• Click to Call
• Presence
• Corp Dir
• Reachability Routes (including Temporary Address)
• Address Book
• Connect and Disconnect

The WiCM is not deployed in the MCS System Management Console as the other MCS servers are. The RIM BlackBerry is the only wireless client that is supported. Research in Motion (RIM) is the company that produces the BlackBerry. For additional information about RIM, see www.rim.com. For information specific to the RIM BlackBerry, see www.blackberry.com.

Platform
The Wireless Client Manager is deployed on an IBM eServer x306m server. For information about the server, see "MCS hardware platform" (page 44).

Redundancy
The WiCM platform can be deployed as a single-server architecture or as a two-server redundant architecture. Hardware redundancy is achieved by means of paired active servers. The wireless client fails over to an alternate WiCM if the connection to the current WiCM is lost. The addresses of both servers (in a redundant pair) are configured in the wireless clients. If one WiCM server of the redundant pair is unavailable to the wireless client, the client connects to the other WiCM server. Redundant WiCM servers are not aware of each other. Paired WiCM servers are both active, but the supported number of subscribers does not double when two WiCM servers are paired for redundancy. See Wireless Client Manager Fundamentals (NN42020-118).
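Because the redundant WiCM servers are unaware of each other, the failover logic lives entirely in the client: try the configured addresses in turn until one answers. The sketch below models that behavior; the function name and the try_connect callback are illustrative assumptions, not the BlackBerry client's actual API.

```python
def connect_with_failover(servers, try_connect):
    """Client-side WiCM failover: both configured addresses are tried in
    order, and the first reachable server is used.

    servers:     list of configured WiCM addresses (primary first)
    try_connect: callable(addr) -> bool, True if the server is reachable
    """
    for addr in servers:
        if try_connect(addr):
            return addr
    raise ConnectionError("no WiCM server reachable")

# Example with a stub reachability map: primary down, secondary up.
up = {"wicm1.example.com": False, "wicm2.example.com": True}
chosen = connect_with_failover(["wicm1.example.com", "wicm2.example.com"],
                               lambda addr: up[addr])
print(chosen)  # wicm2.example.com
```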

Scalability
Additional Wireless Client Manager servers can be deployed as capacity demand increases, as long as the maximum number of active users for the entire MCS system is not exceeded.

Capacity
The IBM eServer x306m supports 3000 subscribers. This platform can buffer 150 messages for each subscriber.


Border Control Point
Function overview
In IP networks, there are situations where one or both endpoints are obscured from each other. Acting as the meeting place between two endpoints, the Border Control Point performs the Network Address and Port Translation (NAPT) function. NAPT enables the Border Control Point to relay packets between two endpoints located in different networks, which can use different address spaces or the same address space. The Border Control Point can perform NAPT on both the source and destination address and port, forward incoming packets for authorized media sessions, and drop packets that are not authorized. The Border Control Point is an optional component of the MCS system. It is controlled by the Session Manager through Media Portal Control Protocol (MPCP) messages. The Border Control Point NAPT function provides the MCS platform with the following benefits:
• mapping between two address spaces that cannot communicate directly
• opening and closing media pinholes only at the request of the Session Manager, which has authenticated the request
• rejecting packets arriving on inactive ports
• accepting on active ports only media packets that originate from the specific source (IP address and port) associated with the active port and that match the session's media packet signature, such as RTP or UDP
• media Meet Me functions that enable multimedia sessions between clients in different enterprises (such as partners sitting behind firewalls), or even clients in the same enterprise served by disparate network islands, by providing an addressable point-of-presence (POP) in the network
• media anchor/pivot functions that enable media connections to be manipulated transparently to the endpoint. This is useful for less-capable clients and endpoints (for example, those that do not support mid-call media manipulations).

Note: With the media anchor/pivot function, the endpoint of a media stream can be changed dynamically without destroying the connection.

A multimedia session consists of a control plane (SIP, MPCP) and a bearer plane (RTP, RTCP, and UDP). The Session Manager operates in the control plane and controls session establishment, manipulation, and termination. The Border Control Point operates in the bearer plane, forwarding media packets between endpoints.


The Session Manager has an MPCP control component that controls the Border Control Point through MPCP messages. The Border Control Point processes each received MPCP request and performs the associated allocation or de-allocation of media resources (IP addresses and ports). The result of the request and all associated information is returned to the Session Manager for call processing. The Border Control Point communicates with each Session Manager (active or inactive). This enables a standby Session Manager to be aware of Border Control Points and shortens the downtime associated with Session Manager failovers.

For example, upon receiving a SIP INVITE message for a session that requires the Border Control Point, the Session Manager sends an MPCP Create Connection (CRCX) message to request media resources for the session. The Border Control Point processes the CRCX request and allocates media resources consisting of IP addresses and UDP ports for the session. It associates the allocated media resources with a unique session identifier to ensure that subsequent requests for the session manipulate the appropriate media resources. The allocated media resources are communicated back to the Session Manager, which then uses this information to modify the IP address and port media descriptions in the Session Description Protocol (SDP) contained in the received SIP INVITE. This advertises the newly allocated RTP media for the session to the terminating SIP endpoint. Upon receiving a SIP BYE message, the Session Manager sends an MPCP Delete Connection (DLCX) message to the Border Control Point requesting the de-allocation of the media resources associated with the session identifier. The result of the de-allocation is communicated back to the Session Manager to acknowledge successful processing of the request.

The MCS system uses a three-tiered approach to manage Border Control Point resources.
This three-tiered resource management strategy involves:
• Media Portal Selection
— Each Border Control Point is configured to register with specific Session Managers.
— There can be many Border Control Points registered to provide service to a single Session Manager.
— All Border Control Points registered with a Session Manager are placed in a pool.
— As multimedia sessions occur that require Border Control Point facilitation, the Session Manager selects a target Border Control Point from the pool based on a selection algorithm.

• Blade Selection
— The Border Control Point comprises two functional components: a host CPU that supports MPCP, and up to six blades that perform the media packet relay functions.
— Every new allocation request causes the Border Control Point host CPU to query the occupancy of each associated blade.
— The Border Control Point host CPU selects the best blade, the one with the most idle media resources, to process the media session.
• UDP Port Selection
— The Border Control Point can be configured to use UDP ports within a specified range. To avoid interfering with the well-known ports, configure this parameter to a higher port range, such as 40 000 to 60 000.
— When a blade (the best blade, as determined in Blade Selection) is requested to process a new media session, it randomly selects an available port in the specified range and allocates it to the session.
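Taken together, the CRCX/DLCX exchange and the blade and port selection tiers can be modeled in a few lines. This is a teaching stub under stated assumptions: the names are invented, the real host-CPU algorithm is not published, and no MPCP wire encoding is shown.

```python
import random

class BorderControlPointStub:
    """Toy model of the resource management described above: a CRCX
    request picks the most idle blade and a random UDP port in the
    configured range, keyed by session identifier; DLCX releases it."""

    def __init__(self, media_ip, blade_occupancy, port_range=(40000, 60000)):
        self.media_ip = media_ip
        self.blades = blade_occupancy   # blade id -> fraction busy (0.0-1.0)
        self.port_range = port_range
        self.sessions = {}

    def crcx(self, session_id):
        """Allocate media resources for a session (MPCP Create Connection)."""
        blade = min(self.blades, key=self.blades.get)  # most idle blade
        port = random.randrange(*self.port_range)      # random port in range
        self.sessions[session_id] = (blade, self.media_ip, port)
        return self.sessions[session_id]               # reported to the SM

    def dlcx(self, session_id):
        """Release the session's resources (MPCP Delete Connection)."""
        return self.sessions.pop(session_id, None) is not None

bcp = BorderControlPointStub("192.0.2.10", {1: 0.72, 2: 0.31, 3: 0.55})
blade, ip, port = bcp.crcx("sess-1")
print(blade, 40000 <= port < 60000)       # 2 True
print(bcp.dlcx("sess-1"), bcp.dlcx("sess-1"))  # True False
```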

Rules for Border Control Point use
Routing media packets through the Border Control Point consumes significant network resources. Therefore, it is desirable to route media directly between clients, without the Border Control Point, when both clients are in the same network.

Rules have been established to optimize the voice path between two endpoints by optimizing (or avoiding) the use of Border Control Points. The Session Manager uses two sets of rules to control the Border Control Point: the Media Portal Insertion rules and the Media Portal Selection rules. The insertion rules determine whether a Border Control Point is required, based on network routability factors such as private/public addresses and subscriber domain information; that is, they determine when a Border Control Point should be involved in a call. The selection rules determine how resources should be allocated; that is, which portal should be used for the call.
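One insertion-rule factor, the private/public routability check, can be illustrated in isolation. This is a deliberately simplified sketch: the actual insertion rules also weigh subscriber domain information and location trees, and the function name is invented.

```python
import ipaddress

def needs_border_control_point(orig_ip, term_ip):
    """Simplified insertion check: insert a Border Control Point when
    the endpoints cannot route media directly, here approximated as one
    address being private (RFC 1918) and the other public. The real
    MCS rules consider more than address classification."""
    a = ipaddress.ip_address(orig_ip)
    b = ipaddress.ip_address(term_ip)
    return a.is_private != b.is_private

print(needs_border_control_point("10.1.2.3", "8.8.8.8"))   # True
print(needs_border_control_point("10.1.2.3", "10.4.5.6"))  # False
```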

To add location-based information to the decision process, MCS provides the ability to capture the virtual network topology and associated location information for call processing. Location information is provided to the system by constructing domain-based location trees based on the service provider's knowledge of the network topology and the locations of various MCS entities such as subscribers, clients, gateways, and portals. MCS has also improved the insertion and selection decision processes based on the locations of the subscribers and Border Control Point resources.


The location-based deployment model gives service providers the ability to deploy Border Control Points and other system resources closer to the subscribers, to meet service quality requirements and to scale MCS services across large geographic areas. More information is available in the "Location infrastructure" (page 90) and "MCS virtual network for Border Control Point" (page 97) sections.

Redundancy
The Border Control Point provides the following redundancy mechanisms:
• Host IP failover: The host CPU has two physical Ethernet ports. By default, one interface is configured as active and the other as standby. In this configuration, if any problems are detected on the active link, the Border Control Point automatically switches activity to the standby link.
• Network connectivity: The host CPU physical Ethernet ports are connected to different network equipment, such as switches and routers.
• Service redundancy: Each Border Control Point can service calls from multiple Session Managers. Failure of a Session Manager does not isolate the Border Control Point resources. Failure of a Border Control Point decreases overall MCS system capacity.

For detailed redundancy information, see "Guidelines for reliability and survivability" (page 271).

Scalability
Border Control Points are scaled as pooled resources. Half shelves for the Border Control Point can be added as needed. The only limit is that the total number of Border Control Points must not exceed the maximum number of managed elements supported by the System Manager.

Capacity
The following limits apply to the Border Control Point:
• 48 000 busy hour session attempts for each half shelf
• 400 simultaneous voice calls for each resource card (assumes G.711, 20 ms frames, no silence suppression)
• 2400 simultaneous voice calls for each half shelf

When engineering capacity on a Border Control Point media blade, the following two key points are important:
• Audio uses high packets per second and small packet sizes (low bandwidth).
• Video uses low packets per second and large packet sizes (high bandwidth).


Packets per second and bandwidth numbers can vary from those provided above, depending on the audio CODEC used or on video CODEC settings such as resolution and frames per second. The Border Control Point is constrained by both packets per second and bandwidth. The 400-session limit for audio is established based on the packet-per-second processing limit. The packet-per-second and bandwidth calculations are:
• packets per second: 400 x 50 pps (based on 20 ms G.711) x 2 directions = 40 000 pps
Note: This is the maximum for each media blade.

• bandwidth: 400 x 90 Kbps (based on 20 ms G.711) x 2 directions (transmit and receive) = 72 Mbps
Note: The worst-case scenario in terms of bandwidth involves communications that enter and exit the same 100BaseT Ethernet link.

When the MCS operates in full duplex, it can theoretically run at 100 Mbps. However, Nortel recommends that the link be engineered to run at 75% of the maximum rate in order to preserve service quality and performance.
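The blade figures above follow directly from per-stream arithmetic. The helper below reproduces them, assuming approximately 90 kbit/s per G.711 stream including packet overhead (the per-stream rate consistent with the 72 Mbps result); the function name and parameters are illustrative.

```python
def media_blade_load(sessions, pps_per_stream=50, kbps_per_stream=90):
    """Reproduce the media-blade engineering arithmetic for 20 ms G.711:
    50 packets/s and ~90 kbit/s per stream (payload plus IP/UDP/RTP
    overhead), doubled for the two directions.

    Returns (packets per second, megabits per second)."""
    pps = sessions * pps_per_stream * 2
    mbps = sessions * kbps_per_stream * 2 / 1000
    return pps, mbps

print(media_blade_load(400))  # (40000, 72.0)
```

At 72 Mbps the worst-case load stays just under the recommended 75% (75 Mbps) engineering ceiling for a full-duplex 100 Mbps link.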

Optimization
For calls that traverse two Session Managers within the same cluster, as is the case when the terminator is homed to a different Session Manager than the originator, only the first Session Manager allocates a media portal. This prevents wasting Border Control Point resources. This behavior is independent of client type (such as third-party PRI gateways), call type (such as basic call, transfer, and conference), and client firewall status.

Media Gateway 3200
Function overview
The Media Gateway 3200 is a media gateway that can interwork with both ISDN Primary Rate Interface (PRI) and Channel Associated Signaling (CAS) interfaces. The Media Gateway 3200 supports interworking with the CallPilot voice mail system and with PSTN or PBX systems. The Media Gateway 3200 features an echo canceller with a 32 ms tail and silence suppression with Comfort Noise Generation. The Media Gateway 3200 supports the following CODECs:
• G.711
• G.729A


Platform
The Media Gateway 3200 hardware platform is a single hot-swap compact PCI (cPCI) TP-1610 board contained in a 19-inch 1U chassis provided by AudioCodes. The chassis contains two cPCI slots and supports a 110/240 V AC or optional 48 V DC power supply.

Redundancy
The Media Gateway 3200 has two 10/100BaseT Ethernet connections to the IP network. The connections provide LAN redundancy.

Scalability
Media Gateway 3200 units can be added as demand increases.

Capacity
The Media Gateway 3200 supports 1, 2, 4, 8, or 16 spans of independent, simultaneous calls. The following list shows the BHCA supported at each span level:
• 1 span: 6250 BHCA
• 2 spans: 12 500 BHCA
• 4 spans: 25 000 BHCA
• 8 spans: 50 000 BHCA
• 16 spans: 100 000 BHCA

The Media Gateway 3200 cPCI board can support up to 480 DSP channels. To support the maximum capacity, two gateways are required; each gateway supports 240 DSP channels. The following list shows the different configurations of the Media Gateway 3200:
• E1 configurations:
— 30 channels on 1 E1 span with 1 gateway
— 60 channels on 2 E1 spans with 1 gateway
— 120 channels on 4 E1 spans with 1 gateway
— 240 channels on 8 E1 spans with 1 gateway
— 480 channels on 16 E1 spans with 2 gateways

• T1 configurations:
— 24 channels on 1 T1 span with 1 gateway
— 48 channels on 2 T1 spans with 1 gateway
— 96 channels on 4 T1 spans with 1 gateway


— 192 channels on 8 T1 spans with 1 gateway
— 384 channels on 16 T1 spans with 2 gateways
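The channel counts above follow directly from 30 channels per E1 span, 24 per T1 span, and 240 DSP channels per gateway. A small sketch of that arithmetic (names are ours, for illustration only):

```python
CHANNELS_PER_SPAN = {"E1": 30, "T1": 24}
CHANNELS_PER_GATEWAY = 240  # each gateway supports 240 DSP channels

def dsp_channels(span_type: str, spans: int) -> int:
    """DSP channels for a Media Gateway 3200 configuration
    (30 channels per E1 span, 24 per T1 span, as listed above)."""
    return CHANNELS_PER_SPAN[span_type] * spans

def gateways_required(span_type: str, spans: int) -> int:
    """Gateways needed for a configuration; 16-span configurations
    exceed 240 channels and therefore need two gateways."""
    channels = dsp_channels(span_type, spans)
    return -(-channels // CHANNELS_PER_GATEWAY)  # ceiling division

print(dsp_channels("E1", 16), gateways_required("E1", 16))  # 480 2
```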

For more information about the Media Gateway, see the Media Gateway documentation.

Media Application Server

ATTENTION
The Media Application Server (MAS) is an optional component in the MCS system. The MAS supports Ad Hoc and Meet Me audio conferencing services, Music on Hold, Announcements, and IM Chat services. However, these services cannot be combined on a single platform. The MAS has its own suite of documentation.

IP Phone 2007
Function overview
The IP Phone 2007 is a standalone IP telephone. It can be controlled by either an IP Client Manager or a Multimedia PC Client.

The IP Phone 2007 contains a large, multi-field, touch-sensitive color LCD display that provides self-labeling feature keys for expansion of the service suite. The IP Phone 2007 connects to the network over a standard Ethernet connection. Each IP Phone 2007 is configured with two server or controller IP addresses called S1 and S2. The IP Phone 2007 first tries to connect to the controller using the S1 address and if unsuccessful tries to connect to the S2 address. These two IP addresses (S1 and S2) can be either the IP Client Manager or a Multimedia PC Client.

The IP Phone 2007 always contacts the IP Client Manager or Multimedia PC Client to receive service. In the original message from the IP Phone 2007, the MAC (Media Access Control) address and IP address of the IP Phone 2007 are sent to the IP address of the configured controlling device: the IP Client Manager or Multimedia PC Client. The controlling device identifies IP Phone 2007 telephones by their MAC addresses rather than their IP addresses, because the IP address of the phone can change dynamically, as it often does in a DHCP (Dynamic Host Configuration Protocol)-enabled network.


The network control setup for the IP Phone 2007 enables easy moving of phones and multiple user logins. An IP Phone 2007 can be moved from one building to another without the need for provisioning. When it connects to the network, the IP Phone 2007 finds the same IP Client Manager or Multimedia PC Client to request service. The device parameters are already present. The IP Client Manager or Multimedia PC Client notes the new IP address of the IP Phone 2007, associates the device information with the MAC address, which has not changed, and grants service. In addition, an IP Phone 2007 can be logged in by different users at different times to support user mobility.

Other messaging occurs when the IP Phone 2007 initializes, sending an initial UNIStim message to the controller’s IP address. There are as many as ten handshake messages that travel between the IP Phone 2007 and the controller. After this handshake sequence is complete, the controller sends a SIP REGISTER message to the Session Manager for any user that is attached to that IP Phone 2007. When the IP Phone 2007 is under the control of the IP Client Manager, there can be multiple simultaneous users. In this case, a SIP REGISTER message is sent from the IP Client Manager to the Session Manager for each user. The Session Manager replies with a 200 OK message. The IP Client Manager then sends a UNIStim message to the IP Phone 2007 that instructs the IP Phone 2007 to display the phone icon and the user name and indicates successful registration.

When a user manually logs into the IP Phone 2007, the user must press the login key and provide login information. This information is sent to the IP Client Manager, which sends it to the Session Manager. A 200 OK message is returned. The IP Client Manager sends a UNIStim message to the IP Phone 2007 that instructs the IP Phone 2007 to display the phone icon and the user name and indicates successful registration.

After successful registration, the IP Client Manager begins to send Keep-Alive messages to prompt a response from the IP Phone 2007, which resets the NAT and NAPT timers to keep the signaling path open.


Information about the IP Phone 2007 service features is available in the MCS 5100 Overview (NN42020-143) document.

Firmware upgrade
From time to time, a new firmware version can become available for the IP Phone 2007. The IP Client Manager configuration indicates the desired firmware load for connecting clients. When an IP Phone 2007 is initialized, and upon reregistration, a query detects whether the firmware matches the desired load. If not, the user is prompted to change firmware versions. The administrator can also manually request an upgrade for a particular device from the Management Console. The IP Client Manager can use a UFTP (UNIStim File Transfer Protocol) server to download this firmware to the phone. The UFTP server is installed in the IP Client Manager.

When under control of the PC, the firmware on the IP Phone 2007 cannot be upgraded.

Theoretically, several thousand firmware upgrades could be performed simultaneously since the limiting factors are the number of available assigned UDP ports and the amount of system memory. In order to prevent performance issues with the other processes running on the same machine, the number of simultaneous firmware upgrades should be limited to 20 (or 5 if 100% success is required). Failed downloads revert to the previous firmware version and the user must retry.

The firmware upgrades can be started by the System Management Console or initiated from the IP Phone 2007. The System Management Console tells the IP Client Manager to upgrade the firmware on a certain device. To begin the download, the IP Client Manager tells the IP Phone 2007 to contact the UFTP Server at the default port 50020. See the following illustration.


Figure 5 IP Phone 2007 firmware update using UFTP

The UFTP server issues a passive open (listens) on UDP port 50020 and waits for the IP Phone 2007 to initiate connections. Port 5000 (the same port used for signaling) on the IP Phone 2007 is used to connect to port 50020 on the UFTP server. The administrator can check the outcome of firmware upgrades through the System Management Console. The progress of the upgrade can be viewed on the display of the phone, which shows the percentage of the download completed as the download progresses. Control must be temporarily shifted to the IP Client Manager during a firmware upgrade.

Port 50020 is configurable in the ESetMgr.properties file; the line UserAgent.Download.ServerPort=50020 selects port 50020. The port is also configurable in the UFTPServer.properties file, through the line UFTP_PORT=50020. Both files must contain the same value.

UFTP uses UDP to download the file. As UFTP is implemented, the same UDP port can be shared for simultaneous downloads. Firewalls that may be directly in front of an IP Phone 2007 need to accept incoming UDP traffic from port 50020 to port 5000. Whether to use UFTP or TFTP for the firmware download is also configurable in the UFTPServer.properties file; the line UserAgentDownload.UFTP=true selects UFTP. Firmware files are usually a little smaller than 2 MB. UFTP uses a 2 KB block for transferring the file, so it transfers fewer packets than TFTP and download times improve by anywhere between 5 and 20 percent. TFTP sends the firmware binary file in small 512-byte blocks, so the upgrade can take as long as 5 minutes.
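The packet-count difference between the two protocols can be checked with a quick calculation (assuming, for illustration only, a firmware image of exactly 2 MB):

```python
FIRMWARE_BYTES = 2 * 1024 * 1024  # assumed ~2 MB firmware image

def packets_needed(block_size: int, file_size: int = FIRMWARE_BYTES) -> int:
    """Number of blocks required to transfer a file,
    rounding up for the last partial block."""
    return -(-file_size // block_size)

uftp_packets = packets_needed(2048)  # UFTP: 2 KB blocks
tftp_packets = packets_needed(512)   # TFTP: 512-byte blocks
print(uftp_packets, tftp_packets)    # 1024 4096
```

With four times fewer blocks to acknowledge, UFTP's shorter transfer time follows directly.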


Redundancy
The IP Phone 2007 is configured with two IP addresses for access to the IP Client Manager or Multimedia PC Client. Additional information is available in the "Guidelines for reliability and survivability" (page 271) section.

IP Phone 2004 (Phase 1 or Phase 2)
Function overview
The IP Phone 2004 is a standalone IP telephone. It can be controlled by either an IP Client Manager or a Multimedia PC Client.

The IP Phone 2004 contains a large multi-field LCD display that provides self-labeling feature keys for expansion of the service suite. The IP Phone 2004 connects to the network over a standard Ethernet connection. Each IP Phone 2004 is configured with two server or controller IP addresses called S1 and S2. The IP Phone 2004 first tries to connect to the controller using the S1 address and if unsuccessful tries to connect to the S2 address. These two IP addresses (S1 and S2) can be either the IP Client Manager or a Multimedia PC Client.

The IP Phone 2004 always contacts the IP Client Manager or Multimedia PC Client to receive service. In the original message from the IP Phone 2004, the MAC (Media Access Control) address and IP address of the IP Phone 2004 are sent to the IP address of the configured controlling device: the IP Client Manager or Multimedia PC Client. The controlling device identifies IP Phone 2004 telephones by their MAC addresses rather than their IP addresses, because the IP address of the phone can change dynamically, as it often does in a DHCP (Dynamic Host Configuration Protocol)-enabled network.

The network control setup for the IP Phone 2004 enables easy moving of phones and multiple user logins. An IP Phone 2004 can be moved from one building to another without the need for provisioning. When it connects to the network, the IP Phone 2004 finds the same IP Client Manager or Multimedia PC Client to request service. The device parameters are already present. The IP Client Manager or Multimedia PC Client notes the new IP address of the IP Phone 2004, associates the device information with the MAC address, which has not changed, and grants service. In addition, an IP Phone 2004 can be logged in by different users at different times to support user mobility.

Other messaging occurs when the IP Phone 2004 initializes, sending an initial UNIStim message to the controller’s IP address. There are as many as ten handshake messages that travel between the IP Phone 2004 and the controller. After this handshake sequence is complete, the controller sends a SIP REGISTER message to the Session Manager for any user that is attached to that IP Phone 2004. When the IP Phone 2004 is under the control of the IP Client Manager, there can be multiple simultaneous users. In this case, a SIP REGISTER message is sent from the IP Client Manager to the Session Manager for each user. The Session Manager replies with a 200 OK message. The IP Client Manager then sends a UNIStim message to the IP Phone 2004 that instructs the IP Phone 2004 to display the phone icon and the user name and indicates successful registration.

When a user manually logs into the IP Phone 2004, the user must press the login key and provide login information. This information is sent to the IP Client Manager, which then sends it to the Session Manager. A 200 OK message is returned. The IP Client Manager sends a UNIStim message to the IP Phone 2004 that instructs the IP Phone 2004 to display the phone icon and the user name and indicates successful registration.

After successful registration, the IP Client Manager begins to send Keep-Alive messages to prompt a response from the IP Phone 2004, which resets the NAT and NAPT timers to keep the signaling path open.

Information about the IP Phone 2004 service features is available in MCS 5100 Overview (NN42020-143).

Firmware upgrade
From time to time, a new firmware version can become available for the IP Phone 2004. The IP Client Manager configuration indicates the desired firmware load for connecting clients. When an IP Phone 2004 is initialized, and upon reregistration, a query detects whether the firmware matches the desired load. If not, the user is prompted to change firmware versions. The administrator can also manually request an upgrade for a particular device from the Management Console. The IP Client Manager can use a UFTP (UNIStim File Transfer Protocol) server to download this firmware to the phone. The UFTP server is installed in the IP Client Manager.

When under control of the PC, the firmware on the IP Phone 2004 cannot be upgraded.


Theoretically, several thousand firmware upgrades could be performed simultaneously, since the limiting factors are the number of available assigned UDP ports and the amount of system memory. To prevent performance issues with the other processes running on the same machine, the number of simultaneous firmware upgrades should be limited to 20, or to five if 100% success is required, because a download can occasionally fail. Failed downloads revert to the previous firmware version and the user must retry.

The firmware upgrades can be started by the System Management Console or initiated from the IP Phone 2004. The System Management Console tells the IP Client Manager to upgrade the firmware on a certain device. To begin the download, the IP Client Manager tells the IP Phone 2004 to contact the UFTP Server at the default port 50020. See the following illustration.

Figure 6 IP Phone 2004 firmware update using UFTP

The UFTP server issues a passive open (listens) on UDP port 50020 and waits for the IP Phone 2004 to initiate connections. Port 5000 (the same port used for signaling) on the IP Phone 2004 is used to connect to port 50020 on the UFTP server. The administrator can check the outcome of firmware upgrades through the System Management Console. The progress of the upgrade can be viewed on the display of the phone, which shows the percentage of the download completed as the download progresses. Control must be temporarily shifted to the IP Client Manager during a firmware upgrade.


Port 50020 is configurable in the ESetMgr.properties file; the line UserAgent.Download.ServerPort=50020 selects port 50020. The port is also configurable in the UFTPServer.properties file, through the line UFTP_PORT=50020. Both files must contain the same value.

UFTP uses UDP to download the file. As UFTP is implemented, the same UDP port can be shared for simultaneous downloads. Firewalls that may be directly in front of an IP Phone 2004 need to accept incoming UDP traffic from port 50020 to port 5000. Whether to use UFTP or TFTP for the firmware download is also configurable in the UFTPServer.properties file; the line UserAgentDownload.UFTP=true selects UFTP. Firmware files are usually a little smaller than 2 MB. UFTP uses a 2 KB block for transferring the file, so it transfers fewer packets than TFTP and download times improve by anywhere between 5 and 20 percent. TFTP sends the firmware binary file in small 512-byte blocks, so the upgrade can take as long as 5 minutes.

Redundancy
The IP Phone 2004 is configured with two IP addresses for access to the IP Client Manager or Multimedia PC Client. Additional information is available in the "Guidelines for reliability and survivability" (page 271) section.

IP Phone 2004 with SIP firmware
In addition to all the features of the IP Phone 2004 telephones, the IP Phone 2004 with SIP firmware does not require the IPCM to translate UNIStim to SIP and can register directly to the Session Manager.

Only the IP Phone 2004 (Phase 2) telephone can be upgraded to SIP.

In SIP mode, the telephones have the same functionality and User Interface (UI) model as IP Phone 2004 telephones connected to an MCS PC Client. The only difference is that most features are performed locally.

Firmware upgrade
When an IP Phone 2004 starts up, the firmware is queried to detect whether the phone firmware matches the desired load. If the firmware does not match the desired load, then, depending on system configuration, the phone either automatically downloads the firmware or prompts the user to change firmware versions. The firmware downloads are accomplished through TFTP (Trivial File Transfer Protocol) servers. The number of phones that can simultaneously receive firmware downloads varies with the speed of the TFTP server. There are up to three files downloaded to the telephones during a complete upgrade: IPPhone2004tftp.cfg, IPPhone2004sysConfig.dat, .lng, and IPPhone2004SIP.img. When the telephones are initialized and a firmware download is required, the telephones access the TFTP server on the default port 69 from port 5060. The progress of the upgrade can be viewed on the display of the phone, which shows the percentage of the download completed as the download progresses. The upgrade process time can be summarized with the following formula: Total firmware download time = 42 seconds (first set) + (12 seconds * number of sets).
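The formula above can be expressed as a small helper; note that reading the set count as the number of sets beyond the first is our interpretation of the formula:

```python
def total_download_time_sec(num_sets: int) -> int:
    """Total firmware download time per the formula above:
    42 s for the first set plus 12 s for each additional set.
    (Treating the multiplier as sets beyond the first is an
    assumption about how the formula is meant to be read.)"""
    if num_sets < 1:
        raise ValueError("at least one set must be downloading")
    return 42 + 12 * (num_sets - 1)

print(total_download_time_sec(1))   # 42
print(total_download_time_sec(10))  # 150
```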

Note: TFTP server details used for the measurements:

• Software = PUMPKIN 2.5
• Hardware = 2.6 GHz Dell PC

It can take up to 5 minutes for each telephone to either register or reach the user login prompt when completely new firmware is downloaded. This time includes the power-up time, complete firmware download, file decompression, and writing data to telephone memory.

Registration time
The registration time was measured when connected to the IPCM in UNIStim mode. It is expected that in SIP mode the telephones will have a similar registration time, which can be summarized by the following formula: Total registration time = 45 seconds (first set) + (1 second * number of sets).
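As with the download-time formula, the registration-time estimate can be sketched as a helper (again interpreting the set count as sets beyond the first, which is our assumption):

```python
def total_registration_time_sec(num_sets: int) -> int:
    """Total registration time per the formula above: 45 s for the
    first set plus 1 s for each additional set (interpretation of
    the multiplier mirrors the download-time formula)."""
    if num_sets < 1:
        raise ValueError("at least one set must be registering")
    return 45 + 1 * (num_sets - 1)

print(total_registration_time_sec(20))  # 64
```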

IP Phone 2002
Function overview
The IP Phone 2002 is a Nortel IP-based telephone. The telephone functions as a User Agent that is managed by the IP Client Manager or Multimedia PC Client.

The IP Phone 2002 has the same features as the IP Phone 2004, and supports the same services and firmware upgrade process. However, the IP Phone 2002 cannot be upgraded to SIP like the IP Phone 2004.

The main difference between the IP Phone 2002 and the IP Phone 2004 is that the IP Phone 2002 has a smaller screen display. User interactions are limited to the one main line display and the four softkeys. In addition, the IP Phone 2002 can support only four user logins, compared to the six user logins supported by the IP Phone 2004.


Firmware upgrade
The IP Phone 2002 uses the same firmware upgrade function as the IP Phone 2004.

Redundancy
The IP Phone 2002 is configured with two IP addresses for access to the IP Client Manager or Multimedia PC Client. Additional information is available in the "Guidelines for reliability and survivability" (page 271) section.

IP Phone 1120E and IP Phone 1140E
The IP Phone 1120E and the IP Phone 1140E use SIP and can directly register to the Session Manager. For more information about these telephones, including provisioning, upgrades, and features, see:
• SIP Firmware for IP Phone 1120E User Guide (NN43112-101)
• SIP Firmware for IP Phone 1120E Quick Reference Card (NN43112-102)
• SIP Firmware for IP Phone 1120E Implementation Guide (NN43112-300)
• SIP Firmware for IP Phone 1140E User Guide (NN43113-101)
• SIP Firmware for IP Phone 1140E Quick Reference Card (NN43113-102)
• SIP Firmware for IP Phone 1140E Implementation Guide (NN43113-300)

Multimedia PC Client
Function overview
The Multimedia PC Client is a Windows application built with Java, C, and C++ multimedia technologies. It is an intelligent SIP endpoint. The Multimedia PC Client provides advanced IP telephony features, many of which are unavailable in a traditional PSTN. With the use of this SIP soft client, subscribers can access the services provided by the MCS system over an IP network using a personal computer. Information about the Multimedia PC Client service features is available in the Multimedia PC Client User Guide (NN42020-102).


The Multimedia PC Client supports point-to-point video and the following DivX™ and H.263 CODEC resolutions:
• 160x120
• 176x144
• 320x240
• 352x288

The Multimedia PC Client supports the following audio CODECs and packet times:
• G.729A (incoming and outgoing)
• G.729B (incoming)
• G.711 (mu-law) (incoming and outgoing)
• G.711 (a-law) (incoming and outgoing)
• Packet times for G.711 include 10, 20, 30, and 60 msec.
• Packet times for G.729 include 10, 20, 30, and 60 msec.

The Multimedia PC Client can work as a controller for the IP Phones. When the Use the 200x telephone for voice instead of PC check box is selected in the Multimedia PC Client preferences, the PC sends a SIP Message containing Switch Controller information for the IP Phone to the Session Manager. The Session Manager sends a message to all IP Client Managers that the user is registered. After receiving this message, the IP Client Manager tells the IP Phone to configure the S3 internal IP address and S3 port to the values contained in the message. The IP Client Manager tells the phone to switch to the S3 controller. The IP Phone sends a UNIStim message to the Multimedia PC Client to establish a connection.

Note: The Content-Type header of the SIP Message tells the receiver that this message contains Switch Controller information. The body of the message includes the PC Client’s IP address, MAC address, and port number, where the PC Client listens for UNIStim messages from the controlled IP Phones.

Three-way calling requires the Multimedia PC Client to mix two RTP audio streams such that each call party only hears the mixed audio from the other two parties.

Be aware that three-way calling cannot be supported if the Multimedia PC Client is used in conjunction with the IP Phones, because the voice generally goes directly to the IP Phones and therefore the audio cannot be mixed by the Multimedia PC Client. Selecting the check box PC Client routes voice to/from 200x (for private IP addresses) forces the RTP packets to go to the Multimedia PC Client first for the NAPT function. Because the Multimedia PC Client only translates the IP address and port information in the packets and sends them on, without decoding the packets, it cannot mix the audio, so three-way calling is still not possible.
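The distinction matters because mixing requires decoded audio. A generic sketch of mixing two decoded 16-bit PCM streams, shown only to illustrate why header rewriting alone cannot mix audio (this is not the Multimedia PC Client's actual implementation):

```python
def mix_pcm(stream_a, stream_b):
    """Mix two decoded 16-bit PCM sample streams by summing each
    sample pair and clamping to the signed 16-bit range. Mixing
    like this requires access to decoded samples; rewriting only
    IP/port headers (NAPT) never touches the audio payload."""
    mixed = []
    for a, b in zip(stream_a, stream_b):
        s = a + b
        mixed.append(max(-32768, min(32767, s)))
    return mixed

print(mix_pcm([1000, -32000], [500, -2000]))  # [1500, -32768]
```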

Collaboration is set up through normal SIP messaging like a regular call, except that the two endpoints are only negotiating UDP ports for establishing a collaboration pipe. This pipe uses a reliable UDP protocol to deliver packets between the two endpoints, so packets are resent if lost. Collaboration tools simply reference this pipe, and the pipe takes care of transmission and reception of packets and delivers them to the correct collaboration tool. The format of the packets varies depending upon the collaboration tool requirements. For example, the whiteboard application uses text-based messaging, while the file exchange application uses binary messages.

Platform
Information about hardware requirements for the Multimedia PC Client is contained in the Multimedia PC Client User Guide (NN42020-102).

Capacity
The Multimedia PC Client supports a maximum of 1,500 address book entries for each subscriber.

Disk space consideration
In addition to disk space requirements for the actual application, client logs and Instant Messaging history files must be taken into consideration when planning for disk space for the Multimedia PC Client. Client logs can be found on the PC under the Profile directory for each user. These client logs are not managed by the client software. The default limits for Instant Messaging history files are as follows:
• Maximum size of history file: 100 KB
• Maximum individual users: 100
• Maximum age: 7 days

The Multimedia PC Client controls the size, number of users and the age of the Instant Messaging history files. The system removes the oldest messages from the history file if the file size exceeds the maximum. The removal ends when the file size is within limit. The system removes the history files of the oldest user when the number of users exceeds the limit. The removal ends when the number of users is within limit. The system removes a history file if it has not been modified for over 7 days regardless of its size.
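The pruning rules above can be summarized in Python (the data model here is hypothetical, purely to make the policy explicit; the client's actual file handling is internal):

```python
import time

# Default limits from the text (configurable in Raider.ini)
MAX_FILE_BYTES = 100 * 1024  # maximum size of a history file
MAX_USERS = 100              # maximum individual users
MAX_AGE_DAYS = 7             # maximum age before removal

def prune_history(files):
    """Sketch of the pruning policy described above, over a list of
    dicts like {'user': str, 'size': int, 'mtime': float,
    'messages': [str, ...]} -- a hypothetical representation."""
    now = time.time()
    # A history file untouched for over 7 days is removed outright.
    files = [f for f in files if now - f["mtime"] <= MAX_AGE_DAYS * 86400]
    # Oldest messages are dropped until each file is within the size limit.
    for f in files:
        while f["size"] > MAX_FILE_BYTES and f["messages"]:
            oldest = f["messages"].pop(0)
            f["size"] -= len(oldest)
    # Files of the oldest users are removed until the user count is within limit.
    files.sort(key=lambda f: f["mtime"], reverse=True)
    return files[:MAX_USERS]
```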


Each Multimedia PC Client user has two history files. Each file is limited to 100 KB. The maximum amount of disk space required for Instant Messaging history files is about 20 000 KB, or under 20 MB. The limits for Instant Messaging history files can be configured by resetting the following parameters in the Raider.ini file located in \Program Files\Nortel Networks SIP Client\Data:
• imSizeK=
• imUsers=
• imAgeDays=
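For example, the documented default limits would correspond to entries like the following in Raider.ini (the values shown are the defaults stated above; the surrounding section structure of the file is not documented here):

```ini
imSizeK=100
imUsers=100
imAgeDays=7
```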

The age audits are performed at startup and once every 7 days thereafter. The audit period cannot be configured.

Multimedia Web Client
Function overview
The Multimedia Web Client is a user agent based on the Multimedia PC Client software. This browser-based user agent turns supported Internet browsers into IP communications access clients. The Multimedia Web Client is a thin client that uses a Web browser on a PC platform. Because this multimedia client is browser based, it is easy to add and deploy new services as they become available. This Web-based user agent leverages the multimedia capabilities of the PC and is installed when the user accesses the Web site that hosts these capabilities. Information about the Multimedia Web Client service features is available in the MCS 5100 Overview (NN42020-143) document. The Multimedia Web Client supports point-to-point video and the following DivX™ CODEC resolutions:
• 160x120
• 176x144
• 320x240
• 352x288

The Multimedia Web Client supports the following CODECs:
• G.711 (PCMU-Mu Law and PCMA-A Law)
• G.729 (A and B)

Platform
Information about hardware requirements for the Multimedia Web Client is available in the Multimedia PC Client User Guide (NN42020-102).


Capacity
The Multimedia Web Client supports a maximum of 1,500 address book entries for each subscriber.

Deployment considerations

Deployment of an MCS system in various IP environments is inherently complex due to the possible involvement of multiple owners and administration entities. The enterprise IP Local Area Network (LAN) and Wide Area Network (WAN), SIP endpoints, and various MCS components can be owned, managed, and controlled by several different administrative entities within the same enterprise. For example, the IP network can be owned and managed by the data network engineering group, the SIP components can be owned and managed by the telecommunication group, and the SIP endpoints can be owned by various business divisions. Issues such as IP addressing, routing, bandwidth, QoS, security, and where and how the MCS components and SIP endpoints should be deployed in order to achieve the best overall business benefits are different for different network architectures. The following topics are discussed in this section:
• "MCS logical hierarchy" (page 87)
• "Location infrastructure" (page 90)
• "MCS virtual network for Border Control Point" (page 97)
• "Border Control Point deployment considerations" (page 106)
• "MCS logical hierarchy design" (page 107)
• "IP architecture components" (page 124)
• "IP reference networks" (page 140)
• "MCS 5100 deployment scenario" (page 141)

MCS logical hierarchy
In a large enterprise IP network, there can be tens, hundreds, or in some rare cases thousands of sites interconnected by IP networks. For example, a large enterprise with a world-wide presence can have several regional offices connecting many large branch offices in different countries, which in turn connect tens or hundreds of sites within the country to form intranets and extranets, and to connect them to the Internet. A large bank is another example; it typically has several large regional offices (hubs) that connect tens of small branches in the region to its regional resources.

When deploying an MCS system in a large network, it is apparent that not all functional components are required for all sites. The MCS logical hierarchy offers a consistent top-down view for the MCS system deployment. Levels in the logical hierarchy represent different levels of allocation (or concentration) for media, signaling, and management resources.

Figure 7 MCS 5100 logical hierarchy

The MCS logical hierarchical model offers network planners a guide to decide which functional components should be positioned at which locations at different levels of the hierarchy. The goal is to ensure that the media, signaling, and management functions are well positioned for efficient communications over the lower-layer IP networks. The hierarchy helps network planners optimize the allocation of precious MCS resources for the purposes of redundancy, reliability, and quality of communications. As a result, the MCS logical hierarchy also provides a consistent methodology for MCS system deployment in a network.

From an MCS system point of view, the IP network is flat because the functional components can communicate with each other directly. Although there are five levels in the MCS hierarchy, the levels can be combined flexibly to suit different types of MCS deployments. The MCS logical hierarchy concept therefore differs from the PSTN class-based switching concept, which is designed primarily for traffic aggregation.

Level 5: Network Control Center (NCC)
The NCC hosts the System Manager, the Accounting Manager, and the Database Manager. The System Manager manages all servers and services by providing tools for provisioning and deploying software loads and configuration data to the application service components. The Accounting Manager provides storage and reliable transport of accounting reports from various service components to an existing Operation Support System (OSS) and billing system. The Database Manager stores subscriber, provisioning, and configuration data associated with the system.

The location of the NCC is determined mainly by enterprise administrative considerations such as ease of access, headquarters location, and cost.

Level 4: Network Signaling Center (NSC)
The NSC hosts the Session Managers, IP Client Managers, and Provisioning Managers. Connectivity to the Database Manager is critical; it is good practice to colocate the Session Managers and Database Managers. Consider the following factors when choosing sites to host the NSC:
• High transport network availability
• Low transport network latency
• Site diversity considerations
• Physical site security

Level 3: Media Concentration Center (MCC)
The MCC hosts the Media Application Server and Border Control Point. It can also host some Level-2 functions such as the Media Gateway 3200. Level-3 sites should be positioned close to or colocated with major Level-2 PoPs to shorten the media paths.

Level-3 sites carry significant volumes of media traffic. The chosen locations must match the IP transport topology so that sufficient bandwidth and low latency are available for media traffic. A large enterprise network covering a nation such as the United States can require tens of MCCs.

Level 2: Local Point-of-Presence (PoP)
The PoPs host the Media Gateway 3200 and SIP FXO Gateways. The Media Gateway 3200 provides a Primary Rate Interface (PRI) from the SIP-based IP network to a circuit-based switching network, such as the PSTN or a PBX. The Media Gateway 3200 can also provide a Channel Associated Signaling (CAS) interface to the CallPilot voice mail system when used in conjunction with a terminal server for SMDI signaling. The SIP FXO Gateway provides an interface between the SIP-based IP network and the analog PBX or central office.

To ensure good voice quality, the Level-2 PoPs should be close to the large SIP client populations. Like Level-3 sites, the Level-2 locations should closely match the underlying IP network topology, offer ample bandwidth and very low latency for media traffic, and offer available PSTN and enterprise PRI access. A large enterprise network can require tens of PoPs.

Level 1: MCS Client Site
The MCS client sites host MCS clients, including IP Phones, Multimedia PC Clients, and Multimedia Web Clients, located in various enterprise sites. Depending on the site, they can also host existing PBX systems, SIP FXS gateways, and terminals. These sites should be close to the existing Level-3 and Level-4 locations. A large enterprise network can serve hundreds of Level-1 sites.

Location infrastructure
The MCS system uses SIP domains as a control mechanism for subscribers, services, devices, routing, and translations. To provide the system with a view into the geography of the service areas, MCS uses the Location Infrastructure concept.

Locations are defined at the parent level of a domain and cannot be defined at the subdomain level. Every location has a name that must be unique within its domain. A location name is specific to the domain, so the same location name in different domains does not necessarily represent the same geographic location. All locations provisioned in an MCS system are fully qualified by their domain and location name. Locations are organized into a tree-like structure. The location infrastructure enables the specification of an MCS virtual network topology that describes the geography of the service area. System administrators can build a set of domain-specific location trees to create a virtual representation
of the geography serviced by the domain. Whenever a root SIP domain is created, a default location named Other is automatically created. See Figure 8 "Location hierarchy example" (page 91) for an illustration.

Figure 8 Location hierarchy example

As shown in Figure 8 "Location hierarchy example" (page 91), three location trees are created in the company.com domain to form a hierarchical representation of the three service areas supported in the domain. The location hierarchy is organized into a tree structure with branches and leaves. Branches are locations that contain other locations; leaves are locations that do not contain other locations. For example, under the Santa Clara location there are three branches: Building 1 (Bldg. 1), Building 2 (Bldg. 2), and Building 3 (Bldg. 3). Branch Building 1 contains two leaves, First Floor (1st Floor) and Lab. By provisioning a location hierarchy for a domain, a virtual representation of the geographic area serviced by the domain can be created.
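
A domain's location hierarchy can be pictured as a simple tree of named nodes. The following sketch is illustrative only (it is not MCS code); it models the company.com tree from Figure 8, with the hypothetical `Location` class and its methods invented for the example:

```python
# Illustrative sketch (not MCS code): a domain's location hierarchy
# modeled as a tree of named nodes, following Figure 8.

class Location:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []  # branches contain other locations

    def is_leaf(self):
        # Leaves are locations that do not contain other locations.
        return not self.children

    def find(self, name):
        # Depth-first search for a location by name within this subtree.
        if self.name == name:
            return self
        for child in self.children:
            found = child.find(name)
            if found:
                return found
        return None

# A default location named Other is created with every root domain.
company_com = Location("company.com", [
    Location("Other"),
    Location("Santa Clara", [
        Location("Bldg. 1", [Location("1st Floor"), Location("Lab")]),
        Location("Bldg. 2"),
        Location("Bldg. 3"),
    ]),
])

print(company_com.find("Lab").is_leaf())      # True  (a leaf)
print(company_com.find("Bldg. 1").is_leaf())  # False (a branch)
```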

In addition to regular domains and their corresponding location hierarchies, there is also a special system domain, which is automatically created in every system. The system domain is used to manage and control system resources; it enables the system administrator to associate system resources, such as gateways, with locations.

Foreign domains are also included in the location infrastructure, but in a more limited fashion. Foreign domains only have one location: Other.

Figure 9 System location example

Figure 9 "System location example" (page 92) shows the location infrastructure as defined for an entire MCS system. The figure shows the location trees for System domain, two regular domains (partner and enterprise) and a foreign domain. Note that there is an Other location created for each domain, including the foreign domain. The diagram shows that a virtual representation of the geographic area serviced by a domain can be represented by the location trees for the domain. Furthermore, the geographic area serviced by an entire MCS system is represented by the location trees of all domains provisioned in the system.

MCS virtual network topology
The location infrastructure provides a structure for an MCS Virtual Network Topology. The topology provides a means of mapping system resources to geography. Through the association of system resources with locations, the MCS system gains a spatial view of how the system resources are concentrated and distributed across the service area. The location infrastructure is only useful when all MCS entities have a provisioned location association, which enables system resources and services to be delivered based on the location proximity of the service recipients.

The following MCS entities can be provisioned with a location association through the Provisioning Client:
• Domain: The Location Services folder is added at the domain level of the Provisioning Client. The folder is where the location hierarchy can be created for the domain. A default location named Other is automatically created when a domain, including the system domain, is initially created. Although the Other location exists in every domain, each instance is unique and meaningful only within its own domain. The domain's default location can be changed to a desired location after the creation of the domain's location trees, through the new Domain Locations field that appears on the domain provisioning Details page. The tree structure defined at the domain level also makes it easier for subscribers to select a location when registering on an MCS core client. If no default location is assigned to a subscriber, the domain's default location is used for the subscriber. The default location is useful for call processing to allocate the proper gateway or pooled resources when no specific location information is available to guide call processing. The default location cannot be deleted until all references to it are removed. Deleting the default location of a domain causes the domain's associated location to revert to the default Other location.
• Subdomain: The Domain Locations field on a subdomain's Details page contains a list of locations defined for the parent domain. When a new subdomain is created, it is associated with the Other location as the default location. A subdomain's default location can be updated to any location defined in the Location Hierarchy of the parent domain by selecting a location from the Domain Locations list.
• Subscriber: The Location field on the Add User page of the Provisioning Client provides the subscriber's default location. The initial default location for a new subscriber is the domain or subdomain's default location. The Location field contains a list of all locations from the location hierarchy defined for the subscriber's domain; any location can be selected from the list for association with a subscriber. The domain's default location provides location support for third-party clients and older versions of the MCS core clients.
• IP Phones: A Location field on the Add Device page of the Provisioning Client provides the default location for the phones. The initial default location for a new IP Phone is the Other location. An IP Phone's default location can be updated to any location defined in the Location Hierarchy for the assigned domain. The default location appears first in the location list presented to the first subscriber who logs into each IP Phone.

• Foreign domain: No access is provided to manipulate or control the location hierarchy of foreign domains. However, foreign domains are integrated into the Location Infrastructure internally through the automatic creation of the default Other location for each foreign domain. The default Other location cannot be updated because no location tree can be created for a foreign domain; the Other location is the extent of the Location Hierarchy that can be constructed for foreign domains.
• Pooled entity: The Media Application Server (MAS) uses pooled entities, which are servers used for Ad Hoc and Meet Me Audio Conferencing and IM Chat. The Location field on the Pooled Entity page contains the locations defined in the system domain. When a new pooled entity is created for a domain, the system domain's default Other location is the initial value for the Location field. The value can be updated to any location defined in the System location tree.
• Service node: Like pooled entities, gateways such as the Media Gateway 3200 are system resources. Gateways can only be associated with locations defined within the system domain. A new Location field is added to the Gateway Provisioning page. When a new gateway is created, the System domain's default Other location is the initial value for the Location field. The value can be updated to any location defined in the System location tree.

In addition to the static (datafill) location associations described above, the MCS core clients can prompt a subscriber to dynamically select their current location from a list of locations during the initial SIP registration. The subscriber can select the location from the list that best matches the physical location. Note that the selected location identifies the location of the device, not the location of the subscriber; the subscriber can log into several devices simultaneously from different physical locations. Because an MCS entity (such as a subscriber, client, or gateway) can also acquire its location information from its provisioned default location, from the domain or subdomain default location, or from the default Other location, there must be a predictable way of determining the location of an entity.

For the IP Phones, the subscriber selected location is stored in the Database Manager in relation to the device. For the Multimedia PC Client and Multimedia Web Client, the subscriber selected location is stored on the Windows PC where the client is running.

The subscriber’s location is selected and updated through the registration process. When the subscriber registers from a Multimedia PC Client, the Service Package Subscribe Event provides the Multimedia PC Client with the IP address of the server for the Provisioning Manager. The Multimedia PC Client uses the IP address to initiate a SOAP request to obtain a location list from the Provisioning Manager. The subscriber is prompted to select a
location from the list. The list with the database time stamp is stored locally. The Multimedia PC Client stores the last location selected. The selected location is always shown first on the list presented to the subscriber. When the subscriber reregisters, the Multimedia PC Client updates the location file with any changes made between the current time and the last database time stamp in the local location cache. If the location is valid, the subscriber is not prompted for location upon reregistration. If the location is no longer valid, the subscriber is prompted to select a new location.

The IP Client Manager obtains the location information for the clients, including the IP Phones and the Multimedia Web Client. The IP Client Manager retrieves the location list directly from the Database Manager when the clients start up. The client manager is updated with the latest change in location through a synchronization process from the Provisioning Manager. If the location is no longer valid, the subscriber is prompted to select a new location.

The MCS core clients send the subscriber selected location ID in the x-nt-location header of each INVITE and REGISTER message. The location ID sent in the x-nt-location header is stored in association with the subscriber’s registered destination in the Database Manager. Calls originated by other entities such as gateways or Media Application Servers have their own location ID derived from the provisioned locations using the Location Precedence Rules.
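
As a purely illustrative sketch, the following shows where the x-nt-location header could appear in an INVITE request. All header values other than the x-nt-location header name itself are generic placeholders, and the actual format of the location ID is not specified in this document:

```
INVITE sip:bob@company.com SIP/2.0
Via: SIP/2.0/UDP 10.1.2.3:5060;branch=z9hG4bK776asdhds
Max-Forwards: 70
From: <sip:alice@company.com>;tag=1928301774
To: <sip:bob@company.com>
Call-ID: a84b4c76e66710@10.1.2.3
CSeq: 314159 INVITE
x-nt-location: <subscriber-selected-location-id>
Content-Type: application/sdp
```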

Location trees of different domains are independent of each other. When designing a location tree for the system domain, one important consideration is how current and future MCS system resources are deployed so that they can support services in close proximity to the subscribers. Therefore, when constructing location trees for nonsystem domains, it is important to coordinate with the MCS system administrator to ensure that the location definitions are consistent with what has been defined in the system domain and that subscribers are assigned to the intended locations for accessing the provided services.

Location precedence rules
The location precedence rules govern the use of location information in the processing of calls to provide predictable use of the location information that is available across the MCS system. The location precedence rules defined in Table 6 "Location precedence rules" (page 96) specify the most suitable location of an entity.


Table 6 Location precedence rules

                                 MCS core   nonMCS    Gateways &    Foreign
                                 clients    clients   Media         domains
                                                      Application
                                                      Servers
 Subscriber selected location    1st        NA        NA            NA
 Provisioned subscriber or       2nd        1st       1st           NA
 gateway location
 Provisioned domain location     3rd        2nd       2nd           NA
 The default Other location      4th        3rd       3rd           1st
 for domain

 NA = Not Applicable

As shown in the above table, the Location Precedence Rules are applied as follows:
• MCS core clients: The MCS core clients include the IP Phones, the Multimedia PC Client, the Multimedia Client Set, and the Multimedia Web Client. The subscriber-selected location takes precedence over all other locations. The following list shows the order of precedence for MCS core clients:
— subscriber-selected location
— provisioned-subscriber-default location
— provisioned-domain-default location
— the Other location

• nonMCS clients: nonMCS clients cannot dynamically select a location. The statically provisioned subscriber location takes precedence over all other locations. The following list shows the order of precedence for nonMCS clients:
— provisioned-subscriber-default location
— provisioned-domain-default location
— the Other location

• Gateways: The provisioned gateway location takes precedence over all other locations. The following list shows the order of precedence for gateways:
— provisioned-gateway-default location
— provisioned-domain-default location
— the Other location

• Media Application Server (MAS): The provisioned location is used for terminations only; originations are based on the originating domain's default location information. The following list shows the order of precedence for the Media Application Server:
— provisioned-pooled-entity-default location
— provisioned-domain-default location
— the Other location

¥ Foreign domains: Foreign domains are limited to their default Other location.
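
The precedence rules above reduce to an ordered lookup: each entity type tries its candidate location sources in order and uses the first one that is actually provisioned. The following sketch is illustrative pseudologic, not MCS code; the entity-type keys and source names are invented for the example:

```python
# Illustrative sketch (not MCS code) of the Location Precedence Rules
# in Table 6. Each entity type tries its location sources in order and
# uses the first one that is provisioned (non-None).

PRECEDENCE = {
    # entity type       -> ordered location sources
    "mcs_core_client":  ["subscriber_selected", "provisioned_entity",
                         "provisioned_domain", "other"],
    "non_mcs_client":   ["provisioned_entity", "provisioned_domain", "other"],
    "gateway_or_mas":   ["provisioned_entity", "provisioned_domain", "other"],
    "foreign_domain":   ["other"],
}

def resolve_location(entity_type, available):
    """Return the first available location for the entity type.

    `available` maps a location source to a location name, or None
    when nothing is provisioned for that source.
    """
    for source in PRECEDENCE[entity_type]:
        location = available.get(source)
        if location is not None:
            return location
    return None

# A core client with no subscriber-selected location falls back to
# its provisioned subscriber default.
print(resolve_location("mcs_core_client", {
    "subscriber_selected": None,
    "provisioned_entity": "Santa Clara",
    "provisioned_domain": "Richardson",
    "other": "Other",
}))  # Santa Clara
```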

MCS virtual network for Border Control Point
The location-based deployment of the Border Control Point relies on the accurate provisioning of three pieces of information: the Location Infrastructure, the Routability Group, and the Border Control Point Group. Together, the Location Infrastructure, the Routability Groups, and the Border Control Point Groups define the MCS Virtual Network. It is good practice to provision the Location Infrastructure first, the Routability Groups second, and the Border Control Point Groups last.

As discussed in the "Location infrastructure" (page 90) section, a set of domain-based location trees defines the MCS Location Infrastructure. Through the provisioned location trees, devices and subscribers can be associated with location information by a system or domain administrator or by subscribers themselves during login.

Routability Groups define the direct IP routing visibility between any pair of endpoints that belong to the same group. If one or both endpoints are behind a firewall or NAPT device and there is no direct routing visibility between them, they should not belong to the same Routability Group. Calls within the same Routability Group do not require the assistance of a Border Control Point. Calls that extend beyond the group will cause a Border Control Point to be inserted between the endpoints.

Based on the established location infrastructure, Routability Groups can be defined to include locations that can directly route packets between each other. Routability Groups are datafilled using the Add Routability Group and Add Group commands in the new Media Portal menu of the Provisioning Client. Using these commands, system and domain administrators can

capture routing information usable for MCS. A list of all Routability Groups defined in the system can be viewed and modified using the List of Routability Groups command within the new Media Portal menu.

An example of Routability Groups is shown in Figure 10 "Routability Group example" (page 98). In this example, there are five Routability Groups defined that overlay the location tree. Locations enclosed in the same color area belong to the same Routability Group. Endpoints at Richardson and Galatyn C belong to the same Routability Group. Therefore, no Border Control Point is required for calls between these two locations.

If the Routability Group is provisioned incorrectly, a Border Control Point can be wastefully inserted between endpoints that do not require one. On the other hand, calls between endpoints in different color areas such as Richardson and Santa Clara require the use of a Border Control Point to complete the calls. If the Border Control Point is not inserted due to incorrect provisioning of the Routability Groups, there is only one-way or no media path between the endpoints of these two locations.

Figure 10 Routability Group example

Depending on the IP network topology, a single domain can include multiple Routability Groups. It is also possible that a Routability Group includes locations from multiple domains, including foreign domains. See Figure 10 "Routability Group example" (page 98) for an illustration. In the figure,
there are five Routability Groups under the company.com domain. Both the Raleigh (yellow) and Santa Clara (light blue) Routability Groups contain a foreign domain. The Data Center (purple) Routability Group includes the domain's default Other location, which means that all endpoints associated with the Other location belong to the same purple Routability Group. These endpoints do not require the insertion of a Border Control Point for calls between themselves. If the domain's default Other location is not included in any Routability Group, all the endpoints associated with this location require insertion of a Border Control Point for calls. For example, all endpoints associated with the Other location of the partner.com domain in Figure 11 "Border Control Point group example" (page 100) require the insertion of a Border Control Point for calls originated from or terminated to them.

If there is no Routability Group defined for an MCS system and the new location-based Border Control Point Insertion rules have been enabled, the Session Manager inserts a Border Control Point in all calls by default. If there is only one Routability Group containing only the Other location, and all endpoints are associated with the Other location, the Session Manager does not insert a Border Control Point by default. The accuracy of the routability grouping greatly impacts the success of establishing calls or media sessions, both with and without Border Control Point resources. Detailed IP network information is required to define the Routability Groups by determining where direct IP routing paths exist between the endpoints participating in the calls. It is therefore essential that the MCS administrators work closely with the IP network administrators to ensure that Routability Groups and associated locations are defined correctly, so that direct routing paths exist within the same Routability Group.

The Routability Group information is stored in the Database Manager and is cached on the Session Manager for rapid access during runtime call processing. Using the Routability Groups, the Border Control Point insertion rules become a simple test of group membership of the endpoints.
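
With the Routability Groups cached as sets of locations, the insertion decision described above becomes a membership test on the two endpoints' locations. The following sketch is illustrative only, not MCS code; the group names and member locations loosely follow Figure 10 and are otherwise hypothetical:

```python
# Illustrative sketch (not MCS code): the Border Control Point insertion
# rule as a simple test of Routability Group membership.

ROUTABILITY_GROUPS = {
    # hypothetical groups loosely following Figure 10
    "purple": {"Data Center", "Other"},
    "yellow": {"Richardson", "Galatyn C", "Raleigh"},
    "blue":   {"Santa Clara"},
}

def group_of(location):
    # Return the Routability Group containing the location, if any.
    for name, members in ROUTABILITY_GROUPS.items():
        if location in members:
            return name
    return None  # ungrouped endpoints always need a Border Control Point

def needs_border_control_point(orig_location, term_location):
    # A Border Control Point is required unless both endpoints belong
    # to the same Routability Group.
    orig_group = group_of(orig_location)
    term_group = group_of(term_location)
    return orig_group is None or orig_group != term_group

print(needs_border_control_point("Richardson", "Galatyn C"))   # False
print(needs_border_control_point("Richardson", "Santa Clara")) # True
```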

Border Control Point selection process
The creation of Border Control Point Groups enables the administrator to provide the MCS system with the location information of the colocated Border Control Point resources. The Add Media Portal Group command under the system-level Media Portal Groups menu enables the system administrator to assign RTP resources, such as groups of colocated Border Control Points, to their associated locations. After the initial creation, the system administrator can use the List Media Portal Group command under the Media Portal Groups menu to list, modify, and delete the created Media Portal Groups. By defining the Media Portal Groups, the administrator provides the Border Control Point presence in an area to support calls originating from or terminating to that area.

Figure 11 Border Control Point group example

Like Routability Groups, the Border Control Point Groups are provisioned on top of the established Location Infrastructure. The system administrator must ensure that Border Control Point Groups are associated with the appropriate locations and that all locations associated with MCS clients are included in the defined Routability Groups and Border Control Point Groups. One location can only be assigned to a single Border Control Point Group. A single location tree can include multiple Border Control Point Groups. A Border Control Point Group can include locations from multiple domains, including foreign domains. As shown in Figure 11 "Border Control Point group example" (page 100), the red Border Control Point Group includes locations from both the xyz.com and partner.com domains and from foreign.com, which is a foreign domain. Routability Groups and Border Control Point Groups are created independently and overlaid on the same Location Infrastructure. Therefore, one Border Control Point Group can span more than one Routability Group, either entirely or in part. For example, in Figure 11 "Border Control Point group example" (page 100), the dashed green Border Control Point Group spans the entire light blue, purple, and pink Routability Groups, while the dashed red Border Control Point Group covers the entire green Routability Group but only a portion of the yellow Routability Group. Conversely, one Routability Group can span more than one Border Control Point Group. As shown in
Figure 12 "MCS virtual network topology overlays" (page 101), the yellow Routability Group spans the dashed purple and dashed gray Border Control Point Groups.

Figure 12 MCS virtual network topology overlays

The flexibility of Border Control Point Groups to span Routability Groups enables administrators to create Border Control Point Groups that support the demand of call volumes from the same or different Routability Groups when required. As shown in the example in Figure 12 "MCS virtual network topology overlays" (page 101), a dashed gray Border Control Point Group is created for the Bldg. 1 location to support the building's high volume of calls going outside the yellow Routability Group. A dashed white Border Control Point Group is created to support the call volumes of both the pink and green Routability Groups.

The creation of the Location Infrastructure, Routability Groups, and Border Control Point Groups defines the MCS virtual network topology of an MCS service network. See Figure 13 "Example MCS service network" (page 103) for an example MCS service network. The Border Control Point Selection Rules leverage the local Border Control Point Groups of the virtual network topology. For processing efficiency, the provisioned MCS virtual network topology information is cached in memory during system initialization. If there are no Border Control Point Groups defined, the Border Control Point Selection Rules use the round-robin method to select an available Border Control Point from the default Border Control Point resource pool. The default
resource pool is populated when Border Control Points advertise their availability to process calls by sending Media Portal Control Protocol (MPCP) Restart In Progress (RSIP) messages to the Session Manager at startup and registering with the Session Manager.

The following Border Control Point selection process describes how the Session Manager selects a Border Control Point resource after it determines that a Border Control Point is required:

1. The Session Manager searches the originator's location tree to find a Border Control Point close to the originator.
• If the originator's location is known, the Session Manager checks whether the originator's location has an associated Border Control Point Group.
— If a Border Control Point Group is found at the originator's location, the Session Manager searches the Border Control Point Group for an available Border Control Point.
— If a Border Control Point is available in this Border Control Point Group, the Session Manager processes the call using an available Border Control Point.
— If no Border Control Point is available in the Border Control Point Group, the Session Manager continues to search the originator's location tree upward until an available Border Control Point is found.
— If no Border Control Point Group is found in the originator's location tree, the Session Manager begins searching the terminator's location tree.

• If the originator's location is unknown, the Location Precedence Rules are applied to determine a suitable location for the originator. If a Media Portal Group cannot be determined for the originator, the Session Manager begins searching the terminator's location tree.

2. The Session Manager searches the terminator's location tree to find a Border Control Point close to the terminator.
• If the terminator's location is known, the Session Manager checks whether the terminator's location has an associated Border Control Point Group.
— If a Border Control Point Group is found at the terminator's location, the Session Manager searches the Border Control Point Group for an available Border Control Point.
— If an available Border Control Point is found at this location, the Session Manager processes the call using the available Border Control Point.

Nortel Multimedia Communication Server 5100 MCS Planning and Engineering NN42020-200 04.02 Standard Release 4.0 8 October 2009 Copyright © 2007-2009, Nortel Networks


— If no Border Control Point is available at this location, the Session Manager continues to search the terminator’s location tree upward until a usable Border Control Point is found.
— If no Border Control Point Group is found in the terminator’s location tree, the Session Manager begins searching for any available resources in the default Border Control Point pool.

• If the terminator’s location is unknown, the Location Precedence Rules are applied to determine a suitable location for the terminator. If a suitable Media Portal Group cannot be determined for the terminator, the Session Manager searches for any available resources in the default Border Control Point pool.

3. The Session Manager selects any available Border Control Point in the system.
• If an available Border Control Point is found in the default resource pool, the Session Manager processes the call using that Border Control Point.
• If no Border Control Point is available in the default resource pool, a SIP 503 Media Resource Unavailable error message is generated.
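The three-step selection process amounts to an upward walk over each endpoint's location tree, followed by a fallback to the default pool. The following Python sketch illustrates that control flow only; the data structures, names, and error handling are hypothetical and do not represent actual MCS interfaces:

```python
def find_available(group, available):
    """Return an available Border Control Point from a group, if any."""
    return next((bcp for bcp in group if bcp in available), None)

def select_bcp(location_tree, orig_loc, term_loc, bcp_groups, default_pool, available):
    """Select a Border Control Point per the three-step process above.

    location_tree maps each location to its parent (None at the root);
    bcp_groups maps a location to its Border Control Point Group, if any.
    """
    # Steps 1 and 2: search the originator's tree, then the terminator's.
    for start in (orig_loc, term_loc):
        loc = start
        while loc is not None:
            group = bcp_groups.get(loc)
            if group:
                bcp = find_available(group, available)
                if bcp:
                    return bcp            # closest available Border Control Point
            loc = location_tree.get(loc)  # continue upward in the tree
    # Step 3: fall back to the default Border Control Point pool.
    bcp = find_available(default_pool, available)
    if bcp:
        return bcp
    raise RuntimeError("SIP 503: no Border Control Point available")
```

In this sketch, exhausting the originator's tree always falls through to the terminator's tree, which matches the rule that an unlocatable group (or a tree with no available resource) moves the search onward rather than failing the call.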

Figure 13 Example MCS service network


See Figure 13 "Example MCS service network" (page 103) for an illustration of the following scenarios, which show how the location-based Border Control Point Selection Rules work in conjunction with the Border Control Point Selection Process.

Example 1: No Border Control Point Required
Endpoints A, B, and C belong to the same yellow Routability Group, so calls between these endpoints require no Border Control Point to be inserted. Similarly, no Border Control Point is required for calls between F and G because they belong to the same blue Routability Group.

Example 2: Insertion of a Border Control Point close to the originator

1. When endpoint A or B originates a call to any endpoint outside of the yellow Routability Group, such as endpoint D or E, the Session Manager attempts to select a resource from Media Portal Group 1 because it is the Media Portal Group closest to endpoints A and B.
• If Media Portal Group 1 does not have an available resource, Media Portal Group 2 is the next to be considered because it is the next closest resource group on the originator’s location tree.
• If Media Portal Group 2 does not have an available resource, Media Portal Group 7 is the next to be considered.
• If Media Portal Group 7 does not have an available resource, the Session Manager selects a Border Control Point close to the terminator, whose location is known.

2. When endpoint C calls any endpoint outside of the yellow Routability Group, such as endpoint D or E, the Session Manager first attempts to select a resource from Media Portal Group 2 because it is the Media Portal Group closest to endpoint C.
• If Media Portal Group 2 does not have available resources, Media Portal Group 7 is the next to be considered.
• If Media Portal Group 7 does not have available resources, the Session Manager selects a Border Control Point close to the terminator, whose location is known.

3. When endpoint D calls any endpoint outside of the pink Routability Group, such as endpoint A, B, C, E, F, G, or H, the Session Manager attempts to select a resource from Media Portal Group 3 because it is the closest Media Portal Group to endpoint D.
• If Media Portal Group 3 does not have available resources, Media Portal Group 4 is the next to be considered because it is the next resource group found in endpoint D’s location tree.


• If Media Portal Group 4 does not have available resources, Media Portal Group 7 is the next to be considered.
• If Media Portal Group 7 does not have available resources, the Session Manager selects a Border Control Point close to the terminator, whose location is known.

4. When endpoint E calls any endpoint outside of the pink Routability Group, such as endpoint A, B, C, D, F, G, or H, the Session Manager attempts to select a resource from Media Portal Group 4 because it is the closest Media Portal Group to endpoint E.
• If Media Portal Group 4 does not have available resources, Media Portal Group 7 is the next to be considered because it is on endpoint E’s location tree.
• If Media Portal Group 7 does not have available resources, the Session Manager selects a Border Control Point close to the terminator, whose location is known.

5. When endpoint F calls any endpoint outside of the blue Routability Group, such as endpoint A, B, C, D, E, or H, the Session Manager attempts to select a resource from Media Portal Group 6 because it is the closest Media Portal Group to endpoint F.
• If Media Portal Group 6 does not have available resources, Media Portal Group 7 is the next to be considered because it is on endpoint F’s location tree.
• If Media Portal Group 7 does not have available resources, the Session Manager selects a Border Control Point close to the terminator, whose location is known.

6. When endpoint G calls any endpoint outside of the blue Routability Group, such as endpoint A, B, C, D, E, or H, the Session Manager attempts to select a resource from Media Portal Group 5 because it is the closest Media Portal Group to endpoint G.
• If Media Portal Group 5 does not have available resources, Media Portal Group 6 is the next to be considered because it is on endpoint G’s location tree.
• If Media Portal Group 6 does not have available resources, Media Portal Group 7 is the next to be considered.
• If Media Portal Group 7 does not have available resources, the Session Manager selects a Border Control Point close to the terminator, whose location is known.


Example 3: Insertion of a Border Control Point close to the terminator

1. When a Media Portal Group cannot be found in the call originator’s location tree, the Session Manager attempts to select a Border Control Point that is closest to the terminator. For example, when endpoint C terminates calls from endpoint H, the Session Manager selects Media Portal Group 2 because H’s location does not have an associated Media Portal Group.
• If Media Portal Group 2 does not have available resources, Media Portal Group 7 is the next to be considered.
• If Media Portal Group 7 does not have available resources, the Session Manager selects any available Border Control Point in the system.

2. When the Session Manager cannot find an available Border Control Point on the originator’s location tree, it attempts to select an available Border Control Point from the terminator’s location tree. For example, when endpoint C calls endpoint E and no available resource is found on endpoint C’s location tree, the Session Manager selects an available resource from Media Portal Group 4.
• If Media Portal Group 4 does not have available resources, Media Portal Group 7 is the next to be considered.
• If Media Portal Group 7 does not have available resources, the Session Manager selects any available Border Control Point in the system.

The localized Border Control Point deployment model requires the system administrator to properly define the MCS virtual network topology and to engineer Border Control Point capacity for efficient MCS operations. The localized deployment model assumes detailed knowledge of the IP network topology to ensure accurate provisioning of the Routability Groups and Border Control Point Groups, and their association with the Location Infrastructure. Without such topological knowledge of the IP network and sufficient provisioned capacity, new call sessions are likely to be established improperly, resulting in one-way media, or in two-way media carried through a nonoptimal Border Control Point.

Border Control Point deployment considerations
This section describes the requirements for deploying the Enhanced Border Control Point Insertion and Selection functions.


MCS logical hierarchy design
The MCS system supports a localized presence of media gateways and servers to eliminate the need for backhauling media traffic to a central location. Local media presence reduces the bandwidth demands on the IP core networks because most media traffic can be handled locally. However, realizing the full advantages of the MCS system requires careful planning of MCS domains, subdomains, location infrastructure, Routability Groups, Border Control Point Groups, and Media Application Server services so that traffic demands can be met by the resources deployed at the local locations.

Careful MCS network logical hierarchy design is critical for MCS system deployment. Poor design results in nonoptimal use of network resources and less than optimal quality for the services offered. When planning the MCS logical network hierarchy, network planners need to address the considerations identified in each planning phase.

Note: In the following discussion, the terms site and location are used interchangeably.

Phase 1: MCS domain trees planning
The MCS system uses domain and subdomain structures to manage subscribers and control the services available to them. The scheme for domain and subdomain naming and the criteria for setting up domains and subdomains are the two major considerations that must be addressed in the MCS domain trees planning phase.

There are two types of domains used in MCS: the system domain and the subscriber (nonsystem) domains. The system domain is created automatically when an MCS system is first initialized and is used by the system administrator to manage and control system resources. For more information, see "Location infrastructure" (page 90). The subscriber domains are created by the MCS system planners based on their needs and requirements.

There are many ways to devise a domain and subdomain naming scheme. Assigning domain names based on locations at the higher levels of the domain hierarchy tends to align well with the MCS location-based framework and is also easier to understand. See the following diagram for an illustration of the MCS domain hierarchy.


Figure 14 Example MCS domain hierarchy

For example, the system planners of a service provider might want to create a top (SIP) domain for each of the enterprise customers, such as xyz.com or abc.com, and then let the enterprise domain administrators manage and define their own domain spaces. Within an enterprise domain, the enterprise system planners could create, at the higher level, a multi-level subdomain tree representing different geographic locations, such as campus1.dallas.tx.us.xyz.com, and then define, at the lower level, a multi-level subdomain tree representing functional units of a particular location, for example, callcenter.support.campus1.dallas.tx.us.xyz.com.

The criteria that can influence the decision on creation of domains and subdomains include the following: domain management, subscriber processing load distribution, local pooled entities deployment for traffic optimization, and local telephony routing requirements.

Domain management
The MCS system enables administrators to share various management responsibilities, such as users, devices, service packages, pooled entities, gateways, telephony routes, IP Client Managers, and voice mail, at the domain and subdomain levels.


Figure 15 Domains for administration

For example, a large enterprise MCS system administrator can create additional top-level domain administrators and delegate domain management responsibilities to them. Each domain administrator is provisioned with management rights to one or more domains, but is restricted from accessing any other domains. The domain administrators can then delegate the management responsibilities of subdomains to other administrators.

The MCS system planners can take advantage of this feature when planning the domain and subdomain trees. See the following diagram for an illustration of the domains for administration.

Subscriber processing load distribution
In an MCS system, a domain or a subdomain must be homed to a particular Session Manager server so that the subscribers of the domain or subdomain can log in to receive MCS services. One or more domains or subdomains can be homed to a single server. However, a server can handle service requests from only a limited number of subscribers; the actual number varies depending on the load generated by the mix of service types. For more information, see the Capacity section in "Session Manager" (page 46).


The following diagram shows a subdomain (ny.us.na.xyz.com) that has been created and rehomed to AppSrv 2 to reduce the load on AppSrv 1.

Figure 16 Domain and subdomain homing

When planning the domain and subdomain trees, ensure that the subscribers and services of these domains and subdomains do not overload the server to which they are homed. If overload occurs, add more servers and create additional domains and subdomains to distribute the load.

Localizing pooled entities for traffic and services optimization
Pooled entities for Media Application Server services, such as Music on Hold, Branding, Announcement, Meet-Me audio conferencing, Ad-hoc audio conferencing, and IM Chat, are provisioned (Domain/Pooled Entities/Add Pooled Entity) at the top domain level and are inherited by subdomains. The type of Media Application Server service requested affects how the MCS system routes Media Application Server service requests:
• Music on Hold: the MCS system selects servers based on the subdomain of the call terminator (the shortest distance to the called party).
• Announcement: services are routed based on the subdomain of the call originator (the shortest distance to the calling party).
• Branding: services are routed based on the subdomain of the call terminator.


• Chat: services are routed based on the subdomain of the chat originator.
• Ad Hoc audio conferencing: the MCS system determines the conference server by consulting the location of the Ad-hoc conference initiator against the domain’s location tree.
• Meet-Me audio conferencing: services are routed based on the subdomain of the conference initiator.

As the preceding list shows, there is no single rule for determining the Media Application Server locations for service routing.

The following table summarizes the server selection rules of various Media Application Server services:

Table 7 Summary of server selection rules

Services offered by Media Application Server    Server selection criteria
Music on Hold                                   Terminator’s subdomain
Branding                                        Terminator’s subdomain
Announcement                                    Originator’s subdomain
Chat                                            Originator’s subdomain
Meet Me                                         Initiator’s subdomain
Ad Hoc                                          Initiator selected or provisioned location
Unified Messaging                               Provisioned server

To determine the appropriate server, the MCS system searches the originating or terminating subscriber’s subdomain for a provisioned server and routes the call to that server. If no server is provisioned (or mapped) at the subscriber’s subdomain level, the next higher subdomain is consulted. The subscriber’s subdomain tree is traversed upward until an appropriate server is found. Otherwise, a Server Not Found error is returned. See Figure 17 "Pooled entity subdomain mapping" (page 112) for an illustration.
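The upward subdomain traversal can be pictured with a short Python sketch; the subdomain names and the provisioning map below are purely illustrative, not actual MCS data:

```python
def find_pooled_server(subdomain, parent, provisioned):
    """Walk the subdomain tree upward until a provisioned server is found."""
    d = subdomain
    while d is not None:
        if d in provisioned:
            return provisioned[d]      # server mapped at this level
        d = parent.get(d)              # consult the next higher subdomain
    raise LookupError("Server not found")

# Hypothetical subdomain tree and pooled-entity mapping:
parent = {
    "callcenter.dallas.xyz.com": "dallas.xyz.com",
    "dallas.xyz.com": "xyz.com",
    "xyz.com": None,
}
provisioned = {"xyz.com": "mas-central"}

# No server is mapped at the callcenter level, so the search climbs
# the tree until it reaches xyz.com.
server = find_pooled_server("callcenter.dallas.xyz.com", parent, provisioned)
```

Whether the search starts from the originator's or the terminator's subdomain depends on the service type, per Table 7.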


Figure 17 Pooled entity subdomain mapping

The MCS system does not enforce geographic mapping of resources to subdomains. It is up to the system planners to design the entity-subdomain mappings in a geographically sensible manner that optimizes network performance. Therefore, Nortel recommends that the subdomains be geographically sensitive and that the pooled entities be assigned in a manner that minimizes long distance backhaul.

In a carefully planned MCS domain and subdomain infrastructure, both the service quality and the cost of delivering Media Application Server services can be significantly improved because users are always routed to servers close to them.

Routing translation considerations
The MCS system enables a different dial plan and Class of Service (COS) to be set up for each domain or subdomain.

The dial plan, a set of provisioned telephony routes (or a routing table), is used to process (prepend, remove, and match) the dialed digits of incoming calls and to forward the calls to their destinations. The domain-based dial plan gives the administrator control of the digit


translations that are applied when processing calls, such as private routes for intra-(sub)domain calls, SIP routes for inter-(sub)domain calls, and gateway routes for PSTN calls.

COS is used to screen and restrict users from accessing specific telephony routes within a domain or subdomain. A single COS value can be assigned to a route list, a domain, a subdomain, or a subscriber. COS is determined using the following precedence order: subscriber COS, then subdomain COS, then parent domain COS (recursively walking the domain tree to find the parent COS). Users have access to a telephony route if their COS value is equal to or greater than the value assigned to the route.
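The COS precedence and the route-screening rule can be expressed compactly. This Python sketch uses invented COS values and domain names to illustrate the lookup order only:

```python
def effective_cos(subscriber_cos, domain, parent, domain_cos):
    """Resolve COS: subscriber first, then the domain, then its parents."""
    if subscriber_cos is not None:
        return subscriber_cos
    d = domain
    while d is not None:
        if d in domain_cos:
            return domain_cos[d]
        d = parent.get(d)              # recursively walk toward the parent
    return 0                           # assumed default when nothing is set

def may_use_route(user_cos, route_cos):
    # A user can access a route if their COS meets or exceeds the route's.
    return user_cos >= route_cos
```

The default of 0 when no COS is provisioned anywhere is an assumption of this sketch, not documented MCS behavior.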

Using this capability, the administrator can devise different (public and private) dial plans for the managed domains and subdomains and determine how an incoming call from another domain or subdomain should be handled. For example, the administrator may want to block all calls originating outside the domain from using the domain’s gateways. To prevent fraud, MCS uses the COS of the terminating domain where the telephony route is provisioned whenever the call originator is not a member of the terminating domain.

The rules that the MCS system uses for determining the class of service and dial plan are summarized in the following table.

Table 8 Summary of rules for Class of Service and dial plan

Call scenario                                  Dial plan                            Class of Service
Call from a subscriber in the same domain      Use subscriber domain’s dial plan    Use subscriber domain’s Class of Service
Call from a nonsubscriber in the same domain   Use the domain’s dial plan           Use the domain’s Class of Service
Call from a different domain                   Use terminating domain’s dial plan   Use terminating domain’s Class of Service

Phase 2: MCS location trees planning
The MCS system contains many logical and physical elements that can be deployed at multiple physical locations to meet a broad set of service requirements. Although domain trees are independent of location trees, the planner can use the previously designed domain trees as the starting point to design the domain’s location trees. This can be done by converting the (sub)domain trees into corresponding location trees.


Figure 18 Domain tree to location tree transformation

The system domain location trees are the first to be considered when planning location trees because these locations represent where the system’s resources (pooled entities, media portals, and gateways) need to be deployed to deliver the desired service targets to the subscribers of the nonsystem domains. In addition to considering locations for system resources, the planner must also consider the nearest PSAP (Public Safety Answering Point) locations for emergency services. In general, when adding a location to the System location trees, the planner should consider the following issues:
• Is this location needed to identify a routability group?
• Is this location needed to offer location-based routing for emergency calls?
• Is this location needed for deploying system resources such as gateways, pooled entities, or media portals?

The planner can also consider the locations of the existing facility Points-of-Presence (PoPs) and the network control centers, which are also ideal for deploying MCS Layer 2-Layer 4 system resources. See the following diagram for an illustration.


Figure 19 System location planning example

For a nonsystem domain, the domain planner can add locations to the System location trees to represent the subscriber and device concentrations of the new locations. This ensures that the domain location definitions are consistent with those defined in the system domain. The following issues need to be considered when adding locations to a nonsystem domain:
• Is this location needed to identify a routability group?
• Is this location needed to correctly identify the emergency response location (ERL) of an emergency caller, as required by law?
• Is this location needed for accessing system resources such as gateways, pooled entities, or media portals?
• Is this location needed to ensure that MCS clients get the correct location representation?

The MCS system creates a single Other location when it initializes a domain. Care must be taken to ensure that the Other location is properly handled. The following diagram shows that the planned locations of xyz.com are properly mapped to its own domain hierarchy and to the facility locations of the system domain.


Figure 20 Nonsystem domain location planning example

All defined locations are logical locations. MCS does not enforce geographic mapping of resources and clients to these locations. It is up to the planner to implement the entity-location mappings through provisioning, such as defining routes to these resources that tie to the proper locations.

Phase 3: Routability Groups planning
Using the location trees created in Phase 2 and the detailed IP network diagrams, the system planner should be able to identify the IP routability between any pair of locations. Locations that can route packets directly to each other should be included in the same routability group. As shown in the figure, all provider locations belong to a single routability group because these locations belong to a single VPN.


Figure 21 Routability group planning example

All xyz sites are placed in their own routability groups because the company has installed firewall or NAPT devices at every site for protection and for address and port translation of IP traffic. Notice that not every location needs to belong to a routability group. Only the locations that have associated SIP endpoints need to be assigned to routability groups. Locations that do not have any subscribed SIP endpoints do not have to belong to any routability group. For example, the tx and us locations of xyz.com do not belong to any routability group; they exist to mirror the provider’s location tree and to tie together the sites with SIP endpoints. The Other location of xyz.com, used for mobile users, also does not belong to any routability group. This enables MCS to insert a Border Control Point into the media path for communications with mobile users, even for calls among the mobile users themselves.
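Grouping locations by direct IP routability is, in effect, computing connected components over the "can route packets directly" relation. A small union-find sketch (with made-up location names) shows the idea:

```python
def routability_groups(locations, direct_pairs):
    """Partition locations into groups of mutually routable locations."""
    parent = {loc: loc for loc in locations}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    # Directly routable locations belong in the same group.
    for a, b in direct_pairs:
        parent[find(a)] = find(b)

    groups = {}
    for loc in locations:
        groups.setdefault(find(loc), set()).add(loc)
    return list(groups.values())
```

In this sketch a location with no direct-routability pairs ends up alone; a real plan would simply leave such glue locations (like tx and us above) unassigned to any routability group.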

Phase 4: Border Control Point Groups planning
The media traffic between endpoints of different routability groups must traverse a Border Control Point. Using the completed location infrastructure and routability group charts, the MCS planner can determine the number and the locations of the Border Control Point groups required to support inter-routability group communications of the SIP endpoints.


Figure 22 Border Control Point group planning example

As shown in the figure, the system planner has decided to install Border Control Points in the Dallas east and Dallas west PoPs to support SIP endpoints located in building1 and building2 of campus1 and campus2. A blue media portal group is planned for the locations of building1, building2, campus1, and the Dallas east PoP. A red media portal group is planned for the locations of campus2 and the Dallas west PoP. This plan enables the SIP endpoints on campus1 to use the closest Border Control Points, located in the Dallas east PoP, and the SIP endpoints on campus2 to use the closest Border Control Points, located in the Dallas west PoP.

Phase 5: MCS component capacity planning
At this planning phase, the system planner has completed the high-level MCS system analysis by determining the following information:
• the MCS subscriber distribution, concentration, and location information
• the locations for Layer 3-Layer 5 centers and Layer 2 PoPs
• the detailed IP network topology information for the entire MCS network, which includes the detailed information for the IP service (backbone, aggregation, access) networks, IP VPN overlay networks, peering ISP networks, and remote access networks. The detailed information should also include network connection type (LAN, WAN), bandwidth, speed, addressing, routing, security policy, firewalls, NAPTs, QoS, and Service Level Agreements (SLAs).
• the domain hierarchies of the system and nonsystem domains


• the location trees of the system and nonsystem domains
• the routability groups of the planned MCS network
• the Border Control Point groups of the planned MCS network

To perform MCS capacity planning, the system planner also needs to determine the following additional information about services and usage behaviors for each site:
• the quantities of Multimedia PC clients, Multimedia Web Clients, IP Phones, third-party SIP phones, Converged Desktop clients, SIP FXS gateways for analog phones, third-party clients, and PBX users. There must be an estimate of the percentage of these clients that will concurrently register to the MCS system because not all clients register to the Session Manager simultaneously. For example, 1,000 licenses of Multimedia Web Clients have been purchased, but only 300 of them are actively registered to the server in a busy hour.
• the supported voice CODECs and packetization times
• the supported video CODECs, resolutions, and frame rates
• the type of interfaces connecting to the PSTN and PBX switches, if required
• the number of calls a subscriber makes in a busy hour and the average call holding time (Erlangs per user)
• the percentage of calls between SIP clients only. You must also determine the call percentages for intra-site, inter-site (intra-routability group), and inter-site (inter-routability group) calls.
• the percentage of calls between SIP clients and TDM phones behind a PBX. You must also determine the call percentages for intra-site, inter-site (intra-routability group), and inter-site (inter-routability group) calls.
• the percentage of calls between SIP clients and local PSTN phones using the Media Gateway 3200
• the percentage of calls between SIP clients and long distance PSTN phones using the Media Gateway 3200
• the supported MCS services (IM, IM chat, Presence, collaboration, Ad Hoc and Meet Me audio conferencing, announcement) and their take rates
• the average number of IMs sent by a subscriber
• the percentage of the total traffic that requires conference and voice mail servers


To enable efficient access to the database, the Session Manager, Management and Accounting Server, Database Server, and IP Client Manager servers are normally deployed at a single network control site. Performing capacity planning on these components requires the planner to analyze the site-specific service demands and to sum these loads into a network-wide service demand. The following list shows how the capacity of the core MCS modules is measured:
• Session Manager: the capacity, as well as the load, of a Session Manager is measured in units of weighted SIP transactions.
• Management, Accounting, and Database Managers: the capacity is measured by the number of supported subscribers at all sites in the network.
• IP Client Manager: the capacity is measured by the collective number of supported IP Phones.

The following simple example illustrates how to calculate the multi-site weighted SIP transaction loads on the Session Managers. More detailed discussion on this subject can be found in Appendix "Constructing a reference model for system capacity planning" (page 363). Example assumptions: there are 10 sites and 1000 subscribers at each site. Each subscriber has the following usage profile:
• uses one client device (1000 clients per site)
• makes 2 calls per hour with an average of 3 minutes per call
• does 1 registration per day (2/24 subscriptions per hour)
• sends 5 instant messages per hour
• has 20 buddies on the list
• does 3 manual state changes per day (for example, morning, lunch, and leaving the office)
• makes 80% of calls to the PSTN

Based on the usage profile and the Weighted SIP Transaction table, the SIP-T (with no authentication performed for each transaction) for each site is calculated as follows:
• 2 calls x 1000 x 20% (SIP calls) x 2.3 = 920 SIPT per hr (SIP basic calls)
• 2 calls x 1000 x 80% (PSTN calls) x 2.53 = 4048 SIPT per hr (SIP-PSTN calls)
• 1 register/24 x 1000 x 0.54 ~ 23 SIPT per hr (registration)

Nortel Multimedia Communication Server 5100 MCS Planning and Engineering NN42020-200 04.02 Standard Release 4.0 8 October 2009 Copyright © 2007-2009, Nortel Networks


• 1 registration/24 x 1000 x 0.51 ≈ 22 SIPT per hour (service package subscription, occurs during registration)
• 1 registration/24 x 1000 x 0.54 ≈ 23 SIPT per hour (address book subscription, occurs during registration)
• 1 registration/24 x 1000 x [0.57 + 20 x 0.31] ≈ 266 SIPT per hour (presence subscription, occurs during registration)
• 5 x 1000 x 1 = 5000 SIPT per hour (instant messaging)
• 1 registration/24 x 1000 x [0.57 + 21 x 0.31] ≈ 279 SIPT per hour (presence updates due to (re)registration)
• [3 changes/24 x 1000] x [0.58 + (20 buddies + 1 self) x 0.31] = 887 SIPT per hour (presence updates due to manual state change)
• [2 changes x 2 calls per hour x 1000] x [0.58 + 21 x 0.31] x 2 = 56 720 SIPT per hour
• Estimated total transaction load per site = 57 886 SIPT per hour
• Total load for all sites = 57 886 x 10 = 578 860 SIPT per hour
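The per-site arithmetic above can be sketched in a short script. The per-transaction weights are the ones quoted in the example from the document's Weighted SIP Transaction table; only a few representative terms are shown.

```python
# Per-transaction weights as quoted from the document's
# Weighted SIP Transaction table (no per-transaction authentication).
W_SIP_CALL = 2.3     # SIP-to-SIP basic call
W_PSTN_CALL = 2.53   # SIP-to-PSTN call
W_REGISTER = 0.54    # registration

SUBS_PER_SITE = 1000
CALLS_PER_HOUR = 2
PSTN_FRACTION = 0.8  # 80% of calls go to the PSTN

def sipt_basic_calls() -> float:
    """Weighted SIP transactions per hour for SIP-to-SIP calls at one site."""
    return CALLS_PER_HOUR * SUBS_PER_SITE * (1 - PSTN_FRACTION) * W_SIP_CALL

def sipt_pstn_calls() -> float:
    """Weighted SIP transactions per hour for SIP-to-PSTN calls at one site."""
    return CALLS_PER_HOUR * SUBS_PER_SITE * PSTN_FRACTION * W_PSTN_CALL

def sipt_registrations() -> float:
    """1 registration per day = 1/24 registrations per hour per subscriber."""
    return (1 / 24) * SUBS_PER_SITE * W_REGISTER
```

The remaining subscription and presence terms follow the same pattern; the network-wide load is the per-site total multiplied by the number of sites.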

Media gateways, Border Control Points, and Media Application Servers can be distributed across multiple sites or deployed at a centralized site. Their capacities are measured by the number of channels they can support. To perform capacity planning for these components deployed at a central site, the planner analyzes the site-specific service demands and sums those loads into a network-wide service demand. For a distributed configuration, the planner analyzes the service demands of each site and plans the capacity of each site separately.

Figure 23 A distributed media gateways and portals example


Using the same example, if these ten subscriber sites are each behind their own firewall or NAPT device and served by two PoPs close to them, then based on the subscriber usage profile (assuming all subscribers are active), the total Erlang load for each site going to the PSTN is calculated as follows:

Total PSTN Erlang/site = [2 calls per hr x 180 sec x 1000 subs x 80%] /3600 sec = 80 Erlang

The Erlang Calculator shows that 96 channels are required to support the call volume of one site. To support the call volumes of the five sites served by each PoP, the PoP needs 480 channels for Border Control Points and 480 channels for PSTN media gateways.
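The Erlang Calculator result can be reproduced with the classic Erlang B formula. The sketch below assumes a 1% blocking (grade-of-service) target, which is consistent with the 96-channel figure for 80 Erlangs but is not stated explicitly in the text.

```python
def erlang_b(channels: int, traffic_erlangs: float) -> float:
    """Erlang B blocking probability, computed with the numerically stable
    recurrence on the inverse: 1/B(n) = 1 + (n/A) * 1/B(n-1), B(0) = 1."""
    inv_b = 1.0
    for n in range(1, channels + 1):
        inv_b = 1.0 + (n / traffic_erlangs) * inv_b
    return 1.0 / inv_b

def channels_required(traffic_erlangs: float, blocking_target: float) -> int:
    """Smallest number of channels whose blocking meets the target."""
    n = 1
    while erlang_b(n, traffic_erlangs) > blocking_target:
        n += 1
    return n

# Per-site PSTN traffic from the example usage profile
erlangs = (2 * 180 * 1000 * 0.8) / 3600   # 80 Erlangs
```

With `channels_required(80.0, 0.01)` the sketch returns 96 channels, matching the document's figure under the assumed 1% blocking target.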

Phase 6: IP network capacity planning


Figure 24 Traffic flows for IP network capacity planning

After identifying the loads of all the flows among MCS components of the various sites, the system planner can calculate LAN and WAN traffic, in terms of bandwidth and number of packets, for each site and PoP. The loads on the QoS IP network are the sums of the loads of all sites that are applied to the network. Using the same example, approximately 9.6 Mbps (96 x 100 kbps) is required to support the PSTN voice traffic of the 96 channels at each site, if the G.711 codec with 20 ms packetization is used for the voice calls.
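The per-channel rule of thumb can be checked with a quick calculation. The per-packet overheads below are the standard G.711/RTP/UDP/IPv4/Ethernet sizes; the 100 kbps per-channel figure is the document's rounding of the on-the-wire rate.

```python
# G.711 at 20 ms packetization: 8000 samples/s x 1 byte x 0.020 s = 160 bytes
PAYLOAD = 160
RTP, UDP, IPV4 = 12, 8, 20
ETHERNET = 14 + 4 + 8 + 12       # header + FCS + preamble + inter-frame gap
PACKETS_PER_SEC = 50             # 1000 ms / 20 ms

# Bits per second on the wire for one voice channel
per_call_bps = (PAYLOAD + RTP + UDP + IPV4 + ETHERNET) * 8 * PACKETS_PER_SEC
# ~95.2 kbps on the wire, which the document rounds to 100 kbps per channel

# Site bandwidth for 96 channels at the rounded 100 kbps figure
site_bw_mbps = 96 * 100_000 / 1e6   # 9.6 Mbps
```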

Several iterations of adjustment may be needed to complete the entire planning phase. At the end of the planning exercise, the planner must ensure that sufficient bandwidth is available (during both normal and failure conditions of the network) for all traffic, including voice, video, IM, collaboration, data, and signaling. Even after the system has been successfully deployed, the system planner still needs to revisit the logical hierarchy design from time to time and optimize it for performance, reliability, security, and load balancing.

For more information about IP network QoS and security planning, see "IP network recommendations" (page 145).


IP architecture components

Traditional IP networks provide only best-effort packet transmission services. With this service, packets from a source are sent to a destination with no guarantee of timely or orderly delivery. The connectionless and dynamic nature of IP networks permits voice traffic to take different paths from source to destination and to share those paths with other types of traffic. Consequently, IP networks have long been perceived as lower quality, less reliable, and less secure than existing voice networks.

If VoIP networks are to deliver good voice quality, they must avoid the common IP network impairments, primarily delay, delay variation (jitter), and packet loss, to provide the service quality found in traditional voice networks. These impairments have a major effect on the voice quality of a conversation.

The IP network is the foundation of a multimedia-over-IP system, and media quality and performance are only as good as the underlying IP network. Before deploying the MCS system, gain a basic understanding of the IP network architecture components (or building blocks); this knowledge is helpful throughout the MCS deployment project.

This section focuses on the IP architecture components useful for building the next-generation IP networks upon which MCS 5100 components are deployed.

Virtual Local Area Network (VLAN) and 802.1p

A VLAN is a collection of switch ports that make up a single broadcast domain. A VLAN can be defined for a single switch, or it can span multiple switches. A port can be a member of multiple VLANs. Because VLANs define individual broadcast domains, there is no communication between members of different VLANs. This arrangement conserves bandwidth, especially in networks supporting broadcast and multicast applications that flood the network with traffic.

Historically, LAN traffic has always been based on best-effort or First-In-First-Out (FIFO) network services. There have not been any simple methods to prioritize LAN traffic until the release of the updated IEEE (Institute of Electrical and Electronics Engineers) 802.1Q and 802.1p standards.

To support LAN-based Quality of Service (QoS) at Layer 2, the IEEE 802.1Q standard adds four bytes to the Ethernet header, as shown in the following diagram. Within this 4-byte tag are two fields important to QoS: the 802.1p priority field and the VLAN ID field.


Figure 25 Ethernet header with 802.1Q tag

802.1Q has a number of rules that apply to frames as they traverse a switch. First are ingress rules, which are used to tag frames with their VLAN membership. Based on the VLAN membership, one can assign priority values to the frames associated with the VLAN. Forwarding rules maintain the priority of the frame and ensure the frame is put into the proper outbound port queue. Egress rules determine the correct frame format (tagged or untagged), as well as apply queuing methods on the outbound port.

Within this extended header are three bits that are defined by the 802.1p specification as priority bits. The priority bits permit eight different prioritization levels to be assigned to packets. Different output queues are defined for different classes of traffic. The Priority of Traffic Class table is used to direct frames to the proper queue for a particular output port. Traffic class 0 is best-effort delivery of packets, while everything else is considered some level of expedited traffic.

Network managers assign priority values to VLAN switch ports, either by default or based on a policy. This value becomes the user priority for the frame. Arriving frames that are not prioritized on an 802.1Q VLAN member port receive their three-bit 802.1p priority from the value assigned to the VLAN.

If the frame leaves the switch formatted as an 802.1Q tagged frame, the priority assigned to the frame is carried forward to the next 802.1p capable switch. This enables the frame to carry the assigned priority through the network until it reaches its destination (provided the entire network is 802.1Q / 802.1p compliant). It also enables each 802.1p capable switch to treat these frames with the proper priority, forwarding high priority frames before low priority frames.


These 802 standards provide ways to partition traffic within the MCS system and enable traffic to be prioritized within the LAN. By prioritizing traffic flows within the LAN, high-priority traffic can be queued ahead of lower-priority traffic, expediting the transmission of time-critical information in local area network (LAN) environments. Nortel has standardized on an 802.1p value of binary 110 (6 decimal) for all voice packets. 802.1p marking is supported for media packets of the IP Phone, which is controlled by the IP Client Manager, and of the Multimedia PC Client. See Figure 26 "Prioritized traffic within a LAN" (page 126) for an illustration.
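The tag layout described above can be illustrated by packing one. This is a sketch of the standard 802.1Q tag format (TPID, then a 16-bit TCI of priority, DEI, and VLAN ID bits), not switch-specific code; the VLAN ID 100 is an arbitrary example value.

```python
import struct

TPID = 0x8100  # 802.1Q Tag Protocol Identifier

def dot1q_tag(priority: int, vlan_id: int, dei: int = 0) -> bytes:
    """Build the 4-byte 802.1Q tag: TPID, then PCP(3 bits) | DEI(1) | VID(12)."""
    if not (0 <= priority <= 7 and 0 <= vlan_id <= 4095):
        raise ValueError("priority is 3 bits, VLAN ID is 12 bits")
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", TPID, tci)

# Nortel's standardized voice priority: binary 110 = 6 decimal
voice_tag = dot1q_tag(priority=6, vlan_id=100)
```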

Figure 26 Prioritized traffic within a LAN

Differentiated Services (DiffServ)

DiffServ is an essential QoS mechanism that must be planned and implemented in IP networks. Using DiffServ, network planners can provide the required network QoS for MCS signaling and media flows so that the desired application-level quality for users can be achieved end-to-end. The following diagram illustrates the locations where DiffServ applies to the MCS network.

Although the Real-time Transport Protocol (RTP) was developed as an end-to-end delivery service for real-time traffic, RTP by itself does not address resource reservation and does not guarantee QoS for real-time services. Specifically, RTP does not provide any mechanism to ensure timely delivery or provide other QoS guarantees, but relies on lower-layer protocols to do so.

VLAN prioritization is useful for forwarding traffic between LAN segments, but it does not offer end-to-end QoS. Differentiated Services (DiffServ) defines an architecture that addresses OSI Layer 3 QoS issues. The DiffServ architecture defines a service as the overall treatment of a defined subset of a customer's traffic within a DiffServ domain or end-to-end.

DiffServ defines the marking of the second byte of the IP packet header for IP QoS. This marking is known as the DiffServ Code Point (DSCP). The original definition of this byte is the TOS (Type Of Service) byte. The IETF has redefined the six most significant bits of this byte for DSCPs. A single DSCP identifies a particular behavior aggregate, which signals to network nodes how to treat a particular type of traffic.
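The six-bit layout can be made concrete with a small helper. This sketch follows the standard DSCP-in-TOS-byte encoding; Expedited Forwarding (DSCP 46) is used as the example because it is the PHB discussed later in this section.

```python
def dscp_to_tos_byte(dscp: int) -> int:
    """DSCP occupies the six most significant bits of the (former) TOS byte;
    the remaining two low-order bits are left at zero here."""
    if not 0 <= dscp <= 63:
        raise ValueError("DSCP is a 6-bit value")
    return dscp << 2

# Expedited Forwarding (EF) is DSCP 46
ef_tos = dscp_to_tos_byte(46)   # 0xB8, often quoted as TOS 184
```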

See Figure 27 "DiffServ Code Point" (page 127) for an illustration of the DiffServ Code Point marking.

Figure 27 DiffServ Code Point

A table of DSCP codes and their corresponding binary, hex, and decimal values are provided in Appendix "DSCP code values" (page 341).

Note: In a DiffServ domain, all the IP packets that require the same forwarding treatment when crossing a link constitute a Behavior Aggregate.

DiffServ defines a number of functional elements that need to be implemented in network nodes. These include a set of Per-Hop Behaviors (PHB), packet classification functions, and traffic conditioning functions including metering, marking, shaping, and policing. Classification selects incoming packets based on IP header fields or payload parameters. Policing enables the traffic conditioner to meter and monitor traffic behavior (burstiness, for example) and manage throughput by marking packets (in or out of profile) or dropping them. Marking is the process of setting the DiffServ byte according to policies configured in the traffic conditioner; packets become DiffServ traffic at this step. Shaping enables the traffic conditioner to enforce bandwidth policies by reallocating buffer space and smoothing burst rates.

DiffServ scalability is achieved by implementing complex classification and conditioning functions only at network ingress and boundary nodes. DiffServ-compliant transit nodes only select PHB by matching against the entire six-bit DiffServ Code Point field. The DSCP value is treated as a table index that is used to select a particular packet-handling mechanism. Simply marking packets with DSCP values does not ensure QoS. It is up to each node in the network to enforce the DiffServ-based policies. The DiffServ QoS mechanisms in the network devices give the network manager the ability to add services and capabilities to the network that enable the differentiation of traffic.

The packet-forwarding treatment delivers the differentiated service to packets at network nodes. The IETF has defined the following PHBs: Expedited Forwarding (EF, RFC 2598), Assured Forwarding (AF, RFC 2597), Default Forwarding (DE, RFC 2474), and Class Selector (CS, RFC 2474), the last supporting legacy routers. EF is described as premium service; it provides a virtual leased-line-like service to endpoints. AF is described as tiered service. DE is best-effort service, as exists today.

See Figure 28 "DiffServ in an enterprise network" (page 129) for an illustration of how DiffServ can be used in an enterprise network.


Figure 28 DiffServ in an enterprise network


Queuing

Queuing means storing packets for later transmission. It can cause delay and make a packet's arrival behavior at the endpoint unpredictable. To minimize delay and jitter (delay variation), all MCS signaling and media packets must be given priority treatment above all other data packets. Signaling and media packets should be placed in the highest-priority queue, separate from other traffic types, using a strict-priority scheduler or a scheduler that can be configured to behave as one. A strict-priority scheduler schedules all packets in a higher-priority queue before servicing any packets in a lower-priority queue. Because a strict-priority scheduler can starve the servicing of all other traffic queues, you must configure a threshold to limit the maximum bandwidth that the VoIP traffic can consume. Most products configure this using a rate-limiting option.


Note: Other weighted schedulers, such as Weighted Round Robin (WRR) or Weighted Fair Queuing (WFQ), are not recommended. If the router or switch does not support a priority scheduler and only supports a weighted scheduler, then the queue weight for VoIP traffic should be configured to 100 percent. If you cannot configure a 100 percent weight due to a product limitation, then consider replacing the product, because it will cause unpredictable voice quality.
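The strict-priority behavior described above can be sketched in a few lines. This is an illustrative model, not switch firmware; queue 0 is treated as highest priority, and rate limiting is omitted for brevity.

```python
from collections import deque

class StrictPriorityScheduler:
    """Always serves the highest-priority non-empty queue first, so voice
    is never delayed behind lower-priority data."""

    def __init__(self, num_queues: int = 4):
        self.queues = [deque() for _ in range(num_queues)]

    def enqueue(self, priority: int, packet: str) -> None:
        self.queues[priority].append(packet)   # 0 = highest priority

    def dequeue(self):
        # Scan queues from highest to lowest priority; lower queues are
        # only serviced when every higher queue is empty.
        for q in self.queues:
            if q:
                return q.popleft()
        return None

sched = StrictPriorityScheduler()
sched.enqueue(3, "bulk-data")
sched.enqueue(0, "voice-rtp")
first = sched.dequeue()   # voice leaves first despite arriving later
```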

Different products can have a different number of queues on different interfaces. Depending on the number of queues and Nortel Networks Service Classes (NNSC) supported by the product, the mapping of DSCP to NNSC and queues can vary.

For MCS 5100 deployment, the Nortel Ethernet Switch 470-48T is recommended for Layer 2 functions. It supports an NNSC 4-queue scheduler and places voice packets in the highest-priority queue using a strict-priority scheduler when QoS is enabled on an interface. The Nortel Networks Passport 8600 switch is recommended for both Layer 2 and Layer 3 functions. It supports an NNSC 8-queue scheduler and all eight NNSC service classes. The Premium service class is provided for traffic with stringent requirements, such as real-time voice and video.

DSCPs must be mapped in such a way that the PHBs are preserved as the packets traverse the network.

Figure 29 "8-queue scheduler" (page 131) shows an illustration of an 8-queue scheduler.


Figure 29 8-queue scheduler

IP routing

IP routing is the process of choosing a best forwarding path, based on a Layer 3 address, over which to send packets. Routers use Layer 3 (protocol) addresses to make decisions about where traffic should go. Hosts send packets destined for other networks to their default router. The router selects the optimal route between itself and the destination network using routing tables and decision trees, then forwards the packet to the next-hop router or the locally attached host. A number of routing protocols are used in IP networks (for more discussion, see Appendix "IP functional components" (page 319)). The Open Shortest Path First (OSPF) protocol supports features such as area hierarchy, equal-cost multipath, and route summarization, which make it especially suitable for use in a large network. In general, take the following recommendations into consideration when planning the routing architecture for an MCS 5100 deployment:

• Assign a meaningful OSPF cost, based on bandwidth, security, cost, and reliability, to each link to ensure predictable traffic flow across the network.
• Ensure that there is no single point of failure in the network, particularly at critical network segments such as the MCS server farm.


• Analyze and fine-tune the OSPF Hello and Dead Interval timers for the underlying LAN and WAN topology to provide the convergence time needed by the applications.
• Maintain a small amount of routing information in the OSPF network by using OSPF area summaries and OSPF stub areas.
• Keep the bandwidth consumed by routing protocols to a minimum for a stub AS domain; import only a default route into OSPF. For non-OSPF branch networks, use static and default routes to increase network stability and minimize routing update traffic.
• Map the MCS logical hierarchy design to the network topology for better management.

Virtual Router Redundancy Protocol (VRRP)

VRRP operation is defined in RFC 2338. It eliminates the single point of failure that occurs when the configured default router for an end station is lost. When more than one router exists and the primary forwarding router fails, VRRP provides a standards-based method that automatically detects the failure, reassigns the IP forwarding function to a standby router, and maintains network connections for the hosts on the LAN. See Figure 30 "VRRP in the MCS 5100 network" (page 133) for an illustration.


Figure 30 VRRP in the MCS 5100 network

VRRP operates transparently to the end user and requires no special configuration on host devices. It uses the concept of a virtual IP address shared between two or more routers connecting a subnet to the enterprise network. Using the virtual IP address as the default gateway on end hosts, VRRP provides dynamic default gateway redundancy in the event of a failure.

VRRP increases network resiliency for the attached hosts and is very useful for the LAN segments where MCS 5100 servers and gateways are located.

Multi-Link Trunking (MLT) and Split Multi-Link Trunking (SMLT)

MLT is a Nortel method for using multiple physical connections between a given pair of switches, or between a switch and a server with multiple network interface cards (NICs), as a single logical link. Routing protocols such as OSPF treat the entire MLT bundle as a single logical interface. Should one of the links fail, the aggregate bandwidth is reduced by the amount of the failed link, but the MLT trunk itself remains up and continues to forward traffic as long as at least one physical connection is available. MLT provides network redundancy, requires no routing convergence, and enables incremental bandwidth increases when needed.

When forwarding packets on a trunk, MLT uses the addresses in the packet to calculate and select the physical port in the MLT bundle. MLT always chooses the same link for packets with the same source and destination address pair because it uses the same path-selection algorithm to forward the packets. This is important for applications requiring that packets in a given session arrive in sequence.

When multiple Layer 2 Ethernet connections are dual-homed to network center aggregation (core) switches from a closet or CPE edge switch, Spanning Tree Protocol (STP) must block one of the redundant network paths to avoid Layer 2 forwarding problems such as broadcast storms. While providing Layer 2 redundancy, STP is costly from the perspective of recovery time and wasted bandwidth. Nortel SMLT is an MLT with one logical end split into two switches. SMLT solves the STP problems by enabling both paths in a dual-homed switch configuration to be active and forwarding traffic simultaneously, and by providing very fast traffic failover in the event of a link failure. This is accomplished by implementing a method that enables two aggregation switches to appear as a single device to dual-homed edge switches. The aggregation switches make use of an Inter-Switch Trunk (IST) between them, over which they exchange information, permitting rapid fault detection and forwarding path modification. See Figure 31 "Multi-link Trunking" (page 135) for an illustration.


Figure 31 Multi-link Trunking

Both MLT and SMLT can be implemented in the same network. MLT and SMLT can be used to build the following:

• a scalable network, by adding bandwidth when needed
• a resilient network, by eliminating single points of failure, whether caused by a link failure or a switch failure
• a robust network, by reducing or eliminating routing protocol convergence time

Both MLT and SMLT offer great value for MCS 5100 deployment. MLT is used to provide the OSPF core network with scalable bandwidth and link-level redundancy for greater routing stability. SMLT is used to connect closet switches to the aggregation switches, offering more bandwidth and reliability to the terminal devices. With SMLT, the detection of a link or switch failure and the failover take less than one second.
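MLT's flow-to-link pinning (the same address pair always takes the same physical port, preserving packet order) can be sketched as follows. The hash function here is illustrative only, not Nortel's actual path-selection algorithm, and the port names are made up.

```python
def select_mlt_link(src_mac: str, dst_mac: str, active_links: list) -> str:
    """Pick a physical link from the MLT bundle by hashing the address pair.
    The same (src, dst) pair always maps to the same link, so packets of a
    given session stay in sequence on one port."""
    index = hash((src_mac, dst_mac)) % len(active_links)
    return active_links[index]

links = ["port1", "port2", "port3", "port4"]
a = select_mlt_link("00:11:22:33:44:55", "66:77:88:99:aa:bb", links)
b = select_mlt_link("00:11:22:33:44:55", "66:77:88:99:aa:bb", links)
# a == b: the same flow is always forwarded on the same physical port
```

If a link fails, the bundle shrinks (`active_links` loses an entry) and flows are re-hashed across the surviving ports, which is why the trunk stays up with reduced aggregate bandwidth.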

IP addressing

If a large number of IP Phones are to be deployed in the network, assigning IP addresses to these new terminals requires careful planning.


To make better use of the given address space and to increase network performance and stability for the MCS 5100 implementation, a structured IP addressing technique must be used to map the IP address space to the network topology in a hierarchical fashion. Additional information is available in Appendix "IP functional components" (page 319).

Packet filters and firewalls

A firewall is the most important line of defense between a network and the outside world. A firewall provides the following functions:

• Serves as a security checkpoint between an organization's internal network (intranet) and external networks (Internet and extranet).
• Implements a corporate security policy.
• Screens and restricts data packets entering and leaving the corporate network, based on the defined security policy.
• Alerts administrators when an attack is occurring and provides a record of an attack that has taken place.

Although the use of firewalls can provide more security for the protected networks, very few firewalls are VoIP application aware. Most firewalls need to open up specific addresses and ports for packets of specific applications to flow through.

Packet filters are stateless, unlike many firewalls. Packets are treated individually rather than as session streams. Packet filters can be implemented by simply applying access rules to routers and switches.

To secure an MCS deployment, firewalls must be used to protect the perimeters of both the provider and the customer networks. However, to enable communication between the clients and servers, security filters must be set up for the permitted packet flows. For example, filters must be set up for UNIStim (port 5000) and UFTP (port 50020, configurable) packet flows for the IP Client Manager, SIP (port 5060) message flows for the Session Manager, and media packet flows for the Border Control Point (ports 40000-60000, configurable).

Based on definitions provided by the Internet Assigned Numbers Authority (IANA), well-known ports are 0-1023, registered ports are 1024-49151, and dynamic and private ports are 49152-65535. See the "Security" (page 231) section for detailed information on security and firewalls.

Note: Registered ports are ports that are taken for some purposes, but not necessarily used for MCS 5100.
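The IANA port ranges can be captured in a small classifier, which is handy when auditing firewall rules against the MCS flows listed above. The ranges follow IANA's standard definitions.

```python
def iana_port_range(port: int) -> str:
    """Classify a TCP/UDP port per IANA's standard ranges."""
    if not 0 <= port <= 65535:
        raise ValueError("port must be 0-65535")
    if port <= 1023:
        return "well-known"
    if port <= 49151:
        return "registered"
    return "dynamic/private"

# The MCS flows named in the text: SIP 5060, UNIStim 5000, and UFTP 50020
# fall in the registered range; Border Control Point media ports
# (40000-60000) straddle the registered and dynamic/private ranges.
```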


Nortel strongly recommends the use of firewalls within the MCS network. Figure 32 "Use of firewall in the MCS network" (page 137) shows the use of firewalls to secure MCS networks.

Figure 32 Use of firewall in the MCS network

Additional information on the types of firewalls is available in Appendix "IP functional components" (page 319).

Network Address and Port Translation (NAPT)

Network Address Translation (NAT) is commonly used to circumvent the shortage of IPv4 addresses. NAT breaks server applications running behind the NAT device and any applications that distribute IP addresses and port numbers between cooperating processes. There are workarounds for these problems, including static address bindings, Application-Level Gateways (ALG) on NAT devices, and manipulating DNS to make servers on the inside appear to have visible external IP addresses. However, these workarounds do not always solve the problem, and sometimes make things worse. For example, static mappings are not scalable and are error prone, and ALGs are often neither available for nor compatible with new applications.

Additional information is available in Appendix "IP functional components" (page 319).


SIP carries IP addresses in its message headers and in the accompanying SDP (Session Description Protocol) body. Because most NAT and NAPT devices are not SIP aware, when a SIP message passes through such a device, the source IP address and port are changed in the packet header, but the IP address and port in the SIP message header and SDP body are not. This can break MCS applications. To solve this problem, the MCS 5100 solution has been enhanced to handle both signaling and media packets to and from private enterprises sitting behind firewalls and NAT devices.

To make the Session Manager aware that a SIP client is behind a NAT or NAPT device, all SIP request messages (Register, Invite, Message) contain a Proxy-Require: com.nortelnetworks.firewall header. This triggers the Session Manager to store the NAPTed address from the packet header rather than the IP address in the Contact header (as it usually does). The client's home Session Manager is the only one that knows how to route to the client, and it is the only one that understands the Proxy-Require: com.nortelnetworks.firewall header.

Note: Proxy-Require means "I require that you understand this."

ATTENTION A NAT or NAPT device cannot be placed in the IP network between the following devices and the Session Manager: Media Gateway 3200, third-party gateways, and MAS applications.

NAT or NAPT devices and firewalls are covered in more detail in the "Security" (page 231) section as well as Appendix "IP functional components" (page 319).

IP Virtual Private Network (VPN)

VPN technologies are very useful for an enterprise creating its own logical network environment over a service provider's network infrastructure. An enterprise VPN can be seen from the outside world as a single routable domain rather than many smaller routable networks. Issues such as IP addressing, routing, bandwidth, QoS, security, and where and how the MCS components should be positioned are greatly simplified by using a VPN. Figure 33 "Sample Layer 2 and Layer 3 VPN" (page 139) shows a sample Layer 2 and Layer 3 IP VPN.


Figure 33 Sample Layer 2 and Layer 3 VPN

The simplification comes from:

• flexibility of using a large IP addressing space
• total control of the routing space and protocols
• more accurate estimation of bandwidth requirements among intranet and extranet sites
• a consistent end-to-end QoS strategy within intranet sites and over the VPN backbone
• complete control of security policies at controlled border locations, whether these policies are intranet, extranet, or Internet based
• flexibility and control in deploying the MCS components
• no requirement to use the RTP Media Portal when communicating within the secure intranet (unless NAT technology is used within the tunnel)

Additional information is available in Appendix "IP functional components" (page 319).


Security and QoS are vital to a successful MCS deployment project. Advanced VPN technologies provide secure routing and a high performance network needed for MCS applications. Both Layer 2 and Layer 3 VPNs are quickly becoming the foundation for delivering converged services for data, voice, and video traffic.

Nortel Layer 3 VPN products support a full range of access options including Frame Relay, ATM, Ethernet, and DSL, as well as network connections including IP, ATM, and MPLS, and support both VR-VPN and BGP MPLS VPN.

Nortel Layer 2 VPN products include Ethernet high-performance Layer 2 switches, Layer 3 optical Ethernet switches, and multiservice Frame Relay and ATM switches.

Nortel Optical Ethernet (OE) building blocks include simple Ethernet over fiber, Ethernet over Resilient Packet Ring (RPR), and Ethernet over DWDM. Ethernet over RPR can be used to leverage an existing customer SONET infrastructure. Ethernet over DWDM delivers very high bandwidth for data center interconnection. OE uses the Ethernet UNI to separate the Service Provider and customer networks and to separate customer traffic immediately upon its entry into the network, ensuring that customer traffic remains private and secure through the network. In addition, OE integrates Layer 2 MPLS functions to take advantage of advanced services over a standards-based MPLS core, which cannot use Ethernet for transport. MPLS brings significant benefits to the OE network, including preprovisioned paths for faster network restoration and advanced traffic engineering capabilities for improved SLA management.

IP reference networks
Enterprise reference network
The enterprise reference network describes the network topology of a typical enterprise campus network. The edge router in the following reference diagram represents the edge of the DiffServ domain. The WAN router represents a DiffServ boundary node, which provides access to a Service Provider Network for connectivity to the public Internet, or to other enterprise network locations through a service provider managed IP network as described in the previous section. As a DiffServ boundary node, the WAN router must enforce the Service Level Agreement (SLA) between the enterprise and the service provider.


Figure 34 Enterprise reference network

The Layer 2 switches are used mainly for Layer 2 traffic aggregation at the network closets. In terms of routing, there can be one or more Layer 3 edge switches or routers grouped into one or several nonbackbone OSPF areas. The OSPF backbone area contains one or more core Layer 3 switches or routers, which are used to connect the nonbackbone OSPF areas. To connect to external networks (extranets or the Internet), the WAN router uses BGP or static routing to connect to the provider's network. The firewall and NAT functions can be integrated into the WAN router or collocated with the WAN router to protect the enterprise network. The WAN routers can also use the VPN technologies supported by the provider to connect all other enterprise sites to form a closed and secure enterprise intranet.

MCS 5100 deployment scenario
Enterprise deployment is a scenario in which an enterprise completely owns and operates all MCS functional components. By deploying MCS, the enterprise can gradually migrate local communications to the converged IP network. MCS SIP clients can be adopted to displace the existing phones and to provide advanced multimedia services.


The MCS servers deployed in the enterprise network serve all the users within the domain. To provide a clean operating environment, MCS servers and gateways should be deployed in the data centers located in the headquarters, major campuses, or major branch offices.

Depending on the locations and the capacity requirements of an enterprise, more than one set of MCS servers and gateways can be deployed to improve the overall availability and quality of the MCS network. The quality of local communication is improved because the signaling and media paths of the local clients are set up and maintained by the local MCS servers and gateways. The Media Gateway 3200 and the SIP FXO Gateway handle inbound and outbound telephony traffic to and from the PSTN and PBXs.

MCS logical hierarchy
A three-level MCS logical hierarchy is recommended for enterprise deployment. At the highest level, one or two Level-5 MCS Network Control Centers can be located at the headquarters or at one of the major campuses where sufficient support personnel and network bandwidth are available.

The MCS Level-4 Network Signaling Center, Level-3 Media Concentration Center, and Level-2 PoP can be consolidated into one level for enterprise deployment. The consolidated signaling and media centers can be deployed in headquarters, major campuses, and major branch offices. These sites must offer high transport network availability and low transport network latency. To ensure good voice quality and maintain the dynamics of collaboration, these sites should also be close to large SIP client populations.

The headquarters, regional campuses, and branch offices are interconnected using Layer 2 or Layer 3 VPN technologies. The Layer 2 and Layer 3 transport services are provided by local providers who can meet the bandwidth and QoS requirements for all traffic types, including MCS. Home office users and telecommuters can connect to headquarters, campuses, and branch offices using secure IP client software. This arrangement enables off-campus users to use the address space on the campus through DHCP.

The detailed enterprise MCS hierarchy planning and design should follow the methodology outlined in the "MCS logical hierarchy design" (page 107) section of this document.

Network architecture
VLANs should be used to separate voice and signaling from data traffic in the data centers and media centers. At the network edges, the MCS traffic from the servers and gateways should be marked with the recommended Layer 2 and Layer 3 priorities and placed in the recommended queues. For more information, see "Virtual Local Area Network (VLAN) and 802.1p" (page 124) and "Queuing" (page 129).

For quicker Layer 3 convergence, OSPF is highly recommended as the campus routing protocol. RIPv2 should be used only in smaller branch offices. Alternatively, use static and default routes to increase network stability and minimize routing update traffic. BGP or static routes should be used toward Internet service providers and business partners. OSPF Hello timers should be configured to a lower value (1 to 2 seconds) to shorten the Layer 3 convergence time.

For more information about IP network design, see "IP routing" (page 131) and "IP network recommendations" (page 145).

To provide transparent default router failover at the edge networks, redundant Layer 3 switches and VRRP should be deployed to support MCS servers, gateways, and critical user applications. MLT and SMLT are recommended for supporting the bandwidth and redundancy required for the enterprise core, distribution, and access networks. These technologies can significantly reduce network Layer 1 failover and Layer 2 to Layer 3 convergence time from tens of seconds to a few seconds or sub-seconds depending on the network size. Rapid convergence can significantly reduce impact to the voice quality during network topological changes.

In the enterprise scenario, the enterprise owns or controls the entire address space. Structured IP addressing schemes must be used in a carefully thought out MCS deployment plan. Power-of-2 blocks of contiguous addresses must be assigned to the different routing regions for ease of management and address summarization. IP Client Managers must be deployed in major campuses to support large numbers of MCS clients. An MCS server farm should use its own subnet address block for easy identification and troubleshooting.
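As a sketch of such an addressing plan (all addresses here are hypothetical), Python's standard ipaddress module can carve a power-of-2 block into contiguous, summarizable regions:

```python
import ipaddress

# Hypothetical enterprise block carved into four equal power-of-2 regions;
# each region summarizes to a single /18 prefix for routing.
enterprise = ipaddress.ip_network("10.0.0.0/16")
regions = list(enterprise.subnets(new_prefix=18))

# Dedicate one subnet of the first region to the MCS server farm so its
# traffic is easy to identify and troubleshoot.
server_farm = next(regions[0].subnets(new_prefix=24))
```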

Firewall filters and routing policy filters should be implemented at the enterprise border routers to secure traffic in and out of the enterprise. If required, firewalls can also be deployed at Level-5 data center locations to protect sensitive OAMP operations.

To communicate with external networks such as the Internet, NAT may be required at the enterprise network borders when the IETF-reserved private IP addresses are used within the enterprise. QoS-enabled VPN technology is highly recommended for both the intranet and the extranet. The following diagram shows an example of MCS network creation using VPN, firewall, and NAT.


Figure 35 MCS deployment for a large enterprise

IP network recommendations

All of the applications and communications within an MCS 5100 network solution run on top of the IP protocol. Therefore, it is essential that the IP network infrastructure be carefully planned and evaluated to support an MCS 5100 network solution. This section describes recommendations for QoS, IP addressing, and routing, as well as requirements for network performance and IP network services. This section contains the following:
• "MCS IP network topology" (page 145)
• "Service quality requirements and IP network performance requirements" (page 147)
• "IP network qualification" (page 160)
• "QoS recommendations" (page 163)
• "MCS 5100 QoS support" (page 170)
• "IP Phone network considerations" (page 177)
• "Server-side network considerations" (page 181)
• "IP network routing recommendations and considerations" (page 182)
• "Low-speed access link considerations" (page 184)
• "IP address requirements" (page 185)
• "IP network services requirements" (page 186)

MCS IP network topology
To protect the servers for core MCS components from attacks originating within the enterprise network, Nortel recommends that these servers be placed in a Protected MCS Network as shown in Figure 36 "MCS 5100 network topology" (page 146).

As shown in the network topology, the MCS 5100 contains two types of networks: the Protected MCS Network and Enterprise Network.


Figure 36 MCS 5100 network topology

Protected MCS network
The Protected MCS Network should be protected by a firewall or packet filter. By default, rules should block all incoming traffic unless specifically permitted. Permitted traffic should be restricted to particular IP addresses and IP port ranges.

Resources on the Protected MCS Network are typically colocated. All resources may reside on the same subnet. If multiple subnets are used, active and backup servers must reside on the same subnet.

Publicly reachable resources on the Protected MCS Network such as the Session Manager, IP Client Manager, and Provisioning Manager must be accessible to all clients. For information on ports to the Protected MCS Network, see "Security" (page 231).

Enterprise Network
The Enterprise Network hosts MCS functional components as well as MCS clients.


For the MCS functional components, the Enterprise Network can be geographically distributed to place resources close to communities of interest in MCS services. Management traffic can pass between the managing components on the Protected MCS Network and components on the Enterprise Network. Components found in the Enterprise Network can include the Media Gateway 3200, Media Application Servers, SIP FXO Gateways, and SIP FXS Gateways. These shared resources should be protected by access control lists or by firewalls fronting the resources. There should be no NAT or NAPT between the Protected MCS Network and the Enterprise Network. Components in this network should have publicly addressable IP addresses.

MCS clients such as the IP Phone, Multimedia PC Client, Multimedia Web Client, Personal Agent and Provisioning Client communicate with one of the above publicly connected MCS network elements through the Enterprise Network.

Within an Enterprise Network, IP addresses can be assigned from the private space defined in RFC 1918, from publicly assigned space, or from a combination of both. What must be addressed is how the MCS clients access the necessary MCS applications and resources.

If a mixture of public and private IP addresses is used for MCS components in the Enterprise Network, then the enterprise must deploy NAPT and a Border Control Point in the network. When such firewall, NAT, or NAPT devices are deployed in the enterprise end-user network, it is imperative that this information be made known to the security personnel and data network planners so that the proper configurations can be made to support communications between the MCS clients and the MCS servers.

Service quality requirements and IP network performance requirements
The MCS 5100 system uses IP networks for transporting voice and video packets. IP networks introduce impairments, primarily delay, delay variation (jitter), and packet loss, at levels higher than those found in traditional voice networks. These impairments have a major effect on the voice quality of a conversation. To effectively manage these impairments, it is important to understand how IP networks contribute to them.

This section provides information on the standard MOS (Mean Opinion Score) and ITU-T E-model R-values. It shows a typical end-to-end delay breakdown and the general network performance requirements to support reasonable voice quality in a VoIP network environment.


Voice quality rating: MOS
Based on the ITU-T P.800 recommendation, MOS ratings provide a subjective quality score averaged over a large number of speakers, utterances, and listeners. To determine an MOS value, a series of spoken sentences is transmitted to listeners through the speech sample under scrutiny. For each sentence, the listeners allocate a number from 1 to 5. The numerical average of the results from all the listeners then provides the MOS score for the particular speech sample. The following factors describe MOS:
• Estimates acceptability of voice performance
• Average rating given by a group of listeners or users
• Obtained from listening tests and conversation tests
• Five-point scale:
— 5.0 Excellent
— 4.0 Good
— 3.0 Fair
— 2.0 Poor
— 1.0 Bad

The following table shows the MOS ranges and their applicable voice quality.

Table 9 MOS ranges and voice quality

MOS range        Quality
5.0 to 4.0       Toll quality, Good-to-Better (GoB)
4.0 to 3.0       Communication quality
Less than 3.0    Synthetic quality, Poor-to-Worse (PoW)

Voice quality rating: R-values
The R-value ratings are based on the ITU-T E-model. E-model results are calculated using the Impairment Factor method, in which impairment values along the speech path (such as loss, distortion, echo, delay, and noise) are combined to obtain an overall transmission rating R.

The advantage of the E-model is that it is an objective evaluation of speech quality, as compared to the subjective MOS evaluation. Also, its standardization (ITU-T G.107) enables greater consistency in voice quality ratings among carriers and vendors. The R-value can also be converted to a comparable MOS value. See Figure 37 "R-Value compared to MOS Value" (page 149) for an illustration.

Figure 37 R-Value compared to MOS Value

The input to the E-model is a set of measurement parameters that includes telephone specifications, network loudness, delay and echo as well as background and circuit noise levels. CODECs are treated in the E-model as network equipment that causes impairment to speech quality. Each CODEC is assigned a unique impairment factor. The E-model also uses an expectation factor on the assumption that poorer speech quality may be accepted by a user in exchange for access advantage, such as mobility. Using a predetermined formula, the model attempts to predict the speech communication quality through a transmission network as perceived by the user at the receiving side.

The formula is:

R = Ro - Is - Id - Ie + A

where


• Ro is the basic signal-to-noise ratio based on the send and receive loudness ratings and the circuit and room noise.
• Is is the sum of real-time or simultaneous speech transmission impairments, for example, loudness levels, sidetone, and pulse code modulation (PCM) quantizing distortion.
• Id is the sum of delayed impairments relative to the speech signal, for example, talker echo, listener echo, and absolute delay.
• Ie is the Equipment Impairment factor for special equipment, for example, low bit-rate coding (determined subjectively for each CODEC and for each percentage of packet loss, and documented in Appendix I).
• A is the Advantage factor added to the total. It improves R for new services, such as satellite phones, to take into account the advantage of having the service where the alternative might be no service, and to reflect acceptance of lower quality by users of such services. The Advantage factor is assigned a value of 0 (no advantage) for VoIP.

Note: Ie is derived from subjective testing results and makes the E-model a powerful tool for estimating the relative user satisfaction of IP-based conversations. The Nortel subjective evaluation lab has run a number of studies to support the extension of the Ie tables to include G.711, G.726, and G.722 with packet loss, and G.711, G.726, G.729, GSM, and G.722 with packet loss concealment.
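A worked example of the formula, together with the standard G.107 mapping from R to an estimated MOS. The impairment values below are illustrative assumptions, not measured figures:

```python
def r_to_mos(r: float) -> float:
    """ITU-T G.107 mapping from transmission rating R to estimated MOS."""
    if r < 0:
        return 1.0
    if r > 100:
        return 4.5
    return 1 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

# Illustrative inputs: a typical basic SNR with assumed impairment values.
Ro, Is, Id, Ie, A = 93.2, 1.4, 8.0, 11.0, 0.0
R = Ro - Is - Id - Ie + A     # 72.8
mos = r_to_mos(R)             # about 3.73, i.e. communication quality
```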

Figure 38 "ITU-E model" (page 150) illustrates the various areas such as room environment, transmission equipment, switching equipment, and telephones that may introduce noise and other impairments affecting voice quality.

Figure 38 ITU-E model


QoS parameters
Nortel recommends the following network performance targets:
• One-way delay of less than 150 ms (mouth to ear)
• Jitter of less than 20 ms
• Packet loss of less than 1%

A number of Quality of Service (QoS) parameters can be measured and monitored to determine whether a service level offered or received is being achieved. These parameters consist of the following:
• Network availability
• Bandwidth
• Delay
• Jitter
• Packet loss
• Echo

The following sections describe the general performance requirements for each parameter to support reasonable voice quality for VoIP.

Network availability
Network availability is the overall result of the availability of the many functional items used to build a network. These include networking device redundancy (for example, redundant interfaces, processor cards, or power supplies in routers and switches), resilient networking protocols, multiple physical connections (for example, fiber or copper), and backup power supplies. Network operators can increase their network availability by implementing varying degrees of each of these items. The PSTN provides carrier-grade (99.999%) reliability for voice transport. The challenge is to provide cost-effective solutions that offer the same level of reliability and performance over an IP network that has traditionally transported data services such as file sharing, e-mail, and HTTP traffic.

Bandwidth
One of the keys to acceptable voice quality is bandwidth. The PSTN is a TDM-based, connection-oriented network that provides dedicated bandwidth and reliability. An IP network is a connectionless, best-effort network. It is important that sufficient bandwidth be available to enable high-quality voice and to accommodate the other MCS components over an IP network.


Bandwidth allocation can be subdivided into two types:
• Available Bandwidth: The bandwidth a user subscribes to is not always available to them. Many users compete for this available bandwidth, and each user gets more or less bandwidth depending upon the amount of traffic from other users on the network at any given time.
• Guaranteed Bandwidth: Users may share the same network infrastructure with the Available Bandwidth service traffic. When subscribers share the same network infrastructure, the network operator must prioritize the traffic of Guaranteed Bandwidth subscribers over that of the Available Bandwidth subscribers so that in times of network congestion, the Guaranteed Bandwidth subscriber Service Level Agreement (SLA) is met. In other situations, these Guaranteed Bandwidth network connections may be leased from another service provider, which can be expensive.

Bandwidth is expressed in kilobits, megabits, or gigabits per second. To move data from one point on the network to another, header information is added to the data at each layer of the OSI stack. This header information adds to the bandwidth requirements, so when calculating bandwidth it is important to consider all of the headers at each of the different OSI layers. Different transport types add different amounts of overhead; hence, it is important to consider where in the network (that is, over what kind of transport technology) bandwidth is being calculated.

Bandwidth is typically considered at points of concentration within a network, for example, at WAN links or at server and gateway locations. Last-mile bandwidth may be considered in networks with residential or small-business IP access, because these are typically low-bandwidth links. MCS bandwidth can be broken down into three types: media, signaling, and management. Media can consist of audio, video, and collaboration (file transfer, clipboard transfer). Signaling bandwidth is used for providing services and is much smaller than media bandwidth. Management bandwidth, from a client perspective, could include downloading a new software release for the client, downloading address book or service package information for a client, or downloading configuration data for a server or gateway.

Media bandwidth can be calculated based upon the type and number of expected media flows over a given point in the network. For each media type multiply the number of flows by the bandwidth required for a single flow. Bandwidth requirement tables have been provided in this document for audio and video.


Signaling bandwidth can vary and is more challenging to calculate accurately. This is partially due to the flexibility of SIP addressing. For example, a SIP message could be sent to [email protected], or it could be sent to the same user at the address [email protected]. This variation in address length can be seen in a number of fields in the SIP headers. SIP signaling bandwidth is much smaller than media bandwidth. As an example, a basic SIP-to-SIP call with authentication over Ethernet between a client and the Session Manager requires approximately 3800 bytes. At a rate of 50 000 BHCA, signaling needs 3800 bytes x 8 bits per byte x 50 000 calls x 2 legs per call / 3600 seconds per hour = 844 kbps.
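The busy-hour arithmetic above can be reproduced directly; the per-call byte count is the figure quoted in the text:

```python
bytes_per_call = 3800      # basic SIP call with authentication, from the text
bhca = 50_000              # busy-hour call attempts
legs_per_call = 2          # client leg and Session Manager leg

bits_per_hour = bytes_per_call * 8 * bhca * legs_per_call
signaling_kbps = bits_per_hour / 3600 / 1000   # about 844 kbps
```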

Compare that to the media bandwidth for G.711 20-ms sessions with a 3-minute hold time: 2500 simultaneous G.711 calls at a rate of 95.2 kbps = 238 Mbps.
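The 95.2 kbps figure corresponds to G.711 with 20 ms packets carried over Ethernet, counting the IP/UDP/RTP headers plus Ethernet framing, preamble, and interframe gap; a quick check:

```python
payload = 160              # G.711 at 64 kbit/s, 20 ms of speech per packet
ip_udp_rtp = 20 + 8 + 12   # IP, UDP, and RTP headers
ethernet = 18 + 8 + 12     # frame header/FCS, preamble, interframe gap
pps = 1000 // 20           # 50 packets per second, one direction

kbps_per_flow = (payload + ip_udp_rtp + ethernet) * 8 * pps / 1000  # 95.2
mbps_2500_calls = 2500 * kbps_per_flow / 1000                       # about 238
```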

Management bandwidth can vary depending upon the circumstances; peak bandwidth use can occur, for example, when a large number of Multimedia PC clients download a 10MB updated client file from the Provisioning Manager server.

Delay
Delay (or latency) is the amount of time it takes for a packet to travel from the source to the destination. Delay degrades voice transport: it breaks the dynamics of a conversation (both parties talk and back off at the same time), makes responses seem hesitant, and makes echo more noticeable. When one-way delay exceeds 150 ms from mouth to ear, voice quality drops from the very satisfactory region into the satisfactory region. If one-way delay exceeds 250 ms from mouth to ear, the voice quality becomes dissatisfactory for some users. Three areas within a network path can introduce delay:
• Call Origination
— CODEC encoding
— Packetization
— Ingress queuing
• Packet Network
— Ingress link delay
— Backbone transmission
— Backbone queuing
— Egress link delay
• Call Termination
— Egress queuing
— Jitter buffering
— CODEC decoding

These delay factors can be either fixed or variable delays that form part of the overall one-way delay budget. Fixed delays include coder processing at the source and destination (G.711, G.729), packetization at the source (depending on the voice packet size and compression algorithm used), and propagation delay (depending on the link speed and distance traversed). Variable delays include queuing delay (ingress, backbone, egress), nodal processing delay (from inbound queue to outbound queue), and jitter buffer delay. The total end-to-end delay equals the fixed delay plus the variable delay.
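A simple budget check against the 150 ms mouth-to-ear target; every component value below is an assumed placeholder, not a measured figure:

```python
# Assumed one-way delay components in milliseconds (illustrative only).
fixed_ms = {"codec_encoding": 10, "packetization": 20, "propagation": 25}
variable_ms = {"queuing": 20, "nodal_processing": 10, "jitter_buffer": 40}

one_way_ms = sum(fixed_ms.values()) + sum(variable_ms.values())  # 125 ms
within_budget = one_way_ms <= 150     # meets the mouth-to-ear target
```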

Typical packet-network one-way delay targets are under 50 to 60 ms, depending upon the factors listed above.

Figure 39 "Delay impact on QoS" (page 154) illustrates a scaled graph of user satisfaction against one-way delay.

Figure 39 Delay impact on QoS


Jitter
Jitter is caused by packets being queued in different buffers along the transmission path due to resource contention. Jitter is hard to predict. Removing jitter requires collecting packets and holding them in a buffer at the receiving device so that they can be corrected for order and spacing. This buffer is called the jitter buffer. Depending upon the endpoint, jitter buffers are of two types: static and dynamic (also called adaptive). Buffering for playback causes additional delay. Network jitter must be kept within bounds to maintain conversation quality. Excessive jitter in the network ultimately becomes packet loss.

Nortel recommends that jitter should be less than 20 ms.
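For monitoring against that 20 ms target, the running interarrival-jitter estimator defined for RTP (RFC 3550) can be sketched as follows; the transit times are made-up sample data:

```python
def update_jitter(jitter: float, prev_transit: float, transit: float) -> float:
    """One step of the RFC 3550 running interarrival-jitter estimate."""
    d = abs(transit - prev_transit)
    return jitter + (d - jitter) / 16.0

# Assumed per-packet one-way transit times in milliseconds.
transits = [40.0, 42.0, 39.0, 45.0, 41.0]
jitter = 0.0
for prev, now in zip(transits, transits[1:]):
    jitter = update_jitter(jitter, prev, now)
# jitter is now well under the 20 ms bound for this sample.
```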

Packet Loss
Packet loss can occur due to errors introduced by the physical transmission medium. It can also occur when congested network nodes drop packets. Packet loss for time-sensitive media can also occur when packets arrive outside the boundaries of jitter buffers.

Depending on the CODEC type used, there are different levels of acceptable packet loss. For example, less than 3% packet loss is acceptable for the G.711 coder using packet loss concealment technology. Less than 1% packet loss should be the goal for other coder types to achieve better overall quality. If the network is used to transport analog data such as fax and analog modem traffic, packet loss should be kept under 0.1%.

Most observed packet loss is bursty in nature: occasional long losses rather than random or frequent short losses. Factors that may contribute to bursty packet losses include network routing convergence, lower-level topology restoration, an improperly designed network, and inadequately provisioned bandwidth. Packet loss creates a sensation of choppy or distorted voice and jerky or jumpy video. In extreme packet loss scenarios, it can even lead to dropped calls.

One of the most common contributors to packet loss is duplex mismatch. A duplex mismatch occurs when the two endpoints of a given link do not agree on a duplex mode of operation: one device operates in full duplex while the other operates in half duplex. The device operating in full duplex mode neither detects packet collisions nor retransmits a packet after a collision has occurred; the packet is lost. See the following table for mismatches found in Ethernet.

Note: After numerous Network Assessments and problem-solving sessions at VoIP sites, it can be said unequivocally that duplex mismatches are the biggest problem in running VoIP installations.
For example, one device such as the switch port operates in full duplex, while another device, such as a Border Control Point, Media Application Server, Media Gateway 3200, IP Phone, PC, or router, operates in half duplex on the same medium.

Table 10 Autonegotiation and duplex mismatch

                  10Mb half   10Mb full   100Mb half   100Mb full   Autonegotiation
10Mb half         OK          Mismatch    X            X            OK
10Mb full         Mismatch    OK          X            X            Mismatch
100Mb half        X           X           OK           Mismatch     OK
100Mb full        X           X           Mismatch     OK           Mismatch
Autonegotiation   OK          Mismatch    OK           Mismatch     OK
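Table 10 can be expressed as a small symmetric lookup; settings outside the table (two different forced speeds) yield no link at all, shown as "X". The setting names here are shorthand, not CLI keywords:

```python
# Result of connecting two Ethernet port settings, following Table 10.
_PAIRS = {
    ("10h", "10h"): "OK",   ("10h", "10f"): "Mismatch", ("10h", "auto"): "OK",
    ("10f", "10f"): "OK",   ("10f", "auto"): "Mismatch",
    ("100h", "100h"): "OK", ("100h", "100f"): "Mismatch", ("100h", "auto"): "OK",
    ("100f", "100f"): "OK", ("100f", "auto"): "Mismatch",
    ("auto", "auto"): "OK",
}

def link_result(a: str, b: str) -> str:
    """Return "OK", "Mismatch", or "X" (no link) for two port settings."""
    # The table is symmetric; settings with two different forced speeds
    # do not appear in it and cannot establish a link.
    return _PAIRS.get((a, b)) or _PAIRS.get((b, a)) or "X"
```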

The primary cause of the duplex mismatch problem is a LAN switch hard-configured to either 10Mb full duplex or 100Mb full duplex, with the devices connected to the switch configured to autonegotiate. For a device carrying a single VoIP call, this is not catastrophic. However, extremely poor voice quality can result from a duplex mismatch on any device carrying a concentration of calls, such as a SIP-to-PRI gateway, a Border Control Point, or a router or LAN-to-LAN switch in the critical path of the calls. The problem is noted in the 802 specifications for 100BaseT. A device configured for autonegotiation that is connected to a device forced (also known as hard set or hard configured) to a speed and mode will:
1. detect the speed properly
2. select half-duplex mode, regardless of the mode to which the other device is forced

If switch ports are forced to any speed and full duplex, but the end devices directly connected to the switch are not forced to the same speed and full duplex, connections are created with duplex mismatches. This is a very common situation because:
1. Autonegotiation was unreliable and failed frequently in its early days, causing many administrators to develop the habit of forcing the speed of switch ports.
2. It is a common misconception that one can force the switch ports to 10Mb or 100Mb full duplex and an autonegotiating device will correctly match both the speed and the duplex mode. Although the autonegotiating device will match the hard-coded speed, it will remain at half duplex, causing a duplex mismatch. This behavior follows from the autonegotiation specification.
The current best practice is to configure both the switch port and the attached device for autonegotiation.
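On Linux hosts, the negotiated speed, duplex, and autonegotiation state can be read with `ethtool`. The following sketch (the sample output text is invented for illustration, though the field names match ethtool's usual report format) parses ethtool-style output and flags ports whose settings risk the mismatch described above:

```python
# Sketch (not from this manual): flag link settings that risk a duplex
# mismatch, given text in the style of Linux `ethtool <iface>` output.

def mismatch_risk(ethtool_output: str) -> bool:
    """Return True if the port is forced to full duplex with
    autonegotiation off -- the classic precondition for a duplex
    mismatch, because an autonegotiating peer facing a forced port
    falls back to half duplex."""
    settings = {}
    for line in ethtool_output.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            settings[key.strip()] = value.strip()
    autoneg_on = settings.get("Auto-negotiation", "on").lower() == "on"
    full_duplex = settings.get("Duplex", "").lower() == "full"
    return (not autoneg_on) and full_duplex

sample = """Settings for eth0:
        Speed: 100Mb/s
        Duplex: Full
        Auto-negotiation: off"""

print(mismatch_risk(sample))  # forced full duplex: mismatch risk
```

A port reporting "Auto-negotiation: on" with a half-duplex result is the complementary symptom to look for on the peer side.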


Duplex mismatches cause packet loss. The device operating in full duplex does not detect packet collisions and therefore transmits packets without temporarily storing them in case a collision occurs. When a packet collision occurs, the packet is discarded, and it is up to the endpoints to retransmit upon detection. The device operating in half duplex detects the collision correctly and retransmits the packet; the device operating in full duplex does not. See Table 10 "Autonegotiation and duplex mismatch" (page 156) for correct switch settings.

Because voice is a full duplex application, the optimal method for running LAN and WAN links is full duplex. Furthermore, Ethernet switches should be used for VoIP applications instead of Ethernet hubs: switches support full duplex operation, while hubs operate only at half duplex. Most wireless technologies for IP networks today also operate in half duplex mode, so the number of wireless endpoints using multimedia in a cell must be carefully monitored.

Because an Ethernet hub is a shared medium operating in half duplex, all of the devices connected to the hub must share the medium; only one device can transmit at a time. VoIP devices generate a high packet-per-second transmit rate. If more than one or two VoIP devices are connected to a shared medium, a high number of packet collisions occurs. Because packets must be retransmitted, jitter in the network increases, and high jitter can significantly degrade voice quality.

MCS components operate in the following modes:
• series 1 IP Phone 2004 (standalone or with external 3-port switch module)
  — must autonegotiate with the LAN switch because speed and mode cannot be forced
  — supports 10BaseT half and full duplex
  — supports 100BaseT half duplex

• series 2 IP Phones with built-in 3-port switch
  — must autonegotiate with the LAN switch because speed and mode cannot be forced
  — supports 10BaseT half and full duplex
  — supports 100BaseT half and full duplex

• Media Gateway 3200
  — must autonegotiate with the LAN switch
  — supports 100BaseT full duplex

• Media Application Server
  — must autonegotiate with the LAN switch
  — supports 100/1000BaseT full duplex

Modeling establishes that as networks approach 90 percent utilization, congestion losses increase, and these losses tend to be bursty. Link performance worsens exponentially as utilization increases. Utilization should be kept below the knee of the curve, as shown in the following diagram.

Figure 40 Link performance compared to utilization
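The shape of this curve can be illustrated with the classic M/M/1 queueing approximation, in which normalized delay grows as 1/(1 − ρ) with utilization ρ. The M/M/1 model is an assumption of this sketch, not a formula from this document:

```python
# Illustrative sketch: normalized queueing delay versus link utilization
# under the M/M/1 model, showing why delay (and loss from finite buffers)
# explodes past the "knee" of the curve.

def normalized_delay(utilization: float) -> float:
    """Queueing delay relative to an idle link for an M/M/1 queue."""
    if not 0 <= utilization < 1:
        raise ValueError("utilization must be in [0, 1)")
    return 1.0 / (1.0 - utilization)

for rho in (0.5, 0.7, 0.9, 0.95):
    print(f"utilization {rho:0.2f} -> delay factor {normalized_delay(rho):5.1f}")
```

At 90 percent utilization the delay factor is already ten times the idle-link value, which is why utilization should stay below the knee.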

Echo
Echo is a distorted and attenuated version of the originator’s audio transmission. When echo occurs, the originator hears a delayed version of their own speech.

Echo is an undesirable phenomenon that occurs when a portion of the energy transmitted over the network in one direction is reflected at the receiving end back to the originating end. Two main factors contribute to echo: delay and reflection. The network introduces the delay. Reflection is caused by impedance mismatches at a 4-wire to 2-wire hybrid or other conversion devices.

With only a very small delay, the reflection is not a problem. In general, talker echo on networks with a delay time of less than 30 ms is usually not detrimental. On networks where the round-trip time for the echo to return back to the talker is greater than 30 ms, degradation of the transmission quality of the connection begins to become noticeable. As the delay time increases, the quality of the transmission becomes worse.


Echo control is needed when the round-trip delay exceeds 30ms. Packet voice systems usually require echo control due to the delays associated with the packetization and de-jitter buffering. Some voice compression algorithms also impose a significant delay when compressing from PCM to the coding scheme in use.

Figure 41 Effect of echo

There are two basic techniques for eliminating echoes in long distance connections: echo suppression and echo cancellation. An echo suppressor based on ITU-T G.164 suppresses echo by permitting conversations to proceed in only one direction at a time such as half-duplex operation. The suppressor inserts a large loss into the transmission path when it detects a signal on the receive path. Echo suppressors, while reducing echo, impair the quality of the speech. For this reason, they have been replaced by echo cancellers.

Echo cancellers as specified in ITU-T G.165 improve echo performance by synthesizing a replica of the echo and subtracting it from the actual echo. Unlike echo suppressors, echo cancellers support full duplex operation. The attenuators that diminish the volume of the received signal are not needed in the transmission path. The process of developing a model of the echo and then using it in a subtractive estimation process is called convergence. In packet networks, convergence is an ongoing adaptive process because delay may change from packet to packet due to nodal buffering and routing changes. It is therefore important that the echo canceller converges quickly on the new delay to minimize interference with the conversation.

G.165 specifies that the convergence time shall be no longer than 500 ms for a white noise test signal. In practice, real speech signals can take up to 10 times this amount for the echo canceller to converge. As a result, echo can be heard during the first few seconds of a conversation.
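The synthesize-and-subtract process described above can be sketched with a normalized least-mean-squares (NLMS) adaptive filter, the textbook basis for echo cancellers. Everything below (the 5-sample echo path, the filter length, the step size) is an invented illustration, not Nortel's implementation:

```python
import random

# Minimal NLMS echo-canceller sketch. The echo path (a 5-sample delay
# attenuated by 0.5) stands in for a hybrid's reflection. The filter
# converges by building a replica of the echo and subtracting it.

TAPS = 16   # length of the adaptive FIR filter
MU = 0.5    # NLMS step size
EPS = 1e-8  # avoids division by zero in the normalization

def run_canceller(n_samples=4000, seed=1):
    rng = random.Random(seed)
    w = [0.0] * TAPS      # adaptive filter coefficients
    x_buf = [0.0] * TAPS  # recent far-end samples, newest first
    errors = []
    for _ in range(n_samples):
        x = rng.uniform(-1, 1)  # far-end speech (white-noise stand-in)
        x_buf = [x] + x_buf[:-1]
        d = 0.5 * x_buf[5]      # echo returned through the hybrid
        y = sum(wi * xi for wi, xi in zip(w, x_buf))  # echo replica
        e = d - y               # residual echo after subtraction
        norm = sum(xi * xi for xi in x_buf) + EPS
        w = [wi + MU * e * xi / norm for wi, xi in zip(w, x_buf)]
        errors.append(abs(e))
    return errors

errors = run_canceller()
early = sum(errors[:100]) / 100
late = sum(errors[-100:]) / 100
print(f"mean residual echo: first 100 samples {early:.4f}, last 100 {late:.6f}")
```

The residual echo at the start of the run, before convergence completes, mirrors the audible echo in the first seconds of a call described above.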


Note: Echo has been experienced on the IP Phones when used with some amplified headsets. Contact your supplier for a list of recommended headsets for use with the IP Phones.

Common problems causing echo include incorrect tail-size settings on echo cancellers and incorrect loss-level settings on TDM trunks such as PRI circuits.

IP network qualification
Qualification is an assessment of the ability of an existing network to support an IP Telephony implementation. The assessment is used to determine the scope of work required for an IP Telephony network design project. In particular, this section describes qualifying an IP network for VoIP deployment in two basic scenarios: a Greenfield deployment (a brand new data and voice network) and an Evergreen deployment (integrating voice into an existing IP network).

In today’s TDM-based voice systems, voice traffic experiences a fixed amount of delay and essentially no packet loss because of the structure of the circuit-switched telephone network; the result is very high quality voice. In contrast, traffic on IP networks does not traverse fixed, dedicated circuits with a small, constant amount of delay and no packet loss. Most IP networks treat all traffic the same and are referred to as best-effort networks. Because of this, the traffic can experience different amounts of packet delay, loss, or jitter at any given time. For these reasons, many IP networks are not ready for voice traffic.

When evaluating an existing IP network for an IP Telephony implementation, Nortel recommends conducting an assessment of the network’s ability to meet end users’ expectations for voice quality, reliability, and costs. Not all IP networks are designed to immediately support an IP Telephony implementation. If a network cannot meet customer expectations, the customer must know why and what can be done to meet those expectations.
To qualify a Greenfield MCS implementation, gather MCS-specific information that includes:
• number of PoPs (points of presence)
• number of enterprises deploying MCS clients
• detailed breakdown of MCS clients, such as IP Phones, Multimedia PC Clients, and Multimedia Web Clients
• the estimated end-to-end traffic patterns between clients (IP client-to-IP client and IP client-to-PSTN client)

Other useful information to collect includes:
• supported CODECs (for example, G.711, G.729)


• packetization intervals (10 ms, 20 ms, 30 ms)
• VAD (voice activity detection)
  Note: VAD is not supported in MCS clients.

The next step is to overlay the collected information on top of the new network design to make sure that the network can support the projected traffic load with the desired Quality of Experience (QoE) goals. The goal of this information-gathering exercise is to develop an accurate list of network requirements, including bandwidth and delay calculations. VoIP bandwidth calculators (Nortel or third-party) and network diagramming tools such as Microsoft Visio and Netformx are very useful for the Greenfield qualification process.

Network qualification for an Evergreen VoIP deployment is more involved than for a Greenfield VoIP deployment. The network planners need to perform a thorough assessment of the current IP network and evaluate what modifications or additions are needed to support VoIP and other MCS components.

The most important step in qualifying a network is performing a network audit for voice and multimedia traffic. The objective of the audit is to understand the current topology and performance of the IP network; the accuracy of the qualification process depends on the accuracy of the audit information. An understanding of the network topology, the performance of network routes, and the impact of adding voice traffic to the existing data applications is needed before deployment. An audit for voice and multimedia traffic can often reveal network misconfigurations that directly impact voice quality but go undetected by most data applications.

Before starting the audit, understand the anticipated ingress and egress points for voice traffic and use this information to evaluate the anticipated IP routes for voice traffic. At this stage, there is no need to know the Erlang or Centi-Call-Seconds (CCS) traffic load: the Erlang and CCS load is used for traffic engineering, not for network qualification.
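As a rough illustration of the bandwidth side of this calculation, the per-call IP bandwidth for a given CODEC and packetization interval can be estimated as (payload + IP/UDP/RTP headers) × packets per second. The sketch below uses the standard 40-byte IP/UDP/RTP overhead and well-known CODEC bit rates; it is a simplified model, not a Nortel calculator, and ignores Layer 2 framing overhead:

```python
# Simplified per-call VoIP bandwidth estimate (IP layer, one direction).
# Assumes 40 bytes of IP (20) + UDP (8) + RTP (12) headers per packet
# and ignores Ethernet/ATM framing; real planning tools add L2 overhead.

CODEC_BPS = {"G.711": 64000, "G.729": 8000}  # CODEC payload bit rates

def call_bandwidth_bps(codec: str, packetization_ms: int) -> float:
    """IP-layer bandwidth in bit/s for one voice stream."""
    payload_bytes = CODEC_BPS[codec] * packetization_ms / 1000 / 8
    packets_per_second = 1000 / packetization_ms
    return (payload_bytes + 40) * 8 * packets_per_second

print(call_bandwidth_bps("G.711", 20))  # 80000.0 bit/s per direction
print(call_bandwidth_bps("G.729", 20))  # 24000.0 bit/s per direction
```

For example, a G.711 call at a 20 ms packetization interval needs roughly 80 kbit/s per direction at the IP layer, while G.729 needs roughly 24 kbit/s.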
The following summarizes the major network audit tasks:
• Understand the network topology: To understand the topology, information is needed about the routes between the network distribution points, particularly the routes used for voice traffic. Routes are the connections between access points such as client workstations, servers, and gateways. Access points are clustered at distribution points such as campus sites, regional sites, and remote offices. When documenting the topology, include all of the links, link speeds, duplex modes, VLANs, and switching and routing nodes between the anticipated voice ingress and egress points.


Note: According to Nortel Professional Services, one of the main VoIP deployment problems is the Ethernet duplex mode mismatch.

• Collect performance statistics: Gather performance statistics, such as route delay characteristics and link utilization, for the IP routes. The key performance statistics are end-to-end route delay (one-way), delay variation, and packet loss. Running ping (round-trip delay), RMON, or third-party monitoring tools can determine the performance characterization.
  Note: If QoS is already implemented in the network, measurement tools should collect performance statistics based on the appropriate class of service; for example, a simple ping tool uses a basic data class that has a different priority than the voice traffic class.
  Link utilization should be measured for all the links in the topology. Remember to do the following when auditing the network:
  — Collect performance data multiple times during the peak business hours of the day, and for multiple days.
  — Within each day, compare each sampling to validate the consistency of the performance characteristics of the peak hour.

• Determine workflow characteristics: Understand the impact of current business applications on voice traffic and of voice on the application traffic. Applications with heavy sustained traffic, like scheduled file transfers or interactive CAD/CAM applications, can affect voice quality by increasing voice jitter. If these applications are present, determine whether alternate routes exist for the data or voice traffic. If not, collect the performance statistics while the applications are active. If the performance characteristics are worse (more jitter or higher link utilization), then substitute these statistics for the peak-hour statistics.
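Delay variation is usually summarized with the RTP interarrival jitter estimator from RFC 3550, a smoothed running average of packet-spacing differences. A minimal sketch, with the formula assumed from RFC 3550 (not from this manual) and invented timestamps for illustration:

```python
# RFC 3550 interarrival jitter estimator: J += (|D| - J) / 16, where D is
# the difference between the receive spacing and the send spacing of
# consecutive packets. Timestamps are in milliseconds.

def interarrival_jitter(send_times, recv_times):
    """Smoothed jitter estimate over a sequence of packet timestamps."""
    jitter = 0.0
    for i in range(1, len(send_times)):
        d = (recv_times[i] - recv_times[i - 1]) - (send_times[i] - send_times[i - 1])
        jitter += (abs(d) - jitter) / 16.0
    return jitter

# Perfectly paced 20 ms stream with a constant 5 ms path delay: no jitter.
send = [i * 20.0 for i in range(50)]
print(interarrival_jitter(send, [t + 5.0 for t in send]))  # 0.0

# The same stream with every 5th packet delayed by an extra 8 ms.
recv = [t + 5.0 + (8.0 if i % 5 == 0 else 0.0) for i, t in enumerate(send)]
print(interarrival_jitter(send, recv))
```

Note that a constant path delay contributes nothing to jitter; only variation in spacing does, which is why both delay and delay variation must be collected separately.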

Network utilization tools, such as a customer’s NMS (network management system), and investigation tools, such as NetIQ Chariot with the VoIP module and LAN and WAN analyzers, are key in assisting the network planners with VoIP network qualification. Be aware that the tools themselves can only pinpoint network problems; they will not solve them. A good test and verification method is essential to accurately pinpoint the problem areas in the IP network. Any changes suggested as a result of the investigation should be thoroughly tested before implementation. It may also be necessary to provide evidence that problems are fixed after changes are made in the IP network. The following summarizes the tools mentioned above:
• Netformx* (http://www.netformx.com)
  — configuration wizards for novice users, advanced features for experts


  — graphical presentation of node, LAN, building, MAN, and WAN
  — copy and paste configuration at all levels
  — automatic creation of network documentation

• Qcheck* (Ixia) (http://www.ixiacom.com)
  — TCP/UDP response time (ping)
  — UDP streaming
  — TCP/UDP throughput
  — TCP/UDP traceroute

• Chariot* with VoIP module (NetIQ) (http://www.netiq.com)
  — console design: specify tests to be run by and between endpoints
  — endpoints send and receive specified traffic and analyze it for end-to-end delay, jitter, and packet loss
  — estimates voice call quality, R → MOS
  — graphic results
  — diagnostic tool
  — tests with QoS enabled

• LAN/WAN analyzers
  — filtering for display, such as by IP address and UDP port
  — look at inter-frame times
  — look at RTP timestamps
  — look at sequence numbers
  — find out where packets are getting lost or delayed

QoS recommendations
Quality of Service (QoS) is the ability to offer differentiated services to customers. It provides a certain level of assurance that the network can meet the service requirements of a particular application. From a technical perspective, QoS can be characterized by several performance criteria, such as availability (low downtime), throughput, connection setup time, percentage of successful transmissions, and speed of fault detection and correction. In an IP network, QoS can be measured in terms of bandwidth, packet loss, delay, and jitter. To provide high QoS, the IP network must provide assurance that, for a given session or set of sessions, the measurements of these characteristics fall within certain bounds.


Since IP is by nature a best-effort service that offers no guarantees for data delivery, QoS mechanisms are highly recommended when running voice applications over an IP network.

This section provides information on different Layer 2 and Layer 3 QoS mechanisms that can be used in an MCS network to achieve IP network performance close to what end users expect from a TDM-based network. This section also explains the different Nortel Networks Service Classes (NNSC) and their queue mappings.

Layer 2: Ethernet 802.1Q/p
VLANs are logical entities created in the software configuration to control traffic flow and ease network management. VLANs enable the division of an enterprise into logical groups, for the purpose of providing separate communication channels for common-interest groups or isolating specific types of network traffic.

Network managers can also assign priority values to VLAN switch ports as either default or based on policy. This value becomes the user priority for the frame. Arriving frames that are not prioritized on an 802.1Q VLAN member port get their three-bit 802.1p priority based on the value assigned to the VLAN. Nortel recommends the use of QoS-capable Layer 2 switches such as the BPS 2000, Ethernet 460, Ethernet 470, and Passport 8600.

Layer 3: IP Differentiated Services (DiffServ)
VLAN prioritization is useful for forwarding traffic between LAN segments, but it does not offer end-to-end QoS. Successful MCS deployment requires the IP network to support real-time end-to-end QoS. DiffServ defines an architecture that addresses end-to-end QoS issues.

Table 11 DSCP compared to 802.1p

  DSCP        802.1p
  CS7         7
  CS6
  EF, CS5     6
  AF4x, CS4   5
  AF3x, CS3   4
  AF2x, CS2   3
  AF1x, CS1   2
  DF, CS0     0


A table of DSCP codes and their corresponding binary, hex, and decimal values is provided in the appendix "DSCP code values" (page 341).

To minimize the loss, delay, and jitter of voice packets, Nortel has standardized its products on the Expedited Forwarding (EF) PHB for voice media packets and the Class Selector 5 (CS5) PHB for voice signaling (control) packets. This feature provides the QoS differentiated service to the media and signaling packets generated by MCS clients.
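For reference, EF corresponds to DSCP value 46 and CS5 to DSCP value 40, and the DSCP occupies the upper six bits of the IPv4 TOS byte, so applications typically set TOS = DSCP << 2. The sketch below shows how an application on a standard sockets API could mark its packets this way; it is illustrative only, since MCS clients perform their own marking:

```python
import socket

# DSCP values for the PHBs named in the text (per the DiffServ RFCs,
# not taken from this manual):
DSCP_EF = 46   # Expedited Forwarding: voice media
DSCP_CS5 = 40  # Class Selector 5: voice signaling

def tos_byte(dscp: int) -> int:
    """The DSCP occupies the top six bits of the IPv4 TOS byte."""
    return dscp << 2

# Mark an (illustrative) RTP media socket with EF.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos_byte(DSCP_EF))
print(hex(tos_byte(DSCP_EF)), hex(tos_byte(DSCP_CS5)))  # 0xb8 0xa0
sock.close()
```

A packet capture on a marked flow shows the TOS byte as 0xB8 for EF media and 0xA0 for CS5 signaling.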

QoS encompasses a broad range of technologies, architectures, and standards, such as ATM, 802.1p/q, and MPLS. These QoS technologies can all be used in an IP network, but they may not be implemented consistently. However, the MCS system requires consistent end-to-end QoS service.

To provide the required consistent QoS behavior end-to-end, Nortel has defined a default DiffServ DSCP to 802.1p mapping scheme to simplify as well as standardize end-to-end QoS support. This extends the IP QoS to Layer 2 switches that are not IP-aware. The default mappings also enable the Nortel Passport 8600 and Business Policy Switch to map the DSCP to and from an 802.1p tag without manual configuration.

The ATM Forum has defined a number of ATM classes of service, the most popular being CBR, rt-VBR, nrt-VBR and UBR. In order to provide end-to-end IP QoS over the ATM networks, a consistent mapping must be provided between DiffServ and ATM COS mechanisms. The mapping of these default DSCPs to the different ATM COS is needed because most ATM switches are not Layer 3 aware and can only interpret ATM COS. Hence, they cannot mark the DSCP nor apply the appropriate behavioral treatment for a given DSCP. Therefore, in order to preserve the end-to-end QoS, a Layer 3 device must map DSCP to the equivalent ATM COS for the Layer 3-ignorant ATM switch. Nortel has standardized the DSCP to ATM mapping scheme to simplify as well as standardize end-to-end QoS support.

Table 12 DSCP compared to ATM COS

  DiffServ Code Point (DSCP)   ATM COS
  CS7                          rt-VBR or CBR
  CS6                          rt-VBR or CBR
  EF, CS5                      CBR or rt-VBR
  AF4x, CS4                    rt-VBR
  AF3x, CS3                    nrt-VBR
  AF2x, CS2                    nrt-VBR
  AF1x, CS1                    UBR
  DF, CS0                      UBR


Nortel has standardized its DiffServ QoS Classes implementation to simplify the support of QoS on its product portfolio. These service classes provide several differentiated levels for traffic.

Nortel Networks Service Classes (NNSC)
To simplify QoS management and to help customers focus on solving business-related challenges, Nortel provides intuitive Service Class names that are easily understood. For example, it is more intuitive to use the Premium Service Class than the DiffServ EF PHB for VoIP traffic.

Table 13 Nortel Networks Service Classes

  Traffic Category   Nortel Networks   Example Application                      DSCP
                     Service Class
  Network Control    Critical          Alarms, VRRP heartbeats                  CS7
  Network Control    Network           Routing, Billing, Critical OAM           CS6
  Interactive        Premium           IP Telephony (VoIP, FoIP)                EF, CS5
  Interactive        Platinum          Video Conferencing, Interactive Gaming   AF4x, CS4
  Responsive         Gold              Streaming audio/video, Video on Demand   AF3x, CS3
  Responsive         Silver            Transaction processing (eCommerce)       AF2x, CS2
  Timely             Bronze            E-mail, noncritical OAM                  AF1x, CS1
  Best effort        Standard                                                   DF, CS0

Nortel Networks Service Classes (NNSCs) are used to provide a standard set of default behaviors across all Nortel products. Each Service Class, which corresponds to a DSCP code, enables a unique behavior to be applied to the traffic end-to-end. The NNSCs enable the products to provide the internal mapping to the appropriate queue and DiffServ Code Point. A description of each class follows:
• The Critical NNSC is used for traffic such as heartbeats between core network switches and routers. Heartbeats are used to determine if the next hop is available.
• The Network NNSC is used for network control traffic such as routing protocol traffic and critical OAM traffic. Traffic in this category is required to keep the network operational.
• The Premium NNSC is used predominantly for IP telephony services and provides the low latency and jitter required to support such services. The Premium NNSC uses the DiffServ EF (Expedited Forwarding) PHB.
• The Platinum, Gold, Silver, and Bronze NNSCs are collectively referred to as the metal classes. The metal NNSCs provide a minimum bandwidth guarantee using Committed Information Rate (CIR) and are used for variable bit rate or bursty types of traffic using Peak Information Rate (PIR). Metal classes support mechanisms that can dynamically adjust transmit rate and burst size based on congestion (packet loss) detected in the network. Under congestion, DiffServ nodes use drop precedence and multilevel Weighted Random Early Detection (WRED) mechanisms to control variable bit rates that exceed the minimum assured bandwidth. The metal NNSCs use the DiffServ AF (Assured Forwarding) PHB.
• The Platinum NNSC provides the low latency required for inter-human (interactive) communications.
• The Gold NNSC is used for applications that require near-real-time service and are not as delay sensitive. It assumes that the traffic is buffered at the source and is therefore less sensitive to delay and jitter. Such applications include streaming audio and video, video (movies) on demand, and surveillance video.
• The Silver NNSC is used for short-lived TCP-based flows for applications such as transaction processing.
• The Bronze NNSC is used for long-lived TCP-based flows such as file transfers, E-mail, or noncritical OAM traffic.
• The Standard NNSC is used for best-effort services that typically have no bandwidth assurances.
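The class-selector code points used throughout these tables follow the RFC 2474 pattern CSn = n × 8, with EF = 46 and DF = 0. A small lookup sketch, with numeric values assumed from the DiffServ RFCs rather than taken from this manual:

```python
# NNSC names from Table 13 mapped to representative DSCP numeric values.
# Values follow RFC 2474/2597/3246; the AF classes list only their
# low-drop-precedence member (AFx1) for brevity.

NNSC_DSCP = {
    "Critical": 56,   # CS7
    "Network":  48,   # CS6
    "Premium":  46,   # EF (CS5 = 40 also maps here)
    "Platinum": 34,   # AF41
    "Gold":     26,   # AF31
    "Silver":   18,   # AF21
    "Bronze":   10,   # AF11
    "Standard":  0,   # DF (CS0)
}

def class_selector(n: int) -> int:
    """CSn code point: the class number occupies the top three bits."""
    return n << 3

print(class_selector(5), NNSC_DSCP["Premium"])  # 40 46
```

The class-selector formula is why CS5 (40) sits just below EF (46) in every mapping table that follows.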

NNSC queue mappings
Different products may have a different number of queues on different interfaces. Depending on the number of queues and NNSCs supported by the product, the mapping of DSCP to NNSC and queues varies. Packets marked with the standardized DSCPs must be mapped in such a way that the PHBs are preserved as the packets traverse the network. For products that have two queues, Nortel recommends the following default mapping of Service Classes to DSCP and queues.

Table 14 Two-queue mappings

  DiffServ Code Point (DSCP)                    Logical Queue Number   Recommended Scheduler   NNSC
  EF, CS7, CS6, CS5                             1 (highest priority)   Strict priority         Premium
  AF4x, AF3x, AF2x, AF1x, CS4, CS3, CS2, CS1,   2 (lowest priority)    Strict priority         Standard
  DF (CS0), all unspecified DSCPs

Note: In a two-queue system, multiple DSCP markings are mapped into the same queue. When multiple DSCPs are mapped to the same queue, the NNSC egress behavior applied to the queue is the same for all packets.
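As an illustration, the two-queue mapping in Table 14 can be expressed as a classification function. The numeric DSCP values (EF = 46, CS5 = 40, CS6 = 48, CS7 = 56) are taken from the DiffServ RFCs, not from this manual:

```python
# Two-queue mapping from Table 14: EF and CS5-CS7 go to the
# strict-priority high queue (Premium); everything else, including the
# AF classes and DF, falls to the low queue (Standard).

EF, CS5, CS6, CS7 = 46, 40, 48, 56
HIGH_QUEUE_DSCPS = {EF, CS5, CS6, CS7}

def two_queue(dscp: int) -> int:
    """Return 1 (highest priority) or 2 (lowest priority) per Table 14."""
    return 1 if dscp in HIGH_QUEUE_DSCPS else 2

print(two_queue(46), two_queue(34), two_queue(0))  # 1 2 2
```

Note how AF41 (34), nominally Platinum, lands in the Standard queue here, which is exactly the loss of granularity the note above describes.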

If the bandwidth is sufficiently high, a two-queue system can be acceptable at the edge of the network, for MCS components that connect end systems (not MCS servers). It is definitely not recommended for use beyond the network edge. Usually this type of system has very limited CPU and processing power and is therefore not adequate for policy-based networks. Nortel recommends replacing this type of system, whenever possible, with a minimum four-queue system.

For products that have three queues, Nortel recommends the following default mapping of Service Classes to DSCP and queues. According to RFC 2597, packets in one AF class must be forwarded independently from packets in another AF class. Since only three queues are available, any one of the metal classes can be chosen for the second queue; in this case, the Silver class is selected. As with a two-queue system, Nortel recommends that this type of system be replaced with a minimum four-queue system that is ready for a policy-based network environment.

Table 15 Three-queue mappings

  DiffServ Code Point (DSCP)                   Logical Queue Number   Recommended Scheduler   NNSC
  EF, CS7, CS6, CS5                            1                      Strict priority         Premium
  AF2x, CS2                                    2                      WFQ                     Silver
  AF4x, AF3x, AF1x, CS4, CS3, CS1, DF (CS0),   3                      WFQ                     Standard
  all unspecified DSCPs

For products that have four queues, Nortel recommends the following default mapping of Service Classes to DSCP and queues. A four-queue system is adequate for handling traffic at the edge of a network, and may be adequate at the core of a network with a simple application mix. However, for a large network with a more complex application mix, Nortel recommends either a six-queue or an eight-queue system for the core network.

Four-queue mappings

  DiffServ Code Point (DSCP)                   Logical Queue Number   Recommended Scheduler   NNSC
  CS7, CS6                                     1                      WFQ                     Network
  EF, CS5                                      2                      Strict priority         Premium
  AF1x, CS1                                    3                      WFQ                     Bronze
  AF4x, AF3x, AF2x, CS4, CS3, CS2, DF (CS0),   4                      WFQ                     Standard
  all unspecified DSCPs

For products that have six queues, Nortel recommends the following default mapping of Service Classes to DSCP and queues. A six-queue system is adequate for handling traffic both at the edge and at the core of an enterprise network. Nortel recommends using eight-queue systems for ISPs and MCS enterprises so that they can preserve the granularity of the service levels of the connected customers.

Table 16 Six-queue mappings

  DiffServ Code Point (DSCP)                   Logical Queue Number   Recommended Scheduler   NNSC
  CS7, CS6                                     1                      WFQ                     Network
  EF, CS5                                      2                      Strict priority         Premium
  AF4x, CS4                                    3                      WFQ                     Platinum
  AF2x, CS2                                    4                      WFQ                     Silver
  AF1x, CS1                                    5                      WFQ                     Bronze
  AF3x, CS3, DF (CS0), all unspecified DSCPs   6                      WFQ                     Standard

For products that have eight queues, Nortel recommends the following default mapping of Service Classes to DSCP and queues. An eight-queue system is recommended for handling traffic both at the edge and at the core of an ISP’s or an MCS 5100 service provider’s network.

Table 17 Eight-queue mappings

  DiffServ Code Point (DSCP)        Logical Queue Number   Recommended Scheduler           NNSC
  CS7                               1                      Highest - strict priority       Critical
  CS6                               2                      WFQ                             Network
  EF, CS5                           3                      2nd highest - strict priority   Premium
  AF4x, CS4                         4                      WFQ                             Platinum
  AF3x, CS3                         5                      WFQ                             Gold
  AF2x, CS2                         6                      WFQ                             Silver
  AF1x, CS1                         7                      WFQ                             Bronze
  DF (CS0), all unspecified DSCPs   8                      WFQ                             Standard

For MCS deployment, Nortel BPS (Business Policy Switch) 2000 switches are recommended for Layer 2 functions. The BPS 2000 supports an NNSC four-queue scheduler and places voice packets in the highest-priority queue using a strict-priority scheduler when QoS is enabled on an interface. Nortel Passport 8600 switches are recommended for both Layer 2 and Layer 3 functions; the Passport 8600 supports all eight NNSC service classes.


MCS 5100 QoS support
A QoS policy-enabled Layer 2 Ethernet switch, such as the Nortel Business Policy Switch (BPS) 2000, Passport 8600, or Ethernet 460 or 470, is recommended at media and signaling concentration points. Such a switch can enforce policy and, if necessary, mark packets with the appropriate Layer 2 and Layer 3 QoS values.

The MCS clients and gateways are connected to the LAN in an enterprise or small office/home office (SOHO) site. The Multimedia PC clients, IP Phones, Multimedia Web Clients, and gateways provide DiffServ markings within their nodes for originating media streams enabling DiffServ-enabled routers to manage the flows more efficiently for better voice service between endpoints. The Managed IP network should support DiffServ (DS) and DS-to-MPLS mappings if MPLS is utilized in the backbone network.

The following table shows the QoS support for MCS 5100 components. For information on QoS support for the Media Application Server, see Media Application Server Planning and Engineering (NN42020-201).

Note: Layer 2 QoS support (802.1 p&q) is not available for the Multimedia PC Clients and Web Clients due to Network Interface Card (NIC) limitations.

Table 18 DiffServ support for MCS 5100 components

Multimedia PC Client

  Operating System   Layer   Voice Media Signaling                Voice Media to Application Server     Native
  Win XP             L3      Modifiable in Service Package (EF)   Modifiable in Service Package (AF4)   No
  Win 2K             L3      Modifiable in Service Package (EF)   Modifiable in Service Package (AF4)   No
  Win NT             L3      Modifiable in Service Package (EF)   Modifiable in Service Package (AF4)   No


Win 98 | L3 | Modifiable in Service Package (EF) | Modifiable in Service Package (AF4) | No

Multimedia Web Client
Operating System | Layer | Voice Media | Voice Signaling to Media Application Server | Native
Win XP | L3 | Modifiable in Service Package (EF) | Modifiable in Service Package (AF4) | No
Win 2K | L3 | Modifiable in Service Package (EF) | Modifiable in Service Package (AF4) | No
Win NT | L3 | Modifiable in Service Package (EF) | Modifiable in Service Package (AF4) | No
Win 98 | L3 | Modifiable in Service Package (EF) | Modifiable in Service Package (AF4) | No

IP Phones
Layer | Voice Media | Signalling
L3 | Modifiable in Service Package (EF) | Modifiable in Service Package (EF)
L2 | Modifiable in Service Package (6) | Modifiable in Service Package (EF)

3-port Switch
Layer | Voice Media | Signalling
L3 | DSCP pass through only | DSCP pass through only
L2 | 802.1 p&q pass through; priority given to phone port | 802.1 p&q pass through; priority given to phone port

Media Gateway 3200
Layer | Voice Media | Signalling


L3 | DSCP configurable | Configured on QoS enabled policy switch
L2 | Supported | Configured on QoS enabled policy switch

Media Application Server
Layer | Voice Media | Signalling
L3 | DSCP configurable | Configured on QoS enabled policy switch
L2 | Supported | Configured on QoS enabled policy switch

Signaling Servers
Component | Signalling
IPCM | Configured on QoS enabled policy switch
Session Manager | Configured on QoS enabled policy switch

The following table shows the MCS components between which DiffServ QoS marking for media can be performed.

For information on DiffServ media marking for the Media Application Server, see Media Application Server Planning and Engineering (NN42020-201).

Table 19 DiffServ media marking between MCS 5100 nodes

To/From (Send/Receive) | IP Phones | Multimedia PC Client | Multimedia Web Client | Border Control Point | Media Gateway 3200 | MAS
IP Phones | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes
Multimedia PC Client | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes
Multimedia Web Client | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes


Border Control Point | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes
Media Gateway 3200 | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes
MAS | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes | Yes/Yes

The following table shows the MCS components that natively support DiffServ QoS marking for signaling, and notes any exceptions.

Table 20 DiffServ signaling marking between MCS nodes

To/From (Send/Receive) | IP Phones | Multimedia PC Client | IP Client Manager | Session Manager | Media Gateway 3200 | MAS
IP Phones | NA | NA | Yes/No | NA | NA | NA
Multimedia PC Client | NA | NA | NA | Yes/No | NA | NA
IP Client Manager | No/Yes | NA | NA | NA | NA | NA
Session Manager | NA | No/Yes | No/No | No/No | No/No | No/No
Media Gateway 3200 | NA | NA | NA | No/No | NA | NA
MAS | NA | NA | NA | No/No | NA | NA

Note: Native signaling packet marking on the node will be supported in a future release. In this release, signaling should be marked at a QoS-enabled policy switch.

QoS markings comparison
The following table compares Nortel and Cisco recommended QoS markings for various applications.


Table 21 Comparison between Cisco and Nortel recommended QoS markings

Application/Flow Type | Cisco 802.1p | Cisco ToS | Cisco DSCP | NNSC Class | Nortel 802.1p | Nortel ToS | Nortel DSCP
Voice media (RTP) | 5 | 5 | EF | Premium | 6 | 5 | EF
Voice signaling | 3 | 3 | AF31 | Premium | 6 | 5 | CS5 (with EF PHB)
Fax | not known | not known | not known | Premium | 6 | 5 | CS5 (with EF PHB)
Voiceband data | not known | not known | not known | Premium | 6 | 5 | EF
Video Conferencing Video | 4 | 4 | AF41 | Platinum | 5 | 4 | AF41, AF42 or AF43 (depending upon video BW)
Video Conferencing Audio | 4 | 4 | AF41 | Platinum | 5 | 4 | AF41
Video Conferencing Signaling | not known | not known | not known | Platinum | 5 | 4 | AF41
Streaming Media | 1 | 1 | AF31 | Gold | 4 | 3 | AF31, AF32, AF33 or CS3 (dependent upon application type)
Data | 2 | 2 | AF23 | Silver | 3 | 2 | AF21
Data | 1 | 1 | AF11 | Bronze | 2 | 1 | AF11 (CS1 for noncritical OAM traffic)
Best Effort Data | 0 | 0 | DF | Standard | 0 | 0 | DF

Note: Unknown indicates Cisco recommendations could not be found in the web site documentation.
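The Nortel columns of the table above can be captured in a small lookup. The sketch below uses hypothetical names; the 802.1p values come from the table, the numeric DSCPs are the standard code points for each per-hop behavior, and the helper shows that the table's ToS (IP precedence) column is simply the top three bits of the DSCP.

```python
# NNSC class -> Nortel-recommended 802.1p and (primary) DSCP, per Table 21.
NNSC_MARKING = {
    "Premium":  {"dot1p": 6, "dscp": 46},  # EF   (voice media)
    "Platinum": {"dot1p": 5, "dscp": 34},  # AF41 (video)
    "Gold":     {"dot1p": 4, "dscp": 26},  # AF31 (streaming media)
    "Silver":   {"dot1p": 3, "dscp": 18},  # AF21 (data)
    "Bronze":   {"dot1p": 2, "dscp": 10},  # AF11 (data)
    "Standard": {"dot1p": 0, "dscp": 0},   # DF   (best effort)
}

def ip_precedence(dscp):
    """IP precedence (the table's ToS column) is the top 3 bits of the DSCP."""
    return dscp >> 3
```

For example, EF (46, binary 101110) yields precedence 5, matching the Premium row.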

Although Cisco recommends DSCP values for applications, Cisco routers and switches currently have no default QoS settings. They must be explicitly configured to support each application, for example to classify traffic based on the DSCP and to place it into a queue with the appropriate scheduler, all through numerous IOS commands. Cisco's recommendations therefore have no effect on the Nortel Networks Service Classes (NNSC), since the QoS behaviors that support the NNSCs, or QoS in general, must be configured on the Cisco products in any case.

In contrast, Nortel products such as the BPS 2000 and Passport 8600 implement default QoS behaviors that support the NNSCs. The BPS 2000 serves as the DiffServ edge device, performing packet mapping and classification based on the NNSCs. Packets sent from BPS 2000 uplink ports to the Passport 8600 are trusted. The easiest way to set up the Passport 8600 is to use Device Manager to configure the port connecting to the BPS 2000 as a trusted core port, by checking the DiffServEnable box and setting the DiffServType to Core.

Global IP Sound (GIPS) NetEQ integration
Nortel integrates the advanced NetEQ audio-processing technology from Global IP Sound (GIPS) into its IP products, including the Multimedia PC Client, Multimedia Web Client, Media Application Server Meet Me Audio Conferencing Service (premium conferencing), and the IP Phones. The NetEQ application provides delay management, adaptive jitter buffer control, and a packet loss concealment algorithm. These capabilities help maintain voice quality by reducing the effects of the packet loss and delay inherent in IP networks. NetEQ is integrated only into the receive side of the audio stream; it does not require any special preprocessing of the transmitted audio at the originating end of a voice session and consequently does not change the bandwidth requirement of the network. Nortel has observed the following benefits of using the NetEQ technology:
• reduction in delay by 20ms to 70ms while maintaining voice quality
• increase in packet loss robustness by 100% to 400%
• significant increase in voice quality for G.711 under poor IP network performance conditions
• increase in voice quality for G.729 under poor IP network performance conditions

As shown in the following diagram, the receiving client injects encoded RTP audio packets into NetEQ. The host application provides all supported CODEC decoding routines to the NetEQ. Based on the type of CODEC present in the incoming audio stream, NetEQ makes appropriate calls to the decoders to make the decoded audio data available to the host application for playback.
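NetEQ's internals are proprietary, but the interarrival jitter that an adaptive jitter buffer must track is commonly estimated with the standard RTP (RFC 3550) smoothing formula, J = J + (|D| - J)/16, where D is the change in packet transit time. The sketch below is illustrative only, not NetEQ code.

```python
def update_jitter(jitter, transit_prev, transit_curr):
    """One step of the RFC 3550 interarrival jitter estimator.

    transit_* are (arrival time - RTP timestamp) for two consecutive
    packets, in the same time units. The estimate is smoothed with
    gain 1/16 so single outliers change it only gradually.
    """
    d = abs(transit_curr - transit_prev)
    return jitter + (d - jitter) / 16.0
```

A receiver feeds this estimate back into its playout delay so the buffer grows under bursty arrival and shrinks again when the network calms down.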


Figure 42 Receive-side audio packet processing with NetEQ

The following table shows the CODECs and packet times that are supported by Nortel IP Telephony clients.

Table 22 CODECs and packet times supported with NetEQ

Supported CODEC | Packet Time
G.711, 64.0 Kbps | 10ms, 20ms, 30ms, 60ms
G.729A, 8.0 Kbps | 10ms, 20ms, 30ms, 60ms
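The codec bit rate and packet time together determine the on-wire bandwidth of a voice stream. The helper below is a planning sketch (hypothetical name); it assumes uncompressed IP/UDP/RTP headers of 40 bytes per packet and ignores Layer 2 framing, which adds more.

```python
def voice_bandwidth_kbps(codec_kbps, ptime_ms, overhead_bytes=40):
    """On-wire IP bandwidth (kbps) for one voice stream.

    overhead_bytes assumes IP(20) + UDP(8) + RTP(12) headers per
    packet; Layer 2 framing would add further overhead on top.
    """
    payload_bytes = codec_kbps * ptime_ms / 8.0  # kbit/s x ms = bits
    packets_per_s = 1000.0 / ptime_ms
    return (payload_bytes + overhead_bytes) * 8 * packets_per_s / 1000.0
```

For example, G.711 at 20ms packets carries a 160-byte payload 50 times per second, for 80 kbps on the wire; G.729A at 20ms needs only 24 kbps, showing why longer packet times trade delay for bandwidth efficiency.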

For TDM users, the NetEQ must be included in the receiving path of the gateway to obtain enhanced audio quality. This function is not supported in this release.

Additional information related to the NetEQ is available from the Global IP Sound web site at: http://www.globalipsound.com/.


IP Phone network considerations
The IP Phones (Phase 1 and Phase 2) are IP-based telephones. They offer better voice quality through dedicated hardware resources. A number of issues must be addressed in the enterprise network before deploying the IP Phones, including:
• whether there is a second Ethernet drop for each cubicle or office to support the IP client
• whether the IP clients are powered from an AC wall outlet or over the Ethernet line using Power over Ethernet (POE)
• whether the customers want a separate VLAN for the IP clients and PCs

The following are the four basic scenarios for IP Phone client deployment.

Single Ethernet drop with wall power supply
For this installation, there is a single Ethernet drop from the wiring closet to support both the IP client phone and the user's desktop PC. In this scenario, the IP Phone has a built-in 3-port switch to connect the IP Phone and PC to the LAN drop. There are two physical ports (one uplink and one link to the PC) and three logical ports (one uplink, one to the IP Phone internally, and one to the PC).

The wall supply powers the IP Phone. The connection from the 3-port switch to the closet switch is full-duplex 100Mbps through autonegotiation. Both PC traffic and IP phone traffic are merged onto this connection. The IP Phones support 802.1p, and therefore the special 3-port switch can provide uplink priority from the desktop and the closet Ethernet switch can provide downlink priority to the internal switch.

Figure 43 Single Ethernet drop with wall power supply


Single Ethernet drop with power over Ethernet (POE)
The Nortel Ethernet 460-24T-PWR is an IEEE 802.3af-compliant Power over Ethernet (POE) switch that provides power directly to Nortel IP telephones, including the IP Phones.

Figure 44 Single Ethernet drop with power over Ethernet

The AC-powered Ethernet 460 switch can provide up to 200 Watts, enough to power about 12 802.3af devices. When used with an external DC power source such as the Network Energy Source (NES) by Invensys, the Ethernet 460 can provide up to 15.4 Watts per port on all 24 ports, as required by the IEEE 802.3af standard. The switch requires a total of 370 Watts (15.4 Watts x 24 ports) of input power to supply all 24 ports.

If the Ethernet 460 fails or there is a power outage, the NES DC power system can continue to supply up to 200 Watts of power to the 802.3af-compliant devices that are connected to the switch. Depending on the number of modules loaded on an NES system, the system can provide 8 hours of battery life. The NES ensures that power is available for E911 VoIP applications during a power outage.

The Ethernet 460 features advanced dynamic power management, which enables and disables power to each port based on power availability and the priority property configured on each port. The priority properties include Low, High and Critical. The power management continuously monitors the total available power of the switch and turns off low priority ports if the total available power is fully utilized.
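The power-management behavior described above can be sketched as a priority-ordered budget allocation. This is an illustrative model with hypothetical names, not the switch firmware: ports are funded from the available budget in priority order (Critical, then High, then Low), and ports that do not fit are left unpowered.

```python
PRIORITY_ORDER = {"Critical": 0, "High": 1, "Low": 2}

def ports_to_power(ports, budget_watts):
    """Return the port IDs that stay powered within the budget.

    ports: list of (port_id, watts, priority) tuples. Higher-priority
    ports are funded first; within a priority level, lower-numbered
    ports win. Mirrors the behavior described above, not real firmware.
    """
    powered, used = [], 0.0
    for port_id, watts, prio in sorted(
            ports, key=lambda p: (PRIORITY_ORDER[p[2]], p[0])):
        if used + watts <= budget_watts:
            powered.append(port_id)
            used += watts
    return powered
```

With a 200 Watt AC budget and 24 ports each drawing the full 15.4 Watts, only 12 ports fit, which matches the "about 12 devices" figure above.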

All current versions of the IP Phones contain a built-in prestandard capacitor-based discovery scheme. These phones require a power splitter to draw power from Unshielded Twisted Pair (UTP) CAT-5 cable. The Ethernet 460 is compatible with the IP Phone power splitter.


Second Ethernet drop for IP Phones with AC power
In this IP client deployment scenario, there are two separate Ethernet drops: one for the IP Phone and one for the client PC.

Figure 45 Second Ethernet drop for IP Phones with AC power

This deployment scenario offers some advantages over the single Ethernet drop installation. Because the IP client and desktop PC each have their own wire run back to the wiring closet, there is more flexibility in handling the IP client traffic. For example, the IP client traffic can be configured on a separate VLAN from the PC traffic on the Ethernet switch, since they occupy separate switch ports. Separating the IP Phone traffic and the desktop PC traffic into different VLANs has advantages in the areas of traffic management, security, and network management, as outlined below.
• Traffic Management
— Allows traffic with similar QoS requirements to be grouped together to receive similar treatment.
— The phone is shielded from Layer 2 data storms or broadcast traffic that may interfere with the voice packets.
— Troubleshooting becomes easier because of traffic isolation.

• Security
— The phone is shielded from malicious traffic that may be targeted at the phone in order to steal or disrupt service.
— The VLAN header is added at Layer 2, which makes it difficult for applications or users to add it intentionally.

• Network Management
— Allows for the partitioning of the voice and data network.


— Voice and data clients may be managed by different parts of the Enterprise organization.

Second Ethernet drop for IP Phones with power over Ethernet

The major difference between this deployment scenario and the previous one is how power is supplied to the IP client. A central UPS supply, such as a Network Energy Source, is located in or near the wiring closet and powers a power patch panel. The power patch panel feeds power through the Ethernet cable down to the IP client. A special cable is required to split out the power to the cylindrical power connector of the IP Phones. One advantage of this installation is that power to the IP clients is centrally controlled, and in the event of a commercial AC power outage, voice service may still be available through the UPS-supplied power patch panel.

Figure 46 Second Ethernet drop for IP Phone with power over Ethernet

The Phase 1 IP Phone clients use the Nortel UNIStim protocol for call processing, and therefore cannot function independently in a SIP environment. There are three deployment scenarios for establishing communications between an IP Phone (Phase 1 or Phase 2) and another SIP client.

The first scenario is for the Phase 1 IP Phone to use the IP Client Manager (IPCM) to perform the UNIStim-to-SIP conversion. In this scenario, the IPCM communicates with the IP Phone using the UNIStim protocol and converts the messaging received from the IP Phone client to SIP. The IPCM then sends the messaging to the Session Manager, which forwards the information to the correct SIP client destination. Once the actual RTP connection is established, the IP Phone communicates directly with the SIP endpoint.


The second scenario is for an IP Phone (Phase 2) to use the Multimedia PC Client, rather than the IPCM, for UNIStim-to-SIP messaging conversion. The IP Phones have the capability to switch between the IPCM and the Multimedia PC Client. Once again, all messaging from the IP Phone is communicated first to the Multimedia PC Client, which then communicates with the Session Manager. After the RTP connection has been established, the IP Phone can communicate directly with the SIP endpoint.

The third scenario only applies to the IP Phone 2004 with SIP firmware. That is because the IP Phone 2004 with SIP firmware communicates over SIP, and can connect directly to another SIP client.

The IP Phones and Multimedia PC Client support DiffServ markings. The IP Phones also support 802.1p and q, whereas the PC platform on which the Multimedia PC Client software is loaded may not. However, there should be no conflict in the scenario where the IP Phones use the Multimedia PC Client for messaging and call establishment, because the bearer path (RTP) is established directly between the IP Phone and the SIP endpoint; neither the Multimedia PC Client nor the IPCM is in the RTP media path.

Server-side network considerations
The MCS 5100 LAN is the point of aggregation for many of the network resources that must be accessible to the IP clients. Because the MCS 5100 LAN is the major traversal point for many of the signaling, bearer, and management traffic flows between client and server, there are server-side network considerations that must be addressed to provide the required levels of QoS.

MCS 5100 LAN
Nortel recommends the use of QoS-capable Layer 2 switches, such as the BPS 2000, for MCS 5100 LAN connectivity. These switches support DiffServ, 802.1Q VLANs, and 802.1p priority classification marking. The number of BPS switches deployed in the MCS 5100 LAN depends on the number of MCS servers deployed. A minimum of two Layer 2 QoS-capable switches must be deployed for redundancy. The Layer 2 QoS-capable switches must be able to uplink to the customer's Layer 2 and Layer 3 switch router. Each uplink must be configured as a trunk (802.1Q) and configured with the following requirements:
• STP (spanning tree protocol) must be turned off.
• The port speed for each uplink must be configured to 1000Mbps (GigE) full-duplex on both sides.
• IGMP (both proxy and snooping) must be turned off on the uplink router.


• The uplink Layer 2 and Layer 3 switch-router should be able to separate the VLAN traffic for proper direction.

IP network routing recommendations and considerations
A simple, stable, and scalable IP network environment is essential for MCS services and applications. This section provides recommendations and considerations pertaining to IP addressing, IP routing, and IP over the WAN.

IP addressing
There are many IP addressing considerations to take into account when planning an MCS network. To avoid having to readdress the network due to future network expansion, an adequate number of IP addresses should be allocated during the initial planning stages. For ease of management, IP addresses should be distributed, where applicable, to be handled by the regional, departmental, or local network administrators. IP address summarization, and allocating IP addresses in meaningful ways such as by region, area, or geographical boundary, can further simplify network management. Other considerations are to use variable-length subnet masks (VLSM) for efficient use of address space, and to use public IP addresses only for devices that need to be accessible from the Internet or an extranet. More discussion of IP addressing techniques can be found in Appendix "IP functional components" (page 319).
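The VLSM and summarization techniques above can be sketched with the Python standard library's ipaddress module. The address block below is a hypothetical example, not an MCS requirement: a /16 is carved into site-sized /24s, and one site is further split into /26s for small branches.

```python
import ipaddress

# Hypothetical enterprise block carved with variable-length masks.
campus = ipaddress.ip_network("10.20.0.0/16")

# 256 site-sized /24s, summarizable back to the single /16.
site_subnets = list(campus.subnets(new_prefix=24))

# One site split again into four /26s for small branches (VLSM).
branch = list(site_subnets[0].subnets(new_prefix=26))
```

Allocating contiguously like this keeps each region announceable as one summary route, which shrinks routing tables and speeds convergence.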

Nortel recommends using a consistent IP address allocation scheme across the enterprise. DHCP may be used to allocate IP addresses for clients such as Multimedia PC Clients and IP Phones, while static addresses should be given to devices such as MCS servers, network printers, and network infrastructure equipment such as hubs and routers. If DHCP servers are used, they should synchronize currently allocated IP addresses with a DNS database. Nortel NetID provides enterprise-wide DHCP and DNS services with dynamic DNS name synchronization between DHCP and DNS.

IP routing
Maintaining a routing environment that is simple and stable, offers quick convergence, and minimizes bandwidth consumption for routing updates is critical for MCS applications. One IP routing consideration during an MCS network deployment is to devise physical and logical LAN and WAN topologies that optimize the use of redundant links; this can speed up fault recovery and simplify network management. Choosing WAN technologies that scale for core, regional, branch, and service provider connections, and selecting high-performance routers and switches for the backbone and other high-traffic areas, should also be considered. In addition, consider minimizing end-to-end hop counts, switching at Layer 3 where possible, using static or default routes over low-speed WAN links where applicable, and using scalable and robust routing protocols such as OSPF and BGP in the core of large networks.


Careful engineering of link over-subscription should be considered to avoid impact on MCS applications during normal as well as failure conditions. Understanding the media and signaling IP network convergence requirements for MCS applications, along with the QoS requirements for the different traffic flows, is essential. Network planners should develop a consistent routing model for connecting campus, branch, and carrier or ISP networks. Network growth requirements for network transport, device switching capacity, port speeds, port density, device routing capacity, and IP address space should all be considered for MCS applications support. Finally, IP address space should be properly managed across administrative boundaries, such as between enterprise customers and ISPs, to avoid address overlap and network segmentation problems.

Additional information on IP routing is available in the "Deployment considerations" (page 87) section and Appendix "IP functional components" (page 319).

IP over WAN
WAN links that connect the enterprise to the service providers, and links that aggregate traffic for interconnecting branch offices, need to support the bandwidth requirements of MCS applications. Some considerations for WAN connectivity:
• Choose primary and backup paths for critical applications.
• Enable rapid error detection for quick convergence.
• Establish SLAs (Service Level Agreements) over the provider's network.
• Ensure delay and latency are within the total end-to-end delay budget.
• Ensure jitter introduced by other nonreal-time traffic is within the acceptable limit.
• Use a separate VC (virtual circuit) for MCS traffic (if possible) when ATM or Frame Relay is used, to manage QoS for real-time traffic.
• Ensure that the proper holes are opened through firewalls and NAT devices to support MCS traffic.
• Use DiffServ with consistent Layer 2↔Layer 3 QoS mapping for end-to-end QoS support.
• Implement MPLS for QoS and traffic engineering through the carrier-managed network.
• Assign MCS traffic to the highest priority queue using a strict-priority scheduler.


Low-speed access link considerations
Low-speed access links from the Enterprise site to the MCS PoP can be problematic: they may become congested during high-traffic periods, causing network bottlenecks and thereby presenting a challenge to end-to-end QoS. The following table shows packet transmission delay, in milliseconds, for different link speeds. As the table shows, even in a noncongested condition, a large packet can impose a long delay on all packets behind it on a slow link. As a result, it may cause a delay exceeding the acceptable end-to-end delay budget.

Table 23 Packet transmission delay for different link speeds (in milliseconds)

Link Speed (Kbps) | 40 Bytes | 80 Bytes | 88 Bytes | 136 Bytes | 184 Bytes | 232 Bytes | 280 Bytes | 520 Bytes | 1K Bytes | 1.48K Bytes
56 | 5.714 | 11.429 | 12.571 | 19.429 | 26.286 | 33.143 | 40.000 | 74.286 | 146.286 | 211.429
64 | 5.000 | 10.000 | 11.000 | 17.000 | 23.000 | 29.000 | 35.000 | 65.000 | 128.000 | 185.000
128 | 2.500 | 5.000 | 5.500 | 8.500 | 11.500 | 14.500 | 17.500 | 32.500 | 64.000 | 92.500
256 | 1.250 | 2.500 | 2.750 | 4.250 | 5.750 | 7.250 | 8.750 | 16.250 | 32.000 | 46.250
384 | 0.833 | 1.667 | 1.833 | 2.833 | 3.833 | 4.833 | 5.833 | 10.833 | 21.333 | 30.833
1,000 | 0.320 | 0.640 | 0.704 | 1.088 | 1.472 | 1.856 | 2.240 | 4.160 | 8.192 | 11.840
1,540 | 0.208 | 0.416 | 0.457 | 0.706 | 0.956 | 1.205 | 1.455 | 2.701 | 5.319 | 7.688
2,048 | 0.156 | 0.313 | 0.344 | 0.531 | 0.719 | 0.906 | 1.094 | 2.031 | 4.000 | 5.781
10,000 | 0.032 | 0.064 | 0.070 | 0.109 | 0.147 | 0.186 | 0.224 | 0.416 | 0.819 | 1.184
100,000 | 0.003 | 0.006 | 0.007 | 0.011 | 0.015 | 0.019 | 0.022 | 0.042 | 0.082 | 0.118
150,000 | 0.002 | 0.004 | 0.005 | 0.007 | 0.010 | 0.012 | 0.015 | 0.028 | 0.055 | 0.079
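Every entry in Table 23 comes from one serialization-delay formula: the packet size in bits divided by the link rate. A one-line helper (hypothetical name) reproduces the table:

```python
def transmission_delay_ms(packet_bytes, link_kbps):
    """Serialization delay of one packet on a link, in milliseconds.

    packet_bytes * 8 gives bits; dividing by the rate in kbit/s
    yields milliseconds directly (kbit/s == bit/ms).
    """
    return packet_bytes * 8.0 / link_kbps
```

For example, a 1480-byte packet on a 56 Kbps link serializes in about 211.4 ms, which by itself exceeds a typical 150 ms one-way voice delay budget.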

To avoid excessive queuing delay, the following are recommendations for sites that use slow access links (less than T1 speed) to the MCS:
• Use a link speed of 256 Kbps or above for branch WAN connections.
• Shape the hub site high-speed links to avoid saturating the branch site.
• Keep jitter under 15ms over the low-speed links.
• Use queuing techniques that give preference and priority to voice traffic over noncritical data during periods of congestion.


• Fragment packets using PPP with the class extension, or keep them at 512 bytes or lower, to minimize jitter.

IP address requirements
Each network element within the MCS system requires a specific number of IP addresses. Active and standby servers should reside on the same subnet as the peer node so that service logical IP addresses can float to the alternate server in the event of failure. The following table shows the IP address requirements for the MCS system.

Table 24 IP address configuration for MCS servers

MCS Element | No. of IP Addresses | Notes
Server for Database Manager | 1 | 1 physical
Server for Session Manager | 2 | 1 physical, 1 service logical IP (for redundant configuration only), on same subnet
Server for System Manager (primary) and Accounting Manager (secondary) | 2 | 1 physical, 1 service logical IP for the System Manager, on same subnet
Server for System Manager (secondary) and Accounting Manager (primary) | 2 | 1 physical, 1 service logical IP for the Accounting Manager, on same subnet
Server for IPCM and Provisioning Manager | 1 | 1 physical
Border Control Point (half-shelf) | 7 | 1 for the host card, 1 for each media card (up to 6 media cards for each half shelf)
Terminal Server | 1 |
System Management Console | 1 |
Media Application Server | | See Media Application Server Planning and Engineering (NN42020-201) for IP requirements for MAS servers and applications.
IN SIP Signaling Gateway | | See the documents for the Universal Signaling Point (USP) for ISSG IP requirements.

Note: IBM eServer x306m servers do not support LOM over Ethernet on the server. LOM is supported through serial connectivity to the Terminal Server.


IP network services requirements
Certain IP network services are required outside of the MCS 5100 network solution. This section explains the functions provided by each of these IP network services and describes which MCS 5100 components use them.

Dynamic Host Configuration Protocol (DHCP)
DHCP is a communications protocol that enables network administrators to centrally manage and automate the assignment of IP addresses in a network. DHCP uses the concept of a lease: the amount of time that a given IP address is valid for a computer. The lease time can vary depending on how long a user is likely to require the Internet connection at a particular location.

DHCP enables a network administrator to supervise and distribute IP addresses from a central point, and automatically assigns a new IP address when a computer is plugged into a different place in the network. Without DHCP, the IP address must be entered manually at each computer and, if a computer moves to a location in another part of the network, a new IP address must be entered. DHCP uses UDP (User Datagram Protocol) encapsulated in IP to transmit its requests. For more detailed information on DHCP, see RFC 2131.

Within the enterprise, one or more network devices can be allocated as DHCP servers and strategically placed at optimum locations. MCS clients such as the IP Phone and Multimedia PC Client can use DHCP to obtain an IP address for network connectivity. The benefits of using DHCP to assign IP addresses include the following:
• less configuration required (plug-and-play)
• ease of manageability for IP clients
• less room for configuration errors
• more efficient use of IP addresses

The IP Phones support the configuration of VLAN parameters. However, the underlying network switches must be configured to filter VLAN IDs for the VLAN parameters to work properly.

Domain Name System (DNS)
DNS is the way that Internet domain names are located and translated into IP addresses. A domain name is a meaningful, easy-to-remember handle for an Internet address. Because maintaining a central list of domain name and IP address correspondences would be impractical, the lists of domain names and IP addresses are distributed throughout the Internet in a hierarchy of authority. If one DNS server does not know how to translate a particular domain name, it asks other DNS servers until the correct IP address is returned.

In an MCS network, the MCS clients should be configured either manually or through DHCP with primary and secondary (sometimes tertiary) DNS servers.

Network Time Protocol (NTP)
NTP is a protocol used to synchronize computer clock times in a network of computers. NTP is now an Internet standard that uses Coordinated Universal Time (UTC) to synchronize computer clock times to a millisecond, and sometimes to a fraction of a millisecond. UTC time is obtained using several different methods, including radio and satellite systems. Specialized receivers are available for high-level services such as the Global Positioning System (GPS) and services operated by the governments of some nations. However, it is not practical or cost-effective to equip every computer with such a receiver. Instead, computers designated as primary time servers are outfitted with the receivers, and they use protocols such as NTP to synchronize the clock times of networked computers. Degrees of separation from the UTC source are defined as strata.

The term NTP applies to both the protocol and the client and server programs that run on computing devices. In basic terms, the NTP client initiates a time-request exchange with the time server. As a result of this exchange, the client is able to calculate the link delay and local offset, and to adjust its local clock to match the clock on the server. As a rule, six exchanges over a period of about five to ten minutes are required to initially set the clock. Once synchronized, the client updates the clock about once every ten minutes, usually requiring only a single message exchange. Redundant servers and varied network paths are used to ensure reliability and accuracy. In addition to client-server synchronization, NTP also supports broadcast synchronization of peer computer clocks. NTP is designed to be highly fault-tolerant and scalable. For more detailed information on NTP, see RFC 1305.

Note: More information about the Network Time Protocol can be found at this URL: http://www.ntp.org
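The delay and offset calculation described above uses the four timestamps of one NTP exchange. The standard formulas, sketched here with a hypothetical helper name, are: round-trip delay = (t4 - t1) - (t3 - t2), and clock offset = ((t2 - t1) + (t3 - t4)) / 2.

```python
def ntp_offset_and_delay(t1, t2, t3, t4):
    """Clock offset and round-trip delay from one NTP exchange.

    t1: client transmit, t2: server receive, t3: server transmit,
    t4: client receive -- all in seconds, each read from the local
    clock of the machine that records it.
    """
    delay = (t4 - t1) - (t3 - t2)
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    return offset, delay
```

For a server clock 5 seconds ahead and a symmetric 0.1-second path each way, the formulas recover an offset of 5.0 s and a delay of 0.2 s; asymmetric paths are the main source of residual error, which is why NTP prefers redundant servers over varied routes.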

The MCS system ships with a default configuration in which each Management and Accounting Manager server uses its own local hardware clock as its preferred reference. The local hardware clock is undisciplined and drifts. For this reason, modify the default configuration as soon as the MCS system is up and has access to a disciplined source of time. The only way to ensure that the MCS system is synchronized to UTC is to use a reliable, low-stratum time server or an external clock connected directly to the Management and Accounting Servers.

Nortel Multimedia Communication Server 5100 MCS Planning and Engineering NN42020-200 04.02 Standard Release 4.0 8 October 2009 Copyright © 2007-2009, Nortel Networks

The recommended NTP subnet configuration for the MCS system is as follows:
• Both servers for the Management and Accounting Manager should obtain NTP service from at least two different sources of synchronization, both at the same stratum level.
• Both servers for the Management and Accounting Manager should also peer with one other source at their own stratum level (optional).
• The two Management and Accounting Servers should peer with each other.
• Every other component should obtain NTP service from both Management and Accounting Servers.
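As an illustration only, the recommendations above might map onto an ntpd configuration like the following on one Management and Accounting Server (all host names are hypothetical placeholders, not values from this document):

```
# Two low-stratum outside sources of synchronization
server time1.example.com
server time2.example.com
# Optional peer at the server's own stratum level
peer   peer1.example.com
# Peer with the other Management and Accounting Server
peer   mgmt-acct-2.example.com
```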

See Figure 47 "Recommended MCS NTP subnet configuration" (page 188) for an illustration of the recommended configuration.

Figure 47 Recommended MCS NTP subnet configuration

This configuration results in two different sources of synchronization at the lowest stratum level (preferably 1), plus two other sources at the same stratum as the Management and Accounting Servers themselves, for a total of four outside synchronization sources. An outside source is one that is not part of the MCS system: neither a Management and Accounting Server nor another MCS component. Ideally, each path shown in the diagram would use a different physical network path; this helps minimize the risk of a single point of failure. In Figure 47 "Recommended MCS NTP subnet configuration" (page 188), MS refers to a server for the Management and Accounting Manager.


Interworking with other networks

This section explains the general principles of MCS 5100 interworking with circuit-switched networks, including PBX and PSTN networks. It describes the topology of media traffic flow between MCS and gateway components in different deployment scenarios, and provides information on connection and interworking requirements. This section contains the following:
• "Interworking with circuit-switched networks" (page 191)
• "Interworking with PBX" (page 197)
• "Interworking with PSTN" (page 199)
• "SIP Trunking" (page 200)

Interworking with circuit-switched networks
The MCS system is an IP-based multimedia communication system. MCS systems are often required to interwork with circuit-switched networks for voice services. A gateway is required to interwork the signaling and media between the two networks. For signaling, the gateway communicates with the circuit-switched network using one of the many circuit-switched signaling protocols. These range from Primary Rate Interface (PRI) signaling for large gateways to Channel Associated Signaling (CAS) and E&M signaling for small analog gateways. On the MCS side, the gateways communicate with the Session Manager using SIP.

For large and medium-size gateways, the circuit-side media path usually connects to a digital media stream in a TDM multiplexed hierarchy (for example, T1), depending on the number of circuits. For small analog gateways, the circuit-side media connects to analog FXS or FXO lines. On the MCS side, the media is packaged as RTP/UDP over IP; the physical interface on the IP side is usually Ethernet. The main functions of the gateway are to provide signaling translation and media transcoding between the MCS and circuit-switched networks.


Figure 48 MCS interworking with circuit-switched networks

The MCS 5100 uses the Media Gateway 3200 to interwork with legacy TDM switches and PBXs. When the Media Gateway 3200 is used, both the signaling and media streams are transported through the PRI trunks. The Media Gateway 3200 can act as either a user-side or network-side PRI device. A single Media Gateway 3200 can be configured with 1, 2, 4, 8, or 16 DS1/E1 facilities.

Where only a very small amount of TDM capacity is required, the Mediatrix* 1204, a third-party SIP FXO gateway, can provide four ports of FXO access to central offices. Analog TDM devices can be connected to the SIP network through the Mediatrix 1104, a third-party SIP FXS gateway that provides four FXS ports for analog devices.

MCS 5100 support of circuit-switched signaling
In-band tone signaling
In addition to out-of-band signaling systems (such as the PRI D-channel), circuit-switched networks use in-band tone signaling for a variety of purposes. Some of these tones, such as call progress tones, are intended to provide feedback to human users; others, such as DTMF tones, are intended for machine detection by devices monitoring the media path. For successful interoperation with circuit-switched networks, the MCS must support these tones in the media path.

Dual-tone multi-frequency (DTMF) signaling
DTMF tones that originate from the circuit-switched network need to be carried through the MCS network and transmitted to the MCS client. The DTMF tones are encoded directly by the gateways into the RTP media stream and transmitted to the MCS clients.


DTMF signals originating from MCS clients are forwarded through the gateway to the circuit-switched network in one of the following ways, depending on system provisioning:
• sent as a SIP INFO message to the Session Manager, which relays the message to the circuit-switched gateway (out-of-band signaling); the DTMF tone is generated at the gateway and inserted on the circuit side
• encoded directly into the RTP media and transcoded into PCM or an analog signal at the gateway (best-effort in-band signaling)
• transmitted using RFC 2833, the IETF encoding scheme for carrying DTMF events in RTP packets

Both the out-of-band signaling and the best-effort in-band signaling methods of DTMF transmission are limited with respect to interoperability and interworking between components.

DTMF transmission using RFC 2833 is the most common method employed in packet networks today. It provides better third-party interoperability and improved interworking with other Nortel products.

In-band, best-effort DTMF transmission does not guarantee tone delivery, because the CODEC compression used can degrade tones. G.729 provides approximately 95% accuracy for each DTMF digit transported, so endpoints negotiating best-effort in-band transmission with G.729 can experience misdialed or unrecognized tones. RFC 2833 addresses this issue and is supported on all applicable endpoints of the MCS 5100 system.
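The 95% per-digit figure above compounds quickly over a full dial string; a small arithmetic sketch (the function name is illustrative, and the model assumes independent per-digit errors):

```python
# Per-digit DTMF recognition accuracy compounds over a dial string:
# the chance that an n-digit number arrives intact is accuracy ** n.
def dial_string_success(per_digit_accuracy, n_digits):
    return per_digit_accuracy ** n_digits

# With ~95% per-digit accuracy over in-band G.729, a 10-digit number
# survives only about 60% of the time, hence the misdialing risk.
print(round(dial_string_success(0.95, 10), 3))  # 0.599
```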

RFC 2833 specifies the telephone-event RTP payload format, which carries named events in the range 0-173. Events 0-15 are used for the DTMF tones for digits 0-9, *, #, and A-D.
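The telephone-event payload described above is only four bytes on the wire. A minimal sketch of packing and unpacking one, following the field layout in RFC 2833 (event number, end bit, volume, duration); function names are illustrative:

```python
import struct

# DTMF digit -> RFC 2833 event number (events 0-15).
DTMF_EVENTS = {**{str(d): d for d in range(10)}, "*": 10, "#": 11,
               "A": 12, "B": 13, "C": 14, "D": 15}

def pack_telephone_event(digit, end=False, volume=10, duration=160):
    """Pack a 4-byte RFC 2833 telephone-event payload.

    Layout: event (8 bits) | E bit + reserved bit + volume (6 bits) |
    duration (16 bits, network byte order).
    """
    byte1 = (0x80 if end else 0) | (volume & 0x3F)
    return struct.pack("!BBH", DTMF_EVENTS[digit], byte1, duration)

def unpack_telephone_event(payload):
    """Return (event, end_bit, volume, duration) from a payload."""
    event, byte1, duration = struct.unpack("!BBH", payload)
    return event, bool(byte1 & 0x80), byte1 & 0x3F, duration

pkt = pack_telephone_event("#", end=True)
print(unpack_telephone_event(pkt))  # (11, True, 10, 160)
```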

The Nortel implementation of RFC 2833 support includes negotiation among the three different types of DTMF transmission: RFC 2833, the SIP INFO method, and in-band DTMF. This capability is important because the enterprise portfolio commonly uses out-of-band signaling, while MCS 5100 interworking with the CS 2000 uses an in-band mechanism for DTMF transport.

Communication between the MCS 5100 system and the enterprise gateway passes through NAT (Network Address Translation). When provisioning the gateway, provision on the MCS the public NAT address that corresponds to the private address of the gateway.


The preferred default for DTMF transmission within the MCS 5100 system is RFC 2833. However, MCS 5100 endpoints can still negotiate other DTMF transmission methods if necessary (see the following table).

Table 25
DTMF support on MCS 5100 endpoints

Component                       In-band best effort   Out-of-band SIP INFO   RFC 2833
Session Manager                 Yes                   Yes                    Yes
Border Control Point            Yes                   Yes                    Yes
Media Application Server        Yes                   Yes                    Yes
Multimedia PC Client            Yes                   Yes                    Yes
Multimedia Web Client           Yes                   Yes                    Yes
IP Phones                       Yes                   Yes                    Yes
IP Client Manager               Yes                   Yes                    Yes
PRI Gateway                     Yes                   Yes                    Yes
CAS Gateway                     Yes                   Yes                    Yes
Q.SIG Gateway                   Yes                   Yes                    Yes
Vegastream PRI Gateway          Yes                   Yes                    Yes
UAS Conferencing Server (R6.0)  No                    Yes                    Yes
FXS                             Yes                   Yes                    Yes
FXO                             Yes                   Yes                    Yes
Ambit (LG1001M)                 Yes                   Yes                    Yes
Leadtek                         No                    Yes                    No
IPDialog                        Yes                   Yes                    No
CS 2000                         Yes                   Yes                    No
PVG                             Yes                   Yes                    No
CICM (CS 2000)                  Yes                   Yes                    No
CS 1000 (H.323)                 No                    No                     Yes
CS 1000 (SIP)                   No                    No                     Yes
BCM (H.323)                     No                    No                     Yes
BCM (SIP)                       No                    No                     Yes


On the IP Client Manager, the UserAgent parameter controls RFC 2833 support. The default value is EnableRFC2833.

The parameters described in the following table must be configured properly in the Media Gateway 3200 INI file, or through the Media Gateway 3200 Web Interface, to switch among in-band DTMF, INFO DTMF, and RFC 2833.

Table 26
Media Gateway 3200 DTMF transmission parameter settings

IsDTMFUsed
  Use out-of-band signaling to relay DTMF digits.
  0 = Disable; DTMF digits are sent according to the DTMFTransportType parameter. (default)
  1 = Enable sending DTMF digits within INFO or NOTIFY messages.
  Note 1: To use out-of-band DTMF, configure IsDTMFUsed = 1 or Enable DTMF = yes.
  Note 2: When out-of-band DTMF is used, the DTMFTransportType parameter is automatically configured to 0, to erase the DTMF digits from the RTP path.

OutOfBandDTMFFormat
  The exact method used to send out-of-band DTMF digits.
  1 = INFO format (Nortel)
  2 = INFO format (Cisco) (default)
  3 = NOTIFY format

RxDTMFOption
  Defines the supported receive DTMF negotiation method.
  0 = Do not declare the RFC 2833 telephone-event parameter in the SDP.
  1 = n/a
  2 = n/a
  3 = Declare the RFC 2833 telephone-event parameter in the SDP (default).
  The gateway is designed to always be receptive to RFC 2833 DTMF relay packets; therefore, it is always correct to include the telephone-event parameter in the SDP by default. However, some gateways use the absence of the telephone-event parameter from the SDP to decide to send DTMF digits in-band using the G.711 coder. If this is the case, you can configure RxDTMFOption = 0.

TxDTMFOption
  0 = No negotiation; DTMF digits are sent according to the DTMFTransportType parameter.
  4 = Enable RFC 2833 payload type (PT) negotiation.


  Note 1: This parameter is applicable only if IsDTMFUsed = 0 (out-of-band DTMF is not used).
  Note 2: If enabled, the gateway:
  • negotiates the RFC 2833 payload type using the local and remote SDPs
  • sends DTMF packets using the RFC 2833 payload type (PT) according to the received SDP
  • expects to receive RFC 2833 packets with the same PT as configured by the RFC2833PayloadType parameter

  Note 3: If the remote party does not include the RFC 2833 DTMF relay payload type in the SDP, the gateway uses the same PT for send and receive.
  Note 4: If TxDTMFOption is configured to 0, the RFC 2833 payload type is configured according to the RFC2833PayloadType parameter for both transmit and receive.

DTMFTransportType
  0 = Erase digits from the voice stream; do not relay to the remote side.
  2 = Digits remain in the voice stream.
  3 = Erase digits from the voice stream; relay to the remote side according to RFC 2833.
  Note: This parameter is automatically updated if one of the following parameters is configured: IsDTMFUsed, TxDTMFOption, or RxDTMFOption.
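Read together, the parameter descriptions in Table 26 suggest an INI fragment like the following to select RFC 2833 transport. This is a sketch assembled from the table above, not a verified Media Gateway 3200 configuration, and the comments are illustrative:

```
IsDTMFUsed = 0         ; no out-of-band (INFO/NOTIFY) DTMF relay
TxDTMFOption = 4       ; negotiate the RFC 2833 payload type
RxDTMFOption = 3       ; declare telephone-event in the SDP (default)
DTMFTransportType = 3  ; erase digits from voice, relay per RFC 2833
```

Per the table notes, DTMFTransportType is updated automatically when the other three parameters are configured, so the last line mainly documents the resulting behavior.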

Call progress tones
Different MCS components are responsible for generating or transmitting the audio tones for dial, busy, and ring-back. The assignment of this responsibility depends on the stage of the call as well as the signaling from the gateway. The following list describes call progress tone handling for a call from the MCS to a circuit-switched network through a gateway:
• dial tone: generated locally at the originating MCS client, as instructed by the Session Manager
• ring-back tone:
  — If the 183 SIP message from the circuit-switched gateway to the originating SIP client contains SDP information, the media path is connected, and the ring-back tone generated by the terminating side is transported in the RTP media stream and presented to the user. This is the normal mode of interworking with circuit-switched networks.
  — If the 183 SIP message does not contain SDP information, the originating MCS client is instructed to generate the ring-back tone locally.
• busy tone: generated locally at the MCS client, as instructed by the Session Manager, when the session setup attempt is unsuccessful

Interworking with PBX
When an MCS system is deployed into an established enterprise, an existing PBX is likely already in service. Assuming the enterprise keeps its PBX and terminal investment, the MCS system must interwork with the existing PBX system. This section describes the architecture and media flow topology when an MCS system networks with circuit-switched PBXs.

In an enterprise deployment, the MCS system is dedicated to serving one enterprise. All MCS servers and gateways are usually located on the enterprise premises. PBX trunk gateways between the MCS and a PBX should be located on site, close to the PBX, to avoid long T1 or E1 links to the PBX trunks. Figure 49 "Interworking between MCS system and PBXs in enterprise deployment" (page 197) shows a typical topology of how an MCS enterprise system connects with the PBX systems in a multi-site network. Note that a multi-site PBX network usually requires a PBX system at each site, connected together with dedicated trunks for traffic and signaling. A multi-site MCS network can be implemented with a single MCS system, with MCS clients and gateways distributed across the sites.

Figure 49
Interworking between MCS system and PBXs in enterprise deployment


The following diagram shows how MCS media traffic flows between MCS clients and PBX terminals through CPE gateways:
• Flow 1 represents the media flow between the MCS clients and the PBX at the same site.
• Flow 2 represents the media flow using a dedicated voice trunk between PBXs to carry the media traffic.
• Flow 3 represents using the enterprise IP WAN to carry the traffic between an MCS client and a remote PBX terminal.
• Flow 4: how the media flows depends on how call routing is set up in the MCS and the PBX systems.

If the private enterprise IP network can support the traffic, it is desirable to use flow 3 rather than flow 2, because this reduces the cost of leased trunks.

Figure 50 Traffic flow between MCS system and PBXs in enterprise deployment

Once trunk gateways are deployed to connect the PBXs, these gateways can also provide connectivity between PBXs, taking over voice traffic that used to be carried over dedicated leased trunks interconnecting the PBXs. This traffic is shown in flow 4. Such a deployment further reduces enterprise voice network operating costs by eliminating the dedicated leased trunks connecting PBXs at different sites.


In an enterprise MCS deployment, the enterprise IP network is expected to provide connectivity between all the MCS clients, gateways, and servers. In addition, the network must provide sufficient capacity, low latency, low jitter, and a low packet loss rate under full voice and data load. It is strongly recommended that QoS be implemented to give voice traffic priority over best-effort data traffic. For more information, see "IP network recommendations" (page 145).

Interworking with PSTN
An MCS system must be connected to the PSTN to enable calling to and from the PSTN. The general requirements for such connectivity are as follows:
1. Terminate local calls. The connection to the PSTN should provide a free calling area footprint similar to that offered by a PSTN line located at the MCS client sites.
2. Provide long distance remote hop-off. The connection to the PSTN should offer remote hop-off capability, in which a call to a remote destination is carried on the IP network and connects to the PSTN at a gateway as close to the PSTN destination as possible, to minimize PSTN long distance tolls.

The PSTN interworking requirements for the MCS deployment are as follows:
1. Local call termination: in general, each enterprise site with an MCS deployment needs to access the PSTN locally. This can be achieved by one or more of the following:
• Deploy an MCS PSTN gateway at each site.
• Deploy an MCS PSTN gateway at a central site in one metropolitan area and route all PSTN traffic terminating in the metro area to that site.
• If the MCS penetration at the site is small (for example, less than 5%) and a CPE gateway connecting to the PBX is available, route the PSTN traffic through the existing PBX at the site.

2. Long distance remote hop-off: for long distance calls, the call should be delivered to the PSTN as close to the destination as possible. This routing maximizes the use of the IP network for transport and minimizes PSTN toll network usage. In an enterprise with a large number of sites and PSTN gateways covering major metro areas, terminating PSTN traffic at the appropriate gateways significantly reduces PSTN toll charges.


SIP Trunking
The MCS system supports interworking over SIP trunking between the MCS system and the gateway endpoints of the Meridian 1 and the Succession Communication Server 1000 and 1000M (release 4.0). The interworking supports the following services:
• private dial plans (UDP/CDP)
• transit of called party number; calling party name and number; TON/NPI; and presentation and screening indicators
• routing based on E.164 number, private number, or alias ID
• the MCS services set:
  — calling-line ID
  — calling-name display
  — call transfer (blind, consultative)
  — call forward/reject/pass
  — call hold/mute/retrieve
  — SimRing (simultaneous ringing)
  — sequential ringing
  — call screening and management
  — three-way calling
  — Ad Hoc audio conferencing
  — call redirect through REFER
  — Meet Me audio conferencing
  — audio announcements
  — Music on Hold
  — unified messaging
  — Converged Desktop II
  — Call Park
• message waiting indicator (MWI) using the SIP NOTIFY message for the following scenarios:
  — CallPilot is hosted off the Meridian 1/Succession 1000/Succession 1000M
  — CallPilot is hosted off the MCS system
• audio CODEC negotiation and support for the G.711 and G.729a CODECs

Traffic considerations

This section contains the following:
• "Traffic flow overview" (page 201)
• "Voice traffic" (page 202)
• "Video traffic" (page 212)
• "Instant Messaging traffic" (page 212)
• "Collaboration traffic" (page 213)
• "Converged Desktop traffic" (page 213)

Traffic flow overview
The MCS 5100 system is a distributed IP-based communication system. The Session Manager controls the media and signaling flows between the various components, while the System Manager controls the management flows between the MCS components. The key traffic flows and the protocols used in the MCS network include the following:
• signaling and control flows: SIP, SIP-T, SQL/JDBC, SOAP, HTTP, HTTPS, UNIStim, WCSCP, MPCP, MGCP+
• media flows: RTP, RTCP, UDP
• management data flows: OMI, PCP, DTP, SNMP, NTP, TCP

Figure 51 "Media, signaling and management data flows with Border Control Point" (page 202) illustrates the overall media, signaling and control, and management data flows between MCS components when the deployment includes the Border Control Point.

These flows are mostly internal to the MCS system. For the purpose of network engineering, the media and signaling flows are the most important flows due to the real-time response requirements. The media and signaling flows must be supported with low latency and high priority on the IP network. The management flows between the System Manager and the managed entities also need to be supported for proper network operations.


Figure 51 Media, signaling and management data flows with Border Control Point

Voice traffic
Overview
Among the different MCS services, voice service has the most complex traffic flows, due to the requirements for interworking with the PSTN and enterprise PBX systems. Assuming user voice communication behavior does not change with the underlying technology, telephony traffic planning rules and analysis can be applied to MCS voice services planning. The task of computing the telephony flows between different user groups in an MCS deployment is similar to PBX network design. In an enterprise deployment, the telephony traffic from the MCS system terminates in one of the following ways:
• stays within the MCS system
• terminates on the enterprise PBX system
• terminates on the PSTN

To a large extent, the telephony traffic flow depends on user communication behavior in the enterprise and is independent of the telephony system technology, whether MCS or circuit-switched PBX.

In an enterprise, the telephony traffic flows between communities are usually quite symmetric, except for call center applications. As shown in Figure 52 "Traffic flow with return traffic" (page 203), there is in general a roughly equal amount of traffic originating from the other networks and terminating on the MCS system.

Figure 52 Traffic flow with return traffic

Enterprise deployment scenarios
The following two scenarios for deploying the MCS system into an enterprise served by a PBX are considered:
• Scenario 1: the MCS system is deployed into a new site to serve the entire population of that site.
• Scenario 2: the PBX population is capped, and the MCS system is deployed to serve the incremental population growth.


In a new site deployment, traffic terminating within the same site is all MCS client traffic. Traffic to and from the PSTN needs to go through PSTN gateways. Traffic to and from other enterprise sites can be divided into two components: PBX traffic and MCS client traffic. The ratio of this traffic corresponds to the average MCS population penetration rate relative to the rest of the enterprise sites. The traffic component to and from the PBX needs to pass through a PBX trunk gateway.

In an incremental MCS deployment scenario, the majority of the population at the site of interest is served by the PBX, with incremental population growth served by the MCS system. The following diagram shows the traffic flow within the site. In the early phases of deployment, the MCS penetration at the site can be low. The traffic originating from site A that terminates within the same site is expanded as illustrated in the figure. The flow labels inside site A show the fraction of traffic flowing between the MCS client community and the PBX community. If the MCS penetration (PA) is low, the majority of the traffic within the site terminates on the PBX, and the traffic originating from and terminating on the MCS is very small.

Traffic within an enterprise site
See Figure 53 "Traffic flows within a site" (page 205) for an illustration of the traffic flow within an enterprise site. In the illustration, where s is the traffic that stays within the site and PA is the MCS penetration at the site, the traffic loads of Enterprise XYZ Site A are as follows:
• PBX-to-PBX traffic load: s(1 - PA)²
• PBX-to-MCS traffic load: s(1 - PA)PA
• MCS-to-PBX traffic load: sPA(1 - PA)
• MCS-to-MCS traffic load: sPA²

For traffic flowing from the MCS community and leaving the site, the ratio of traffic terminating on the MCS community at other enterprise sites to traffic terminating on the PBX at those sites follows the MCS population penetration (P).


Figure 53 Traffic flows within a site

Voice traffic in multi-site enterprise deployment
In multi-site enterprise networks, the telephony traffic usually follows some degree of site locality. Between sites of roughly equal population, the traffic distribution can be approximated by a fraction s staying within a site, a fraction t terminating at other sites in the enterprise, and the rest, a fraction u, terminating on the PSTN. See Figure 52 "Traffic flow with return traffic" (page 203) for an illustration of the s, t, and u traffic. An example distribution is that one third of the traffic terminates within the same site, one third terminates at other sites of the enterprise, and one third terminates on the PSTN.

This section highlights the media and signaling flows for voice traffic between different MCS components in a multi-site private enterprise deployment. It covers traffic interworking with PBX systems and the PSTN.

Media flow
Figure 54 "Media flow" (page 206) shows the media flow between different MCS components in a multi-site private enterprise deployment, using gateways deployed at each site to interface with the PBX at that site.


Figure 54 Media flow

The following is a description of each numbered media flow:

1. MCS client-to-MCS client calls within Enterprise Site A
2. MCS client-to-MCS client calls across different enterprise sites
3. MCS client-to-PBX terminal calls within Enterprise Site A
4. calls from MCS clients at Site A to PBX terminals at other enterprise sites
5. calls from MCS clients at other enterprise sites to PBX terminals at Site A
6. PBX-to-PBX calls between sites, routed through the Media Gateway 3200 and transported over IP
7. MCS client calls to and from the local PSTN
8. local PSTN-to-remote MCS client calls
9. MCS client-to-remote PSTN calls

The direction of the arrow in the above flows represents the direction of call origination. The media flow is always bidirectional for all telephony calls.

A media flow often passes through multiple components. The network planner must dimension the traffic of each flow in Erlangs and compute the resulting loads on each component and IP transport link. The following list summarizes the different traffic flows going through the MCS network components in a multi-site enterprise deployment:

Nortel Multimedia Communication Server 5100 MCS Planning and Engineering NN42020-200 04.02 Standard Release 4.0 8 October 2009 Copyright © 2007-2009, Nortel Networks

. Voice traffic 207

PSTN gateway at Site A
The PSTN gateway traffic at Site A can be broken down into the following components:
• Traffic from the PSTN to enterprise Site A. This component represents traffic originating from the PSTN and terminating in Site A, and is represented by the PSTN-to-enterprise direction of Flow 7.
• Traffic from enterprise Site A terminating on the local PSTN. This is represented by the enterprise-to-PSTN direction of Flow 7.
• Traffic from other enterprise sites terminating in the PSTN free calling area where Site A is located. This is represented by Flow 8. Note that the call origination represented by this flow is unidirectional, from MCS to PSTN; PSTN-to-MCS calls are usually routed to the local PSTN gateway of that site.

Media Gateway 3200 at Site A
The traffic components flowing through this gateway include:
• traffic between the MCS community at Site A and the PBX community at Site A. This component is represented by Flow 3 and includes originations from the MCS as well as from the PBX.
• traffic between the MCS communities at other sites and the PBX community at Site A. This component is represented by Flow 5 and includes originations from the MCS as well as from the PBX.
• traffic between PBXs carried on the IP network through the Media Gateway 3200. This is represented by Flow 6.

MCS clients
Collectively, the MCS clients at Site A carry the following traffic components:
• traffic originating and terminating within the Site A MCS community (Flow 1)
• traffic between the Site A MCS community and MCS communities at other sites (Flow 2)
• traffic between the Site A MCS community and the Site A PBX community (Flow 3)
• traffic between the Site A MCS community and PBXs at other sites (Flow 4)
• traffic between the Site A MCS community and the local PSTN (Flow 7)
• traffic from the Site A MCS community to other sites for remote hop-off to the PSTN (Flow 9)
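Dimensioning the per-component flows above in Erlangs typically means sizing trunks or gateway ports for a target blocking probability. The document does not prescribe a formula, but the classic Erlang B recurrence is the standard tool; a minimal sketch:

```python
def erlang_b(traffic_erlangs, n_trunks):
    """Erlang B blocking probability via the stable recurrence.

    traffic_erlangs: offered load A in Erlangs.
    n_trunks: number of trunks or gateway ports N.
    B(0) = 1; B(n) = A*B(n-1) / (n + A*B(n-1)).
    """
    b = 1.0
    for n in range(1, n_trunks + 1):
        b = traffic_erlangs * b / (n + traffic_erlangs * b)
    return b

# Example: 2 Erlangs offered to 5 trunks blocks about 3.7% of attempts.
print(round(erlang_b(2.0, 5), 4))  # 0.0367
```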


Enterprise IP network
The enterprise IP network carries traffic between enterprise sites. The traffic carried by the enterprise network can be described by an NxN matrix, where N is the number of sites in the enterprise. Considering traffic between one pair of sites A and B, the traffic that flows between them includes the following:
• traffic between the Site A MCS community and the Site B MCS community
• traffic from the Site A MCS community to the Site B PBX community
• traffic from the Site B MCS community to the Site A PBX community
• traffic from the Site A MCS community to the PSTN gateway at Site B
• traffic from the Site B MCS community to the PSTN gateway at Site A
• traffic between the Site A PBX community and the Site B PBX community, when traffic between PBXs is carried on the IP network through CPE gateways

The list above represents only the traffic between one pair of sites. In general, in a multi-site enterprise MCS network, computing the traffic for a site requires summing the traffic between that site and all other sites.
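The per-site summation can be sketched as follows. The site names and Erlang values below are hypothetical examples, not figures from this document:

```python
# NxN matrix of offered traffic in Erlangs between site pairs;
# traffic[a][b] holds the traffic between site a and site b.
# The values are hypothetical, for illustration only.
traffic = {
    "A": {"A": 0.0, "B": 4.2, "C": 1.5},
    "B": {"A": 4.2, "B": 0.0, "C": 2.0},
    "C": {"A": 1.5, "B": 2.0, "C": 0.0},
}

def site_total_erlangs(site: str, matrix: dict) -> float:
    """Sum the traffic between one site and all other sites."""
    return sum(erl for other, erl in matrix[site].items() if other != site)

print(site_total_erlangs("A", traffic))  # 4.2 + 1.5 Erlangs for Site A
```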

Signaling flow
The following diagram shows the signaling flow between the MCS signaling servers, clients, and gateways. For presentation clarity, only the signaling for Site A is shown. Signaling for the Multimedia Web Clients and IP Phones must flow through the respective signaling client managers, while signaling for the Multimedia PC Clients and the Media Gateway 3200 is sent directly to the Session Manager using SIP.

In general, the signaling traffic is only a very small fraction (on the order of 1% or less) of the total traffic. However, the signaling traffic travels a different path from the media and must be given due consideration.

MCS clients signaling
The MCS system supports three types of clients: the IP Phone client, the Multimedia Web Client, and the Multimedia PC Client. In addition, the IP Phone client can be used in conjunction with the Multimedia PC Client, which controls the IP Phone client using UNIStim. The signaling traffic for each client can be divided into three components:
• power-up and launch traffic: the average volume of traffic associated with each power cycle (Internet Telephone client) or program launch and termination (PC client, Web client)
• call setup traffic: the average volume of traffic associated with each basic call setup
• maintenance traffic: the average traffic per unit of time associated with maintaining a client even when there is no call event. The maintenance traffic can be further divided into client reregistration traffic, firewall and NAT maintenance traffic, and display refresh messages.

See Figure 55 "Signaling flow" (page 209) for an illustration of the signaling flow.

Figure 55 Signaling flow

MCS clients are configured to reregister once for each configurable period of time. When determining this value, consider the impact on the Session Manager and Database Manager. When clients are separated from the Session Manager by a firewall/NAT (as in typical service provider hosted enterprise deployments), a periodic stream of messages is required to keep the firewall/NAT mapping open. The firewall/NAT is typically configured to time out and close unused mappings in three minutes, while the Multimedia PC Clients are typically configured to ping the Session Manager every two and a half minutes. Display refresh applies only to IP Phone clients.
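The keepalive arithmetic above can be sketched as follows. The client count is a hypothetical example; the 150-second ping interval is the "two and a half minutes" from the text, chosen to stay inside the three-minute firewall/NAT idle timeout:

```python
# Rough message-rate arithmetic for firewall/NAT keepalive traffic.
clients = 1000            # hypothetical number of Multimedia PC Clients
ping_interval_s = 150     # 2.5 minutes between keepalive pings
nat_timeout_s = 180       # typical firewall/NAT mapping idle timeout

# The ping interval must be shorter than the NAT timeout,
# or the mapping closes between pings.
assert ping_interval_s < nat_timeout_s

# Each client sends 3600 / 150 = 24 pings per hour.
pings_per_hour = clients * 3600 // ping_interval_s
print(pings_per_hour)  # keepalive messages per hour across all clients
```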

Gateway signaling
Because the MCS is a distributed system, the gateways are not required to be colocated with the signaling servers. In fact, gateways are often located remotely from the central MCS site. It is necessary to compute the gateway signaling traffic volume for proper transport network design. The signaling traffic load for different types of gateways can be calculated by multiplying the per-port signaling load by the number of ports the gateway supports.
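The calculation above is a simple product; a minimal sketch follows. The port count and per-port signaling figure are hypothetical examples, not values from this document:

```python
def gateway_signaling_load(ports: int, per_port_load_kbps: float) -> float:
    """Total signaling traffic for a gateway:
    per-port signaling load multiplied by the number of ports."""
    return ports * per_port_load_kbps

# e.g. a hypothetical 96-port gateway at an assumed 0.2 Kbps of
# signaling per port:
print(gateway_signaling_load(96, 0.2))  # total signaling load in Kbps
```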

Nortel Multimedia Communication Server 5100 MCS Planning and Engineering NN42020-200 04.02 Standard Release 4.0 8 October 2009 Copyright © 2007-2009, Nortel Networks

. 210 Traffic considerations

Voice media bandwidth
The IP network bandwidth consumed by one voice channel depends on the speech CODEC used and the size of the voice sample carried in each packet. The following table shows the voice data payload generated by different CODECs for different frame sizes.

Note: The computations in the table do not include 802.1p&q headers.

Table 27 Voice channel bandwidth by CODEC

CODEC            Frame     Voice    IP       Ethernet  Ethernet   Frame    Frame    ATM AAL5  Headers   ATM
                 duration  payload  packet   frame     bandwidth  Relay    Relay    PDU       plus      in
                 in ms     (bytes)  (bytes)  (bytes)   in Kbps    (bytes)  in Kbps  (bytes)   padding   Kbps
                                                                                             (bytes)
G.711 (64 Kbps)  10        80       120      146       116.8      134      107.2    136       23        127.2
                 20        160      200      226       90.4       214      85.6     216       49        106.0
                 30        240      280      306       81.6       294      78.6     296       75        99.1
G.729A/G.729     10        10       50       76        60.8       64       51.2     66        40        84.8
(8 Kbps)         20        20       60       86        34.4       74       29.6     76        30        42.4
                 30        30       70       96        25.6       84       22.4     86        20        28.3

The RTP transport layer overhead and IP packet overhead are shown in the following diagram. The Layer 4 and Layer 3 headers add 40 bytes of overhead to each voice sample packet. Different Layer 2 technologies also add different amounts of Layer 2 overhead to each packet. It is instructive to compare the bandwidth consumed by the G.711 (uncompressed 64 Kbps PCM) and G.729 (compressed 8 Kbps linear predictive coding based) CODECs at 20 ms of speech sample per packet. On Ethernet transport, the G.711 codec consumes 90.4 Kbps, while the G.729 codec consumes 34.4 Kbps. The 8-to-1 CODEC data rate reduction ratio is reduced to only 2.6 to 1 when transport overheads are taken into account.
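As a cross-check, the per-channel Ethernet figures in Table 27 can be recomputed from the overheads described above. This is a sketch; the 26-byte Layer 2 figure is the Ethernet overhead implied by the table's frame sizes (802.1p/q headers not included):

```python
IP_OVERHEAD = 40        # RTP + UDP + IP (Layer 4 + Layer 3) headers, bytes
ETHERNET_OVERHEAD = 26  # Layer 2 overhead implied by Table 27, bytes

def ethernet_kbps(codec_kbps: float, frame_ms: int) -> float:
    """Per-channel Ethernet bandwidth for a codec and packet size."""
    payload = codec_kbps * frame_ms / 8             # voice bytes per packet
    frame = payload + IP_OVERHEAD + ETHERNET_OVERHEAD
    return frame * 8 / frame_ms                     # Kbps on the wire

print(ethernet_kbps(64, 20))  # G.711 at 20 ms -> 90.4 Kbps
print(ethernet_kbps(8, 20))   # G.729 at 20 ms -> 34.4 Kbps
# The 8:1 codec data-rate ratio shrinks to 90.4 / 34.4, about 2.6:1,
# once the per-packet overheads are included.
```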


Figure 56 VoIP datagrams over Layer 2 protocols

When the telephony voice traffic over a link is given in Erlangs, media bandwidth allocation planning involves two steps. First, the traffic in Erlangs is converted to a number of voice channels using the Erlang B formula (for an Erlang calculator, see http://www.erlang.com/calculator/erlb/), given a particular blocking probability. Second, the number of channels is converted into the appropriate Layer 2 and Layer 3 bandwidth, given a particular choice of speech CODEC, voice packet sample size, and the link used for transport.

Figure 57 Erlang B calculator

For example, if the Erlang requirement is 5 and the blocking factor is 0.01, the required number of channels is 11. When G.711 and 10-ms packetization times are used for each channel, the total bandwidth required for an ATM WAN link is close to 1.4 Mbps (11 * 127.2 Kbps). Other media flows such as collaboration and video must be considered when planning network bandwidth.
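The two planning steps can be sketched as follows, using the standard Erlang B recurrence for step one; the 127.2 Kbps per-channel figure is the ATM value for G.711 with 10-ms packets from Table 27:

```python
def erlang_b(channels: int, erlangs: float) -> float:
    """Blocking probability via the standard Erlang B recurrence."""
    b = 1.0
    for n in range(1, channels + 1):
        b = erlangs * b / (n + erlangs * b)
    return b

def channels_needed(erlangs: float, blocking: float) -> int:
    """Smallest channel count whose blocking meets the target."""
    n = 1
    while erlang_b(n, erlangs) > blocking:
        n += 1
    return n

# Step 1: 5 Erlangs at 1% blocking requires 11 channels.
n = channels_needed(5, 0.01)
print(n)
# Step 2: channels to ATM bandwidth at 127.2 Kbps per channel,
# giving roughly 1399 Kbps, close to 1.4 Mbps.
print(n * 127.2)
```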

Video traffic
The MCS system supports only point-to-point video traffic between the PC-based clients. In a point-to-point video session, the video media stream and voice media stream are processed independently at the clients, which means the voice and video CODECs do not synchronize the audio and video streams. Video media flows typically follow the same path through MCS components as voice media. Network planning for point-to-point video is a degenerate case of voice traffic planning.

The MCS system supports the DivX and H.263 video CODECs. These are variable data rate CODECs, which means that the data rate depends on the scene complexity and how fast the images are moving. For more H.263 video CODEC information, see Multimedia PC Client User Guide (NN42020-102).

For the DivX CODEC, the data rate, inclusive of IP header, is shown in the following table.

Table 28 DivX video CODEC bandwidth requirements

Client preset    Resolution  Frames per  Quality  Bandwidth
setting values               second
Home             160 x 120   8           low      40 - 80 Kbps
Office           320 x 240   10          medium   150 - 300 Kbps
Conference       352 x 288   15          high     400 - 800 Kbps
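Because DivX is variable-rate, video bandwidth planning works with the min-max ranges in Table 28. A minimal sketch for aggregating the worst-case range over several sessions follows; the session count is a hypothetical example:

```python
# Preset name: (min_kbps, max_kbps), mirroring Table 28.
DIVX_PRESETS = {
    "Home": (40, 80),
    "Office": (150, 300),
    "Conference": (400, 800),
}

def video_bandwidth_kbps(preset: str, sessions: int) -> tuple:
    """Aggregate bandwidth range for N point-to-point video sessions."""
    lo, hi = DIVX_PRESETS[preset]
    return (lo * sessions, hi * sessions)

# e.g. ten concurrent sessions at the Office preset:
print(video_bandwidth_kbps("Office", 10))  # (min, max) in Kbps
```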

Instant Messaging traffic
Secure Instant Messaging (IM) is supported by the following SIP endpoints:
• Multimedia PC Client
• Multimedia Web Client
• IP Client Manager on behalf of IP Phones
• Session Manager on behalf of other SIP endpoints not capable of secure IM sessions, in communication with clients that are capable
• Media Application Server IN Chat service

The maximum payload size of a single instant message varies. Once the text message exceeds approximately 450 characters, the message is automatically split into two instant messages. IM traffic does not use the Border Control Point. For more information about SIP signaling bandwidth calculations, see "Bandwidth" (page 151) in the "IP network recommendations" (page 145).
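The splitting behavior can be sketched as follows. The 450-character limit is the approximate figure from the text; the chunking algorithm here is an assumption for illustration, not the client's actual implementation:

```python
IM_MAX_CHARS = 450  # approximate per-message limit from the text

def split_message(text: str) -> list:
    """Split an overlong instant message into chunks of at most
    IM_MAX_CHARS characters (assumed simple fixed-size chunking)."""
    return [text[i:i + IM_MAX_CHARS]
            for i in range(0, len(text), IM_MAX_CHARS)] or [""]

print(len(split_message("x" * 400)))  # fits in one instant message
print(len(split_message("x" * 600)))  # exceeds 450, sent as two messages
```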

Collaboration traffic
The Multimedia PC Client supports Web pushing, file exchange, clipboard sharing, and whiteboard sessions between two Multimedia PC Clients. A collaboration session can be established independently of other media streams, such as a voice call. The collaboration session is established through SIP.

The Multimedia Web Client supports Web pushing as its only form of collaboration in this release.

File exchange and clipboard sharing use a UDP stream to transfer data between clients. If the Border Control Point is required, the UDP media stream passes through it.

Web pushing and whiteboard updates are sent using text commands embedded in SIP messages through the Session Manager.

Converged Desktop traffic
A Converged Desktop consists of a regular PSTN telephone and a PC running the Multimedia PC Client with the Converged Desktop service. The Converged Desktop enables end users to continue to receive their line services from the existing TDM switching system while receiving MCS multimedia services at the same time.

Figure 58 Converged Desktop

The Converged Desktop services are divided into two development phases: Converged Desktop I service and Converged Desktop II service.


Converged Desktop I service
The Converged Desktop I service blends TDM voice and MCS multimedia services.

In the Converged Desktop I service, the SimRing feature on the existing switch is used to extend the call as a second call leg simultaneously to the MCS for converging. The extended call terminates at the MCS and interacts only momentarily with the MCS system to provide a screen pop and call log on the originator and terminator of the call. When the call is answered at the switch, the second call leg is released, although the GUI stays visible, and the MCS system has no control over the call.

The user’s existing switching system connects to the MCS through the Media Gateway 3200. A key concept is that the TDM switching system sends a call indication to the MCS and on to the Converged Desktop through PRI or SIP trunking to the Communication Server 1000. The MCS system takes this information, and pops a window up on the user’s PC. The user is then able to initiate a separate multimedia session to the other party if the other party is capable of participating in the requested type of multimedia session.

Features for Converged Desktop I
The following is a list of features available to both Converged Desktop I and II service subscribers:
• redirection of the call based on MCS Personal Agent screening rules. The user can log in to a web page and take control over how he or she is reached. This functionality is not modified by the Converged Desktop features and is currently available to existing MCS subscribers. A good example is a user activating MCS-based forking through a web page, so that a cell phone rings when the user's work desktop telephone rings. When one leg of the call is answered, the other legs drop (stop ringing).
• inbound Call Log, which enables the user to see who has called and when the call occurred.
• picture calling line identification, which enables the user to see who is calling (and in some cases who is being called).
• file transfer. If both the originator and terminator are capable of this functionality, then files can be transferred back and forth between the users. The Converged Multimedia PC Client (CD1 and CD2) and the Regular Multimedia PC Client are the only endpoints that support this functionality.
• whiteboard sharing. If both the originator and terminator are capable of this functionality, then a whiteboard session can be set up between the two users. The Converged Multimedia PC Client (CD1 and CD2) and the Regular Multimedia PC Client are the only endpoints that support this functionality.
• clipboard transfer. If both the originator and terminator are capable of this functionality, then the clipboard can be transferred between the users. The clipboard function enables a user to copy (CTRL-C) items, such as PowerPoint slides or sections of Excel spreadsheets, to the clipboard and then send them to the other party. The other party then pastes (CTRL-V) the items. The Converged Multimedia PC Client (CD1 and CD2) and the Regular Multimedia PC Client are the only endpoints that support this functionality.
• Web cobrowsing. If both the originator and terminator are capable of this functionality, then one user can automatically drive the other's Web browser. The Converged Multimedia PC Client and the Regular Multimedia PC Client are the only endpoints that support this functionality. The Web Client supports the reception of the page, but does not send pages to the Multimedia PC Client with Converged Desktop service. Note that clients with Converged Desktop I service do not support Web cobrowsing due to the INFO and MESSAGE implementation. Web pushing is supported for clients with Converged Desktop I service.
• Instant Messaging and Presence state indications. Any client that supports the Nortel IM format can message back and forth with the Converged Multimedia PC Client. The clients that currently support IM are:
— Multimedia PC Client with Converged Desktop I service
— Multimedia PC Client with Converged Desktop II service
— Multimedia PC Client without Converged Desktop service
— Multimedia Web Client
— IP Phones controlled by the IP Client Manager

In order to use these services, the existing TDM switching system must support a simultaneous ring (SimRing) type capability, which enables a call indication to be sent to the Converged Desktop user through a PRI interface into the MCS system. Calls placed on the circuit network to a user with a Converged Desktop are extended to the MCS by the SimRing capability on the voice switching system. Calls placed on the IP or multimedia network to the Converged Desktop are sent to the circuit (voice) network by the MCS. The system administrator must make data changes on both the MCS and the existing switching system in order to configure a user as a Converged Desktop I user. The MCS system supports communications between two Converged Desktop I users, between a TDM user and a Converged Desktop user, and between a SIP user and a Converged Desktop user. For full Converged Desktop I functionality, both the originator and terminator must reside on the same MCS system. They can, however, reside on different TDM systems. Converged Desktop clients on different MCS systems can communicate (IM, collaboration) with one another by manually typing the SIP address of the other party, but this communication is not triggered from an inbound call as in the Converged Desktop case.

Figure 59 Signaling flows between two Converged Desktops

The above diagram shows an example of the signaling flow between two Converged Desktops. The steps of the signaling flow are as follows:

1. Phone A calls Phone B.
2. The call is routed through a TDM network.
3. The terminating TDM switch rings Phone B.
4. Because the SimRing feature is configured for Phone B on the terminating switch, the switch calls the corresponding MCS number out on a PRI trunk into the Media Gateway 3200.


5. The PRI gateway converts the call into a SIP INVITE message and sends it to the Session Manager.
6. The Session Manager executes the call screening rules, if set up by the subscriber.
7. The Session Manager sends a 100 TRYING message to stop various timers from expiring.
8. The Session Manager routes the INVITE message to all of the aliases and contacts for the Converged Desktop user. Because the INVITE message comes from the PRI gateway, the Converged Desktop user has been set up by the system administrator with a Converged Desktop alias that is equal to the Called Party received on the PRI trunk. This causes the INVITE message to be routed to the Multimedia PC Client.
9. A screen popup with picture ID (if provided) appears on Converged Desktop User B's PC, which enables call redirect until the TDM phone is answered.
10. The Multimedia PC Client B sends a 180 RINGING message to the Session Manager. The Multimedia PC Client B does not notify the user of an incoming call; therefore, the user cannot answer at the client.
11. The Session Manager sends the 180 RINGING message on to the PRI gateway. This is normal processing and is not modified by Converged Desktop functionality.
12. A SIP MESSAGE packet is sent from the terminating Multimedia PC Client B to the Session Manager. The message is addressed to the calling line ID of the inbound call and contains the Require com.Nortel.collab tag. A message with this tag is ignored by other SIP endpoints that the user has registered. The message indicates that the originator (Phone A) just called the terminator (Phone B), and includes the SIP address and the Display Name of the Multimedia PC Client B.
13. The Session Manager routes the MESSAGE to the Multimedia PC Client A based on the configured alias of the caller. The message is actually routed to all of the endpoints that the caller has registered.
14. The Multimedia PC Client A receives the MESSAGE and responds with a 200 OK message. The 200 OK message contains information about the Multimedia PC Client A's SIP address, multimedia capabilities, and Display Name.
15. The Session Manager routes the 200 OK message to Converged Desktop User B. Converged Desktop User B now knows that the originator is multimedia capable.
16. After Multimedia PC Client A responds with the 200 OK message, a screen pops up on the display to make sure all of the multimedia functions are available.


17. When Multimedia PC Client B receives the 200 OK, it enables the multimedia buttons so that multimedia sessions can be set up.
18. The called party answers Phone B.
19. When the terminating TDM switching system SimRing feature detects that the call has been answered, it releases all other legs that are being SimRinged. In this case, a PRI release is sent to the Media Gateway 3200.
20. When the Media Gateway 3200 gets the release from the TDM switch, it sends a CANCEL to the Session Manager.
21. The Session Manager forwards the CANCEL to the Multimedia PC Client B, which then grays out and deactivates the Redirection button, disabling this functionality.

After the call is set up, either party can initiate a multimedia session by sending an INVITE to the SIP name (received in the MESSAGE) or alias (received in the previous call INVITE). They can also send Instant Messages and push Web pages to each other. Note that the above scenario (Converged Desktop to Converged Desktop) is one of many that can involve multimedia and voice endpoints. For the current Converged Desktop solution, there are some key assumptions and limitations that must be noted:
1. PRI is used as the interface between the existing TDM switching system and the MCS system. The currently supported PRI variants are:
• AT&T 4ESS (AT4): TR 41459 (August 1995), PRI
• AT&T 5ESS10 (E10): AT&T 235-900-342 (January 1996), PRI
• Northern Telecom DMS-100 (DMS): NIS A211-1 release 6 BCS36
• Bellcore National 2 (NI2): SR-3887 (November 1996), PRI
• QSIG (basic call only): ECMA 143 (June 1997)
• ETSI: ETS 300 102-1 (December 1990) + Addendum ETS 300 103-1/A2 (October 1993)
• Nortel M1 PBX: Patch RLS25, Feature RLS 26
Note: PRI variants may vary from gateway vendor to gateway vendor.

2. No software changes are required on the existing switching system. However, user configuration changes are required to activate the SimRing feature.
3. The Multimedia PC Client with Converged Desktop service is not capable of video or audio.
4. The system administrator provisions a user as a Converged Desktop User. A Converged Desktop User never uses a Multimedia PC Client for voice. The Converged Desktop User can, however, use other SIP endpoints (such as the Multimedia Web Client or an IP Phone controlled by the IP Client Manager) for voice over IP calls.
5. An alias must be set up for each user so that the alias is the same as the Calling Line ID sent from the TDM switch to the MCS over PRI.
6. When a nonConverged Desktop user calls a Converged Desktop user, the nonConverged Desktop user's charge ID must be included as an alias in the nonConverged Desktop user's provisioning. Because the nonConverged Desktop user's public/private charge ID identifies them to the TDM switch as the calling party, it must be unique within a domain. Aliases cannot be shared amongst users within a domain.

The Converged Desktop media traffic can be divided into the following types:
1. Voice traffic that is entirely contained within the TDM network (TDM phone ↔ TDM phone)
2. Voice traffic between the TDM community and the MCS community (Converged Desktop TDM phone ↔ SIP phone, TDM phone ↔ nonConverged Desktop PC Client)
3. Multimedia traffic within the MCS community (Multimedia PC Client with Converged Desktop service ↔ Multimedia PC Client with Converged Desktop service, Multimedia PC Client with Converged Desktop service ↔ other PC Client)

The type-1 traffic is considered part of the TDM network engineering process. Both type-2 and type-3 traffic should be considered as part of the overall traffic engineering process discussed earlier.

Converged Desktop II service
Converged Desktop II (CD2) service blends TDM voice and MCS multimedia services with additional features.

For the Converged Desktop II service, a Nortel Communication Server 1000 (CS 1000) platform with the Converged Desktop feature is required to route the calls to the Converged Desktop II service on the MCS.


Figure 60 Converged Desktop enterprise configuration

The user’s existing switching system connects to the MCS system through a CS 1000.

SimRing is still available in MCS 5100 Release 4.0 as Converged Desktop I. The Converged Desktop II service no longer relies on the SimRing feature to blend calls. Instead, it relies on the call being sent to the MCS system before a converged user's telephone is rung. Converged Desktop calls always send signaling, and sometimes media, to the MCS system.

Although the Converged Desktop II service is more expensive than Converged Desktop I (SimRing) from a resource usage point of view if PRI is used, more end user features are available.

Converged Desktop subscriber configuration
The MCS system administrator enables a subscriber to access the Converged Desktop service by provisioning the subscriber as either a Converged Desktop I User or a Converged Desktop II User.

Note: A user cannot be both, because the Multimedia PC Client and the Session Manager function differently for a Converged Desktop I client than for a Converged Desktop II client.

In order to use the Converged Desktop II service, the system administrator must configure or provision the existing TDM switching system to permit insertion of an MCS system into the call topology. A subscriber configured in this mode is a Converged User. A Converged User can turn off the Converged Desktop functionality by setting the personal computer as the Preferred Audio Device.

Features available in Converged Desktop II
The following additional features are available with the Converged Desktop II service:
• Outbound Call Log: This feature enables the user to see where and when outgoing sessions were set up.
• Video: An option is available on the CD2 Multimedia PC Client to add or remove video after a voice call has been set up. If one of the parties has a video camera, and both of the parties are using a video-capable MCS client (CD2 Multimedia PC Client, Regular Multimedia PC Client, Web Client), then video can be set up between the two parties. For CD2 users, the video session is not tied to the voice session; they are two independent media streams, video to the client and audio to the Converged Phone. However, when either party hangs up the audio session, the video session is stopped also.
• Click-to-Call: This feature enables the Converged Desktop II User to originate voice calls. The user clicks a number at the Converged Multimedia PC Client, and the Converged Phone is rung. When the user answers the Converged Phone, the called party is rung.
• Automatic presence: The subscriber's presence is updated when a call is placed, received, or hung up. It is also updated when the MCS soft clients detect no activity. When a CD user is transferred out of a call, the presence state remains Active on the Phone until the second call leg is ended.

Interfaces supported by Converged Desktop services
The supported interfaces between the MCS system and the existing switching system include the Primary Rate Interface (PRI) and the Session Initiation Protocol (SIP) interface. The following PRI variants are supported:
• AT&T 4ESS (AT4)
• AT&T 5ESS10 (E10)
• Northern Telecom DMS-100 (DMS)
• Bellcore National 2
• QSIG
Note: The PRI variants must support the PRI redirect parameter.


Session Manager cluster interworking
In a Session Manager cluster, the signaling path can route through different Session Managers.

The Multimedia PC Client with Converged Desktop II service registers with one Session Manager. The service is executed on the first Session Manager that handles the call from the Converged Phone. The main reason for running the Converged Desktop service at the first Session Manager is to avoid having the service run in two different Session Managers, which would reduce the overall capacity of the clustered system.

Multi-site interworking
The Converged Desktop II service supports the use of multi-site MCS systems. A multi-site system is defined as separate MCS systems that do not share a common database. Subscribers in this network configuration reach each other by dialing access codes or by using fully qualified user names such as [email protected].

In a multi-site Converged Desktop-to-Converged Desktop call, the originating MCS system must be provisioned to proxy the incoming request from the Multimedia PC Client with Converged Desktop service or the TDM phone with Converged Desktop service to the terminating MCS system. Because the terminating user is not known at the originating MCS system, two Converged Desktop services are run on separate MCS systems.

IP network configuration
The IP addresses of the IN SIP Signaling Gateway IP link cards must be provisioned as Trusted Nodes on all the MCS proxies.

Because the gateway is a Trusted Node, the MCS proxy does not need to authenticate the IN SIP Signaling Gateway for each request, as is normally done by prompting for a password when MCS clients attempt to make calls.

Network configuration
Two network configurations for the Converged Desktop service are shown in the following illustration.


Figure 61 Converged Desktop enterprise network configuration

In the first configuration, the MCS has a 1+1 Session Manager (proxy) system where one is a spare server. Call requests from the IN SIP Signaling Gateway go to a single Session Manager.

In the second scenario, the MCS uses a 2+1 Session Manager configuration where two servers provide Converged Desktop service and one acts as the spare.

Converged Desktop II call flows
Simplex calls
The term simplex refers to a call scenario where a single dip into the MCS is required to run the Converged Desktop services for the terminating subscriber. The services include window pops indicating the calling subscriber information, such as picture ID and calling number. A simplex call and its call events are described in the following illustration.


Figure 62 Converged Desktop simplex call flow

The Converged Desktop simplex call flow is described as follows:

1. The call origination sends an event to inform the MCS of the presence change and the outbound call log.
2. The Meridian 1 places a call to the terminating Meridian 1 over a trunk, such as PRI or SIP.
3. The terminating Meridian 1 starts the terminating Converged Desktop service and informs the MCS.
4. The MCS starts the Converged Desktop service and notifies the originator's and terminator's Converged Desktop clients and the terminating phone. Screen pops occur on both Converged Desktop clients.
5. The terminating phone answers the call; the answer response is sent to the MCS and then to the blended clients.
6. The answer message is sent back to the originating Meridian 1 and the Converged Desktop client for call log and presence use.


Complex call
The complex call scenario applies only when a call originates from outside the MCS system and terminates to the phone of a subscriber with Converged Desktop service. For example, a subscriber with the Converged Desktop service calls another subscriber with Converged Desktop service, or a PSTN user calls the phone of a subscriber with Converged Desktop service.

Figure 63 Converged Desktop complex call flow

The Converged Desktop complex call flow is described as follows:
• Origination events are sent to the MCS for call logs and presence.
• The terminating signaling loop is used for all complex calls.
• The terminating signaling loop with the MCS stays up during the call.
• The audio path is optimized at the Meridian 1.


Click-to-Call
A Click-to-Call flow between two subscribers with Converged Desktop service is shown in the following illustration.

Figure 64 Click-to-Call flow

The Click-to-Call flow is described as follows:

1. The Converged Desktop client user clicks the address book, Outlook plug-in, or call logs to place a call. A request is sent to the MCS Web Server, which creates the Click-to-Call service.
2. The request message is sent to the originator's Meridian 1.
3. The call is answered by the originating phone.
4. The MCS places the original leg on hold and sends a request to transfer the call to Phone-2.

5. Meridian 1 sends a notification to the MCS to log the origination to Phone-2.
6. Meridian 1 calls Phone-2.
7. Meridian 1 sends a request to the Converged Desktop service on behalf of the terminating Converged Desktop user.
8. The MCS starts the Converged Desktop service and notifies the originator's and terminator's clients and the terminating phone. Screen pops occur on both Converged Desktop clients.

Click-to-Call from Personal Agent
A Click-to-Call flow from the Personal Agent is shown in the following illustration.

Figure 65 Click-to-Call flow

The Click-to-Call flow from the Personal Agent is described as follows:
• A user originates a call from the Personal Agent client. An HTTP request is sent to the MCS Web Server, which creates the Click-to-Call service.
• The request message is sent to the originator's Meridian 1.
• The MCS sends a request to transfer the call to Phone-2.
• The Meridian 1 accepts this request.
• The MCS notifies the Personal Agent that the call is successful.
• The Meridian 1 places a call to the terminating Meridian 1 over a trunk (such as SIP or PRI).

Converged Desktop media traffic
Converged Desktop media traffic can be divided into the following types:
• Voice traffic that is entirely contained within the TDM network (TDM phone ↔ TDM phone)
• Voice traffic between the TDM community and the MCS community (Converged Desktop TDM phone ↔ SIP phone, TDM phone ↔ non-Converged Desktop PC Client)
• Multimedia traffic within the MCS community (Multimedia PC Client ↔ Multimedia PC Client, Multimedia PC Client ↔ other PC Client)

The voice traffic in the first bullet is part of the TDM network engineering process. The traffic in the second and third bullets is considered part of the overall MCS traffic engineering process.

Click-to-Call with PSTN setup recommendations
The recommendations for telephone number usage in Click-to-Call are as follows. In a network with a Communication Server 1000 Call Server or Signaling Server, an NRS, and an MCS 5100, the dialing plan configuration must be aligned on all three platforms; that is, the following dialing plan elements must be the same on all platforms:
• ESN/UDP dialing access code (for example, 6)
• Public/E.164 local dialing access code (for example, 9)
• Public/E.164 national dialing access code (for example, 61)
• Public/E.164 international dialing access code (for example, 000)

These settings directly affect how end users dial destination numbers; all platforms that can handle a call in the solution must be able to handle the same dialed numbers.
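The alignment requirement above can be sketched as a simple consistency check. This is an illustrative example only, not part of the product: the configuration dictionaries and key names are hypothetical snapshots of each platform's dialing plan settings.

```python
# Sketch: verify that the dialing access codes are configured identically on
# the CS 1000, NRS, and MCS 5100. Key names and values are illustrative.

REQUIRED_KEYS = ("udp_access", "local_access",
                 "national_access", "international_access")

def dialing_plans_aligned(*platform_configs):
    """Return True when every platform reports the same access codes."""
    reference = {k: platform_configs[0][k] for k in REQUIRED_KEYS}
    return all(
        {k: cfg[k] for k in REQUIRED_KEYS} == reference
        for cfg in platform_configs[1:]
    )

cs1000 = {"udp_access": "6", "local_access": "9",
          "national_access": "61", "international_access": "000"}
nrs = dict(cs1000)       # aligned copy
mcs5100 = dict(cs1000)   # aligned copy

print(dialing_plans_aligned(cs1000, nrs, mcs5100))  # True
```

In practice this check would run against exported configuration data from each platform; the point is simply that a single mismatched access code breaks end-to-end dialing.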

Ring me at: the Ring me at number must be a dialable number so that calls can be placed from the MCS 5100 to the CS 1000.

Make call to: the Make call to number must be a dialable number so that the call can be redirected from the Ring me at number to the number you want to dial.

When making calls to the public or PSTN network, for either the Ring me at call leg or the Make call to call leg, enter the numbers in the full E.164 international form (not the E.164 national or E.164 local form).

When making calls to the private or enterprise network, for either the Ring me at call leg or the Make call to call leg, enter the numbers in the full private UDP form (not the CDP form).

Add the appropriate dialing access codes to both strings: the Ring me at string and the Make call to string:

If Ring me at is an enterprise number (for example, 3431234) and the Private or UDP dialing access code in the solution is set to "6", enter 63431234 in the Ring me at field.

If Make call to is an international number (for example, 16139671234) and the Public or International dialing access code in the solution is set to "000", enter 00016139671234 in the Make call to field.
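The prefixing rules in the two examples above can be sketched as a small helper. This is a hypothetical illustration, not product code; the access code values are the examples used in this section.

```python
# Sketch of the prefixing rules described above: prepend the private (UDP)
# access code to enterprise numbers and the international access code to full
# E.164 international numbers. Access code values are examples only.

UDP_ACCESS = "6"      # Private/UDP dialing access code (example)
INTL_ACCESS = "000"   # Public/E.164 international access code (example)

def dialable(number: str, kind: str) -> str:
    """Return the string to enter in the Ring me at or Make call to field."""
    if kind == "enterprise":
        return UDP_ACCESS + number       # e.g. 3431234 -> 63431234
    if kind == "international":
        return INTL_ACCESS + number      # e.g. 16139671234 -> 00016139671234
    raise ValueError("unknown number kind: " + kind)

print(dialable("3431234", "enterprise"))         # 63431234
print(dialable("16139671234", "international"))  # 00016139671234
```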

ATTENTION
For calls initiated to two external PSTN numbers (that is, both Ring me at and Make call to are external PSTN numbers), if one party clears the call, the call is not cleared toward the remote party until the PSTN trunk times out.

Security

ATTENTION It is assumed that IP services outside of the scope of this solution, such as Dynamic Host Configuration Protocol (DHCP), Domain Name Service (DNS), Simple Network Management Protocol (SNMP), Network Time Protocol (NTP), and IP routing protocols, have sufficient security systems and policies in place. It is the responsibility of the network administrators to ensure these services are secure.

This section contains the following:
• "Security strategies overview" (page 231)
• "Protecting MCS servers and gateways" (page 232)
• "Protecting end user traffic" (page 264)

Security strategies overview
The two main aspects of the network-level security strategy are
• protecting the MCS servers and gateways
• protecting the end user traffic

The overall strategy for network protection is based on the following architectural design elements:
• Authentication and authorization: Servers in the Protected MCS Network ensure authentication and authorization of access to network services.
• Packet filter and firewall: A packet filter, a firewall, or both protect the Protected MCS Network.
• Signaling: When the Border Control Point is used, user IP addresses are hidden in SIP signaling.

In some network deployments, it may be necessary to protect the end user IP space; for example, a company has a partnership with a third-party vendor, and the vendor needs access to the MCS system from a client perspective. The overall strategy for protecting the end-user network and end users is based on the following design elements:
• end-user NAT/NAPT, which hides the user's private IP address
• an end-user firewall at the edge of the end-user network
• the ability to safely traverse both firewall and NAT
• use of the Border Control Point to segregate clients
• built-in SIP and client security mechanisms
• terminal server security provided through MRV LX-4000 series terminal servers that support Terminal Access Controller Access Control System (TACACS)

Protecting MCS servers and gateways
MCS 5100 network topology
The following is a high-level illustration of the MCS 5100 network topology.

Figure 66 MCS 5100 network topology

Typically all MCS servers, gateways, and clients are deployed in a contiguous IP address space with no Network Address Translation (NAT) devices or Network Address Port Translation (NAPT) devices between the clients and servers or gateways. For networks where clients are behind a firewall or NAT device, the Border Control Point is required.

Clients not located on the enterprise network, such as teleworkers, can access the MCS system using IP virtual private network (VPN) technology. Soft clients can be software-based IP VPN clients such as the Nortel Contivity client.

When the IP Phone is used in conjunction with the Multimedia PC Client, audio can be routed between the far endpoint and the IP Phone using the Multimedia PC Client.

The VPN technology should be configured to mark the external VPN packet headers with the appropriate QoS values. If a teleworker wants to use a standalone IP Phone, a hardware-based VPN device at the remote location, configured as branch-to-branch, is recommended. See Figure 67 "External access using IP VPN technology" (page 234) for an illustration.

Figure 67 External access using IP VPN technology

In the MCS 5100 network, all nodes are on the trusted Enterprise Network. Critical business resources must be protected from service disruption. As shown in Figure 67 "External access using IP VPN technology" (page 234), the MCS components on the Protected MCS network are protected by a packet filter or a firewall. It is good practice to protect shared MCS resources such as servers and gateways with a packet filtering device, firewall, or router with an access control list.

MCS network public interfaces

CAUTION
All server interfaces must have IP addresses that are routable from all other MCS endpoints. In other words, an interface cannot reside behind a NAT or NAPT device. More information is available in Appendix "IP functional components" (page 319).

This section describes the security measures for the MCS servers that can be accessed by MCS clients.

Session Manager
The Session Manager uses back-to-back user agent (BBUA) functionality to control the Border Control Point through the Media Portal Control Protocol (MPCP). The Session Manager performs the following security functions:
• MPCP is used to communicate with the Border Control Point to control which IP ports are opened or closed.
• All SIP signaling traffic traverses the Session Manager, the only component on which clients terminate SIP signaling.
Note: Media Application Server IM Chat and Meet Me-based IM Chat services communicate directly with Multimedia PC Clients on behalf of the Multimedia Web Client for Instant Messages and SIP PINGs.

• Real endpoint IP addresses are hidden from other users.
• Clients accessing services are authenticated.

The Session Manager uses port 5060 as its public interface. This is the only open port on the Session Manager server.

Implement authentication in networks to prevent theft-of-service and denial-of-service attacks that use SIP. The Digest method is the only supported authentication method.

IP Client Manager
The IP Client Manager controls IP Phones using UNIStim signaling. The IP Phones can also be under the control of the Multimedia PC Client. The IPCM uses the following public interfaces and associated security mechanisms:
• Port 5070: The IP Client Manager uses this port for SIP signaling to the Session Manager. This port normally uses UDP; on occasion, TCP is used for Call Processing Language (CPL) script registration.
• Port 5000: This is the only port that must be open to the End User Network. The port is used for UNIStim communications on both the IP Phone and the IP Client Manager. UNIStim is a Nortel Networks proprietary protocol.
• Ports 50000 to 50099: All IP Phones use a range of ports to send media using RTP over UDP. The range defaults to 50000-50099 (inclusive) and is configurable for each domain.

• Port 50020: A UFTP server is typically configured on the same server as the IP Client Manager.
• Management interface: The IP Client Manager is managed from the Protected MCS Network. Access to the management interface should be restricted by the packet filter.

Provisioning Manager
The Provisioning Manager provides access to the Database Manager for the Provisioning Clients, Personal Agents, and Multimedia Web Clients. The Provisioning Manager is managed from the Protected MCS Network. Access to the management interface should be restricted by the packet filter. There are three public interfaces to the Provisioning Manager:
• Port 80: This port uses HTTP over TCP for downloading Web pages and Simple Object Access Protocol (SOAP) over TCP for downloading the service package and address book.
• Port 443: HTTPS port on the Provisioning Manager used to interface with the Personal Agent, Provisioning Client, and the Multimedia Web Client.

The Provisioning Manager resides on the same server as the IP Client Manager.

Media Gateway 3200
It is good practice to implement an access control list or firewall rules in front of the Media Gateway 3200. SIP access should be restricted to the Session Managers only. There should be no NAT or NAPT devices between the Media Gateway 3200 and the Session Managers.

More information on NAT and NAPT can be found later in this section. If transparent firewalls are used, the SIP port 5060 should remain open for access to the Session Manager.

Border Control Point
The Border Control Point has a pool of IP addresses. Each media card on the Border Control Point has one IP address, and multiple ports are available for each IP address. The MPCP messaging between the Session Manager and the Border Control Point travels over the public MCS network. The Border Control Point performs the following functions to ensure the security of the clients and the network:
• The Border Control Point inspects all packets arriving on opened ports at various levels. The inspection ensures that packets destined for a port are not stray or malicious. This check adds another level of security to the Border Control Point, as described below:

— Verify source port: The Border Control Point verifies that the source address of the received packet matches the one provided by the Session Manager. If it does not match, the packet is discarded.
— UDP, RTP, or RTCP verification: By design, the Border Control Point accepts only packets that match recognized signatures identifying them as UDP, RTP, or RTCP.
— NAPT: Comparing arriving packets against entries in the NAPT mapping table provides further packet filtering. Packets can pass through the Border Control Point only if a matching entry is found.

• The Border Control Point masks the real addresses of the two parties in communication. Parties involved in a call (voice, video, collaboration) are not aware that there is an intermediary device: Party A believes it is sending media to Party B, when in fact Party A is sending media to the Border Control Point, which in turn sends it to Party B.

On the public interface, the Border Control Point supports UDP flows for collaboration and RTP and RTCP flows for media such as audio and video. Flows through the Border Control Point are dynamically created and destroyed under the control of the Session Manager using MPCP. For security purposes, the IP port numbers are changed randomly for each flow. The range of ports can be restricted to a specific range or can be randomly allocated over a larger, more dynamic range.
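The admission checks described above can be sketched as follows. This is an illustrative model only, not the actual Border Control Point implementation; it assumes the NAPT mapping table pairs each opened local port with the expected remote (address, port) supplied by the Session Manager via MPCP.

```python
# Sketch of the Border Control Point packet-admission checks: a packet is
# accepted only if its local port has an open NAPT entry and its source
# matches the peer that the Session Manager announced. All names and
# addresses here are hypothetical.

napt_table = {
    # local_port: (expected_source_ip, expected_source_port)
    50102: ("47.0.0.10", 50500),
}

def admit(local_port, src_ip, src_port):
    """Drop packets with no NAPT entry or a mismatched source."""
    entry = napt_table.get(local_port)
    if entry is None:
        return False                    # no flow opened on this port: discard
    return entry == (src_ip, src_port)  # source must match the MPCP peer

print(admit(50102, "47.0.0.10", 50500))  # True  (matching flow)
print(admit(50102, "10.1.1.1", 50500))   # False (spoofed source)
print(admit(50999, "47.0.0.10", 50500))  # False (no mapping entry)
```

A real implementation would also apply the UDP/RTP/RTCP signature checks before consulting the mapping table.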

Media Application Server For information related to security measures for the Media Application Server, see Media Application Server Planning and Engineering (NN42020-201).

MCS network public interface protection
The MCS network public interfaces must be protected from unwanted packets. This can be accomplished by fronting the MCS system with a packet filtering device. When a packet filter policy is in place, packets destined for a node, port, or protocol that arrive at the wrong device should be dropped. Ports for common administrative services, such as ftp, telnet, rlogin, time, and finger, should be restricted to the IP addresses of administrators and support systems; all other packets should be dropped. Many attacks today originate internally, so Nortel recommends implementing a firewall or packet filter on the Protected MCS Network to protect against internal attacks. Access to the Protected MCS Network should be restricted to OSS systems, support personnel, support remote access devices, and personnel requiring access to the management console.

The key signaling ports for the MCS are as follows:
• Port 5060: UDP port used on the Session Manager for SIP signaling
• Port 5000: UDP port used on the IP Client Manager for UNIStim signaling
• Port 50020: UFTP/UDP port on the server used for UFTP firmware downloads to IP Phones
• Port 80:
— HTTP/TCP port on the Provisioning Manager used to interface with the Personal Agent, Provisioning Client, and the Multimedia Web Client
— SOAP/TCP port on the Provisioning Manager used by the Command Line Interface (CLI) and Multimedia PC Client to access the Provisioning Manager for downloading the service package and address book

• Port 443: HTTPS port on the Provisioning Manager used to interface with the Personal Agent, Provisioning Client, and the Multimedia Web Client
• Port 3090: WCSCP/TCP port used to interface with the Multimedia Web Client for communication control
• Port 5070: UDP and TCP port on the IP Client Manager used for sending SIP signaling to the Session Manager
• Ports 50000 to 50100: RTP/RTCP media from clients to Border Control Point NAT cards. The Border Control Point NAPT can use a completely different range of ports (for example, 40000 to 60000 or 33300 to 33600), configurable at the Border Control Point.

If a firewall is used, the same policy must be applied. If the firewall is equipped with a Denial of Service (DoS) or Distributed Denial of Service (DDoS) filter, Nortel recommends that this functionality be activated.
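The default-drop policy for the key signaling ports listed above can be sketched as an allowlist. A real deployment would express this in the packet filter or firewall itself; this hypothetical sketch only shows the decision logic, and the port set is taken from the list in this section.

```python
# Illustrative allowlist for the key MCS signaling ports. Everything not
# explicitly listed is dropped (default-drop policy).

ALLOWED = {
    ("udp", 5060),   # SIP to Session Manager
    ("udp", 5000),   # UNIStim to IP Client Manager
    ("udp", 50020),  # UFTP firmware downloads
    ("tcp", 80),     # HTTP/SOAP to Provisioning Manager
    ("tcp", 443),    # HTTPS to Provisioning Manager
    ("tcp", 3090),   # WCSCP for Multimedia Web Client
    ("udp", 5070), ("tcp", 5070),  # IPCM SIP signaling
} | {("udp", p) for p in range(50000, 50101)}  # client RTP/RTCP range

def permit(proto: str, dest_port: int) -> bool:
    """Default-drop: permit only explicitly allowed protocol/port pairs."""
    return (proto, dest_port) in ALLOWED

print(permit("udp", 5060))   # True
print(permit("udp", 50050))  # True  (within the RTP/RTCP range)
print(permit("tcp", 23))     # False (telnet dropped by default)
```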

Access to Protected MCS Network
Nortel recommends routing all traffic in and out of the Protected MCS Network through a packet filter or firewall.

Table 29 "Traffic between MCS Public Accessible Network and MCS Public Protected Network" (page 239) shows a summary of traffic between the MCS Public Accessible Network and the Protected MCS Network for OAMP purposes.

Table 29 Traffic between MCS Public Accessible Network and MCS Public Protected Network

Source IP Address | Source Port | Destination IP Address | Destination Port | Use
MCS administrator | TCP > 1023 | System Manager NE Service IP | TCP System Manager baseport + 20 | System Management Console download (webstart)
MCS Mgmt Console | TCP > 1023 | System Manager NE Service IP | TCP System Manager baseport + 20 | OMI
MCS Mgmt Console | TCP > 1023 | System Manager NE Service IP | TCP System Manager baseport + 21 | OMI (TLS)
MCS Mgmt Console | TCP > 1023 | System Manager NE Service IP | TCP 2100 | Bulk Import and Export FTP pull passive mode (control)
MCS Mgmt Console | TCP > 1023 | System Manager NE Service IP | TCP 1023 (data port) | Bulk Import and Export FTP pull passive mode (data)
MCS Mgmt Console | TCP > 1023 | System Manager NE Service IP | TCP System Manager baseport + 26 | System Manager Console log browser
MCS Mgmt Console | TCP > 1023 | FPM NE Service IP | TCP System Manager baseport + 26 | System Manager Console log browser
SSH Client | TCP > 1023 | All MCS Linux hosts (logical IP addresses) | TCP 22 | SSH
SSH Client | TCP > 1023 | Border Control Point 7200 OAM IP | TCP 22 | SSH
Web browser | TCP > 1023 | Media Gateway 3200 | TCP 80 | HTTP provisioning
Windows Terminal Services client | TCP > 3389 | Media Application Server | TCP 3389 | Windows Terminal Services (remote administration)
System Manager NE Server logical IP | UDP 123 | NTP sources | UDP 123 | NTP
Accounting Manager NE Server logical IP | UDP 123 | NTP sources | UDP 123 | NTP

Source IP Address | Source Port | Destination IP Address | Destination Port | Use
Accounting Manager NE Server logical IP | TCP > 1023 | Back-office billing processing system | default: TCP 9000, configuration dependent | Billing stream
Web browser | TCP > 1023 | BladeCenter/BladeCenter T Platform Management Module | TCP 80 | Provisioning
Session Manager NE Server logical IP | TCP > 1023 | Terminal server | Configuration dependent | SMDI
MAS | TCP > 1023 | Email server (SMTP) | TCP 25 | SMTP (Unified Messaging)
MAS | TCP > 1023 | NOC | TCP 21 (control port) | FTP push passive mode (logs and performance reports)
MAS | TCP > 1023 | NOC | TCP > 1023 (data port) | FTP push passive mode (logs and performance reports)
Server to be backed up or restored | TCP > 0 | Backup server with tape drive | TCP 514 | rsh - backup and restore
System Manager NE Service IP | UDP > 1023 | OSS | configurable | SNMP TRAPs
OSS | UDP > 1023 | System Manager NE Service IP | UDP FPM baseport + 17 | SNMP GETs
FPM NE Service IP | UDP > 1023 | OSS | configurable | SNMP TRAPs
OSS | UDP > 1023 | FPM NE Service IP | UDP FPM baseport + 17 | SNMP GETs
All Linux servers logical IP | UDP > 1023 | Syslog server | UDP 514 | Syslogs
Prov NE Server logical IP | TCP > 1023 | LDAP Server | TCP 389 | LDAP sync
WiCM | TCP > 1023 | MDS Web Server | TCP (no default) | MDS (Message Waiting Indicator)

Source IP Address | Source Port | Destination IP Address | Destination Port | Use
Web browser | TCP > 1023 | Database Manager server logical IP | TCP 3339 | Oracle Enterprise Manager
Web browser | TCP > 1023 | Database Manager server logical IP | TCP 7771 | Oracle Enterprise Manager
Web browser | TCP > 1023 | Database Manager server logical IP | TCP 7776 | Oracle Enterprise Manager
Web browser | TCP > 1023 | Database Manager server logical IP | TCP 7773 | Oracle Enterprise Manager
Web browser | TCP > 1023 | Database Manager server logical IP | TCP 1748 | Oracle Enterprise Manager
Web browser | TCP > 1023 | Database Manager server logical IP | TCP 1754 | Oracle Enterprise Manager
Web browser | TCP > 1023 | Database Manager server logical IP | TCP 1808 | Oracle Enterprise Manager
Web browser | TCP > 1023 | Database Manager server logical IP | TCP 1809 | Oracle Enterprise Manager

Access to MCS Public Accessible Network
Table 30 "Traffic between MCS clients and MCS Public Accessible Network" (page 241) shows a summary of traffic between the MCS clients and the MCS Public Accessible Network.

Table 30 Traffic between MCS clients and MCS Public Accessible Network

Source IP Address | Source Port | Destination IP Address | Destination Port | Use
Any client behind the NAT/NAPT | UDP > 1023 | Session Manager NE Service IP | UDP 5060 | SIP
Any client behind the NAT/NAPT | UDP > 1023 | MAS | UDP 5060 | SIP (IM Chat)
Any client behind the NAT/NAPT | UDP > 1023 | IP Client Manager NE server logical IP | UDP 5000 | UNIStim

Source IP Address | Source Port | Destination IP Address | Destination Port | Use
Any client behind the NAT/NAPT | TCP > 1023 | UNIStim FTP NE server logical IP | UDP 50020 | Firmware download for IP Phone
Any client behind the NAT/NAPT | TCP > 1023 | Provisioning Manager NE server logical IP | TCP 8080 or TCP 8443 | HTTP or HTTPS
Any client behind the NAT/NAPT | TCP > 1023 | Mobile Personal Agent Manager NE server logical IP | TCP 8081 or TCP 8444 | HTTP or HTTPS for Mobile Personal Agent
Any client behind the NAT/NAPT | TCP > 1023 | Personal Agent Manager NE server logical IP | TCP 80 or TCP 443 | HTTP or HTTPS
Any client behind the NAT/NAPT | TCP > 1023 | Personal Agent Manager NE server logical IP | TCP 80 or TCP 443 | HTTP or HTTPS for Personal Agent access
Any client behind the NAT/NAPT | UDP > 1023 | Border Control Point 7200 Net1/Net2 | UDP 40000 to 60000 | Media (RTP, RTCP, UDP)
Any client behind the NAT/NAPT | UDP > 1023 | Media Gateway 3200 | UDP > 6000 | RTP media
Any client behind the NAT/NAPT | UDP > 1023 | MAS (for each blade for the BladeCenter platform) | UDP 53500 to 59999 | RTP media
Any client behind the NAT/NAPT | TCP > 1023 | Web Application Collaboration Server | TCP 80 | Collaboration media over HTTP
Any client behind the NAT/NAPT | TCP > 1023 | Web Application Collaboration Server | TCP 443 | HTTPS
Any client behind the NAT/NAPT | TCP > 1023 | Wireless Client Manager (WiCM) | TCP 80 | HTTP

Communication between MCS servers and gateways
Table 31 "Summary of communication between MCS servers and gateways" (page 243) shows a summary of communication between MCS servers and gateways. Note the following:
• FT (Fault Tolerant) heartbeat applies to all NE types.
• All Perfect Channels apply to both active and inactive NE instances.
• Database Channels apply to both active and inactive NE instances.

Table 31 Summary of communication between MCS servers and gateways

Source IP Address | Source Port | Destination IP Address | Destination Port | Use
System Manager server logical IP | UDP 123 | Border Control Point 7200 OAM | UDP 123 | NTP
System Manager server logical IP | UDP 123 | MAS | UDP 123 | NTP
System Manager server logical IP | UDP 123 | All Linux servers | UDP 123 | NTP
System Manager server logical IP | UDP 123 | Web Application Collaboration server | UDP 123 | NTP
Session Manager NE server logical IP (standby instance) | TCP > 1023 | Session Manager NE server logical IP (active instance) | TCP SessMgr baseport + 53 | FT Sync Channel
Session Manager NE server logical IP (standby instance) | UDP > 1023 | Session Manager NE server logical IP (active instance) | UDP > 1023 | FT Sync Channel
System Manager NE server logical IP (standby instance) | TCP > 1023 | System Manager NE server logical IP (active instance) | TCP SM baseport + 53 | FT Sync Channel
System Manager NE server logical IP (standby instance) | UDP > 1023 | System Manager NE server logical IP (active instance) | UDP > 1023 | FT Sync Channel
System Manager/FPM server logical IP | UDP > 1023 | Border Control Point 7200 OAM | UDP 161 | SNMP (GET)
System Manager/FPM server logical IP | UDP > 1023 | MAS | UDP 161 | SNMP (GET)
System Manager/FPM server logical IP | UDP > 1023 | All Linux servers | UDP 161 | SNMP (GET)
System Manager/FPM server logical IP | UDP > 1023 | Media Gateway 3200 | UDP 161 | SNMP (GET)
Media Gateway 3200 | UDP 162 | System Manager/FPM NE server logical IP | UDP FPM baseport + 23 | SNMP (TRAP)

Source IP Address | Source Port | Destination IP Address | Destination Port | Use
MAS | UDP 162 | System Manager/FPM NE server logical IP | UDP FPM baseport + 24 | SNMP (TRAP)
Media Gateway 3200 | UDP 162 | System Manager/FPM NE server logical IP | UDP FPM baseport + 23 | SNMP (TRAP)
MAS | UDP 162 | FPM NE server logical IP | UDP FPM baseport + 24 | SNMP (TRAP)
System Manager/FPM NE server logical IP | UDP > 1023 | Database Server Oracle MIB logical IP | UDP 9161 | SNMP (GET)
System Manager NE server logical IP | TCP > 1023 | Border Control Point 7200 OAM | TCP 4891 | NED
System Manager/FPM NE server logical IP | TCP > 1023 | Session Manager NE server logical IP | TCP 4891 | NED
System Manager/FPM NE server logical IP | TCP > 1023 | IPCM NE server logical IP | TCP 4891 | NED
System Manager/FPM NE server logical IP | TCP > 1023 | Provisioning Manager NE server logical IP | TCP 4891 | NED
System Manager/FPM NE server logical IP | TCP > 1023 | Personal Agent Manager NE server logical IP | TCP 4891 | NED
System Manager/FPM NE server logical IP | TCP > 1023 | Accounting Manager NE server logical IP | TCP 4891 | NED
System Manager NE server logical IP | TCP > 1023 | FPM NE server logical IP | TCP 4891 | NED
System Manager/FPM NE server logical IP | TCP > 1023 | UNIStim FTP NE server logical IP | TCP 4891 | NED
Border Control Point 7200 OAM | TCP > 1023 | System Manager/FPM NE server logical IP | TCP 2100 | NED FTP pull passive mode (control)
Border Control Point 7200 OAM | TCP > 1023 | System Manager/FPM NE server logical IP | TCP > 1023 | NED FTP pull passive mode (data)

Source IP Address | Source Port | Destination IP Address | Destination Port | Use
Session Manager NE server logical IP | TCP > 1023 | System Manager/FPM NE server logical IP | TCP 2100 | NED FTP pull passive mode (control)
Session Manager NE server logical IP | TCP > 1023 | System Manager/FPM NE server logical IP | TCP > 1023 | NED FTP pull passive mode (data)
IPCM NE server logical IP | TCP > 1023 | System Manager/FPM NE server logical IP | TCP 2100 | NED FTP pull passive mode (control)
IPCM NE server logical IP | TCP > 1023 | System Manager/FPM NE server logical IP | TCP > 1023 | NED FTP pull passive mode (data)
Provisioning Manager NE server logical IP | TCP > 1023 | System Manager/FPM NE server logical IP | TCP 2100 | NED FTP pull passive mode (control)
Provisioning Manager NE server logical IP | TCP > 1023 | System Manager/FPM NE server logical IP | TCP > 1023 | NED FTP pull passive mode (data)
Personal Agent Manager NE server logical IP | TCP > 1023 | System Manager/FPM NE server logical IP | TCP 2100 | NED FTP pull passive mode (control)
Personal Agent Manager NE server logical IP | TCP > 1023 | System Manager/FPM NE server logical IP | TCP > 1023 | NED FTP pull passive mode (data)
Accounting Manager NE server logical IP | TCP > 1023 | System Manager/FPM NE server logical IP | TCP 2100 | NED FTP pull passive mode (control)
Accounting Manager NE server logical IP | TCP > 1023 | System Manager/FPM NE server logical IP | TCP > 1023 | NED FTP pull passive mode (data)
FPM NE server logical IP | TCP > 1023 | System Manager/FPM NE server logical IP | TCP 2100 | NED FTP pull passive mode (control)
FPM NE server logical IP | TCP > 1023 | System Manager/FPM NE server logical IP | TCP > 1023 | NED FTP pull passive mode (data)
UNIStim FTP NE server logical IP | TCP > 1023 | System Manager/FPM NE server logical IP | TCP 2100 | NED FTP pull passive mode (control)
UNIStim FTP NE server logical IP | TCP > 1023 | System Manager/FPM NE server logical IP | TCP > 1023 | NED FTP pull passive mode (data)

Source IP Address | Source Port | Destination IP Address | Destination Port | Use
System Manager/FPM NE server logical IP | TCP > 1023 | Border Control Point 7200 OAM | TCP BCP 7200 OAM baseport + 2 | Configuration maintenance (perfect channel)
System Manager/FPM NE server logical IP | UDP SM baseport + 1 | Border Control Point 7200 OAM | TCP BCP 7200 OAM baseport + 2 | Configuration maintenance (perfect channel)
System Manager/FPM NE server logical IP | TCP > 1023 | Session Manager NE server logical IP | TCP SessMgr baseport + 2 | Configuration maintenance (perfect channel)
System Manager/FPM NE server logical IP | UDP SM baseport + 1 | Session Manager NE server logical IP | UDP SessMgr baseport + 2 | Configuration maintenance (perfect channel)
System Manager/FPM NE server logical IP | TCP > 1023 | IPCM NE server logical IP | TCP IPCM baseport + 2 | Configuration maintenance (perfect channel)
System Manager/FPM NE server logical IP | UDP SM baseport + 1 | IPCM NE server logical IP | UDP IPCM baseport + 2 | Configuration maintenance (perfect channel)
System Manager/FPM NE server logical IP | TCP > 1023 | Provisioning Manager NE server logical IP | TCP Prov baseport + 2 | Configuration maintenance (perfect channel)
System Manager/FPM NE server logical IP | UDP SM baseport + 1 | Provisioning Manager NE server logical IP | UDP Prov baseport + 2 | Configuration maintenance (perfect channel)
System Manager/FPM NE server logical IP | TCP > 1023 | Personal Agent Manager NE server logical IP | TCP Prov baseport + 2 | Configuration maintenance (perfect channel)
System Manager/FPM NE server logical IP | UDP SM baseport + 1 | Personal Agent Manager NE server logical IP | UDP Prov baseport + 2 | Configuration maintenance (perfect channel)
System Manager/FPM NE server logical IP | TCP > 1023 | Accounting Manager NE server logical IP | TCP AcctMgr baseport + 2 | Configuration maintenance (perfect channel)
System Manager/FPM NE server logical IP | UDP SM baseport + 1 | Accounting Manager NE server logical IP | UDP AcctMgr baseport + 2 | Configuration maintenance (perfect channel)
System Manager NE server logical IP | TCP > 1023 | FPM NE server logical IP | TCP FPM baseport + 2 | Configuration maintenance (perfect channel)

Nortel Multimedia Communication Server 5100 MCS Planning and Engineering NN42020-200 04.02 Standard Release 4.0 8 October 2009 Copyright © 2007-2009, Nortel Networks

Protecting MCS servers and gateways

System Manager NE server logical IP | UDP SM baseport + 1 | FPM NE server logical IP | UDP FPM baseport + 2 | Configuration maintenance (perfect channel)
System Manager/FPM NE server logical IP | TCP > 1023 | UFTP NE server logical IP | TCP UFTP baseport + 2 | Configuration maintenance (perfect channel)
System Manager/FPM NE server logical IP | UDP SM baseport + 1 | UFTP NE server logical IP | UDP UFTP baseport + 2 | Configuration maintenance (perfect channel)
Border Control Point 7200 OAM | TCP > 1023 | System Manager/FPM NE server logical IP | TCP SM baseport + 1 | Configuration maintenance (perfect channel)
Border Control Point 7200 OAM | UDP baseport + 2 | System Manager/FPM NE server logical IP | UDP SM baseport + 1 | Configuration maintenance (perfect channel)
Session Manager NE server logical IP | TCP > 1023 | System Manager/FPM NE server logical IP | TCP baseport + 1 | Configuration maintenance (perfect channel)
Session Manager NE server logical IP | UDP baseport + 2 | System Manager/FPM NE server logical IP | UDP baseport + 1 | Configuration maintenance (perfect channel)
IPCM NE server logical IP | TCP > 1023 | System Manager/FPM NE server logical IP | TCP SM baseport + 1 | Configuration maintenance (perfect channel)
IPCM NE server logical IP | UDP baseport + 2 | System Manager/FPM NE server logical IP | UDP SM baseport + 1 | Configuration maintenance (perfect channel)
Provisioning Manager NE server logical IP | TCP > 1023 | System Manager/FPM NE server logical IP | TCP baseport + 1 | Configuration maintenance (perfect channel)
Provisioning Manager NE server logical IP | UDP baseport + 2 | System Manager/FPM NE server logical IP | UDP baseport + 1 | Configuration maintenance (perfect channel)
Accounting Manager NE server logical IP | TCP > 1023 | System Manager/FPM NE server logical IP | TCP SM baseport + 1 | Configuration maintenance (perfect channel)
Accounting Manager NE server logical IP | UDP baseport + 2 | System Manager/FPM NE server logical IP | UDP SM baseport + 1 | Configuration maintenance (perfect channel)
FPM NE server logical IP | TCP > 1023 | System Manager/FPM NE server logical IP | TCP baseport + 1 | Configuration maintenance (perfect channel)


FPM NE server logical IP | UDP baseport + 2 | System Manager/FPM NE server logical IP | UDP baseport + 1 | Configuration maintenance (perfect channel)
UNIStim FTP NE server logical IP | TCP > 1023 | System Manager/FPM NE server logical IP | TCP baseport + 1 | Configuration maintenance (perfect channel)
UNIStim FTP NE server logical IP | UDP baseport + 2 | System Manager/FPM NE server logical IP | UDP baseport + 1 | Configuration maintenance (perfect channel)
System Manager/FPM NE server logical IP | TCP > 1023 | Border Control Point 7200 OAM | BCP 7200 OAM baseport + 25 | Alarms (perfect channel)
System Manager NE server logical IP | UDP FPM baseport + 25 | Border Control Point 7200 OAM | BCP 7200 OAM baseport + 25 | Alarms (perfect channel)
System Manager NE server logical IP | TCP > 1023 | Session Manager NE server logical IP | TCP SessMgr baseport + 25 | Alarms (perfect channel)
System Manager NE server logical IP | UDP FPM baseport + 25 | Session Manager NE server logical IP | UDP SessMgr baseport + 25 | Alarms (perfect channel)
System Manager NE server logical IP | TCP > 1023 | IPCM NE server logical IP | TCP IPCM baseport + 25 | Alarms (perfect channel)
System Manager NE server logical IP | UDP FPM baseport + 25 | IPCM NE server logical IP | UDP IPCM baseport + 25 | Alarms (perfect channel)
System Manager NE server logical IP | TCP > 1023 | Provisioning Manager NE server logical IP | TCP Prov baseport + 25 | Alarms (perfect channel)
System Manager NE server logical IP | UDP FPM baseport + 25 | Provisioning Manager NE server logical IP | UDP Prov baseport + 25 | Alarms (perfect channel)
System Manager NE server logical IP | TCP > 1023 | Personal Agent Manager NE server logical IP | TCP Prov baseport + 25 | Alarms (perfect channel)
System Manager NE server logical IP | UDP FPM baseport + 25 | Personal Agent Manager NE server logical IP | UDP Prov baseport + 25 | Alarms (perfect channel)
System Manager NE server logical IP | TCP > 1023 | Accounting Manager NE server logical IP | TCP AcctMgr baseport + 25 | Alarms (perfect channel)


System Manager NE server logical IP | UDP FPM baseport + 25 | Accounting Manager NE server logical IP | UDP AcctMgr baseport + 25 | Alarms (perfect channel)
System Manager NE server logical IP | TCP > 1023 | UNIStim FTP NE server logical IP | TCP UFTP baseport + 25 | Alarms (perfect channel)
System Manager NE server logical IP | UDP FPM baseport + 25 | UNIStim FTP NE server logical IP | UDP UFTP baseport + 25 | Alarms (perfect channel)
Border Control Point 7200 OAM | TCP > 1023 | System Manager NE server logical IP | TCP SM baseport + 25 | Alarms (perfect channel)
Border Control Point 7200 OAM | UDP FPM baseport + 25 | System Manager NE server logical IP | UDP SM baseport + 25 | Alarms (perfect channel)
Session Manager NE server logical IP | TCP > 1023 | System Manager NE server logical IP | TCP SM baseport + 25 | Alarms (perfect channel)
Session Manager NE server logical IP | UDP FPM baseport + 25 | System Manager NE server logical IP | UDP SM baseport + 25 | Alarms (perfect channel)
IPCM NE server logical IP | TCP > 1023 | System Manager NE server logical IP | TCP SM baseport + 25 | Alarms (perfect channel)
IPCM NE server logical IP | UDP FPM baseport + 25 | System Manager NE server logical IP | UDP SM baseport + 25 | Alarms (perfect channel)
Provisioning Manager NE server logical IP | TCP > 1023 | System Manager NE server logical IP | TCP SM baseport + 25 | Alarms (perfect channel)
Provisioning Manager NE server logical IP | UDP FPM baseport + 25 | System Manager NE server logical IP | UDP SM baseport + 25 | Alarms (perfect channel)
Personal Agent Manager NE server logical IP | TCP > 1023 | System Manager NE server logical IP | TCP SM baseport + 25 | Alarms (perfect channel)
Personal Agent Manager NE server logical IP | UDP FPM baseport + 25 | System Manager NE server logical IP | UDP SM baseport + 25 | Alarms (perfect channel)
Accounting Manager NE server logical IP | TCP > 1023 | System Manager NE server logical IP | TCP SM baseport + 25 | Alarms (perfect channel)


Accounting Manager NE server logical IP | UDP FPM baseport + 25 | System Manager NE server logical IP | UDP SM baseport + 25 | Alarms (perfect channel)
UNIStim FTP NE server logical IP | TCP > 1023 | System Manager NE server logical IP | TCP SM baseport + 25 | Alarms (perfect channel)
UNIStim FTP NE server logical IP | UDP FPM baseport + 25 | System Manager NE server logical IP | UDP SM baseport + 25 | Alarms (perfect channel)
FPM NE server logical IP | TCP > 1023 | Border Control Point 7200 OAM | TCP BCP 7200 OAM baseport + 25 | Alarms (perfect channel)
FPM NE server logical IP | UDP FPM baseport + 25 | Border Control Point 7200 OAM | UDP BCP 7200 OAM baseport + 25 | Alarms (perfect channel)
FPM NE server logical IP | TCP > 1023 | Session Manager NE server logical IP | TCP SessMgr baseport + 25 | Alarms (perfect channel)
FPM NE server logical IP | UDP FPM baseport + 25 | Session Manager NE server logical IP | UDP SessMgr baseport + 25 | Alarms (perfect channel)
FPM NE server logical IP | TCP > 1023 | IPCM NE server logical IP | TCP IPCM baseport + 25 | Alarms (perfect channel)
FPM NE server logical IP | UDP FPM baseport + 25 | IPCM NE server logical IP | UDP IPCM baseport + 25 | Alarms (perfect channel)
FPM NE server logical IP | TCP > 1023 | Provisioning Manager NE server logical IP | TCP Prov baseport + 25 | Alarms (perfect channel)
FPM NE server logical IP | UDP FPM baseport + 25 | Provisioning Manager NE server logical IP | UDP Prov baseport + 25 | Alarms (perfect channel)
FPM NE server logical IP | TCP > 1023 | Personal Agent Manager NE server logical IP | TCP Prov baseport + 25 | Alarms (perfect channel)
FPM NE server logical IP | UDP FPM baseport + 25 | Personal Agent Manager NE server logical IP | UDP Prov baseport + 25 | Alarms (perfect channel)
FPM NE server logical IP | TCP > 1023 | Accounting Manager NE server logical IP | TCP AcctMgr baseport + 25 | Alarms (perfect channel)


FPM NE server logical IP | UDP FPM baseport + 25 | Accounting Manager NE server logical IP | UDP AcctMgr baseport + 25 | Alarms (perfect channel)
FPM NE server logical IP | TCP > 1023 | UNIStim FTP NE server logical IP | TCP UFTP baseport + 25 | Alarms (perfect channel)
FPM NE server logical IP | UDP FPM baseport + 25 | UNIStim FTP NE server logical IP | UDP UFTP baseport + 25 | Alarms (perfect channel)
Border Control Point 7200 OAM | TCP > 1023 | FPM NE server logical IP | TCP FPM baseport + 25 | Alarms (perfect channel)
Border Control Point 7200 OAM | UDP BCP 7200 OAM baseport + 25 | FPM NE server logical IP | UDP FPM baseport + 25 | Alarms (perfect channel)
Session Manager NE server logical IP | TCP > 1023 | FPM NE server logical IP | TCP FPM baseport + 25 | Alarms (perfect channel)
Session Manager NE server logical IP | UDP SessMgr baseport + 25 | FPM NE server logical IP | UDP FPM baseport + 25 | Alarms (perfect channel)
IPCM NE server logical IP | TCP > 1023 | FPM NE server logical IP | TCP FPM baseport + 25 | Alarms (perfect channel)
IPCM NE server logical IP | UDP IPCM baseport + 25 | FPM NE server logical IP | UDP FPM baseport + 25 | Alarms (perfect channel)
Provisioning Manager NE server logical IP | TCP > 1023 | FPM NE server logical IP | TCP FPM baseport + 25 | Alarms (perfect channel)
Provisioning Manager NE server logical IP | UDP Prov baseport + 25 | FPM NE server logical IP | UDP FPM baseport + 25 | Alarms (perfect channel)
Personal Agent Manager NE server logical IP | TCP > 1023 | FPM NE server logical IP | TCP FPM baseport + 25 | Alarms (perfect channel)
Personal Agent Manager NE server logical IP | UDP Prov baseport + 25 | FPM NE server logical IP | UDP FPM baseport + 25 | Alarms (perfect channel)
Accounting Manager NE server logical IP | TCP > 1023 | FPM NE server logical IP | TCP FPM baseport + 25 | Alarms (perfect channel)
Accounting Manager NE server logical IP | UDP AcctMgr baseport + 25 | FPM NE server logical IP | UDP FPM baseport + 25 | Alarms (perfect channel)
UNIStim FTP NE server logical IP | TCP > 1023 | FPM NE server logical IP | TCP FPM baseport + 25 | Alarms (perfect channel)


UNIStim FTP NE server logical IP | UDP UFTP baseport + 25 | FPM NE server logical IP | UDP FPM baseport + 25 | Alarms (perfect channel)
System Manager NE server logical IP | TCP > 1023 | Border Control Point 7200 OAM | TCP BCP 7200 OAM baseport + 13 | Logs (perfect channel)
System Manager NE server logical IP | UDP FPM baseport + 12 | Border Control Point 7200 OAM | UDP BCP 7200 OAM baseport + 13 | Logs (perfect channel)
System Manager NE server logical IP | TCP > 1023 | Session Manager NE server logical IP | TCP SessMgr baseport + 13 | Logs (perfect channel)
System Manager NE server logical IP | UDP FPM baseport + 12 | Session Manager NE server logical IP | UDP SessMgr baseport + 13 | Logs (perfect channel)
System Manager NE server logical IP | TCP > 1023 | IPCM NE server logical IP | TCP IPCM baseport + 13 | Logs (perfect channel)
System Manager NE server logical IP | UDP FPM baseport + 12 | IPCM NE server logical IP | UDP IPCM baseport + 13 | Logs (perfect channel)
System Manager NE server logical IP | TCP > 1023 | Provisioning Manager NE server logical IP | TCP Prov baseport + 13 | Logs (perfect channel)
System Manager NE server logical IP | UDP FPM baseport + 12 | Provisioning Manager NE server logical IP | UDP Prov baseport + 13 | Logs (perfect channel)
System Manager NE server logical IP | TCP > 1023 | Personal Agent Manager NE server logical IP | TCP Prov baseport + 13 | Logs (perfect channel)
System Manager NE server logical IP | UDP FPM baseport + 12 | Personal Agent Manager NE server logical IP | UDP Prov baseport + 13 | Logs (perfect channel)
System Manager NE server logical IP | TCP > 1023 | Accounting Manager NE server logical IP | TCP AcctMgr baseport + 13 | Logs (perfect channel)
System Manager NE server logical IP | UDP FPM baseport + 12 | Accounting Manager NE server logical IP | UDP AcctMgr baseport + 13 | Logs (perfect channel)
System Manager NE server logical IP | TCP > 1023 | UNIStim FTP NE server logical IP | TCP UFTP baseport + 13 | Logs (perfect channel)


System Manager NE server logical IP | UDP FPM baseport + 12 | UNIStim FTP NE server logical IP | UDP UFTP baseport + 13 | Logs (perfect channel)
Border Control Point 7200 OAM | TCP > 1023 | System Manager NE server logical IP | TCP SM baseport + 12 | Logs (perfect channel)
Border Control Point 7200 OAM | UDP Host card or OAM baseport + 13 | System Manager NE server logical IP | UDP SM baseport + 12 | Logs (perfect channel)
Session Manager NE server logical IP | TCP > 1023 | System Manager NE server logical IP | TCP SM baseport + 12 | Logs (perfect channel)
Session Manager NE server logical IP | UDP SessMgr baseport + 13 | System Manager NE server logical IP | UDP SM baseport + 12 | Logs (perfect channel)
IPCM NE server logical IP | TCP > 1023 | System Manager NE server logical IP | TCP SM baseport + 12 | Logs (perfect channel)
IPCM NE server logical IP | UDP IPCM baseport + 13 | System Manager NE server logical IP | UDP SM baseport + 12 | Logs (perfect channel)
Provisioning Manager NE server logical IP | TCP > 1023 | System Manager NE server logical IP | TCP SM baseport + 12 | Logs (perfect channel)
Provisioning Manager NE server logical IP | UDP Prov Mgr baseport + 13 | System Manager NE server logical IP | UDP SM baseport + 12 | Logs (perfect channel)
Personal Agent Manager NE server logical IP | TCP > 1023 | System Manager NE server logical IP | TCP SM baseport + 12 | Logs (perfect channel)
Personal Agent Manager NE server logical IP | UDP PA Mgr baseport + 13 | System Manager NE server logical IP | UDP SM baseport + 12 | Logs (perfect channel)
Accounting Manager NE server logical IP | TCP > 1023 | System Manager NE server logical IP | TCP SM baseport + 12 | Logs (perfect channel)
Accounting Manager NE server logical IP | UDP Acct Mgr baseport + 13 | System Manager NE server logical IP | UDP SM baseport + 12 | Logs (perfect channel)
UNIStim FTP NE server logical IP | TCP > 1023 | System Manager NE server logical IP | TCP SM baseport + 12 | Logs (perfect channel)


UNIStim FTP NE server logical IP | UDP UFTP baseport + 13 | System Manager NE server logical IP | UDP SM baseport + 12 | Logs (perfect channel)
FPM NE server logical IP | TCP > 1023 | Border Control Point 7200 OAM | TCP BCP 7200 OAM baseport + 13 | Logs (perfect channel)
FPM NE server logical IP | UDP FPM baseport + 12 | Border Control Point 7200 OAM | BCP 7200 OAM baseport + 13 | Logs (perfect channel)
FPM NE server logical IP | TCP > 1023 | Session Manager NE server logical IP | TCP SessMgr baseport + 13 | Logs (perfect channel)
FPM NE server logical IP | UDP FPM baseport + 12 | Session Manager NE server logical IP | UDP SessMgr baseport + 13 | Logs (perfect channel)
FPM NE server logical IP | TCP > 1023 | IPCM NE server logical IP | TCP IPCM baseport + 13 | Logs (perfect channel)
FPM NE server logical IP | UDP FPM baseport + 12 | IPCM NE server logical IP | UDP IPCM baseport + 13 | Logs (perfect channel)
FPM NE server logical IP | TCP > 1023 | Provisioning Manager NE server logical IP | TCP Prov baseport + 13 | Logs (perfect channel)
FPM NE server logical IP | UDP FPM baseport + 12 | Provisioning Manager NE server logical IP | UDP Prov baseport + 13 | Logs (perfect channel)
FPM NE server logical IP | TCP > 1023 | Personal Agent Manager NE server logical IP | TCP Prov baseport + 13 | Logs (perfect channel)
FPM NE server logical IP | UDP FPM baseport + 12 | Personal Agent Manager NE server logical IP | UDP Prov baseport + 13 | Logs (perfect channel)
FPM NE server logical IP | TCP > 1023 | Accounting Manager NE server logical IP | TCP AcctMgr baseport + 13 | Logs (perfect channel)
FPM NE server logical IP | UDP FPM baseport + 12 | Accounting Manager NE server logical IP | UDP AcctMgr baseport + 13 | Logs (perfect channel)
FPM NE server logical IP | TCP > 1023 | UFTP NE server logical IP | TCP UFTP baseport + 13 | Logs (perfect channel)
FPM NE server logical IP | UDP FPM baseport + 12 | UFTP NE server logical IP | UDP UFTP baseport + 13 | Logs (perfect channel)


Border Control Point 7200 OAM | TCP > 1023 | FPM NE server logical IP | TCP FPM baseport + 12 | Logs (perfect channel)
Border Control Point 7200 OAM | UDP BCP 7200 OAM baseport + 13 | FPM NE server logical IP | UDP FPM baseport + 12 | Logs (perfect channel)
Session Manager NE server logical IP | TCP > 1023 | FPM NE server logical IP | TCP FPM baseport + 12 | Logs (perfect channel)
Session Manager NE server logical IP | UDP SessMgr baseport + 13 | FPM NE server logical IP | UDP FPM baseport + 12 | Logs (perfect channel)
IPCM NE server logical IP | TCP > 1023 | FPM NE server logical IP | TCP FPM baseport + 12 | Logs (perfect channel)
IPCM NE server logical IP | UDP IPCM baseport + 13 | FPM NE server logical IP | UDP FPM baseport + 12 | Logs (perfect channel)
Provisioning Manager NE server logical IP | TCP > 1023 | FPM NE server logical IP | TCP FPM baseport + 12 | Logs (perfect channel)
Provisioning Manager NE server logical IP | UDP Prov baseport + 13 | FPM NE server logical IP | UDP FPM baseport + 12 | Logs (perfect channel)
Personal Agent Manager NE server logical IP | TCP > 1023 | FPM NE server logical IP | TCP FPM baseport + 12 | Logs (perfect channel)
Personal Agent Manager NE server logical IP | UDP Prov baseport + 13 | FPM NE server logical IP | UDP FPM baseport + 12 | Logs (perfect channel)
Accounting Manager NE server logical IP | TCP > 1023 | FPM NE server logical IP | TCP FPM baseport + 12 | Logs (perfect channel)
Accounting Manager NE server logical IP | UDP AcctMgr baseport + 13 | FPM NE server logical IP | UDP FPM baseport + 12 | Logs (perfect channel)
UNIStim FTP NE server logical IP | TCP > 1023 | FPM NE server logical IP | TCP FPM baseport + 12 | Logs (perfect channel)
UNIStim FTP NE server logical IP | UDP UFTP baseport + 13 | FPM NE server logical IP | UDP FPM baseport + 12 | Logs (perfect channel)
System Manager NE server logical IP | TCP > 1023 | Border Control Point 7200 OAM | TCP BCP 7200 OAM baseport + 15 | OMs (perfect channel)


System Manager NE server logical IP | UDP FPM baseport + 14 | Border Control Point 7200 OAM | UDP BCP 7200 OAM baseport + 15 | OMs (perfect channel)
System Manager NE server logical IP | TCP > 1023 | Session Manager NE server logical IP | TCP SessMgr baseport + 15 | OMs (perfect channel)
System Manager NE server logical IP | UDP FPM baseport + 14 | Session Manager NE server logical IP | UDP SessMgr baseport + 15 | OMs (perfect channel)
System Manager NE server logical IP | TCP > 1023 | IPCM NE server logical IP | TCP IPCM baseport + 15 | OMs (perfect channel)
System Manager NE server logical IP | UDP FPM baseport + 14 | IPCM NE server logical IP | UDP IPCM baseport + 15 | OMs (perfect channel)
System Manager NE server logical IP | TCP > 1023 | Provisioning Manager NE server logical IP | TCP Prov baseport + 15 | OMs (perfect channel)
System Manager NE server logical IP | UDP FPM baseport + 14 | Provisioning Manager NE server logical IP | UDP Prov baseport + 15 | OMs (perfect channel)
System Manager NE server logical IP | TCP > 1023 | Personal Agent Manager NE server logical IP | TCP Prov baseport + 15 | OMs (perfect channel)
System Manager NE server logical IP | UDP FPM baseport + 14 | Personal Agent Manager NE server logical IP | UDP Prov baseport + 15 | OMs (perfect channel)
System Manager NE server logical IP | TCP > 1023 | Accounting Manager NE server logical IP | TCP AcctMgr baseport + 15 | OMs (perfect channel)
System Manager NE server logical IP | UDP FPM baseport + 14 | Accounting Manager NE server logical IP | UDP AcctMgr baseport + 15 | OMs (perfect channel)
System Manager NE server logical IP | TCP > 1023 | UNIStim FTP NE server logical IP | TCP UFTP baseport + 15 | OMs (perfect channel)
System Manager NE server logical IP | UDP FPM baseport + 14 | UNIStim FTP NE server logical IP | UDP UFTP baseport + 15 | OMs (perfect channel)
Border Control Point 7200 OAM | TCP > 1023 | System Manager NE server logical IP | TCP SM baseport + 14 | OMs (perfect channel)


Border Control Point 7200 OAM | UDP Host card/OAM baseport + 15 | System Manager NE server logical IP | UDP SM baseport + 14 | OMs (perfect channel)
Session Manager NE server logical IP | TCP > 1023 | System Manager NE server logical IP | TCP SM baseport + 14 | OMs (perfect channel)
Session Manager NE server logical IP | UDP Session Mgr baseport + 15 | System Manager NE server logical IP | UDP SM baseport + 14 | OMs (perfect channel)
IPCM NE server logical IP | TCP > 1023 | System Manager NE server logical IP | TCP SM baseport + 14 | OMs (perfect channel)
IPCM NE server logical IP | UDP IPCM baseport + 15 | System Manager NE server logical IP | UDP SM baseport + 14 | OMs (perfect channel)
Provisioning Manager NE server logical IP | TCP > 1023 | System Manager NE server logical IP | TCP SM baseport + 14 | OMs (perfect channel)
Provisioning Manager NE server logical IP | UDP Prov Mgr baseport + 15 | System Manager NE server logical IP | UDP SM baseport + 14 | OMs (perfect channel)
Personal Agent Manager NE server logical IP | TCP > 1023 | System Manager NE server logical IP | TCP SM baseport + 14 | OMs (perfect channel)
Personal Agent Manager NE server logical IP | UDP PA Mgr baseport + 15 | System Manager NE server logical IP | UDP SM baseport + 14 | OMs (perfect channel)
Accounting Manager NE server logical IP | TCP > 1023 | System Manager NE server logical IP | TCP SM baseport + 14 | OMs (perfect channel)
Accounting Manager NE server logical IP | UDP Accting Mgr baseport + 15 | System Manager NE server logical IP | UDP SM baseport + 14 | OMs (perfect channel)
UNIStim FTP NE server logical IP | TCP > 1023 | System Manager NE server logical IP | TCP SM baseport + 14 | OMs (perfect channel)
UNIStim FTP NE server logical IP | UDP UFTP baseport + 15 | System Manager NE server logical IP | UDP SM baseport + 14 | OMs (perfect channel)
FPM NE server logical IP | TCP > 1023 | Border Control Point 7200 OAM | TCP BCP 7200 OAM baseport + 15 | OMs (perfect channel)


FPM NE server logical IP | UDP FPM baseport + 14 | Border Control Point 7200 OAM | UDP BCP 7200 OAM baseport + 15 | OMs (perfect channel)
FPM NE server logical IP | TCP > 1023 | Session Manager NE server logical IP | TCP SessMgr baseport + 15 | OMs (perfect channel)
FPM NE server logical IP | UDP FPM baseport + 14 | Session Manager NE server logical IP | UDP SessMgr baseport + 15 | OMs (perfect channel)
FPM NE server logical IP | TCP > 1023 | IPCM NE server logical IP | TCP IPCM baseport + 15 | OMs (perfect channel)
FPM NE server logical IP | UDP FPM baseport + 14 | IPCM NE server logical IP | UDP IPCM baseport + 15 | OMs (perfect channel)
FPM NE server logical IP | TCP > 1023 | Provisioning Manager NE server logical IP | TCP Prov baseport + 15 | OMs (perfect channel)
FPM NE server logical IP | UDP FPM baseport + 14 | Provisioning Manager NE server logical IP | UDP Prov baseport + 15 | OMs (perfect channel)
FPM NE server logical IP | TCP > 1023 | Personal Agent Manager NE server logical IP | TCP Prov baseport + 15 | OMs (perfect channel)
FPM NE server logical IP | UDP FPM baseport + 14 | Personal Agent Manager NE server logical IP | UDP Prov baseport + 15 | OMs (perfect channel)
FPM NE server logical IP | TCP > 1023 | Accounting Manager NE server logical IP | TCP AcctMgr baseport + 15 | OMs (perfect channel)
FPM NE server logical IP | UDP FPM baseport + 14 | Accounting Manager NE server logical IP | UDP AcctMgr baseport + 15 | OMs (perfect channel)
FPM NE server logical IP | TCP > 1023 | UNIStim FTP NE server logical IP | TCP UFTP baseport + 15 | OMs (perfect channel)
FPM NE server logical IP | UDP FPM baseport + 14 | UNIStim FTP NE server logical IP | UDP UFTP baseport + 15 | OMs (perfect channel)
Border Control Point 7200 OAM | TCP > 1023 | FPM NE server logical IP | TCP FPM baseport + 14 | OMs (perfect channel)
Border Control Point 7200 OAM | UDP BCP 7200 OAM baseport + 15 | FPM NE server logical IP | UDP FPM baseport + 14 | OMs (perfect channel)


Session Manager NE server logical IP | TCP > 1023 | FPM NE server logical IP | TCP FPM baseport + 14 | OMs (perfect channel)
Session Manager NE server logical IP | UDP SessMgr baseport + 15 | FPM NE server logical IP | UDP FPM baseport + 14 | OMs (perfect channel)
IPCM NE server logical IP | TCP > 1023 | FPM NE server logical IP | TCP FPM baseport + 14 | OMs (perfect channel)
IPCM NE server logical IP | UDP IPCM baseport + 15 | FPM NE server logical IP | UDP FPM baseport + 14 | OMs (perfect channel)
Provisioning Manager NE server logical IP | TCP > 1023 | FPM NE server logical IP | TCP FPM baseport + 14 | OMs (perfect channel)
Provisioning Manager NE server logical IP | UDP Prov baseport + 15 | FPM NE server logical IP | UDP FPM baseport + 14 | OMs (perfect channel)
Personal Agent Manager NE server logical IP | TCP > 1023 | FPM NE server logical IP | TCP FPM baseport + 14 | OMs (perfect channel)
Personal Agent Manager NE server logical IP | UDP Prov baseport + 15 | FPM NE server logical IP | UDP FPM baseport + 14 | OMs (perfect channel)
Accounting Manager NE server logical IP | TCP > 1023 | FPM NE server logical IP | TCP FPM baseport + 14 | OMs (perfect channel)
Accounting Manager NE server logical IP | UDP AcctMgr baseport + 15 | FPM NE server logical IP | UDP FPM baseport + 14 | OMs (perfect channel)
UNIStim FTP NE server logical IP | TCP > 1023 | FPM NE server logical IP | TCP FPM baseport + 14 | OMs (perfect channel)
UNIStim FTP NE server logical IP | UDP UFTP baseport + 15 | FPM NE server logical IP | UDP FPM baseport + 14 | OMs (perfect channel)
Session Manager NE server logical IP | TCP > 1023 | Accounting Manager NE server logical IP | TCP AcctMgr baseport + 18 | Billing stream (perfect channel)
Session Manager NE server logical IP | UDP SessMgr baseport + 18 | Accounting Manager NE server logical IP | UDP AcctMgr baseport + 18 | Billing stream (perfect channel)
Accounting Manager NE server logical IP | TCP > 1023 | Session Manager NE server logical IP | TCP AcctMgr baseport + 19 | Billing stream (perfect channel)


Accounting Manager NE server logical IP | UDP SessMgr baseport + 18 | Session Manager NE server logical IP | UDP AcctMgr baseport + 19 | Billing stream (perfect channel)

System Manager server logical IP | TCP > 1023 | Media Gateway 3200 | TCP 80 | HTTP provisioning

WiCM | TCP > 1023 | Prov NE server logical IP | TCP 8080 | HTTP OPI

Session Manager NE server logical IP | TCP > 1023 | Database Manager NE server logical IP | TCP 1521 | database SQL
IPCM NE server logical IP | TCP > 1023 | Database Manager NE server logical IP | TCP 1521 | database SQL
Provisioning Manager NE server logical IP | TCP > 1023 | Database Manager NE server logical IP | TCP 1521 | database SQL
Personal Agent Manager NE server logical IP | TCP > 1023 | Database Manager NE server logical IP | TCP 1521 | database SQL
System Manager NE server logical IP | TCP > 1023 | Database Manager NE server logical IP | TCP 1521 | database SQL
Accounting Manager NE server logical IP | TCP > 1023 | Database Manager NE server logical IP | TCP 1521 | database SQL
Border Control Point 7200 | TCP > 1023 | Database Manager NE server logical IP | TCP 1521 | database SQL
FPM NE server logical IP | TCP > 1023 | Database Manager NE server logical IP | TCP 1521 | database SQL
UNIStim FTP NE server logical IP | TCP > 1023 | Database Manager NE server logical IP | TCP 1521 | database SQL
Database Manager NE server logical IP | TCP > 1023 | Database Manager NE server logical IP | TCP 1521 | database synchronization



Session Manager NE server logical IP | UDP 5060 | Media Gateway 3200 | UDP 5060 | SIP
Session Manager NE server logical IP | UDP 5060 | MAS | UDP 5060 | SIP
Session Manager NE server logical IP | UDP 5060 | IPCM server logical IP | UDP IPCM baseport + 52 | SIP
Session Manager NE server logical IP | UDP 5060 | Personal Agent Manager NE server logical IP | UDP PA baseport + 52 | SIP
Session Manager NE server logical IP | UDP 5060 | Provisioning Manager NE server logical IP | UDP PA baseport + 52 | SIP
Personal Agent Manager NE server logical IP | TCP > 1023 | Session Manager NE server logical IP | UDP 5060 | SIP
Session Manager NE server logical IP | UDP 5060 | ISSG IPS7 blades | UDP 5060 | SIP
Session Manager NE server logical IP | UDP 5060 | Session Manager NE service logical IP | UDP 5060 | SIP
Session Manager NE server logical IP | UDP 5060 | Wireless Client Manager | UDP 5060 to 5090 | SIP
Wireless Client Manager | UDP 5060 to 5090 | Session Manager NE service logical IP | UDP 5060 | SIP
Session Manager NE server logical IP | UDP 5060 | CSE 1000 | UDP 5060 | SIP
Session Manager NE server logical IP | UDP 5060 | CS 2000 | UDP 5060 | SIP

Session Manager NE server logical IP | UDP SESM baseport + 16 | Border Control Point 7200 MPCP | UDP BCP 7200 MPCP baseport + 16 | MPCP


CS 2000 Gateway Controller | UDP 3904 | Border Control Point 7200 MPCP | UDP 3904 | MPCP

Provisioning Manager NE server logical IP | TCP > 1023 | MAS | TCP 52005 | Multimedia Content Store
Personal Agent Manager NE server logical IP | TCP > 1023 | MAS | TCP 4005 | External Session API (ESA) - Ucomm service only
Provisioning Manager NE server logical IP | TCP > 1023 | MAS | TCP 443 | HTTPS (SOAP)

NE inactive UDP baseport + NE active instance UDP baseport + FT heartbeat instance server 50 server IP 50 between NE IP instances

System Manager/ TCP > 1023 Border Control TCP 500 ISAKMP (IPSEC) FPM NE server Point 7200 OAM logical IP Border Control TCP > 1023 System Manager/ TCP 500 ISAKMP (IPSEC) Point 7200 OAM FPM NE server logical IP Database TCP > 1023 Border Control TCP 500 ISAKMP (IPSEC) Manager server Point 7200 OAM logical IP Border Control TCP > 1023 Database TCP 500 ISAKMP (IPSEC) Point 7200 OAM Manager server logical IP Session Manager TCP > 1023 Border Control TCP 500 ISAKMP (IPSEC) NE Server logical Point 7200 OAM IP Border Control TCP > 1023 Session Manager TCP 500 ISAKMP (IPSEC) Point 7200 OAM NE Server logical IP

Session Manager UDP 7060 CS 2000 UDP 7060 GCP (SIP LINES NE Server logical ONLY) IP

Nortel Multimedia Communication Server 5100 MCS Planning and Engineering NN42020-200 04.02 Standard Release 4.0 8 October 2009 Copyright © 2007-2009, Nortel Networks

. Protecting MCS servers and gateways 263

Source IP Source Port Destination IP Destination Use Address Address Port Session Manager UDP 7061 CS 2000 UDP 7061 GCP (SIP LINES NE Server logical ONLY) IP System Manager/ SCTP CS 2000 SCTP GCP (SIP LINES FPM NE server ONLY) logical IP

MAS TCP > 1023 Web TCP 80 HTTP Collaboration and Application Sharing server

Remote access security

Support personnel typically require remote access to the IP network. The remote access device should be closely monitored, and access should be disabled when not required. Firewalls and packet filters should permit only authorized users to reach the network through the remote access device. IP access should be restricted to IP addresses contained within the solution.

Table 32 "Summary of remote technical support traffic" (page 263) shows a summary of remote technical support access to the Protected MCS Network.

Table 32
Summary of remote technical support traffic

Source IP Address | Source Port | Destination IP Address | Destination Port | Traffic
MCS administrator | TCP > 1023 | All Solaris hosts | TCP 22 | SSH
MCS administrator | TCP > 1023 | Border Control Point host blades | TCP 22 | SSH
MCS administrator | TCP > 1023 | All MCS servers and gateways | TCP/UDP 20/21 | FTP
MCS administrator | TCP > 1023 | All MCS servers and gateways | TCP/UDP 80 | HTTP
MCS administrator | TCP > 1023 | All MCS servers and gateways | TCP 23 | Telnet

SIP security options

Nortel has also incorporated security mechanisms within SIP for registration and invite messages. Authentication using the digest scheme should be implemented.


The digest scheme is based on a challenge-response approach. The server issues a challenge containing a nonce value. A valid response includes a checksum (MD5 by default, as defined in IETF RFC 1321) of the user name, password, given nonce value, the method, and the requested address/URI. This ensures that the password is never sent in the clear.

An optional header enables the server to specify the algorithm used to create the checksum, or digest. As noted, the MD5 algorithm is used by default.
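The challenge-response computation described above can be sketched in a few lines. This is an illustrative RFC 2617-style MD5 digest calculation, not MCS code; the user name, realm, nonce, and URI values are hypothetical examples.

```python
import hashlib

def _md5_hex(data: str) -> str:
    """Hex-encoded MD5 checksum (IETF RFC 1321)."""
    return hashlib.md5(data.encode("utf-8")).hexdigest()

def digest_response(username: str, realm: str, password: str,
                    method: str, uri: str, nonce: str) -> str:
    """Compute an MD5 digest response of the kind used by SIP digest
    authentication. Only this checksum travels on the wire; the
    password itself is never sent in the clear."""
    ha1 = _md5_hex(f"{username}:{realm}:{password}")   # credentials hash
    ha2 = _md5_hex(f"{method}:{uri}")                  # method and request URI
    return _md5_hex(f"{ha1}:{nonce}:{ha2}")            # final response value

# The server recomputes the same value from its stored credentials and
# compares it with the client's response to authenticate the request.
resp = digest_response("alice", "mcs.example.com", "secret",
                       "REGISTER", "sip:mcs.example.com", "84f1c2a9")
```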

Protecting end user traffic

MCS enables an existing enterprise user, SOHO user, or User Agent to access services without changing the end user's existing IP addressing scheme. The administrator simply authorizes the end users of MCS clients, such as the Multimedia PC Client or IP Phones, to access the MCS services.

The measures used by the MCS for protecting end user traffic are described in the following areas: clients, signaling, and media.

Clients

The MCS system does not have preconfigured knowledge of the IP addresses of the clients.

Signaling

All SIP signaling travels through the Session Manager, including call terminations.

The Session Manager communicates with a registering user at the IP address from which the REGISTER request was received, not the IP address contained in the SIP message.

Media

All media stay inside the Enterprise Network or the VPN.

ATTENTION Application-Level Gateways (ALGs) are not supported. There can be no ALG devices between the client and any MCS module.

Packet filter rules

The packet filter is responsible for protecting the IP network where the MCS servers and gateways reside.


The following table defines a firewall policy for both incoming and outgoing packets (from the firewall point of view). The tables assume default port settings. The fields in the table are described as follows:

• Source IP address: For the incoming policy, this is the IP address from which the packets are coming.

ATTENTION
Several rows do not specify a source address where the IP service is FTP or Telnet. These services should be enabled only for workstations that require access to the operating system, such as those used by administrators, support personnel, and back-office systems such as billing processors.

• Source port: The logical IP port of the source address. Where > 1023 is indicated, the port is dynamically allocated by the application and is greater than port 1023.
• Destination IP address: The IP address to which the packets are going.
• Destination port: The logical IP port of the destination IP address.

Rules need to be configured only for those clients that are used with the MCS system. For example, if only the Multimedia PC Client is used, then only the SIP signaling port (5060) and media ports (50000 to 50100) need to be opened.

All clients use RTP/RTCP over UDP for audio and video media. The Multimedia PC Client and IP Phones use UDP for signaling, while the Multimedia Web client uses TCP for signaling.
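The rule-derivation logic above can be sketched as a small helper that maps deployed client types to the ports that must be opened. This is a hypothetical illustration, not an MCS tool; the port values follow the defaults cited above, and the assumption that IP Phones and the Web client use the same media range is labeled as such.

```python
# Hypothetical mapping of MCS client types to required firewall openings.
# Signaling follows the text above (PC Client and IP Phones: UDP; Web
# client: TCP); the 50000-50100 media range for all client types is an
# assumption based on the Multimedia PC Client example.
CLIENT_PORTS = {
    "Multimedia PC Client": {
        "signaling": [("UDP", 5060)],
        "media": [("UDP", p) for p in range(50000, 50101)],  # RTP/RTCP
    },
    "IP Phone": {
        "signaling": [("UDP", 5060)],
        "media": [("UDP", p) for p in range(50000, 50101)],
    },
    "Multimedia Web Client": {
        "signaling": [("TCP", 5060)],  # Web client signals over TCP
        "media": [("UDP", p) for p in range(50000, 50101)],
    },
}

def ports_to_open(deployed_clients):
    """Union of (protocol, port) pairs required by the deployed clients."""
    needed = set()
    for client in deployed_clients:
        for rules in CLIENT_PORTS[client].values():
            needed.update(rules)
    return needed
```

For a Multimedia PC Client-only deployment, this yields UDP 5060 plus the 101 media ports, matching the example in the text.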

Internal threat security

Multimedia infrastructure components are deployed in enterprise networks with desktop access to data and to Voice over IP (VoIP) and other multimedia Virtual LANs (VLAN). Desktop accessibility increases the vulnerability of these systems to internal threats, such as disgruntled employees, compromised systems, or virus-infected PCs. Internal threats can be more severe than external threats for these critical infrastructure components. To provide adequate service availability, VoIP and other multimedia systems must be protected from internal threats.

The SMC 2450 is a security system that consists of a PC-based hardware platform with SMC software.


As shown in Figure 68 "Secure Media Zone" (page 266), the SMC 2450 creates a Secure Multimedia Zone (SMZ) between the enterprise LAN/WAN and the call servers. The SMZ protects the signaling and media infrastructure of the MCS 5100 product line. All signaling and media traffic entering or leaving the SMZ must pass through the SMC.

Figure 68 Secure Media Zone

For additional information, see Secure Multimedia Controller Implementation Guide (553-3001-225 for CS 1000 Release 4.5; NN43001-325 for CS 1000 Release 5.0).

MCS operating system hardening measures

The majority of attacks target well-known vulnerabilities, and abundant, freely available tools exist to find and exploit them. The sophistication of attacks is increasing, while the effort and skill needed to launch an attack is decreasing.

Nortel has created a team (SATF) dedicated to actively monitor and respond to vulnerabilities as they are discovered.

The MCS components running Linux and Windows implement industry guidelines to minimize operating system vulnerabilities. These security measures are referred to as operating system hardening. Operating system hardening is incorporated into both the initial installation and commissioning process and the upgrade process.


Update to latest patch

This is the most important step of the operating system hardening process. The MCS components are packaged with the latest patches from the operating system vendor. Nortel recommends that MCS customers actively monitor for new security patches after the initial installation and commissioning. New patches must be screened and tested by Nortel to ensure compatibility with the MCS components.

Linux operating system hardening

Implement administrator password policy

To implement an administrator password policy, use the following guidelines:
• The administrator password policy is configured in the file /etc/default/passwd.
• Passwords must be of sufficient length and contain a mix of alphanumeric characters. Linux passwords must be 8 characters in length and must contain at least 2 alphabetic characters and at least one numeric or special character.
• Passwords must expire after a specified duration. Linux passwords are configured to expire after 12 weeks.
• Passwords must age for one week before they can be reset.
• Provide users with a login warning message before the expiry date.

Minimize services running on the servers

The following is a list of actions for minimizing services:
• Disable any unnecessary services.
• Review the two groups of services:
— services started from inetd (/etc/inetd.conf)
— services started from boot files (/etc/rc*.d/)

The Linux services that are disabled include, but are not limited to: name, comstat, talk, uucp, tftp, finger, echo, and rcp.

These services are disabled, but not removed from the system. They can be selectively enabled by the system administrator when required.

Increase logging

To increase logging, use the following guidelines:
• Increase the level of logging to capture details of any malicious activity.
• Configure syslog to capture logs for all priorities.
• Redirect high-priority logs to a central syslog server (the System Manager). Keeping logs on multiple hosts makes it more difficult for an attacker to erase evidence.


• Configure inetd to log all Telnet and FTP access.
• Rotate syslogs at regular intervals.

Note: Because disk space is limited, this is a trade-off: more detailed logs come at the expense of the number of days' worth of logs that can be kept locally.
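The guidelines above might translate into a syslog configuration such as the following. This is an illustrative sketch assuming classic syslogd syntax; the host name sysmgr standing in for the System Manager is a hypothetical example.

```
# /etc/syslog.conf sketch (illustrative)
# Capture all priorities locally for detailed forensics
*.debug         /var/log/messages
# Forward high-priority logs to the central syslog server
# (the System Manager), here reachable as "sysmgr"
*.warning       @sysmgr
```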

Kernel tuning

Linux provides a number of kernel parameters that can be tuned to increase security:
• Disable the executable stack in /etc/system (prevents some buffer overflow exploits).
• Enable random generation of TCP sequence numbers to increase the difficulty of man-in-the-middle attacks.
• Disable IP forwarding: MCS components do not perform IP routing, which prevents unauthorized access to the private subnet.
• Disable IP source routing to prevent spoofing attacks.
• Disable echo broadcast to prevent denial-of-service attacks (drop all ICMP echo packets sent to the broadcast address).
• Disable timestamp request broadcast to prevent denial-of-service attacks.
• Disable redirect errors (used by routers to inform a host to forward packets to a different router).
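On a Linux host, several of the tunings above map to standard net.ipv4 sysctl keys. The fragment below is a sketch, not the MCS-shipped configuration; exact key names can vary by kernel release.

```
# /etc/sysctl.conf sketch (illustrative values)
net.ipv4.ip_forward = 0                        # no IP routing between interfaces
net.ipv4.conf.all.accept_source_route = 0      # drop source-routed packets
net.ipv4.icmp_echo_ignore_broadcasts = 1       # ignore broadcast echo requests
net.ipv4.conf.all.accept_redirects = 0         # ignore ICMP redirect errors
```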

Restrict access

Use the following measures to restrict access:
• Remove all unnecessary accounts. The Linux installation process creates many unnecessary accounts that can be removed. A sample of these accounts includes: adm, lp, uucp, listen, nobody.
• Restrict CRON and AT access. CRON and AT permit users to run automated jobs at scheduled intervals. Only the root account can schedule jobs.
• Restrict root Telnet and FTP access. Root access is permitted only at the system console.
• Restrict SU access. Only the administrator and Oracle user accounts have SU access.

File system permissions

File system options are configured in the file /etc/fstab.


The setuid and setgid options permit a program to execute with the privileges of the program's owner; sudo is one example.
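A common way to contain setuid/setgid risk is to mount user-writable file systems with the nosuid option in /etc/fstab. The line below is an illustrative sketch; the device, mount point, and file system type are hypothetical.

```
# /etc/fstab sketch: disallow setuid/setgid programs on a
# user-writable partition (device and mount point are examples)
/dev/sda3   /home   ext3   defaults,nosuid,nodev   1 2
```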

Default login properties

The default login properties are configured in the file /etc/default/login with the following variables:
• CONSOLE=/dev/console (only root is permitted to log in at the console)
• TIMEOUT=300 (number of seconds to wait before abandoning a login session)
• UMASK=077 (default umask for new accounts)
• SLEEPTIME=4 (number of seconds the login command waits before printing the "login incorrect" message; used to slow brute-force password attacks)
• RETRIES=3 (number of failed login attempts before login exits)
• SYSLOG_FAILED_LOGINS=0 (all failed login attempts are logged)

Windows 2000 operating system hardening measures

The Windows 2000 operating system hardening measures include the following:
• Install Service Pack 4.
• Ensure that no optional components are installed except Terminal Services and SNMP.
• Configure the system as stand-alone.
• Enable security audits.
• Rename administrative accounts.
• Disable guest accounts.
• Disable display of the last logged-on user name.
• Disable NetBIOS and LMHosts lookups on each interface.
• Uninstall Client for Microsoft Networks and File and Printer Sharing on all adapters.
• Disable automatic updates from Microsoft.
• Increase the size of log files.
• Configure Terminal Services.
• Disable remote control.
• Free idle connections.
• Remove default mappings for printer and COM ports.


• Neutralize (require manual startup of) unnecessary services.
• Use a password-protected screen saver.
• Display a login banner.
• Enable packet filtering (permitting only necessary ports).


Guidelines for reliability and survivability

This section contains the following:
• "General guidelines for high availability design" (page 271)
• "Component reliability" (page 271)
• "Network Considerations" (page 281)
• "SIP-PSTN gateway reliability strategy" (page 281)

General guidelines for high availability design

The MCS is a complex system that consists of various components, each performing specific tasks to offer voice, video, and data services. To ensure high availability and reliability, the components of the MCS system must be made redundant according to their criticality to the entire system. This section provides information on the reliability features of the MCS components and recommendations to eliminate single points of failure.

Component reliability The MCS uses the IBM eServer x306m server. These servers are equipped with dual 10/100 Ethernet interfaces.

The following subsections outline reliability as it applies to each MCS component.

Session Manager

The Session Manager is deployed on the IBM eServer x306m server. The following deployment scenarios are available:
• single server
• 4-server
• 8-server

Only the 8-server deployment uses a redundant architecture. The single server and 4-server deployments use a nonredundant architecture.


In a deployment with redundant servers, MCS supports a 2+1 or 1+1 redundancy configuration. In the 2+1 redundancy configuration, there are two active servers and one standby server. Heartbeating is used between the active and standby servers to ensure reliability at the application layer.

Connectivity

The Session Manager server has redundant connections to the MCS network. The following diagram shows the redundant connections.

Figure 69 Network connectivity for Session Manager

Redundant Ethernet links are connected to two separate Layer 2 switches. One IP address is associated with each link. There is also one host-logical IP address assigned to the Sun IP Multipathing entity for the network appearance. Sun IP Multipathing provides failover protection between Ethernet interfaces upon failure of an Ethernet switch or link. The IP Multipathing software monitors the link status and internally routes IP traffic to the active Ethernet link. Upon a link failure, the IP Multipathing software begins routing to the newly active link within a few milliseconds. A gratuitous ARP containing the logical IP address is also sent to the router to inform it of the MAC address change.


Since the IBM eServer x306m server supports only one power supply, Nortel recommends that active and backup servers connect to separate protected power sources. The Layer 2 switches should have redundant paths to the routing network.

Failover

The MCS system supports a 1+1 autofailover mechanism. As the starting point for deployment, a minimum two-server Session Manager configuration is recommended as the base configuration. With redundancy support, there is no impact on existing media flows in the event of server problems, whether caused by the server itself or by the network connections to the server.

Each active Session Manager owns a floating service IP address. These floating service IP addresses should not be confused with the host-logical IP addresses or machine-logical IP addresses. Standby servers do not have floating IP addresses assigned.

In detail, each physical interface on the server has an assigned static IP address. Physical interfaces are paired into a redundant set on a server. Each set contains a shared host-logical address. The host-logical address remains accessible so long as one physical interface is able to provide service. The static addresses are required by and relevant only to IP multipathing and should never be used elsewhere. The floating service addresses for active Session Managers are shared amongst a group of servers hosting the Session Managers.

When a server hosting the Session Manager becomes active, it claims an available service floating address. The active Session Manager servers use their respective service floating address for communication with clients. If a Session Manager or server fails, a standby Session Manager will take ownership of this service floating IP address (and therefore the associated clients).

To support autofailover for the servers of Session Manager, four IP addresses are required for each active IBM eServer x306m server and three for each standby server.

ATTENTION Both active servers and standby servers must reside on the same IP subnet so that the floating service IP address can shift from server to server.

All standby servers hosting Session Managers periodically send heartbeat messages to all active servers. If a standby server detects the failure of an active server, it notifies all servers in the redundancy group of the failure detection, negotiates state with other standby servers, assumes the failed


server’s floating service IP address, activates its service components, and begins to service requests. This activation sequence, the transition of a standby server to the active state, takes between 6 and 12 seconds to complete (depending on heartbeat timer settings).

After a failover, the active session state and accounting data held in RAM on the failed server are lost. Media is not affected, and the client does not notice; however, a mid-call event causes the call to end: the client receives a 481 Transaction Not Found message and ends the call.

A maximum of four active servers and four standby servers are supported in the MCS system.

Isolation

The redundancy software must be provisioned to detect network isolation of any server in the server group. To accomplish this, all servers in a redundancy group periodically exchange Internet Control Message Protocol (ICMP) ping requests with a provisioned set of other IP devices. These IP devices are commonly provisioned to be the gateway and database of each respective host. A server in a redundancy group declares itself isolated if it cannot reach any of its critical resources through ICMP ping. A standby server will not activate while isolated. When an active server becomes isolated, it yields execution to a nonisolated standby server or fails itself. Multiple active servers negotiate which servers should remain active when returning from an isolated state to nonisolation.
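The heartbeat-failure detection and isolation checks described above can be modeled roughly as follows. This is a simplified sketch, not the MCS redundancy software; the class name, timer values, and reachability callback are illustrative assumptions.

```python
import time

class StandbyMonitor:
    """Simplified model of a standby server watching an active peer.

    interval * missed_limit approximates the detection window that
    precedes the 6- to 12-second activation sequence described above
    (the values here are illustrative, not MCS defaults).
    """
    def __init__(self, interval=3.0, missed_limit=3,
                 critical_hosts=None, ping=None):
        self.interval = interval
        self.missed_limit = missed_limit
        self.critical_hosts = critical_hosts or []   # e.g. gateway, database
        self.ping = ping or (lambda host: True)      # stand-in for ICMP ping
        self.last_heartbeat = time.monotonic()

    def on_heartbeat(self):
        """Record a heartbeat received from the active peer."""
        self.last_heartbeat = time.monotonic()

    def active_peer_failed(self, now=None):
        """True once too many heartbeat intervals have elapsed."""
        now = time.monotonic() if now is None else now
        return (now - self.last_heartbeat) > self.interval * self.missed_limit

    def isolated(self):
        """A server is isolated if none of its critical resources answer."""
        return not any(self.ping(h) for h in self.critical_hosts)

    def should_activate(self, now=None):
        # A standby claims the floating service IP address only when the
        # active peer has failed AND the standby itself is not isolated.
        return self.active_peer_failed(now) and not self.isolated()
```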

For more information, see Session Manager Fundamentals (NN42020-107).

System Manager

Redundancy and failover

In a redundant configuration, the System Manager is installed on two IBM eServer x306m servers. The primary System Manager is active on the primary System Manager/secondary Accounting Manager server. The System Manager supports automatic failover.

LAN redundancy is provided by connecting two Ethernet links from the Ethernet interfaces to separate QoS-policy-enabled Layer 2 switches. This enables failover protection on failure of a switch or of a link to a switch. Three IP addresses are required for the System Manager server.

Accounting Manager

Redundancy and failover

In a redundant configuration, the Accounting Manager is installed on two IBM eServer x306m servers. The primary Accounting Manager is active on the primary Accounting Manager/secondary System Manager server.


If the primary Accounting Manager server fails, manual intervention is required to start the secondary Accounting Manager running on the primary System Manager/secondary Accounting Manager server. When the failed server recovers, the primary Accounting Manager must be manually started to take over as the active Accounting Manager.

The LAN redundancy is provided by connecting two Ethernet links from the Ethernet interfaces to separate QoS-policy-enabled Layer 2 switches. This enables failover protection on failure of a switch or a link to the switch. The server hosting the Accounting Manager requires four IP addresses.

Database Manager

The Database Manager is installed on two IBM eServer x306m servers in a redundant configuration. The Database Manager uses a replicated database subsystem. Each database server is installed with an instance of the Oracle database and the Database Manager.

Connectivity

There are two Ethernet connections to each server. These redundant links are connected to two separate QoS-policy-enabled Layer 2 switches. The two links share an IP address.

The database is fully protected by using internally mirrored disk drives for physical data store and using Master-Master replication of domain, subscriber, service, routing provisioning, and system configuration data.

Failover

Failover handling in the database is achieved through Nortel-developed Transparent Application Failover (TAF). TAF is a process by which client applications switch over to a failover database when the main database fails. To be TAF-aware, all MCS applications must connect to the active master through JDBC connections. Because the active and backup databases hold consistent data, the MCS applications continue to operate normally using the backup database in the event of an active database failure. After becoming active, the backup database server supports only read operations from all MCS service components except the MCS Registrar Service component; for example, updates through the Personal Agent are unavailable.

The database failover time is approximately 2 seconds.

Provisioning Manager

The Provisioning Manager supports a manual failover mechanism.


Border Control Point

The IBM BladeCenter T or HS20 blade is used as the physical platform for the Border Control Point.

Redundancy

A Border Control Point platform consists of two half-shelves managed by two independent host controllers. Each host controller manages a half-shelf of up to six media traffic processing blades. Each blade has one Ethernet connection to the MCS private network and a second Ethernet connection to the managed public IP network.

If one of the host controllers fails, only the media blades managed by the failed controller go out of service. The blades located in the other half shelf under the control of the other host controller continue to operate normally.

To ensure that the engineered service capacity is not degraded by a single host outage, the system can be provisioned on an N + M basis. The Border Control Point resources are pooled. Should a failure occur on a host controller, that half-shelf is removed from the pool.

Connectivity

Each host and media blade has two Ethernet connections to the IP network. The two network interfaces of the host blade (eth0 and eth1) are grouped together as a redundant pair. With the MCS public-only deployment model, only one of the two interfaces for the media blade is used to connect the media blade to the network. Only one IP address is required for a host or a media blade.

The NET1 and NET2 labels associated with the interfaces on each media blade are displayed at the System Management Console. An unused (unassigned) media blade IP address is left blank to indicate to the Border Control Point that it should not be brought into service. By default, an IP address of 0.0.0.0 is associated with both interfaces to indicate that the Border Control Point is not configured. In the public-only deployment model, only the NET2 address is assigned.

To prevent the loss of a host blade when one of the switches or one of the Ethernet links fails, each Ethernet port must be connected to a different switch of a switch stack, forming one logical switch using the cascade cable connections. See Figure 70 "Border Control Point network connectivity" (page 277) for an illustration.


Figure 70 Border Control Point network connectivity

For chassis-based switches, these links should be connected to a separate switch module in the same switch if only one switch is available or preferably to a separate switch module in two different switches.

To prevent the complete loss of all media blades when a switch fails, the media blades should alternate their connections to a different switch.

Media blades on the Border Control Point must connect their NET2 network interfaces to the same local LAN segment or VLAN as the host blade. The restriction is due to the media blade's use of a 192.168.x.x address to communicate with the host blade for the initial image download and subsequent communications. The 192.168.x.x address is not publicly routable and is in addition to the publicly addressable IP address associated with each blade. When all media and host blades are connected to the same local LAN segment or VLAN, direct communication between the media blades and the host blade requires no router for packet forwarding.


Session survivability

The Border Control Point enables existing media streams on a media blade to survive a host card failure and subsequent recovery. The host blade queries each of its media blades for the status of active sessions to verify the number of existing calls and the available capacity. Upon successful recovery, the host blade resumes control over any existing calls and uses unused capacity to process new calls without destroying existing sessions.

Media Application Server

Information about the survivability and reliability of the Media Application Server is available in Media Application Server Planning and Engineering (NN42020-201).

Media Gateway 3200

Connectivity

The Media Gateway 3200 provides dual Ethernet connections. If one connection fails, the other connection is used.

The Media Gateway 3200 provides dual power supplies. If one of the power supplies fails, the other power supply will take over.

Failover

The Media Gateway 3200 can be provisioned into a trunk group that spans multiple Media Gateway 3200 units. If a Media Gateway 3200 fails, the routing function of the Session Manager can use another Media Gateway 3200 for call processing.

IP Client Manager

The IP Client Manager is deployed on IBM eServer x306m servers. Automatic failover of the IPCM is performed by the IP Phones. If connectivity with one IPCM is lost, the IP Phones switch over to the second IPCM with which they are configured.

Connectivity

There are two Ethernet connections to each server. These redundant links are connected to two separate QoS-policy-enabled Layer 2 switches. The two links share an IP address.

Since the IBM eServer x306m server supports only one power supply, it is good practice to connect the active and backup servers to separate power sources.

Redundancy and failover

Figure 71 "IPCM redundancy and failover example" (page 279) illustrates the IP Client Manager redundancy and failover mechanism. In this illustration, the IP Client Managers are homed to the same database.


Figure 71 IPCM redundancy and failover example

The IP Phones are configured manually with one primary IP Client Manager (S1) and a secondary IP Client Manager (S2). Upon a loss of service from a primary IP Client Manager, the secondary IP Client Manager assumes the responsibilities of managing the IP Phones and subscribers of the failed IP Client Manager and begins processing SIP and UNIStim messages.

Figure 71 "IPCM redundancy and failover example" (page 279) shows an example of a failover scenario in which the secondary IP Client Manager (IPCM B) is now servicing the IP Phones. The IP Phones were previously controlled by IPCM A, which experienced a loss of service detected by the keep-alive mechanism. If a call is in progress when the primary IP Client Manager goes down, the call remains active until a mid-call event occurs, such as a key press by the user.

Additional redundancy for control of the IP Phones is provided by the Multimedia PC Client. The Multimedia PC Client can control the IP Phones if the Use the IPPhone2002 (IPPhone2004) telephone for voice instead of PC check box is selected in the Multimedia PC Client preferences. The IP Phones are manually configured with the IP address and port of their primary and secondary controllers during device setup. The primary controller should contain the IP address of the primary IP Client Manager, while the secondary controller should contain the IP address of the standby IP Client Manager. The IP Phones also have a virtual third controller IP address that is not visible to the user. The Multimedia PC Client writes its IP address as this virtual third controller, so that the IP Phones can switch between three controllers: two IP Client Managers and one Multimedia PC Client.

Manual and automatic switching of the controlling IP Client Manager or Multimedia PC Client is available on the IP Phones. The automatic switch can be configured in the Multimedia PC Client and can also be based on the phone's Keep-Alive timer. The Multimedia PC Client uses the Keep-Alive timer the same way the IP Client Manager does, to confirm the connection between itself and the IP Phones. The Keep-Alive timer for the Multimedia PC Client is hard-coded at 30 seconds, and the Reset keep-alive message is sent every 10 seconds.

Failure of an IP Client Manager causes the IP Phones to reconnect to the secondary IP Client Manager. The client also reregisters with the Session Manager and renews its service package, address book, and presence subscriptions. Failure of an IP Client Manager can therefore create a registration and subscription storm on the Session Managers. To decrease the impact of these registration storms, IP Phones should alternate which IP Client Manager is configured as S1 and which is configured as S2. Once reconnected to an IP Client Manager, the IP Phones can make outbound calls. Calls cannot be received by the IP Phones until they are successfully registered with the Session Manager.
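The alternating S1/S2 assignment described above can be sketched as follows (an illustrative Python sketch; the phone and IPCM names are hypothetical):

```python
def assign_controllers(phones, ipcm_a, ipcm_b):
    """Alternate which IPCM each phone uses as its primary (S1)
    controller, so that the failure of a single IPCM triggers a
    re-registration storm for only about half of the phones."""
    plan = {}
    for i, phone in enumerate(phones):
        primary, secondary = (ipcm_a, ipcm_b) if i % 2 == 0 else (ipcm_b, ipcm_a)
        plan[phone] = {"S1": primary, "S2": secondary}
    return plan

plan = assign_controllers(["phone1", "phone2", "phone3", "phone4"],
                          "ipcm-a", "ipcm-b")
```

With this plan, a failure of either IPCM forces only half of the phones to fail over and reregister at once.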

IP Phone reliability
When the IP Phones initialize, they send a UNIStim message to the IPCM. As many as 10 handshake messages travel between the IP Phones and the IPCM. After this handshake sequence is complete, the IPCM sends a REGISTER message to the Session Manager for each user attached to that device. The Session Manager replies with a 200 OK (registration successful) message. The IPCM then sends a UNIStim message that instructs the IP Phone to display the phone icon and the user name, indicating successful registration.

When a user manually logs into the phone, the user must press the login key and provide login information. The same process described above follows when the manual login completes.

After successful registration, the IPCM begins to send keep-alive messages to prompt a response from the IP Phones, which resets the NAT/NAPT timers to keep the signaling path open.

The IPCM or Multimedia PC Client periodically sends Keep-Alive messages to the IP Phones. The IP Phones maintain a Keep-Alive phone timer for these messages; receipt of a Keep-Alive message resets the timer. The interval between Keep-Alive messages sent from the IPCM to the IP Phones is a configurable parameter, configured during deployment in the Keep-Alive timer field on the IP Client Manager tab of the System Management Console. The Keep-Alive phone timer is also configurable at IPCM setup, using the Keep-Alive Phone Timer field on the same tab.

The Keep-Alive Timer has dual functionality: it enables the IP Phone to switch controllers when its connection with the IPCM is lost, and it keeps the connection through the firewall/NAPT open if one exists. If the Keep-Alive phone timer expires before a new Keep-Alive message is received from the IPCM, the IP Phone assumes that the connection with the primary IPCM is lost. The IP Phone automatically attempts to reconnect, first to its existing controller and, if that controller is unreachable, to its secondary controller. The IP Phone attempts to reconnect X times, where X is manually configured on the IP Phone (along with the primary and secondary controller IP addresses). If X is configured to 1, the IP Phone automatically looks for the secondary controller when the connection to the primary controller is lost.
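The reconnection behavior can be illustrated with a minimal sketch (Python; the controller names and the shape of the retry list are assumptions for illustration):

```python
def reconnect_targets(current, secondary, retries_x):
    """After the Keep-Alive phone timer expires, the phone retries its
    existing controller up to X times and then falls back to the
    secondary controller."""
    return [current] * retries_x + [secondary]

# With X = 1, a single failed attempt to the primary controller is
# followed immediately by the secondary controller.
targets = reconnect_targets("ipcm-primary", "ipcm-secondary", 1)
```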

For more information about the IPCM failure scenario, see "Redundancy" (page 64).

Network considerations
The following are good practices in network design:
• Use redundant components, such as switches and routers, for connections to core servers and gateways.
• Use redundant connections to the network.
• Eliminate single points of failure in the network design.
• Use redundant WAN links.
• Use redundant power supplies.
• Duplicate connections to devices that support only one power source, and feed the secondary device from a secondary power source or breaker.
• Keep critical spare components on site.

SIP-PSTN gateway reliability strategy
Reliability is dependent on the SIP gateway capabilities. It is good practice to configure redundant gateways for each TDM access point. Primary and alternate trunk groups can be configured across the SIP gateways.


Figure 72 SIP-PSTN gateway redundancy


Domain considerations

This section introduces the concept of subdomains and route restrictions to MCS routing. This section also explains how the MCS network supports domain-based routing and how it interworks with traditional circuit-switched network routing. This section contains the following:
• "Domain and subdomain" (page 283)
• "MCS telephony routes" (page 285)
• "Class of Services" (page 286)
• "Enhanced 911 services" (page 287)

Domain and subdomain
The MCS system uses domains to manage users, services, devices, and translation. The Provisioning Client is used to provision a domain. There are three types of domains: root domain, subdomain, and foreign domain.

A root domain is the top-level domain; an example is nortel.com. A service provider can create a root domain for each customer. Using subdomains, subscribers and devices of the top domain can be divided into smaller groups. Subscribers within nortel.com are assigned to subdomains, but can be reached by using just the top domain. See Figure 73 "Domain and subdomain" (page 284) for an illustration.

Domains and subdomains can be used to control routing and access to services. Further levels of subdomains can be created under a subdomain; the MCS system places no limit on the number of levels. Each subdomain inherits the parameter values of its parent domain. To be more compatible with the supporting IP networks, Nortel recommends that subdomains be configured based on geographic locations.

A foreign domain is a domain that is not served by any Session Manager of the current MCS system. Usually, a foreign domain is resolved through DNS and does not need to be provisioned. However, if a DNS server is not available, routes for the foreign domain must be datafilled.


Figure 73 Domain and subdomain

There are implications and limitations when a subscriber registered in one domain is moved to another domain. These arise from differences in the provisioning of the IP Client Manager, E911 locations, voice mail servers, Session Managers, and service packages at each domain. The implications and limitations are as follows:
• If the voice mail server provisioned for the subscriber is not supported in the new domain, the subscriber's voice mail server is configured to None Selected. Voice mail messages corresponding to the voice mailbox are cleared. The administrator must reprovision the voice mail service for the subscriber and notify the subscriber of the new configuration.
• The subscriber's service package is configured to the default service package of the new domain.
• The subscriber's COS remains the same, except in the following scenario: if a subscriber is moved between two different root domains, or between subdomains whose root domains are different, the subscriber's COS is configured to None Selected.
• The subscriber's location remains the same, except in the following scenario: if a subscriber is moved between two different root domains, or between subdomains whose root domains are different, the subscriber's location is changed to the new domain's default location.


• The subscriber's Meet Me audio conferencing service properties are lost.
• If a subscriber moves from one domain to another, changes the personal photo after the move, and moves back to the previous domain within seven days, the subscriber must upload the new personal photo again. Otherwise, the old personal photo in the previous domain is used.
• When a registered subscriber moves between two different domains, the Move User functionality provided by the Session Manager does not force the subscriber to reregister with a new fully qualified user name in the new domain. When the subscriber tries to register after the move, the registration is rejected. The rejection means that the subscriber cannot make calls with the new user name in the new root domain. If the move is between subdomains, this rejection is not a problem: the Session Manager reregisters the subscriber to the new subdomain as long as the registration refers to the subscriber at the top-level domain name.
• After a change of domain, the status reason for the subscriber is nulled out. The subscriber's status is configured to the base status against which the status reason is provisioned. For example, a subscriber with the status reason INACTIVE: Blocked is moved with the status INACTIVE.

If a subscriber has an alias that already exists in the new domain, the subscriber cannot be moved to the new domain. This is only an issue when a subscriber moves from one root domain to another root domain.

MCS telephony routes The MCS system supports three types of routes for telephony-style dialing: private route, SIP route, and gateway route.

Private routes are used to reach subscribers in a private network. Private routing is widely used in enterprise PBX environments because it supports abbreviated telephony-style dial plans. By supporting private routes, MCS enables a subscriber to dial another subscriber's extension number rather than using that subscriber's URI. For example, all MCS subscribers in dallas.tx.us.abc.com are assigned aliases of the form 972-68x-yyyy. The administrator can configure private routing for the subdomain so that subscribers can call each other by dialing 5-yyyy.
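A private-route digit map for this example might behave as in the following sketch (Python; the digit strings, including the 972-685 expansion, are illustrative assumptions):

```python
def expand_private_dialing(dialed):
    """Expand an abbreviated 5-yyyy extension to the full ten-digit
    972-685-yyyy alias; leave any other digit string unchanged."""
    if len(dialed) == 5 and dialed.startswith("5"):
        return "97268" + dialed  # "5yyyy" -> "972685yyyy"
    return dialed
```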

SIP routing emulates internal routing from one domain to another without necessarily forwarding the request to another server. For instance, suppose there are two top-level domains, A and B. SIP routes can be used to quickly verify whether a number dialed by a user in domain A is the alias of a user in domain B. SIP routes are of limited applicability in most traditional dialing plans because they do not cause any telephony routing to occur in the target domain; SIP routes perform only one-time lookups.


Gateway routes provide a mechanism to select the correct point and facility to use when a call terminates outside the SIP network, such as routing a call from a SIP client out to a PSTN gateway. For example, gateway routing can translate the URI [email protected] to [email protected];norteldevice=pri;norteltrkgrp=cs_tx_trunk;user=phone;maddr=23.23.23.23
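The shape of such a gateway-route URI can be sketched as follows (Python; the helper name and the sample user, domain, and address values are hypothetical, and the parameter list is not exhaustive):

```python
def gateway_route_uri(user, domain, device, trkgrp, maddr):
    """Assemble a gateway-route URI carrying the Nortel device, trunk
    group, and media address parameters shown in the example above."""
    return (f"sip:{user}@{domain};norteldevice={device};"
            f"norteltrkgrp={trkgrp};user=phone;maddr={maddr}")

uri = gateway_route_uri("9725550100", "example.com",
                        "pri", "cs_tx_trunk", "23.23.23.23")
```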

MCS telephony routing provides the means of processing a request URI to locate a subscriber, a domain, or a gateway to another network. The digit translation and routing rules can be configured for any MCS domain and subdomain to support the local dialing requirements.

Class of Services
Access to a translation table implies permission to access the resources reachable through the table. Class of Service (COS) provides a mechanism for screening and restricting users from accessing certain telephony routes provisioned for a domain or a subdomain. COS comes into play only when a request URI contains a telephony digit stream; if a call can be completed without translating digits, no COS restrictions are applied to that call.

The MCS system enables administrators to assign a single COS value to a route list, a domain, a subdomain, or a subscriber. A route list enables administrators to combine one or more routes through multiple (or the same) gateways that are used for the same purpose. For instance, if a customer needs the ability to send local calls out through multiple gateways due to a high volume of such calls, multiple routes can be combined in a route list to provide this functionality.

Before the MCS system starts the process of translating digits, it first checks the COS that the calling party will use, and then determines the set of translation and routing rules (or dial plan) to be used for processing the call. The MCS system uses the following rules to determine the class of service and dial plan for a particular call:
• When calling within the same domain, if the originating party is a subscriber of the domain, the COS of the originator and the dial plan of the originator's domain are used for processing the call.
• When calling within the same domain, if the originating party is not a subscriber within that domain, such as a caller behind a PBX, the default COS and the dial plan of the domain are used for processing the call.
• When calling across different domains, the default COS and dial plan of the terminating domain are used for processing the call.
• If the request has been redirected, the redirecting party is considered to be the originator of the redirected call.


• If the call originates from a Communication Server 1000, the domain matching the Communication Server 1000 profile is used as the originator's domain.
• If the call carries additional network-asserted information about the originator's true identity through P-Asserted-ID or Remote-Party-ID, the network-asserted information is used as the originator's identity.

After determining the dial plan, the MCS system compares the COS of each route list with the calling party's COS. If the COS of the route list exceeds the calling party's COS, that route list is excluded from the translation process. Therefore, the calling party is not permitted to access the resources reachable through that route list.
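This screening step can be summarized in a short sketch (Python; the route-list names and COS values are illustrative):

```python
def screen_route_lists(caller_cos, route_lists):
    """Exclude any route list whose COS exceeds the calling party's
    COS; only the remaining lists take part in translation."""
    return [name for name, cos in route_lists if cos <= caller_cos]

# A caller with COS 3 can use the operator and local route lists,
# but not the international one.
allowed = screen_route_lists(3, [("operator", 0), ("local", 2), ("intl", 7)])
```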

By properly provisioning COS against subscribers, subdomains, domains, and route lists, the administrator can restrict a subscriber in a subdomain from accessing routes in the parent domain and in other subdomains. As a result, the administrator can customize the manner in which a call can be routed. For example, a lobby phone can have a very low COS, so that the only available route leads to an operator. On the other hand, the owner of a business can have a very high COS and be able to make expensive international calls.

Enhanced 911 services
Overview
For basic 911 emergency service in North America, a 911 call is routed to a Public Safety Answering Point (PSAP), which is not necessarily a local one. The PSAP or 911 operator is responsible for talking to the caller and arranging the appropriate emergency response, such as sending police, fire, or ambulance teams.

Unlike the basic 911 service, the Enhanced 911 (E911) service must route emergency calls to a local PSAP based on the location of the caller and display the caller's location information at the emergency operator's terminal. The telephone switch is provisioned to route calls to a local PSAP, which uses the caller's number to query an Automatic Location Information (ALI) database for location information. The ALI database provides all of the geographic information that PSAPs use for E911 calls.

Figure 74 "E911 PSTN call processing" (page 288) shows E911 call routing over a circuit-switched network. When the subscriber dials 911, the call coming into the end office is sent to the E911 tandem. The tandem switch then routes the call to the appropriate PSAP based on the caller's number. Once the PSAP receives the call with the Automatic Number Identification (ANI), it queries the ALI database based on the ANI.


Figure 74 E911 PSTN call processing

When an MCS subscriber dials 911, the call is sent to a switch through a PRI gateway. MCS uses the Location Infrastructure concept to provision location trees for a domain. The E911 service uses the provisioned locations to define Emergency Response Locations (ERLs) and the location information of subscribers to route 911 calls.


Figure 75 Domain ERL location tree example

Not all locations defined for a domain’s location trees are ERLs. To support E911, one or more locations of a location tree are provisioned as ERLs for connections to the current PSAP locations. An ERL does not need to be a leaf location on a location tree, but there can only be one ERL provisioned for each branch of a tree. That means an ERL cannot have another ERL as its child.
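The rule that an ERL cannot have another ERL as an ancestor can be checked as in this sketch (Python; the location-tree representation and location names are assumptions):

```python
def valid_erl_placement(parents, erls):
    """Return True if no provisioned ERL has another ERL anywhere
    above it in the location tree. `parents` maps each location to
    its parent location (None at the root of the tree)."""
    for erl in erls:
        node = parents.get(erl)
        while node is not None:
            if node in erls:
                return False  # an ERL is a descendant of another ERL
            node = parents.get(node)
    return True

tree = {"campus": None, "building1": "campus", "floor2": "building1"}
```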

ERLs are defined for each domain. Based on the PSAP locations accessible to the domain, the domain administrators can determine the ERLs on the domain trees. When all subscribers of a domain belong to the same PSAP location, the domain administrator can simply provision an ERL for the Other location without having to provision multiple locations on the domain location trees.

For an enterprise configuration, each ERL is assigned at least one 10-digit Direct Inward Dialing (DID) number that the PSAP can call back to reach an emergency caller. ERLs provisioned for a Residential configuration are instead defined so that the subscriber's Public Charge ID is sent to the PSAP during an E911 call. Each ERL must have a minimum of one ANI, but there is no maximum number of ANIs that can be assigned to an ERL.

The ANI is assigned to a caller for a length of time configured in the Provisioning Client, after which calls made to the ANI no longer map to the caller. If more than one ANI is assigned to a given ERL, the least recently used ANI is chosen for the current call. If there are more concurrent emergency callers than ANIs provisioned for the ERL, the oldest assigned ANI is given to the most recent emergency caller. For example, if two ANIs are provisioned for an ERL and three users make emergency calls, the ANI assigned to the first emergency caller is given to the third emergency caller. The PSAP can no longer reach the first emergency caller after the third caller makes a call. To avoid this, an adequate number of ANIs must be provisioned for an ERL's service area.
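The ANI reuse behavior in the example can be sketched as follows (Python; the class name and ANI values are hypothetical):

```python
from collections import OrderedDict

class ErlAniPool:
    """Hand out ERL ANIs to emergency callers, always choosing the
    least recently used ANI. When concurrent callers outnumber the
    provisioned ANIs, the oldest assignment is overwritten and the
    PSAP can no longer call that earlier caller back."""
    def __init__(self, anis):
        # ani -> caller, ordered oldest-assignment-first
        self.assignments = OrderedDict((ani, None) for ani in anis)

    def assign(self, caller):
        ani = next(iter(self.assignments))   # least recently used ANI
        del self.assignments[ani]
        self.assignments[ani] = caller       # re-insert as most recent
        return ani

pool = ErlAniPool(["9725550001", "9725550002"])
first = pool.assign("caller1")
second = pool.assign("caller2")
third = pool.assign("caller3")   # reuses caller1's ANI
```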

Once all ERLs are provisioned, the E911 administrator can obtain a printable ERL list by clicking the Get Printable ERL List button on the ERLs provisioning page. The printed ERL list must be delivered by postal mail or hand delivery to update the ALI database. This procedure must be done in accordance with local emergency legislation. E911 planners must be familiar with local and national E911 requirements and regulations before implementing E911 services for the MCS system.

To associate the provisioned ERL locations with subscribers and clients, the MCS core clients, including the Multimedia PC Client, Multimedia Web Client, and IP Phones, are presented with a list of the ERLs so that the subscriber can select an appropriate ERL based on the current physical location of the client. For the IP Phones, the user-selected location is stored in the Database Manager against the device. For the Multimedia PC and Web Clients, the selected location is stored on the machine where the soft client is running. Other MCS entities, such as gateways, Media Application Servers, older MCS core clients, and third-party clients, have their location ID derived from the provisioned Location Infrastructure using the Location Precedence Rules described in "Location infrastructure" (page 90).

The MCS core clients send the selected location ID in the x-nt-location header of the INVITE message. The Session Manager queries the Database Manager for the gateway route, OSN, and ANI information using the location ID. Retrieving the gateway route from the Database Manager based on the location ID ensures that the emergency call is routed to the PSAP tied to the location sent in the INVITE message.

On-site notification is required for enterprise sites larger than 40 000 square feet that do not have an address associated with the user’s device or location from which the user is calling the emergency number. When a user calls an emergency number, an instant message is sent to a preconfigured SIP address to notify the proper personnel, typically a security guard or administrative assistant, of the emergency so that this person can direct emergency teams to the correct location for faster response.


Emergency number provisioning
Emergency numbers differ from country to country. For example, 911 is used in the United States and Canada, while 112 is used in most of Europe. MCS enables emergency numbers to be provisioned to meet local requirements. Emergency numbers are provisioned for the entire system, so they do not need to be provisioned for each domain.

MCS uses the starts-with algorithm to match the received URI against a list of provisioned emergency numbers to determine if the call is an emergency call. For example, the number 91111 will match the provisioned 911 emergency number. The Session Manager removes any digits or characters dialed after 911 and sends only 911 to the ERL gateway.

When adding a new user or a new emergency number, the MCS system also uses the starts-with algorithm to ensure that a new username or alias does not match any existing emergency number, and that a new emergency number does not match any existing username or alias. An error message is displayed if a match is found.

Finally, emergency aliases can be assigned to emergency numbers by clicking the Assign Emergency Alias link, then clicking the Details link of the emergency number on the Emergency Alias Management screen to add aliases. Assigning emergency aliases is useful for enterprise users who dial a prefix to get an outside line before making a call. For example, 9911 or 6911 can be defined as aliases of the emergency number 911. The MCS system resolves these emergency aliases to the actual emergency number before sending the call to the gateway. If a user dials the emergency alias 9911, the MCS system resolves it to 911 and sends the 911 call to the gateway provisioned against the ERL selected by the user. Emergency aliases also use starts-with matching, so that dialing 9911111 matches the 9911 alias. Because the emergency numbers and their associated aliases are provisioned by the system administrator at the system level, the system administrator must work with the domain administrators to define these aliases.
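The starts-with matching and alias resolution described above can be combined in one sketch (Python; the alias table mirrors the examples in the text and is illustrative):

```python
EMERGENCY_NUMBERS = ["911"]
EMERGENCY_ALIASES = {"9911": "911", "6911": "911"}

def resolve_emergency(dialed):
    """Return the real emergency number for a dialed digit string,
    using starts-with matching so that trailing digits are ignored,
    or None when the string is not an emergency call."""
    for alias, number in EMERGENCY_ALIASES.items():
        if dialed.startswith(alias):
            return number
    for number in EMERGENCY_NUMBERS:
        if dialed.startswith(number):
            return number
    return None
```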

E911 translation and routing
MCS does not require the administrator to provision E911 translation and routing rules for each domain or subdomain. Normal telephony translation to obtain the gateway routes is not performed for emergency calls when ERLs are properly defined, because the gateway routes are provisioned directly in the ERLs. However, when ERLs and emergency numbers are not provisioned, MCS depends on domain and subdomain translation and routing to forward E911 calls.


The Emergency Application service handles emergency calls. When the Session Manager receives an emergency call, it passes the originator location ID to the Emergency Application. Using the location ID, the Emergency Application queries the Database Manager to obtain the ANI, OSN, and gateway route information. An ANI is temporarily assigned to the enterprise emergency caller for two hours. The Emergency Application returns the information to the Session Manager. Before forwarding the emergency call INVITE message to the provisioned ERL gateway, the Session Manager modifies the INVITE message by adding the mapped ERL ANI in the P-Asserted-Identity and Remote-Party-ID headers. Residential subscribers use the Public Charge ID rather than the mapped ANIs provisioned on the ERL.

For the M1, a special phone-context=emergency parameter is added to the Request URI. At the M1, the number qualifier emergency should also be mapped to a national/ISDN number. If this mapping is missing, the call can be rejected at the gateway, and the Session Manager will not be able to complete the emergency call. Other MCS gateways, such as the CS 2000 and Media Gateway 3200, do not support this parameter and therefore do not require it. The following are examples of INVITE messages without and with the phone-context parameter:

INVITE sip:911@nortel.com;gw;trusted;user=phone;maddr=47.104.11.170;norteltrkgrp=ACLoop1 SIP/2.0

INVITE sip:911;phone-context=emergency@nortel.com;gw;trusted;user=phone;maddr=47.104.26.70;norteltrkgrp=M1_trkgrp SIP/2.0

When an OSN address is provisioned in the ERL, an instant message is sent to on-site personnel to inform them that a user on the premises has placed an emergency call. The instant message is meant as a notification only. Unlike an ordinary instant message, the receiver of the message cannot reply to the message or click the callback button to call the originator, because the FROM header contains the emergency number (911), not the caller's SIP address.

The above description covers the situation where gateway routes, ERLs, and emergency numbers are all properly provisioned. However, it is possible that the Emergency Application cannot find an ERL, because no ERL is provisioned for the entire system or along the tree branch associated with the originator's location and the domain's default Other location. When this occurs, the Emergency Application increments the EmergencyCallFailure OM, which tracks the total number of times that an emergency call may have failed, and handles the call gracefully so that it is translated as a normal call through the telephony routes provisioned on the domain or subdomain. Although the MCS system can forward emergency calls using telephony-style translation and routing, this is not optimal for the following reasons:
1. The emergency call is no longer processed with priority, and calls may fail when the system is overloaded.
2. The originator's ANI is not delivered to the PSAP. As a result, the emergency callback may fail.
3. The originator's INVITE message may be subject to authentication. Authentication is normally bypassed when ERLs are used to route the call.


Interworking with third-party components

This section contains the following:
• "Gateways" (page 295)
• "Voice mail servers" (page 298)

Gateways
Vegastream
Figure 76 "Vegastream Vega-100 gateway" (page 295) shows how an MCS system can be configured to interwork with the Vegastream Vega-100 gateway.

Figure 76 Vegastream Vega-100 gateway

The Vegastream Vega-100 gateway is a SIP-to-PRI gateway that can support 23/30 B channels (1 T1/E1) or 46/60 B channels (2 T1/E1s). The MCS system interacts with the Vega-100 gateway using SIP.


The MCS interworking with the Vega-100 gateway provides compatibility and interoperability between the Vega-100 gateway and MCS gateways, servers, and clients. The following is a high-level view of the technical specifications of the Vega-100 gateway:
• LAN interfaces: 10BaseT, 100BaseTX
• PSTN interface: T1, E1; NI1/NI2/5ESS/DMS 100
• Physical dimensions:
— 440 mm (17.4") x 66 mm (2.6") x 310 mm (12.2") width/height/depth
— industrial rack mount: 483 mm (19")
— weight: 6 kg
• Vocoders: G.711 (A-law, mu-law), G.729a, G.729b
• Network management: HTTP Web server with integrated user guide; SNMP MIB 1 and 2; warm/cold boot, link up/down, and user authentication traps supported; IF-MIB (ISDN); VT100 (RS232/Telnet)
• Power: 100-240 V AC; 47-63 Hz; 1 A-0.5 A
• Environmental: 0-40°C, 0-90% humidity (noncondensing)

Mediatrix FXO and FXS Gateways
The Mediatrix 1204 (APAIII-4FXO) is a telecommunication device that provides an analog interface to a PBX or Central Office. It is identified as the SIP FXO Gateway in the MCS system. The Mediatrix 1104 (4-port FXS) is a telecommunication device that provides an analog interface to RJ11-based phones or faxes. It is identified as the SIP FXS Gateway in the MCS system. The SIP FXO and FXS Gateways can be configured with MCS as shown in the following diagram.


Figure 77 Mediatrix FXS and FXO gateways

The SIP FXO Gateway can be installed in an office space or in wiring closets, wherever existing wiring is terminated. The 1204 is also supported for providing a small-scale PSTN interface for enterprise branch offices. MCS interworking with the Mediatrix gateway covers the compatibility and interoperability of devices that are used on the IP network. In addition, the following features are intended for interoperability:
• Basic calls
• Hold and retrieve for each calling and called port
• Call Forward-Unconditional
• Call Forward-No Answer
• Call Forward-Busy
• Call Transfer (REFER method)
• Call Waiting
• Caller ID
• Mid-call CODEC change (initiated by the remote party)



The SIP FXS Gateway supports the Nortel SIP PING function for the NAT/firewall signaling keep-alive mechanism. The Mediatrix FXS gateway can be used as customer premises equipment (CPE).

Note: At the time of writing, the FXO does not support the Nortel SIP Ping mechanism.

Voice mail servers
There are two forms of voice mail interoperability available from MCS: one for legacy voice mail systems, through a combination of Channel Associated Signaling (CAS) and Station Message Desk Indicator (SMDI), and another for SIP-enabled voice mail systems.

CAS interoperability
The elements involved in the interoperability with the voice mail systems include the Media Gateway 3200, an MRV inReach terminal server, the Session Manager, MCS 5100 clients, and the third-party voice mail system. See Figure 78 "CAS-based voice mail system" (page 298) for an illustration.

Figure 78 CAS-based voice mail system

A unique trunk group is allocated from the Media Gateway 3200 to the voice mail system.

An IP address must be assigned to the terminal server. The terminal server is connected to the voice mail system with a serial cable and bridges the SMDI signaling between the voice mail system and the Session Manager(s).



The Session Manager server is provisioned for the terminal server. The Session Manager and the voice mail system are preferably colocated at the same site.

In the Database Manager, end users are provisioned with mailbox IDs. The mailbox ID is the user’s 10-digit DN alias that is a routable number.

A universal CPL script enables routing to the voice mail system. Upon a message deposit, the terminal server passes the SMDI information received over the RS232 link across the IP network to the Session Manager. The Session Manager then sends the clients the notification that turns on the message waiting indicator (MWI) light.




OAMP

This section contains the following:
• "Management system components" (page 301)
• "Fault management" (page 304)
• "Configuration management" (page 312)
• "Provisioning management" (page 313)
• "Accounting management" (page 314)
• "Performance management" (page 317)
• "Security management" (page 317)
• "Backup and restore strategy" (page 318)

Management system components
The following diagram provides a logical view of the MCS management system and its components.



Figure 79 MCS Management System logical view

The MCS management system consists of the following components:
• System Manager: This module provides management functions for configuration, operation, administration, maintenance, and performance of all MCS components, except for the SIP clients. The System Manager also collects data from the Media Gateway 3200, for fault management only.
• Database Manager: This module provides storage and retrieval capability for the following data:
— Subscriber location information and registration status
— Routing and translation entries, subscriber and system configuration data
— Address Book information
— Service Package information

• Provisioning Manager: This module provides secure access to the Database Manager for:
— administrative provisioning using the Provisioning Client, through the Open Provisioning Interface (OPI)
— user domain provisioning using the Provisioning Client, through the Open Provisioning Interface (OPI)
— subscriber self-provisioning using the Personal Agent, through the Lightweight Directory Access Protocol (LDAP)
— network service functionality, such as the Network Address Book for the Multimedia PC Client
— Web server functionality, included within the Provisioning Manager to process HTTP requests from the Multimedia Web Client, Personal Agent, and Provisioning Client in support of self-provisioning and network-based services
— the Bulk Provisioning Tool (BPT) function, using a Command Line Interface (CLI) for human-to-machine provisioning

• Provisioning Client: The Provisioning Client enables system administrators and domain administrators to configure domain and user attributes. The Provisioning Client is accessed through a Web browser interface.
• Accounting Manager: This module provides accounting data collection, formatting, and transmission of accounting records for further processing, which includes rating, correlation, and call trending, by a downstream Operations Support System (OSS).
• System Management Console: This is the portal to the MCS system. All management functions are performed through this interface except:
— provisioning tasks, which are performed through the Provisioning Client and the Personal Agent
— database administration functions, which are accomplished through the Oracle Enterprise Manager (OEM) Console

From the console, the administrator can do the following:
— Log on/log off.
— Display the system topology in a directory tree.
— Issue maintenance commands.
— Use the Properties editor.
— Browse alarms, logs, and operational measurements (OMs).
— Monitor administrative and operational states.
— Launch components such as the Media Gateway 3200, BPS 2000 switch, OEM, and Provisioning Client.
— Perform backup and restore activities.
— Deploy software loads.



The Management Console is a Java application that operates on a PC. The minimum PC requirements to run the Management Console include the following:
— Windows 98 or a later version of the operating system (Win98 SE, WinNT 4.0, Win2000, or Win XP)
— Java Runtime Environment (provided with the System Management Console software)
— Internet browser (Netscape Communicator 4.7 or later, or Internet Explorer 5.0 or later)
— 384 MB of RAM or greater
— 50 MB of free hard disk space (at least 30 MB of free space on the C: drive for installation of the console software)
— IP access to the protected MCS network
— CD-ROM drive or 3 1/2-inch floppy drive

The maximum number of simultaneous System Management Console connections to the System Manager is 20.

Fault management
The System Manager is responsible for storing, formatting, and forwarding the log streams of all MCS components. Log data are formatted into the Nortel Standard (STD) format and stored on the local disk. The Operational Support System (OSS) can retrieve the log data using SFTP. Alarms are sent to the OSS using SNMP v2c traps and the Nortel Reliable MIB. In an MCS system, only the SNMP Get command is supported.

Alarms and log information can be viewed using the System Management Console.

The System Manager formats the raw log files into the Nortel Standard (STD) format and stores them in log files on local disks. Enough disk space is allocated on the local disk to store seven days' worth of log data. Log files are rotated by size or by a time interval that is configured using the System Management Console.

If the connection between the System Manager and the Session Manager is lost for any extended period of time, raw log files are written to local disk. There is enough disk space allocated on the local drives to store up to one day’s worth of log files.



Once the connection is reestablished, the System Manager resumes its normal operation. Raw log file data are not lost during this period. However, the raw log files are not automatically sent to the System Manager for formatting.

The default file rotation parameters for logs are 100 KB in file size or 3600 minutes in time. The threshold that is exceeded first causes the current file to be closed and a new file to be opened.
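The rotation rule above (close the current file as soon as either threshold is exceeded, whichever comes first) can be sketched as follows. This is an illustrative sketch, not the product's implementation; the function name and defaults are assumptions.

```python
# Sketch of the log-rotation rule: rotate when either the size
# threshold or the time threshold is exceeded, whichever comes first.
# Names and defaults are illustrative, not taken from the product.

def needs_rotation(size_bytes: int, age_minutes: float,
                   max_size_bytes: int = 100 * 1024,
                   max_age_minutes: int = 3600) -> bool:
    """Return True when the current log file should be rotated."""
    return size_bytes >= max_size_bytes or age_minutes >= max_age_minutes
```

For example, a 120 KB file rotates on size even if it is only minutes old, and a small file still rotates once it reaches the time threshold.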

The directory path to the archived logs on the System Manager server is as follows:

/var/mcp/oss/log/MCP_9.1/<instance>/<component>/<filename>

where <instance> is the Session Manager instance from which the log is generated (for example, 0 or 1 in the case of two active Session Managers), <component> is the name of the service component that originated the logs, and <filename> is the archived log file name.

Integrating with an OSS
The MCS alarm information is sent using the Nortel Reliable Management Information Base (MIB). The SNMP agent component in the System Manager implements the Nortel Reliable MIB to support the sending of notifications for events and the polling of management data as defined in the MIB. The default ports are UDP 161/162 for the back-office SNMP management station and the SNMP northbound interface. The MIB files are compiled in the following sequence:
1. nortel.mib
2. nortelGenericMIBs-smi2.mib
3. nortelNMItextConv-smi2.mib
4. nortelNMIconfigMgmt-smi2.mib
5. nortelNMIconformance-smi2.mib
6. nortelNMImibGroups-smi2.mib
7. nortelNMIresourceMgmt-smi2.mib
8. nortelNMInotifications-smi2.mib
9. nortelNMIneInventory-smi2.mib
10. nortelNMIconfigNoti-smi2.mib
11. nortelNMIfaultMgmt-smi2.mib
12. nortelNMIfaultNoti-smi2.mib
13. nortelNMIalarmSurv-smi2.mib
14. nortelNMIstateInfo-smi2.mib
15. nortelNMIappComplianceIndications-smi2.mib
16. nortelNMIappRequirements-smi2.mib
17. nortelCSMOAappRequirements-smi2.mib
18. nortelCSMOAappComplianceIndications-smi2.mib

Notifications
Notifications are used to guide the SNMP Manager (back-office system) regarding the frequency of polling. This enables more controlled network management traffic and simplifies the agent. These notifications generally communicate an event along with some qualifying information provided by a list of variable bindings. The MCS system uses the following four main categories of notifications:
• NE (Network Element) Enrol Notifications
• NE Deenrol Notifications
• NE OSI State Change Notifications
• Alarm Notifications

NE enrol notifications
Once the SNMP Agent has been initialized in the System Manager, it notifies the provisioned SNMP Manager when a new Managed Element (ME) is added to its management domain. The following are the variable bindings of the notification:
1. sysUpTime (1.3.6.1.2.1.1.3.0): the sysUpTime of the System Manager server, counted in hundredths of seconds. The syntax is of type TimeTicks [RFC 1902].
2. snmpTrapOID (1.3.6.1.6.3.1.1.4.1.0): the syntax is of type Object Identifier. The value is 1.3.6.1.4.1.562.29.1.6.3.0.11, as defined in the Nortel Reliable MIB.
3. currentTxNotificationSequenceNum (1.3.6.1.4.1.562.29.1.6.1): a monotonically increasing 32-bit unsigned integer that is incremented by one for every outgoing trap, irrespective of the category of the trap.
4. NE type (1.3.6.1.4.1.562.29.1.6.2.1): the type of the enrolling NE, specified as a display string. The possible values are CM, System Manager, and AM, representing Configuration Manager, System Manager, and Accounting Manager respectively.
5. NE name (1.3.6.1.4.1.562.29.1.6.2.2): a name/label that uniquely identifies the NE in the system. It is in the format SiteName.ME_Name. For example, if a managed element called AM1 is configured in site Site1, the NE name is Site1.AM1.
6. Time of enrolment (1.3.6.1.4.1.562.29.1.6.3.1.2): timestamp in seconds since the reference epoch 00:00:00 January 1, 1970. The syntax is Unsigned32.
7. NE IP address (1.3.6.1.4.1.562.29.1.6.3.1.3): the value is always the IP address of the System Manager server. The syntax is IpAddress.
8. NE version information (1.3.6.1.4.1.562.29.1.6.3.1.4): reflects the Major and Minor of the software release.
9. NE vendor name (1.3.6.1.4.1.562.29.1.6.3.1.6): a display string variable specifying the NE vendor. The value is always Nortel.
10. NE location name (1.3.6.1.4.1.562.29.1.6.3.1.6): the name of the place where the NE is currently located. The value is the name of the site to which the managed element belongs. The syntax is DisplayString.
11. NE administrative state (1.3.6.1.4.1.562.29.1.6.2.3): indicates the current administratively assigned state of the NE, which can be locked, unlocked, or shutting down according to ITU-T recommendation X.731. The syntax is an enumerated integer with the three values mentioned above.
12. NE operational state (1.3.6.1.4.1.562.29.1.6.2.4): indicates whether the NE is enabled or disabled according to ITU-T recommendation X.731. The syntax is an enumerated integer with two values (enabled, disabled). The value is always enabled.
13. NE Unknown Status (1.3.6.1.4.1.562.29.1.6.2.5): indicates whether the NE is presently considered to be at an unknown status according to ITU-T recommendation X.731. This status indicates whether the System Manager server can perform OAM communications with the managed element. The syntax is the TruthValue textual convention.
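The varbind list above can be sketched as ordered (OID, value) pairs. This is an illustrative sketch, not pysnmp or product code; the function name and the example values are assumptions, while the OIDs and the SiteName.ME_Name format come from the text.

```python
# Illustrative sketch: assembling the leading variable bindings of an
# NE-enrol notification as (OID, value) pairs, using the OIDs listed
# above.  Values passed in are example data, not product output.

NE_ENROL_TRAP_OID = "1.3.6.1.4.1.562.29.1.6.3.0.11"

def ne_enrol_varbinds(sys_uptime, seq_num, ne_type, site, ne_name,
                      enrol_time, ip_addr, version):
    """Build the ordered varbind list for an NE enrol notification."""
    return [
        ("1.3.6.1.2.1.1.3.0", sys_uptime),             # sysUpTime
        ("1.3.6.1.6.3.1.1.4.1.0", NE_ENROL_TRAP_OID),  # snmpTrapOID
        ("1.3.6.1.4.1.562.29.1.6.1", seq_num),         # tx sequence number
        ("1.3.6.1.4.1.562.29.1.6.2.1", ne_type),       # NE type: CM/SM/AM
        ("1.3.6.1.4.1.562.29.1.6.2.2", f"{site}.{ne_name}"),  # SiteName.ME_Name
        ("1.3.6.1.4.1.562.29.1.6.3.1.2", enrol_time),  # seconds since epoch
        ("1.3.6.1.4.1.562.29.1.6.3.1.3", ip_addr),     # System Manager IP
        ("1.3.6.1.4.1.562.29.1.6.3.1.4", version),     # software release
    ]
```

For example, an Accounting Manager AM1 at site Site1 enrols under the NE name "Site1.AM1".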

NE de-enrol notifications
The System Manager notifies the SNMP Managers (Service Provider's back-office system) whenever an existing managed element that no longer needs to be managed is removed from its management domain. The notifications contain the following variable bindings:
• sysUpTime
• snmpTrapOID
• currentTxNotificationSequenceNum
• NE type
• NE name
• Time of de-enrolment variable: same as the Time of enrolment variable

For variable definitions, see "NE enrol notifications" (page 306).

NE OSI state change notifications
Whenever the administrative state or unknown status of a managed element changes, the System Manager notifies its SNMP managers of the new value. The notifications contain the following variable bindings:
• sysUpTime
• snmpTrapOID
• currentTxNotificationSequenceNum
• NE type
• NE name
• NE administrative state
• NE operational state
• NE unknown status
• Event Time Stamp (1.3.6.1.4.1.562.29.1.6.4.1.9): timestamp in seconds since the reference epoch 00:00:00 January 1, 1970. The syntax is Unsigned32.

For variable definitions, see "NE enrol notifications" (page 306).

Alarm notifications
The System Manager server notifies its SNMP managers on the occurrence of various problems in the managed element. Four traps (Critical, Major, Minor, Warning), categorized based on severity, are supported at this interface to report the occurrence of a fault condition. All traps have the following variable bindings in addition to sysUpTime, snmpTrapOID, and currentTxNotificationSequenceNum, as described in the "NE enrol notifications" (page 306) section:
1. Component Object Identifier (1.3.6.1.4.1.562.29.1.6.4.1.1): this variable unambiguously identifies the specific component of the managed element that raised the alarm. The format of the string is Site=Site name;Server=NE name;component fdn. For example, Site=MgmtSite;Server=SysMgr;System.Sites.MgmtSite.Servers.MgmtSvr.Services.SysMgr.OSSAGENT.OMIConnectionAlarm0.
2. Problem Category (1.3.6.1.4.1.562.29.1.6.4.1.2): this varbind classifies the problem into five categories (Communications, Quality of Service, Processing Error, Equipment, and Environmental) according to ITU-T recommendation X.733. The syntax is an enumerated integer listing all five applicable values.
3. Notification identifier (1.3.6.1.4.1.562.29.1.6.4.1.3): this field provides a 32-bit integer value that uniquely identifies the alarm notification at an NE. It is at most a 10-digit number in MCS alarm notifications. If the alarm notification comes from a managed element instance whose instance ID equals one, the first digit is 1, indicating that it comes from instance ID 1. The remaining digits equal a sequence number generated at the managed element instance.
4. Additional text (1.3.6.1.4.1.562.29.1.6.4.1.5): the time at which the particular alarm condition happened at the NE. This value is represented as the time in seconds since the fixed reference epoch 00:00:00 January 1, 1970.
5. Probable Cause (1.3.6.1.4.1.562.29.1.6.4.1.6): the syntax is an enumerated integer, and the actual list is provided in the MIB module. MCS alarm notifications also use the value 118 (unspecified reason).
6. Specific Problem (1.3.6.1.4.1.562.29.1.6.4.1.7): this string parameter provides further refinement to the probable cause according to ITU-T Recommendation X.733. The value in MCS alarm notifications contains the family name and the three-digit alarm number. For example, if the family name of an alarm is CAM and the alarm number is 121, the Specific Problem has the value CAM:121.
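The two encodings described for the Notification identifier and the Specific Problem can be sketched as follows. The helper names are illustrative assumptions; only the formats (instance-ID digit followed by a sequence number, and family name plus three-digit alarm number) come from the text.

```python
# Sketch of the two alarm-field encodings described above.
# Helper names are illustrative; the formats come from the text.

def notification_id(instance_id: int, sequence: int) -> int:
    """First digit is the managed-element instance ID; the remaining
    digits are the per-instance sequence number (max 10 digits total)."""
    return int(f"{instance_id}{sequence}")

def specific_problem(family: str, alarm_number: int) -> str:
    """Family name plus the three-digit alarm number, e.g. CAM:121."""
    return f"{family}:{alarm_number:03d}"
```

For example, sequence number 42 from instance ID 1 yields notification identifier 142, and alarm 121 in family CAM yields "CAM:121".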

Alarm Clear notifications are used to indicate that previously reported problems have been cleared. The notification contains the List of Correlation Identifiers variable binding along with sysUpTime, snmpTrapOID, currentTxNotificationSequenceNum, Component Object Identifier, Additional text, and Alarm TimeStamp.

Polling management data
The MCS system also supports the polled management model of SNMP to facilitate well-controlled network management traffic and to enable reliable data synchronization using a request-and-response interaction. By maintaining MIB tables and variables, the MCS system provides the following functions:
• Recover data missing due to lost notifications, which is referred to as an audit.
• Perform initial data synchronization for the NE inventory, state information, and active alarm list.



• Allow the SNMP manager to monitor the status of OAM communications with the agents and resynchronize all data after recovering from a communication loss.

Audit
Auditing can be categorized as regular auditing and data auditing. In regular auditing, the SNMP agent permits polling the value of the following MIB variables:
• sysUpTime (1.3.6.1.4.1.562.29.1.1.3)
• currentTxNotificationSequenceNum (1.3.6.1.4.1.562.29.1.6.1)

Data synchronization
This section describes the MIB data required by the SNMP manager to perform data synchronization initially after startup and after every management restart/reboot.

The SNMP agent of the System Manager maintains a list of network elements that are in its domain along with their key attributes. This information is provided using the inventoryTotalNEs (number of NEs, OID: 1.3.6.1.4.1.562.29.1.4.2.1.1).

The following table shows the contents of the NE inventory table.

Table 33 NE inventory MIB

nortelINMinventoryTotalNEs (Unsigned32): the number of NEs in the System Manager server domain; indicates the number of rows in the NE inventory table.

neInventoryTable index: neNameIndex, the NE name (DisplayString); serves as the index of this table and uniquely identifies the NE in the management domain of the System Manager server.

Columnar variables:
• NE Type (DisplayString): type of network element
• NE enrolment Time (Unsigned32): time in seconds since the reference epoch January 1, 1970
• NE IP address (IpAddress): IP address of the System Manager server
• NE version info (DisplayString): indicates the Major, Minor release of the software
• NE vendor name (DisplayString): always shows the value of Nortel in the release
• NE location name (DisplayString): geographical/physical location of the NE

Active alarm status
The Active Alarm Table provides a consolidated list of all the currently outstanding alarms against the managed elements in the System Manager server domain. The Active Alarm Table contains the following two index columns:
• NE name: This string value is the same as the unique NE name used when the NE enrols.
• activeAlarmIndex: This is a monotonically increasing integer whose primary purpose is to make an entry unique with respect to the first index, the NE name. This is a 32-bit integer, and the index need not be maintained to be contiguous.

The following table describes the columnar variables of the Active Alarm Table.

Table 34 Active alarm table details

Columnar variables:
• sysUpTime (TimeTicks): time in hundredths of a second since the SNMP agent of the System Manager server last initialized
• snmpTrapOID (Object Identifier): authoritative identification of the notification
• componentOID (DisplayString): distinguished name of the component for which this alarm is applicable
• Problem category (Enumerated Integer): Communications, QoS, Processing Error, Equipment, and Environmental (X.733)
• notificationID (Integer32): unique ID for the notification
• Description text (DisplayString): textual description of the problem
• Alarm timestamp (Integer32): time in seconds since the reference epoch January 1, 1970
• Probable Cause (Enumerated Integer): always has the value 118
• Specific problem (DisplayString): has the value of the alarm number
• Correlation Id list (DisplayString): always an empty string



A management application can use activeAlarmIndex to determine the latest alarm on the stack of alarms for a particular network element. The value of activeAlarmIndex is monotonically increased by one for each alarm generated for a particular network element instance, and it is reset to zero when the network element instance is restarted. The management application is notified by an NE OSI State Change Notification when the network element instance is restarted. To determine the latest notification of any kind, a management application needs to monitor the value of currentTxNotificationSequenceNum. That value is monotonically increased by one for every notification generated from the FPM and is reset to zero when the FPM restarts. The management application needs to monitor sysUpTime to determine whether the FPM has been restarted.
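The monitoring logic described above can be sketched as follows: a sequence number that jumps by more than one means notifications were missed and an audit is needed, while a value lower than the last one seen suggests the sender restarted and reset its counter to zero. The function and return names are illustrative assumptions.

```python
# Sketch of how a management application might track the monotonically
# increasing currentTxNotificationSequenceNum.  Names are illustrative.

def check_sequence(last_seen: int, current: int) -> str:
    """Classify a newly observed sequence number."""
    if current < last_seen:
        return "restarted"   # counter reset to zero on restart
    if current > last_seen + 1:
        return "gap"         # missed notifications: audit needed
    return "ok"
```

In practice a back-office system would confirm a suspected restart by also checking sysUpTime, as the text describes.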

State Information
The SNMP manager can query the operational state, the administrative state, and the presence or absence of an unknown status condition for all the NEs in the MCS management domain.

The following table shows the details of the State Information table. The State Information table has NE name as its index and maintains a one-to-one dependent relationship with the NE inventory table. When a row is added or deleted in the NE inventory table, the change is also reflected here.

Table 35 State information table variables

Columnar variables:
• NE Admin state: Enumerated integer (locked/unlocked/shutting down)
• NE Operational state: Enumerated integer (disabled/enabled)
• NE Unknown Status: TruthValue (true/false)

Configuration management
There are two different types of roles within the MCS system: the system management role and the provisioning role. The system management role is defined and used for configuration and maintenance tasks within the MCS system. An administrator's level of system access depends on the role defined by the system administrator using the Provisioning Client. There are four system management roles available for administrators:
• Default Administrator: This administrator does not have any access rights or privileges on the System Management Console.
• System Administrator: The system administrator is the system super-user assigned during the initial deployment. System administrators have access rights to all functions on the System Management Console, and are responsible for adding and defining the roles of other administrators.
• Database Administrator: The database administrator role is divided into three categories:
— Observer: This category only enables the administrator to observe and monitor the Database Manager.
— Sysman: This category enables administrators to log in to Oracle Enterprise Manager (OEM) and use the management tools to perform database administration tasks. In addition, these administrators have access to the System Management Console, which enables them to perform the same tasks as general administrators.
— Oracle: This category is given to UNIX users capable of running command line scripts on the servers that host the Oracle database.
• General Administrator: General administrators can access limited functions on the System Management Console. For example, they can be assigned to configure sites, servers, components, and services. They can monitor operations and maintenance information.

Provisioning management
A provisioning role is defined as a collection of access rights and privileges that enable an administrator to perform various provisioning tasks. System administrators can create any administrator role, including provisioning roles. System administrators can define the provisioning roles necessary to support their system by enabling or restricting provisioning roles for specific actions. Example provisioning tasks are as follows:
• domain management
• Service Package creation
• IP Client Manager provisioning
• device management
• gateway management
• voice mail management
• subscriber management

Provisioning Management is performed using the Provisioning Client. The Provisioning Manager supports up to 1,000 simultaneous connections.



Accounting management
Features of the Accounting Management system include, but are not limited to, the following:
• Central Accounting Manager: Accounting records are stored for up to 1 day on the local disk of the Session Manager server in the event of a loss of communication with the Accounting Manager. Up to 7 days of accounting records are stored on the Accounting Manager.
• Local Accounting Manager: No accounting records are lost in the event of a connection loss to the Session Manager local disk, unless the disk is full.
• A recovery stream from the LAM to the CAM automatically sends the raw records for formatting when the connection is restored. These records are sent at a lower priority than the real-time records; they are dumped into a recovery directory.
• Network Time Protocol (NTP) is used across all servers to ensure that all accounting record timestamps are synchronized.

Accounting records
The MCS system uses an event-based accounting model. The MCS system generates an accounting record for each event. To correlate all the records for a session, the SIP Correlation ID field is used.

The size and number of accounting records depends upon the event being recorded. As an example, four accounting records are generated for a basic completed SIP to SIP call (1 originating ingress, 1 originating egress, 1 terminating ingress, and 1 terminating egress record).
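The correlation step described above can be sketched as grouping the event-based records of one session by their SIP Correlation ID. The record layout here is a simplified stand-in for the IPDR-like records, not the actual schema; the function name is an assumption.

```python
# Sketch of correlating event-based accounting records by their SIP
# Correlation ID.  The dict-based record layout is a simplified
# stand-in for the IPDR-like records, not the product schema.

from collections import defaultdict

def group_by_session(records):
    """Map SIP Correlation ID -> list of records for that session."""
    sessions = defaultdict(list)
    for rec in records:
        sessions[rec["correlation_id"]].append(rec)
    return dict(sessions)
```

A basic completed SIP-to-SIP call would then yield one group of four records (originating ingress/egress and terminating ingress/egress) sharing a single Correlation ID.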

It is important to note that the records are also useful for data mining and should be considered an enhancement to performance reporting. Data mining can include CODEC usage; RTP usage, which is indicative of more expensive calls; service utilization; client type, which can determine whether the Multimedia PC Client has been upgraded to the latest load; call trending and call patterns; and whether a gateway has been misconfigured for routing.

Accounting record transport
The Accounting Manager receives raw accounting data from the Session Manager, formats the received raw data into IPDR-like (XML-based) accounting records, and transports the formatted accounting data to a configured destination.



In the event of a communication loss between the Accounting Manager and the Session Manager, there are two alarms associated with the accounting streams:
• connectToPrimAmLost: raised whenever the Primary Stream's communication is lost between the Session Manager and the Accounting Manager
• connectTo2ndAmLost: raised whenever the Recovery Stream's communication is lost between the Session Manager and the Accounting Manager

The Accounting Manager can be configured (optionally) to compress (ZIP) the newly formatted IPDR-like records before transmission to the configured destination. The configuration is performed through the System Management Console under the Central Accounting Tab of the Accounting Manager.

The Accounting Manager can also be configured to transport IPDR/XML records to the local disks or directly to downstream accounting systems using a TCP/IP connection or an FTP connection. Accounting data is streamed in near real time, record by record, to a defined downstream device. Secure Shell (SSH) is supported for pulling accounting files. FTP is used to push accounting files to one downstream destination.

All formatted accounting records are stored on the local disk of the Accounting Manager. In the case of a communication loss between the Accounting Manager and the back-office billing system, accounting records cannot be transported through the TCP/IP stream; billing administrators can retrieve the records from the local disk on the Accounting Manager. For administrators who configure an FTP push into their back-office system, transferred files are automatically renamed on the local Accounting Manager disk with the filename extension .transferred. For the TCP/IP stream, the recovery stream automatically transfers accounting records to the enterprise's back-office processing system.

CAUTION
Potential data loss
The Accounting Manager does not perform file deletion automatically. It is the administrator's responsibility to remove old accounting records from the Accounting Manager once they are received by the downstream processor. Otherwise, the provided disk storage can be exhausted, causing a disk-full condition, and new accounting information can be lost. Alarms are generated to indicate the disk-full condition (which has a configurable threshold).

Nortel Multimedia Communication Server 5100 MCS Planning and Engineering NN42020-200 04.02 Standard Release 4.0 8 October 2009 Copyright © 2007-2009, Nortel Networks


Accounting file rotation
The default file rotation parameters for accounting record files are 100 KB in file size or 20 000 minutes in time. Whichever threshold is exceeded first causes the current file to be closed and a new (active) file to be opened.

Accounting file naming convention
The formatted accounting file name written to disk storage follows this convention: IPDR_ @ <date> <suffix>

<date> is in YYYYMMDD format; for example, 20010309 represents March 9, 2001.

<suffix> is one of the following text strings:
• .active: an open file that is actively being written
• .closed: a closed file; the maximum file size (FileRotationSize) was reached or the maximum time (FileRotationTime) elapsed
• .transferred: a file that has been successfully sent through FTP to the customer OSS
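The naming convention and rotation rules above can be sketched as follows. The exact template between IPDR_ and @ is elided in the source, so the sequence-number placeholder below is a hypothetical illustration; the date format and suffixes follow the text.

```python
from datetime import date

def accounting_filename(seq: int, when: date, suffix: str) -> str:
    """Build a record-file name: IPDR_<seq>@<date><suffix> (seq is hypothetical)."""
    assert suffix in (".active", ".closed", ".transferred")
    return f"IPDR_{seq}@{when:%Y%m%d}{suffix}"

def should_rotate(size_kb: int, age_min: int,
                  rotation_size_kb: int = 100,         # FileRotationSize default
                  rotation_time_min: int = 20_000) -> bool:  # FileRotationTime default
    """The current file is closed when either rotation threshold is crossed."""
    return size_kb >= rotation_size_kb or age_min >= rotation_time_min
```

For example, accounting_filename(1, date(2001, 3, 9), ".active") yields IPDR_1@20010309.active.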

Accounting alarms
The Accounting Manager reports disk-full conditions as alarms raised on the System Management Console. Because disk-full conditions can result in the loss of accounting information, the Accounting Manager generates alarms to alert the administrator when preconfigured thresholds are met or surpassed. The thresholds at which these alarms are raised are based on the following configurable properties on the Central Accounting Tab for the Accounting Manager:
• DiskMonMajorThreshold: the percentage of the accounting disk partition that must be used before a DiskMajorAlarm is raised
• DiskMonCriticalThreshold: the percentage of the accounting disk partition that must be used before a DiskCriticalAlarm is raised
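A minimal sketch of this threshold logic, assuming illustrative default percentages (the actual values are configured on the Central Accounting Tab):

```python
def disk_alarm(used_pct: float,
               major_threshold: float = 80.0,      # assumed DiskMonMajorThreshold
               critical_threshold: float = 95.0):  # assumed DiskMonCriticalThreshold
    """Map accounting-partition usage to the alarm that would be raised."""
    if used_pct >= critical_threshold:
        return "DiskCriticalAlarm"
    if used_pct >= major_threshold:
        return "DiskMajorAlarm"
    return None   # usage below both thresholds: no alarm
```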


IM related accounting
By default, Instant Message accounting is disabled for performance reasons. Note that Instant Message accounting does not capture the content of the Instant Message, and it only works for Instant Messages that traverse the Session Manager. The MAS IM Chat service provides a mechanism to bypass the Session Manager after the initial IM exchange has occurred.

Performance management
Features of the System Management Console include, but are not limited to, the following:
• Disk usage and CPU utilization information can be viewed using the System Management Console.
• Operational Measurements (OM) are stored in Comma Separated Value (CSV) format.
• An OSS can use SFTP to retrieve OM information from the System Manager.
• Active and archived OM data can be viewed using the System Management Console.

The default file rotation parameters for OMs are 100 KB in file size or 3 600 minutes in time. Whichever threshold is exceeded first causes the current file to be closed and a new file to be opened.

If the connection between the System Manager and its network elements is lost for an extended period of time, raw OM files are written to local disk. Enough disk space is allocated on the local drives to store up to one day's worth of OM files.

Once the connection is reestablished, the network element resumes normal operation. Raw OM files are not lost during this period; however, they are not sent to the System Management Console for formatting while the connection is down.

OM data to be monitored for network growth and planning
For details of operational measurements (OMs) and their definitions, see the related component documents. For information about the component documents, see "About this document" (page 19).

Security management
The following are the OAM&P security management measures:
• history command file on the System Manager
• operating system hardening
• authentication of user ID and password, and password complexity


• multi-level management profiles
• Secure Shell (SSH)

Backup and restore strategy
For information about system backup and restore, see Routine Maintenance (NN42020-502).

Appendix A IP functional components

Routing
Routing is the process of choosing the best forwarding path, based on a Layer 3 address, over which to send packets. Routers use Layer 3 (protocol) addresses to decide where traffic should go. Hosts sending packets to other networks send those packets to their default router. The router selects the optimal route between itself and the destination networks using routing tables and decision trees, then forwards the packet to the next-hop router or to the locally attached host.

A router can transmit information between different LANs and WANs and can provide redundancy for the network by defining alternate paths to locations. It terminates broadcast domains, which lets the network manager create smaller broadcast domains, thus containing broadcast storms. Using filters, security can be enforced to keep unwanted traffic off network segments. Routers are used to build and scale large networks.

RIP, OSPF, and BGP are routing protocols used by routers for constructing the network topology. These protocols are configurable and can be tailored to match specific routing requirements. RIP and OSPF are protocols typically used within an enterprise, while BGP is the protocol typically used between enterprises.

When routers exchange updates that reflect changes in the network, they converge on a new representation of the topology. Network convergence is a state in which all routers possess a consistent view of the internetworking environment. Should a link failure occur, the convergence time is the time required for all routers to detect the failure (detection time), plus the time to propagate the news (propagation time), plus the time to calculate and update the forwarding tables (calculation time). To minimize disruption to network applications, the convergence time should be designed to be less than the fault tolerance time of the applications and the users of the end-to-end system.
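The convergence budget described above is a simple sum of the three phases; the sketch below uses hypothetical per-phase timings:

```python
def convergence_time(detection_s: float, propagation_s: float,
                     calculation_s: float) -> float:
    """Total convergence time = detection + propagation + calculation."""
    return detection_s + propagation_s + calculation_s

def within_fault_tolerance(convergence_s: float, tolerance_s: float) -> bool:
    """Design goal: the network converges faster than applications can tolerate."""
    return convergence_s < tolerance_s

# Hypothetical figures: 1 s detection, 2 s propagation, 1.5 s calculation
total = convergence_time(1.0, 2.0, 1.5)   # 4.5 s total
```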


RIP depends on receiving periodic (30-second) routing updates to keep the routing table current. When there is a topology change, convergence can take minutes, depending on the size of the network. RIP can also consume significant bandwidth sending and receiving periodic updates. RIP should not be used in a large MCS network.

OSPF (Open Shortest Path First) is another Interior Gateway Protocol (IGP), used only for IP routing. OSPF solves the convergence problem (by quickly and reliably flooding topology changes through the network) and the overhead problem (by advertising only changes to the network).

IP addressing
The following are a few key concepts that must be kept in mind when addressing any IP network:
• Efficient use of address space: When numbering an IP network, it is important to use a structured approach to the allocation of IP addresses, more specifically, to implement a hierarchical scheme. This enables more appropriate allocation of subnet and host numbers to those areas of the topology requiring them, for example, by employing variable-length subnet masking (VLSM).
• Address summarization: Address summarization is the ability to express a range of power-of-2 blocks of contiguous addresses as a single routing database entry. Hierarchical routing protocols (such as OSPF) inherently benefit from address summarization by minimizing routing advertisements and thus routing database size and network overhead traffic. In addition, route policies can be defined efficiently because blocks of addresses can be consolidated in a single definition.
• Planning horizon: The addressing scheme should be flexible enough to handle network growth. A key objective is to minimize the probability of having to renumber networks and nodes, because changing network addresses is a disruptive activity.
• Distributed administration: Allocating subsets of IP address ranges to geographic regions enables distributed administration of the subnets within a region. Doing so enables an organization to respond faster to local network requirements and changes.
• Understandable by network personnel: Dividing the address space into areas and allocating contiguous blocks of addresses to each area enables network personnel to more easily detect routing problems when they occur and reduces the time required to troubleshoot them.
• Route policies: Route policies can be simplified with a structured IP addressing plan. A single route policy can potentially cover a range of routing entries by using an address/mask pair specification.
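Address summarization can be illustrated with Python's standard ipaddress module; the address blocks below are hypothetical:

```python
import ipaddress

# Four contiguous /26 blocks (a power-of-2 range) summarize into a
# single /24 routing advertisement.
subnets = [ipaddress.ip_network(f"10.1.1.{i}/26") for i in (0, 64, 128, 192)]
summary = list(ipaddress.collapse_addresses(subnets))
# summary == [IPv4Network('10.1.1.0/24')]
```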


A large flat network is inherently costly to manage. Structured (hierarchical) IP addressing refers to the mapping of an IP address space to a network topology using a hierarchical scheme. It imposes an ordered logical structure on the physical network topology.

Structured IP addressing can be illustrated using a graphical address tree. It allocates blocks of contiguous addresses that are power-of-2 multiples. The allocation is repeated at all levels of the network, refining the quantity of host addresses needed among the network segments at each level (for example, sites).
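A hierarchical carve-up like this address tree can be sketched with the ipaddress module; the block sizes and the region block are hypothetical:

```python
import ipaddress

# Hypothetical regional block, split hierarchically into power-of-2 pieces.
region = ipaddress.ip_network("10.8.0.0/13")     # one region
sites = list(region.subnets(new_prefix=16))      # eight /16 site blocks
floors = list(sites[0].subnets(new_prefix=24))   # 256 /24 floor subnets per site
```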

Figure 80 Structured IP addressing

Structured addressing methodology can be used in many ways when planning an MCS deployment. For example, you can reserve an address space for IP Phone terminals at the regional, site, campus, remote location, building, or floor level, depending on your addressing strategy. If you have reserved addresses for IP Phone terminals in a building or on a floor, you can place all phones in one VLAN within that building or floor. No matter which approach is taken, it is critical to develop an addressing structure that is easily understood by the individuals responsible for maintaining MCS network operations.

Structured addressing enables effective address summarization, which benefits the performance of network infrastructure components by conserving memory in routing tables, bandwidth for the transmission of routing update messages, and CPU for processing those messages and managing the routing tables. It gives network administrators better control over the network and enables them to detect and resolve network problems more quickly.

Firewall types
There are three main types of firewalls: filtering routers, stateful packet filters, and application gateways. Most firewall designs use a combination of these technologies.

Filtering routers combine the functions of routing and security and are the simplest form of firewall. They are routers with traffic filters configured to filter packets based on IP header information. Filtering routers do not maintain state information; that is, they treat each packet separately, not as part of a connection. This makes implementing complex policies very challenging, if not impossible. Filtering routers are best used in combination with other firewall technologies to provide a first-level defense.

Stateful packet filters, also called stateful inspection engines, are designed from the start for sophisticated firewall functions. Unlike filtering routers, they maintain connection state and treat each packet as part of a connection. With stateful inspection, the packet is intercepted at the network layer, and then handed to the inspection engine. The inspection engine, which is application aware, extracts state-related information for the application and maintains this information in dynamic tables for evaluating subsequent connection attempts. Since these devices maintain the state information about a connection, they can be used to implement very complex security policies. Stateful inspection introduces stronger security using derived state information.
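A toy illustration of the state-tracking idea, reduced to a connection table keyed on the flow 4-tuple (real inspection engines also track protocol state, application data, and timeouts):

```python
# Outbound packets create connection state; inbound packets are accepted
# only if they match the reverse of a tracked outbound flow.
connections: set = set()

def outbound(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> None:
    """Record state for an outbound flow."""
    connections.add((src_ip, src_port, dst_ip, dst_port))

def inbound_allowed(src_ip: str, src_port: int,
                    dst_ip: str, dst_port: int) -> bool:
    """Allow an inbound packet only if it belongs to a known connection."""
    return (dst_ip, dst_port, src_ip, src_port) in connections
```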

Application gateways, sometimes called application proxies, are programs that stand between the end user and the public network. The end user contacts the application gateway, and the gateway performs the requested function on behalf of the user. The application gateway intercepts any IP packets from the Internet; because the end user never talks directly to a system on the Internet, the application gateway can enforce the security policy. The application gateway runs on a bastion host. There are generally no user accounts on this host; its only function is to proxy requests from end users. Because the proxy stands between the user and the target system, application gateways are not transparent to users, who need to install custom applications to contact them.

In the simplest firewall implementation, a firewall might consist of just a router with a group of traffic filters configured. In a more complex implementation, a firewall might consist of a screening router working with a standalone sophisticated firewall system to protect the corporate intranet as shown in the following diagram.


Figure 81 A firewall system

This firewall implementation uses all the firewall technologies to provide strong protection against intranet abuse. The screening router provides basic traffic filters and directs appropriate traffic to the application gateways. The application gateways protect the internal systems from direct exposure to vulnerable or risky services. The final step is a stateful inspection firewall that controls all access to the protected network.

Network Address and Port Translation (NAPT)
Network Address Translation (NAT) was originally implemented to circumvent the IPv4 address-space exhaustion problem. To provide more address space for enterprises, RFC 1918 defines the following blocks of IP address space for use in private enterprise networks: 10.0.0.0/8, 172.16.0.0/12 (172.16.0.0 through 172.31.255.255), and 192.168.0.0/16. Any IP network numbers can be chosen for implementing a private addressing scheme, but they cannot be advertised into a public network such as the Internet.
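The RFC 1918 membership check can be expressed directly with Python's ipaddress module (the sixteen 172.16.0.0 through 172.31.0.0 /16 blocks are equivalent to 172.16.0.0/12):

```python
import ipaddress

# The three RFC 1918 private address blocks.
RFC1918 = [ipaddress.ip_network(n) for n in
           ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def is_rfc1918(addr: str) -> bool:
    """True if the address falls in one of the RFC 1918 private blocks."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)
```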


Figure 82 Network Address Translation

NAT (RFC 1631) is a process whereby the source or destination IP address in a packet is translated from a private enterprise address space to a public address routable in the Internet. This happens when the packet crosses a boundary system connecting the networks. You can also use NAT to enable communication between two autonomous IP networks with IP address conflicts. In the basic NAT model, the IP header of every packet must be modified. This modification includes the source IP address for outbound packets, the destination IP address for inbound packets, and the IP checksum. For TCP/UDP sessions, the modifications must also update the checksum in the TCP/UDP headers, because the TCP/UDP checksum covers a pseudo-header that contains the source and destination IP addresses.

Figure 83 NAPT

NAPT extends the translation one step further by also translating the transport identifier (for example, TCP and UDP port numbers, or ICMP query identifiers). This enables the transport identifiers of a number of private hosts to be multiplexed into the transport identifiers of a single external address: NAPT enables a set of hosts to share a single external address. NAPT is a good fit for many SOHO (Small Office Home Office) users and telecommuters, who have several network nodes in the office but a single IP address assigned by their service provider to access the Internet. NAPT benefits this community of users by permitting multiple nodes in a local network to simultaneously access remote networks using the single IP address assigned to their router. Note that NAPT can be combined with Basic NAT so that a pool of external addresses is used in conjunction with port translation.

Address binding (or bind for short) is the phase in which a local node IP address is associated with an external address, or vice versa, for purposes of translation. Address bindings can be created statically or dynamically. Once the binding between two addresses is in place, all subsequent sessions originating from or destined to this host use the same binding for session-based packet translation. With static address binding, there is a one-to-one address mapping between a private network address and an external network address for the lifetime of NAT operation; static assignment ensures that NAT does not have to administer address management with session flows. With dynamic address binding, external addresses are assigned to private network hosts dynamically, based on session flow usage requirements. A session (source IP address, source TCP/UDP port, target IP address, target TCP/UDP port) is defined as the set of traffic that is managed as a unit for translation. When the last session using an address binding terminates, NAT frees the binding so that the external address can be recycled for later use. All address bindings must be initiated from the private side of the NAT.

Depending on how address bindings are done, four different types of NAPT exist today:
• Full Cone NAPT: all requests from the same internal IP address and port are mapped to the same external IP address and port. Furthermore, any external host can send a packet to the internal host by sending a packet to the mapped external address.
• Restricted Cone NAPT: all requests from the same internal IP address and port are mapped to the same external IP address and port. Unlike a full cone NAPT, an external host (with IP address X) can send a packet to the internal host only if the internal host had previously sent a packet to IP address X.
• Port Restricted Cone NAPT: like a restricted cone NAPT, but the restriction includes port numbers. Specifically, an external host can send a packet, with source IP address X and source port P, to the internal host only if the internal host had previously sent a packet to IP address X and port P.


• Symmetric NAPT: all requests from the same internal IP address and port to a specific destination IP address and port are mapped to the same external IP address and port. If the same host sends a packet with the same source port but to a different destination, a different mapping is used. Furthermore, only the external host that receives a packet can send a UDP packet back to the internal host.
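The difference between cone and symmetric behavior comes down to what constitutes a bind key. The sketch below is illustrative only; the port-allocation helper is hypothetical, and real NAPTs also manage timers and inbound filtering:

```python
def cone_key(int_ip, int_port, dst_ip, dst_port):
    # Cone NAPTs: one mapping per internal endpoint, reused for every destination.
    return (int_ip, int_port)

def symmetric_key(int_ip, int_port, dst_ip, dst_port):
    # Symmetric NAPT: the destination is part of the key, so each
    # destination sees a different external mapping.
    return (int_ip, int_port, dst_ip, dst_port)

def external_port(binds: dict, key, base_port: int = 20000) -> int:
    """Allocate (or reuse) an external port for a bind key."""
    if key not in binds:
        binds[key] = base_port + len(binds)
    return binds[key]
```

With cone keys, two flows from the same internal endpoint to different destinations share one external port; with symmetric keys, they get different ports.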

A graphic comparison of these four NAPT types is shown in the following diagram.
Figure 84 NAPT types

Because NAPTs hide the details of the IP addressing structure of the private network, they also provide security as a side effect: packets can traverse the NAPT towards the private side only when a bind has already been established. Although NAPT extends usable IP address space and enhances security for enterprises, it also causes serious problems for applications communicating over the Internet, because NAPTs can break the following:
• the ability to use a unique address to communicate with a host or a service from anywhere in the network
• the ability for cooperating processes to exchange host/service addresses


• the ability of hosts to know their own addresses as viewed from outside the NAT
• the ability of an application to associate host state with the IP address of the host
• applications that assume that two different ports with the same host address are located at the same host
• applications that expect traffic to come from particular port numbers
• sessions that have been inactive longer than the NAT session timer
  Note: Because NAT is not aware of the applications, it uses a heuristic (or configured) timer to determine when to close a binding or a pinhole.

• the utility of IP addresses for logging

In general, NAPTs break any server application running behind a NAPT and any application that wants to distribute IP addresses and port numbers between cooperating processes. There are solutions for these problems, including the use of static address bindings, Application-Level Gateways (ALG) on NAPTs, and making DNS present the inside servers as having visible external IP addresses. However, these solutions do not always solve the problem, and sometimes make things worse. For example, static mappings are not scalable and are error prone, and ALGs might not be available for, or compatible with, new applications.

IP Virtual Private Network (VPN)
A VPN (Virtual Private Network) uses a public network as an extension of the corporate private intranet. The public network can be the Internet or a provider-managed network such as a Frame Relay, ATM, IP, or IP/MPLS network. The key characteristic of the public network is that you do not have exclusive use or control of it. VPNs can be used to link corporate sites, replace dialup lines, and link to business partners and suppliers. When used to link your corporate network to those of your partners and suppliers, the VPN is referred to as an Extranet. For our purposes, VPNs are defined as private data communications networks that use a public (managed) IP network as the basic transport for connecting corporate data centers, mobile employees, telecommuters, remote offices, customers, suppliers, and business partners. VPNs can be accessed in a variety of ways, including analog dial, ISDN, dedicated circuit, Data over Cable, xDSL, Frame Relay, ATM, Ethernet, and wireless. Typically, VPNs also feature security technology to encrypt or authenticate transported data.


Figure 85 Public IP network serving different users

There are three basic applications for VPNs: remote access outsourcing, extended intranets, and Extranets.

Remote access outsourcing using VPN is probably the easiest to understand and implement. The cost saving of replacing 800 numbers, modem banks, circuit costs, and associated personnel costs can be anywhere from 50 to 75 percent.

For enterprises, VPN can also be used as an alternative to leased lines for linking corporate sites. These remote sites use only local access lines to connect to the corporate intranet. The cost savings come from the fact that dedicated leased lines are charged by the mile; because local connections are much cheaper, it becomes possible to connect remote offices with high-speed connections. Another advantage of using the public IP network is that fully meshing sites with dedicated lines requires many more lines than using VPN to achieve the same level of connectivity.

A third use of VPN is to connect the corporate intranet to customers, suppliers, contractors, and business partners. Extranets enable companies to form dynamic network connections to the people they do business with in a timely and cost-effective manner. Extranets leverage public IP networks to replace the time-consuming and costly practice of installing leased lines to other companies' networks. However, Extranets require careful attention to interoperability and security.

The following are the commonly seen VPN deployment models, which essentially describe the ownership of the VPN devices and where VPN tunnels are initiated and terminated:
• CPE-based model: This model uses the public IP network primarily as a transport service. There are two variations of this model. In the pure


CPE model, the customer owns and manages the VPN equipment and tunnel endpoints. The service provider provides only the access to the network and is not involved in the formation of the VPN tunnels. In the managed CPE model, the service provider bundles the VPN service with provider-owned CPE equipment, provides the initial installation, sets up tunnels, and manages the VPN services remotely. The CPE devices perform routing, tunnel generation and termination, encryption, and firewall protection where needed. For both CPE-based models, tunnels start and end at customer sites.
Figure 86 CPE VPN model

The above diagram shows an example of running IPsec tunnels over a provider-managed IP network, which offers better QoS. In this example, intranet connections are separated from the Internet connection offered by local ISPs, which does not require advanced QoS. Within the intranet, private addressing schemes can be used to facilitate network expansion and to enhance security. The service provider backbone provides IP transport between the CPE devices. The address space and routing among customer sites and remote users are completely under the customer's control. The remote users' tunnel IP addresses are assigned by the CPE devices during remote login. In addition to using secure tunnels to connect business partners, firewall/NAT and static routing are also used to implement stored security policies for communications with them.
• Network-based VPN model: No CPE equipment is required for VPN functions. VPN equipment is placed in the service provider's POPs, offering network-based VPN services to a large number of customers


on a single hardware platform. This arrangement enables the service provider to reduce equipment and operations costs for the offered services.
Figure 87 Network-based VPN model

The provider's VPN equipment performs routing, tunnel generation and termination, firewall protection, QoS, and encryption on a per-customer basis to meet the diverse requirements of the customers. The tunnels are terminated at the POPs, not extended to the customer sites. Layer 1 or Layer 2 traffic segregation provides last-mile security for connected sites. Network-based VPNs offer enterprises private addressing, private routing, and data protection for entire traffic streams between geographically diverse locations. The provider owns the backbone for this protected traffic and the access to the Internet. The backbone can support QoS-enabled MPLS VPN (RFC 2547), IPsec VPN, or VR (Virtual Router) VPN (RFC 2764) for the attached customers. NAT is used at the POPs to manage limited IP address resources.

Virtual routers (VR) are multiple software routing engines on a single hardware platform; they share the memory and processor resources of that platform. Each VR runs independent IP routing protocols and builds independent forwarding tables, completely isolated from each other. A VR forwards packets the same way a physical router does. Each VR is connected to CPE devices using standard physical interfaces and standard Layer 2 protocols such as Frame Relay, PPP, ATM, and Ethernet. The VRs exchange routing information using standard IP routing protocols such as RIP, RIP v2, OSPF, BGP-4, and static routes. The VRs serving each customer's sites are connected through a virtual connection across the core of the provider's network. Since the Virtual Router model does not specify a tunneling protocol to


use, service providers are free to choose IPsec, GRE, MPLS, or Layer 2 protocols such as ATM or Frame Relay as their tunneling mechanism.
Figure 88 Virtual router VPN model

• BGP/MPLS VPN model: Only one instance of the routing engine runs on a physical Provider Edge (PE) router. Each customer site has one or more Customer Edge (CE) routers, which exchange the site's routing information with the PE. MP-BGP (Multiprotocol BGP) is used to distribute VPN-IPv4 prefixes and associated MPLS labels among peered PE routers.


Figure 89 BGP/MPLS VPN model

Although there is only a single BGP routing database maintained for all VPNs, each VPN maintains its own Virtual Routing and Forwarding table (VRF). This is done by filtering (using route import/export filters) on the route-target community BGP attribute in the BGP database for each VPN VRF. MPLS is used to forward packets over the backbone network using dual-label stacking. MPLS brings significant benefits to a network, including preprovisioned paths for faster network restoration and advanced traffic engineering capabilities.

Both VR and MPLS support DiffServ QoS in the core network. The customer traffic can be policed, metered, and marked at the edge of the service provider network. With MPLS, the DiffServ DSCP can be realized implicitly using a traffic-engineered LSP or explicitly using the EXP field carried in the label. With VR, the DiffServ DSCP can be carried natively or mapped to the Layer 3 or Layer 2 tunnels used for transport. Compared to the CPE-based model, the network-based model can scale to support a larger number of VPN networks. In addition, using the network-based model, providers can offer more consistent QoS through traffic aggregation.


Figure 90 QoS for VPN models

• Hybrid model: The hybrid model is a combination of the CPE-based and network-based VPN models. The CPE-based model is used where tighter security control is required (at a central site, for example), and the network-based model is used when VPN services need to be offered to a large number of smaller sites and remote users.

In addition to the Layer 3 VPN models discussed above, there are also Layer 2 VPN solutions available. Most commonly seen are Frame Relay, ATM, Transparent LAN services, and Ethernet VLAN services. The following diagram summarizes commonly provided Layer 2 and Layer 3 VPN services.
Figure 91 Layer 2 and Layer 3 VPN services

Although a Layer 2 VPN tends to be easier to configure, it does not scale to a large network distributed over a wide geographic area. A Layer 2 VPN provides good security and QoS and is a good choice for metro-area VPN services.


Appendix B MCS deployment checklist

The following tables provide lists of items required for each phase of the MCS system deployment. Nortel recommendations are provided for each item. The deployment phases include the following:
• customer readiness assessment
• MCS service planning
• general network planning and engineering
• MCS network planning and engineering

Table 36 Customer readiness assessment phase

Item: Technical user support mechanisms are in place.
Recommendations:
• Ensure that the customer technical and service support teams, including TDM and data support, are trained.
• Ensure that the problem escalation and resolution procedures are established before deployment.
• Ensure that quick start training classes and user guides are available for MCS end users.

Item: MCS services quality expectations are defined.
Recommendations: Establish quality and service targets for MCS voice, video, and interactive applications such as Instant Messaging. Use ITU-T G.1010 as the basis for developing such service target requirements for MCS services:
• PSTN quality: The overall network must have less than 70 ms one-way delay and 0% packet loss, mouth to ear.
• Business quality: The overall network must have less than 150 ms one-way delay and 1% packet loss, mouth to ear.


• Acceptable quality: The overall network must have less than 250 ms one-way delay and less than 2% packet loss, mouth to ear.

Item: Network health is checked.
Recommendations: A network health check should be performed by Nortel or channel partner engineers before the deployment. The check is to ensure that network issues that negatively impact the MCS deployment are identified. Ensure that up-to-date and detailed physical and logical network diagrams are assembled.
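The three service targets above can be expressed as a simple classifier. This is an illustrative Python sketch; the tier names match the checklist, but the exact boundary handling (strict versus inclusive comparisons) is an assumption, not something G.1010 prescribes.

```python
def quality_tier(delay_ms, loss_pct):
    """Classify measured one-way mouth-to-ear delay (ms) and packet
    loss (%) against the MCS service targets (based on ITU-T G.1010)."""
    if delay_ms < 70 and loss_pct == 0:
        return "PSTN quality"
    if delay_ms < 150 and loss_pct <= 1:
        return "Business quality"
    if delay_ms < 250 and loss_pct < 2:
        return "Acceptable quality"
    return "Below target"

print(quality_tier(40, 0.0))    # PSTN quality
print(quality_tier(120, 0.5))   # Business quality
print(quality_tier(200, 1.5))   # Acceptable quality
```

A helper like this can be run against measured network statistics during the readiness assessment to see which tier each path currently achieves.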

Table 37 MCS services planning phase

Item: Security policies and procedures for MCS deployment are addressed.
Recommendations: The existing security policy should be adapted for MCS services. The adaptation should include the acceptable use policy for users and network policies for servers, local/remote clients and gateways, firewalls, and filters for Layer 3 devices. Ensure that an audit of all security devices is analyzed for violations and compliance. Establish procedures for regular network scanning and analysis to ensure that only needed UDP/TCP ports are open for the permitted services.

Item: QoS policies for MCS services are well defined for the entire network.
Recommendations: An end-to-end QoS policy should be established for all applications, including the MCS system. The QoS policy should include the following:
• Enforce consistent Layer 3-to-Layer 2 mappings over various Layer 2 transports.
• Follow the Nortel Networks Service Classes guidelines established for various application traffic types.
• Specify the required SLAs (Service Level Agreements) for portions of the network managed by third parties.
• Establish procedures for regular QoS and SLA monitoring and validation on the network infrastructure.

The following is recommended for DiffServ tagging:
• EF DSCP code and 802.1p 6 for VoIP signaling and media traffic
• AF4 DSCP code and 802.1p 5 for video traffic


Table 38 General network planning and engineering phase

Item: LAN readiness is checked.
Recommendations:
• Use only CAT-5 or better grade cables for Ethernet connections. The cable length should not exceed 100 meters.
• Use only Layer 2/Layer 3 switches for the LAN.
• Use only Layer 2/Layer 3 switches that support a minimum of four queues, among which at least one queue is a priority queue.
• Enable autonegotiation on both the switch ports and the connecting devices to eliminate packet losses as a result of duplex mismatches.
• Place DSCP EF and 802.1p 6 tagged packets in the priority queue.

It is good practice to limit one-way delay for campuses and branch offices to no more than 5 ms so that packet loss is very close to 0%, if not 0%.

Item: WAN readiness is checked.
Recommendations: The following are good practice:
• Use WAN IO modules that support a minimum of four queues, among which one queue is a priority queue.
• Place DSCP EF and 802.1p 6 tagged packets in the priority queue.
• Provision sufficient bandwidth to cover all traffic types. Ensure that, on average, no more than 30% of bandwidth is used on the WAN links.
• Establish at least two (primary/backup or loadshare) physical/logical paths for network traffic.
• Use Layer 2 link bundles such as MLPPP, Multiple VCs, MLT, and Layer 1-Layer 3 fast failover mechanisms (SONET, RPR, MPLS, ECMP) where appropriate to enhance network stability.
• Use lower layer error detection mechanisms for quick Layer 2/Layer 3 failover. Examples of such mechanisms include ATM F5 OAM loopback cell, Frame Relay A-bit, and Ethernet Far End Fault Indication (FEFI).
• Fix and monitor physical layer timing and bit error rates.
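The 30% average-utilization guideline above reduces to a one-line planning check. This is a hypothetical sketch; the function name and the traffic figures in the example are invented for illustration.

```python
def wan_link_ok(offered_kbps, link_kbps, max_avg_util=0.30):
    # The 0.30 ceiling reflects the guideline that, on average, no more
    # than 30% of WAN link bandwidth should be in use. It is a planning
    # rule of thumb, not a hard protocol limit.
    return offered_kbps <= link_kbps * max_avg_util

# Hypothetical traffic mix on a 1536 kbps (T1) link.
voice_kbps, data_kbps = 160, 250
print(wan_link_ok(voice_kbps + data_kbps, 1536))   # True: 410 <= 460.8
```

Running the check over every planned WAN link quickly flags links that need more bandwidth before MCS traffic is added.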


• Limit delay, packet loss, and jitter for all WAN links, especially on WAN aggregation points and on WAN links slower than 384 Kbps, based on the established end-to-end impairment budgets.

Item: IP network readiness is checked.
Recommendations: The following are good practice:
• Use a redundant network architecture to enhance network resiliency and availability and to minimize Layer 3 convergence. The particular points to follow are as follows:
— Use VRRP at the dual-homed network edges to provide default router redundancy to support real-time and critical network services.
— Use Nortel Multi-link Trunking (MLT), Distributed MLT (DMLT), and Split MLT (SMLT) to increase network bandwidth and stability.
— Enable IP Equal Cost Multi-Path (ECMP) for loadsharing and quick failover.

• Structure IP address spaces so that they meet the needs of new services, support route summarization, and facilitate network management.
• Use OSPF as the routing protocol of choice. The particular points to follow are as follows:
— Use more than one ABR to provide redundant paths.
— Use Area and External route summaries to reduce routing table size and increase routing stability.
— Use Stub Areas and NSSAs where applicable to reduce routing table size and increase routing stability.
— Limit the number of areas supported by a switch to 3.
— Limit the number of adjacencies supported by the switch to what is recommended by the vendor.
— Limit convergence time to a few seconds.

• Use RIP only in a very small network that has fewer than 3 hops. When RIP is used, follow these guidelines:
— Triggered update must be enabled.
— RIPv2 is recommended; RIPv1 is not recommended.


Table 39 MCS network planning and engineering phase

Item: MCS network readiness is checked.
Recommendations: The following are good practice:
• Use the MCS logical hierarchy to design the network.
• Provide at least two network paths between any pair of MCS components such as servers, gateways, and clients.
• Place MCS servers, gateways, and clients in locations with the following attributes:
— High availability
— Sufficient link speed and bandwidth
— Low delay and packet loss rate
— Minimum impact related to routing changes
— Security (logical and physical)

• Shared media such as Ethernet hubs are not recommended for connecting MCS servers and clients. Because a wireless LAN is a shared medium, minimize VoIP traffic on any given node.
• Establish port-based Layer 2/Layer 3 filters to tag the traffic from servers for Session Managers, IPCMs, the Media Gateway 3200, and Media Application Servers.
• Allocate IP address space to MCS components that meets current and future needs.
• Place standalone IP Phones in a separate VLAN when feasible.
• Place MCS server and gateway components in separate OSPF (Stub or NSSA) areas when feasible.

Item: VPN readiness is checked.
Recommendations: The following are good practice:
• Use a Layer 2/Layer 3 VPN such as VC, MPLS, OE, or an IPsec tunnel to maintain a single IP address space over geographically distributed locations and avoid the use of NAT devices. With the VPN, Border Control Points are not required in the media path between two clients in the same address space. This reduces the delay and cost of communication between any two endpoints.
• Use a Layer 2 VPN such as VLAN over OE/ATM/MPLS to support geographic redundancy of the MCS server farm.


• Use IPsec tunnels to support remote sites and mobile users.

Item: MCS network security is checked.
Recommendations: The following are good practice:
• Open the pinholes needed for MCS communications on all firewall devices.
• Configure firewall/NAT timers to be at least twice the MCS Ping/Keep Alive timers used on the Session Manager servers and IPCMs.
• Close all unused TCP and UDP ports on the servers and other security devices.
• Enhance the protection of the MCS server farm by implementing traffic filters on the Layer 2 or Layer 3 devices fronting the server/gateway farm so that only traffic from known sources/destinations is passed to the servers.
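The firewall/NAT timer guideline above amounts to a simple inequality. The sketch below is illustrative; the function and parameter names, and the example timer values, are assumptions, not MCS configuration keywords.

```python
def nat_timer_ok(nat_idle_timeout_s, keepalive_interval_s):
    # A NAT/firewall binding should survive at least two MCS
    # Ping/Keep Alive intervals so that a single lost keepalive does
    # not let the binding expire mid-session.
    return nat_idle_timeout_s >= 2 * keepalive_interval_s

print(nat_timer_ok(nat_idle_timeout_s=120, keepalive_interval_s=30))   # True
print(nat_timer_ok(nat_idle_timeout_s=45, keepalive_interval_s=30))    # False
```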


Appendix C DSCP code values

The following table provides the binary, hex and decimal values for the corresponding DSCP codes:

Table 40 Type of Service (ToS) bits for the DSCP codes

DSCP     ToS Bits   Hex   Decimal
DE, CS0  00000000   00    0
EF       10111000   B8    184
AF41     10001000   88    136
AF42     10010000   90    144
AF43     10011000   98    152
AF33     01111000   78    120
AF32     01110000   70    112
AF31     01101000   68    104
AF23     01011000   58    88
AF22     01010000   50    80
AF21     01001000   48    72
AF13     00111000   38    56
AF12     00110000   30    48
AF11     00101000   28    40
CS7      11100000   E0    224
CS6      11000000   C0    192
CS5      10100000   A0    160
CS4      10000000   80    128
CS3      01100000   60    96
CS2      01000000   40    64
CS1      00100000   20    32
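The values in Table 40 follow from the fact that the six DSCP bits occupy the top of the eight-bit ToS field, so each byte value is the DSCP code point shifted left by two bits. A short Python spot check (the dictionary holds the standard DiffServ code-point bit patterns):

```python
# Standard six-bit DSCP code points.
dscp = {
    "CS0": 0b000000, "EF": 0b101110,
    "AF41": 0b100010, "AF42": 0b100100, "AF43": 0b100110,
    "AF31": 0b011010, "AF32": 0b011100, "AF33": 0b011110,
    "AF21": 0b010010, "AF22": 0b010100, "AF23": 0b010110,
    "AF11": 0b001010, "AF12": 0b001100, "AF13": 0b001110,
    "CS1": 0b001000, "CS2": 0b010000, "CS3": 0b011000,
    "CS4": 0b100000, "CS5": 0b101000, "CS6": 0b110000, "CS7": 0b111000,
}

def tos_byte(code):
    # Shift the six DSCP bits into the top of the eight-bit ToS field;
    # the result is the decimal/hex value listed in Table 40.
    return dscp[code] << 2

print(hex(tos_byte("EF")), tos_byte("EF"))       # 0xb8 184
print(hex(tos_byte("AF41")), tos_byte("AF41"))   # 0x88 136
```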


Appendix D Customer specific information datasheets

The following tables provide datasheets for collecting the customer-specific data required for network deployment. Photocopy a table for each component required for the deployment.

Network deployment configuration: network overview

MCS 5100 functional module configuration datasheets

Customer-specific information
• Customer name:
• Date:

Network Overview
• IP addresses
• Subnet mask
• IP address range
• Default gateway 1
• Default gateway 2

Note: Network elements: gateways/routers, private interface of media blades of Border Control Point, controller card of Border Control Point, IBM eServer x306m servers, PC for management service, MRV terminal server.


Configuration datasheet for QoS Layer 2 switch such as BPS 2000, MRV terminal server, System Management Console (PC), SMTP, and NTP sources

Designations (record a value and comments for each):

QoS Layer 2 switch such as BPS
• Stack IP address (if BPS is cascaded)
• Host name
• IP address
• Subnet mask
• Default gateway

MRV terminal server
• IP address
• Default gateway IP address
• Subnet mask
• MRV terminal server LOM logical port and physical ports (serial A)
• MRV terminal server console logical and physical port (serial B)

System Management Console (PC)
• Host name
• IP address
• Default gateway IP address
• Subnet mask

SMTP
• Administrator e-mail
• SMTP server IP address

NTP sources
• NTP source IP address
• NTP source IP address

Configuration datasheet for server of the System Manager

Server (one datasheet for each):
• Primary System Manager server
• Secondary System Manager server

Designations (record a value and comments for each):
• Host name


• Service logical IP address
• Machine logical IP address
• Primary Ethernet interface IP address (e.g. dmfe0/bge0)
• Layer 2 switch name, VLAN ID, port number for primary Ethernet interface (e.g. dmfe0/bge0)
• Secondary Ethernet interface IP address (e.g. dmfe1/bge1)
• Layer 2 switch name, VLAN ID, port number for secondary Ethernet interface (e.g. dmfe1/bge1)
• MRV terminal server LOM logical port and physical ports (serial A)
• MRV terminal server console logical and physical port (serial B)
• Default gateway IP address
• Subnet mask
• Net management IP address
• Net management default gateway IP address
• Net management subnet mask

Configuration datasheet for server of the Accounting Manager

Server (one datasheet for each):
• Primary Accounting Server
• Secondary Accounting Server

Designations (record a value and comments for each):
• Host name
• Service logical IP address
• Machine logical IP address
• Primary Ethernet interface IP address (e.g. dmfe0/bge0)
• Layer 2 switch name, VLAN ID, port number for primary Ethernet interface (e.g. dmfe0/bge0)
• Secondary Ethernet interface IP address (e.g. dmfe1/bge1)


• Layer 2 switch name, VLAN ID, port number for secondary Ethernet interface (e.g. dmfe1/bge1)
• MRV terminal server LOM logical port and physical ports (serial A)
• MRV terminal server console logical and physical port (serial B)
• Default gateway IP address
• Subnet mask
• Net management IP address
• Net management default gateway IP address
• Net management subnet mask

Configuration datasheet for server of the Database Manager

Server (one datasheet for each):
• Primary Database Server
• Secondary Database Server

Designations (record a value and comments for each):
• Host name
• Machine logical IP address
• Primary Ethernet interface IP address (e.g. dmfe0/bge0)
• Layer 2 switch name, VLAN ID, port number for primary Ethernet interface (e.g. dmfe0/bge0)
• Secondary Ethernet interface IP address (e.g. dmfe1/bge1)
• Layer 2 switch name, VLAN ID, port number for secondary Ethernet interface (e.g. dmfe1)
• MRV terminal server LOM logical port and physical ports (serial A)
• MRV terminal server console logical and physical port (serial B)
• Default gateway IP address
• Subnet mask
• Net management IP address


• Net management default gateway IP address
• Net management subnet mask

Configuration datasheet for server of the Session Manager

Server (one datasheet for each):
• Primary Session Server
• Secondary Session Server

Designations (record a value and comments for each):
• Host name
• Primary Ethernet interface IP address
• Layer 2 switch name, VLAN ID, port number for primary Ethernet interface
• Secondary Ethernet interface IP address
• Layer 2 switch name, VLAN ID, port number for secondary Ethernet interface
• MRV terminal server LOM logical port and physical ports (serial A)
• MRV terminal server console logical and physical port (serial B)
• Default gateway IP address
• Subnet mask
• Net management IP address
• Net management default gateway IP address
• Net management subnet mask

Configuration datasheet for IPCM and Provisioning server

Server (one datasheet for each):
• Primary IPCM/Provisioning Server
• Secondary IPCM/Provisioning Server

Designations (record a value and comments for each):
• Host name
• Primary Ethernet IP address
• Layer 2 switch name, VLAN ID, port number for primary Ethernet interface
• Secondary Ethernet interface address


• Layer 2 switch name, VLAN ID, port number for secondary Ethernet interface
• MRV terminal server LOM logical port and physical ports (serial A)
• MRV terminal server console logical and physical port (serial B)
• Default gateway IP address
• Subnet mask
• Net management IP address
• Net management default gateway IP address
• Net management subnet mask

Configuration datasheet for the Media Application Server

Designations (record a value and comments for each):
• Host name
• Machine logical IP address
• Layer 2 switch name, VLAN ID, port number for MAS Ethernet connection 1
• Layer 2 switch name, VLAN ID, port number for MAS Ethernet connection 2 (dual connection only)
• Default gateway IP address
• Subnet mask

For information about MAS IP address requirements, consult Media Application Server Planning and Engineering (NN42020-201).

Configuration datasheet for the Media Gateway 3200

Designations (record a value and comments for each):
• Host name
• Machine logical IP address
• Layer 2 switch name, VLAN ID, port number for Media Gateway 3200 Ethernet connection 1


• Layer 2 switch name, VLAN ID, port number for Media Gateway 3200 Ethernet connection 2 (dual connection only)
• Default gateway IP address
• Subnet mask


Appendix E Presence impact on system capacity

Overview

Presence updates can cause significant network traffic and transaction stress on the Session Manager. Nortel recommends establishing a policy on the maximum number of friends and the maximum number of watchers for each subscriber, and determining who should have automatic presence enabled.

Automatic presence is defined as Idle or On the Phone presence. Automatic presence state changes do not require a conscious effort by the subscriber to modify their state.

Weighted SIP transaction cost

SIP transactions are not identical in their impact on the MCS network. A SIP transaction is defined as a request followed by a response. For example, a REGISTER request followed by a 200 OK response is considered a SIP transaction. Different MCS call and service types comprise one to many SIP transactions.

Some transactions require significantly more processing and bandwidth than others. For example, the cost of a SIP-SIP call is more than the cost of a SIP Instant Message (IM). In order to keep numbers and calculations consistent, a baseline is established from which all other transactions derive their weight.

To describe the performance of the MCS system, Nortel introduced a measurement called the Weighted SIP Transaction Cost; each transaction type is assigned a weight relative to the cost of an unauthenticated IM on both the Session Manager and the Database Manager.

Several transaction types are actually the aggregation of a number of SIP messages required to complete a basic function such as a SIP-SIP call, gateway call, or forking. For example, the NOTIFY transaction type is actually a notification followed by an acknowledgement.


Unauthenticated IMs have a weight of 1.0, while all other transaction types have weights either higher or lower than 1.0. When a system is rated for a certain number of transactions, it is assumed that all transactions are expressed in terms of IMs.

To determine the maximum number of nonIM transactions the system can support, the rated capacity must be divided by the Weighted SIP Transaction Cost of the transaction in question. If a system is rated at 100 000 transactions in a busy hour and the maximum number of SIP-SIP calls needs to be determined, then 100 000 is divided by the weight of a SIP-SIP call.

Since SIP-SIP calls have a higher weight than IMs, the maximum rated capacity measured in SIP-SIP calls is less than 100 000. On the other hand, SIP PING messages require very few resources to process, so the maximum rated capacity expressed in terms of SIP PING messages would be much greater than 100 000.

Presence state update

All SIP transactions place some load on the Session Manager, and presence updates are no exception. A number of events can generate presence updates that are sent to the Session Manager. The Session Manager then sends presence state change notifications to all clients that are subscribed to watch that particular subscriber.

The following table shows the states that are available to each client type.

Mapping of presence states to MCS clients

The available states are Unavailable offline, Connected available, Active, Connected inactive, and Active on the phone.
• Multimedia PC Client, Multimedia Web Client, and IBM Lotus Notes Client: support all five states.
• IP Phone 2002, IP Phone 2004, IP Phone 2007, IP Phone 1120E, and IP Phone 1140E: support three of the five states.

To extend the usefulness of client states, subscribers can create custom states based on either the Connected state or the Unavailable state by appending a note to either one that provides additional information.


For example, a person going on vacation might change their state to (Unavailable On Vacation) in order to provide an explanation for the unavailable status.

In addition to state changes being reported as a result of connecting to and disconnecting from the system and changing state manually, subscribers can also configure their clients to report additional state changes automatically. The Multimedia PC Client and Multimedia Web Client can be configured to report a state change when a subscriber has been inactive for a specified period of time. This presence state is called Idle automatic presence. All clients can report when a subscriber is on or off the phone. This presence state is called On the Phone automatic presence.

To enable the automatic presence feature in MCS clients, the feature must first be enabled in the subscriber’s service package. When a presence update occurs, all subscribers who are subscribed to watch the particular SIP User ID are notified of the state change. In the course of a day, an active phone user can generate a dozen or more state changes (two for each call) and, as shown later, these transactions can use a significant amount of resources.

Building call pattern models

It is possible to estimate the load placed on a Session Manager by determining the quantity of each individual transaction type. It is good practice to build models with different call patterns when engineering a system to establish the type of transaction mix the system is capable of supporting. One use of such a model can be to find out the number of subscribers that can be supported with automatic presence activated.

It is possible to find a reasonable answer by using SIP call flows and Weighted SIP Transaction Costs along with different traffic mix estimates. An output of one possible rough model is illustrated in the following figure.


Figure 92 Example call pattern model

ATTENTION All other factors being constant, the maximum number of subscribers supported with automatic presence activated decreases with a corresponding increase in the maximum number of friends for each subscriber.

All users placed in the address book of a particular subscriber can be designated as Friends, up to the limit provisioned in the service package, through a simple check box. The presence state of all address book entries designated as Friends is continuously monitored by the system.

To create the graph shown in the above figure, the following formulas are used. This and other subsequent formulas in this section are derived from the SIP Transaction Model.

Calculating cost of NOTIFY transaction Use the following formula to calculate the cost of NOTIFY transaction caused by a presence state change:

TransactionCost_A = Clients x [2 x Calls/hr x (Friends + 1) + StateChange/hr x (Friends + 1)] x NOTIFYCost

This formula calculates the number of NOTIFY transactions and then multiplies it by the corresponding SIP Weighted Transaction Cost. To obtain the number of transactions generated by phone conversations, the number of calls for each subscriber in an hour is multiplied by the number of friends plus one, and then again multiplied by two, since each call generates two state changes. To this result, the number of transactions generated by automatic presence and manual state changes is added. Then the transactions for each subscriber in an hour are multiplied by the number of active clients to find the number of NOTIFY transactions from all active clients. The last step is to multiply the result by the SIP Weighted Transaction Cost of presence state changes.

Note: In the formula, StateChanges/hr represents both automatic presence and manual state changes for each subscriber in an hour.
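The TransactionCost_A formula can be written directly as code. This is a sketch: the NOTIFY weight is passed in as a parameter because the actual Weighted SIP Transaction Cost values are system-specific and are not given here.

```python
def notify_transaction_cost(clients, calls_per_hr, state_changes_per_hr,
                            friends, notify_weight):
    # TransactionCost_A: each call produces two state changes, and each
    # state change is NOTIFYed to all friends plus the subscriber's own
    # client (friends + 1).
    per_client = (2 * calls_per_hr * (friends + 1)
                  + state_changes_per_hr * (friends + 1))
    return clients * per_client * notify_weight

# One client, 5 calls/hr, 2 extra state changes/hr, 16 friends, with a
# placeholder NOTIFY weight of 1.0.
print(notify_transaction_cost(1, 5, 2, 16, 1.0))   # 204.0
```

With a weight of 1.0, each such subscriber alone generates 204 weighted transactions per hour from presence NOTIFYs.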

The Friends lines in the above figure represent the number of transactions generated when subscribers with X friends make 5 calls and two additional state changes an hour. The different lines show what happens when each subscriber has the given number of friends with automatic presence update enabled. In particular, line 2 on the graph shows the number of transactions generated when each subscriber has 16 friends.

The way to read this graph is to locate the rated hourly transaction capacity of the Session Manager on the Y axis. In this case, it is assumed that the capacity is 200 000. Next, draw a horizontal line at that capacity and, from its intersection with the relevant Friends line, drop a vertical line to the X axis. The value on the X axis shows how many clients can be supported when presence is turned on for the given maximum number of friends.

Note that this value does not take into account all other messaging that normally takes place such as client registrations, subscriptions and other services, and is only the first step of developing a rough model.

Continuing the example with 16 friends, it is apparent that the dashed vertical line intersects the X axis close to 4000 subscribers. In this example, this means that a Session Manager rated at 200 000 transactions an hour can support a little over 4000 subscribers if presence is turned on and each subscriber has 16 friends.
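The graph-reading exercise above amounts to inverting the per-client cost. A sketch under the same assumptions (five calls and two extra state changes an hour); the NOTIFY weight of 0.25 used in the example call is purely hypothetical, chosen only to show the mechanics, not a published MCS value.

```python
def max_clients(rated_capacity, calls_per_hr, state_changes_per_hr,
                friends, notify_weight):
    # Invert TransactionCost_A: how many clients fit under the Session
    # Manager's rated hourly capacity when only presence NOTIFY traffic
    # is counted (the first, rough step of the model).
    per_client = ((2 * calls_per_hr + state_changes_per_hr)
                  * (friends + 1) * notify_weight)
    return int(rated_capacity // per_client)

# 200 000 rated transactions/hr, 16 friends, hypothetical weight 0.25.
print(max_clients(200000, 5, 2, 16, 0.25))   # 3921
```

As the text notes, the real answer is lower still once registrations, subscriptions, and other services are added to the model.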

However, it is important to keep in mind that so far only presence-related transactions have been accounted for. In this example, one Session Manager is overloaded with 4000 active subscribers, because all of the other SIP transactions besides presence notifications will drive the system into overload.

The model makes no distinction between a subscriber and a connected client. One subscriber can be logged in on multiple clients at the same time. For instance, a subscriber may choose to have an IP Phone 2004 and a Multimedia PC Client connected at the office and then simultaneously connect from a remote location using the Multimedia Web Client. In this scenario there are three clients active on behalf of just one subscriber. This model also assumes that no authentication messaging takes place. In a live system this is not usually the case. Some transactions will require authentication to occur, since an MCS client’s cached authentication credentials expire every 5 minutes and most clients from other vendors do not implement caching of authentication credentials.

Figure 93 "On the Phone presence update" (page 356) is an example of an MCS-client-to-MCS-client call where both subscribers have automatic presence enabled in their service package and On the Phone presence activated in their client preferences.

Figure 93 On the Phone presence update

When automatic presence is activated with both Idle and On the Phone options, one phone call generates two state changes. The first state change is Active On the Phone and when the call is over the client state changes back to Active Available. Each client requests to watch the state of subscribers on the friends list as well as the state of the subscriber using the client. Each time a state change occurs, SIP REGISTER and NOTIFY messages have to be sent. The REGISTER message is sent to the Session Manager indicating a presence state change and NOTIFY messages are sent to all clients watching the subscriber who just went through a state change.

Even when calls are not made but subscribers change their state manually or because of Idle presence changes, a large number of NOTIFY messages are generated. An illustration of this concept is provided in Figure 94 "Idle presence update" (page 357).


Figure 94 Idle presence update

When Subscriber B wants to watch the state of Subscriber A, Subscriber B must place the SIP User ID of Subscriber A in Subscriber B’s address book and mark the entry as a friend. Usually this action is reciprocated: Subscriber A will also place the SIP User ID of Subscriber B in Subscriber A’s friends list.

Most people have a mutual interest in watching each other. However, there are cases when an individual, such as a senior manager, is of particular interest and a large number of subscribers want to be notified of the manager’s presence state changes. In this case, the number of friends the manager is watching is less than the number of subscribers watching the manager. This is a special case that requires additional consideration and is not discussed in this section.

For the purposes of the current illustration it is assumed that the number of watchers equals the number of friends. If the presence service is enabled in the domain, all subscribers watch their own presence state. This feature enables a subscriber to keep multiple, concurrently registered MCS clients in sync with each other. For example, a person may register from an IP Phone 2004 and concurrently register from the Multimedia PC Client. If the state changes on either registered client, the other client is automatically updated with the new state. When a person goes out for lunch, it is likely that their client will become idle and their state will change to Connected Inactive. When they return from lunch and resume activity, the client will again change to Active Available. Both clients will reflect the most current state.

358 Appendix E Presence impact on system capacity

Calculating cost of NOTIFY and REGISTER transactions

In the previous example, one Session Manager can be overloaded with 4,000 active subscribers. To determine a more realistic number of supported subscribers, additional transaction costs must be considered. REGISTER messages and authentication costs must also be accounted for.

The following formula is used for calculating the cost of REGISTER transactions:

TransactionCost_B = Clients x (2 x Calls/hr + StateChange/hr) x REGISTERCost

The above formula calculates the cost of REGISTER messages sent from active clients to the Session Manager when presence state changes occur.

The following formula is used for calculating the cost of NOTIFY and REGISTER transactions:

TransactionCost_C = TransactionCost_A + TransactionCost_B

The formula calculates the total cost of NOTIFY and REGISTER transactions associated with presence state change when no authentication takes place.
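The two formulas above can be sketched in Python. All numeric inputs below are illustrative placeholders, not figures from this guide, and TransactionCost_A (the NOTIFY cost) is assumed to have been computed earlier:

```python
def register_cost(clients, calls_per_hr, state_changes_per_hr, register_weight):
    """TransactionCost_B: REGISTER messages sent by active clients to the
    Session Manager on presence state changes (2 per call, plus other changes)."""
    return clients * (2 * calls_per_hr + state_changes_per_hr) * register_weight

# Illustrative inputs (NOT values from the engineering guide).
clients = 3000
calls_per_hr = 5            # busy-hour calls per client
state_changes_per_hr = 1    # manual/Idle state changes per client per hour
register_weight = 2.0       # hypothetical weighted cost of one REGISTER

cost_b = register_cost(clients, calls_per_hr, state_changes_per_hr, register_weight)

# TransactionCost_C: total NOTIFY + REGISTER cost, given a previously
# computed NOTIFY cost (TransactionCost_A); hypothetical value here.
cost_a = 50000.0
cost_c = cost_a + cost_b
print(cost_b, cost_c)
```

The helper simply mirrors the formula term by term, so real deployment values can be substituted directly.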

Figure 95 "Call pattern model with cost of NOTIFY and REGISTER transactions" (page 358) shows the resulting graph.

Figure 95 Call pattern model with cost of NOTIFY and REGISTER transactions

An examination of the graph shows that the maximum number of subscribers that can be supported is 3,000, assuming no other messaging takes place and a maximum of only 16 friends for each subscriber.

Calculating the cost of NOTIFY, REGISTER, and authentication transactions

To further improve the accuracy of the model, authentication cost must be added. The following formula is used to calculate the cost of NOTIFY, REGISTER, and authentication transactions.

TransactionCost_D = TransactionCost_A + [Clients x (2 x Calls/hr + StateChange/hr) x authenticationCost]

To calculate the transaction cost of NOTIFY and REGISTER transactions related to presence state changes with authentication, the above formula takes the cost of NOTIFY transactions and adds the cost of authenticated REGISTERs. The cost of authenticated REGISTERs is calculated by multiplying the number of REGISTER transactions by the cost of an authenticated REGISTER transaction.

Note: A simplifying assumption is made that all REGISTER messages are authenticated and the credential caching capability of Nortel clients is not used.
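Under the same simplifying assumption (every REGISTER is authenticated, no credential caching), the authenticated variant can be sketched as follows; all numeric values are illustrative, not the guide's:

```python
def authenticated_cost(cost_a, clients, calls_per_hr, state_changes_per_hr, auth_weight):
    """TransactionCost_D: NOTIFY cost plus the cost of authenticated REGISTERs.
    Assumes every REGISTER is authenticated (no credential caching)."""
    registers = clients * (2 * calls_per_hr + state_changes_per_hr)
    return cost_a + registers * auth_weight

# Illustrative inputs (NOT values from the engineering guide).
cost_d = authenticated_cost(cost_a=50000.0, clients=3000,
                            calls_per_hr=5, state_changes_per_hr=1,
                            auth_weight=3.0)
print(cost_d)
```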

The following figure shows the call pattern model based on the cost of NOTIFY, REGISTER, and authentication transactions.

Figure 96 Call pattern model with cost of NOTIFY, REGISTER, and authentication transactions

The difference between Figure 95 "Call pattern model with cost of NOTIFY and REGISTER transactions" (page 358) and Figure 96 "Call pattern model with cost of NOTIFY, REGISTER, and authentication transactions" (page 359) is not as pronounced as when REGISTER messages were added to the model previously.

It is clear that even if the maximum list of friends is halved from 16 to 8, the system will still be overloaded with presence-related messages alone. There is no room, in this hypothetical situation, for any other messaging such as call setup, teardown, client registrations, or IMs. Therefore, the only viable options for this example are to lower the total number of subscribers or to lower the number of subscribers with automatic presence updates enabled.

If automatic presence updates are limited to key personnel only, for example 15 percent of the subscriber base, system capacity improves dramatically.

Figure 97 "Call pattern model with 15 percent presence penetration" (page 360) illustrates that the same system with 4,000 subscribers and 15 percent presence penetration consumes only a quarter of the total transaction budget per active Session Manager. This assumes a Weighted SIP Transaction limit of 200 000 per Session Manager and 16 friends in the list of each subscriber with automatic presence enabled.

Figure 97 Call pattern model with 15 percent presence penetration

Additional considerations

During the deployment of an MCS system, it is possible to split automatic presence into its two component features, namely Idle and On the Phone presence.

By specifying different penetration rates for each one, greater flexibility can be achieved. For example, most users today expect the Idle presence found in popular IM clients, so a penetration of 100% can be used for Idle presence. On the other hand, On the Phone presence can be restricted to a much smaller group of subscribers.

In terms of feature impact, On the Phone presence updates are expected to occur more frequently than Idle presence notifications.

Appendix F Constructing a reference model for system capacity planning

The Multimedia Communications Server (MCS) system contains many logical and physical elements which can be hosted on a number of different hardware platforms. A particular deployment of an MCS system may be used to offer a broad set of services and can span multiple physical locations. For these reasons, performing detailed and precise system capacity planning is difficult and should be carried out by professionals with a good understanding of the complete system. The discussion and illustrations contained in this section are not intended to replace a rigorous planning process, but are provided as a starting point and a way to obtain an estimate of the system capacity.

SIP transaction overview

This engineering guide provides transaction weights for the most common MCS transactions. These weights are intended for calculating the load imposed on a system in certain scenarios. Even though it is not possible to predict all calling patterns that can arise when a full system is deployed, it is important to look at a subset of scenarios that will most likely use the majority of system capacity. These usage scenarios are composed of discrete transactions, which are listed in the following table.

SIP Transaction types

1. Instant Message (IM): Basic IM from SIP client to SIP client.
2. Basic SIP-to-SIP call: Basic call from SIP client to SIP client.
3. Client registration: Uses REGISTER messages. This type of transaction accounts for a client registration with the Session Manager. The cost of this type of transaction is larger than the cost of presence state change REGISTERs.
4. Presence state change: Uses REGISTER messages. This type of transaction does not include corresponding NOTIFY messages and requires less processing than the client registration REGISTER message.
5. SIP PING: A PING message in SIP (not to be confused with ICMP PING).
6. Forking: One subscriber has multiple clients registered when a call or IM comes in. All registered clients ring or receive IMs. Forking should not be confused with Personal Agent-based sequential ringing or simultaneous ringing, where translation logic must be traversed for each address.
7. Presence subscription: Uses SUBSCRIBE messages. This type of transaction does not include the corresponding NOTIFY messages that are sent from the Session Manager to the client.
8. Presence notification: Uses NOTIFY messages. This type of transaction is usually sent by the Session Manager.
9. Address book subscription: Uses SUBSCRIBE messages. This type of transaction subscribes to the Address Book service. After subscribing to the Address Book service, clients proceed to download the address book from the Provisioning Manager.
10. Service package subscription: Uses SUBSCRIBE messages. This type of transaction accounts for service package subscriptions. After subscribing to their service package, clients proceed to download the service package from the Provisioning Manager.
11. Gateway calls (E.164 translations): Calls that require the Session Manager to find a route using translations for the requested destination address.

These discrete transactions can be used to make a rough spreadsheet model with which numerous what-if scenarios can be examined. These usage scenarios can shed some light on what type, level and mix of transactions can be supported by a given system. The Weighted SIP Transaction model provided earlier in this guide in the section about the Session Manager contains maximum transaction ratings for each platform type. A rough spreadsheet model can be used to calculate the number of SIP Weighted transactions generated by a given usage scenario and this can be compared to the maximum rated capacity.
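As a sketch of such a spreadsheet model, the following Python snippet sums busy-hour transaction counts against per-type weights and compares the result to a rated capacity. The weight values, counts, and capacity below are placeholders, not the official platform ratings from this guide:

```python
# Illustrative SIP transaction weights (placeholders, NOT official ratings).
ILLUSTRATIVE_WEIGHTS = {
    "instant_message": 1.0,
    "sip_sip_call": 3.0,
    "client_registration": 2.0,
}

def weighted_load(counts, weights):
    """Sum busy-hour transaction counts multiplied by their SIP weights."""
    return sum(counts[name] * weights[name] for name in counts)

# Hypothetical busy-hour transaction counts for one Session Manager.
busy_hour_counts = {"instant_message": 15000,
                    "sip_sip_call": 10500,
                    "client_registration": 1500}

load = weighted_load(busy_hour_counts, ILLUSTRATIVE_WEIGHTS)
rated_capacity = 200000  # hypothetical Weighted SIP Transaction rating
print(load, load <= rated_capacity)
```

Substituting the platform's actual transaction weights and projected counts turns this into the what-if model described above.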

In addition to understanding SIP transaction weights and the different types of transactions, it is also important to obtain information about the subscriber base that will be actively using the system. For the purposes of illustration, the subscriber base inputs are listed with sample values in the following table.

Sample values for subscriber base inputs

A. Total number of active subscribers: 3,000
B. Active Multimedia PC Clients: 1,000
C. Active Multimedia Web Clients: 700
D. Active IP Phone clients: 1,000
E. Active nonNortel clients: 300
F. Concurrently registered clients per subscriber: 1
G. Busy-hour call origination rate per subscriber: 5
H. Registrations within a day (24 hours) per client: 2
I. Percentage of conference calls: 5%
J. Percentage of SIP-SIP calls: 70%
L. Percentage of PSTN/BPX (E.164) calls: 15%
M. Average number of parties in an Ad Hoc audio conference call: 3
N. Average number of parties in a Meet Me audio conference call: 6
O. Number of friends per subscriber: 20
P. Instant Messages sent per subscriber: 5
Q. Maximum engineered capacity: 60%
R. Manual presence changes per client per day: 2
S. Number of times that a PC running an MCS multimedia client goes idle during busy hour: 2
T. Automatic presence On the Phone penetration: 15%
U. Automatic presence Idle penetration: 15%

SIP transaction cost calculation

With the information presented in the above table, it is now possible to calculate the Session Manager transaction costs for the most common cases of system usage. All calculations are based upon everything occurring during the busy hour. The calculations are presented in the form of formulas that use reference letters from the above table and are followed by explanations.

Instant Messages

To find the number of transactions generated by Instant Messages, the number of subscribers is multiplied by the number of IMs sent by each subscriber, as shown in the following formula:

TransactionsFromIM = A x P

Note: The number of subscribers is assumed to equal the number of subscribers with clients that support convenient sending of IMs. This may not always be true since some clients do not support sending IMs and it is not very convenient to compose IMs on IP Phones with a numeric keypad, even though they support sending and receiving IMs. Subscribers who only have a desk phone are not likely to generate a large number of IMs.
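Using the sample values from the subscriber base table (A = 3,000 subscribers, P = 5 IMs per subscriber) and the example IM weight of 1.095, the calculation can be sketched as:

```python
A = 3000   # total number of active subscribers
P = 5      # Instant Messages sent per subscriber in the busy hour

transactions_from_im = A * P            # 15,000 IM transactions
weighted_cost = transactions_from_im * 1.095  # example IM transaction weight

print(transactions_from_im)  # 15000
```

At that weight, the 15,000 transactions correspond to the 16,425 weighted transactions shown in the example table later in this appendix.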

Basic SIP-SIP calls

To find the number of transactions generated by SIP-SIP calls, the number of subscribers is multiplied by the subscriber call rate in an hour and then by the percentage of all calls that are SIP-SIP, as shown in the following formula:

TransactionsFromSIPCalls = A x G x J

Note: Basic SIP-SIP calls are defined as using SIP addressing such as [email protected]. Calls using E.164 addressing such as 123-456-7891 have a different cost associated with them and should be calculated separately.
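With the sample values (A = 3,000, G = 5 calls per busy hour, J = 70% SIP-SIP) and the example weight of 3.194, the calculation can be sketched as:

```python
A = 3000   # total active subscribers
G = 5      # busy-hour call originations per subscriber
J = 70     # percent of calls that are SIP-SIP

transactions_from_sip_calls = A * G * J // 100  # 10,500 SIP-SIP calls
weighted_cost = transactions_from_sip_calls * 3.194  # example call weight

print(transactions_from_sip_calls)  # 10500
```

At that weight, the 10,500 calls correspond to the 33,537 weighted transactions shown in the example table later in this appendix.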

Client registrations

To find the number of transactions generated by client registrations during a busy hour, the number of subscribers is multiplied by the number of concurrently registered clients for a subscriber and by the number of times a client registers in a 24 hour period. The result is divided by 4 to account for the bursty nature of registrations:

TransactionsFromClientRegistrations = (A x F x H) / 4

Note: It is important to note that both registration and de-registration use the same SIP transaction type, namely REGISTER. To register with the system, a client sends a registration message with an expiration value greater than zero. To de-register, a client sends a registration message with an expiration value of zero. If all registrations were spaced out equally over a 24 hour period, then the equation above would divide by 24 instead of 4. Since the nature of registrations is bursty and not uniform, it is important not to underestimate the number of registrations in a busy hour. Most registrations and de-registrations will probably take place around the start and end of a working day. Dividing by 4 in the equation above gives a much closer approximation of the number of registration transactions that will take place during a busy hour. If more precise data is available from analyzing actual system usage, it can be substituted into the equation to get a more precise estimate.
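With the sample values (A = 3,000, F = 1, H = 2) and the example registration weight of 2.15, the busy-hour registration count can be sketched as:

```python
A = 3000  # total active subscribers
F = 1     # concurrently registered clients per subscriber
H = 2     # registrations per client per 24 hours

# Divide by 4, not 24, to account for the bursty nature of registrations.
transactions_from_registrations = (A * F * H) // 4
weighted_cost = transactions_from_registrations * 2.15  # example weight

print(transactions_from_registrations)  # 1500
```

At that weight, the 1,500 registrations correspond to the 3,225 weighted transactions shown in the example table later in this appendix.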

Presence-state-change generated REGISTER transactions

To find the number of transactions generated by presence state changes, use the following formula:

TransactionsFromPresenceREGISTER = (A x F x R / 24) + (A x F x G x T x 2) + ((B + C) x S x U x 2)

The formula for calculating the number of REGISTER transactions from presence state changes is composed of three parts:

1. transactions from manual state changes per day per client
2. transactions from automatic presence On the Phone changes
3. transactions from automatic presence Idle changes

The first section takes the number of subscribers and multiplies it by the number of concurrently registered clients per subscriber. Then the result is multiplied by the number of manual presence state changes per day per client and divided by 24 to arrive at the number of transactions per hour.

The second section takes the number of subscribers and multiplies it by the number of concurrently registered clients per subscriber then by the total call rate per busy hour and automatic presence On the phone penetration. Note that this calculation only applies to MCS clients because automatic presence is only supported on Nortel clients. The last step is to multiply by 2 since each phone call, with automatic presence enabled, generates two state changes.

The third section first adds the Multimedia PC Clients and Multimedia Web Clients because they are the only Nortel clients that support Idle presence. The result is multiplied by the number of times that a PC running an MCS client goes idle during busy hour and then by automatic presence Idle penetration. The last step is to multiply by 2 since each state change to Idle will usually be accompanied by a corresponding state change to Active Available.

Nortel Multimedia Communication Server 5100 MCS Planning and Engineering NN42020-200 04.02 Standard Release 4.0 8 October 2009 Copyright © 2007-2009, Nortel Networks

. 368 Appendix F Constructing a reference model for system capacity planning

All three sections are added together to arrive at the final number of REGISTER transactions from presence state changes.

In some situations presence state changes from connecting and disconnecting from the network can also add a significant enough number of transactions that it will need to be added as a fourth component of the formula.
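Following the three-part description above with the sample subscriber base values, the REGISTER count can be sketched in Python. This is a literal reading of the prose rather than a worked example reproduced from the guide, so treat the total as indicative:

```python
A, F, G = 3000, 1, 5    # subscribers, clients per subscriber, busy-hour calls
B, C = 1000, 700        # Multimedia PC and Web Clients (support Idle presence)
R, S = 2, 2             # manual changes/day/client; PC idle events in busy hour
T, U = 0.15, 0.15       # On the Phone and Idle presence penetration

manual   = A * F * R / 24        # manual state changes, spread over 24 hours
on_phone = A * F * G * T * 2     # 2 state changes per call, On the Phone users
idle     = (B + C) * S * U * 2   # Idle + back-to-Available, PC/Web clients only

presence_registers = manual + on_phone + idle
print(presence_registers)
```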

Presence subscription transactions

To find the number of transactions generated by presence subscription, use the following formula:

TransactionsFromPresenceSubscription = (B + C) x O / 4

Presence subscription transactions take place at the same time as client registrations. Both the Multimedia PC Client and the Multimedia Web Client subscribe for presence information automatically when they first register with the system. IP Phones, on the other hand, subscribe to the presence information of one friend at a time and only when presence information is requested by the subscriber. It is much more convenient to look at the presence state information of friends on a computer screen than on the small screen of a desk phone. Therefore, it is much more likely that subscribers will choose to use the Multimedia PC Client or the Multimedia Web Client to obtain presence information for their friends.

The number of presence subscriptions is equal to the number of Multimedia clients registered. If all registrations and the corresponding presence subscriptions were spaced out equally over a 24 hour period, then the equation above would divide by 24 instead of 4. Since the nature of registrations and presence subscriptions is bursty and not uniform, it is important not to underestimate the number of registrations in a busy hour. Most registrations and de-registrations will probably take place around the start and end of a working day. Dividing by 4 in the equation above gives a much closer approximation of the number of presence subscription transactions that will take place during a busy hour. If more precise data is available from analyzing actual system usage, it can be substituted into the equation to get a more precise estimate. Finally, the result is multiplied by the number of friends per subscriber.
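The description above equates subscriptions to the number of registered Multimedia clients, divided by 4 for burstiness and multiplied by the friends count. The sketch below follows that reading; it reproduces the sample totals, but it is an interpretation of the prose rather than a formula printed in the guide:

```python
B, C = 1000, 700  # active Multimedia PC and Web Clients
O = 20            # friends per subscriber

# Multimedia clients arriving in the busy hour (/4 for burstiness),
# each subscribing to O friends.
presence_subscriptions = (B + C) // 4 * O
print(presence_subscriptions)  # 8500
```

At the example weight of 1.237, the 8,500 subscriptions give roughly the 10,515 weighted transactions shown in the example table (8,500 x 1.237 = 10,514.5).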

Presence notification transactions

To find the number of transactions generated by presence notification, use the following formula:

TransactionsFromPresenceNOTIFY = (A x F x (O + 1) x R / 24) + (A x F x G x T x (O + 1) x 2) + ((B + C) x S x U x (O + 1) x 2)

Calculating the number of presence NOTIFY transactions is not a trivial exercise, as shown by the size of the formula. The calculation comprises the same three components as the presence state change REGISTER transaction calculation. The similarity exists because NOTIFY messages are sent after a client sends in a presence state change REGISTER message. The three sections of the formula are listed below; each part is added to the others to form the complete formula:

1. transactions from manual state changes per day per client
2. transactions from automatic presence On the Phone changes
3. transactions from automatic presence Idle changes

The first section takes the number of subscribers and multiplies it by the number of concurrently registered clients per subscriber. One is added to the number of friends per subscriber to account for the fact that each client subscribes to its own state. The result is then multiplied by the number of manual presence state changes per day per client and divided by 24 to arrive at the number of transactions in an hour.

The second section takes the number of subscribers and multiplies it by the number of concurrently registered clients per subscriber, then by the total call rate per busy hour and the automatic presence On the Phone penetration. One is added to the number of friends per subscriber to account for the fact that each client subscribes to its own state, and the result is multiplied by 2 because most of the time a client that changes state to On the Phone will change back to the original state once the phone call is completed.

The third section first adds the Multimedia PC Clients and Multimedia Web Clients because they are the only Nortel clients that support Idle automatic presence. The result is multiplied by the number of times a PC goes idle during the busy hour and then by the automatic presence Idle penetration. One is added to the number of friends per subscriber to account for the fact that each client subscribes to its own state, and the result is multiplied by 2 because most of the time a client that changes state to Idle will change back to Active Available once activity resumes.
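The three sections can be sketched with the sample subscriber base values. This follows the prose literally; the guide's example table implies a slightly different total, so treat the result as indicative:

```python
A, F, G = 3000, 1, 5    # subscribers, clients per subscriber, busy-hour calls
B, C = 1000, 700        # Multimedia PC and Web Clients
O = 20                  # friends per subscriber
R, S = 2, 2             # manual changes/day/client; PC idle events in busy hour
T, U = 0.15, 0.15       # On the Phone and Idle presence penetration

watchers = O + 1        # each client also watches its own state

manual   = A * F * watchers * R / 24
on_phone = A * F * G * T * watchers * 2
idle     = (B + C) * S * U * watchers * 2

presence_notifies = manual + on_phone + idle
print(presence_notifies)
```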

Address book transactions

To find the number of address book transactions generated, use the following formula:

TransactionsFromAddressBook = TransactionsFromClientRegistrations

Since Address Book subscriptions almost always occur during client registration, the number of Address Book subscription transactions is approximately equal to the number of Client Registration transactions. Note that address book subscription transactions only apply to MCS clients.

Service package subscription transactions

To find the number of transactions generated by service package subscription, use the following formula:

TransactionsFromServicePackage = TransactionsFromClientRegistrations

Since service package subscriptions almost always occur during client registration the number of service package subscription transactions is approximately equal to the number of client registration transactions. Note that service package subscription transactions only apply to MCS clients.
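Because both address book and service package subscriptions piggyback on client registration, their counts simply mirror the registration count from the earlier calculation:

```python
A, F, H = 3000, 1, 2  # subscribers, clients per subscriber, registrations/day

# Busy-hour client registrations (/4 for burstiness), as calculated earlier.
client_registrations = (A * F * H) // 4

# Both subscription types occur once per registration.
address_book_subscriptions = client_registrations
service_package_subscriptions = client_registrations

print(address_book_subscriptions)  # 1500
```

At the example weight of 1.503 each, the 1,500 subscriptions give roughly the 2,255 weighted transactions per row in the example table (1,500 x 1.503 = 2,254.5).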

Gateway call transactions

To find the number of transactions generated by gateway calls, use the following formula:

The number of transactions generated by gateway calls is calculated by multiplying the number of subscribers by the sum of the non SIP-SIP call percentages. The result is multiplied by the busy hour call rate to arrive at the final number of transactions from gateway calls.
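A sketch of this calculation, assuming the non SIP-SIP share is everything other than the SIP-SIP (J) and conference (I) percentages. That 25 percent share reproduces the example table, but which call categories actually route through a gateway depends on the deployment:

```python
A, G = 3000, 5  # subscribers, busy-hour call originations per subscriber
J = 70          # percent of SIP-SIP calls
I = 5           # percent of conference calls

non_sip_percent = 100 - J - I  # assumed gateway-bound share (25%), an inference
gateway_calls = A * G * non_sip_percent // 100
print(gateway_calls)  # 3750
```

At the example weight of 3.564, the 3,750 gateway calls correspond to the 13,365 weighted transactions shown in the example table.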

Calculating the total transaction cost

Once the number of transactions is calculated for all items listed in Table 2, each one must be multiplied by its corresponding SIP Weighted Transaction Cost listed in the Session Manager section of the engineering guide.

A decision needs to be made regarding what percentage of transactions will be authenticated for the purposes of the model. Since this is a model that does not include all the nuances of transactions and omits some less common transactions, it is better to err on the side of caution and assume that all transactions eligible for authentication will be authenticated.

The following example illustrates the method of calculation. For actual SIP transaction weight numbers, see Table 3 "Weighted SIP transaction cost for an IBM eServer x306m server" (page 49) in "Deployment options and functional components" (page 39).

"Example calculations" (page 371) shows example calculations using the formulas provided in the previous section and the sample values from "Sample values for subscriber base inputs" (page 365), along with the Weighted Transaction cost with authentication.

Example calculations (weighted cost per transaction, weighted total)

Instant Message: 1.095, 16,425
Basic SIP-SIP calls: 3.194, 33,537
Client registrations: 2.15, 3,225
Presence-state-change generated REGISTER: 2.01, 11,145
Presence subscription: 1.237, 10,515
Presence notification: 0.19, 23,940
Address book subscription: 1.503, 2,255
Service package subscription: 1.503, 2,255
Gateway calls: 3.564, 13,365
Total: 116,661

The resulting costs must be summed to arrive at the total cost for the mix of parameters used as inputs to the model. This cost must be compared to the rated capacity of the system in question, and a decision must be made whether or not the result is acceptable. Use the following formula for comparison:

PercentUtilization = (CalculatedCost / RatedCapacity) x 100

Using the total CalculatedCost from the above table and a RatedCapacity of 215,000, the PercentUtilization of the above example is as follows:

Compare the PercentUtilization to Q (Maximum Engineered Capacity). If Q is less than or equal to PercentUtilization, then the mix of parameters used as inputs to this model will have to be reexamined to arrive at a result where Q is greater than PercentUtilization.

For this example, Q is 60% and PercentUtilization is 44% as shown above; therefore, the predicted system utilization in this example lies below the Maximum Engineered Capacity.
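The comparison against Q can be sketched as a small helper. The input figures below are illustrative round numbers, not the example values from this guide:

```python
def percent_utilization(calculated_cost, rated_capacity):
    """Express the total weighted transaction cost as a percentage of the
    Session Manager's rated Weighted SIP Transaction capacity."""
    return calculated_cost / rated_capacity * 100

# Illustrative round numbers (NOT the guide's example figures).
util = percent_utilization(calculated_cost=120000, rated_capacity=240000)
print(util)  # 50.0

# Compare against Q, the maximum engineered capacity (for example 60%):
Q = 60
acceptable = util < Q
print(acceptable)  # True
```

If `acceptable` is False, the input mix must be reexamined, as described above.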

These example calculations were carried out for just one Session Manager and one domain. To fully characterize a larger system, similar calculations would have to be conducted for each Session Manager and root domain in the complete system since Subscriber Base Inputs as shown in "Sample values for subscriber base inputs" (page 365) are likely to be different for each root domain.

Additional considerations

To perform a more precise calculation, it is necessary to identify the number of transactions generated by each client type.

This is necessary since only the Multimedia PC Clients and nonNortel clients are authenticated. Multimedia Web Clients and IP Phone clients are not authenticated by the Session Manager because all messaging from these clients goes through the IP Client Manager.

From the perspective of the Session Manager, this messaging comes from trusted network nodes. NOTIFY messages are usually sent to clients by the Session Manager and are not authenticated by it. Some NOTIFY messages are also sent to the Provisioning Manager during address book and service package subscription process.

Nortel Multimedia Communication Server 5100 MCS Planning and Engineering

Copyright © 2007-2009, Nortel Networks All Rights Reserved.

Publication: NN42020-200 Document status: Standard Document version: 04.02 Document date: 8 October 2009

To provide feedback or report a problem with this document, go to www.nortel.com/documentfeedback.
