Virtualization and Network Evolution
Open Networking

SDN Architecture for Cable Access Networks Technical Report

VNE-TR-SDN-ARCH-V01-150625 RELEASED

Notice

This Virtualization and Network Evolution technical report is the result of a cooperative effort undertaken at the direction of Cable Television Laboratories, Inc. for the benefit of the cable industry and its customers. You may download, copy, distribute, and reference the documents herein only for the purpose of developing products or services in accordance with such documents, and educational use. Except as granted by CableLabs® in a separate written license agreement, no license is granted to modify the documents herein (except via the Engineering Change process), or to use, copy, modify or distribute the documents for any other purpose.

This document may contain references to other documents not owned or controlled by CableLabs. Use and understanding of this document may require access to such other documents. Designing, manufacturing, distributing, using, selling, or servicing products, or providing services, based on this document may require intellectual property licenses from third parties for technology referenced in this document. To the extent this document contains or refers to documents of third parties, you agree to abide by the terms of any licenses associated with such third party documents, including open source licenses, if any.

© Cable Television Laboratories, Inc. 2014-2015

VNE-TR-SDN-ARCH-V01-150625 Open Networking

DISCLAIMER

This document is furnished on an "AS IS" basis and neither CableLabs nor its members provides any representation or warranty, express or implied, regarding the accuracy, completeness, noninfringement, or fitness for a particular purpose of this document, or any document referenced herein. Any use or reliance on the information or opinion in this document is at the risk of the user, and CableLabs and its members shall not be liable for any damage or injury incurred by any person arising out of the completeness, accuracy, or utility of any information or opinion contained in the document. CableLabs reserves the right to revise this document for any reason including, but not limited to, changes in laws, regulations, or standards promulgated by various entities, technology advances, or changes in equipment design, manufacturing techniques, or operating procedures described, or referred to, herein. This document is not to be construed to suggest that any company modify or change any of its products or procedures, nor does this document represent a commitment by CableLabs or any of its members to purchase any product whether or not it meets the characteristics described in the document. Unless granted in a separate written agreement from CableLabs, nothing contained herein shall be construed to confer any license or right to any intellectual property. This document is not to be construed as an endorsement of any product or company or as the adoption or promulgation of any guidelines, standards, or recommendations.

 2 CableLabs 06/25/15 SDN Architecture for Cable Access Networks Technical Report VNE-TR-SDN-ARCH-V01-150625

Document Status Sheet

Document Control Number: VNE-TR-SDN-ARCH-V01-150625

Document Title: SDN Architecture for Cable Access Networks Technical Report

Revision History: V01 - Released 6/25/2015

Date: June 25, 2015

Status: Released

Distribution Restrictions: Public

Trademarks

CableLabs® is a registered trademark of Cable Television Laboratories, Inc. Other CableLabs marks are listed at http://www.cablelabs.com/certqual/trademarks. All other marks are the property of their respective owners.


Contents

1 SCOPE
  1.1 Introduction
  1.2 SDN in the Access Network
  1.3 Topics Covered in the Tech Report
2 INFORMATIVE REFERENCES
  2.1 Reference Acquisition
3 TERMS AND DEFINITIONS
4 ABBREVIATIONS AND ACRONYMS
5 SDN AND THE ACCESS NETWORK
  5.1 CCAP
    5.1.1 High Level Overview of CCAP
    5.1.2 Benefits of CCAP
    5.1.3 CCAP Management and Configuration
    5.1.4 Distributed CCAP Architecture
  5.2 CCAP Management Abstraction (CMA)
  5.3 Network Function Virtualization of CCAP
  5.4 Some General SDN and NFV Concepts and Terms
6 SDN ARCHITECTURE FOR CABLE ACCESS NETWORKS
  6.1 SDN/NFV Integration
  6.2 SDN Controller
  6.3 Northbound and Southbound Interfaces on the SDN Controller
7 CCAP MANAGEMENT ABSTRACTION
  7.1 Need for CCAP Management Abstraction
  7.2 CMA for Distributed CCAP Architectures
8 SELECTED USE CASE WORKFLOWS
  8.1 HSD – Bring Your Own Cable Modem
  8.2 L2VPN
  8.3 Third Party ISP Access (TPIA)
  8.4 Lawful Intercept
    8.4.1 Lawful Intercept – Data
    8.4.2 Lawful Intercept – Voice
  8.5 Voice
    8.5.1 PacketCable 1.5
    8.5.2 PacketCable 2.0
  8.6 DSG
  8.7 IPTV
9 DATA MODELS
  9.1 DOCSIS IP HSD Provisioning
    9.1.1 Packet Classifier
    9.1.2 YANG Model for DOCSIS Data Model
  9.2 L2VPN Data Model
    9.2.1 YANG Model for L2VPN
  9.3 TPIA Data Model
    9.3.1 YANG Model for TPIA
  9.4 Lawful Intercept Data Model


    9.4.1 YANG Model for Lawful Intercept
  9.5 Generic Model (Northbound Data Model)
    9.5.1 YANG Model for Generic Flow Model
  9.6 OpenDaylight PCMM Plug-in Data Model
    9.6.1 Traffic Profile Data Model
    9.6.2 PCMM Traffic Profile Data Model
    9.6.3 OpenDaylight PCMM Plugin YANG Model
10 NORTHBOUND AND SOUTHBOUND PROTOCOLS
  10.1 PCMM
  10.2 NETCONF
  10.3 XMPP
  10.4 REST (RESTful API)
  10.5 RESTCONF
  10.6 WebSockets
  10.7 Recommendations for Southbound Protocols
    10.7.1 One Protocol versus Multiple Protocols
    10.7.2 Static Configuration versus Dynamic Configuration
    10.7.3 Data Consistency
11 SERVICE FUNCTION CHAINING
  11.1 SFC Architecture
  11.2 Service Chaining Setup through Network Service Header (NSH)
  11.3 SFC Implementation in a DOCSIS Network
    11.3.1 SFC Initiating from Application
    11.3.2 SFC Initiating from Home Gateway
    11.3.3 SFC Initiating from CM
    11.3.4 SFC Initiating from CCAP/CMTS
    11.3.5 SFC Initiating from a Proxy Behind CCAP
  11.4 Recommendation
12 DOCSIS 3.1 PROFILE MANAGEMENT APPLICATION
  12.1 Introduction
  12.2 Problem Description
    12.2.1 Background
    12.2.2 Problem Statement and Goals
    12.2.3 High Level Architecture
    12.2.4 Areas of Focus
  12.3 PMA Use Cases
    12.3.1 Case 1: New Channel Startup
    12.3.2 Case 2: Profile Optimization
    12.3.3 Case 3: “Fallback” Use Case
    12.3.4 Push versus Pull Approaches
  12.4 Data Elements and Actions
    12.4.1 Data/Messaging Scope
    12.4.2 Messages for the PMA-CMTS Interface
  12.5 PMA Data Backend and Protocols
  12.6 Other PMA Considerations
    12.6.1 Central Database
    12.6.2 Multiple Masters Problem
    12.6.3 Evaluating Profile Changes
    12.6.4 Policy Definition
    12.6.5 Data Acquisition Methods
    12.6.6 Data Volume for Message Exchanges
    12.6.7 Gaps in the PMA Solution


13 INTENT-BASED NETWORKING – VISION AND ARCHITECTURE
  13.1 Use Cases and Intent-based Order Portals
    13.1.1 VPN Service Order Scenario
    13.1.2 Gaming Service Order Scenario
    13.1.3 Home Security Service Order Scenario
    13.1.4 Triple Play Service Order Scenario
  13.2 Related Industry Initiatives
    13.2.1 Group Based Policy
    13.2.2 Open Networking Foundation (ONF) Common Intent Northbound Interface (NBI) Initiative
  13.3 Network Intent Composition
    13.3.1 NeMo
    13.3.2 OpenStack Congress
    13.3.3 IETF SUPA
  13.4 Conclusions and Recommendations
14 CONTRIBUTION TO OPEN SOURCE CONTROLLERS
15 CONCLUSION
APPENDIX I EXAMPLES OF CCAP ABSTRACTIONS
APPENDIX II ACKNOWLEDGMENTS

List of Figures

Figure 1 - SDN Architecture
Figure 2 - Overview of CCAP
Figure 3 - SDN Reference Architecture for Cable Access Networks
Figure 4 - Internal View of SDN Controller
Figure 5 - SDN Reference Architecture and CMA
Figure 6 - HSD Workflow Today
Figure 7 - HSD Workflow with SDN
Figure 8 - L2VPN Workflow Today
Figure 9 - L2VPN with SDN
Figure 10 - Virtualized Network Topology
Figure 11 - TPIA Workflow Today
Figure 12 - TPIA with SDN
Figure 13 - Slicing Physical Network into Multiple Virtual Networks
Figure 14 - TPIA Workflow with SDN, Part 1
Figure 15 - TPIA Workflow with SDN, Part 2
Figure 16 - CBIS Interfaces
Figure 17 - CBIS Logical Network
Figure 18 - Lawful Intercept Workflow Today
Figure 19 - Lawful Intercept Workflow with SDN
Figure 20 - Lawful Intercept Signaling and Media
Figure 21 - Today’s Lawful Intercept Workflow for Voice
Figure 22 - Lawful Intercept Workflow for Voice with SDN
Figure 23 - PacketCable Call Setup Workflow


Figure 24 - PacketCable 1.5 Call Setup Workflow with SDN
Figure 25 - PacketCable Event Message Architecture with RKS
Figure 26 - Event Message Architecture with SDN Replacing RKS
Figure 27 - PacketCable 2.0 Voice Call Setup Workflow Today
Figure 28 - PacketCable 2.0 Voice Call Setup Workflow with SDN
Figure 29 - Overview of Current DOCSIS Set-top Gateway System
Figure 30 - DSG Setup Workflow Today
Figure 31 - DSG Setup Workflow with SDN
Figure 32 - IPTV Setup Workflow Today – Multicast
Figure 33 - IPTV Setup Workflow Today – Unicast
Figure 34 - IPTV Setup Workflow with SDN – Multicast
Figure 35 - IPTV Setup Workflow with SDN – Unicast
Figure 36 - Use of Data Model in SDN Architecture
Figure 37 - DOCSIS Data Model
Figure 38 - Packet Classifier Data Model
Figure 39 - L2VPN Data Model
Figure 40 - TPIA Data Model
Figure 41 - Lawful Intercept Data Model
Figure 42 - Generic Flow Data Model
Figure 43 - Traffic Profile Data Model from OpenDaylight SDN Controller
Figure 44 - PCMM Traffic Profile Data Model
Figure 45 - SFC and the DOCSIS Network
Figure 46 - Traffic Flows from DOCSIS Network through Different Service Chains
Figure 47 - Different Starting Points of SFC
Figure 48 - DOCSIS 3.1 Downstream OFDM Channel
Figure 49 - Possible Composition of the Profile Management Application
Figure 50 - PMA Data Backend and Protocols
Figure 51 - High Level Intent-Based Networking Architecture
Figure 52 - Group Based Policy Architectural Components
Figure 53 - OpenStack Keystone Supporting Multiple SDN Controllers with Intent
Figure 54 - NeMo Relationship to Open Source and IETF
Figure 55 - OpenDaylight Architecture
Figure 56 - CCAP Abstractions

List of Tables

Table 1 - HSD Workflow Action and Information Needed
Table 2 - L2VPN Information Exchange
Table 3 - Information Exchanged for TPIA
Table 4 - Information Exchanged for CBIS
Table 5 - Information Exchanged for Lawful Intercept Voice
Table 6 - Information Exchanged for PacketCable 1.5
Table 7 - Information Exchanged for PacketCable 2.0


Table 8 - Information Exchanged for DSG
Table 9 - Information Exchanged for IPTV
Table 10 - Southbound Protocols
Table 11 - PMA-CMTS Downstream Modulation Profile Messages
Table 12 - Downstream OFDM Channel Descriptor Message
Table 13 - Downstream Profile Request Message
Table 14 - Downstream Profile Descriptor Message
Table 15 - OFDM Downstream Spectrum Request Message
Table 16 - OFDM Downstream Spectrum Descriptor Message
Table 17 - OFDM Downstream Profile Test Request Message
Table 18 - OFDM Downstream Profile Test Response Message
Table 19 - CM-to-Profile Assignment Request Message
Table 20 - CM-to-Profile Assignment Descriptor Message
Table 21 - Profile-to-CM Assignment Message
Table 22 - Profile-to-CM Assignment Message
Table 23 - Service Order Data for New VPN
Table 24 - Service Order Data for Adding Location to Existing VPN
Table 25 - Example Location Information
Table 26 - Service Order Data for …
Table 27 - Service Order Data for Voice
Table 28 - Service Order Data for TV


1 SCOPE

1.1 Introduction

For the last few years, the networking industry has been interested in the new technology paradigms of Software Defined Networking (SDN) and Network Functions Virtualization (NFV). CableLabs has been investigating SDN and NFV and analyzing how these ideas can help solve various problems in cable networks. The initial focus of the research was the OpenFlow protocol and the improvements it could bring in solving problems within the cable access network. As the working group gained understanding of the SDN architecture, the focus shifted toward the most valuable ideas from the SDN space, including the following:

• Enabling a software-programmable network for an operator (across various access technologies).
• Dynamic provisioning and management of various network devices from a centralized controller.
• Improved automation using common APIs to abstract the underlying networks (be it DOCSIS, EPON, etc.).
• Allowing rapid creation of new services based on the platform created.

SDN and NFV can each be used individually by a cable operator with various benefits; however, greater benefits result when the two are combined. NFV describes an architecture where functions within a network element, running on proprietary hardware, can be virtualized and moved to generic hardware. SDN is an architecture that abstracts the underlying network and allows the network to be programmatically configured. A separate work effort has been initiated around identifying components within the cable operator’s network that can be virtualized.

1.2 SDN in the Access Network

Given the benefits of SDN and NFV, the CableLabs Open Networking working group started investigating ideas around making cable access network elements, such as the CMTS or Converged Cable Access Platform (CCAP), more software programmable; i.e., programming services dynamically on the access network devices. Figure 1 below describes an overall architecture for the cable access network. Using the CCAP/CMTS architecture as a baseline, the working group identified components that can potentially be virtualized. The group also identified data models that capture the configuration information for various services, which can now be set up on the access network via an SDN Controller.

Figure 1 - SDN Architecture


The goal of SDN and NFV is to reduce Operational Expenditure (OPEX) and create a platform that enables faster delivery of new services. MSOs would like to scale the CMTS platform as service requirements increase (power, cooling, and compute processing load). There are multiple aspects to managing and provisioning services on a CCAP device, and to moving certain functionality outside the CCAP to run in the MSO cloud. While the PHY functions cannot be virtualized at this point, other functions (e.g., parts of the MAC, the scheduler, load balancing algorithms, dynamic QoS allocation for video, and others) may be virtualized as functions on a generic server architecture.

1.3 Topics Covered in the Tech Report

• SDN architecture for the cable access network
• CCAP management abstraction
• SDN-ized service provisioning workflows and use cases
• Data models: UML and YANG
• Northbound and southbound protocols analysis and recommendations
• Service function chaining in cable networks
• DOCSIS 3.1 profile management as an application
• Intent-based networking analysis


2 INFORMATIVE REFERENCES

This technical report uses the following informative references.

[CBIS] Cable Broadband Intercept Specification, CM-SP-CBI2.0-I08-140729, July 29, 2014, Cable Television Laboratories, Inc.
[C-DOCSIS] C-DOCSIS System Specification, CM-SP-CDOCSIS-I02-150305, March 5, 2015, Cable Television Laboratories, Inc.
[CM-OSSI] Operations Support System Interface Specification, CM-SP-CM-OSSIv3.1-I04-150611, June 11, 2015, Cable Television Laboratories, Inc.
[MULPIv3.1] DOCSIS 3.1 MAC and Upper Layer Protocols Interface Specification, CM-SP-MULPIv3.1-I06-150611, June 11, 2015, Cable Television Laboratories, Inc.
[DSG] DOCSIS Set-top Gateway (DSG) Interface Specification, CM-SP-DSG-I24-130808, August 8, 2013, Cable Television Laboratories, Inc.
[L2VPN] Business Services over DOCSIS, Layer 2 Virtual Private Networks, CM-SP-L2VPN-I15-150528, May 28, 2015, Cable Television Laboratories, Inc.

[PCMM] PacketCable Multimedia Specification, PKT-SP-MM-I06-110629, June 29, 2011, Cable Television Laboratories, Inc.

[PKT-DQoS] PacketCable 1.5 Dynamic Quality-of-Service Specification, PKT-SP-DQOS1.5-I04-090624, June 24, 2009, Cable Television Laboratories, Inc.
[PKT-ESP] PacketCable 1.5 Electronic Surveillance Specification, PKT-SP-ESP1.5-I02-070412, April 12, 2007, Cable Television Laboratories, Inc.
[PKT-QoS] PacketCable 2.0 Specification, PKT-SP-QOS-C01-140314, March 14, 2014, Cable Television Laboratories, Inc.
[PHYv3.1] DOCSIS 3.1 Physical Layer Specification, CM-SP-PHYv3.1-I06-150611, June 11, 2015, Cable Television Laboratories, Inc.
[RFC 6020] IETF RFC 6020, YANG - A Data Modeling Language for the Network Configuration Protocol (NETCONF), October 2010.

[RFC 6120] IETF RFC 6120, Extensible Messaging and Presence Protocol (XMPP): Core, March 2011.
[RFC 6241] IETF RFC 6241, Network Configuration Protocol (NETCONF), June 2011.
[RFC 6455] IETF RFC 6455, The WebSocket Protocol, December 2011.
[Fielding-2000] Architectural Styles and the Design of Network-based Software Architectures, https://www.ics.uci.edu/~fielding/pubs/dissertation/top.htm
[IETF SUPA] Simplified Use of Policy Abstractions, https://datatracker.ietf.org/wg/supa/charter/
[NeMo] NeMo Project, http://www.nemo-project.net
[NI COMP] Network Intent Composition, https://wiki.opendaylight.org/view/Network_Intent_Composition:Main
[NSH] Network Service Header, https://www.ietf.org/id/draft-ietf-sfc-nsh-00.txt, March 2015.
[ODL] OpenDaylight documentation for PacketCable PCMM Plug-in, https://wiki.opendaylight.org/view/PacketCablePCMM:Documentation


[OpenStack] OpenStack Congress, https://wiki.openstack.org/wiki/Congress
[RESTCONF] RESTCONF Protocol, draft-ietf-netconf-restconf-04, https://tools.ietf.org/html/draft-ietf-netconf-restconf-04, January 2015.
[SFC Architecture] Service Function Chaining, https://www.ietf.org/id/draft-ietf-sfc-architecture-09.txt, June 2015.

2.1 Reference Acquisition

• Cable Television Laboratories, Inc., 858 Coal Creek Circle, Louisville, CO 80027; Phone +1-303-661-9100; Fax +1-303-661-9199; http://www.cablelabs.com
• Internet Engineering Task Force (IETF) Secretariat, 46000 Center Oak Plaza, Sterling, VA 20166; Phone +1-571-434-3500; Fax +1-571-434-3535; http://www.ietf.org
• OpenDaylight software community, www.opendaylight.org
• OpenStack software community, www.openstack.org
• School of Information and Computer Sciences, University of California, Irvine, 6210 Donald Bren Hall, Irvine, CA 92697, www.ics.uci.edu


3 TERMS AND DEFINITIONS

This document uses the following terms:

Cable Modem Termination System (CMTS): A headend component that provides the operator network side termination for the DOCSIS link. A CMTS communicates with a number of cable modems to provide data services.

Converged Cable Access Platform (CCAP): A headend component that provides the functionality of a CMTS and an Edge QAM in a single architecture with greater QAM density and overall capacity.

Edge QAM (EQAM): A headend or hub device that receives packets of digital video or data from the operator network. It re-packetizes the video or data into an MPEG transport stream and digitally modulates the transport stream onto a downstream RF carrier using QAM.

Northbound Abstraction Layer: The interface exposed to applications by an SDN controller. The SDN controller provides a set of common APIs to applications (typically referred to as the northbound interface). This layer is described by the application-facing information/data model and needs to be device independent.

OpenDaylight: An open source project that provides a modular, pluggable, and flexible SDN controller platform. http://www.opendaylight.org

OpenStack: A free and open-source cloud computing software platform. It consists of a series of interrelated projects that control pools of processing, storage, and networking resources throughout a data center, managed through a web-based dashboard, command-line tools, or a RESTful API. http://www.openstack.org

Orchestrator: Coordinates one or more controllers to provide end-to-end service.

RESTCONF: An HTTP-based protocol used to access data defined in YANG models using the concepts defined in NETCONF.

SDN Controller: A device manager that implements some or all of the device control plane and manages device configuration.

Southbound Abstraction Layer: The interface used by an SDN controller to communicate with network devices. An SDN controller implements one or more protocols for command and control of the physical hardware within the network (typically referred to as the southbound interface). This layer is described by the device-facing information/data model and needs to be device specific.

YANG: A data modeling language used to model configuration and state data manipulated by the Network Configuration Protocol (NETCONF).


4 ABBREVIATIONS AND ACRONYMS

This document uses the following abbreviations:

AMID Application Manager ID
API Application Programming Interface
BGP Border Gateway Protocol
BIF Broadband Intercept Function
BSS Business Support System
BYOCM Bring Your Own Cable Modem
BW Bandwidth
CAPEX Capital Expenditure
CBIS Cable Broadband Intercept Specification
CCAP Converged Cable Access Platform
CGN Carrier-grade NAT
CLI Command-line Interface
CM Cable Modem
CMA CCAP Management Abstraction
CMC Coax Media Converter
CMTS Cable Modem Termination System
COPS Common Open Policy Service
COTS Commercial Off-the-Shelf
CPE Customer Premise Equipment
CSR Customer Service Representative
DCA Distributed CCAP Architecture
DHCP Dynamic Host Configuration Protocol
DPD Downstream Profile Descriptor
DS Downstream
DSCP Differentiated Services Code Point
DOCSIS Data-Over-Cable Service Interface Specification
DPoE DOCSIS Provisioning of EPON
DSG DOCSIS Set-top Gateway
EPL Ethernet Private Line
EPON Ethernet Passive Optical Network
EQAM Edge QAM
EVPL Ethernet Virtual Private Line
FW Firewall
GBP Group Based Policy
GRE Generic Routing Encapsulation
HSD High Speed Data
HTTP Hypertext Transfer Protocol
ID Identifier


IETF Internet Engineering Task Force
IP Internet Protocol
IPS Intrusion Prevention System
IPTV IP Television
JSON JavaScript Object Notation
L2VPN Layer 2 Virtual Private Network
L3 Layer 3
L4 Layer 4
L7 Layer 7
LEA Law Enforcement Agency
MAC Media Access Control
MEPID Maintenance Entity Group End Point Identifier
MIB Management Information Base
MMM MAC Management Message
MPLS Multiprotocol Label Switching
MSO Multiple System Operator
MTA Multimedia Terminal Adapter
NBI Northbound Interface
NCP Next Codeword Pointer
NETCONF Network Configuration Protocol
NFV Network Function Virtualization
NMS Network Management System
NSH Network Service Headers
NSI Network Side Interface
OCD OFDM Channel Descriptor
ODL OpenDaylight
ODS OFDM Downstream Spectrum
OFDM Orthogonal Frequency Division Multiplexing
OFDMA Orthogonal Frequency Division Multiple Access
OLT Optical Line Termination
OPEX Operational Expenditure
OPT OFDM Downstream Profile Test
OSS Operations Support System
OSSI Operations Support System Interface
PCMM PacketCable Multimedia
PCRF Policy and Charging Rules Function
PHY Physical Layer
PLC PHY Link Channel
PMA Profile Management Application
PNM Proactive Network Maintenance
QAM Quadrature Amplitude Modulation


QoS Quality of Service
REST Representational State Transfer
RKS Record Keeping Server
SDN Software Defined Networking
SDP Session Description Protocol
SF Service Flow
SFC Service Function Chaining
SLB Server Load Balancing
SNMP Simple Network Management Protocol
SNR Signal-to-Noise Ratio
SOAM Service Operations, Administration, and Maintenance
SSL Secure Socket Layer
TCP Transmission Control Protocol
TPIA Third Party ISP Access
UDC Upstream Drop Classifier
US Upstream
VAN Virtual Access Node
VCC Virtual CCAP Controller
VLAN Virtual Local Area Network
VM Virtual Machine
VoIP Voice over IP
VXLAN Virtual Extensible LAN
XML Extensible Markup Language
XMPP Extensible Messaging and Presence Protocol


5 SDN AND THE ACCESS NETWORK

5.1 CCAP

5.1.1 High Level Overview of CCAP

The Converged Cable Access Platform (CCAP) combines CMTS core and EQAM functionalities in a single hardware platform. Combining CMTS core and EQAM functionalities gives MSOs the flexibility to share QAM ports used for data and video. It also allows a simplified transition to all-IP delivery. The CCAP leverages existing technologies, including DOCSIS 3.0, DOCSIS 3.1, the Modular Headend Architecture, Ethernet optics, Ethernet Passive Optical Network (EPON), and current HFC architectures, and can also include newer ones. Figure 2 below shows the high-level functional blocks present within a CCAP device.

Figure 2 - Overview of CCAP

5.1.2 Benefits of CCAP

CCAP is intended to provide a new equipment architecture option for manufacturers to achieve the Edge QAM and CMTS densities that MSOs require in order to address the costs and environmental challenges resulting from the success of narrowcast services. The CCAP architecture is designed to meet several goals for MSOs:

• Larger DS and US channel counts to match subscriber BW demands
• Lower costs per DOCSIS channel
• Less space required within headend facilities
• Space savings resulting in reduced power consumption in the data center
• Simpler configuration management to support the rapid HFC plant changes that will occur over the next decade

5.1.3 CCAP Management and Configuration

There are several methods to manage and configure a CCAP. The first method is the processing of XML configuration files that hold the configuration details for all services on the CCAP. The XML configuration files are generated from YANG modules. The CCAP can also support NETCONF for configuration as defined in [RFC 6241], as well as the traditional command-line interface (CLI).
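As a concrete illustration of the NETCONF path, the sketch below builds the kind of <edit-config> RPC envelope defined in [RFC 6241] that a management system could send to a CCAP over a NETCONF session. The payload element names (ccap-config, service-group, admin-state) are illustrative placeholders only, not taken from any published CCAP YANG model.

```python
# Sketch: constructing a NETCONF <edit-config> RPC (RFC 6241) for a CCAP.
# The payload elements are hypothetical; a real deployment would use the
# device's published YANG model.
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_edit_config(target: str, payload: ET.Element) -> bytes:
    """Wrap a config payload in a NETCONF <rpc>/<edit-config> envelope."""
    rpc = ET.Element(f"{{{NC}}}rpc", attrib={"message-id": "101"})
    edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
    tgt = ET.SubElement(edit, f"{{{NC}}}target")
    ET.SubElement(tgt, f"{{{NC}}}{target}")   # e.g., the <running/> datastore
    cfg = ET.SubElement(edit, f"{{{NC}}}config")
    cfg.append(payload)
    return ET.tostring(rpc)

# Illustrative CCAP payload: enable one service group.
payload = ET.Element("ccap-config")
sg = ET.SubElement(payload, "service-group")
ET.SubElement(sg, "id").text = "sg-1"
ET.SubElement(sg, "admin-state").text = "up"

rpc_bytes = build_edit_config("running", payload)
```

In practice the serialized RPC would be framed and sent over an SSH-based NETCONF session; the sketch stops at constructing the message.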


5.1.4 Distributed CCAP Architecture

Distributed CCAP Architecture (DCA) allows the distribution of CCAP PHY, or MAC and PHY, functions to remote nodes. The distributed CCAP architectures (Remote PHY, Remote MAC-PHY, etc.) are documented in CableLabs DCA specifications and technical reports. [C-DOCSIS] and the DCA documents present a logical architecture of distributed deployment and centralized management for the cable broadband access system. The C-DOCSIS specification defines a CMTS with a Coax Media Converter (CMC) in a remote node and a CMC Controller to achieve the DOCSIS CMTS functionality.

5.2 CCAP Management Abstraction (CMA)

Various network devices compose the access network, and each needs to be configured and managed for MSOs to deploy services. The SDN architecture needs to support legacy and future devices in the access network, so there is a need to abstract the underlying CMTS/CCAP/distributed-CCAP components and the other network devices within the access network. The CMA, as described in Section 7, is an application framework that abstracts the underlying CMTS/CCAP components to the SDN Controller and supports newer protocols for management and configuration. DCA deployments can use a centralized controller that implements the CMA functions.
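A minimal sketch of the abstraction idea described above: applications and the SDN Controller work against one device-independent service model, while per-device-type adapters render it into whatever each element actually accepts. All class names, fields, and rendered command strings here are hypothetical, not drawn from a CableLabs data model.

```python
# Sketch of the CMA pattern: one common service model, many
# device-specific renderings. Names and output formats are illustrative.
from dataclasses import dataclass

@dataclass
class ServiceFlow:
    subscriber_mac: str
    max_rate_kbps: int

class CcapAdapter:
    """Renders the common model as CLI-style text for an integrated CCAP."""
    def render(self, sf: ServiceFlow) -> str:
        return f"cable service-flow {sf.subscriber_mac} max-rate {sf.max_rate_kbps}"

class RemotePhyAdapter:
    """Renders the same model for a distributed-CCAP core (illustrative)."""
    def render(self, sf: ServiceFlow) -> str:
        return f"rphy-core sf add mac={sf.subscriber_mac} rate={sf.max_rate_kbps}"

ADAPTERS = {"integrated-ccap": CcapAdapter(), "remote-phy": RemotePhyAdapter()}

def provision(device_type: str, sf: ServiceFlow) -> str:
    # One northbound request, one southbound rendering per device type.
    return ADAPTERS[device_type].render(sf)
```

Supporting a new device generation then means adding an adapter, not changing the applications that consume the common model.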

5.3 Network Function Virtualization of CCAP

It is possible to implement CCAP packet-processing functions inside a virtual machine. This may include MAC level processing as well as DOCSIS scheduling and QoS functions. From an SDN and CMA perspective, such a device can be managed similarly to other distributed architectures.

5.4 Some General SDN and NFV Concepts and Terms

This section introduces some general terms and concepts around Software defined Networking and Network Function virtualization. The term “virtualization” has become overloaded to the point where any form of abstraction or decomposition of functions is coined to as “virtualization”. The following section attempts to present a finer grained terminology from the point of view of “network function virtualization” • Centralized Architecture: In CMTS/CCAP architecture, the term “centralized architecture” refers to an integrated CCAP where all the functions of CCAP are implemented in a single device. In the general computing world, a centralized architecture is one where most of the compute power is centered in one location. The mainframe and the cloud are good examples of centralized architectures. • Distributed Architecture: In CMTS/CCAP architecture, the term “distributed architecture” refers to an distributed CCAP where the functions that comprise of CCAP are implemented in several discreet components. In the general computing world, a distributed architecture is one where the compute power is equally distributed in a system. Traditional networking with routing protocols is a good example of a distributed architecture. • Centralized and Distributed Architectures: centralizing of functions should not be confused with virtualization and one should not confuse a network architecture with implementation of functions (virtual vs. physical). For example, a physical CCAP packet-shelf can be placed in a central location with remote access nodes to create a centralized architecture that is not based on virtual components. The complement is also true – virtualized functions can be distributed. This is sometimes referred to as “fog computing” where the virtual functions as close to the user. • Network Virtualization: The term network virtualization refers to creating an overlay network on top of a physically connected network (also known as “underlay”). 
• Software Defined Networking: While the exact definition of SDN is evolving, a common theme is the separation of the control and management planes of a network element from data forwarding. The extent of the separation can vary. The purist view of SDN is to have a controller supporting a protocol with hooks into the forwarding plane (an OpenFlow-like interface) on the networking device. This has evolved into other approaches that focus more on the programmability of network devices and services.

 18 CableLabs 06/25/15 SDN Architecture for Cable Access Networks Technical Report VNE-TR-SDN-ARCH-V01-150625

The control/management plane can run in a virtual machine, but SDN by itself is not referred to as "virtualization". From the SDN controller's point of view, it does not matter whether the controlled network function is physical or virtual.
• Virtualization: Originally the term "virtualization" referred to the simulation of a hardware platform. An unmodified binary of a software package can run on this simulated hardware platform as if it were running on the original hardware. In the context of NFV, the term "virtualization" focuses on the benefits of running network functions over a virtual hardware platform:
• Independence of the network function from vendor-specific hardware
• Ability to move network function workloads between hardware platforms
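The network-virtualization (overlay/underlay) distinction introduced above can be illustrated with a minimal sketch: an overlay link is valid only if it can be realized as a path of adjacencies in the physical underlay. All names here are illustrative, not part of any specification.

```python
# Physical (underlay) adjacencies of a toy three-node network.
underlay = {("a", "b"), ("b", "c")}

def overlay_link(src, dst, underlay_path):
    """An overlay edge is valid only if every hop exists in the underlay."""
    hops = list(zip(underlay_path, underlay_path[1:]))
    if not all(h in underlay for h in hops):
        raise ValueError("path not realizable in underlay")
    return {"overlay-edge": (src, dst), "underlay-path": underlay_path}

# A single virtual hop a->c rides over the two physical hops a->b->c.
link = overlay_link("a", "c", ["a", "b", "c"])
```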


6 SDN ARCHITECTURE FOR CABLE ACCESS NETWORKS

6.1 SDN/NFV Integration

Before diving into the details of how to leverage SDN and NFV technologies in cable, it is important to outline a reference architecture that defines the interaction between the various components. Figure 3 below shows a high-level architecture. The main concept here is that of a centralized SDN controller; this controller can talk to various access network elements in the MSO's network. The working agreement is that the SDN controller communicates directly with the headend equipment (e.g., CMTS, CCAP, DPoE System) and intermediate nodes (e.g., CMC or Remote PHY), but not directly with the customer premises equipment (CMs, ONUs, customer devices). The idea is to define the data models needed to configure the network devices and network services, and to choose a common southbound protocol to carry this configuration information from the controller to the network devices. This enables the operator to expose a common network API, which can be used to build various applications. The applications reside above the SDN controller and request that the SDN controller configure the network per the specific needs of the application. Also, as parts of the network functionality are moved out of the CMTS into virtual machines in the MSO cloud, an orchestration layer will be needed to manage the various functions, both virtual and physical, and to coordinate with the SDN controller to steer traffic to and from those functions.
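The interaction pattern described above can be sketched as follows. The class and method names are assumptions for illustration only: an application makes a northbound service request, and the controller pushes configuration southbound to headend equipment, never directly to CPE.

```python
# Illustrative sketch (all names assumed): northbound request in,
# southbound configuration out, headend devices only.

class SdnController:
    def __init__(self):
        self.pushed = []  # record of (device, config) pairs sent southbound

    def southbound_push(self, device, config):
        self.pushed.append((device, config))

    def request_service(self, service, headend_device):
        # The controller talks to the CMTS/CCAP; the CM behind it is
        # configured indirectly through that headend device.
        self.southbound_push(headend_device, {"service": service})

ctrl = SdnController()
ctrl.request_service({"type": "hsd", "down-mbps": 100}, "cmts-1")
```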

Figure 3 - SDN Reference Architecture for Cable Access Networks

There are multiple points at which a standardized interface can be defined. The main interface, in the scope of this document at this time, is the southbound interface from the SDN controller to the network devices. Within the MSO access network, the idea is to make this interface the same regardless of whether the SDN controller is communicating with a CCAP/CMTS device, a C-DOCSIS Controller, or other access network equipment such as DPoE Systems. This abstraction enables any application to be written without consideration for the specifics of the underlying network infrastructure. For example, an L2VPN application could connect end points across access networks, and the SDN controller would manage configuration of the end points, one of which could be on a DOCSIS network and the other on an EPoN network, and also configure the path in the core network between the end points.

The SDN architecture described here for DOCSIS access networks can easily be applied to other access network technologies and pieces of network equipment on the MSO network. Take, for example, an EPoN deployment by an operator; the EPoN systems can be provisioned using the DPoE specifications. An SDN controller could essentially play the role of the DPoE System and a virtual CM: take in DOCSIS provisioning commands, translate them to EPoN-specific commands, and send them to an existing EPoN OLT system. In this way, legacy equipment with disparate provisioning systems can all be controlled via a single SDN controller. Again, the key would be to create the needed data models for EPoN systems to make sure the application/service intents can be translated appropriately to the technology being configured. The protocol used to talk from the SDN controller to the OLT could be something like RESTCONF, or some other legacy protocol supported by the OLT, as long as the SDN controller can support that southbound protocol. This approach can apply to other access technologies as well, e.g., GPoN deployments, new services on top of DOCSIS or PON deployments, Wi-Fi AP provisioning and management, etc.
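As a hedged sketch of this common southbound abstraction, the function below renders one technology-neutral L2VPN intent into device-specific endpoint configurations for different access technologies. All field names are assumptions, not taken from any specification.

```python
# Illustrative only: one intent, multiple technology-specific renderings.

def render_l2vpn_endpoint(intent, access_type):
    """Render a device-specific endpoint config from a common L2VPN intent."""
    common = {"service-id": intent["service-id"], "vlan": intent["vlan"]}
    if access_type == "docsis":
        # DOCSIS side: attach the L2VPN to a cable modem by MAC address.
        return {**common, "cm-mac": intent["endpoint"]}
    if access_type == "epon":
        # EPoN side: the same intent targets an ONU behind a DPoE System.
        return {**common, "onu-id": intent["endpoint"]}
    raise ValueError("unsupported access type: %s" % access_type)

intent = {"service-id": "l2vpn-42", "vlan": 200, "endpoint": "00:11:22:33:44:55"}
docsis_cfg = render_l2vpn_endpoint(intent, "docsis")
epon_cfg = render_l2vpn_endpoint(intent, "epon")
```

The application above this layer sees only the intent; only the renderer knows the access technology, which is the abstraction the text describes.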

6.2 SDN Controller

Figure 4 shows a high-level overview of the SDN controller. The layers within the SDN controller enable it to support multiple applications, multiple devices, and multiple communication methodologies. The southbound protocols will evolve over time. The idea is to pick some common protocols that will be used by the access network devices and by devices in the core of the network that may be controlled by the centralized SDN controller in the future. As an intermediate step, the idea is to re-use protocols that exist today (even if they are not perfectly optimized for this SDN-ized architecture) to enable faster development and deployment of services. The SDN controller provides abstractions of network devices that applications can use to create the forwarding and device configurations required by services. The SDN controller also provides access to protocols and data models for communicating directly with network elements to create the configurations needed to instantiate a given service. The data models that the SDN controller supports and uses are critical; they form the basis for how new services and applications will be built and deployed. This model aligns with the work being done around OpenStack and OpenDaylight. The SDN controller manages the CMA as an element even though the CMA fronts a collection of devices; the CMA presents that collection as a single "element" to the SDN controller.

Figure 4 - Internal View of SDN Controller

6.3 Northbound and Southbound Interfaces on the SDN Controller

The service/business logic interfaces are commonly referred to as "northbound" of the controller, and the device-specific interfaces are referred to as "southbound". The SDN controller exposes APIs to the applications; these are the "northbound" APIs from the perspective of the SDN controller. The SDN controller exposes programmatic interfaces to each of the network elements it controls; these are the "southbound" APIs. This document attempts to define what the southbound APIs would be (in terms of the parameters passed), with the understanding that the northbound APIs exposed to an application will be addressed at a future time.


7 CCAP MANAGEMENT ABSTRACTION

In the SDN reference architecture below, the CCAP/CMTS will support protocols on the NSI side to communicate with the SDN controller. CCAP/CMTSs that support these protocols, and the data models enabling dynamic configuration of services, will bring a new level of programmability to the network and enable MSOs to develop applications independent of the underlying access network architecture. There are different methods of implementing the CMA, and this section covers some of the use cases.

Figure 5 - SDN Reference Architecture and CMA

7.1 Need for CCAP Management Abstraction

The permutations and combinations of how a CCAP/CMTS can be distributed within the access network are many. The key is the existence of a CCAP Management Abstraction (CMA) that abstracts the various components of this distributed system into a single, cohesive CCAP platform for the OSS/BSS systems. This is an important point when considering this technical report. The CMA is not the SDN controller that applications are built on top of; at least, it is not required to be. The CMA is an application whose function is to create a container around a number of physical and virtual functions and enable them to be managed and provisioned from existing back-office systems, much as an integrated monolithic CCAP is today. The purpose of the CMA is to abstract the underlying components and division of labor from the back-office systems deployed today in MSO networks. It represents the disparate components under it as an integrated monolithic CCAP, enabling its use without modification of existing configuration and management systems. Just like a CMTS or a CCAP, the CMA is a client of the SDN controller and can interface with it using a variety of protocols. SNMP, NETCONF, and PCMM are existing tools deployed today and will be leveraged in the future.


The CMA makes it possible to present small and separate network elements as a single managed entity. In that sense, it emulates an integrated CCAP as a single manageable entity even though it is physically made up of many smaller components. For example, a router, a number of remote nodes, and a number of EQAMs will all appear as a single CCAP device.
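A minimal sketch of this aggregation idea follows; the class and field names are assumptions for illustration. Many components are registered with the CMA, but the view exposed to the back office is a single CCAP at a single management address.

```python
# Illustrative sketch: many components in, one "element" out.

class CcapManagementAbstraction:
    def __init__(self, mgmt_ip):
        self.mgmt_ip = mgmt_ip   # single management IP for the whole system
        self.components = []     # router, remote nodes, EQAMs, ...

    def add_component(self, kind, ident):
        self.components.append({"kind": kind, "id": ident})

    def as_single_element(self):
        """What the OSS/BSS sees: one integrated CCAP at one address."""
        return {"type": "ccap", "mgmt-ip": self.mgmt_ip,
                "component-count": len(self.components)}

cma = CcapManagementAbstraction("10.0.0.1")
cma.add_component("router", "rtr-1")
cma.add_component("remote-node", "rpd-7")
cma.add_component("eqam", "eqam-3")
```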

7.2 CMA for Distributed CCAP Architectures

Distributed CCAP architectures define a different CMTS architecture in which the DOCSIS MAC and PHY can be located in different parts of the access network: at the headend or at a fiber node. The CMA provides an abstraction of the underlying distributed architecture. A CMA performs the following functions:
• Emulates an integrated CCAP for the upper layers of the architecture, as mentioned above (e.g., the CMA IP address can be the CCAP management IP address).
• Supports an SNMP agent and a command line interface to configure the CCAP if needed.
• Supports network virtualization.
• Emulates CM behavior for the OSS/NMS, via the concept of Virtual CMs.
• Supports self-management (e.g., self-configuration and self-healing) for a distributed CCAP system.
The CMA can be integrated in the headend with CMTS/OLT/router equipment, or can be connected to that equipment over an L2 or L3 network. For example, the CMA can run in a virtual machine in a data center that is connected to the access network equipment via an L3 network. The interface between the CMA and the underlying distributed CCAP architecture is left to vendor implementation. The management of a distributed CCAP involves a set of concepts. It may involve re-partitioning the functions traditionally performed in an integrated CCAP system and distributing them to other devices in the network. This is coupled with a software application that acts as the "controller" of this distributed system. The components of this new CCAP system can be physical (edge router, fiber node, etc.) and/or virtual (an application or Virtual Machine (VM) running on an x86 server). This new CCAP architecture will benefit from a new way to provision and manage the device and services on the DOCSIS access network, as described in Section 6.


8 SELECTED USE CASE WORKFLOWS

Several service workflows / use cases in the cable operator's network were studied with the purpose of determining how these services would be enabled, provisioned, and managed within the SDN architecture developed so far. The use cases studied were High Speed Data, Layer 2 Virtual Private Network (L2VPN), Third Party ISP Access (TPIA), Lawful Intercept, Voice, and IPTV. The approach was to study the current practice of setting up each service type, then to investigate how SDN could streamline various procedures in order to reduce both setup time and unintended errors, and to introduce a level of software programmability which is not present today. The information exchanged between network components was gathered for the purpose of developing information models and captured as UML diagrams. From a network management point of view, the SDN controller, to a large extent, replaces the element management and part of the network management, but still sits below the service and business logic. The role of the SDN controller in many cases is to take a generic service definition (e.g., provide a customer with a 100 Mbps service) and translate it to a device-specific configuration. For example, in the DOCSIS case the SDN controller takes a generic service creation request (100 Mbps speed) and translates it to a DOCSIS service flow with a sustained rate of 100 Mbps, and upstream service flows with other DOCSIS-specific parameters.
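The intent-to-DOCSIS translation just described might look like the following sketch. The key names are illustrative assumptions, not defined by DOCSIS or this report; the point is only that a technology-neutral speed becomes technology-specific service-flow parameters.

```python
# Hypothetical sketch: generic "N Mbps" intent -> DOCSIS-style parameters.

def translate_hsd_intent(downstream_mbps, upstream_mbps):
    """Map a technology-neutral speed intent to DOCSIS service-flow params."""
    return {
        "downstream-service-flow": {
            "direction": "downstream",
            # Sustained rate is expressed in bits per second.
            "max-sustained-rate-bps": downstream_mbps * 1_000_000,
            "scheduling-type": "best-effort",
        },
        "upstream-service-flow": {
            "direction": "upstream",
            "max-sustained-rate-bps": upstream_mbps * 1_000_000,
            "scheduling-type": "best-effort",
        },
    }

config = translate_hsd_intent(100, 10)
```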

8.1 HSD – Bring Your Own Cable Modem

High Speed Data (HSD) service is one of the most common services when a user subscribes to an MSO network. Figure 6 shows the current workflow for HSD using a customer-provided CM. The Customer Service Representative (CSR) is responsible for gathering customer and device information and entering that information into the BSS/OSS system. The current approach requires the CSR to be a middleman between the customer and network components such as BSS/OSS.

Figure 6 - HSD Workflow Today


Figure 7 demonstrates the benefit of using SDN in this use case. A customer care web portal replaces the CSR. This web portal collects customer input and then forwards it to the BSS/OSS system. Upon receiving the customer request, the BSS/OSS then instructs the SDN controller to set up network components, such as the CM and the CMTS, for the ordered service. The information needed for this service setup is listed in Table 1.

Figure 7 - HSD Workflow with SDN

Table 1 - HSD Workflow Action and Information Needed

Each row below lists the Action, any Other Information involved, the Information for the YANG Model (for the SDN Controller), and the Method (Static versus Dynamic):
• Purchase cable modem (CM). Other information: none. YANG model information: N/A.
• CM initiation and DHCP register (CM); register CM. Other information: CM capabilities; basic CM configuration. YANG model information: CM MAC address; CM IP; MAC and IP. Method: Static.
• DHCP (CPE); register CPE. Other information: IP address(es) or IPv6 prefixes allocated. YANG model information: MAC and IP; index of CM MAC, CPE MAC. Method: Dynamic.
• Enter order for HSD; enter equipment ID with account; validate customer. Other information: CM MAC; account number; payment information; customer's subscriber service template tiers.
• Update service level. YANG model information: service ID versus customer ID (CM MAC); tier of service (translate to DOCSIS QoS parameters); subscriber management (Max CPE, max number of IP); items from a CM configuration file (SNMP, classifiers, SFs, UDCs, etc.); other miscellaneous parameters; service attributes. Method: Dynamic.
• Update services on CM. Other information: HSD service active. YANG model information: DOCSIS-specific parameters; maximum channels. Method: Dynamic.

8.2 L2VPN

L2VPN service is used mainly by businesses to communicate across geographically separate campuses. Figure 8 shows the process of ordering an L2VPN service on the DOCSIS network and setting up the network path from the CM. Currently, a new L2VPN service order requires a CSR to enter information into the BSS/OSS system. Upon receiving such an order, a network engineer then configures the NSI path and creates a configuration file to be downloaded to the corresponding CMs.

Figure 8 - L2VPN Workflow Today

One major issue with the current workflow model is that it comprises two disparate processes: 1) the edge/core side, where a pseudo-wire is configured; and 2) the access side, where the CMTS is located. This process: a) takes time, since each separate organization creates and responds to work tickets; and b) is error prone, since it involves the manual exchange of data. SDN improves the workflow by "orchestrating" the two sides of the network and eliminating manual interventions. Figure 9 shows how SDN can be used for such a service order and setup. Similar to the HSD use case, a web portal can replace the CSR, and a corresponding SDN application configures the L2VPN path on the NSI side. The information needed for this service setup is listed in Table 2.


Figure 9 - L2VPN with SDN

SDN improves the process by dynamically associating a service flow with an L2VPN pseudo-wire. This means that a CM does not need to be rebooted in order to activate an L2VPN service. Ideally, this can be done without changes to PCMM; the DOCSIS service flow would be brought up as a standard flow (using the ODL PCMM/COPS plug-in) and the NSI-side pseudo-wire would be brought up using other SDN tools (where even CLI is an acceptable interim way to achieve that). The controller would bind the DOCSIS service flow to the pseudo-wire automatically, thereby achieving a dynamic setup of an L2VPN service. Note that, in this report, only the attachment of the pseudo-wire to the CMTS is covered; there is also an end-to-end setup of the pseudo-wire that can be controlled by SDN; however, that is outside the scope of this document.

Table 2 - L2VPN Information Exchange

Each row below lists the Action, any Other Information involved, the Information for the YANG Model (for the SDN Controller), and the Method (Static versus Dynamic):
• Order L2VPN service; order entry and approval. Other information: customer contacts/billing info/technical contacts; circuit locations; bandwidth; SLA; MAC address. YANG model information: none. Method: N/A.
• Provide circuit parameters. YANG model information: CM MAC address (one end or both); CMTS address (one end or both); type of service (EPL, EVPL, etc.); redundancy/protection. Method: Dynamic.
• Configure L2VPN path on NSI; configure service parameters and encapsulation. Other information: circuit ID/endpoint IDs; network topology and BGP information for core interaction. YANG model information: next hop / path setup; MEP ID (SOAM); L2VPN encapsulation for the CM; encapsulation type (VLAN, MPLS, etc.); label (provisioning application or auto-assigned).
• Configure service flow. Other information: DOCSIS-specific parameters. YANG model information: L2VPN SF parameters; tier of service (translate to DOCSIS QoS parameters and classifiers); subscriber management (Max CPE, max number of IP); other parameters for the CM; max channels, etc. Method: Dynamic.
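The dynamic binding of a DOCSIS service flow to an NSI pseudo-wire described in this section can be sketched as follows. All identifiers and structures are assumptions for illustration; in practice the flow would be created via PCMM and the pseudo-wire via whatever southbound tool is available.

```python
# Illustrative sketch of the three steps: flow up, pseudo-wire up, bind.

def setup_l2vpn(cm_mac, pw_id):
    # 1) Bring up a standard DOCSIS service flow (PCMM/COPS in practice).
    sf = {"id": "sf-" + cm_mac.replace(":", ""), "cm-mac": cm_mac}
    # 2) Bring up the NSI-side pseudo-wire (any southbound tool, even CLI).
    pw = {"id": pw_id, "state": "up"}
    # 3) Bind the two so L2VPN traffic on the flow enters the pseudo-wire;
    #    no CM reboot is involved at any step.
    return {"service-flow": sf, "pseudo-wire": pw,
            "binding": (sf["id"], pw["id"])}

svc = setup_l2vpn("00:11:22:33:44:55", "pw-l2vpn-42")
```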

8.3 Third Party ISP Access (TPIA)

The Canadian Radio-Television and Telecommunications Commission (CRTC) published a rule that mandates Canadian operators to offer network access to third-party resellers. This rule was instituted to discourage monopoly and to offer consumers more provider choices. TPIA has allowed resellers to provide Internet service without significant investment in network infrastructure. Figure 10 shows a high-level overview of TPIA in a network, in which three resellers offer Internet services over a single MSO network.

Figure 10 - Virtualized Network Topology

Figure 11 shows the current practice of setting up a TPIA service, which requires collaborative effort between business teams and engineering teams throughout the entire order process.


Figure 11 - TPIA Workflow Today

By using SDN as shown in Figure 12, an SDN-aware application running as a web portal can replace the business team for collecting the necessary information from the customer and the external vendors. Furthermore, an SDN controller can configure the various network elements, which is otherwise a manual process done by the engineering team.


Figure 12 - TPIA with SDN

TPIA can utilize network virtualization technology to slice the physical network into several virtual networks, as shown in Figure 13 below. Each virtual network can be wholesaled to a different retailer (service provider). Each virtual network has its own data plane and control/management plane, and a virtual network can scale to include geographically dispersed locations. Virtual networks have the following advantages for the MSO and the service providers:
• Infrastructure sharing reduces TCO for the MSO
• Brings a new stream of revenue for the MSO
• Saves OPEX for the SP, because service provisioning is simplified
• Eases SP network implementation, due to the use of the underlying MSO physical network
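The slicing idea above can be sketched minimally: one physical network, several per-reseller virtual networks, each with its own (nominal) data and control/management planes. The structure and naming are assumptions for illustration.

```python
# Illustrative sketch: slice one underlay into per-reseller virtual networks.

def slice_network(physical, resellers):
    """Give each reseller its own virtual network over the shared underlay."""
    return {name: {"underlay": physical,
                   "data-plane": name + "-dp",
                   "control-plane": name + "-cp"}
            for name in resellers}

slices = slice_network("mso-access", ["isp-a", "isp-b", "isp-c"])
```

All three slices share the same underlay (the MSO's infrastructure-sharing benefit), while each reseller sees only its own planes.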


Figure 13 - Slicing Physical Network into Multiple Virtual Networks

Figure 14 and Figure 15 show a detailed view of the TPIA workflow using SDN, starting from initial customer requests to the final establishment of the circuit. The SDN controller and the SDN application mimic the original setup procedure that is conducted by the business team and the engineering team.

Figure 14 - TPIA Workflow with SDN, Part 1


Figure 15 - TPIA Workflow with SDN, Part 2

Table 3 - Information Exchanged for TPIA

Each row below lists the Action, any Other Information involved, the Information for the YANG Model (for the SDN Controller), and the Method (Static versus Dynamic):
• Service request; order validation. Other information: customer address; contract speed; IP requirements. YANG model information: none. Method: N/A.
• Configure circuit. YANG model information: IP block; CMTS ID; contract speed. Method: Dynamic.
• Create circuit. YANG model information: CPE IP block; uplink VLAN; route. Method: Dynamic.
• Device configuration. YANG model information: DHCP server (add CPE scope); CMTS (CPE IP block, uplink, VLAN); regional switch (L2 circuit ID, VPLS and/or VLAN); POI router (L3 protocol and VRF configuration). Method: Dynamic.
• Verification of circuit. YANG model information: start and stop test; report test. Method: Dynamic.

8.4 Lawful Intercept

CableLabs has defined two specifications with regard to lawful intercept: the Cable Broadband Intercept Specification [CBIS] for data, and the PacketCable 1.5 Electronic Surveillance Specification [PKT-ESP] for voice. The following sections describe how to use SDN in these two cases.


8.4.1 Lawful Intercept – Data

Figure 16 illustrates the CBIS functionality surrounding data transport. The three main components are: 1) Mediation, 2) Broadband Intercept, and 3) Collection.

Figure 16 - CBIS Interfaces

Figure 17 offers a functional breakdown of CBIS operation surrounding data transport, starting from a user end point in the MSO network to the collection function end point in the Law Enforcement Agency (LEA) network. CBIS has enough information to create identification tags for the expected data streams. For IPv4, it uses a 5-tuple consisting of the source and destination IP addresses, the source and destination ports, and the protocol field. For IPv6, it adds a flow label to the 5-tuple. When user data is forwarded to the mediation function through the access function, it can be filtered down to subject headers only or passed unaltered. Data that matches the identification tag is formatted and passed to the Broadband Intercept Function, and then forwarded to the Collection Function for further investigation by the LEA.
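The identification-tag match described above can be sketched as follows: for IPv4 the tag is the 5-tuple, and for IPv6 a flow label is added. Field names are assumptions for illustration only.

```python
# Illustrative sketch of a CBIS-style identification tag and its match.

def make_tag(src_ip, dst_ip, src_port, dst_port, proto, flow_label=None):
    tag = {"src": src_ip, "dst": dst_ip, "sport": src_port,
           "dport": dst_port, "proto": proto}
    if flow_label is not None:       # IPv6 case: 5-tuple plus flow label
        tag["flow-label"] = flow_label
    return tag

def matches(packet, tag):
    """True only if every field of the tag matches the packet."""
    return all(packet.get(k) == v for k, v in tag.items())

tag = make_tag("192.0.2.10", "198.51.100.5", 5060, 5060, "udp")
pkt = {"src": "192.0.2.10", "dst": "198.51.100.5",
       "sport": 5060, "dport": 5060, "proto": "udp", "len": 120}
```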

Figure 17 - CBIS Logical Network

As shown in Figure 18, current lawful intercept practice requires an operator to look up specific customer information in a database to identify the corresponding CM and CMTS. Once a user is identified, the operator configures the access function, the mediation function, and the broadband intercept function for data capture.


Figure 18 - Lawful Intercept Workflow Today

As shown in Figure 19, with an SDN controller and a corresponding application, the configuration of the access, mediation, and broadband intercept functionality is accomplished by the SDN controller, acting on instructions from an SDN application. The application replaces the aforementioned operator in the role of collecting user information, querying the database, and identifying the corresponding CM and CMTS. The information needed for this service setup is listed in Table 4.

Figure 19 - Lawful Intercept Workflow with SDN


Table 4 - Information Exchanged for CBIS

Each row below lists the Action and the associated information (Other Information, and Information for the YANG Model for the SDN Controller):
• LEA to operator. Other information: court document; name and address of customer.
• Operator to SDN Controller. YANG model information: enable lawful intercept for user; 5-tuple to identify the user; time duration; address of mediation function.
• SDN Controller to Mediation Function. YANG model information: enable lawful intercept for user; format of data (full packets or packet headers only); out-of-band messages; hashes.
• SDN Controller to Broadband Intercept Function. YANG model information: enable lawful intercept for user; buffer size; time to buffer; LEA address.

8.4.2 Lawful Intercept – Voice

The two subcategories of lawful intercept for voice are: 1) the signaling information (call data), and 2) the media (call content). Figure 20 illustrates the intercept points for both types.

Figure 20 - Lawful Intercept Signaling and Media

Figure 21 shows the workflow for voice lawful intercept. The intercept request comes from the LEA to the operator and the operator looks up the user. Once the user is identified, the operator configures different network elements for signaling and media capture.


Figure 21 - Today’s Lawful Intercept Workflow for Voice

Figure 22 shows lawful intercept with an SDN application (Operator Lawful Intercept application) and an SDN controller. Similar to the data intercept case, the SDN application and controller replaces the operator functionality. The SDN controller configures different network elements for voice capture, with information obtained from LEA via the Operator Lawful Intercept application. The information needed for such service setup is listed in Table 5.

Figure 22 - Lawful Intercept Workflow for Voice with SDN


Table 5 - Information Exchanged for Lawful Intercept Voice

Each row below lists the Action and the associated information (Other Information, and Information for the YANG Model for the SDN Controller):
• LEA to operator. Other information: court document; name, address, or phone number of customer.
• Operator to SDN controller. Other information: name, address, or phone number of customer.
• SDN controller to CMTS (Access Function). YANG model information: enable lawful intercept for user; 5-tuple to identify the user; time duration; address of delivery function.
• SDN controller to PC 2.0 core. YANG model information: enable lawful intercept for user; 5-tuple to identify the user; time duration; address of delivery function.
• SDN controller to Delivery Function. YANG model information: enable lawful intercept for user; format of data (CC, call content; CII, call identifying information); address of call function.

8.5 Voice

PacketCable 1.5 and PacketCable 2.0 specifications were developed by CableLabs and adopted for voice transport services over cable networks. There are differences between these two suites of specifications, and the following sections describe the two workflows accordingly.

8.5.1 PacketCable 1.5

Figure 23 shows the information exchanged between the MTAs, CMTSs, and RKSs on the origin and destination sides to establish a call.

Figure 23 - PacketCable Call Setup Workflow

Figure 24 shows an SDN application and an SDN controller replacing the RKS function.


Figure 24 - PacketCable 1.5 Call Setup Workflow with SDN

Figure 25 - PacketCable Event Message Architecture with RKS

As shown in Figure 24, for PacketCable 1.5, the progression from call start to call establishment requires the exchange of information between MTAs, CMTSs, and RKSs on both the origin and destination sides. As shown in Figure 25, the RKSs shoulder the responsibility of communicating with other components, such as billing systems and network elements. These steps make the RKS a suitable candidate for replacement by SDN and a corresponding SDN application. An SDN controller, by design, has topology, control, and status information for the underlying network elements. An SDN application can leverage such information and communicate with the billing system as needed for OSS/BSS. This allows the separation of network configuration information and billing information. Thus, network processing can be standardized using known protocols (e.g., OpenFlow, etc.), and future migration of the network configuration element will be easier. In the meantime, it also allows communication between the SDN application and the existing billing system to be less bound by the underlying network changes. This also allows seamless communication with other SDN controllers outside of the current MSO domain. Figure 26 shows this design philosophy, and Table 6 shows the information needed for such a service setup.


Figure 26 - Event Message Architecture with SDN Replacing RKS

Table 6 - Information Exchanged for PacketCable 1.5

Each row below lists the Action, any Other Information involved, the Information for the YANG Model (for the SDN Controller), and the Method (Static versus Dynamic):
• QoS (SDP). Other information: SLA (bandwidth, priority, etc.); MAC address; IP address. YANG model information: MTA MAC addresses; MTA IP addresses; audio codec type; audio codec parameters; setup latency and post-pickup delay. Method: N/A.
• Service related. Other information: call party name and number; device capabilities; vendor specific. YANG model information: phone number; dynamic policy changes; billing correlation ID; multiple concurrent sessions; dynamic adjustment of QoS parameters. Method: N/A.
• DOCSIS QoS parameters. Other information: type of service. YANG model information: CM MAC address; CMTS MAC address. Method: Dynamic.


8.5.2 PacketCable 2.0

In the case of PacketCable 2.0, a voice call setup is accomplished via the PCMM protocol. The PCRF is responsible for setting up a flow and negotiating QoS parameters with the CMTS and the P-CSCF. Figure 27 shows the steps that are needed to make such a call.

Figure 27 - PacketCable 2.0 Voice Call Setup Workflow Today

By replacing the PCRF with an SDN voice application and an SDN controller, as seen in Figure 28, the SDN voice application serves as a translator between the P-CSCF and the SDN controller for the QoS requests and replies. The SDN controller can then set up the proper flow via DSx messages. The information needed for this service setup is listed in Table 7.
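The translation step above might be sketched as follows: the SDN voice application turns a QoS request from the P-CSCF into flow parameters the controller would install via DSx messages. Field names and the simplified UGS parameterization are illustrative assumptions, not taken from the PacketCable or DOCSIS specifications.

```python
# Hedged sketch: P-CSCF QoS request -> DOCSIS-style flow parameters.

def translate_voice_qos(bw_kbps, cm_mac):
    return {
        "cm-mac": cm_mac,
        # Upstream voice flows typically use UGS scheduling in DOCSIS;
        # the grant parameters are simplified here to a single rate.
        "upstream-flow": {"scheduling-type": "ugs",
                          "grant-rate-bps": bw_kbps * 1000},
        "downstream-flow": {"max-sustained-rate-bps": bw_kbps * 1000},
    }

qos = translate_voice_qos(64, "aa:bb:cc:dd:ee:ff")
```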

Figure 28 - PacketCable 2.0 Voice Call Setup Workflow with SDN


Table 7 - Information Exchanged for PacketCable 2.0

Each row below lists the Action, the Information for the YANG Model (for the SDN Controller), and the Method (Static versus Dynamic):
• SDP. YANG model information: TSpec (bucket depth (b), bytes; bucket rate (r), bytes/second; peak rate (p), bytes/second; minimum policed unit (m), bytes; maximum datagram size (M), bytes); RSpec (reserved rate (r), bytes/second; slack term (S), microseconds); IP flow (direction, in or out; source and destination IP address and port; protocol; Framed IP AVP). Method: Dynamic.
• DOCSIS QoS parameters. YANG model information: subscriber ID; CM MAC address; CMTS address; GateSpec (ToS DSCP markings); classifiers; identity (MAC address, CM management IP, and SF). Method: Dynamic.

8.6 DSG

The DOCSIS Set-top Gateway (DSG) specification defines a method to transport set-top box commands and configurations over a DOCSIS network. Figure 29 provides both a physical and logical view of the various components needed for this type of scenario.


Figure 29 - Overview of Current DOCSIS Set-top Gateway System

As can be seen in Figure 30, the current DSG setup requires an operator to configure the DSG server, agent, and client separately. This is labor-intensive and prone to human error. With a DSG application running on top of an SDN controller, as illustrated in Figure 31, an operator can instruct the SDN controller to configure the DSG server, agent, and client in one pass. Table 8 lists the information elements and actions the SDN controller needs to accomplish this task.

Figure 30 - DSG Setup Workflow Today


Figure 31 - DSG Setup Workflow with SDN

Table 8 - Information Exchanged for DSG

Action: SDN controller to DSG server connection setup
    Information for YANG Model (for SDN Controller): DSG tunnel setup; DSG server IP address and UDP port number (source and destination)

Action: SDN controller to DSG agent
    Other Information: Multiple DSG servers
    Information for YANG Model (for SDN Controller): DSG tunnel setup; DSG rule setup (Annex A MIBs); Data Carrier Detect (DCD)
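The "one pass" configuration described above can be sketched as a single operator intent that the controller fans out into per-component configurations; component and field names here are hypothetical, not from the DSG specification:

```python
# Hypothetical sketch of one-pass DSG setup: the operator hands the SDN
# controller a single intent, and the controller derives the configuration
# for the DSG server, agent, and client. All names are illustrative.

def build_dsg_configs(intent: dict) -> dict:
    """Fan a single DSG intent out into per-component configurations."""
    tunnel = {
        "server_ip": intent["server_ip"],
        "udp_src_port": intent["udp_src_port"],
        "udp_dst_port": intent["udp_dst_port"],
    }
    return {
        "dsg_server": {"tunnel": tunnel},
        # DSG rules on the agent correspond to the Annex A MIB objects.
        "dsg_agent": {"tunnel": tunnel, "rules": intent["rules"]},
        # The client learns its tunnel parameters from the DCD message.
        "dsg_client": {"dcd": True},
    }
```

Pushing all three configurations in one operation removes the separate, error-prone manual steps shown in Figure 30.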

8.7 IPTV

As shown in Figure 32 and Figure 33, multicast or unicast service setup involves many interactions among different components in the network. In particular, the PCRF and CMTS, communicating via PCMM, are responsible for setting up a DOCSIS service flow and the corresponding QoS parameters. An SDN controller can provide this functionality just as in the other use cases, allowing a single generic SDN interface in place of proprietary protocols such as PCMM, as well as easier interaction with other SDN networks. Figure 34 and Figure 35 show this approach. The information needed for such a service setup is listed in Table 9.


Figure 32 - IPTV Setup Workflow Today – Multicast


Figure 33 - IPTV Setup Workflow Today – Unicast


Figure 34 - IPTV Setup Workflow with SDN – Multicast


Figure 35 - IPTV Setup Workflow with SDN – Unicast


Table 9 - Information Exchanged for IPTV

Action: Program requests
    Other Information: None
    Method (Static versus Dynamic): Static

Action: IGMP Join message
    Information for YANG Model (for SDN Controller):
        NSI: Subscriber ID; Duration; Extended classifier; Minimum reserved rate; PCRF Service Name; AMID
        DOCSIS: Extended classifier; DOCSIS parameters
    Method (Static versus Dynamic): Dynamic
    Protocol: NSI: HTTP (wget); DOCSIS: COPS (Gate Set)

Action: Channel Leave
    Information for YANG Model (for SDN Controller): NSI and DOCSIS: Gate Delete (Gate ID, Tear Down, Gate Delete Acknowledge)
    Method (Static versus Dynamic): Dynamic
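As a sketch of how channel-change events might map onto the actions in Table 9, the fragment below builds Gate Set and Gate Delete requests from an IGMP join/leave. The field names are illustrative assumptions, not PCMM message definitions:

```python
# Hypothetical mapping from IGMP join/leave events to the gate actions
# listed in Table 9. Field names are illustrative only.

def gate_set_from_join(subscriber_id: str, group_ip: str,
                       min_reserved_rate_bps: int) -> dict:
    """Build a gate-set style request when a subscriber joins a channel."""
    return {
        "action": "gate-set",
        "subscriber_id": subscriber_id,
        # The extended classifier selects the multicast video traffic.
        "extended_classifier": {"dst_ip": group_ip, "protocol": "udp"},
        "min_reserved_rate_bps": min_reserved_rate_bps,
    }

def gate_delete(gate_id: int) -> dict:
    """Tear the flow down on channel leave."""
    return {"action": "gate-delete", "gate_id": gate_id}
```

With an SDN controller in the path, these requests travel over a single generic interface rather than COPS/PCMM.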


9 DATA MODELS

A data model is an organized collection of information elements together with a description of how those elements relate to each other. The information elements are parameters that can be configured for a specific device or service; the data model defines the requirements and parameters needed to set up a service. The data models that the SDN controller supports and uses are critical because they form the basis of how new services and applications will be implemented. Once a data model has been created, it is used by the SDN controller and the network devices as shown in Figure 36 below. The SDN controller implements the data model to expose APIs toward the network devices it controls, and the network devices implement the same data model to complete the device and service configuration.

Figure 36 - Use of Data Model in SDN Architecture

One of the enablers of a software-programmable network is the configuration and statistics information described in the previous section, which is exchanged between the SDN controller and each of the network devices. An information model or data model needs to be developed for every network device that is to be programmed. This data model includes all of the data elements, such as device settings or service settings, that need to be read or written by a controller to enable devices and services across the network. It includes the elements needed to configure the device on boot-up (day-zero config). More importantly, the data model also includes the data elements that represent the creation of services over those network devices (dynamic config). For data modeling, YANG is the choice of the networking industry. YANG is the data modeling language used to model configuration and state data manipulated by the NETCONF protocol (see [RFC 6241]), remote procedure calls, and notifications. It is a "human-friendly" modeling language for defining the semantics of operational data, configuration data, notifications, and operations. This document defines data models for the DOCSIS network, L2VPN, TPIA, Lawful Intercept, and a generic flow model. These are shown in Figure 37, Figure 39, Figure 40, Figure 41, and Figure 42.
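The shared-model idea in Figure 36 can be illustrated with a toy schema that both sides program against: the controller renders configuration from the model, and the device consumes the same model. This stand-in is deliberately simplified and is not one of the report's YANG models:

```python
# Toy stand-in for the shared data model of Figure 36. The class and field
# names are illustrative assumptions, not taken from the report's models.
from dataclasses import dataclass, asdict

@dataclass
class ServiceFlowConfig:
    """Shared model covering day-zero and dynamic-config fields."""
    flow_id: int
    direction: str        # "upstream" | "downstream"
    max_rate_bps: int

def controller_render(cfg: ServiceFlowConfig) -> dict:
    """What the controller would push southbound (e.g., over NETCONF)."""
    return asdict(cfg)

def device_apply(payload: dict) -> ServiceFlowConfig:
    """The device re-instantiates the same model to apply the config."""
    return ServiceFlowConfig(**payload)
```

Because both ends implement the identical model, a configuration produced by one side can always be validated and consumed by the other.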


9.1 DOCSIS IP HSD Provisioning

Figure 37 - DOCSIS Data Model


9.1.1 Packet Classifier

Figure 38 - Packet Classifier Data Model


9.1.2 YANG Model for DOCSIS Data Model

module DOCSISConfig { namespace "urn:cablelabs:params:xml:ns:yang:sdn"; prefix DOCSISConfig; import ietf-inet-types { prefix inet; revision-date 2010-09-24; } import ietf-yang-types { prefix yang; revision-date 2010-09-24; } organization "Cable Television Laboratories, Inc."; contact "Postal: Cable Television Laboratories, Inc. 858 Coal Creek Circle Louisville, Colorado 80027-9750 U.S.A. Phone: +1 303-661-9100 Fax: +1 303-661-9199 E-mail: [email protected]";

description ""; reference ""; revision 2015-02-25 { description ""; }

typedef ethernet-protocol-id-type { type enumeration { enum other { value 1; description "The value of other is used when a vendor-extension has been implemented for this attribute."; } enum none { value 2; description "A value of 'none' means that the rule does not use the layer 3 protocol type as a matching criterion."; } enum ethertype { value 3; description "A value of 'ethertype' means that the rule applies only to frames that contain an EtherType value. Ethertype values are contained in packets using the DEC-Intel-Xerox (DIX) encapsulation or the [RFC 1042] Sub-Network Access Protocol (SNAP) encapsulation formats."; } enum dsap { value 4; description "A value of 'dsap' means that the rule applies only to frames using the IEEE802.3 encapsulation format with a Destination Service Access Point (DSAP) other than 0xAA (which is reserved for SNAP)."; } enum mac { value 5; description "A value of 'mac' means that the rule applies only to MAC management messages."; } enum all { value 6; description "A value of 'all' means that the rule matches all Ethernet frames. If the Ethernet frame contains an 802.1P/Q Tag header (i.e., EtherType 0x8100), this attribute applies to the embedded EtherType field within the 802.1p/Q header."; } }


description "This enumerates the set of formats of the layer 3 protocol ID in the Ethernet frame."; }

grouping PacketClassifierEncodings-group { list PacketClassifierEncodings { key "Reference Identifier"; leaf Reference { type int8; description ""; } leaf Identifier { type uint16; description ""; } leaf SfReference { type uint16; description ""; } leaf SfIdentifier { type uint32; description ""; } leaf RulePriority { type uint8; description ""; } choice Encodings { case MplsPacketClassifierEncodings { leaf TrafficClass { type uint8; description ""; } leaf Label { type uint32; description ""; } } case Ipv6PacketClassifierEncodings { leaf TCRangeandMask { type int32; description ""; } leaf IpFlowTable { type int32; description ""; } leaf NextHeaderType { type int16; description ""; } leaf SourceAddressV6 { type inet:ipv6-address; description ""; } leaf SourcePrefixLengthV6 { type int8; description ""; } leaf DestinationAddressV6 { type inet:ipv6-address; description ""; } leaf DestinationMaskV6 { type int8; description ""; } } case Ipv4PacketClassifierEncodings { leaf TosRangeAndMask {


type uint32; description ""; } leaf IpProtocol { type uint16; description ""; } leaf SourceAddressV4 { type inet:ipv4-address; description ""; } leaf SourcePrefixLengthV4 { type int8; description ""; } leaf DestinationAddressV4 { type inet:ipv4-address; description ""; } leaf DestinationMaskV4 { type int8; description ""; } } case IcmpV4V6PacketClassifierEncodings { leaf TypeStart { type int32; description ""; } leaf TypeEnd { type int32; description ""; } } case TcpUdpPacketClassifierEncodings { leaf SourcePortStart { type int16; description ""; } leaf SourcePortEnd { type int16; description ""; } leaf DestPortStart { type int16; description ""; } leaf DestPortEnd { type int16; description ""; } } case EthernetPacketClassifierEncodings { leaf DestinationMacAddress { type yang:mac-address; description ""; } leaf SourceMacAddress { type yang:mac-address; description ""; } leaf EtherDsapMacType { type int32; description ""; } } } } } grouping SubscriberMgmt-group { container SubscriberMgmt {


leaf SubMgmtMacAddress { type yang:mac-address; description ""; } leaf SubMgmtIpV4Address { type inet:ipv4-address; description ""; } leaf SubMgmtIpV6Address { type inet:ipv6-address; description ""; } leaf SubMgmtPrefix { type inet:ipv6-address; description ""; } leaf SubMgmtPrefixLength { type int8; description ""; } } } grouping DsServiceFlow-group { list DsServiceFlow { key DsFlowId; leaf DSMaxSustainedTraffRate { type int32; description ""; } leaf DsPeakTraffRate { type int32; description ""; } leaf MaxDsLatency { type int32; description ""; } leaf DsResequencing { type boolean; description ""; } leaf DsFlowReference { type int8; description ""; } leaf DsServiceClassName { type string; description ""; } leaf DsFlowId { type int32; description ""; } leaf DsQosParamSetType { type int8; description ""; } leaf DsSFReqAttrMask { type int32; description ""; } leaf DsServiceId { type int16; description ""; } leaf DsSFForbidAttrMask { type int32; description ""; } leaf DsSFAttrAggrRuleMask { type int32;


description ""; } leaf DsAppId { type int32; description ""; } leaf DsAggrSFRef { type int16; description ""; } leaf DsMESPRef { type int16; description ""; } leaf DsTrafficPriority { type int8; description ""; } leaf DsMaxTraffBurst { type int32; description ""; } leaf DsMinResTraffRate { type int32; description ""; } leaf DsAssumedMinResRatePacketSize { type int8; description ""; } leaf DsTimeOutActiveQoSParam { type int8; description ""; } leaf DsTimeOutAdmitQoSParams { type int8; description ""; } leaf DsIPTOSOverwrite { type string; description ""; } leaf DsMinBuffer { type int32; description ""; } leaf DsTargetBuffer { type int32; description ""; } leaf DsMaxBuffer { type int32; description ""; } leaf DsSFtoIATCProfileNameRef { type string; description ""; } leaf DsSfAqmDisable { type int8; description ""; } leaf DsSfAqmLatencyTarget { type int8; description ""; } } } grouping UsServiceFlow-group { list UsServiceFlow { key UsFlowId;


leaf UsPeakTraffRate { type int32; description ""; } leaf UsMaxConcatenatedBurst { type int16; description ""; } leaf UsSfSchedulingType { type int8; description ""; } leaf UsFlowReference { type int8; description ""; } leaf UsServiceClassName { type string; description ""; } leaf UsFlowId { type int32; description ""; } leaf UsServiceId { type int16; description ""; } leaf UsQosParamSetType { type int8; description ""; } leaf UsSFReqAttrMask { type int32; description ""; } leaf UsSFForbidAttrMask { type int32; description ""; } leaf UsSFAttrAggrRuleMask { type int32; description ""; } leaf UsAppId { type int32; description ""; } leaf UsAggrSFRef { type int16; description ""; } leaf UsMESPRef { type int16; description ""; } leaf UsTrafficPriority { type int8; description ""; } leaf UsMaxTraffBurst { type int32; description ""; } leaf UsMinResTraffRate { type int32; description ""; } leaf UsAssumedMinResRatePacketSize { type int8; description "";


} leaf UsTimeOutActiveQoSParam { type int8; description ""; } leaf UsTimeOutAdmitQoSParams { type int8; description ""; } leaf UsIPTOSOverwrite { type string; description ""; } leaf UsMinBuffer { type int32; description ""; } leaf UsTargetBuffer { type int32; description ""; } leaf UsMaxBuffer { type int32; description ""; } leaf UsSFtoIATCProfileNameRef { type string; description ""; } leaf UsSfAqmDisable { type int8; description ""; } leaf UsSfAqmLatencyTarget { type int8; description ""; } leaf UsReqTxPolicy { type int32; description ""; } leaf UsNominalPollInterval { type int32; description ""; } leaf UsToleratedPollJitter { type int32; description ""; } leaf UnsolicitedGrantSize { type int16; description ""; } leaf UsNominalGrantInterval { type int32; description ""; } leaf UsToleratedGrantJitter { type int32; description ""; } leaf UsGrantsPerInterval { type int8; description ""; } leaf UsUnsolicitedGrantTimeRef { type int32; description ""; } leaf UsMultToContentionReqBackoffWin { type int8;


description ""; } leaf UsMultToNumOfBytesReq { type int8; description ""; } } } container DocsisCfg { list DocsisCfg { key ChannelId; leaf DownstreamFrequency { type int32; description ""; } leaf ChannelId { type int8; description ""; } leaf NetworkAccessCtrl { type boolean; default false; description ""; } leaf VendorId { type string; description ""; } leaf SoftwareUpgradeFileName { type string; description ""; } leaf MaxNumCpes { type int8; description ""; } leaf TftpServerTimestamp { type int32; description ""; } leaf TftpCMIpv4Addr { type inet:ipv4-address; description ""; } leaf TftpCMIpv6Addr { type inet:ipv6-address; description ""; } leaf MaxNumClassifiers { type int32; description ""; } leaf PrivacyEnable { type boolean; default false; description ""; } leaf SubscriberMgmtControl { type string; description ""; } leaf SoftwareMgmtFilterGroups { type string; description ""; } leaf SubscriberMgmtControlMaxCpeIpv6 { type int16; description ""; } leaf EnableTestMode { type boolean;


default false; description ""; } leaf SoftwareUpgradeTftpServerIpv4Addr { type inet:ipv4-address; description ""; } leaf SoftwareUpgradeTftpServerIpv6Addr { type inet:ipv6-address; description ""; } leaf DefaultUpstreamTargetBuffer { type int16; description ""; } leaf CmUpstreamAqmDisable { type int8; description ""; } leaf MacAddressLearningControl { type boolean; description ""; } leaf MacAddressLearningHoldoffTimer { type int8; description ""; } leaf CmMacAddress { type yang:mac-address; description ""; } leaf CmIpV4Address { type inet:ipv4-address; description ""; } leaf CmIpV6Address { type inet:ipv6-address; description ""; } list EnergyMgmt { key EmId;

leaf EmId { type int16; description ""; } leaf FeatureControl { type int32; description ""; } leaf CyclePeriod { type int16; description ""; } leaf ModeIndicator { type int8; description ""; } } uses PacketClassifierEncodings-group; uses SubscriberMgmt-group; uses DsServiceFlow-group; uses UsServiceFlow-group; } } }


9.2 L2VPN Data Model

Figure 39 - L2VPN Data Model


9.2.1 YANG Model for L2VPN

module L2Vpn { namespace "urn:cablelabs:params:xml:ns:yang:sdn"; prefix L2Vpn; import ietf-inet-types { prefix inet; revision-date 2010-09-24; } import ietf-yang-types { prefix yang; revision-date 2010-09-24; } organization "Cable Television Laboratories, Inc."; contact "Postal: Cable Television Laboratories, Inc. 858 Coal Creek Circle Louisville, Colorado 80027-9750 U.S.A. Phone: +1 303-661-9100 Fax: +1 303-661-9199 E-mail: [email protected]";

description ""; reference ""; revision 2015-02-25 { description ""; }

typedef ethernet-protocol-id-type { type enumeration { enum other { value 1; description "The value of other is used when a vendor-extension has been implemented for this attribute."; } enum none { value 2; description "A value of 'none' means that the rule does not use the layer 3 protocol type as a matching criterion."; } enum ethertype { value 3; description "A value of 'ethertype' means that the rule applies only to frames that contain an EtherType value. Ethertype values are contained in packets using the DEC-Intel-Xerox (DIX) encapsulation or the [RFC 1042] Sub-Network Access Protocol (SNAP) encapsulation formats."; } enum dsap { value 4; description "A value of 'dsap' means that the rule applies only to frames using the IEEE802.3 encapsulation format with a Destination Service Access Point (DSAP) other than 0xAA (which is reserved for SNAP)."; } enum mac { value 5; description "A value of 'mac' means that the rule applies only to MAC management messages."; } enum all { value 6; description "A value of 'all' means that the rule matches all Ethernet frames. If the Ethernet frame contains an 802.1P/Q Tag header (i.e., EtherType 0x8100), this attribute applies to the embedded EtherType field within the 802.1p/Q header."; } }


description "This enumerates the set of formats of the layer 3 protocol ID in the Ethernet frame."; }

container L2Vpn { list L2Vpn { key VpnId; leaf VpnId { type uint32; description ""; } leaf ESafeDHCPSnooping { type uint32; description ""; } leaf CMInterfaceMaskSubType { type uint32; description ""; } leaf AttachmentGroupId { type binary; description ""; } leaf SourceAttachmentIndividualId { type binary; description ""; } leaf TargetAttachmentIndividualId { type binary; description ""; } leaf L2VPNSADescriptorSubType { type binary; description ""; } leaf PseudowireSignaling { type binary; description ""; } leaf DsId { type uint64; description ""; }

choice NSIEncapsulationSubType { case ET802.1Q { leaf Tag { type uint32; description ""; } } case ET802.1ad { leaf SPVLanId { type uint32; description ""; } leaf CustVLanId { type uint32; description ""; } } case ET802.1ah { leaf BackboneServiceITagTCI { type uint32; description ""; } leaf DestBEBMacAddress { type yang:mac-address; description "";


} leaf BTag { type uint16; description ""; } leaf Itag { type uint16; description ""; } leaf Ipcp { type uint8; description ""; } leaf Idei { type uint8; description ""; } leaf Iuca { type uint8; description ""; } leaf ISIDBackboneServiceInstanceId { type uint64; description ""; } leaf BTagTPId { type uint16; description ""; } leaf Bpcp { type uint8; description ""; } leaf Bdei { type uint8; description ""; } leaf Bvid { type uint16; description ""; } leaf Stpid { type uint16; description ""; } } case ETMpls { leaf PseudoWireId { type uint64; description ""; } leaf PeerIpAddrV4 { type inet:ipv4-address; description ""; } leaf PeerIpAddrV6 { type inet:ipv6-address; description ""; } leaf PseudoWireType { type uint8; description ""; } leaf BackupPseudoWireId { type uint64; description ""; } leaf BackupPeerIpAddrV4 { type inet:ipv4-address; description ""; }


leaf BackupPeerIpAddrV6 { type inet:ipv6-address; description ""; } } case ETL2TpV3 { leaf InetAddressType { type uint32; description ""; } leaf L2TIpAddrV4 { type inet:ipv4-address; description ""; } leaf L2TIpAddrV6 { type inet:ipv6-address; description ""; }

} } list BgpAttribute { key VpnId; leaf VpnId { type uint32; description ""; } leaf RouteDistinguisher { type uint64; description ""; } leaf RouteTargetImport { type uint64; description ""; } leaf RouteTargetExport { type uint64; description ""; } leaf CEIdVeId { type uint32; description ""; } leaf Attribute { type string; description ""; } } list SoamSubType { key MepId; leaf MepId { type uint32; description ""; } leaf MdLevel { type uint8; description ""; } leaf MdName { type string; description ""; } leaf ManName { type string; description ""; } leaf RemoteMepId { type uint32; description ""; } leaf RemoteMdLevel {


type uint8; description ""; } leaf RemoteMdName { type string; description ""; } leaf RemoteManName { type string; description ""; } container FaultMgmtConfig { leaf ContinuityCheck { type uint8; description ""; } leaf Loopback { type uint8; description ""; } leaf LinkTrace { type uint8; description ""; } } container PerformaceMgmtConfig { container FrameDelay { leaf FrameDelayMeasEnable { type boolean; description ""; } leaf FDOneWayTwoWay { type boolean; description ""; } leaf FDTransmissionPeriodicity { type uint32; description ""; } } container FrameLoss { leaf FrameLossMeasEnable { type boolean; description ""; } leaf FLTransmissionPeriodicity { type uint32; description ""; } } } } } } }


9.3 TPIA Data Model

Figure 40 - TPIA Data Model

9.3.1 YANG Model for TPIA

module TPIA { namespace "urn:cablelabs:params:xml:ns:yang:sdn"; prefix TPIA; import ietf-yang-types { prefix yang; revision-date 2010-09-24; } import DOCSISConfig { prefix DOCSISConfig; revision-date 2015-02-25;

} organization "Cable Television Laboratories, Inc."; contact "Postal: Cable Television Laboratories, Inc. 858 Coal Creek Circle


Louisville, Colorado 80027-9750 U.S.A. Phone: +1 303-661-9100 Fax: +1 303-661-9199 E-mail: [email protected]";

description ""; reference ""; revision 2015-02-25 { description ""; }

container TPIA { list TpiaCcap { key "SubInterface Vlan"; leaf SubInterface { type int32; description ""; } leaf Vlan { type int32; description ""; } uses DOCSISConfig:SubscriberMgmt-group;

list PerSubscriberConfigParams { key PSMacAddress; leaf PSMacAddress { type yang:mac-address; description ""; } leaf CmUsAqmDisable { type int32; description ""; } leaf MacAddressLearningCtrl { type int32; description ""; } uses DOCSISConfig:PacketClassifierEncodings-group; uses DOCSISConfig:DsServiceFlow-group; uses DOCSISConfig:UsServiceFlow-group; } } } }


9.4 Lawful Intercept Data Model

Figure 41 - Lawful Intercept Data Model

9.4.1 YANG Model for Lawful Intercept

module DataLI { namespace "urn:cablelabs:params:xml:ns:yang:sdn"; prefix DataLI; import ietf-inet-types { prefix inet; revision-date 2010-09-24; } organization "Cable Television Laboratories, Inc."; contact "Postal: Cable Television Laboratories, Inc. 858 Coal Creek Circle Louisville, Colorado 80027-9750 U.S.A. Phone: +1 303-661-9100 Fax: +1 303-661-9199 E-mail: [email protected]";

description ""; reference ""; revision 2015-02-25 { description ""; }

typedef ethernet-protocol-id-type { type enumeration { enum other { value 1;


description "The value of other is used when a vendor-extension has been implemented for this attribute."; } enum none { value 2; description "A value of 'none' means that the rule does not use the layer 3 protocol type as a matching criterion."; } enum ethertype { value 3; description "A value of 'ethertype' means that the rule applies only to frames that contain an EtherType value. Ethertype values are contained in packets using the DEC-Intel-Xerox (DIX) encapsulation or the [RFC 1042] Sub-Network Access Protocol (SNAP) encapsulation formats."; } enum dsap { value 4; description "A value of 'dsap' means that the rule applies only to frames using the IEEE802.3 encapsulation format with a Destination Service Access Point (DSAP) other than 0xAA (which is reserved for SNAP)."; } enum mac { value 5; description "A value of 'mac' means that the rule applies only to MAC management messages."; } enum all { value 6; description "A value of 'all' means that the rule matches all Ethernet frames. If the Ethernet frame contains an 802.1P/Q Tag header (i.e., EtherType 0x8100), this attribute applies to the embedded EtherType field within the 802.1p/Q header."; } } description "This enumerates the set of formats of the layer 3 protocol ID in the Ethernet frame."; }

container DataLI { list CMTSEnableDataLI { key LIid; leaf LIid { type uint32; description ""; } leaf InterceptDuration { type uint32; description ""; } leaf InterceptDuration2 { type uint32; description ""; } choice MediationFunctionAddress { case MediationFunctionV4 { leaf MediationFunctionAddressV4 { type inet:ipv4-address; description ""; } } case MediationFunctionV6 { leaf MediationFunctionAddressV6 { type inet:ipv6-address; description ""; } } } choice FlowDescriptor {


case IPv4Tuple { leaf IPv4Source { type inet:ipv4-address; description ""; } leaf IPv4Dest { type inet:ipv4-address; description ""; } leaf IPv4SourcePort { type uint32; description ""; } leaf IPv4DestPort { type uint32; description ""; } leaf IPv4Protocol { type ethernet-protocol-id-type; description ""; } } case IPv6Tuple { leaf IPv6Source { type inet:ipv6-address; description ""; } leaf IPv6Dest { type inet:ipv6-address; description ""; } leaf IPv6SourcePort { type uint32; description ""; } leaf IPv6DestPort { type uint32; description ""; } leaf IPv6Protocol { type ethernet-protocol-id-type; description ""; } leaf FlowLabel { type string; description ""; } } } } } }


9.5 Generic Flow Model (Northbound Data Model)

The Generic Flow model is intended to reside between the application layer and the SDN controller. It consists of common features that can be used across different access technologies and is made up of the following:

• Policy: This includes data rates (Min, Max, CIR), DSCP markings, and Application ID (from PCMM), or it could translate to the Service Class Name object in DOCSIS.

• IP Address Information (IP Flows): This could take the form of classifiers or IP source and destination addresses; destination addresses can be multicast addresses. A pattern for this might be to use the PktClass object from DOCSIS, which contains L2, L3, and L4 classification elements.

• Service Chain Information: This might be represented as opaque data used in some chaining decisions; e.g., bearer traffic information (video downstream/upstream, voice, SIP signaling).

• Direction: Uni-directional flows need to account for direction. Also, upstream flows have a different set of parameters than downstream flows (e.g., in DOCSIS).
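A minimal sketch of validating a Generic Flow instance against the kinds of constraints the model's YANG definition imposes (flow-id confined to 1..65535, direction restricted to an enumeration); the dictionary keys mirror the model's leaves, and the min/max-rate sanity check is an illustrative addition:

```python
# Illustrative validation of a Generic Flow instance. The range and
# enumeration checks follow the YANG model; the rate-ordering check is an
# assumption added for the example.

def validate_flow(flow: dict) -> bool:
    """Return True if the flow instance satisfies the sketched constraints."""
    if not 1 <= flow["flow-id"] <= 65535:        # uint16 range "1..65535"
        return False
    if flow["direction"] not in ("upstream", "downstream"):
        return False
    if ("min-rate" in flow and "max-rate" in flow
            and flow["min-rate"] > flow["max-rate"]):
        return False
    return True
```

An application sitting above the controller could run checks like these before handing a flow request down the northbound interface.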


Figure 42 - Generic Flow Data Model


9.5.1 YANG Model for Generic Flow Model

module sdn { namespace "urn:cablelabs:params:xml:ns:yang:sdn"; prefix sdn; import ietf-inet-types { prefix inet; revision-date 2010-09-24; } import ietf-yang-types { prefix yang; revision-date 2010-09-24; } organization "Cable Television Laboratories, Inc."; contact "Postal: Cable Television Laboratories, Inc. 858 Coal Creek Circle Louisville, Colorado 80027-9750 U.S.A. Phone: +1 303-661-9100 Fax: +1 303-661-9199 E-mail: [email protected]";

description ""; reference ""; revision 2015-02-25 { description ""; }

typedef octet-data-type { type string { pattern "([0-9a-fA-F]{2})*"; } description "A derived type representing the lexical value space of XML Schema hexBinary defined as 'each binary octet is encoded as a character tuple, consisting of two hexadecimal digits ([0-9a-fA-F]) representing the octet code.' Please note that length constraints on this derived type need to be in multiples of 2 to avoid conflicts between length and pattern space"; reference "[XML-Schema] 3.2.15 hexBinary"; } typedef ethernet-protocol-id-type { type enumeration { enum other { value 1; description "The value of other is used when a vendor-extension has been implemented for this attribute."; } enum none { value 2; description "A value of 'none' means that the rule does not use the layer 3 protocol type as a matching criterion."; } enum ethertype { value 3; description "A value of 'ethertype' means that the rule applies only to frames that contain an EtherType value. Ethertype values are contained in packets using the DEC-Intel-Xerox (DIX) encapsulation or the [RFC 1042] Sub-Network Access Protocol (SNAP) encapsulation formats."; } enum dsap { value 4; description "A value of 'dsap' means that the rule applies only to frames using the IEEE802.3 encapsulation format with a Destination Service Access Point (DSAP) other than 0xAA (which is reserved for SNAP)."; } enum mac { value 5;


description "A value of 'mac' means that the rule applies only to MAC management messages."; } enum all { value 6; description "A value of 'all' means that the rule matches all Ethernet frames. If the Ethernet frame contains an 802.1P/Q Tag header (i.e., EtherType 0x8100), this attribute applies to the embedded EtherType field within the 802.1p/Q header."; } } description "This enumerates the set of formats of the layer 3 protocol ID in the Ethernet frame."; }

typedef enet-mode { type enumeration { enum other { value 1; description "The value of other is used when a vendor-extension has been implemented for this attribute."; } enum enet-tagged-mode { value 2; } enum enet-raw-mode { value 3; } } }

typedef encapsulation-type { type enumeration { enum other { value 1; description "The value of other is used when a vendor-extension has been implemented for this attribute."; } enum ieee8021q { value 2; } enum ieee8021ad { value 3; } enum mplspw { value 4; } enum ieee8021ah { value 5; } } } typedef direction-type { type enumeration { enum other { value 1; description "The value of other is used when a vendor-extension has been implemented for this attribute."; } enum up { value 2; } enum down { value 3; } enum bidir { value 4; }


} } container sdn { list subscriberlist { key subscriber-id; leaf subscriber-id { type string { length "1..255"; } description ""; } leaf subscriber-name { type string { length "1..255"; } } leaf account { type string { length "1..255"; } } list service { key service-id; min-elements 1; leaf service-id { type uint16 { range "1..65535"; } description ""; } leaf service-name { type string { length "1..255"; } } list flow { key "flow-id"; leaf flow-id { type uint16 { range "1..65535"; } description ""; } leaf direction { type enumeration { enum upstream { value 1; } enum downstream { value 2; } } }

leaf latency { type uint32; description ""; } leaf dot1pbits { type ccap-octet-data-type { length "2..100"; } } leaf max-rate { type uint64; description ""; } leaf min-rate { type uint64; description "";


} list classifier-parameters { key "classifier-id"; min-elements 1; description ""; leaf classifier-id { type uint16 { range "1..65535"; } description ""; } leaf ip-tos-low { type ccap-octet-data-type { length "2"; } default 00; description "This attribute represents the low value of a range of ToS (Type of Service) octet values. The IP ToS octet, as originally defined in [RFC 791], has been superseded by the 6-bit Differentiated Services Field (DSField, [RFC 3260]) and the 2-bit Explicit Congestion Notification Field (ECN field, [RFC 3168]). This attribute is defined as an 8-bit octet as per the DOCSIS Specification for packet classification."; } leaf ip-tos-high { type ccap-octet-data-type { length "2"; } default 00; description "This attribute represents the high value of a range of ToS octet values. The IP ToS octet, as originally defined in [RFC 791], has been superseded by the 6-bit Differentiated Services Field (DSField, [RFC 3260]) and the 2-bit Explicit Congestion Notification Field (ECN field, [RFC 3168]). This attribute is defined as an 8-bit octet as per the DOCSIS Specification for packet classification."; } leaf dscp { type octet-data-type { length "2"; } default 00; description "8 bit value: most significant 3 bits are priority, next 3 are drop classifier, last two are ECN."; } leaf ip-tos-mask { type ccap-octet-data-type { length "2"; } default 00; description "This attribute represents the mask value that is bitwise ANDed with the ToS octet in an IP packet, and the resulting value is used for range checking of IpTosLow and IpTosHigh."; } leaf ip-protocol { type uint16 { range "0..257"; } default 256; description "This attribute represents the value of the IP Protocol field required for IP packets to match this rule. The value 256 matches traffic with any IP Protocol value.
The value 257 by convention matches both TCP and UDP."; } choice address-family { mandatory true; case ipv4 { container ipv4 { leaf source-ipv4-subnet { type inet:ipv4-prefix; mandatory true; description "This attribute specifies the value of the IP Source Address required for packets to match this rule and which bits of a packet's IP Source Address are compared to match this rule. An IP packet matches the rule when the packet's IP Source Address bitwise ANDed with the mask value defined by the prefix equals the InetSrcAddr value."; } leaf dest-ipv4-subnet { type inet:ipv4-prefix; mandatory true; description "This attribute specifies the value of the IP Destination Address required for packets to match this rule and which bits of a packet's IP Destination Address are compared to match this rule. An IP packet matches the rule when the packet's IP Destination Address bitwise ANDed with the mask value defined by the prefix equals the InetDestAddr value."; } } } case ipv6 { container ipv6 { leaf source-ipv6-address { type inet:ipv6-prefix; mandatory true; description "This attribute specifies the value of the IP Source Address required for packets to match this rule and which bits of a packet's IP Source Address are compared to match this rule. An IP packet matches the rule when the packet's IP Source Address bitwise ANDed with the mask value defined by the prefix equals the InetSrcAddr value."; } leaf dest-ipv6-address { type inet:ipv6-prefix; mandatory true; description "This attribute specifies the value of the IP Destination Address required for packets to match this rule and which bits of a packet's IP Destination Address are compared to match this rule. An IP packet matches the rule when the packet's IP Destination Address bitwise ANDed with the mask value defined by the prefix equals the InetDestAddr value."; } leaf flow-label { type uint32 { range "0..1048575"; } default 0; description "This attribute represents the Flow Label field in the IPv6 header to be matched by the classifier. The value zero indicates that the Flow Label is not specified as part of the classifier and is not matched against packets."; } } } } leaf source-port-start { type inet:port-number; default 0; description "This attribute represents the low-end inclusive range of TCP/UDP source port numbers to which a packet is compared.
This attribute is irrelevant for non-TCP/UDP IP packets."; } leaf source-port-end { type inet:port-number; default 65535; description "This attribute represents the high-end inclusive range of TCP/UDP source port numbers to which a packet is compared. This attribute is irrelevant for non-TCP/UDP IP packets."; } leaf destination-port-start { type inet:port-number; default 0; description "This attribute represents the low-end inclusive range of TCP/UDP destination port numbers to which a packet is compared. This attribute is irrelevant for non-TCP/UDP IP packets."; }


leaf destination-port-end { type inet:port-number; default 65535; description "This attribute represents the high-end inclusive range of TCP/UDP destination port numbers to which a packet is compared. This attribute is irrelevant for non-TCP/UDP IP packets."; } leaf destination-mac-address { type yang:mac-address; default 00:00:00:00:00:00; description "This attribute represents the criteria to match against an Ethernet frame MAC address bitwise ANDed with DestMacMask."; } leaf destination-mac-mask { type yang:mac-address; default 00:00:00:00:00:00; description "An Ethernet frame matches an entry when its destination MAC address bitwise ANDed with the DestMacMask attribute equals the value of the DestMacAddr attribute."; } leaf source-mac-address { type yang:mac-address; default FF:FF:FF:FF:FF:FF; description "This attribute represents the value to match against an Ethernet frame source MAC address."; } leaf ethernet-protocol-id { type ethernet-protocol-id-type; default none; description "This attribute indicates the format of the layer 3 protocol ID in the Ethernet frame."; } leaf ethernet-protocol { type uint16; default 0; description "This attribute represents the Ethernet protocol type to be matched against the frames. For EnetProtocolType set to 'none', this attribute is ignored when considering whether a packet matches the current rule. If the attribute EnetProtocolType is 'ethertype', this attribute gives the 16-bit value of the EtherType that the packet needs to match in order to match the rule. If the attribute EnetProtocolType is 'dsap', the lower 8 bits of this attribute's value need to match the DSAP byte of the packet in order to match the rule. 
If the Ethernet frame contains an 802.1p/Q Tag header (i.e., EtherType 0x8100), this attribute applies to the embedded EtherType field within the 802.1p/Q header."; } leaf user-priority-applies { type boolean; default false; } leaf user-priority-low { type uint8 { range "0..7"; } default 0; description "This attribute applies only to Ethernet frames using the 802.1p/Q tag header (indicated with EtherType 0x8100). Such frames include a 16-bit Tag that contains a 3-bit Priority field and a 12-bit VLAN number. Tagged Ethernet frames need to have a 3-bit Priority field within the range of PriLow to PriHigh in order to match this rule."; } leaf user-priority-high { type uint8 { range "0..7"; } default 7; description "This attribute applies only to Ethernet frames using the 802.1p/Q tag header (indicated with EtherType 0x8100). Such frames include a 16-bit Tag that contains a 3-bit Priority field and a 12-bit VLAN number. Tagged Ethernet frames need to have a 3-bit Priority field within the range of PriLow to PriHigh in order to match this rule."; } leaf vlan-id { type uint16 { range "0 | 1..4094"; } default 0; description "This attribute applies only to Ethernet frames using the 802.1p/Q tag header. Tagged packets need to have a VLAN Identifier that matches the value in order to match the rule."; }

}

list tunnel-parameters { key "vlan-id"; leaf vlan-id { type uint32; } leaf encapsulation { type encapsulation-type; } leaf ethernet-mode { type enet-mode; } container endpoints { choice address-family { mandatory true; case ipv4 { container ipv4-endpoints { leaf first-endpoint { mandatory true; type inet:ipv4-address; } leaf second-endpoint { mandatory true; type inet:ipv4-address; } } } case ipv6 { container ipv6-endpoints { leaf first-endpoint { mandatory true; type inet:ipv6-address; } leaf second-endpoint { mandatory true; type inet:ipv6-address; } } } case mac { container mac-endpoints { leaf first-endpoint { mandatory true; type yang:mac-address; } leaf second-endpoint { mandatory true; type yang:mac-address; } } } } } } } list cpe { key cpe-mac-address; min-elements 1; leaf cpe-mac-address { type yang:mac-address; description "The MAC address of the CPE."; } leaf ipv4-address { type inet:ipv4-address; } list ipv6 { key ipv6-address; leaf ipv6-address { type inet:ipv6-address; } } leaf device-type { type string { length "1..32"; } } leaf vendor-id { type ccap-octet-data-type { length "64"; } } list cpe-flow { key flow-id; min-elements 1; leaf flow-id { type leafref { path "../../../service/flow/flow-id"; } } } } } } }

9.6 OpenDaylight PCMM Plug-in Data Model

The OpenDaylight PCMM plug-in [PCMM] (currently in the ODL Lithium release) consists of thirteen modules and two features, which will be pared down to a single feature in the upcoming Beryllium release. The bundles included in the feature set "features- " that were originally built to leverage the OpenFlow modules will be deprecated. The bundles included in the feature set "features-packetcable-policy" are fully functional and working as designed. This approach has been deemed the proper architecture for managing PCMM QoS, and this simplified model will be the best path for future extension.


9.6.1 Traffic Profile Data Model

Figure 43 provides a graphical representation of the flow-model based PCMM plug-in data model.

Figure 43 - Traffic Profile Data Model from Open Daylight SDN Controller

9.6.2 PCMM Traffic Profile Data Model

Figure 44 provides a graphical representation of the PCMM traffic profile data model. This diagram is also available here: https://wiki.opendaylight.org/images/4/4d/Traffic_Profile.png


Figure 44 - PCMM Traffic Profile Data Model


9.6.3 OpenDaylight PCMM Plugin YANG Model

module packetcable { namespace "urn:packetcable"; prefix "pcmm";

import ietf-yang-types { prefix yang; } import ietf-inet-types { prefix inet; }

description "This module contains the PCMM Converged Cable Access Platform (CCAP) definitions"; organization "OpenDaylight Project";

revision 2015-03-27 { description "Initial revision of PCMM CCAP definitions"; }

// Global typedefs typedef service-class-name { type string { length "2..16"; } description "The Service Class Name MUST be 2-16 bytes."; } typedef service-flow-direction { type enumeration { enum us { value "1"; description "Upstream service flow."; } enum ds { value "2"; description "Downstream service flow."; } } description "This value represents the service flow direction."; } typedef tp-protocol { type uint16 {range "0..257";} description "This value represents the IP transport protocol (or Next Header), where 256 is any protocol and 257 is TCP or UDP"; } typedef tos-byte { type uint8; description "TOS/TC byte or mask"; }

// CCAP devices container ccap { list ccaps { description " CCAP devices are known by their network name which is any string. Each CCAP device has a network address:port, a list of subscriber IP subnets, and a list of available Service Class Names. "; key "ccapId"; leaf ccapId { type string; description "CCAP Identity"; } uses ccap-attributes; } }

grouping ccap-attributes { description "Each CCAP device has a COPS connection address:port, a list of subscriber IP subnets, and a list of available Service Class Names."; container connection { leaf ipAddress { type inet:ip-address; description "IP Address of CCAP"; } leaf port { type inet:port-number; description "COPS session TCP port number"; default 3918; } } container amId { leaf am-tag { type uint16; description "Application Manager Tag -- unique for this operator"; } leaf am-type { type uint16; description "Application Manager Type -- unique for this AM tag"; } } leaf-list subscriber-subnets { type inet:ip-prefix; } leaf-list upstream-scns { type service-class-name; } leaf-list downstream-scns { type service-class-name; } leaf response { type string; description "HTTP response from the PUT operation provided by the API"; } }

// PCMM QoS Gates container qos { description " PCMM QoS Gates are organized as a tree by Application/Subscriber/Gate: Each Application is known by its appId which is any string. Each Subscriber is known by its subId which is a CPE IP address in either IPv4 or IPv6 format. Each Gate is known by its gateId which is any string.

The subscriber's CPE IP address is used to locate the CCAP device that is currently hosting the CM that is connected to the subscriber's device. Therefore, it is not necessary for the PCMM applications to know the topology of the CCAP devices and CMs in the network path to their subscriber devices.

Note that each CCAP entry contains a list of connected subscriber IP subnets as well as a list of all Service Class Names (SCNs) available on the CCAP device. "; uses pcmm-qos-gates; }

grouping pcmm-qos-gates { list apps { key "appId"; leaf appId { type string; description "Application Identity"; }


list subs { key "subId"; leaf subId { type string; description "Subscriber Identity -- must be a CM or CPE IP address"; } list gates { key "gateId"; leaf gateId { type string; description "Qos Gate Identity"; } uses pcmm-qos-gate-attributes; } } } }

grouping pcmm-qos-gate-attributes { uses pcmm-qos-gate-spec; uses pcmm-qos-traffic-profile; uses pcmm-qos-classifier; uses pcmm-qos-ext-classifier; uses pcmm-qos-ipv6-classifier; leaf response { type string; description "HTTP response from the PUT operation provided by the API"; } }

grouping pcmm-qos-gate-spec { container gate-spec { leaf direction { type service-flow-direction; description "Gate Direction (ignored for traffic profile SCN)"; } leaf dscp-tos-overwrite { type tos-byte; description "Optional DSCP/TOS overwrite value"; } leaf dscp-tos-mask { type tos-byte; description "Optional DSCP/TOS overwrite AND mask"; } } }

grouping pcmm-qos-traffic-profile { container traffic-profile { leaf service-class-name { type service-class-name; description "The Service Class Name (SCN). This SCN must be pre- provisioned on the target CCAP"; } } }

grouping tp-port-match-ranges { leaf srcPort-start { type inet:port-number; description "TCP/UDP source port range start."; } leaf srcPort-end { type inet:port-number; description "TCP/UDP source port range end."; } leaf dstPort-start { type inet:port-number; description "TCP/UDP destination port range start."; }


leaf dstPort-end { type inet:port-number; description "TCP/UDP destination port range end."; } }

grouping pcmm-qos-classifier { container classifier { leaf srcIp { type inet:ipv4-address; description "Source IPv4 address (exact match)"; } leaf dstIp { type inet:ipv4-address; description "Destination IPv4 address (exact match)"; } leaf tos-byte { type tos-byte; description "TOS/DSCP match"; } leaf tos-mask { type tos-byte; description "TOS/DSCP mask"; } leaf protocol { type tp-protocol; description "IPv4 transport protocol"; } leaf srcPort { type inet:port-number; description "TCP/UDP source port (exact match)."; } leaf dstPort { type inet:port-number; description "TCP/UDP destination port (exact match)."; } } }

grouping pcmm-qos-ext-classifier { container ext-classifier { leaf srcIp { type inet:ipv4-address; description "Source IPv4 address match"; } leaf srcIpMask { type inet:ipv4-address; description "Source IPv4 mask"; } leaf dstIp { type inet:ipv4-address; description "Destination IPv4 address match"; } leaf dstIpMask { type inet:ipv4-address; description "Destination IPv4 mask"; } leaf tos-byte { type tos-byte; description "TOS/DSCP match"; } leaf tos-mask { type tos-byte; description "TOS/DSCP mask"; } leaf protocol { type tp-protocol; description "IPv4 transport protocol"; } uses tp-port-match-ranges; }


}

grouping pcmm-qos-ipv6-classifier { container ipv6-classifier { leaf srcIp6 { type inet:ipv6-prefix; description "Source IPv6 prefix match in address/len notation"; } leaf dstIp6 { type inet:ipv6-prefix; description "Destination IPv6 prefix match in address/len notation"; } leaf tc-low { type tos-byte; description "TC low range match"; } leaf tc-high { type tos-byte; description "TC high range match"; } leaf tc-mask { type tos-byte; description "TC mask"; } leaf next-hdr { type tp-protocol; description "IPv6 Next Header"; } leaf flow-label { type uint32 { range "0 .. 1048575"; } description "IPv6 Flow Label (20 bits)"; } uses tp-port-match-ranges; } }

}
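As an illustration of the subscriber-to-CCAP lookup described in the qos container above, the sketch below (not part of the model; the CCAP names and subnets are hypothetical) finds the CCAP whose subscriber-subnets list covers a given CPE IP address:

```python
# Sketch of the lookup described in the qos container: finding the CCAP
# whose subscriber-subnets cover a given CPE IP address. The entries here
# are illustrative; real data would come from the ccaps list in the model.
import ipaddress

CCAPS = {
    "ccap-1": ["10.1.0.0/16", "2001:db8:1::/48"],
    "ccap-2": ["10.2.0.0/16"],
}

def ccap_for_subscriber(cpe_ip):
    """Return the ccapId whose subnet list contains the CPE address."""
    addr = ipaddress.ip_address(cpe_ip)
    for ccap_id, subnets in CCAPS.items():
        for subnet in subnets:
            net = ipaddress.ip_network(subnet)
            # skip subnets of the other address family
            if addr.version == net.version and addr in net:
                return ccap_id
    return None

print(ccap_for_subscriber("10.2.33.7"))   # ccap-2
```

Because the controller holds this mapping, a PCMM application only needs the subscriber's IP address, not the access-network topology.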


10 NORTHBOUND AND SOUTHBOUND PROTOCOLS

There are many protocols that facilitate communication between an SDN controller and both the applications above it and the underlying network elements (e.g., CMTS, CCAP). To simplify SDN controller implementation, there is a desire to standardize on one or two applicable communication mechanisms. In this section, each protocol is summarized in terms of its current applicability, strengths, and weaknesses. The section concludes with a recommendation that accommodates both current usage and future development in light of SDN and NFV adoption in the networking industry.

10.1 PCMM

PacketCable Multimedia (PCMM) was considered as a mechanism for the southbound protocol from the SDN controller to the network elements or to the CMA. PCMM is currently implemented on all CMTS platforms, and only a simple extension is needed to transport data from the SDN controller to network elements or to the CMA. While PCMM would work on the DOCSIS network, it has a few disadvantages. The Common Open Policy Service (COPS) protocol used by PCMM is available only on the DOCSIS network and has not been adopted by other network elements such as switches or routers. Thus its use as a general southbound protocol would depend on wider adoption across the industry.

10.2 NETCONF

The NETCONF protocol (see [RFC 6241]) provides mechanisms to install, manipulate, and delete the configuration of network devices. It is designed to reduce the programming effort involved in automating device configuration. NETCONF is based on secure transport and uses Extensible Markup Language (XML) based data encoding for configuration and state data as well as for protocol messages. NETCONF itself is independent of the choice of data modeling language; YANG (see [RFC 6020]) is the recommended NETCONF modeling language, and it introduces advanced language features for configuration management. NETCONF provides mechanisms for multi-action transaction management and two-phase commit.

The NETCONF protocol is based on a Remote Procedure Call (RPC) model. The base protocol specifies a set of RPCs that clients invoke to manipulate configuration data stores, e.g., get-config, edit-config, copy-config, etc. The data modules implemented by the managed device specify additional RPCs that can be used by clients to manipulate specific device configuration and state data. The RPCs applicable to each data module are specified as part of the YANG definition of the module. The advantage of NETCONF is that it supports a robust configuration change transaction involving a number of devices, and NETCONF is already implemented in network devices such as routers and switches by some major equipment vendors.
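As a minimal sketch of the RPC model described above, the following Python fragment assembles an edit-config RPC targeting the running datastore. The config subtree is a hypothetical example; a real client library (e.g., ncclient) would also handle session setup, framing, and transport:

```python
# Illustrative sketch only: constructing the XML payload of a NETCONF
# <edit-config> RPC. Transport, hello exchange, and framing are omitted.
import xml.etree.ElementTree as ET

NC = "urn:ietf:params:xml:ns:netconf:base:1.0"

def build_edit_config(message_id, config_subtree):
    """Wrap a config subtree in an <edit-config> RPC against <running>."""
    rpc = ET.Element(f"{{{NC}}}rpc", attrib={"message-id": message_id})
    edit = ET.SubElement(rpc, f"{{{NC}}}edit-config")
    target = ET.SubElement(edit, f"{{{NC}}}target")
    ET.SubElement(target, f"{{{NC}}}running")
    config = ET.SubElement(edit, f"{{{NC}}}config")
    config.append(config_subtree)
    return ET.tostring(rpc)

# Hypothetical device-specific config subtree.
subtree = ET.Element("interface")
ET.SubElement(subtree, "name").text = "eth0"

payload = build_edit_config("101", subtree)
```

The reply to such an RPC is matched to the request by the message-id attribute.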

10.3 XMPP

The Extensible Messaging and Presence Protocol (XMPP) [RFC 6120] is an open technology for real-time communication, using XML as the base format for exchanging information. In essence, XMPP provides a way to send small pieces of XML from one entity to another in close to real time. XMPP is used in instant messaging, group chat, and social networking applications. Currently, there is no known implementation of XMPP in the cable infrastructure domain.

10.4 REST (RESTful API)

Representational State Transfer (REST) [Fielding-2000] is a client-server architectural style providing a stateless, cacheable, layered system with a uniform interface for exposing data resources directly to clients. Web services that comply with all of these constraints are considered RESTful. REST interfaces are modeled on the CREATE, READ, UPDATE, and DELETE (CRUD) storage functions and implement them using the HTTP methods POST, GET, PUT, and DELETE; other HTTP methods such as OPTIONS, PATCH, and HEAD can also be supported.

REST is a set of pull-based interface principles layered on top of the HTTP communication standard. It inherits the requirements and limitations of HTTP but is loose in how the HTTP mechanisms apply to data resources. REST does not have a native notification mechanism, so most developers layer in WebSockets, Server-Sent Events, or long polling for asynchronous client-notification support. Authorization for RESTful transactions is typically done using HTTP Basic Auth, OAuth, or a shared predefined key. Since this information is passed in an unsecured manner, it is recommended that REST interfaces always be accessed via SSL/TLS (HTTPS). Procedure calls follow a typical URL pattern of host, port, resource, and HTTP method, followed by an optional data body, typically carrying an XML or JSON document when required. URL encoding of parameters is also seen frequently in lieu of a body, typically in relation to GET and PUT methods.

The major advantage of a REST API is platform independence. Since all communications are via HTTP leveraging the web services infrastructure, RPC calls are machine independent and data formats are well defined, especially when using either XML or JSON. REST APIs are also easy to expose to web browser AJAX calls and so are often seen in the context of Internet single-page applications. REST APIs also make good translation-layer APIs, bridging system-, language-, or solution-specific programming interfaces and granting access to those interfaces via a universally accessible front end.

Drawbacks of the REST model include a lack of consistency in implementations. Since there is no standard, each solution that offers a REST API can have specific and unique usage requirements that force the consumer to develop custom interfaces for each supported API. This can lead to confusion in terminology, usage, and implementation, and careful analysis of each particular REST API's documentation is required.
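The CRUD-to-HTTP mapping described above can be sketched as follows. The host name, resource path, and JSON body are illustrative only, not taken from any particular API:

```python
# Sketch of the CRUD-to-HTTP mapping for a RESTful call. Resource names
# and the JSON body are hypothetical examples.
import json

CRUD_TO_HTTP = {"create": "POST", "read": "GET",
                "update": "PUT", "delete": "DELETE"}

def build_request(op, host, port, resource, body=None):
    """Return (method, url, body) for a RESTful call, always over HTTPS."""
    method = CRUD_TO_HTTP[op]
    url = f"https://{host}:{port}{resource}"   # TLS, per the text above
    payload = json.dumps(body) if body is not None else None
    return method, url, payload

method, url, payload = build_request(
    "create", "sdn.example.net", 443, "/subscribers",
    {"subscriber-id": "sub-001"})
print(method, url)   # POST https://sdn.example.net:443/subscribers
```

A real client would hand the resulting method, URL, and body to an HTTP library and add authorization headers.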

10.5 RESTCONF

RESTCONF (see [RESTCONF]) is an HTTP-based protocol used to access data defined in YANG models using the concepts defined in NETCONF. It uses and inherits many of the benefits of a RESTful interface but requires a higher level of uniformity in implementation. RESTCONF provides a simple subset of NETCONF functionality and is designed to co-exist and be compatible with NETCONF. RESTCONF supports both XML and JSON for data encoding and relies on Transport Layer Security (TLS) to provide privacy and data integrity between the two communication endpoints. RESTCONF is currently an IETF draft.

Similar to a RESTful API, RESTCONF uses CRUD operations on YANG-defined models via the HTTP verbs OPTIONS, HEAD, GET, POST, PUT, and DELETE. RESTCONF also integrates HTML5 Server-Sent Events for asynchronous notifications to clients. The RESTful interface implemented by RESTCONF contrasts with the RPC-based method of NETCONF. The RESTCONF protocol operates on a hierarchy of resources, each of which represents a manageable component within the device. RESTCONF resources are accessed via well-defined URIs and the HTTP methods mentioned above. For example, a NETCONF get operation maps to a RESTCONF HTTP GET, and a NETCONF edit-config with operation=create maps to a RESTCONF HTTP POST. For compatibility with YANG-managed data modules that export application-specific RPC actions for NETCONF, RESTCONF supports use of the HTTP POST method to invoke those RPC calls.

RESTCONF significantly reduces the transaction complexity of NETCONF. Each action on a resource is assumed to commit automatically on successful application, and RESTCONF removes the option of two-phase commit. These simplifications make a RESTCONF-based interface much easier to develop against, often leading to increased "feature velocity". In addition, RESTCONF leverages the wide pool of developers familiar with the RESTful web-client development model and the rich toolset available for developing and debugging HTTP-based applications. RESTCONF is currently supported by ODL but not yet exposed as a southbound plug-in/protocol.
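As a sketch of how RESTCONF addresses YANG-modeled data by URI, the fragment below builds a config-resource URI for the packetcable module shown earlier in this report. The controller base URL, port, and the /restconf/config root are assumptions typical of ODL-era deployments, not requirements of the draft:

```python
# Sketch: mapping a YANG data path onto a RESTCONF request URI.
# The base URL and "/restconf/config" root are assumptions.
from urllib.parse import quote

def restconf_uri(base, module, *path_segments):
    """Build a RESTCONF config-resource URI, percent-encoding each segment."""
    path = "/".join(quote(seg, safe="") for seg in path_segments)
    return f"{base}/restconf/config/{module}:{path}"

uri = restconf_uri("http://controller:8181", "packetcable",
                   "qos", "apps", "example-app")
# GET/PUT this URI with an XML or JSON body encoded per the YANG model
```

Keyed list entries (such as a specific appId above) appear directly as path segments, which is what makes each resource individually addressable.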


10.6 WebSockets

The WebSocket protocol, [RFC 6455], is a TCP-based data transport protocol that leverages the web services infrastructure of HTTP. It is designed to create a duplex data stream between two remote systems, including over an HTTP proxy. The sequence begins as an HTTP upgrade request, which specifies a new URL scheme (ws:// or wss://) and finalizes the WebSocket connection. Data transmission can be text or binary based. The message format is not specified in the RFC, although there is support at the protocol layer for a number of sub-protocols. WebSockets also provides for extensibility; two extensions have been formally defined: compression and multiplexing.

The advantage of a WebSockets solution is that it can leverage the existing web services infrastructure on the network and therefore does not require a lot of custom overhead. WebSockets addresses the longstanding issue of HTTP/HTML being a transactional interface with no persistent connection. All major browsers support WebSocket interfaces today. Drawbacks of the WebSockets implementation include a heavier server load, as the server needs to maintain open connections to its clients, an un-optimized data path bolted on top of HTTP infrastructure, and older web proxies that fail to handle the protocol correctly. Tunneling overcomes the failure of web proxies to adequately handle the WebSockets data stream, and the server load can be addressed with load balancing and more powerful hardware.
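The upgrade handshake mentioned above can be illustrated with the server-side key computation from RFC 6455: the server proves it understood the upgrade request by hashing the client's Sec-WebSocket-Key with a fixed GUID:

```python
# Sketch of the WebSocket opening handshake: the server answers the
# client's Sec-WebSocket-Key with a Sec-WebSocket-Accept value derived
# from the GUID fixed in RFC 6455.
import base64
import hashlib

WS_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

def accept_key(client_key):
    """Compute the Sec-WebSocket-Accept header value for a client key."""
    digest = hashlib.sha1((client_key + WS_GUID).encode("ascii")).digest()
    return base64.b64encode(digest).decode("ascii")

# Sample key/accept pair from RFC 6455:
print(accept_key("dGhlIHNhbXBsZSBub25jZQ=="))  # s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
```

Once the client verifies this value, the HTTP connection is re-purposed as the duplex WebSocket data stream.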

10.7 Recommendations for Southbound Protocols

The SDN controller can support multiple southbound protocols in order to communicate with the various types of network devices that support different protocols. The main usage of these protocols is for service orchestration and provisioning.

10.7.1 One Protocol versus Multiple Protocols

An MSO has the option of choosing a single protocol or multiple protocols for southbound communication needs, from the SDN controller to the CMTS, CMA, or other network equipment. MSOs would like to pick a single protocol which works to communicate across the majority of devices in the MSO network. The group has general consensus around HTTP-based protocols.

10.7.2 Static Configuration versus Dynamic Configuration

Interactions between the SDN controller and the CCAP device can be thought of in two ways: persistent configuration and transactional configuration. The persistent configuration doesn't need to change often (e.g., CMTS device configuration), while the transactional configurations (e.g., QoS for gaming or voice sessions) are more ephemeral in nature and need a lighter-weight protocol. NETCONF is considered best for static or persistent (also known as "Day 0") configuration, because it gives a higher level of control over transaction-level concerns. RESTCONF should be used for more dynamic configuration needs, as it simplifies the SDN controller code base by removing transaction-level concerns.

Table 10 - Southbound Protocols

Transport Protocol    Data Format            Usage Model
NETCONF               YANG                   Day 0 Configuration
RESTful API           JSON, XML              Day 1 Configuration
RESTCONF              XML-YANG, JSON-YANG    Day 1 Configuration


10.7.3 Data Consistency

Data consistency may be an issue when a device is written to by multiple protocols which all have access to the same configuration item. Some of the considerations here are:
1. Allow access based on time/state domain. For example, allow NETCONF for initial setup and RESTCONF for the remainder of the running session.
2. Allow access based on device. For instance, legacy devices can use PCMM while new devices can use NETCONF/RESTCONF.
3. Allow access based on functionality. In this manner, data models and data elements are partitioned into several access groups for the respective protocols. This seems to be the preferred approach among vendors and MSOs, as it is in line with current configuration needs.

To summarize: during the initial setup stage, the recommendation is to use NETCONF for bulk configuration. Once the device is in its running state, RESTCONF is used for dynamic configuration of services. The options above do not totally mitigate the multiple-access issue. Yet, by combining time-wise segregation and functional segregation, one can reach a reasonable solution. In addition, multiple access to the same data element is not a new situation in current practice; thus, vendors need to consider each particular use case and construct appropriate solutions.
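The functional partitioning in option 3 can be sketched as a simple access-group check; the path prefixes below are hypothetical examples, not a proposed schema:

```python
# Sketch of functional partitioning: each protocol is granted write
# access only to its own configuration subtrees. Prefixes are hypothetical.
ACCESS_GROUPS = {
    "netconf":  ["/device", "/interfaces"],   # bulk / Day 0 configuration
    "restconf": ["/qos", "/subscribers"],     # dynamic service state
}

def may_write(protocol, path):
    """True if the protocol's access group covers the configuration path."""
    return any(path.startswith(prefix)
               for prefix in ACCESS_GROUPS.get(protocol, []))

print(may_write("restconf", "/qos/apps/app1"))   # True
print(may_write("netconf", "/qos/apps/app1"))    # False
```

Combining such a functional check with time-wise segregation (NETCONF during setup, RESTCONF afterward) yields the layered policy described above.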


11 SERVICE FUNCTION CHAINING

11.1 SFC Architecture

The term “service function chaining” (SFC) has emerged to describe the deployment of an aggregate of services that are applied to a specific customer's traffic. SFC uses both the Software Defined Networking (SDN) and Network Function Virtualization (NFV) paradigms. Service function chaining provides the ability to define an ordered list of network services (e.g., firewalls, load balancers). These services are then linked together in the network to create a service chain. Service function chaining allows operators to provide additional services to customers. It defines the needed data path manipulation to route the traffic through the service functions, which are implemented in the network. Services such as firewall and parental control, which today are embedded in the home router and managed by the customer, can now reside in the MSO network and be managed by the MSO. Other services such as a video optimizer or carrier-grade NAT (CGN) can be implemented as a service on the network path. A service chain is essentially a policy construct: a series of service functions that a customer packet must traverse. For example, a service chain may define that all TCP port 80 traffic must pass through a firewall (FW), then intrusion prevention (IPS), and finally server load balancing (SLB).
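The example chain above can be sketched as a policy table mapping a traffic match to an ordered list of service functions (the names and match keys are illustrative only):

```python
# Sketch of a service chain as a policy construct: an ordered list of
# service functions keyed by a traffic match. Names are illustrative.
SERVICE_CHAINS = {
    # all TCP port 80 traffic: firewall, then IPS, then server load balancer
    ("tcp", 80): ["FW", "IPS", "SLB"],
}

def chain_for(protocol, dst_port):
    """Return the ordered service-function list for a flow, or [] if none."""
    return SERVICE_CHAINS.get((protocol, dst_port), [])

print(chain_for("tcp", 80))   # ['FW', 'IPS', 'SLB']
print(chain_for("udp", 53))   # []
```

The ordering of the list is the essential property: each packet of a matching flow must traverse the functions in exactly this sequence.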

Figure 45 - SFC and the DOCSIS Network

SDN technology can intelligently chain service functions, so that traffic from each subscriber traverses only the particular set of service functions defined by the policy for that subscriber. One can apply service chaining policies to operator- or user-defined services. For example, an operator can configure a service chaining policy such that only web traffic is sent to a content optimization service. The solution can be integrated with a management and orchestration system to simplify configuration and management of service chains: the traffic path for any arbitrary flow can be changed dynamically simply by changing the policy associated with that flow, and the SDN controller automatically programs the routers, switches, and application servers in the network. The ability to selectively steer certain portions of the network traffic to a specific set of services lets operators use compute and network resources optimally, relieving them from having to continuously overprovision service-function capacity.


With the use of open interfaces and an open platform, service providers can benefit from and contribute to the rapid pace of innovation in the SDN ecosystem, and can achieve multi-vendor interoperability in a service provider network. Service function chaining technology also allows service providers to rapidly introduce and scale services.

11.2 Service Chaining Setup through Network Service Header (NSH)

Existing models used for the insertion of services suffer from a number of limitations. Today, the required service functions that must be applied to traffic for a given service are physically inserted on the data-forwarding path between communicating peers, and traffic is directed through them using VLANs and policy-based routing techniques. Consequently, services are coupled to the physical network topology, creating constraints on service delivery and potentially inhibiting the network operator from optimally utilizing its service resources. New application deployment or the addition of new services into the network is constrained, and this practice limits scale, capacity, and redundancy. If the necessary service functions are not available to support a new application, then the network operator has no other option but to deploy additional hardware resources or reconfigure the network to accommodate the new service requirements. Furthermore, services are not easily moved, created, or removed, even when virtualized service functions are deployed. This rigidity is the antithesis of highly elastic service environments that require rapid creation, destruction, or movement of the service functions required for application delivery. Additionally, the transition to virtual platforms requires an agile service insertion model that supports elastic and very granular service insertion and post-facto modification; that is, a model that supports the movement of service functions and application workloads in the existing network, all while retaining the network and service policies and the ability to easily bind service policy to granular network-centric identifiers such as per-subscriber/per-tenant/per-VPN state.
These factors provide strong motivation for a simplified and flexible service insertion model that optimizes the use of existing service resources, by allowing them to be shared and accessed in a manner that is completely independent of the underlying physical and routing topology of the network, while still being controlled by operator- specified policy. This may be realized by moving to a model where the functions involved in the delivery of a service are not required to reside on the default data path and specific traffic is instead steered through them, wherever they are deployed. This form of service deployment is commonly referred to as service chaining. Service chaining, using Network Service Headers (NSH), builds on an overlay-only service chain and addresses the shortcomings encountered with existing service deployment models. A fundamental principle is the notion that each service function within a given network domain is an independent resource that may be utilized. Therefore, the concept of a service function evolves: rather than being viewed as a bump in the wire, each service function becomes a resource within a specified administrative domain that is available for consumption. As such, service functions have a network locator and a variable set of attributes that describe the function offered. The combination of locator and attributes is used to construct a service chain, and, as with the overlay-based model, instantiation of the chain into the network is achieved using a service path that provides the overlay topology. NSH provides a consistent model for the sharing of information and context between network nodes and between service functions. NSH effectively creates a service plane by decoupling service chaining from the underlying network infrastructure, and enables the construction of complex service chains and the ability to carry additional metadata in the data plane. 
NSH headers are designed to be easily implementable across a range of network devices, both physical and virtual, including hardware-forwarding elements. Because service functions can and will be deployed in networks with a range of transport encapsulations, including underlays and overlays, NSH headers are designed to be carried between the transport encapsulation and the original packet payload. The NSH is added by a service classification function – a device or application. The classification function determines which packets require servicing, and correspondingly which service path to follow to apply the service functions for the associated service chain. NSH creates a dedicated service plane that addresses many of the limitations of existing service chaining technology previously highlighted. More specifically, NSH enables:


1. Topological Independence: Service forwarding occurs within the service plane via a network overlay, so the underlying network topology does not require modification. Service functions have a locator (e.g., an IP address) to receive/send data within the service plane; the NSH contains an identifier that uniquely identifies a service chain and the service functions within that service chain.
2. Service Chaining: NSH contains the path information needed to create a service chain. Furthermore, NSH provides the ability to monitor and troubleshoot a service chain end-to-end via service-specific OAM messages. Service chain information can be used by administrators (via a traffic analyzer, for example) for verification of the service chain specifics of packets being forwarded along a service chain (accounting, ensuring correct chaining, reporting, etc.).
3. Metadata Sharing: NSH provides a mechanism to carry shared metadata between service functions, and between network devices and service functions. The semantics of the shared metadata are communicated via a control plane to participating nodes. Examples of metadata include classification information used for policy enforcement and network context for forwarding post-services.
4. Transport Agnostic: NSH is transport independent and can be used with overlay and underlay forwarding topologies.
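As a rough illustration of the encapsulation described above, the following sketch packs the 8-byte NSH base and service path headers (no metadata). The field layout follows the IETF NSH specification (published after this report as RFC 8300); treat this as an assumption-laden sketch, not a normative encoder.

```python
import struct

def nsh_headers(spi: int, si: int, ttl: int = 63,
                md_type: int = 2, next_proto: int = 1) -> bytes:
    """Pack the NSH base + service path headers, per the RFC 8300 layout:
      word 0: Ver(2) O(1) U(1) TTL(6) Length(6) U(4) MD Type(4) Next Proto(8)
      word 1: Service Path Identifier(24) Service Index(8)
    Length is in 4-byte words; 2 means no fixed or variable metadata.
    """
    assert 0 <= spi < (1 << 24) and 0 <= si < (1 << 8)
    word0 = (ttl << 22) | (2 << 16) | (md_type << 8) | next_proto
    word1 = (spi << 8) | si
    return struct.pack("!II", word0, word1)

def decrement_si(hdr: bytes) -> bytes:
    """Each service function decrements the Service Index before forwarding."""
    word0, word1 = struct.unpack("!II", hdr)
    assert word1 & 0xFF > 0, "Service Index exhausted: drop the packet"
    return struct.pack("!II", word0, word1 - 1)
```

The service path identifier (SPI) names the chain and the service index (SI) tracks the packet's position within it, which is what gives the service plane its topological independence from the underlay.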

11.3 SFC Implementation in a DOCSIS Network

This section describes the different points within the home and the DOCSIS network at which SFC packet marking can start. The problem statement is to determine what is needed on the DOCSIS network (CM/CMTS) to enable Service Function Chaining and marking of traffic flows as they cross the service function edge into the SFC domain.

Figure 46 -Traffic Flows from DOCSIS Network through Different Service Chains

The starting points where traffic is classified and marked may differ, as described below, but the end point is the same: all traffic destined for the SFC domain terminates at the SFC edge, as shown in Figure 47.


Figure 47 - Different Starting Point of SFC

11.3.1 SFC Initiating from Application

An application can initiate SFC directly. This option is the most granular and most accurate when it comes to selecting a service chain, because there is no second-guessing as to what the application needs. The downside, of course, is that it requires changes to the application software and the host OS it is utilizing. There are two methods available to the application. The first is the overlay method, which creates a tunnel (or a virtual connection) from the application to the SFC edge. In this scenario, everything from the application would be sent to the SFC. The DOCSIS network would not be aware of the application contents, and no classification would be done on the DOCSIS network (CM/CMTS). This method would not work if the application needed to leverage the DOCSIS QoS mechanism or the QoS mechanism from the CMTS to the SFC edge. The second method is for the application to mark its traffic so that the DOCSIS network and core network are aware of it. An example would be an MSO-provided VoIP service: the application marks the traffic so that proper QoS can be applied on the DOCSIS network and from the NSI side of the CMTS to the SFC edge. Other MSO-provided applications, such as video on demand or home security, can likewise leverage the QoS mechanism on DOCSIS and the rest of the network.

11.3.2 SFC Initiating from Home Gateway

The home gateway is part of the home network and, while it might not have the ability to explicitly tag flows based on applications, it still has good visibility into information such as the types of devices in the home and some information related to identifying the user, for example, by using Wi-Fi. Using this information, it should be relatively easy to tag different flows with different SFCs; for example, devices that are defined as “kid devices” can be tagged with an SFC that includes parental control, while “parent devices” can skip it.
In the context of a home gateway, it is important not to confuse a tunnel that the home gateway might initiate with SFC; the two concepts are unrelated. A tunneling option from the home would be to use an overlay technique, such as a VXLAN or GRE tunnel, from the gateway to the SFC edge.

11.3.3 SFC Initiating from CM

Initiating SFC from the CM is not a desirable option. Implementing SFC in the CM would not scale, and there is no added value: the SFC marking functionality that can be done on the CM can also be done on the CMTS (as described in Section 11.3.4). From a granularity and feature point of view, it offers no advantages over CCAP/CMTS-initiated SFC as described in the next section.

11.3.4 SFC Initiating from CCAP/CMTS

It is fairly straightforward to associate DOCSIS upstream service flows with SFC from a data plane processing point of view. Having said that, one has to remember that service flows and service chains are different (and also unrelated to tunnels, as mentioned in Section 11.3.2). A DOCSIS service flow is a “traffic lane” that may or may not be associated with a particular application. It is possible to tag traffic such that different applications have different service flows that map to corresponding SFCs; however, one should be careful not to create an inflation of service flows as a result. It is better to use service flows to create a coarse separation of traffic (for example, video versus data versus voice) that can serve as a “hint” for further analysis if needed (see Section 11.3.5). On the downstream side, the CMTS terminates a service chain. One can view the QoS that the CMTS applies as the end of the “service” that SFC defines; however, the CMTS would still use packet classification rules, rather than inspecting the SFC header directly, in order to do that. As a future extension, the CMTS could inspect SFC headers directly.

11.3.5 SFC Initiating from a Proxy Behind CCAP

A proxy behind the CCAP, such as a DPI appliance (physical or virtual), can analyze packet streams and dynamically assign SFC based on the inspected traffic. The upstream classification to DOCSIS service flows can serve as a hint to the appliance. Most importantly, if the tagging is applied correctly, some flows can skip the appliance altogether, resulting in a simpler and less expensive inspection device. For example, a home surveillance camera with a dedicated service flow can be directed immediately to the correct SFC.
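The coarse service-flow separation described above can serve as the hint for selecting a service path, letting well-known traffic bypass the DPI appliance. The sketch below assumes the classifier has already reduced each upstream flow to a service-flow hint; the hint names and service path IDs are illustrative, not from any specification.

```python
# Sketch: a CMTS-side classifier using the coarse service-flow separation
# (voice / video / surveillance) as a hint to select a service path.
# A flow whose hint resolves directly (e.g., the surveillance camera)
# skips the DPI appliance; unknown traffic is steered to it instead.
SFLOW_HINT_TO_SPI = {
    "voice":        10,   # VoIP chain: QoS marking only
    "video":        20,   # video optimizer chain
    "surveillance": 30,   # camera chain: straight to the correct SFC
}
DPI_SPI = 99              # service path that begins at the DPI appliance

def select_service_path(service_flow_hint: str) -> int:
    """Map an upstream service-flow hint to a service path identifier."""
    return SFLOW_HINT_TO_SPI.get(service_flow_hint, DPI_SPI)
```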

11.4 Recommendation

Each starting point has pros and cons. In the short term, initiating the SFC from an external device seems the easiest, since it has no dependencies on existing equipment. The next phase would be initiating SFC from the CMTS, since this is a cost-effective solution relative to changing home gateways or CMs. SFC from the home gateway is an in-between solution: not as cost-effective as the former options, but not as granular as application-initiated SFC. The application starting point is the most granular, and there is no doubt about whether the traffic should go through the SFC domain. Starting SFC from the application is useful when the operator has full control over the application; the downside is that this method is limited at this point. As MSOs start to expand into the home network, using an overlay technique may be the easiest to implement. Unfortunately, an overlay does not take full advantage of the DOCSIS network and the rest of the MSO's network, so further research is needed for the case where applications leverage the DOCSIS and core networks rather than an overlay. As stated before, starting from the CM does not scale: legacy CMs might not support it, and it offers no advantages. Finally, a combination of the home gateway and the CMTS might be the best option available right now. More research is needed to identify what is needed between the home gateway and the CMTS, but this option allows SFC to take advantage of both the DOCSIS network and the core network.


12 DOCSIS 3.1 PROFILE MANAGEMENT APPLICATION

12.1 Introduction

The SDN controller will implement a common set of protocols to talk with different network devices. This allows the MSO to focus on creating better services that run as 'applications' above the SDN controller, and lets the controller configure or program each network device appropriately. There are two kinds of applications that an MSO can create and deploy:
• Applications that provide service to the end customer;
• Applications that provide network services to the MSO.

DOCSIS 3.1 Profile Management belongs to the second class of applications that can be enabled by an SDN infrastructure, and is the focus of this section. There are many DOCSIS features that are part of the CMTS but not directly tied to the DOCSIS protocol. DOCSIS 3.1 introduced many new features into the access network, including variable bit loading across a channel, the use of multiple modulation profiles for downstream and upstream channels, and upstream probes to check the quality of the upstream OFDMA signal. The configuration, initiation logic, and compute processing needed to optimize some of these functions (e.g., downstream profile setup or load balancing of CMs) are not an intrinsic part of the DOCSIS MAC and PHY layers. This allows the functionality to be moved out of a CMTS and implemented as an 'application' running outside the CMTS. Such an application can communicate with the CMTS or the CMs to gather the needed information, process the data, and make intelligent decisions to set up the CMTS as needed. The various externally implemented applications communicate with the CMTS to optimize overall network performance. To realize this profile management application, the basic steps are to develop the data models and a protocol to convey that information back and forth. The new physical layer described in [PHYv3.1] allows each subcarrier in a channel to use a different modulation order, organized as a "modulation profile".
A well-designed, optimized set of modulation profiles allows a channel to operate with a lower SNR margin, potentially allowing the channel to operate at an overall higher throughput. The application that implements this optimization logic can be external to a CMTS, enabling the most efficient use of profiles across channels and CMs. For an operator, it also allows uniform operation of such algorithms across different CMTS platforms. This approach fits the architecture developed in this technical report. An SDN controller supporting the appropriate data models to represent the needed information, and supporting the appropriate southbound protocols to configure a CMTS, will be a prerequisite. Through a northbound API, the SDN controller will expose to the Profile Management Application the functionality needed to communicate with the DOCSIS network.

12.2 Problem Description

12.2.1 Background

DOCSIS 3.1 introduced the concept of modulation profiles for OFDM channels. A modulation profile is a list of modulation orders, defined for each subcarrier within a channel. A CMTS can define multiple modulation profiles for use on a channel, where the profiles differ in the modulation orders assigned to each subcarrier. A CMTS can assign different downstream and upstream modulation profiles for different groups of CMs.
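Conceptually, a modulation profile can be treated as a per-subcarrier list of QAM orders, with each subcarrier carrying log2(order) bits per OFDM symbol. The following is a minimal sketch of that idea, ignoring FEC overhead, pilots, and excluded subcarriers; the example orders and the eight-subcarrier channel are illustrative only.

```python
# A modulation profile as a per-subcarrier list of QAM orders.
# Raw capacity ignores FEC, pilots, and excluded subcarriers.
from math import log2

def profile_bits_per_symbol(profile: list) -> int:
    """Total data bits carried by one OFDM symbol under this profile."""
    return sum(int(log2(order)) for order in profile)

# Example: 8 subcarriers mixing 4096-QAM (12 bits) and 1024-QAM (10 bits).
profile_a = [4096] * 5 + [1024] * 3
```

Two profiles on the same channel differ only in these per-subcarrier orders, which is why a CMTS can serve CMs with very different signal quality from the same OFDM channel.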


Figure 48 - DOCSIS 3.1 Downstream OFDM Channel

12.2.2 Problem Statement and Goals

Determining the best modulation profile to use on a channel is difficult, given the number of CMs and the differences in signal quality that they experience. The Profile Management Application (PMA) is designed to help operators determine the best modulation profiles for each channel, given the channel characteristics seen by each CM on the network. Currently the focus is only on downstream modulation profiles; upstream profiles will be considered at a later time. The goals of changing profiles are mainly to:
• Increase throughput per CM
• Maximize network capacity
• Minimize errors

The tasks an external PMA will perform are as follows:
1. Create a set of optimized modulation profiles for use on a channel by selecting the best modulation order for each subcarrier, based on the channel quality measured at the CMs using the channel profile test. (For all CMs)
2. For a new CM, and periodically thereafter, find the best fit among existing modulation profiles and recommend modulation profile usage. (Per CM)
3. Create backup profiles, or downgrade a CM based on errors on a certain profile. For example, based on CM performance and SNR margin, provide a better modulation profile for a CM. (Per CM)

The application will make suggestions for the above three tasks, but it is the CMTS's responsibility to actually implement the changes on the DOCSIS network. The PMA could also be responsible for figuring out how and when to roll out profile changes, in accordance with MSO policy, but ultimate control remains with the CMTS. In the future, we can also envision the PMA being responsible for, and maintaining control of, how all the profiles are actively managed. An additional use case to consider is how to determine the mapping of CMTS profiles to CM populations to optimize network capacity.
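Task 2 above (finding the best fit among existing profiles for a CM) can be sketched as follows, assuming the PMA has reduced each CM's measured channel quality to a maximum supportable modulation order per subcarrier. The function names and this per-subcarrier representation are assumptions for illustration, not part of any DOCSIS data model.

```python
# Sketch of PMA task 2: recommend the highest-capacity existing profile
# that a CM's measured channel quality can support.
from math import log2

def profile_capacity(profile: list) -> int:
    """Raw bits per OFDM symbol for a per-subcarrier list of QAM orders."""
    return sum(int(log2(order)) for order in profile)

def cm_supports(profile: list, cm_max_order: list) -> bool:
    """True if every subcarrier's order is within what the CM can receive."""
    return all(p <= m for p, m in zip(profile, cm_max_order))

def best_fit(profiles: list, cm_max_order: list):
    """Among existing profiles, return the highest-capacity one the CM can
    support; None if no profile fits (the CM needs a fallback profile)."""
    usable = [p for p in profiles if cm_supports(p, cm_max_order)]
    return max(usable, key=profile_capacity) if usable else None
```

In practice the recommendation would also leave an SNR margin below the measured maximum; that refinement is omitted here for clarity.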


12.2.3 High Level Architecture

The following network entities are depicted in the high-level architecture shown in Figure 49 below:
• Profile Management Application (PMA): Responsible for gathering the data it needs to make profile decisions. It interacts with the CMTS through the SDN controller to initiate modulation profile tests, provide new or optimized modulation profiles, and provide suggestions or commands to use these modulation profiles.
• SDN Controller: Mediation/network control layer between the applications and the DOCSIS network devices (CMTSs and CMs). Responsible for exposing the profile information from the CMTS/CMs and the profile actions available to the PMA. Also responsible for implementing the communication protocols to configure the DOCSIS devices and receive information from the network.
• CMTS: Responsible for assigning CMs to use a given modulation profile, based either on internal logic or on commands originating from the PMA. Source of data for the PMA on network conditions, current configuration, and outcomes of modulation profile testing.
• CMs: Use the modulation profile specified by the CMTS. Act as a source of channel quality data for the PMA.

Figure 49 - Possible Composition of the Profile Management Application

At a high level, the PMA makes requests to the CMTS for information; the CMTS in turn sends MAC Management Messages (MMMs) to the CMs to collect the needed data and returns it to the PMA. The PMA also makes recommendations on profiles to the CMTS, which configures CMs to use them at an appropriate time.

12.2.4 Areas of Focus

The primary areas of focus in developing the PMA are:
• Define PMA actions, that is, the work done by the PMA to determine the set of modulation profiles that provides the best overall throughput on a channel.
• Identify the information needed by the PMA to analyze channel conditions across the CMs and perform its functions.

The secondary areas of focus are:
• How to get the needed information
• How to effect the changes from the PMA on the CMTS
• How to monitor the effect of changes on a CMTS


12.3 PMA Use Cases

The current assumption for the PMA is a “pull” model, where the PMA has to explicitly query the CMTS or the CMs (or ask the SDN controller to make the query) in order to get the information it needs. A “push” model might be better: the CMTS informs the PMA (or tells the SDN controller, which tells the PMA) whenever something changes, e.g., when a channel comes up or goes down, when CMs arrive or leave, when CM statistics are updated, when some sort of high error rate condition is flagged, and so forth. This may prove to be easier on the CMTS. With the pull model, the PMA must constantly query the CMTS to find out if anything has changed, and the CMTS has to keep responding to the same queries over and over even if nothing has changed. In contrast, with the push model the CMTS mostly sends out information only when something has changed.

12.3.1 Case 1: New Channel Startup

The new channel startup case occurs on initial deployment, when the allocated spectrum is changing, or when OCD parameters have been changed. Here the CMTS is just starting up the channel, so the channel has no CMs on it. At this point no transmissions are taking place, as the system is just initializing. (Note: this is not the only way to start up a channel.)
• The CMTS requests a set of new profiles for the new channel.
• The PMA requests information from the CMTS:
  • OCD parameters (including configuration of excluded bands),
  • The list of profiles active on the channel and the CMs assigned to each; there will be none, so no further information needs to be requested.
• The PMA may get other information from places other than the CMTS:
  • E.g., it may have previous history for this plant segment/frequency range from DOCSIS 3.0 channels/CMs that have used it,
  • Or from an earlier DOCSIS 3.1 channel,
  • Or from PNM metrics such as spectrum captures gathered from other devices using other frequencies on the plant.
• The PMA responds to the CMTS request with a list of profiles to be used as a starting point for the new channel:
  • Since there are no CMs on the channel, no profile-to-CM assignments are sent.

12.3.2 Case 2: Profile Optimization

After completion of Case 1, the CMTS distributes DPDs and CMs begin to join the channel. The CMTS will assign each CM a set of profiles on its own, as the PMA initially would probably be too slow to be in the CM initialization/startup path. At this point, data can be gathered from and about the devices actually using the channel. After some period of time, it would be desirable to have the current set of profiles analyzed to see if it can or should be optimized to better match the conditions affecting the devices actually present. There are two ways to work this under the pull model, as described below.

12.3.2.1 CMTS Initiated

The CMTS may initiate profile optimization because (a) an operator manually instructs the CMTS to do so, or (b) the CMTS has some sort of internal metrics that trigger a request for profiles to be optimized; for example, a certain time period has elapsed, a certain number of new CMs have joined and/or old CMs have left, or the relative utilization of profiles has become unbalanced past a certain point (e.g., some profiles are largely unused).

12.3.2.2 PMA Initiated

The advantage of the PMA initiating profile optimization is that the CMTS would not have to include any internal metrics to trigger a request. Instead, the PMA would monitor items such as those described in Section 12.3.2.1 and decide when optimization is in order. Put another way, the PMA's job would be not only to calculate optimized profiles, but also to determine when optimization should be considered. This seems like a logical thing to ask the PMA to do rather than the CMTS.

 102 CableLabs 06/25/15 SDN Architecture for Cable Access Networks Technical Report VNE-TR-SDN-ARCH-V01-150625

A downside is that, with a “pull” model, the PMA must frequently query the CMTS (say, every 1 to 15 minutes, or up to once an hour) for the list of profiles and assigned CMs. This is an example of a case where a “push” model would be lighter-weight: whenever a new CM joins, the CMTS sends information about it, and if no new CMs have joined, it does not have to constantly answer queries. With either model, the PMA would probably want traffic statistics at frequent intervals (say, tenths of seconds to a minute) and would want to keep a history of these statistics. However, this is probably already being monitored by an NMS, so the PMA most likely could get it from there.

12.3.2.3 Steps to Optimize the Profile(s)

For option 1, CMTS initiated:
• The CMTS sends a request to the PMA for optimization of profiles for the channel.

For option 2, PMA initiated, the above step is skipped; the PMA decides to perform an optimization without any explicit message to that effect. Everything else is the same.
• Using the pull model, the PMA requests the information it needs to ensure that its data is current:
  • The PMA requests OCD parameters; the CMTS responds with this information.
  • The PMA requests the list of profiles and the CMs assigned to each; the CMTS responds with this information.
  • The PMA requests DPD information for each profile; the CMTS responds with this information.
  • The PMA requests ODS information from each active CM; this could be requested either from the CMTS or from the CMs directly.
  • The PMA may want other CM performance-related information, e.g., FEC statistics and packet counts (total/errored); it could get this from the CM or from the NMS.
  • The PMA may want profile traffic statistics (e.g., transmitted byte counts); these might also come from the NMS.

After gathering all this information, the PMA comes up with a list of profiles and a list of CMs to be assigned to each profile.
• Now the PMA is expected to manage the process of changing over to the new profiles. Most likely it (and the operator) will want some amount of profile testing done prior to switching over, so the PMA may perform these steps:
  • If all 16 profiles on a CMTS are already in use, the PMA will first have to free up some profiles so that they can be configured for testing. The PMA will tell the CMTS to move CMs off of a few profiles.
  • The PMA sends the CMTS a list of profile assignments for certain CMs, to move all CMs away from profiles that are to be decommissioned.
  • Once all CMs are moved, the PMA tells the CMTS to delete the now-unused profile(s). NOTE: In DOCSIS there is no formal way to “delete” a profile; however, once no CMs are using a profile, its DPDs can disappear.
  • Or, if the PMA is going to start using these profiles for something else right away, it could just tell the CMTS to do a DPD change.
  • Either way, the PMA tells the CMTS to add a new profile, or to change a currently unused profile into one of the new profiles it wants to test.
  • The PMA tells the CMTS to perform OPT testing on the new profile with a CM or CMs that it expects will be able to use it.
  • If the testing is successful, the PMA tells the CMTS to assign the profile to the CM, and goes on to the next CM.

Once all desired CMs have been moved to the first new profile, the PMA can repeat this process for subsequent profiles, until all CMs have been moved.
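The test-then-assign portion of the rollout above can be sketched as a simple control loop. The `cmts` object here is a stand-in for the SDN-controller calls that would drive a real CMTS; the method names are hypothetical, and only the control flow mirrors the steps in the text.

```python
# Sketch of the per-profile rollout loop: test the new profile on each
# candidate CM via OPT, assign it only on success, and defer the rest.
def roll_out_profile(cmts, new_profile_id, candidate_cms):
    """Returns the CMs left on their old profiles; the PMA would retry
    these against a lower ("less optimistic") profile later."""
    cmts.add_or_change_profile(new_profile_id)   # DPD change for a free slot
    deferred = []
    for cm in candidate_cms:
        # OPT tests typically take 3-10 seconds each, so rolling a new
        # profile out to many CMs is expected to take tens of minutes.
        if cmts.run_opt_test(new_profile_id, cm):
            cmts.assign_profile(new_profile_id, cm)
        else:
            deferred.append(cm)                  # too many FEC errors, etc.
    return deferred
```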


If at any point problems are encountered, the PMA will alter its plan as needed. For example, if a particular CM does not perform well enough in the profile (OPT) tests (too many FEC errors, for instance), the PMA may decide to move that CM to a lower or different new profile instead. This could even be part of the strategy: run profile tests on a profile that is “optimistic” for the CM in question, and see if it works. Recall that profile tests (OPTs) will typically take 3-10 seconds to run, so switching to a new set of profiles affecting many CMs/flows should be expected to take multiple tens of minutes or more.

12.3.3 Case 3: “Fallback” Use Case

In the fallback use case, a CM has been operating on a channel for some time (short or long) but begins experiencing errors. Here the CMTS is expected to be the “first line of defense”. When the condition is detected (e.g., using CM-STATUS), the CMTS may, and should, take action right away to move the CM to a different profile. Involving the PMA might take too long; it is better to quickly move the CM to a profile that gives it some (less efficient) service than to let it continue losing packets. If the PMA has listed “backup” profiles in the profile-to-CM assignments, the CMTS could first move the CM to the “backup” profile; otherwise it could move the CM to profile A or to whatever other profile it thinks will work. After this has been done, the CMTS might ask the PMA to intervene:
• The CMTS sends a request to the PMA for a new profile for the specific CM.
• Under the “pull” model, the PMA now requests the latest information about the channel:
  • OCD (OFDM Channel Descriptor)
  • List of profiles and the CMs assigned to each
  • DPDs for the profiles

The PMA then requests the latest information about the CM (e.g., ODS, which it may get from the CMTS or the CM). The PMA also collects any other information, such as traffic statistics from the NMS, PNM information, etc.
It may also want to examine its history of previous profile assignments for the CM. The PMA processes the information and comes up with a recommendation. This could involve leaving the CM where it is, choosing a different profile for the CM from among those currently configured, or setting up a new profile and moving the CM (and possibly other CMs) over to it. Whatever it decides, it uses the same set of commands described in Section 12.3.2.3 to accomplish its goals.

12.3.4 Push versus Pull Approaches

A “push” approach makes sense for some scenarios but not all. Specifically, anything reflecting channel properties or CM state should be pushed. This includes OCD and DPD information, profile/CM assignments, and CM join/leave events. These things cannot change without the CMTS knowing, so there is no point in having the PMA ask repeatedly for this information. It may ask at wide intervals (say, hourly) to ensure nothing was missed, but it should not have to poll every 10 seconds to stay up to date. A “pull” approach probably makes more sense for measurement results such as OPT or ODS. The PMA will want to decide when it needs new information or when the information it has previously stored is sufficiently recent. This is not to say the CMTS could not push these things if it had them and wanted to; for example, if it runs an OPT test on a CM for its own reasons, it might make sense to share the results with the PMA. However, for these items the PMA cannot assume that nothing has changed since the last set of results, so it is fair to expect it to ask for what it needs when it needs it.
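The push/pull split described above can be captured as a simple lookup that a PMA implementation might consult before polling. The item names are shorthand for the data items in the text; the categorization follows it directly: channel/CM state is pushed, while measurement results are pulled on demand.

```python
# The push/pull split from Section 12.3.4 as a lookup table.
# Push: state the CMTS always knows has changed (no polling needed).
# Pull: measurement results the PMA requests when it needs fresh data.
DELIVERY_MODE = {
    "OCD":                 "push",
    "DPD":                 "push",
    "profile-assignments": "push",
    "cm-join-leave":       "push",
    "OPT-results":         "pull",
    "ODS-results":         "pull",
}

def pma_should_poll(item: str) -> bool:
    """True if the PMA must explicitly query the CMTS for this item.
    Unknown items default to pull, the report's current assumption."""
    return DELIVERY_MODE.get(item, "pull") == "pull"
```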

12.4 Data Elements and Actions

A Profile Management Application (PMA) needs to exchange information with CCAP devices (CMTSs and CMs). This includes gathering information about OFDM channel parameters and performance, as well as providing CCAP devices with OFDM channel modulation profile configuration decisions based on analyzing the gathered data. A PMA does this continuously to adapt to the changing conditions of OFDM channels and the CMs that use them.

CableLabs, 06/25/15, SDN Architecture for Cable Access Networks Technical Report, VNE-TR-SDN-ARCH-V01-150625

There is no need to do modulation profile management for the DOCSIS 3.1 PLC. The need for NCP modulation profile management is to be determined later. In this section, we focus only on data modulation profile management.

NOTE: This section covers only downstream modulation profile management for now. Upstream profile management will be worked out at a later time.

12.4.1 Data/Messaging Scope

This section defines the message content across the PMA-CMTS interface.

NOTE: The PMA-to-CM interface is defined in [CM-OSSI]. Potential enhancements to that interface belong to DOCSIS 3.1 OSSI.

The information used by the PMA falls into one of the following categories:

• OFDM channel parameters and statistics: the OFDM channel configuration parameters and error statistics from both the CMTS and the CMs.
• DOCSIS network topology and subscriber information: information such as the fiber node layout and the CM association with service groups could be useful for the PMA, as well as for other SDN applications.
• Topographical and other cable plant information: topographical and other cable plant information used by PNM could also be useful for the PMA.

In this section, we focus on only the first category listed above, OFDM channel parameters and statistics.

12.4.2 Messages for the PMA-CMTS Interface

This section defines the types of messages and the message contents needed for the PMA-CMTS interface. Table 11 provides a list of messages that are used for downstream channels.

Table 11 - PMA-CMTS Downstream Modulation Profile Messages

Message                               Description
Downstream OFDM Channel Descriptor    Conveys the configured channel parameters for a downstream DOCSIS 3.1 channel.
Downstream Profile Request            Either a request from the CCAP for a new profile or a request from the PMA for the details of an existing profile.
Downstream Profile Descriptor         Provides the configuration details of a modulation profile.
Downstream Spectrum Request           Request from the PMA for the RxMER values for a channel from a given CM.
Downstream Spectrum Descriptor        Conveys the per-subcarrier RxMER measurements for a channel from a given CM.
Downstream Profile Test Request       Request from the PMA for the CCAP to test a specified modulation profile.
Downstream Profile Test Response      Conveys the results of the test of a modulation profile on a specified CM.
CM-to-Profile Assignment Request      Request for a list of CMs that are assigned a specified modulation profile.
CM-to-Profile Assignment Descriptor   Provides a list of CMs that are or are to be associated with a modulation profile.
Profile-to-CM Assignment Request      Request for a list of the profiles that are assigned to a CM.
Profile-to-CM Assignment Descriptor   Provides a list of channels and modulation profiles that are assigned or should be assigned to a CM.


Although a PMA may manage multiple CMTSs, we assume that each CMTS can be uniquely identified by its host IP address, which is carried in each message's IP packet header. The message content defined here is therefore within the context of a single CMTS. This section does not imply that the interface needs to use TLV-based messages; the intent of this section is to tease out the needed information exchange.

12.4.2.1 Downstream OFDM Channel Descriptor

The PMA needs to have the OFDM channel parameters as configured on the CMTS. The Downstream OFDM Channel Descriptor carries the OFDM parameters that are common across profiles. This message is sent from the CMTS to the PMA. It closely resembles the OCD message defined in DOCSIS 3.1 MULPI (most fields are copied from [MULPIv3.1]).

Table 12 - Downstream OFDM Channel Descriptor Message

Name                  Type             Length (bytes)   Value
IfIndex               Integer (Int32)  4
Config change count   Unsigned Byte    1

Note: OCD parameters same as in Table 6-63 in [MULPIv3.1].
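A PMA that receives repeated channel descriptors can use the config change count to decide whether the channel configuration actually changed since the last descriptor. A minimal sketch; treating the counter as wrapping modulo 256 (it is an unsigned byte) is an assumption for illustration, not spec-mandated behavior:

```python
def ocd_changed(prev_count, new_count):
    """Detect a change in the config change count, allowing unsigned-byte wraparound."""
    return (new_count - prev_count) % 256 != 0

assert not ocd_changed(5, 5)   # same count: no configuration change
assert ocd_changed(5, 6)       # incremented: configuration changed
assert ocd_changed(255, 0)     # wrap from 255 to 0 still counts as a change
```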

12.4.2.2 Downstream Profile Request

When sent from the CMTS to the PMA, this message is used by the CMTS to request that the PMA provide a new set of profiles, or the details of a specific profile identified by Profile ID, for a downstream OFDM channel. When sent from the PMA to the CMTS, this message requests that the current profiles in use on the CMTS be sent to the PMA. An IfIndex of 0 indicates a request for all the profiles on all downstream OFDM channels on a given CMTS. A Profile ID of 0xFF indicates a request for all profiles on the downstream OFDM channel specified by the IfIndex.

Table 13 - Downstream Profile Request Message

Name        Type             Length (bytes)   Value
IfIndex     Integer (Int32)  4
Profile ID  Unsigned Byte    1
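Since this section deliberately leaves the wire format open, the following is only one possible encoding of the two fields in Table 13, using network byte order. The function names and the use of Python's struct packing are illustrative assumptions; the wildcard values (IfIndex 0, Profile ID 0xFF) come from the text above:

```python
import struct

WILDCARD_IFINDEX = 0        # all downstream OFDM channels on the CMTS
WILDCARD_PROFILE_ID = 0xFF  # all profiles on the specified channel

def encode_profile_request(if_index, profile_id):
    """Pack the Table 13 fields: 4-byte signed IfIndex + 1-byte Profile ID."""
    return struct.pack("!iB", if_index, profile_id)

def decode_profile_request(payload):
    if_index, profile_id = struct.unpack("!iB", payload)
    return {
        "if_index": if_index,
        "profile_id": profile_id,
        "all_channels": if_index == WILDCARD_IFINDEX,
        "all_profiles": profile_id == WILDCARD_PROFILE_ID,
    }

msg = encode_profile_request(0, 0xFF)   # "all profiles on all channels"
assert len(msg) == 5                    # 4 + 1 bytes, matching the length column
assert decode_profile_request(msg)["all_channels"]
assert decode_profile_request(msg)["all_profiles"]
```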

12.4.2.3 Downstream Profile Descriptor

When sent from the CMTS to the PMA, this message informs the PMA which modulation profile is currently being used on the CMTS. When sent from the PMA to the CMTS, it informs the CMTS of a profile computed by the PMA; in this case, the configuration change count is ignored.

Table 14 - Downstream Profile Descriptor Message

Name                        Type             Length (bytes)   Value
IfIndex                     Integer (Int32)  4
Profile ID                  Unsigned Byte    1
Profile attributes          Unsigned Int     2                Bit definitions to indicate if the profile is used for multicast, voice, etc.
Configuration change count  Unsigned Byte    1

Note: Subcarrier assignment TLVs as defined in Tables 6-64 and 6-65 in [MULPIv3.1].


Profile descriptors may be sent from the CMTS to the PMA, or from the PMA to the CMTS, with or without a preceding Downstream Profile Request message.

12.4.2.4 OFDM Downstream Spectrum Request

This message is used by the PMA to request that the CMTS send an ODS-REQ to a CM. If the CM MAC address is the broadcast address, the request is for the CMTS to send an ODS-REQ to all the CMs on the OFDM channel.

Table 15 - OFDM Downstream Spectrum Request Message

Name            Type             Length (bytes)   Value
CM MAC address  MacAddress       6
IfIndex         Integer (Int32)  4
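The broadcast-address fan-out described above can be sketched as a small helper on the CMTS side. This is an illustrative sketch; the function name and the list-based CM registry are assumptions, not part of the message definition:

```python
BROADCAST_MAC = "ff:ff:ff:ff:ff:ff"

def ods_targets(cm_mac, cms_on_channel):
    """Return the CMs the CMTS should send an ODS-REQ to for this request."""
    if cm_mac.lower() == BROADCAST_MAC:
        return list(cms_on_channel)  # fan out to every CM on the OFDM channel
    return [cm_mac] if cm_mac in cms_on_channel else []

online = ["00:11:22:33:44:55", "00:11:22:33:44:56"]
assert ods_targets(BROADCAST_MAC, online) == online            # broadcast: all CMs
assert ods_targets("00:11:22:33:44:55", online) == ["00:11:22:33:44:55"]
assert ods_targets("de:ad:be:ef:00:01", online) == []          # unknown CM: nothing
```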

12.4.2.5 OFDM Downstream Spectrum Descriptor

This message is sent from the CMTS to provide the PMA with the RxMER values for a given CM. It serves the same purpose as the ODS-RSP message defined in [MULPIv3.1]. This message may be sent from the CMTS to the PMA either unsolicited or in response to an OFDM Downstream Spectrum Request message.

Table 16 - OFDM Downstream Spectrum Descriptor Message

Name                      Type             Length (bytes)   Value
CM MAC address            MacAddress       6
Timestamp (time of day)   Unsigned Int     4                Time the measurement was done by the CM
IfIndex                   Integer (Int32)  4

Note: ODS-RSP encodings as specified in Table 6-66 in [MULPIv3.1].

12.4.2.6 OFDM Downstream Profile Test Request

This message is used by the PMA to request that the CMTS send an OPT-REQ message to the specified CMs to test a modulation profile on a channel. A broadcast CM MAC address indicates a request to send the OPT-REQ to all CMs on the OFDM channel. A list of CM MAC addresses indicates a request to send the OPT-REQ to all the CMs in the list.

Table 17 - OFDM Downstream Profile Test Request Message

Name                 Type             Length (bytes)   Value
CM MAC address list  MacAddress       6 * N
IfIndex              Integer (Int32)  4
Profile ID           Unsigned Byte    1
Op code              Unsigned Byte    1                1 - Start; 2 - Abort

Note: OPT-REQ TLV encodings as specified in Table 6-67 in [MULPIv3.1].


12.4.2.7 OFDM Downstream Profile Test Response

This message is how the CMTS informs the PMA of the downstream profile test results for a CM. If the request was sent to multiple CMs, one response is sent for each CM that received the request.

Table 18 - OFDM Downstream Profile Test Response Message

Name                      Type             Length (bytes)   Value
CM MAC address            MacAddress       6
Timestamp (time of day)   Unsigned Int     4
IfIndex                   Integer (Int32)  4
Profile ID                Unsigned Byte    1
Status                    Unsigned Byte    1                1 - Testing; 2 - Profile already testing from another request; 3 - No free profile resource on CM; 4 - Max duration expired; 5 - Aborted; 6 - Complete; 7 - Profile already assigned to the CM. All other values reserved.

Note: OPT-RSP TLV encodings as in Table 6-68 in [MULPIv3.1].
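The status values in Table 18 map naturally onto an enumeration on the PMA side. The enum and the notion of "terminal" statuses below are illustrative assumptions about how a PMA might track outstanding tests; only the numeric values and their meanings come from the table:

```python
from enum import IntEnum

class OptTestStatus(IntEnum):
    """Status values from Table 18; all other values are reserved."""
    TESTING = 1
    ALREADY_TESTING_OTHER_REQUEST = 2
    NO_FREE_PROFILE_RESOURCE = 3
    MAX_DURATION_EXPIRED = 4
    ABORTED = 5
    COMPLETE = 6
    PROFILE_ALREADY_ASSIGNED = 7

def is_terminal(status):
    # Hypothetical helper: statuses after which no further responses are expected.
    return status in (OptTestStatus.MAX_DURATION_EXPIRED,
                      OptTestStatus.ABORTED,
                      OptTestStatus.COMPLETE)

assert OptTestStatus(6) is OptTestStatus.COMPLETE
assert is_terminal(OptTestStatus.COMPLETE)
assert not is_terminal(OptTestStatus.TESTING)
```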

12.4.2.8 CM-to-Profile Assignment Request

This message is used to request a list of CMs that are assigned to a profile. When sent from the CMTS to the PMA, it requests the suggested CM-to-profile assignment computed by the PMA. When sent from the PMA to the CMTS, it requests the current CM-to-profile assignment on the CMTS.

Table 19 - CM-to-Profile Assignment Request Message

Name        Type             Length (bytes)   Value
IfIndex     Integer (Int32)  4
Profile ID  Unsigned Byte    1

12.4.2.9 CM-to-Profile Assignment Descriptor

This message is used to provide a list of CMs that are assigned to a profile. When sent from the CMTS to the PMA, it describes the current CM-to-profile assignment used on the CMTS. When sent from the PMA to the CMTS, it describes the suggested CM-to-profile assignment as recommended by the PMA; the profile change count is ignored in this case. This message may be sent with or without a preceding CM-to-Profile Assignment Request.

Table 20 - CM-to-Profile Assignment Descriptor Message

Name                                   Type                       Length (bytes)   Value
IfIndex                                Integer (Int32)            4
Profile ID                             Unsigned Byte              1
Profile change count                   Unsigned Byte              1                Change count from within DPD
Profile attributes                     Integer (Int32)            4 (32-bit mask)  Bit definitions to indicate if the profile is used for multicast, voice, etc.
List of (CM MAC address, CM state)     MacAddress, Unsigned Byte  (6+1) * N        CM state is the CM operStatus
tuples


12.4.2.10 Profile-to-CM Assignment Request

This message is used to request a list of modulation profiles that are assigned to a CM. When sent from the CMTS to the PMA, it requests the PMA to suggest the profile-to-modem assignment for the CM. When sent from the PMA to the CMTS, it requests the current profile-to-CM assignment on the CMTS for the specified CM.

Table 21 - Profile-to-CM Assignment Request Message

Name            Type        Length (bytes)   Value
CM MAC address  MacAddress  6

12.4.2.11 Profile-to-CM Assignment Descriptor

This message is used to provide a list of downstream OFDM channels and modulation profiles assigned to a CM. When sent from the CMTS to the PMA, it describes the current profile-to-CM assignment used on the CMTS for that CM. When sent from the PMA to the CMTS, it describes the suggested profile-to-CM assignment as computed by the PMA; the profile change count is ignored in this case, and the tuple could be defined differently to exclude it. This message may be sent with or without a preceding Profile-to-CM Assignment Request.

Table 22 - Profile-to-CM Assignment Descriptor Message

Name                                       Type                Length (bytes)   Value
CM MAC address                             MacAddress          6
List of (IfIndex, Profile ID,              Integer (Int32),    (4+1+4+1) * N
profile attributes, profile change count)  Unsigned Byte,
tuples                                     Integer (Int32),
                                           Unsigned Byte
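The per-tuple length of (4+1+4+1) bytes in Table 22 can be checked with one possible packed encoding. As with the earlier example, this is an illustrative sketch in network byte order, not a defined wire format:

```python
import struct

# (IfIndex, Profile ID, profile attributes, profile change count) per Table 22.
TUPLE_FORMAT = "!iBiB"  # "!" disables padding, so size is exactly 4+1+4+1 = 10

def encode_assignments(cm_mac_bytes, tuples):
    """Pack a 6-byte CM MAC followed by the (4+1+4+1)-byte assignment tuples."""
    body = b"".join(struct.pack(TUPLE_FORMAT, *t) for t in tuples)
    return cm_mac_bytes + body

mac = bytes.fromhex("001122334455")
tuples = [(101, 3, 0, 7), (102, 1, 0, 2)]
msg = encode_assignments(mac, tuples)
assert struct.calcsize(TUPLE_FORMAT) == 4 + 1 + 4 + 1
assert len(msg) == 6 + (4 + 1 + 4 + 1) * len(tuples)  # matches the length column
```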

12.4.2.12 Additional DOCSIS Network and CM Information

Additional information from the CMTS may be useful to the PMA, for example:

• CM time offset, indicating the CM's distance from the CMTS.
• Fiber node configuration and modem service group assignment, indicating the location of the CM.

Whether such messages should be defined within the PMA or in the wider scope of PNM will be determined in the future.

12.4.2.13 PMA Configuration

Additional information may be useful to configure the PMA, for example:

• Constraints on profiles, e.g., the upper limit of the range of bit loading values a profile may take.
• Constraints on a CM's bit loading ranges.
• Number of subcarrier blocks.
• Other information that can be exchanged at boot time that is not channel-specific, e.g., topology information.


12.5 PMA Data Backend and Protocols

Figure 50 - PMA Data Backend and Protocols

Figure 50 shows the various protocols and locations/devices from which the data needed for a PMA application can be obtained.

12.6 Other PMA Considerations

12.6.1 Central Database

The information exchange between the CMTS and the PMA is the key enabler for this application. This information can be exchanged in real time between the PMA and the CMTS. Another approach, which eases the communication burden on the reporting entities, is to implement a central database that is updated with all the information. An SDN controller could host a master database capturing all the information exchange needed for this application. This database can hold everything the PMA needs: the profiles in use and their details, which CMs are using which profiles, and historical profile and usage information. It is updated periodically by the CMTS or by pulling data from other sources (e.g., PNM data).

12.6.1.1 Types of Data Needed

A PMA needs various kinds of information to compute optimal profiles. All of this data can be stored in a database accessed through the SDN controller, so that it can be used by multiple applications. The following kinds of data are useful to store in such a database:

• CMTS registration information (captured when the CMTS connects with an SDN controller)
• PMA initial configuration information
• Current CMTS OFDM channel configurations
• List of all CMs that are online
• List of profiles in use by each CM


• Historical data, e.g., profile configurations that were used, not accepted, etc.
• Signal quality analytics for each downstream channel on the CMTS:
  • MER information from CMs
  • Data obtained from the OPT-RSP messages from a CM (see [MULPIv3.1])
  • Statistics: FEC statistics (e.g., LDPC), CRC statistics, and MAC-layer statistics
• Proactive Network Maintenance (PNM) data from a CM (see the section on PNM in [PHYv3.1])
• Data from the Symbol Capture function on a CM and a CMTS; this data can be used to derive the transfer function of a channel

12.6.2 Multiple Masters Problem

Once a PMA application is implemented, more than one entity can configure the profiles to be used: both the CMTS and the PMA have the ability to calculate optimal profiles and configure the needed changes on the plant. The current design assumes that the PMA essentially makes suggestions or recommendations to the CMTS, which the CMTS can choose to implement or decline. At this point, the higher-level understanding is that the CMTS can be autonomous, i.e., it does not have to use the PMA for any or all of the use cases described previously. In the future, a model where the PMA takes absolute control of managing all the profiles on the CMTS can be considered.

12.6.3 Evaluating Profile Changes

Once the PMA and CMTS implement profile changes, the system needs to be monitored to evaluate the effect of those changes. The throughput and packet errors of each individual CM need to be measured to evaluate how each CM is performing on its assigned profiles. The network capacity also needs to be monitored and measured to evaluate whether the profile changes were optimal in terms of maximizing capacity. These responsibilities will typically be shared between the CMTS and the PMA.
This requires real-time responsiveness from the PMA and the implementation of intelligent algorithms that can be configured with policies from the operator.

12.6.4 Policy Definition

Changing a profile could cause temporary traffic interruptions for a CM, and there are many situations in which a PMA or a CMTS will not be able to change the profile assigned to a CM. This may be because the CM is running services such as video streaming or a voice call at that moment, and service continuity is more important to the operator at that time than the positive effects of a profile change. All of these cases need to be captured as a set of policy rules. These would be exposed as APIs for policy enforcement, policy deployment, and status reporting. The policy is defined by an MSO during initialization of the PMA. At runtime, the CMTS or the PMA checks the service interruption policy (and any other conditions) before implementing a profile change.

12.6.5 Data Acquisition Methods

The choice for the PMA is either to reuse existing mechanisms for data retrieval from the CM or the CMTS, or to require support for new protocols such as RESTCONF (as described in this technical report) on the CMTS. The data from the CMTS and the CMs can be acquired in several ways: SNMP MIB reads, MAC Management Messages, FTP of data from the CM to a PNM server, or routing all data and commands through the RESTCONF interface on the CMTS. Further analysis and design are needed to determine the best mechanisms for exchanging data between a CMTS and the PMA.

12.6.6 Data Volume for Message Exchanges

The relevant messages for profile management are described in the following subsections.

12.6.6.1 Downstream Profile Descriptor

There is one Downstream Profile Descriptor per profile, and it consists of:

• MMM header + 1 byte DS Channel ID + 1 byte profile ID + 1 byte config change count


• Subcarrier assignment TLV: varies, up to 255 bytes
• Transmission frequency:
  • On the PLC: every 200-250 ms
  • On Profile A: every 500-600 ms

12.6.6.2 OFDM Downstream Spectrum

• ODS-REQ: MMM header + 1 byte DS channel ID

• ODS-RSP: MMM header + 1 byte DS channel ID
  • ODS-RSP-MP TLV size (see Table 6-66 of [MULPIv3.1]): 1+2+1+2+2+1+2+N = N+11 bytes (N ≤ 7680)

• Per-CM and per-OFDM-channel
• Transmission time is vendor dependent
• The received RxMER values will need to be stored

12.6.6.3 OFDM Downstream Profile Test

• OPT-REQ: MMM header + 2 bytes reserved + 1 byte DS Channel ID + 1 byte profile ID + 1 byte opcode
• OPT-RSP:
  • MMM header + 2 bytes reserved + 1 byte DS Channel ID + 1 byte profile ID + 1 byte opcode
  • 2N + 59 bytes (15,419 bytes for N = 7680)

12.6.6.4 Data Volume Estimate

The approximate volume for each CM and each profile is about 20 kB. The scale factors are:

• Number of DOCSIS 3.1 CMs; e.g., 10K CMs would be 200 MB.
• Number of profiles, if we keep track of all candidate profiles for each CM.
• Number of historical copies stored.

For example: 20 kB * 10K CMs * 10 profiles * 10 historical copies = 20 GB for one CMTS.

12.6.7 Gaps in the PMA Solution

The design of the PMA needs further analysis in the following areas:

• Initialization of the PMA and CMTS needs to be worked out.
• How the PMA decides when to update profiles on a CMTS/CM:
  • The PMA needs to decide when a CM should move to a lower or higher profile; this logic would be a PMA vendor's trade secret.
  • The PMA could initiate a test on the current channel/RxMER (using MMM or PNM), get the required information, and then produce recommendations for new profiles.
• CMTS vendors need to investigate the mechanics of implementing support for a PMA application function on an existing CMTS platform, specifically when to kick off DOCSIS messages based on commands from the PMA, how to extract information from MMMs, etc.

As the team develops the PMA application further and the data models mature, these gaps will be addressed.
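The message-size formulas and the data-volume estimate of Section 12.6.6 can be checked numerically; the helper names below are invented for this sketch, and decimal units (1 kB = 1000 bytes) are assumed:

```python
def ods_rsp_tlv_bytes(n):
    """ODS-RSP-MP TLV size per Section 12.6.6.2: 1+2+1+2+2+1+2+N = N+11."""
    assert 0 <= n <= 7680, "N is bounded by the maximum subcarrier count"
    return 1 + 2 + 1 + 2 + 2 + 1 + 2 + n

def opt_rsp_bytes(n):
    """OPT-RSP payload size per Section 12.6.6.3: 2N + 59."""
    return 2 * n + 59

def storage_estimate_bytes(per_cm_profile_kb, cms, profiles, copies):
    """Data volume estimate per Section 12.6.6.4."""
    return per_cm_profile_kb * 1000 * cms * profiles * copies

assert ods_rsp_tlv_bytes(0) == 11                          # fixed overhead alone
assert ods_rsp_tlv_bytes(7680) == 7691                     # N + 11 at maximum N
assert opt_rsp_bytes(7680) == 15419                        # matches the text
assert storage_estimate_bytes(20, 10_000, 1, 1) == 200e6   # 10K CMs -> 200 MB
assert storage_estimate_bytes(20, 10_000, 10, 10) == 20e9  # -> 20 GB per CMTS
```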


13 INTENT-BASED NETWORKING – VISION AND ARCHITECTURE

The traditional model for operating networks involves modeling network topologies and protocols to implement the network. Intent-based networking, by contrast, involves modeling the applications (or workloads) and their interactions, enabling automation logic to drive the conversion to a network implementation. Intent-based networking translates 'Intent', expressed in a high-level language applicable to the applications, into parameters applied to the northbound interface (NBI) of a network controller, which implements the network state needed to deliver the desired behavior. A simplified cable-centric architectural diagram is shown below.

Figure 51 - High Level Intent-Based Networking Architecture

The Intent Engine may be part of the SDN controller. Intent-based networking simplifies the task for network managers, who only need to specify the distributed workload's behaviors and communications requirements instead of needing detailed network configuration expertise and an understanding of network protocols and equipment interfaces. Users select what they want via a web portal or app, and the intent engine invokes algorithms/rules at run time to deliver it. In practice, the network state is realized end-to-end automatically, without manual intervention at run time or programming of detailed network implementation specifics at design time.

This extensible interface will support the creation of rules and constraints, with associated Intent algorithms, for a diversity of use cases. Support is planned for both 'hard' constraints, e.g., bandwidth (Mb/s), latency (ms), etc., and 'soft' constraints, e.g., "high quality", "lowest-latency". The Intent controller is responsible for re-computing and reporting the state of fulfillment of system intent whenever the state of the intent repository or the state of the network changes. Constraints can be added for purposes such as ensuring efficient use of routing capacity (taking into account demand and bottlenecks), minimizing energy consumption, maximizing equipment utilization, delivering a requested bitrate or latency, in-service system expansion/contraction, advanced security features, etc.

How to present users with choices, how to articulate 'Intent', and how to translate it into actionable network configuration commands is the problem being addressed by a variety of projects and organizations. This document provides an introduction to Intent-based networking and an overview of the leading industry activities. It also provides recommendations on next steps for MSOs to begin the journey toward realizing the benefits of Intent-based networking.
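The hard/soft constraint distinction can be illustrated with a toy intent record. All field names and the checking logic here are invented for illustration; they are not part of any intent NBI discussed in this report:

```python
def check_hard_constraints(intent, offered):
    """Return True only if every hard (numeric, must-satisfy) constraint is met."""
    bw_ok = offered["bandwidth_mbps"] >= intent["hard"]["min_bandwidth_mbps"]
    lat_ok = offered["latency_ms"] <= intent["hard"]["max_latency_ms"]
    return bw_ok and lat_ok

intent = {
    "hard": {"min_bandwidth_mbps": 10, "max_latency_ms": 30},
    "soft": ["lowest-latency"],  # advisory; used to rank the feasible options
}

# A feasible offer satisfies all hard constraints; soft constraints only rank it.
assert check_hard_constraints(intent, {"bandwidth_mbps": 25, "latency_ms": 12})
assert not check_hard_constraints(intent, {"bandwidth_mbps": 25, "latency_ms": 80})
```

The key design point the sketch captures is that hard constraints filter candidate network states, while soft constraints merely order the survivors.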

13.1 Use Cases and Intent-based Order Portals

MSO order portals will evolve to accommodate intent-based services and this will be a rich topic for innovation and service provider differentiation. Networks basically provide two capabilities: connectivity (i.e., forwarding/filtering/isolation rules) and Quality of Service (QoS). It is useful to consider a representative set of use


cases which illustrate the opportunities and challenges for implementing Intent-based networking. Three high-level descriptive use cases are given here, spanning connectivity and QoS. Each use case can be described as a hierarchy of three levels: at the highest level, the user has something in mind that she/he wants to accomplish; the next level is an application the user interacts with to express intent; and the final level is whatever underlying network commands the intent gets translated into. Specifying Intent algorithms and translating them into the commands required to configure the network infrastructure is the problem being addressed by a number of industry forums, as detailed later in this report. Existing domain-specific languages supporting today's OSS/BSS systems could also be mapped to an intent system, enabling existing environments to start moving toward Intent-based networking.

13.1.1 VPN Service Order Scenario

VPN is a popular business service provided by MSOs, as it can be bundled with other services such as Internet access, and with options such as a managed router. It can be described in Intent-based networking terms as a relationship between customer locations or groups joining a VPN as a single entity. The relationship can be fine-tuned; for example, this group can communicate with that group but not with others, etc.

At the user level, the user might describe the requirement as: "I want the network port in my home office to act like it's a port on the corporate LAN at work." At the application level, an operator might provide a web portal for customers to use to set up their services. From her home, the user starts a browser and connects to the web portal. She may need to input some parameters, like a company ID, security code, and desired bit rate. At the network provisioning level, if the user is served by a DOCSIS network, the result might be commands to implement an L2 or L3 VPN service.
There can be several types of VPN service orders: new, change, disconnect, and in-flight change/modification. In this example, we cover only "new" and "change" service orders. A new service order creates a complete VPN. A change service order can be many things, such as "add a new location to an existing VPN". The following tables illustrate the kinds of data that would be applied at the network provisioning level to implement various types of VPN service order.

Table 23 - Service Order Data for New VPN

#  Name              Type                               Note
1  Customer ID       Table
2  Class of Service  Platinum/Gold/Bronze               This SLA will be translated to VPN QoS such as max delay/loss, and activated on the identified devices.
3  Location(s)       List of references to location data
4  Topology          Hub-Spoke/Full Mesh/Partial Mesh   If hub-and-spoke, indicate which location is the hub. Partial mesh requires more information.
5  Internet Gateway  Yes/No                             If the customer wants to access the Internet from the VPN.
6  Delivery date     Date                               Desired delivery date of the VPN for the customer.

Table 24 - Service Order Data for Adding Location to Existing VPN

#  Name            Type                         Note
1  VPN Service ID                               Existing VPN
2  Location        References to location data
3  Delivery date   Date                         Desired delivery date of the VPN for the customer.


Table 25 - Example Location Information

#  Name                      Type                    Note
1  Location Address          Street address          Note that this is not translated by the BSS to the SDN controller.
2  Access service required?  Yes/No                  Yes if new access (e.g., GPON), no if using existing access (e.g., DOCSIS).
3  Existing access provider  Existing access ID(s)   Multiple if back-up is required.
4  Existing access type      GigE, GPON, DPOE, etc.  For encapsulation type.
5  Access redundancy         Yes/No                  If a new access service is required.
6  Access Bandwidth
7  CE ID                     Management IP address   The assumption is to use the existing access service, delivered to a router that the provider knows and so can configure.
8  CE Interface ID           Like Ether0.1
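The new-VPN order of Table 23 could be represented at the application level as a simple structured record. The field names, sample values, and validation rules below are illustrative assumptions derived from the table's notes, not a defined schema:

```python
# Hypothetical shape of a "new VPN" service order (fields from Table 23).
new_vpn_order = {
    "customer_id": "CUST-001",
    "class_of_service": "Gold",          # Platinum/Gold/Bronze
    "locations": ["LOC-HQ", "LOC-BR1", "LOC-BR2"],
    "topology": "Hub-Spoke",             # hub must be named for hub-and-spoke
    "hub": "LOC-HQ",
    "internet_gateway": True,
    "delivery_date": "2015-09-01",
}

def validate_new_vpn_order(order):
    """Hypothetical checks mirroring the table's notes (hub named, known SLA tier)."""
    if order["topology"] == "Hub-Spoke" and order.get("hub") not in order["locations"]:
        return False
    return order["class_of_service"] in ("Platinum", "Gold", "Bronze")

assert validate_new_vpn_order(new_vpn_order)
```

At the network provisioning level, a validated record like this would then be translated into the L2/L3 VPN configuration commands described earlier.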

13.1.2 Gaming Service Order Scenario

At the user level, a gaming service order could be expressed by the user as: "I just bought a new network-based game for my smartphone and I want to get very fast responses when I press the buttons." At the application level, the Intent system would know how to speak the necessary protocols to ask the network for what it needs, specifically a reservation for a low-bandwidth, very-low-latency flow. The network commands, for example over DOCSIS, would configure a service flow with the desired parameters for bandwidth, priority, and maximum latency. Other access technologies and other parts of the network would be configured accordingly.

13.1.3 Home Security Service Order Scenario

At the user level, a security service order could be expressed by the user as: "I have a home security system with video monitoring that I can access remotely. I want to be able to zoom in to get high-resolution video while I'm watching, but I don't want it to use hi-res all the time because that would exceed my monthly data plan cap." At the application level, the user interfaces with the application managing the security system to remotely view camera activity, control the zoom, and log out when he's done. The application responds by changing the amount of data it is recording/sending; it also knows the protocols needed to communicate with an Intent engine to request the bandwidth needed to support the desired zoom level. The network commands, for example over DOCSIS, would ultimately result in a Dynamic Service Change command that changes the security application's service flow to increase or decrease the bandwidth as needed. Matching service changes would also be made for other parts of the data path.

13.1.4 Triple Play Service Order Scenario

Triple play service means broadband, voice, and TV, or any combination, along with Value Added Services (VAS). Triple play service is provided on top of an access service (e.g., DPoE).
There are some dependencies; for example, voice or video cannot be ordered without broadband. At the user level, a triple-play service order could be expressed as: "I want a broadband service and a TV service, but I don't want a voice service." At the application level, the user interfaces with the application managing the triple play system and selects from a menu of offers. Offers could include different broadband speeds (including QoS and latency options), a variety of TV bundles, and voice. In addition, value-added services could be added, such as voice mail, parental controls, etc. New value-added services can be created from a bundle of services. For example:

• If a TV program is on and there is a phone call, show the caller ID on the TV screen.
• If I am watching TV and the front door is opened, show the security camera video on my TV screen.

Intent-based networking can make these services easier to invoke from a customer portal and therefore more attractive to customers.


The following tables illustrate the kinds of data that would be applied at the network provisioning level to implement various types of triple-play service order. Note that service orders invoking an SDN controller may have additional details provided by the BSS, such as the type and MAC address of the CM that was dispatched to the customer location.

Table 26 - Service Order Data for Broadband

#  Name                           Type   Note
1  Customer ID                    Table
2  Street address                        This is not translated to the SDN controller, as it is used by the BSS only. Rather, after the CPE (CM or ONT) is sent and deployed, its ID is used.
3  Upstream bandwidth
4  Downstream bandwidth
5  Number of public IP addresses         This can be a Value Added Service (VAS); e.g., residential customers who want to host content could request a fixed public IP address.
6  Data volume cap
7  Delivery date                  Date

Table 27 - Service Order Data for Voice

#  Name                       Type             Note
1  Customer ID                Table
2  Number of lines/TNs                         This is not translated to the SDN controller, as it is used by the BSS only. Rather, after the CPE (CM or ONT) is sent and deployed, its ID is used.
3  LNP                        Phone number(s)  Customer's existing phone numbers (with their existing voice provider) to port in.
4  Voice mail                                  This is just one example of features. Others can be 3-way conference call, Caller ID, etc.
5  Access service ID                           Existing high speed data service if this is VoIP.
6  Long Distance Provider ID
7  Delivery date              Date             Desired delivery date of the service for the customer.

Table 28 - Service Order Data for TV

#  Name                     Type   Note
1  Customer ID              Table
2  TV Package(s)                   This is not translated to the SDN controller, as it is used by the BSS only. Rather, after the CPE (CM or ONT) is sent and deployed, its ID is used.
3  Number of TVs/Terminals         TV Everywhere is another option.
4  Access service ID               Existing broadband service if this is IP video.
5  DVR Size                 GB
6  Delivery date            Date   Desired delivery date of the service for the customer.


13.2 Related Industry Initiatives

There are a number of industry initiatives whose goals are to create ‘Intent’ capabilities for networks. This section provides an overview of these industry activities. It is not intended to be an exhaustive list or to rank any of the activities in terms of relevance. The field is extremely dynamic, with new insights and solutions being developed constantly as SDN evolves. This review provides a brief description of a number of the higher profile industry efforts. Links are provided to access more detailed information.

13.2.1 Group Based Policy

Group Based Policy (GBP) is a full Intent System. It utilizes a declarative language and API generated from its Intent data model. This allows consumers of the Intent System to contract for network services through a simple expression of intent, without requiring detailed knowledge of the underlying network infrastructure. Using an Intent-specific data model, GBP separates the consumer’s Intent functional requirements from any implementation details of the network infrastructure. Today there are Group Based Policy projects in both OpenDaylight (ODL) and OpenStack. The architecture itself consists of two main sets of components:
• The Intent data model;
• Process: how the objects in the model relate to one another to fulfill the expressed Intent.

The provisioning of the expressed Intent into concrete configuration is automated. ‘Renderers’ perform the Intent automation function. Renderers are where the details of how to implement the policies reside. A single system might support a number of renderers, each supporting different features or methods to deliver a requested policy. The OpenDaylight project has developed two renderers, one for Open vSwitch (OVS) overlays and another for OpFlex.1

The most important elements used to represent high-level abstractions are the endpoints, endpoint groups, contracts, and policies.
• An endpoint is simply a network endpoint. Typically this relates to a “workload” or “application instance”. Most importantly, each endpoint includes a set of properties and values.
• An endpoint group is a set of endpoints defined by the endpoints sharing a common set of properties and values. An endpoint can be included in more than one endpoint group.
• A contract is associated between two endpoint groups, where one is identified as the provider of the contract and the other the consumer of the contract, together with a policy or set of policies.
• The policies, or rules, are applied to traffic that moves from any endpoint in the provider group to any endpoint in the consumer group.

In Figure 52 below, repositories of endpoints, policies, etc., are defined, as are active contracts. The active contracts describe how policies are to be applied to traffic between pairs of endpoints (directional). These are then processed in renderers to define the actual configuration updates that make the policies real.
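The relationship between endpoints, groups, contracts, and policies can be sketched as follows. This is a simplified illustration of the GBP concepts only, not the actual OpenDaylight GBP data model.

```python
# Simplified illustration of Group Based Policy concepts: endpoints are
# grouped by shared properties, and a contract binds a provider group to
# a consumer group with a set of policy rules. Not the real ODL GBP model.

def in_group(endpoint: dict, required: dict) -> bool:
    """An endpoint belongs to a group if it carries the group's property values."""
    return all(endpoint.get(k) == v for k, v in required.items())

# Endpoints carry arbitrary properties and values.
web_vm = {"role": "web", "tier": "frontend"}
db_vm = {"role": "db", "tier": "backend"}

# Endpoint groups are defined by shared properties (an endpoint may match many).
web_group = {"role": "web"}
db_group = {"role": "db"}

# A contract: a provider group, a consumer group, and the policies applied to
# traffic flowing from provider endpoints to consumer endpoints (directional).
contract = {
    "provider": db_group,
    "consumer": web_group,
    "policies": ["allow tcp/3306", "log"],
}

def policies_for(src: dict, dst: dict, c: dict) -> list:
    """A renderer would turn this decision into concrete device configuration."""
    if in_group(src, c["provider"]) and in_group(dst, c["consumer"]):
        return c["policies"]
    return []
```

The consumer of the Intent System only ever manipulates groups and contracts; everything below `policies_for` (flow rules, overlay tunnels, OpFlex directives) is the renderer's concern.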

1 http://www.cisco.com/c/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/white-paper-c11-731302.html


Figure 52 - Group Based Policy Architectural Components

When an application expresses its intent in these terms, the application need know nothing about the actual network actions taken to deliver on the intent. In fact, the request will be portable if renderers exist to support the contract on other network infrastructure. More information on GBP can be found on the OpenDaylight and OpenStack websites.

Status

Group Based Policy has been a part of OpenDaylight since the Helium release and will be enhanced with new features in the OpenDaylight Lithium release, scheduled for late June 2015. Future releases are in planning and are receptive to requirements.

13.2.2 Open Networking Foundation (ONF) Common Intent Northbound Interface (NBI) Initiative

The ONF NBI working group is seeking to unite all of the emerging intent projects with a project to define a common, infrastructure- and controller-agnostic NBI Information Model. Work is currently ongoing to define an NBI and to build reference implementations of it across several open source networking projects, creating software artifacts in the ONF repository that define Information Models for the multiple open source projects implementing the NBI. The OpenDaylight Network Intent Composition (NIC) project, the ONOS NIC project, and OpenStack have development work supporting this initiative in order to prove the infrastructure-agnostic benefits of the Intent approach. The implementation of the common NBI in the OpenDaylight NIC project has been developed in such a way that several existing intent-like projects can be “mapped” as backend implementations of the common intent NBI.


Projects such as GBP, NeMo, and others can provide diverse implementations and choices, even on a single controller platform. In addition, there are ongoing discussions in the OpenStack Congress project to ensure that the common intent NBI is complementary and capable of providing policy enforcement and reporting in support of Congress's overall cloud policy goals. Some key tenets behind the open source work include:
• The intent interface is invariant and should be common across multiple vendors, systems, and protocols. A common intent interface for a service will remain the same regardless of its implementation details. Operators will be able to run the same service in different locations on different infrastructures. Further, operators will have the freedom to update their infrastructure without adversely affecting a service or their ability to support it. The rendering of the service might be different, but how users perceive it is constant.
• Intent parameters may be mixed, creating a composition of one or more services. Separate, independently created services and their Intents can be mixed. It is necessary for the renderers below the intent to resolve the ‘many writers’ problem also being addressed in the NIC project. The OpenStack Keystone project is addressing this too.

The OpenStack Keystone project is working to address the syntax, or grammar, for expressing intent. It is also looking at the different types of intent and how these might be re-used and composed. It is targeted at enabling this common Intent across many systems. Plans include the ability for Keystone to act as a common Intent NBI that communicates with a controller-specific Intent NBI. Figure 53 shows Keystone running over ONOS and OpenDaylight.

Figure 53 - OpenStack Keystone Supporting SDN Multiple Controllers with Intent

Status

The ONF Intent work has been ongoing for some time. It has been engaged in defining NBIs for specific services, as well as being a source for some of the early definition of network service Intent concepts. The Keystone project is quite new, the most recent of the projects described in this document. The roadmap calls for a very early prototype to be ready by June 2015. A more complete roadmap and details of the project are being defined at this time.

13.3 Network Intent Composition

The Network Intent Composition (NIC) project is a recently created Intent-based project in OpenDaylight. It is one of several projects (ONOS hosts another) supporting the controller-agnostic, community-developed Intent NBI being defined by the Network Intent project in the ONF’s NBI working group.


Its stated goal is to “enable the controller to manage and direct network services and network resources based on describing the ‘Intent’ for network behaviors and network policies.” Its purpose is to develop a new, extensible northbound interface (NBI) that allows users to describe what they want from the network rather than prescribe how to deliver that service. The NBI Intent interface will be exposed to network orchestration systems, SDN applications, and network operators. It may be defined as RESTCONF and/or Java APIs. It will rely upon existing OpenDaylight southbound plug-ins to control the network devices and is protocol agnostic; it will work with various protocols such as OpenFlow, SNMP, OVSDB, etc.

By developing NIC within the OpenDaylight community, the project leverages its combined expertise. It also provides the opportunity to integrate and test the Intent NBI with existing and evolving network services that OpenDaylight supports. The NIC development process is use-case driven and incremental. The plan is to focus on use cases prioritized from among the broad community of contributors. The project is starting with some very simple use cases and will increase complexity over time. Limitations of the modeling will be identified over time and addressed to provide the needed capabilities. Project deliverables are expected to include:
• An NBI framework built on a modular, pluggable, extensible YANG model. The framework will allow users to independently define new NBIs for new services.
• YANG models to enable NBI support of a set of use cases. These will be simple use cases and models initially, evolving in complexity over time.
• A reference design to support an SDN work item in the European Telecommunications Standards Institute (ETSI) Network Functions Virtualization Industry Specification Group (NFV ISG).

The project will also tackle a significant technical problem: how to identify and resolve conflicts among multiple Intent-driven service requests. It is highly likely that intentions and policies defined in the different services will come into conflict. It will be necessary for the NIC solution to resolve these differences, creating a consistent set of network actions. As the project documentation states, “The goal is to solve the multiple-writers/multiservice SDN problem with an intent-only distribution of the controller exposing intent as the intermediate language.” More information can be found on the OpenDaylight NIC website.

Status

The OpenDaylight NIC project has been very active since its inception in January 2015. The development team includes committers from leading vendors. Some of the work is being funded by the ONF. The capability is expected to have its initial release included in the Lithium release of OpenDaylight, scheduled for late June 2015. During its initial planning stage, a large number of use cases were proposed, including, but not limited to, Bandwidth on Demand, QoS control, Service Chaining, DNS Monitoring, and Virtual Tenant Networks. It is expected that the initial release will include support for an Intent NBI for Virtual Tenant Networks and control of Group Based Policy.

13.3.1 NeMo

NeMo provides a simple, transaction-based Intent NBI, enabling applications to create, modify, and take down virtual networks built on virtual nodes with policy-controlled flows.
The NeMo Intent NBI allows an application to communicate with a controller, providing ten commands:
• Four network commands: Node, Link, Flow, Policy
• Six controller communication commands: connect, disconnect, transact, commit, notification, query

An application exchanges NeMo commands over REST with a controller running a NeMo language processing engine, instructing the controller to set up a virtual network of nodes and links with flow policy to control the data flows across the network links. NeMo uses an application’s view of the compute, storage, and network to allow an application to set any grouping of compute, storage, or network as a virtual ‘node’. This allows the application to decide what constitutes a compute node and what constitutes a ‘link’ and a ‘flow’. From the application’s viewpoint, it intends to connect two or more nodes in a network. It does not matter to the application whether the node is a single Virtual Machine (VM) or a cluster of interconnected compute and storage devices with many network connections. NeMo’s NBI API hides this complexity, keeping the application’s commands prescriptive and simple.

Technically, NeMo is a declarative, domain-specific policy language. NeMo’s language engine in the controller is associated with a model that allows a group of applications to have a set of pre-loaded definitions (model semantics) for nodes, flows, or policy. For example, a company node could be defined along with the necessary flows for accounting traffic or big-data transfers. NeMo is progressing as active projects in OpenDaylight, OPNFV, and the IETF.

The goals of the OpenDaylight NeMo project are:
• Design and develop consistent NBI models and patterns for intent networks.
• Design the syntax for a language-style NBI.
• Design and develop a NeMo language engine for language parsing and model mapping to southbound models. It is possible to reuse the ongoing NIC project in OpenDaylight for the intent manager and model mapping component.

The goals of the OPNFV project2 are:
• Provide a more abstract NBI alternative by extending the general cloud platform to simplify the orchestrator and VNF manager.
• Compose various scenarios with the same set of abstractions.
• Use the MDA approach for NBI consistency and interface automation.

The goals of the IETF activity are:
• Provide a clear definition of Intent that can be operationalized in networks.
• Define use cases for intent networking.
• Provide a gap analysis for other work in the IETF.
• Create intent data models and information network models.
• Standardize a protocol language for NeMo.
• Standardize data models.

Figure 54 shows how these activities relate.
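The transactional, ten-command style of the NeMo NBI can be sketched as follows. The session class, payload shapes, and controller URL here are invented for illustration; they do not reproduce the actual NeMo language engine or its REST API.

```python
# Rough sketch of a NeMo-style transaction: connect, declare nodes/links with
# policy inside a transaction, then commit. The command verbs follow the ten
# commands listed above; the payload structure and URL are hypothetical.
import json

class NemoSession:
    def __init__(self, controller_url: str):
        self.url = controller_url        # hypothetical NeMo engine endpoint
        self.pending = []                # commands queued inside a transaction
        self.committed = []

    def transact(self):
        """Open a new transaction; commands accumulate until commit."""
        self.pending = []

    def command(self, verb: str, **params):
        # Network commands: node, link, flow, policy
        self.pending.append({"verb": verb, **params})

    def commit(self) -> str:
        """In a real deployment this body would be POSTed to the engine."""
        body = json.dumps(self.pending)
        self.committed.extend(self.pending)
        self.pending = []
        return body

session = NemoSession("http://controller.example/nemo")
session.transact()
session.command("node", name="hq", type="l2-group")      # app-defined 'node'
session.command("node", name="branch", type="l2-group")
session.command("link", a="hq", b="branch")
session.command("policy", apply_to="hq->branch", bandwidth="50Mbps")
payload = session.commit()
```

The application never says which switches or tunnels realize the "hq" node; as described above, a node may be a single VM or a whole cluster, and the engine resolves that behind the NBI.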

2 https://wiki.opnfv.org/movie discusses the goals of the NeMo project.


Figure 54 - NeMo Relationship to Open Source and IETF

Status

In 2014, the NeMo project provided early proof-of-concept code demos for an Intent-based interface that uses a domain-specific language. NeMo is moving this work into open source projects and the IETF. In 2015, public demonstrations were shown at conferences and at the IETF.

13.3.2 OpenStack Congress

Congress aims to provide an extensible open-source framework for governance and regulatory compliance across any cloud services (e.g., application, network, compute, and storage) within a dynamic infrastructure. It is a cloud service whose sole responsibility is policy enforcement. Congress aims to allow cloud administrators and tenants to use a high-level, general purpose, declarative language to describe business logic. The policy language does not include a fixed collection of policy types or built-in enforcement mechanisms; rather, a policy simply defines which states of the cloud are in compliance and which are not, where the state of the cloud is the collection of data provided by the cloud services available to Congress. Some examples are:
• Application A is only allowed to communicate with Application B.
• A virtual machine owned by Tenant A should always have a public network connection if Tenant A is part of Group B.
• Virtual Machine A should never be provisioned in a different geographic region than Storage B.

Congress offers a pluggable architecture that connects to any collection of cloud services and can enforce policy:
• Proactively: preventing violations before they occur.
• Reactively: correcting violations after they occur.
• Interactively: giving administrators insight into policy and its violations, e.g., identifying violations, explaining their causes, computing potential remediation, and simulating a sequence of changes.

The policy language for Congress is Datalog, which is based on SQL but with a syntax that is closer to traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps engineers, while its more terse syntax makes it better suited for expressing real-world policies. More information can be found on the OpenStack Congress website.
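The compliance-oriented style can be illustrated with a toy check in Python (Congress itself uses Datalog): the policy only classifies cloud states as compliant or not, and says nothing about enforcement. The data tables and rule below are invented for illustration.

```python
# Toy illustration of Congress-style policy: the policy only defines which
# cloud states are violations; enforcement (proactive/reactive) is separate.
# The rows below mimic data that cloud services would feed to Congress.

vms = [
    {"id": "vm1", "tenant": "A", "public_net": True},
    {"id": "vm2", "tenant": "A", "public_net": False},
    {"id": "vm3", "tenant": "B", "public_net": False},
]
group_b_members = {"A"}   # tenants belonging to Group B

def violations(state):
    """Policy from the second bullet above: a VM owned by a Group B tenant
    must have a public network connection. Returns out-of-compliance rows."""
    return [vm for vm in state
            if vm["tenant"] in group_b_members and not vm["public_net"]]
```

A proactive enforcer would reject the change that created `vm2`; a reactive one would attach a public network after the fact; an interactive one would simply report the row, explaining which rule it violates.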


Status

Congress began in late 2013 and is actively progressing code development, with reference to policy enforcement capabilities in OpenStack such as Keystone, but it is also applicable to other policy-enforcement-enabled entities such as OpenDaylight.

13.3.3 IETF SUPA

IETF SUPA (Simplified Use of Policy Abstractions) is a Birds of a Feather (BoF) proposing an IETF Working Group to develop a set of information models for defining standardized policy rules at different levels of abstraction, and to show how to map these (technology-independent) forms into YANG data models. The working group introduces the concepts of multi-level (multiple levels of abstraction) and multi-technology (e.g., IP, VPN, MPLS) network abstractions to address the current separation between development and deployment operations. Multiple levels of abstraction enable common concepts present in different technologies and implementations to be represented in a common manner. This facilitates using diverse components and technologies to implement a network service. Three information models are envisioned:
• A generic information model that defines concepts needed by policy management, independent of the form and content of the policy.
• A more specific information model that refines the generic information model to specify how to build policy rules of the event-condition-action paradigm.
• A more specific information model that refines the generic information model to specify how to build policy rules that declaratively specify what goals to achieve (but not how to achieve those goals).

This set of generic policy information models will be mapped to a set of concrete YANG data models. These data models will provide a set of core YANG modules that define how to manage and communicate policies, expressed using the event-condition-action paradigm or the declarative goal-oriented paradigm, between systems. The proposed working group will focus in the first phase of its work on completing the set of information models required to construct an extensible, policy-based framework. These information models will lead to a set of core YANG data models for a policy-based management framework to monitor and control network services. The working group will reference the Distributed Data Center (DDC) use case, which includes the dynamic policy-driven provisioning and operation of inter-datacenter VPNs of various types, as a means to validate that the generic policy framework is implementable and usable.

Status

The main contributors to SUPA come from industry and academia. Today it holds Birds of a Feather status in the IETF. This means it is in a pre-Working Group state, and its timeline is still being decided upon. In July 2015, a decision will be made as to whether this will become a working group.
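The event-condition-action paradigm referenced above can be shown with a minimal sketch. The rule structure below is generic ECA, not SUPA's actual (still-forming) information or YANG models, and the example event and action are hypothetical.

```python
# Minimal event-condition-action (ECA) policy rule, the first of the two
# paradigms SUPA targets. Structure is generic; SUPA's YANG models differ.

def make_rule(event, condition, action):
    """A rule fires its action when the named event arrives and the
    condition holds for that event's data; otherwise it does nothing."""
    def evaluate(incoming_event, data):
        if incoming_event == event and condition(data):
            return action(data)
        return None
    return evaluate

# Hypothetical example: when link utilization is reported above 90%,
# reroute the affected VPN traffic.
reroute_rule = make_rule(
    event="link-utilization-report",
    condition=lambda d: d["utilization"] > 0.9,
    action=lambda d: f"reroute vpn traffic away from {d['link']}",
)
```

The declarative goal-oriented paradigm, by contrast, would omit the event and action entirely and state only the desired end state (e.g., "no link above 90% utilization"), leaving the system to decide when and how to act.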

13.4 Conclusions and Recommendations

Intent-based networking is gaining prominence as an approach to streamline network operations and to provide a richer services experience by simplifying the way users can select complex bundles of network capabilities to meet their changing needs. The enablers for Intent-based networking are beginning to emerge (e.g., SDN and NFV), and this is providing a stimulus to innovation and implementation across a broad spectrum of the industry. Work is underway in the major open source projects with varying degrees of maturity. Industry efforts appear to be fragmented, but there is a diversity of players, which bodes well for competitive solutions to emerge. However, it is not yet clear which approaches are likely to be successful; hence a ranking or a decision on a preferred approach is not deemed appropriate at this stage of industry development. It is clear, however, that the Open Networking Foundation effort to unify the approaches, along with OpenDaylight and OpenStack, are the key forums to watch. Given the potential benefits for MSO operations and services innovation, and the synergies with SDN and NFV, it is recommended that Intent-based networking be progressed as a future project within the CableLabs Virtualization and Network Evolution (VNE) program.


The following activities are recommended to be progressed in this next stage:
• Identify the primary MSO use cases to be targeted, including clear articulation of the problems that can be addressed by Intent-based networking.
• Map the MSO use cases to the Intent-based networking approaches being progressed in the industry to identify gaps and how to address them.
• Help MSOs develop strategies to accommodate Intent-based networking capabilities within their networks and identify what resources will be needed to bridge the gaps.
• Develop a strategy for industry engagement to encourage the ecosystem to address MSO needs.
• If a business analysis justifies it, define and implement a proof of concept based on a key MSO use case to focus effort on identifying the gaps that must be closed to implement Intent-based networking in the MSO environment, and to help build CableLabs and MSO expertise to support future specification work and deployment.


14 CONTRIBUTION TO OPEN SOURCE CONTROLLERS

The industry needs to focus outward to the open source community, as that is where much of the innovation and thought leadership regarding SDN architecture is happening. One example is the OpenDaylight (ODL) open source controller, an open platform for network programmability that enables SDN in networks of any scale. ODL software is a combination of components, including a fully pluggable controller, interfaces, protocol plug-ins, and applications. In the Helium release of OpenDaylight (October 2014), the ODL platform added a PacketCable MultiMedia (PCMM) plug-in, which enables the ODL controller to communicate with an existing CMTS platform using the COPS interface and the PCMM data protocol. CableLabs led the software effort to contribute this southbound protocol plug-in, along with the needed data models, to the ODL code base. In the Lithium release (June 2015), the PCMM API was updated with additional interfaces and functionality. Similar efforts will be needed to enable support and management of cable technologies by other open source controllers and platforms.

Figure 55 - OpenDaylight Architecture
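To give a feel for what such a southbound plug-in carries, the sketch below assembles the main ingredients of a PCMM Gate-Set request as plain data: the subscriber the gate applies to, a classifier matching the flow, and the traffic profile to apply. The field names are simplified for illustration and do not reproduce the PCMM wire format or the ODL plug-in's actual API.

```python
# Simplified sketch of the data a PCMM Gate-Set carries: who the gate is for
# (subscriber), which traffic it matches (classifier), and the QoS to apply
# (traffic profile). Field names are illustrative, not the PCMM wire format.

def build_gate_set(subscriber_ip, classifier, service_class):
    return {
        "type": "gate-set",
        "subscriber": {"ipv4": subscriber_ip},       # identifies the CM's subscriber
        "classifier": classifier,                    # match rule for the flow
        "traffic-profile": {
            "service-class-name": service_class,     # refers to a class configured on the CMTS
        },
    }

# Example: guarantee upstream QoS for a subscriber's SIP signaling flow.
gate = build_gate_set(
    subscriber_ip="10.1.1.10",
    classifier={"protocol": "udp", "dst-port": 5060, "direction": "upstream"},
    service_class="voice-us",
)
```

In the ODL plug-in, an application would express a request like this through the controller's northbound interface, and the PCMM plug-in would translate it into COPS messages toward the CMTS.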


15 CONCLUSION

The SDN architecture described in this technical report forms the foundation for how some MSOs may choose to deploy and manage their networks in the future. Once an SDN controller is in the midst of the network, talking to the various devices and network components, operators can start focusing on developing better applications. As newer access technologies are added, they will be integrated into this framework, and the existing services will apply to them seamlessly. Customers will benefit from the same service being available on their home network, or even while roaming on the road, as the SDN controller will be able to set up the same services across different networks that adopt the SDN architecture. New concepts, such as service function chaining, are taking root within the industry. Different network functions and applications are being virtualized and run on commercial off-the-shelf hardware in the cloud. The SDN controller and orchestrator sit in the middle of that architecture, helping direct the traffic from the customers and/or access network to the appropriate VMs and services in the cloud. The SDN paradigm will bring simplicity to MSO network operations by abstracting out the complexity of the individual network devices and their configuration.


Appendix I Examples of CCAP Abstractions

The CMA framework can emulate one or more integrated CCAPs using the underlying Distributed CCAP physical infrastructure. In Figure 56, the CMA framework presents the abstraction of two CCAPs to the OSS/BSS. The CMA presents all the interfaces of the first remote node as part of the first CCAP (CCAP1), and the interfaces of the second and third remote nodes as part of the second CCAP (CCAP2).


Figure 56 - CCAP Abstractions

The remote nodes can be connected to an OLT using EPON/GPON, as shown in the figure above, or they can be connected to an L2/L3 switch using P2P links.


Appendix II Acknowledgments

On behalf of our industry, we would like to thank the following individuals for their contributions to the development of this technical report.

Contributor (Company Affiliation):
Jeff Dement (Arris), Dan Torbet (Arris), Niki Pantelias (Broadcom), Miguel Alvarez (CableLabs), Don Clarke (CableLabs), Thomas Kee (CableLabs), James Kim (CableLabs), Karthik Sundaresan (CableLabs), Nikhil Tayal (CableLabs), Jun Tian (CableLabs), Alon Bernstein (Cisco), Keith Burns (Cisco), Paul Quinn (Cisco), Anlu Yan (Cisco), Manu Kaycee (Ciena), Joe Solomon (Comcast), Mazen Khaddam (Cox), Eric Bell (Ericsson), Mike Hamilton (Ericsson), Marc Rapoport (Ericsson), Samir Parikh (Gainspeed), Brett Kugler (GDT), Dave Lenrow (HP), Hesham ElBakoury (Huawei), Susan Hares (Huawei), Anh Le (Netcracker), Andrew Veitch (Netcracker), Praveen Kumar (Speedifi), Colin Howlett (Vecima), Doug Johnson (Vecima)

We would also like to thank the following individuals for their participation in the Open Networking working group and their contributions to the development of this SDN Architecture for Cable Access Networks.

Participant (Company Affiliation):
Nick Cadwgan (Alcatel-Lucent), Marty Glapa (Alcatel-Lucent), Stephen Peyton Maynard-Koran (Alcatel-Lucent), Kristian Poscic (Alcatel-Lucent), Jeff Dement (Arris), Dan Torbet (Arris), Ed Mallette (BHN), Niki Pantelias (Broadcom), Paul Runcy (Broadcom), Miguel Alvarez (CableLabs), Don Clarke (CableLabs), Chris Donley (CableLabs), Thomas Kee (CableLabs), James Kim (CableLabs), Karthik Sundaresan (CableLabs), Nikhil Tayal (CableLabs), Jun Tian (CableLabs), Eduardo Panciera (CableVision Argentina), Weidong Chen (Casa), Mark Szczesniak (Casa), Alon Bernstein (Cisco), Keith Burns (Cisco), Charles Duffy (Cisco), Dan Hegglin (Cisco), Ony Anglade (Cox), Brent Bischoff (Cox), Bill Coward (Cox), Jeff Finklestein (Cox), Maz Khaddam (Cox), Paul Bateman (Cyan), Eric Bell (Ericsson), Mike Hamilton (Ericsson), Marc Rapoport (Ericsson), Samir Parikh (Gainspeed), Brett Kugler (GDT), Nitin Kumar (Harmonic), Asaf Matatyaou (Harmonic), Dave Lenrow (HP), Hesham ElBakoury (Huawei), Susan Hares (Huawei), Andy Smith (Juniper), James Connolly (Liberty Global), Phil Oakley (Liberty Global), Slavka Trifonova (Liberty Global), Khalid Adnan, Anh Tuan Le (Netcracker), Klius Maxim (Netcracker), Andrew Veitch (Netcracker)


Paul Quinn (Cisco), Pawel Sowinski (Cisco), Anlu Yan (Cisco), Philippe Perron (Cogeco), John Bevilacqua (Comcast), Brian Field (Comcast), Maurice Garcia (Comcast), Yiu Lee (Comcast), Nagesh Nandiraju (Comcast), Samer Patel (Comcast), Saifur Rahman (Comcast), Jorge Salinger (Comcast), Joe Solomon (Comcast), Mehmet Toy (Comcast), Nasir Ansari (Rogers), George Hart (Rogers), Richard Lawson (Rogers), Ida Leung (Rogers), Derek DiGiacomo (SCTE), Jonathan Kirkness (Shaw), Victor Zuo (Shaw), Praveen Kumar (Speedifi), Wes George, Lance Hassan (Time Warner Cable), Kevin Noll (Time Warner Cable), Colin Howlett (Vecima), Doug Johnson (Vecima)

Additionally, CableLabs would like to thank the SDN MSO Technical team for their continued support in driving the vision, the technology analysis and development.

Karthik Sundaresan, CableLabs
