
WHITEPAPER

VIRTUALIZING YOUR LIVE-TV HEADEND: MULTICAST AND ZERO PACKET LOSS ON OPENSTACK

Werner Gold, Red Hat, Munich, Germany, [email protected]
Marco Lötscher, Hewlett Packard Enterprise, Dübendorf, Switzerland, [email protected]

Swisscom TV migrated its first live-TV channel processing to an IP-based virtualized infrastructure on OpenStack. Linear TV processing is one of the most challenging workloads to run in a cloud, due to carrier-grade network and service availability requirements combined with multicast traffic and complex brownfield environments. In this paper we explain why and how Swisscom, the market-leading TV provider in Switzerland, uses OpenStack to increase business agility and drastically reduce cost by virtualizing and orchestrating the management and production of linear TV channels. We go into detail on the technical challenges faced during the project phase and how they were solved:

• How are NFV principles and reference architectures applicable to media workloads?
• How can Swisscom save cost in the production of TV channels in a virtual headend?
• Why did multicast traffic not work on Open vSwitch when the project started, and how was this problem solved?
• What are the differences to containerized implementations, and what is needed to make multicast work there?

MOTIVATION TO VIRTUALIZE

The media industry is facing a general trend away from discrete, proprietary appliances towards Ethernet and IP, and further towards standard hardware and virtual workloads. A traditional headend infrastructure comprises all functions needed to transform an uncompressed raw video stream into a distribution format (e.g. DVB-T, DVB-S, IPTV). The main task is to generate the various codecs and qualities (encoding and transcoding), from UHD screens down to mobile consumption, and to multiplex the channels to fill the bandwidth of the transponder channels. Other tasks include encryption/DRM and probes for quality control.
All these functions reside in appliances today, connected on the ingress side by a proprietary but highly effective network (SDI). Channel availability demands are met by doubling the devices in the local compute center and doubling the compute centers themselves. Because the devices are specialized, the introduction of new formats and codecs often requires costly hardware replacements. New channel deployments also involve new hardware purchase processes.

Therefore, broadcasters and service providers seek to benefit from the trend towards virtualization and standardization in two ways. First and foremost, they want to reduce cost by moving away from proprietary, hardware-based infrastructures. Secondly, they want to leverage the agility and optimization of their service deployment. A virtualized headend brings the following benefits:

• Automated channel deployment
• Non-proprietary infrastructure
• Quick and automated extensions
• Simplicity: one interface for all
• Mix and match media functions (i.e. encoders, muxers, scramblers, probes) from various vendors

IMPLEMENTATION AT SWISSCOM

Swisscom started to provide IPTV services in 2006 and became the market leader in TV services in Switzerland. Today Swisscom is one of the first nationwide TV companies to run virtualized live linear IPTV services in production. In Swisscom's legacy headend, almost all systems are based on proprietary hardware and multicast delivery. Several transcoder vendors are available, but each comes with its own control panel and complexity. Very little or no automation is present, and provisioning is costly and slow. To achieve low delays comparable to those in satellite deployments, the video streams are delivered via IP multicast within Swisscom's own network.
Swisscom started the virtualization journey with the "heavy lifting" functions such as live transcoding, which created the biggest benefits in business process acceleration and reduced the demand for new hardware purchases. This way, economic benefits could be achieved very early in the project (Figure 1).

Figure 1: Functional view of the Swisscom headend. The diagram shows acquisition (receivers ingesting IP from satellite and studio, DCM), multicast delivery for live IPTV (transcoder, multiplexer, egress probe, set-top box/TV and 1st-screen clients) and unicast delivery for live and on-demand OTT (OTT transcoder, unicast server, OTT clients such as PC, tablet and smartphone), together with the three virtualization steps. Abbreviations: CBR Constant Bit Rate, ABR Adaptive Bit Rate, TS Transport Stream, ABS Adaptive Bit Streaming, DCM Digital Content Manager.

APPLYING NFV PRINCIPLES

In 2012 the European Telecommunications Standards Institute (ETSI) formed an Industry Specification Group (ISG) to develop standards for virtualizing classical telco functions and applications. Since then, the ETSI model has become a blueprint for managed virtual infrastructures as well as for the orchestration and management of virtual functions. Due to the carrier-grade and real-time requirements, it made sense to apply the same model to the virtualization and management infrastructure of media functions within the virtual headend. That step will make it possible to break through the traditional barriers between headend and delivery networks at a later stage.
It enables carriers to move headend functions closer towards the network edge, which in turn reduces traffic and increases quality as well as customer satisfaction.

TECHNICAL REQUIREMENTS

The live experience of TV streaming is constrained by the delay before the stream reaches the consumer. Some lags are caused by the digital processing in production; they are the same for all digital distribution channels and are not specific to IPTV. The headend adds delays throughout processing as well (transcoding, multiplexing, packaging, DRM). These typically sum up to a few hundred milliseconds and need to be considered when moving from SDI and appliances towards IP and virtual signal processing.

Another source of delay is the jitter buffer on the client side. Internet packets can take different routes, and the order in which they are received may differ from the order in which they were sent. The jitter buffer gives the client the chance to collect and reorder packets before the data is decoded; otherwise picture quality suffers from missing data. This buffer typically holds between about 100 ms and a few seconds of data.

A pure OTT service distributes data in point-to-point streams (unicast). Every viewer adds to the required total bandwidth, so distribution from origin servers alone does not scale. One solution to the problem is segment caching: the headend delivers the data in HLS and MPEG-DASH formats, packed into segments of 2-10 seconds, and these segments can be cached and redistributed out of CDN caches. Each cache stage adds scalability to the playout infrastructure but also adds a delay of at least one segment. This typically results in total delays of 30 seconds and more for TCP-based unicast content delivery.
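The reordering role of the jitter buffer can be sketched as a small priority queue keyed on packet sequence numbers. This is a simplified illustration, not a production player: real clients additionally handle sequence-number wrap-around, loss timeouts and clock recovery.

```python
import heapq

class JitterBuffer:
    """Minimal jitter buffer: holds incoming packets until `depth`
    packets are queued, then releases them in sequence-number order."""

    def __init__(self, depth: int = 4):
        self.depth = depth
        self.heap = []  # min-heap of (sequence_number, payload)

    def push(self, seq: int, payload: bytes) -> None:
        heapq.heappush(self.heap, (seq, payload))

    def pop_ready(self):
        """Release packets once the buffer is deep enough to reorder."""
        out = []
        while len(self.heap) > self.depth:
            out.append(heapq.heappop(self.heap))
        return out

buf = JitterBuffer(depth=2)
for seq in [1, 3, 2, 5, 4]:   # packets arrive out of order
    buf.push(seq, b"")
released = [s for s, _ in buf.pop_ready()]  # released in order: [1, 2, 3]
```

The `depth` parameter corresponds to the 100 ms-to-seconds buffering window mentioned above: a deeper buffer tolerates more reordering at the cost of added delay.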
Figure 2: Mapping of the ETSI NFV model to the headend. OSS/BSS and service orchestration sit on top; the NFV MANO stack (the NFV Orchestrator with NS and VNF catalogs plus service, VNF and infrastructure descriptions, the VNF Managers including the vHE Manager, and the Virtualized Infrastructure Manager, here OpenStack) manages the VNFs (transcoder, muxer, probe, each with its EMS) running on the virtualization layer (KVM, OVS, Ceph) over standard compute, storage and network hardware (NFVI).

This is where IP multicast comes into play. Multicast is a one-to-many protocol with which the receiver subscribes to a data stream. Using multicast over UDP for content distribution is a much better option for live TV: it scales very efficiently, because packets need to be distributed only once within a given network section, and no caches are needed for scaling. But it has two constraints. First, the broadcaster needs full control over the distribution network to benefit from multicast support. Second, there is no way to recover lost packets, so each lost packet degrades picture quality. Therefore, it is absolutely critical to deliver lossless streams with minimal jitter.

This architecture requires a tight integration of the headend and the distribution network, and it puts limits on public-cloud-hosted headends for large-scale live broadcast: first and foremost because public clouds do not support multicast at all, but especially because there is no end-to-end multicast from the headend to the set-top box with comprehensive quality control.

RED HAT OPENSTACK PLATFORM REQUIREMENTS

OpenStack has its own standalone networking service (networking as a service). Its main process is the Neutron server, which exposes the networking API and uses a set of plugins for additional processing.
The most important plugin for this use case is Open vSwitch (OVS), which provides a virtual network switch with VLAN capability.
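Subscribing to a multicast stream, as the receivers in the delivery chain above do, comes down to an IGMP join issued through a socket option. The following Python sketch shows the mechanics on Linux; the group address and port are illustrative, not Swisscom's.

```python
import socket
import struct

def make_mreq(group: str, iface: str = "0.0.0.0") -> bytes:
    """Pack an ip_mreq struct: multicast group address followed by the
    local interface address (0.0.0.0 lets the kernel choose)."""
    return struct.pack("4s4s", socket.inet_aton(group),
                       socket.inet_aton(iface))

def open_multicast_receiver(group: str, port: int) -> socket.socket:
    """Create a UDP socket and join `group`; the IP_ADD_MEMBERSHIP
    setsockopt emits the IGMP membership report that subscribes
    this host to the stream."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM,
                         socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", port))  # accept datagrams addressed to the group
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP,
                    make_mreq(group))
    return sock
```

A receiver would then loop on `sock.recvfrom()` to pull the MPEG transport-stream datagrams; dropping membership (IP_DROP_MEMBERSHIP) triggers an IGMP leave so the network stops forwarding the stream to that host. It is exactly these IGMP joins and leaves that the virtual switch must handle correctly, which is where the Open vSwitch issues discussed in this paper arise.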