Elemental® Live


ELEMENTAL® LIVE API AND USER GUIDE
2.20.3.0 RELEASE
DECEMBER 23, 2020

Copyright 2020 AWS Elemental. All rights reserved.

TABLE OF CONTENTS

• Overview
• Web Interface
• REST Interface
• Live Event Parameters
• SNMP Interface
• Authentication
• Reference
• Supported Codecs
• Supported Captions

OVERVIEW

• Purpose
• Product Overview
• Live Events
• Live Event Profiles
• Presets
• Schedules
• MPTS Multiplexers
• Notifications
• Statistics
• Advanced Pre and Post Processing
• Custom Scripts
• Troubleshooting

PURPOSE

This document is intended for system integrators and users of Elemental® Live. It describes the interfaces for machine and human control, configuration, and monitoring. Each API is defined in enough detail to explain how to use the system and how it can be integrated into larger workflow automation systems.

PRODUCT OVERVIEW

Elemental Live is a media encoder that accepts input through an HD-SDI interface, IP over Ethernet, or a local file, and produces multiple video output streams using a variety of live streaming protocols. Elemental Live can be controlled, configured, and monitored through the following interfaces:

• Web browser via HTML
• Web services REST interface
• SNMP interface

Using a web browser is the easiest way to control, configure, and monitor Elemental Live. This interface is appropriate when a human is interacting with the server, or when no automation or integration with other systems is required. Elemental recommends Mozilla Firefox as the client web browser.

The REST-based interface supports all features of the web interface as well as automation features. More general information on REST-based interfaces is available online.

The SNMP interface allows basic monitoring and control of the Elemental Live system. It allows a management system to query the state of the service and of Live Events, as well as to start and stop Live Events.

Finally, secure shell access allows the user to modify the system's configuration files, directory structure, and built-in tests. The secure shell interface is provided for users who need to modify the base behavior of the Elemental Live system, or for diagnostics.
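For a concrete illustration of the REST interface, the sketch below lists the Live Events on a node using only Ruby's standard library (Ruby is also the language of the example scripts bundled with the product). It assumes the /live_events resource and the XML responses described in the REST Interface chapter of this guide; the host name is a placeholder, and the request assumes authentication is disabled on the node (see the Authentication chapter for the headers required when it is enabled).

    #!/usr/bin/env ruby
    # Sketch: list the Live Events on an Elemental Live node over REST.
    # Assumes authentication is disabled; with authentication enabled, the
    # headers described in the Authentication chapter must be added.
    require 'net/http'
    require 'uri'

    host = ENV.fetch('ELEMENTAL_HOST', 'elemental-live.example.com')  # placeholder
    uri  = URI("http://#{host}/live_events")

    request = Net::HTTP::Get.new(uri)
    request['Accept'] = 'application/xml'   # the REST interface exchanges XML

    response = Net::HTTP.start(uri.host, uri.port) { |http| http.request(request) }

    if response.is_a?(Net::HTTPSuccess)
      puts response.body                    # raw XML describing each Live Event
    else
      warn "Request failed: #{response.code} #{response.message}"
    end

The same resource also accepts POST requests for creating Live Events, which is how workflow automation systems typically drive the node through this interface.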
LIVE EVENTS

A Live Event is an encoding session started and stopped by the operator of the system. Live Events are created by selecting an input from the available HD-SDI capture devices or by specifying a network stream location. The operator can also select a file input, which is decoded in real time to simulate a live input. This feature is useful for testing encoding parameters and output destinations when a live source is not available.

When using the web interface, the user can click the "Preview" button once the input parameters have been filled in. Elemental Live will attempt to acquire the input source and display a scaled frame and an audio level so the user can verify that the source is correctly connected to Elemental Live.

The operator then defines the output stream encoding parameters that the system should produce. See Supported Codecs for information about which codecs Elemental Live supports. The user can select which GPU the stream should utilize, or allow the system to select a GPU automatically.

Output streams can be connected to multiple output groups, which define endpoints for streaming applications. Elemental Live currently supports Adobe RTMP output for Adobe Flash Media Server, push encoding for Microsoft Smooth Streaming, Apple iPhone live streaming, Dynamic Adaptive Streaming over HTTP (DASH), UDP output, reliable transport stream output, and archiving to the local hard drive. See Supported Codecs for a list of all supported container formats.

LIVE EVENT PROFILES

A Live Event Profile is a saved Live Event definition that includes all output settings for a Live Event and, optionally, the input settings. Live Events can be submitted with a Live Event Profile ID and input parameters to re-use previously entered settings. Note that if a Live Event Profile is edited, those changes apply only to Live Events created after the change; Live Events already in progress or in the PENDING state retain the settings with which they were submitted.

Some example Live Event Profiles are supplied by default with each release of the Elemental Live software. These examples should be copied before being used in an actual workflow, as they may change from release to release.

PRESETS

A preset is a predefined group of settings for a single output stream. A preset allows the user to create output streams targeted at a particular device or standard output format. For example, the H.264 HD Medium preset produces a stream with H.264 video at 1280 x 720 resolution and 2.4 Mbit/s, and AAC audio at 128 kbit/s. Elemental maintains a list of common presets that are delivered to the Elemental Live system via software updates. Additionally, the user can define named presets using any of the interfaces to the Elemental Live system.

SCHEDULES

Schedules can be created to run certain Live Event Profiles at fixed times or at a set of repeating times. For example, a schedule can be created to run every weekday from 1:00 PM to 2:00 PM to stream a program. Each scheduled program is converted to a Live Event.

MPTS MULTIPLEXERS

The Multi-Program Transport Stream multiplexer (MPTS mux) combines audio, video, and data from multiple Live Events into a single MPEG-2 transport stream, output over UDP. MPTS muxes are managed from the MPTS Control page, accessed from the Event Control drop-down.

The MPTS mux takes its inputs from one or more Live Events. To be eligible for MPTS muxing, a Live Event must be configured with a single UDP/TS output group that has an 'MPTS Membership'; a setting of 'Local' allows the event to participate in an MPTS mux running on the Live node itself. The output must also be attached to a stream that uses either CBR or Statmux as its Rate Control Mode. If these criteria are met, the Live Event will be available for inclusion. Live Events can be added to and removed from an MPTS at any time, but must be removed from one MPTS before joining another.

When Live Events in an MPTS mux are configured with Statmux as their rate control mode, statistical multiplexing is used to allocate the total available MPTS bitrate among the video stream bitrates based on complexity. This is typically used for streams that are part of a fixed-capacity transport mechanism. Bits are transferred dynamically from simple content to complex content, maximizing the overall visual quality of the output.

NOTIFICATIONS

Users can set up a Live Event so that a notification is sent when the Live Event is started or stopped, or when it raises an alert or error. The user can be notified in the following ways:

• Email
• Web service callbacks - an HTTP POST is performed to a URL that you provide, with information about the Live Event (a minimal receiver is sketched below)
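Because these callbacks arrive as plain HTTP POSTs, any web server can receive them. The sketch below is a minimal Ruby receiver built on WEBrick (bundled with Ruby before 3.0, available as a gem since); the port and path are illustrative placeholders, and the payload is simply logged, since this overview does not specify its format.

    #!/usr/bin/env ruby
    # Sketch of a web service callback receiver. Elemental Live performs an
    # HTTP POST to a user-supplied URL when a Live Event starts, stops, or
    # raises an alert or error; this server just logs each callback body.
    require 'webrick'

    server = WEBrick::HTTPServer.new(Port: 8080)     # illustrative port

    server.mount_proc('/elemental_callback') do |request, response|
      puts "Callback received at #{Time.now}"
      puts request.body                              # details about the Live Event
      response.status = 200
    end

    trap('INT') { server.shutdown }                  # stop cleanly on Ctrl-C
    server.start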
The user may also request details about a Live Event's status at any time. These details are described later in this document.

STATISTICS

Elemental Live continuously logs statistics about media type, quality, speed, temperature (CPU and GPU), fan speed, and resource utilization (CPU, GPU, network, disk, and memory). Historical statistics are available in the web interface, on the Stats page.

ADVANCED PRE AND POST PROCESSING

Most workflows include custom commands that must be executed before or after a Live Event is run. Examples of these operations include:

• Running custom validation on input or output files before or after a conversion
• Running custom notifications before or after the Live Event is run

Some of these commands are supported natively through the Elemental Live user interface; the rest can be run through custom scripts that the user provides.

CUSTOM SCRIPTS

For each Live Event created, the user can specify a pre script and/or a post script to run. The user specifies a location for the script as part of the Live Event UI or REST API; this location must be accessible by the server. It is recommended to put these scripts in the /opt/elemental_se/web/public/script directory, since the Browse button for scripts is set up to search this directory. /opt/elemental_se/web/public/script/example_script.rb is an example script that parses the input parameters using Ruby and prints them to the live_runner.output log file.

The pre-processing script is called from the elemental_se service just before the Live Event runs and must have execute permission for the elemental user. The Live Event's state is changed to PREPROCESSING while the pre script is running, and to POSTPROCESSING while the post script is running. The Live Event can still be cancelled while it is in one of these states. The reported start and end times for a Live Event include the running time of these scripts; the elapsed time, however, measures only the time spent processing video.

The script is passed a JSON-formatted hash with the following overall structure (a skeleton script that walks it appears after the list):

• id - ID of the Live Event
• script_type - PRE for preprocessing, POST for post processing
• inputs - Array of all inputs. Each item in the array contains the following keys:
  • type - Type of input (file_input, network_input, smpte2022_dash7_network_input, smpte2110_input, device_input)
  • uri - Path to input file
• output_groups - Array of all output groups. Each item in the array contains the following keys:
  • name - Indicates group type (Archive, Apple HLS, etc.)
  • outputs - Array of all outputs in this group.
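Modeled on the bundled example_script.rb, the skeleton below walks the structure just described. One assumption is flagged in the code: it reads the JSON hash from the first command-line argument, so confirm the exact delivery convention against example_script.rb on your system.

    #!/usr/bin/env ruby
    # Skeleton pre/post script, modeled on the bundled example_script.rb.
    # Assumption: the JSON-formatted hash arrives as the first command-line
    # argument; confirm the delivery convention against example_script.rb.
    require 'json'

    event = JSON.parse(ARGV[0] || '{}')

    puts "Live Event #{event['id']} (#{event['script_type']})"  # PRE or POST

    Array(event['inputs']).each do |input|
      puts "  input: type=#{input['type']} uri=#{input['uri']}"
    end

    Array(event['output_groups']).each do |group|
      puts "  output group: #{group['name']}"                   # Archive, Apple HLS, etc.
      puts "    #{Array(group['outputs']).length} output(s)"
    end

Remember that the script file must carry execute permission for the elemental user, and that a long-running script extends the Live Event's reported start and end times as described above.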