Analysis of Video Streaming in Industrial IoT Applications


Analysis of Video Streaming in Industrial IoT Applications

Cédric Surmont
Student number: 01502396
Supervisors: Prof. dr. Bruno Volckaert, Prof. dr. ir. Filip De Turck
Counsellors: Dwight Kerkhove, Tom Goethals
Master's dissertation submitted in order to obtain the academic degree of Master of Science in Information Engineering Technology
Academic year 2019-2020

Word of thanks

During this semester I have been working on this dissertation, which concludes my Master of Science in Information Engineering Technology at Ghent University. It would not have been possible without the help of the following people. First of all, I would like to thank my counsellors, Dwight Kerkhove and Tom Goethals, for guiding me, answering all my questions and giving me helpful tips and tricks. I want to thank Prof. dr. Bruno Volckaert and Prof. dr. ir. Filip De Turck for allowing me to choose this interesting subject for my dissertation. Finally, I want to thank my parents, Peter Surmont and Ann Verschelde, for giving me the opportunity to obtain this degree, and my girlfriend, Justine Opsomer, for her support.

Cédric Surmont

The author gives permission to make this master dissertation available for consultation and to copy parts of this master dissertation for personal use. In all cases of other use, the copyright terms have to be respected, in particular with regard to the obligation to state explicitly the source when quoting results from this master dissertation.

Analyse van video streaming in industriële IoT applicaties (NL)
by Cédric Surmont
Academic year 2019-2020
Ghent University, Faculty of Engineering and Architecture
Supervisors: Prof. dr. Bruno Volckaert, Prof. dr. ir. Filip De Turck
Counsellors: Dwight Kerkhove, Tom Goethals

Abstract - NL
The Internet of Things is growing very quickly and more and more devices are connected to the Internet. This concept is also used in industry, for example in imec's City of Things1 project. In that project a very large number of sensors are installed in Antwerp, and with that data all kinds of experiments and predictions can be made. Most of these sensors produce numerical data that is easy to process and store. Another kind of data that also has to be processed consists of video footage produced by the many security cameras in the city. The footage from these cameras, and from others such as inspection cameras in industry, usually goes through a transcoding process before the data can be used. This (live) process takes the incoming video from different sources and converts it into the desired output. That output can then be shown live, sometimes in several qualities, or stored to be retrieved later. Such processes place a heavy load on the architecture, and their different aspects are analysed in this dissertation.
Keywords: Video streaming, transcoding, live, compression, automation, bash, codec
1 https://www.imec-int.com/cityofthings/city-of-things-for-researchers

Analysis of video streaming in industrial IoT applications (EN)
by Cédric Surmont
Academic Year 2019-2020
Ghent University, Faculty of Engineering and Architecture
Supervisors: Prof. dr. Bruno Volckaert, Prof. dr. ir. Filip De Turck
Counsellors: Dwight Kerkhove, Tom Goethals

Abstract - EN
The Internet of Things is a rapidly growing space. More and more devices get smarter and are connected to the Internet. In an industrial context, IoT is widely used to obtain huge amounts of sensor data from machines, which is analyzed and distilled into useful metrics such as maintenance prediction, anomaly detection and performance. Imec's City of Things project2 is the perfect example of an industrial IoT environment. The CCTV cameras are one example of the machines in the City of Things project that transfer data for processing. These cameras produce video files 24/7 that have to be available for live monitoring and Video On Demand (VOD) services. Processing video streams requires a different kind of architecture than other sensor data. Therefore, in this dissertation, the effects of video streaming in an industrial IoT context will be analyzed. Specifically, the live transcoding process that takes place at the main framework will be tested and experimented on. This process accepts its input video from multiple sources and transforms each input to an output format with certain well-defined properties. The different aspects of this operation will be thoroughly inspected and tested for multiple scenarios.

Keywords: Video streaming, transcoding, live, compression, automation, bash, codec
2 https://www.imec-int.com/cityofthings/city-of-things-for-researchers

Analyse van video streaming in industriële IoT applicaties
Cédric Surmont
Supervisors: Bruno Volckaert, Filip De Turck
Counsellors: Dwight Kerkhove, Tom Goethals

Abstract - This article analyses the live transcoding process that is used to offer live video. This process will be set up to process the video footage of imec's City of Things project. The current setup places too heavy a load on the CPUs of the architecture, which causes buffers to fill up and can bring the process to a halt. The influence of different parameters, options and codecs on performance will be investigated.

Keywords - Video streaming, transcoding, live, compression, automation, bash, codec

I. Introduction
The Internet of Things is growing very quickly and more and more devices are connected to the Internet. This concept is also used in industry, for example in imec's City of Things project. In that project a very large number of sensors are installed in Antwerp, and with that data all kinds of experiments and predictions can be made. Most of these sensors produce numerical data that is easy to process and store. Another kind of data that also has to be processed consists of video footage produced by the many security cameras in the city. The footage from these cameras, and from others such as inspection cameras in industry, usually goes through a transcoding process before the data can be used. This (live) process takes the incoming video from different sources and converts it into the desired output. That output can then be shown live, sometimes in several qualities, or stored to be retrieved later. Such processes place a heavy load on the architecture and are analysed in this article.

II. Problem statement
The transcoding process consists of two phases: the incoming footage is first decoded to a raw video format such as YUV, and from this format the video can be re-encoded into the desired output. The technologies that perform this enCOding and DECoding are called codecs. These codecs are implementations of standardised formats, the most widely used being the H.264 standard. [...] express a series of data as a sum of cosine functions with different frequencies. This sum has specific properties that can be compressed much more easily with, for example, Huffman coding. The implementations of the H.264 specification and its successor H.265 that are used in this dissertation are libx264 and libx265, respectively.

All of this can be executed very quickly by a computer, but many different aspects influence the process. The properties of the input, such as the resolution of the footage and the frame rate, but also the content itself, play a role in the speed at which a CPU can perform this transcoding.

III. Test environment
With the help of tools such as Docker, FFmpeg and imec's Virtual Wall, this process can be investigated thoroughly by means of several experiments. The transcoding process is always executed in a custom Docker container. This container is based on the Ubuntu 18.04 image and is extended with the FFmpeg and youtube-dl packages. Through an entrypoint script, written in bash, a given video file is piped at container start-up, like a livestream, into the live transcoding script. This transcoding script mainly consists of one large FFmpeg command that generates a DASH output. DASH splits a video file into short segments that can be sent over the internet individually, making video streaming possible without having to download the whole video first. The FFmpeg command performs the transcoding with the various parameters and options that are passed in. To test the effect of these options, exactly one option is changed in each experiment and the results are compared with the original. Not only the options of the process are investigated, but also the influence of different kinds of input.

These results include, among others, the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM), which measure the quality of the output. These values are computed per frame and give an idea of the visual quality of the produced files.
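The excerpt does not reproduce the actual entrypoint or FFmpeg command used in the dissertation, so the sketch below only illustrates the kind of pipeline described above: a bash script that transcodes an incoming stream with libx264, packages it as MPEG-DASH, and afterwards computes PSNR/SSIM against a reference. All file names, bitrates and settings are placeholder assumptions.

    #!/bin/bash
    # Illustrative sketch only; not the dissertation's actual script.
    INPUT="$1"                 # e.g. a local file or a piped livestream
    OUTDIR=dash_out
    mkdir -p "$OUTDIR"

    # Live transcode to a single H.264 rendition, packaged as MPEG-DASH (manifest plus segments).
    ffmpeg -re -i "$INPUT" \
        -c:v libx264 -preset veryfast -b:v 3000k \
        -c:a aac -b:a 128k \
        -f dash "$OUTDIR/manifest.mpd"

    # Quality of an encoded file against its source; the psnr/ssim filters print
    # averages to the log and write per-frame values to the given stats files.
    # (encoded.mp4 stands in for a separately produced transcode.)
    ffmpeg -i encoded.mp4 -i "$INPUT" -lavfi "psnr=stats_file=psnr.log" -f null -
    ffmpeg -i encoded.mp4 -i "$INPUT" -lavfi "ssim=stats_file=ssim.log" -f null -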
Recommended publications
  • CapturejayHX Release Notes
    CapturejayHX release notes

    Version 2.3.7 - 1.7.17.10600 - 09/01/2019
    - capturejayHX adds support for streaming and network playback of Secure Reliable Transport (SRT) feeds. SRT is a protocol designed for live video streaming over the public internet; it provides reliable transmission similar to TCP, but does so at the application layer, using UDP as the underlying transport. It supports packet recovery while maintaining low latency (default: 120 ms). SRT also supports encryption using AES.
    - Added RTP streaming support
    - Added RTP Pro-MPEG streaming support
    - Added MJPEG encoder for UDP streaming
    - Updated FFmpeg to version 4.0.2
    - Added QuickSync H.264 video encoding and multichannel audio support (up to 16 channels) for WebRTC
    - Improved playback of RTSP streams
    - Updated NDI to version 3.5
    - Updated NVIDIA components to Video_Codec_SDK_8.1.24
    - Lots of other improvements and fixes in the capture/playback engine

    Version 2.3.6 - 1.7.12.9930 - 03/12/2018
    - Added QuickSync H.264 video encoding and multichannel audio support (up to 16 channels) for WebRTC
    - Improved playback of RTSP streams
    - Core FFmpeg components updated to release 3.4.2
    - Updated NVIDIA components to Video_Codec_SDK_8.1.24
    - Fixed frame order in UDP stream playback
    - Fixed audio/video synchronization issue after temporary loss of an input signal
    - Fixed RTSP/RTMP stream reconnect problem on network failure
    - Lots of other improvements and fixes in the capture/playback engine
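    The SRT support described above is not specific to capturejayHX: any FFmpeg build that includes libsrt can produce or consume the same kind of feed. The commands below are only an illustrative sketch with a made-up host, port and latency value; note that FFmpeg's srt latency option is expressed in microseconds, so 120000 corresponds to the 120 ms default mentioned above.

        # Push a local file as a live SRT feed (placeholder address; requires an FFmpeg build with libsrt).
        ffmpeg -re -i input.mp4 -c copy -f mpegts "srt://receiver.example.com:9000?latency=120000"

        # Receive and record an incoming SRT feed in listener mode.
        ffmpeg -i "srt://0.0.0.0:9000?mode=listener&latency=120000" -c copy recording.ts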
  • FFmpeg Command Android Studio
    FFmpeg command in Android Studio. FFmpeg/FFprobe compiled for Android: run FFmpeg and FFprobe commands with ease in your Android project.

    About: this project is a continuation of the FFmpeg Android Java fork by WritingMinds. It fixes the "CANNOT LINK EXECUTABLE ffmpeg: has text relocations" issue on x86 devices, along with some other bugfixes, new features and newer FFmpeg builds.

    Architecture: Bravobit FFmpeg-Android works on the following architectures: armv7-neon, armv8, x86, x86_64.

    FFmpeg build: FFmpeg in this project was built with the following libraries: x264 r2851 ba24899, libpng 1.6.0 21, freetype2 2.8.1, libmp3lame 3.100, libvorbis 1.3.5, libvpx v1.6.1-1456-g7d1bf5d, libopus 1.2.1, fontconfig 2.11.11.294, libass 0.14.0, fribidi 0.19.7, Expat 2.1.0, fdk-aac 0.1.6.

    Features: uses the newest FFmpeg release n4.0-39-gda39990; uses the native capabilities of the processor on ARM; FFprobe is bundled in this library too; network features; multithreading.

    Usage - add the dependency:
        implementation 'nl.bravobit:android-ffmpeg:1.1.7'

    Check if FFmpeg is supported - to check whether FFmpeg is available on your device you can use the following method:
        if (FFmpeg.getInstance(this).isSupported()) {
            // FFmpeg is supported on this device
        }

    Run an FFmpeg command - in this code example we run the "ffmpeg -version" command by passing it to execute() together with an ExecuteBinaryResponseHandler:
        FFmpeg ffmpeg = FFmpeg.getInstance(context);
        ffmpeg.execute(cmd, new ExecuteBinaryResponseHandler() {
            @Override public void onStart() {}
            @Override public void onProgress(String message) {}
            @Override public void onFailure(String message) {}
            @Override public void onSuccess(String message) {}
            @Override public void onFinish() {}
        });

    Stop (or quit) FFmpeg - to stop a running FFmpeg process, call sendQuitSignal() on the FFtask that is running:
        FFmpeg ffmpeg = FFmpeg.getInstance(context);
        FFtask ffTask = ffmpeg.execute(..
  • High Efficiency, Moderate Complexity Video Codec Using Only RF IPR
    Thor: High Efficiency, Moderate Complexity Video Codec using only RF IPR
    draft-fuldseth-netvc-thor-00
    Arild Fuldseth, Gisle Bjontegaard (Cisco), IETF 93, Prague, CZ, July 2015

    Design principles
    - Moderate complexity to allow real-time implementation in SW on common HW, as well as new HW designs
    - Basic building blocks from the well-known hybrid approach (motion compensated prediction and transform coding)
    - Common design elements in modern codecs: larger block sizes and transforms (up to 64x64), quarter pixel interpolation, motion vector prediction, etc.
    - Cisco RF IPR (note well: declaration filed on draft): deblocking, transforms, etc. (some also essential in H.265/4)
    - Avoid non-RF IPR; if/when others offer RF IPR, design/performance will improve

    [Slide diagrams: encoder architecture (transform, quantizer, entropy coding, inverse transform, intra and inter frame prediction, loop filters, motion estimation, reconstructed frame memory) and the corresponding decoder architecture.]

    Block structure
    - Super block (SB): 64x64
    - Quad-tree split into coding blocks (CB) >= 8x8
    - Multiple prediction blocks (PB) per CB; Intra: 1 PB per CB; Inter: 1, 2 (rectangular) or 4 (square) PBs per CB
    - 1 or 4 transform blocks (TB) per CB

    Coding-block modes
    - Intra
    - Inter0: MV index, no residual information
    - Inter1: MV index, residual information
    - Inter2: explicit motion vector information, residual information
  • Hardware for Speech and Audio Coding
    Linköping Studies in Science and Technology, Thesis No. 1093
    Hardware for Speech and Audio Coding
    Mikael Olausson
    LiU-TEK-LIC-2004:22
    Department of Electrical Engineering, Linköpings universitet, SE-581 83 Linköping, Sweden, 2004
    ISBN 91-7373-953-7, ISSN 0280-7971

    Abstract: While the micro processors (MPUs) used as general purpose CPUs are converging (into Intel Pentium), DSP processors are diverging. In 1995, approximately 50% of the DSP processors on the market were general purpose processors, but last year only 15% of the DSP processors on the market were general purpose. The reason general purpose DSP processors fall short against application specific DSP processors is that most users want to achieve the highest performance under minimized power consumption and minimized silicon cost. Therefore, a DSP processor must be an Application Specific Instruction set Processor (ASIP) for a group of domain specific applications. An essential feature of the ASIP is its functional acceleration at the instruction level, which gives the specific instruction set architecture for a group of applications. Hardware acceleration for digital signal processing in DSP processors is essential to enhance performance while keeping enough flexibility. In the last 20 years, researchers and DSP semiconductor companies have been working on different kinds of acceleration for digital signal processing. The trade-off between performance and flexibility is always an interesting question, because all DSP algorithms are "application specific"; the acceleration for audio may not be suitable for the acceleration of baseband signal processing.
  • On Engagement with ICT Standards and Their Implementations in Open Source Software Projects: Experiences and Insights from the Multimedia Field
    International Journal of Standardization Research, Volume 19, Issue 1
    On Engagement With ICT Standards and Their Implementations in Open Source Software Projects: Experiences and Insights From the Multimedia Field
    Jonas Gamalielsson, University of Skövde, Sweden; Björn Lundell, University of Skövde, Sweden

    ABSTRACT: The overarching goal in this paper is to investigate organisational engagement with an ICT standard and open source software (OSS) projects that implement the standard, with a specific focus on the multimedia field, which is relevant in light of the wide deployment of standards and different legal challenges in this field. The first part reports on experiences and insights from engagement with standards in the multimedia field and from implementation of such standards in OSS projects. The second part focuses on the case of the ITU-T H.264 standard and the two OSS projects OpenH264 and x264 that implement the standard, and reports on a characterisation of organisations that engage with and control the H.264 standard, and organisations that engage with and control OSS projects implementing the H.264 standard. Further, projects for standardisation and implementation of H.264 are contrasted with respect to the mix of contributing organisations, and findings are related to organisational strategies of contributing organisations and previous research.

    Keywords: AVC, H.264, Involvement, ISO, ITU-T, OpenH264, Participation, x264

    1 Introduction: There are a number of different challenges related to the provision of standards in the software sector that can impact on the extent to which it is possible to faithfully implement the specification of a standard in software systems (Blind and Böhm, 2019; Gamalielsson and Lundell, 2013; Lundell et al., 2019; UK, 2015).
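    The paper contrasts x264 and OpenH264 as two implementations of the same H.264 standard. As a purely illustrative sketch (assuming an FFmpeg build configured with both --enable-libx264 and --enable-libopenh264, plus placeholder file names and bitrate), the two encoders can be exercised on the same source and then compared:

        # Encode the same clip with the two H.264 implementations discussed in the paper.
        ffmpeg -i source.mp4 -c:v libx264     -b:v 2000k -an x264_out.mp4
        ffmpeg -i source.mp4 -c:v libopenh264 -b:v 2000k -an openh264_out.mp4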
  • Efficient Multi-Codec Support for OTT Services: HEVC/H.265 And/Or AV1?
    Efficient Multi-Codec Support for OTT Services: HEVC/H.265 and/or AV1?
    Christian Timmerer†,‡, Martin Smole‡, and Christopher Mueller‡
    †Alpen-Adria-Universität Klagenfurt, Austria, EU; ‡Bitmovin Inc., San Francisco, CA, USA
    ‡{firstname.lastname}@bitmovin.com, †{firstname.lastname}@itec.aau.at

    Abstract: The success of HTTP adaptive streaming is undisputed and technical standards begin to converge to common formats reducing market fragmentation. However, other obstacles appear in form of multiple video codecs to be supported in the future, which calls for an efficient multi-codec support for over-the-top services. In this paper, we review the state of the art of HTTP adaptive streaming formats with respect to new services and video codecs from a deployment perspective. Our findings reveal that multi-codec support is inevitable for a successful deployment of today's and future services and applications.

    INTRODUCTION: Today's over-the-top (OTT) services account for more than 70 percent of the internet traffic and this number is expected [...]

    [...] multiple versions (e.g., different resolutions and bitrates) and each version is divided into predefined pieces of a few seconds (typically 2-10 s). A client first receives a manifest describing the available content on a server, and then, the client requests pieces based on its context (e.g., observed available bandwidth, buffer status, decoding capabilities). Thus, it is able to adapt the media presentation in a dynamic, adaptive way. The existing different formats use slightly different terminology. Adopting DASH terminology, the versions are referred to as representations and pieces are called segments, which we will use henceforth. The major differences between these formats are shown in Table 1. We note a strong differentiation in the manifest format and it is expected that both MPEG's media presentation description (MPD) and HLS's playlist (m3u8) will coexist at least for some time.
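    To make the MPD/m3u8 distinction above concrete, here is a rough FFmpeg sketch (file names, bitrate and segment duration are placeholder assumptions) that packages the same H.264/AAC encode once as an MPEG-DASH presentation and once as an HLS playlist:

        # MPEG-DASH: segmented output plus an MPD manifest.
        ffmpeg -i mezzanine.mp4 -c:v libx264 -b:v 3000k -c:a aac -f dash dash/stream.mpd

        # HLS: TS segments plus an m3u8 playlist.
        ffmpeg -i mezzanine.mp4 -c:v libx264 -b:v 3000k -c:a aac \
               -f hls -hls_time 4 -hls_playlist_type vod hls/stream.m3u8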
  • California State University, Northridge: Optimized AV1 Inter Prediction Using Binary Classification Techniques
    CALIFORNIA STATE UNIVERSITY, NORTHRIDGE
    Optimized AV1 Inter Prediction using Binary Classification Techniques
    A graduate project submitted in partial fulfillment of the requirements for the degree of Master of Science in Software Engineering
    by Alex Kit Romero, May 2020

    The graduate project of Alex Kit Romero is approved:
    Dr. Katya Mkrtchyan (Date)
    Dr. Kyle Dewey (Date)
    Dr. John J. Noga, Chair (Date)
    California State University, Northridge

    Dedication: This project is dedicated to all of the Computer Science professors that I have come in contact with over the years who have inspired and encouraged me to pursue a career in computer science. The words and wisdom of these professors are what pushed me to try harder and accomplish more than I ever thought possible. I would like to give a big thanks to the open source community and my fellow cohort of computer science co-workers for always being there with answers to my numerous questions and inquiries. Without their guidance and expertise, I could not have been successful. Lastly, I would like to thank my friends and family who have supported and uplifted me throughout the years. Thank you for believing in me and always telling me to never give up.

    Table of Contents: Signature Page; Dedication; ...
  • arXiv:2002.01657v1 [eess.IV], 5 Feb 2020: Learned Lossless Image Compression With a Hyperprior and Discretized Gaussian Mixture Likelihoods
    LEARNED LOSSLESS IMAGE COMPRESSION WITH A HYPERPRIOR AND DISCRETIZED GAUSSIAN MIXTURE LIKELIHOODS
    Zhengxue Cheng, Heming Sun, Masaru Takeuchi, Jiro Katto
    Department of Computer Science and Communications Engineering, Waseda University, Tokyo, Japan

    ABSTRACT: Lossless image compression is an important task in the field of multimedia communication. Traditional image codecs typically support a lossless mode, such as WebP, JPEG2000, FLIF. Recently, deep learning based approaches have started to show their potential at this point. HyperPrior is an effective technique proposed for lossy image compression. This paper generalizes the hyperprior from the lossy model to lossless compression, and proposes an L2-norm term in the loss function to speed up the training procedure. Besides, this paper also investigates different parameterized models for latent codes, and proposes to use Gaussian mixture likelihoods to achieve adaptive and flexible context models. Experimental results validate that our method can outperform existing deep learning [...]

    [...] effectively in [12, 13, 14]. Some methods decorrelate each channel of latent codes and apply deep residual learning to improve the performance, as in [15, 16, 17]. However, deep learning based lossless compression has rarely been discussed. One related work is L3C [18], which proposes a hierarchical architecture with 3 scales to compress images losslessly. In this paper, we propose a learned lossless image compression using a hyperprior and discretized Gaussian mixture likelihoods. Our contributions mainly consist of two aspects. First, we generalize the hyperprior from the lossy model to a lossless compression model, and propose a loss function with an L2-norm term for lossless compression to speed up training. Second, we investigate four parameterized distributions and propose to use Gaussian mixture likelihoods for the context model.
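    The abstract notes that traditional codecs such as WebP already offer a lossless mode, which is the usual baseline for this kind of work. As a small illustrative sketch (placeholder file names, and assuming an FFmpeg build with libwebp), such baselines can be produced like this:

        # Lossless WebP: libwebp exposes a dedicated lossless switch.
        ffmpeg -i input.png -c:v libwebp -lossless 1 output.webp

        # PNG is lossless by design and needs no extra flags.
        ffmpeg -i input.bmp -c:v png output.png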
  • FFmpeg Codecs Documentation Table of Contents
    FFmpeg Codecs Documentation - Table of Contents
    1 Description
    2 Codec Options
    3 Decoders
    4 Video Decoders: 4.1 hevc; 4.2 rawvideo (4.2.1 Options)
    5 Audio Decoders: 5.1 ac3 (5.1.1 AC-3 Decoder Options); 5.2 flac (5.2.1 FLAC Decoder options); 5.3 ffwavesynth; 5.4 libcelt; 5.5 libgsm; 5.6 libilbc (5.6.1 Options); 5.7 libopencore-amrnb; 5.8 libopencore-amrwb; 5.9 libopus
    6 Subtitles Decoders: 6.1 dvbsub (6.1.1 Options); 6.2 dvdsub (6.2.1 Options); 6.3 libzvbi-teletext (6.3.1 Options)
    7 Encoders
    8 Audio Encoders: 8.1 aac (8.1.1 Options); 8.2 ac3 and ac3_fixed (8.2.1 AC-3 Metadata: 8.2.1.1 Metadata Control Options, 8.2.1.2 Downmix Levels, 8.2.1.3 Audio Production Information, 8.2.1.4 Other Metadata Options; 8.2.2 Extended Bitstream Information: 8.2.2.1 Part 1, 8.2.2.2 Part 2; 8.2.3 Other AC-3 Encoding Options; 8.2.4 Floating-Point-Only AC-3 Encoding Options); 8.3 flac (8.3.1 Options); 8.4 opus (8.4.1 Options); 8.5 libfdk_aac (8.5.1 Options, 8.5.2 Examples); 8.6 libmp3lame (8.6.1 Options); 8.7 libopencore-amrnb (8.7.1 Options); 8.8 libopus (8.8.1 Option Mapping); 8.9 libshine (8.9.1 Options); 8.10 libtwolame (8.10.1 Options); 8.11 libvo-amrwbenc (8.11.1 Options); 8.12 libvorbis (8.12.1 Options); 8.13 libwavpack (8.13.1 Options); 8.14 mjpeg (8.14.1 Options); 8.15 wavpack (8.15.1 Options: 8.15.1.1 Shared options, 8.15.1.2 Private options)
    9 Video Encoders: 9.1 Hap (9.1.1 Options); 9.2 jpeg2000 (9.2.1 Options); 9.3 libkvazaar (9.3.1 Options); 9.4 libopenh264 (9.4.1 Options); 9.5 libtheora (9.5.1 Options, 9.5.2 Examples); 9.6 libvpx (9.6.1 Options); 9.7 libwebp (9.7.1 Pixel Format, 9.7.2 Options); 9.8 libx264, libx264rgb (9.8.1 Supported Pixel Formats, 9.8.2 Options); 9.9 libx265 (9.9.1 Options); 9.10 libxvid (9.10.1 Options); 9.11 mpeg2 (9.11.1 Options); 9.12 png (9.12.1 Private options); 9.13 ProRes (9.13.1 Private Options for prores-ks, 9.13.2 Speed considerations); 9.14 QSV encoders; 9.15 snow (9.15.1 Options); 9.16 vc2 (9.16.1 Options)
    10 Subtitles Encoders: 10.1 dvdsub (10.1.1 Options)
    11 See Also
    12 Authors

    1 Description: This document describes the codecs (decoders and encoders) provided by the libavcodec library.
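    As a small usage sketch tying the encoder names listed above to an actual command line (file names and settings are placeholder assumptions; each encoder's private options are described in the sections listed above):

        # Pick encoders by name from the tables above: libx264 for video, libopus for audio.
        ffmpeg -i input.mov -c:v libx264 -preset medium -crf 23 -c:a libopus -b:a 96k output.mkv

        # List every encoder the local FFmpeg build actually provides.
        ffmpeg -encoders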
  • Course Outline & Schedule
    Course Outline & Schedule
    Open Media Encoding Techniques, Course Code PWL396, Duration: 3 days, Course Price: $2,815

    Course Description: Video, TV and image technology today dominates Internet services. Whether it be for live TV, streamed movies or video clips within social media, video images are everywhere. The long-term efficiency of these services depends upon the methods and mechanisms used to encode them. We have relied upon the developments from the digital video broadcast industry, ISO, MPEG and the ITU to provide us with standard ways to achieve this. However, the patent royalty cost is now considered to be holding back this efficient development. All commercial users of the commonly used encodings such as H.264, H.265/HEVC and other standardised codecs are required to pay royalties for using this technology through a firm of US patent lawyers known as MPEG-LA. Each new ITU-T standard encoding requires new and increased payments. The Alliance for Open Media was founded by leading Internet companies focused on developing next-generation media formats, codecs and technologies in the public interest. The Alliance is committing its collective technology and expertise to meet growing Internet demand for top-quality video, audio, imagery and streaming across devices of all kinds and for users worldwide. The aim is to develop royalty-free standardized encoding based upon the technology contributed by its members. This course provides a technical study of video coding and the technologies upon which the developing Open Media implementations are based.
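    The royalty-free route the course describes is what the Alliance for Open Media's AV1 (and, earlier, Google's VP9) provides, and both are usable through FFmpeg when it is built with the corresponding libraries. The commands below are only a rough sketch with placeholder files and quality settings, assuming a reasonably recent FFmpeg build with libvpx and libaom:

        # VP9 (libvpx) in constant-quality mode.
        ffmpeg -i input.mp4 -c:v libvpx-vp9 -crf 32 -b:v 0 -c:a libopus out_vp9.webm

        # AV1 (libaom): noticeably slower to encode, but more efficient.
        ffmpeg -i input.mp4 -c:v libaom-av1 -crf 32 -b:v 0 -c:a libopus out_av1.webm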
  • GlobalMeet Collaboration Deployment Guide
    1. Deployment Guide: GlobalMeet® Collaboration, December 2020

    Table of Contents
    - Introduction: Contents of this guide; Intended audience; Version information; What's new in this guide
    - About GlobalMeet Collaboration: Meeting features; Restricting meeting features; Desktop apps; Mobile apps; GlobalMeet for Outlook; File library (Storage, Supported file formats, Video file formats and codecs); Integrations (Google and Outlook calendars, GlobalMeet for Microsoft Teams); Language support (GlobalMeet meeting room, desktop and mobile apps; GlobalMeet for Outlook; Administrative portals); Branding and customization (Logo specs, Upload custom logos)
    - System requirements: Web; GlobalMeet desktop apps; GlobalMeet mobile apps; GlobalMeet for Outlook
    - Network considerations: Network traffic; Note about network quality; Ports and protocols; Browser and proxy considerations; Firewall transversal; Required domains; GlobalMeet Outlook add-in; Bandwidth considerations; Bandwidth estimating notes
    - GlobalMeet VRC implementation considerations: IP whitelisting (all systems); Supported endpoints; H.323 and SIP firewall ports; Bandwidth considerations
    - Single sign-on (SAML): Overview of the setup process; Required information; GlobalMeet login details; Data required by GlobalMeet
    - Application installers: GlobalMeet desktop apps; GlobalMeet mobile apps; GlobalMeet browser plugin; GlobalMeet for Outlook
  • Video Compression Optimized for Racing Drones
    Video Compression Optimized for Racing Drones
    Henrik Theolin
    Computer Science and Engineering, master's level, 2018
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering
    November 10, 2018

    Preface: To my wife and son, always! Without you I'd never try to become smarter. Thanks to my supervisor Staffan Johansson at Neava for providing room, tools and the guidance needed to perform this thesis. To my examiner Rickard Nilsson for helping me focus on the task and reminding me of the time limit to complete the report.

    Abstract: This thesis is a report on the findings of different video coding techniques and their suitability for a low-powered, lightweight system mounted on a racing drone. Low latency, high consistency and a robust video stream are of the utmost importance. The literature consists of multiple comparisons and reports on the efficiency of the most commonly used video compression algorithms. These reports and findings mostly do not target a low-latency system but are tested in a laboratory environment with settings unusable for a real-time system. The literature that deals with low-latency video streaming and network instability shows that only a limited subset of each compression algorithm is usable to ensure low complexity and no added delay in the coding process. The findings were that AVC/H.264 was the most suitable compression algorithm and, more precisely, that the x264 implementation was the best optimized to perform well on the low-powered system.
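    In practice, x264 is usually configured for this kind of low-latency use through its zerolatency tuning; the command below is only an illustrative sketch (placeholder source, destination address and bitrate, not the thesis' actual setup), assuming FFmpeg with libx264 on the sending side:

        # Low-latency H.264 stream from a source to a UDP receiver (placeholder address).
        ffmpeg -i source.mp4 \
               -c:v libx264 -preset ultrafast -tune zerolatency -b:v 2000k -g 30 \
               -f mpegts udp://10.0.0.2:5000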