Copyright by Jian He 2020

The Dissertation Committee for Jian He certifies that this is the approved version of the following dissertation:

Empowering Video Applications for Mobile Devices

Committee:
Lili Qiu, Supervisor
Mohamed G. Gouda
Aloysius Mok
Xiaoqing Zhu

Empowering Video Applications for Mobile Devices

by Jian He

DISSERTATION
Presented to the Faculty of the Graduate School of The University of Texas at Austin in Partial Fulfillment of the Requirements for the Degree of DOCTOR OF PHILOSOPHY

THE UNIVERSITY OF TEXAS AT AUSTIN
May 2020

Acknowledgments

First and foremost, I want to thank my advisor, Prof. Lili Qiu, for the support and guidance I have received over the past few years. I appreciate all her contributions of time, ideas, and funding that made my Ph.D. experience productive and stimulating. Her enthusiasm for research significantly motivated me to concentrate on my own work, especially during tough times in my Ph.D. pursuit. She taught me how to crystallize ideas into solid and compelling research, and I firmly believe that working with her will help me build a more successful career in the future.

I also want to thank the members of my dissertation committee, Prof. Mohamed G. Gouda, Prof. Aloysius Mok, and Dr. Xiaoqing Zhu. I owe them many thanks for their insightful comments on my dissertation.

I was very fortunate to collaborate with Wenguang Mao, Mubashir Qureshi, Ghufran Baig, Zaiwei Zhang, Yuchen Cui, Sangki Yun, Zhaoyuan He, Chenxi Yang, Wangyang Li, and Yichao Chen on many interesting projects. They always had the time and passion to devote to my research; without their support, I could not have completed those projects smoothly. I also thank my colleagues Mei Wang, Wei Sun, and Swadhin Pradhan for their great help in making my research life enjoyable.

I would like to thank Xiaoqing Zhu, Shruti Sanadhya, Sangki Yun, Christina Vlachou, and Kyu-Han Kim, my mentors during my internships at Cisco and HP Labs. I had a great time learning from them how to carry out projects in industry and in research labs, and they encouraged me to pursue further success in my career.

I feel extremely grateful for the many friends who brought so much joy to my life at UT: Chen Chen, Lei Xu, Yuxiang Lin, Wenhui Zhang, Hangchen Yu, Zhiting Zhu, Yuepeng Wang, Xinyu Wang, Ye Zhang, and many others. More importantly, I owe my sincere gratitude to Xiaoting Liu, who provided me with her continuous care. I will never forget the days and nights we spent together getting through the hard times of the COVID-19 outbreak. I wish you all the best!

Finally, I would like to thank my family for all their love and encouragement. I dedicate this dissertation to you.

Abstract

Empowering Video Applications for Mobile Devices

Jian He, Ph.D.
The University of Texas at Austin, 2020

Supervisor: Lili Qiu

The popularity of video applications has grown rapidly. There are two main trends in their development: (i) video streaming that supports higher-resolution and 360° videos, and (ii) video analytics (e.g., running object detection on video frames). In this dissertation, we focus on improving the performance of streaming 360° and 4K videos and of running real-time video analytics on mobile devices. We identify several major challenges in guaranteeing a high user experience for video applications on mobile devices.

First, video applications increasingly call for high-resolution videos (e.g., 4K). Due to the limited hardware resources on mobile devices, coding high-resolution video is slow, so it is critical to design a lightweight video codec that offers both fast coding and high compression efficiency. Second, wireless channels exhibit unpredictable throughput fluctuations, so a robust rate adaptation algorithm is needed to adjust video quality to the varying network conditions. Third, streaming the entire panoramic view of a 360° video wastes a great deal of bandwidth, while transmitting only the portion visible in the user's field of view (FoV) significantly degrades video quality; it is hard to save bandwidth while maintaining high quality in the face of inevitable head-movement prediction errors. Last, motion-based object tracking can speed up video analytics, but existing motion estimation is noisy in the presence of complex backgrounds and changes in object size or shape.

In this dissertation, we show how to address these challenges. We propose a new layered coding design for high-resolution video. It adapts effectively to varying data rates by first sending the base layer and then opportunistically sending more layers whenever the link allows. We further design an optimization algorithm that decides which video layers to send according to the available throughput; compared with existing rate adaptation algorithms, ours adds the new dimension of choosing how many layers to transmit. We design a novel layered, tile-based encoding framework for 360° videos that achieves efficient coding, bandwidth savings, and robustness against head-movement prediction error. Moreover, we design a robust technique to extract reliable motion from video frames: a combination of feature maps and motion is used to generate a representative mask that reliably captures the motion of object pixels and changes in the overall object shape or size.
First, we implement our tile-based layered encoding framework, Rubiks, on mobile devices for 360° video streaming. We exploit the spatial and temporal characteristics of 360° videos for encoding: Rubiks splits the 360° video spatially into tiles and temporally into layers. The client runs an optimization routine to determine which video data to fetch so as to optimize user QoE. With this encoding approach, the portions of the video that have a high probability of being viewed can be sent at higher quality and the portions with a lower probability of being viewed at lower quality, and by controlling the amount of data sent, the data can be decoded in time. Rubiks saves significant bandwidth while maximizing the user's QoE and decoding the video in a timely manner; compared with existing approaches, it achieves up to 69% improvement in user QoE and 49% savings in bandwidth.

Next, we design Jigsaw, a system that supports live 4K video streaming over wireless networks using commodity mobile devices. Given the high data-rate requirement of 4K video, 60 GHz links are appealing, but their large and unpredictable throughput fluctuations make it hard to provide a desirable user experience. Jigsaw consists of (i) easy-to-compute layered video coding that seamlessly adapts to unpredictable wireless link fluctuations, (ii) an efficient client GPU implementation of this video coding on commodity mobile devices, and (iii) effective use of both WiFi and WiGig through delayed video adaptation and smart scheduling. Using real experiments and emulation, we demonstrate the feasibility and effectiveness of Jigsaw: it improves PSNR by 6-15 dB and SSIM by 0.011-0.217 over state-of-the-art approaches.

Finally, we develop Sight, a novel mobile video analytics system. Its unique features include (i) high accuracy, (ii) real-time operation, and (iii) running exclusively on a mobile device, without the need for an edge/cloud server or network connectivity. At its heart lies an effective technique to reliably extract motion from video frames and use that motion to speed up video analytics; unlike existing motion extraction, our technique is robust to background noise and to changes in object size. Extensive evaluation shows that Sight supports real-time object tracking at 30 frames per second (fps) on an NVIDIA Jetson TX2. For single-object tracking, Sight improves the average Intersection-over-Union (IoU) by 88%, improves the mean Average Precision (mAP) by 207%, and reduces average hardware resource usage by 45% over the state-of-the-art approach. For multi-object tracking, Sight improves IoU by 69%, improves mAP by 173%, and reduces resource usage by around 32%.
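The following toy sketch illustrates one way such a motion mask could be formed: a dense per-pixel motion field is gated by a normalized feature-activation map, and the surviving pixels drive a bounding-box update. The inputs, thresholds, and helper names are assumptions made for illustration only; this is not the actual Sight pipeline.

```python
# Toy sketch of combining a per-pixel motion field with a feature-activation
# map to form an object mask and update a bounding box, in the spirit of the
# motion-mask idea described above. Inputs and thresholds are assumed values.
import numpy as np

def motion_mask(motion, feature_map, motion_thresh=1.0, feature_thresh=0.5):
    """motion: HxWx2 per-pixel displacement; feature_map: HxW activations in [0, 1]."""
    magnitude = np.linalg.norm(motion, axis=2)     # per-pixel motion strength
    moving = magnitude > motion_thresh             # suppress static background
    object_like = feature_map > feature_thresh     # suppress moving clutter
    return moving & object_like                    # keep pixels that are both

def update_box(box, motion, mask):
    """Shift (x, y, w, h) by the mean displacement of the masked object pixels."""
    if not mask.any():
        return box                                 # no reliable motion: keep the box
    dx = float(motion[..., 0][mask].mean())
    dy = float(motion[..., 1][mask].mean())
    x, y, w, h = box
    return (x + dx, y + dy, w, h)

if __name__ == "__main__":
    H, W = 8, 8
    motion = np.zeros((H, W, 2)); motion[2:6, 2:6] = [3.0, 1.0]   # object moves right/down
    feats = np.zeros((H, W)); feats[2:6, 2:6] = 0.9               # detector fires on object
    mask = motion_mask(motion, feats)
    print(update_box((2.0, 2.0, 4.0, 4.0), motion, mask))         # -> (5.0, 3.0, 4.0, 4.0)
```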
Table of Contents

Acknowledgments
Abstract
List of Tables
List of Figures

Chapter 1. Introduction
  1.1 Background
  1.2 Motivation
    1.2.1 Video Streaming
    1.2.2 Mobile Video Analytics
  1.3 Challenges
    1.3.1 Video Streaming
    1.3.2 Video Coding
    1.3.3 Video Analytics
  1.4 Our Approach
  1.5 Summary of Contributions
  1.6 Dissertation Outline

Chapter 2. Related Work
  2.1 Video Streaming Algorithms
  2.2 Wireless Technologies
  2.3 Video Coding
  2.4 Mobile Video Analytics

Chapter 3. Practical 360° Video Streaming for Smartphones
  3.1 Background for 360° Video Streaming
    3.1.1 Existing Streaming Framework
    3.1.2 H.264 and HEVC Codecs
    3.1.3 Scalable Video Coding
  3.2 Motivation
    3.2.1 Real-Time Media Codecs
    3.2.2 Limitations of Existing Approaches
      3.2.2.1 Decoding Time
      3.2.2.2 Bandwidth Savings
      3.2.2.3 Video Quality
    3.2.3 Insights From Existing Approaches
  3.3 Challenges
  3.4 Our Approach
    3.4.1 Video Encoding
    3.4.2 360° Video Rate Adaptation
      3.4.2.1 MPC-based Optimization Framework
      3.4.2.2 User QoE
      3.4.2.3 Estimate Video Quality
      3.4.2.4 Decoding Time
      3.4.2.5 Improving Efficiency
  3.5 System Design for Rubiks
    3.5.1 System Architecture