VStore: A Data Store for Analytics on Large Videos

Tiantu Xu, Luis Materon Botelho, Felix Xiaozhu Lin
Purdue ECE

arXiv:1810.01794v3 [cs.DB] 17 Feb 2019

Abstract

We present VStore, a data store for supporting fast, resource-efficient analytics over large archival videos. VStore manages video ingestion, storage, retrieval, and consumption. It controls video formats along the video data path. It is challenged by i) the huge combinatorial space of video format knobs; ii) the complex impacts of these knobs and their high profiling cost; iii) optimizing for multiple resource types. It explores an idea called backward derivation of configuration: in the opposite direction along the video data path, VStore passes the video quantity and quality expected by analytics backward to retrieval, to storage, and to ingestion. In this process, VStore derives an optimal set of video formats, optimizing for different resources in a progressive manner.

VStore automatically derives large, complex configurations consisting of more than one hundred knobs over tens of video formats. In response to queries, VStore selects video formats catering to the executed operators and the target accuracy. It streams video data from disks through decoder to operators. It runs queries as fast as 362× of video realtime.

CCS Concepts • Information systems → Data analytics; • Computing methodologies → Computer vision tasks; Object recognition

Keywords Video Analytics, Data Store, Deep Neural Networks

ACM Reference Format:
Tiantu Xu, Luis Materon Botelho, and Felix Xiaozhu Lin. 2019. VStore: A Data Store for Analytics on Large Videos. In Fourteenth EuroSys Conference 2019 (EuroSys '19), March 25–28, 2019, Dresden, Germany. ACM, New York, NY, USA, 17 pages. https://doi.org/10.1145/3302424.3303971

[First-page footnote: Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]. EuroSys '19, March 25–28, 2019, Dresden, Germany. © 2019 Association for Computing Machinery. ACM ISBN 978-1-4503-6281-8/19/03...$15.00. https://doi.org/10.1145/3302424.3303971]

[Figure 1. The VStore architecture, showing the video data path (Ingestion → Storage → Retrieval → Consumption, spanning transcoder bandwidth, disk space, decoder/disk bandwidth, and CPU/GPU cycles) and backward derivation of configuration.]

1 Introduction

Pervasive cameras produce videos at an unprecedented rate. Over the past 10 years, the annual shipments of surveillance cameras grew by 10×, to 130M per year [28]. Many campuses are reported to run more than 200 cameras 24×7 [57]. In such deployments, a single camera produces as much as 24 GB of encoded video footage per day (720p at 30 fps).

Retrospective video analytics. To generate insights from enormous video data, retrospective analytics is vital: video streams are captured and stored on disks for a user-defined lifespan; users run queries over the stored videos on demand. Retrospective analytics offers several key advantages that live analytics lacks. i) Analyzing many video streams in real time is expensive, e.g., running deep neural networks over live videos from a $200 camera may require a $4,000 GPU [34]. ii) Query types may become known only after the video capture [33]. iii) At query time, users may interactively revise their query types or parameters [19, 33], which may not be foreseen at ingestion time. iv) In many applications, only a small fraction of the video will eventually be queried [27], making live analytics an overkill.

A video query (e.g., "what are the license plate numbers of all blue cars in the last week?") is typically executed as a cascade of operators [32–34, 60, 67]. Given a query, a query engine assembles a cascade and runs the operators. Query engines typically expose to users the trade-offs between operator accuracy and resource costs, allowing users to obtain inaccurate results with a shorter wait. The users thus can explore large videos interactively [33, 34]. Recent query engines show promise of high speed, e.g., consuming one-day video in several minutes [34].

Need for a video store. While recent query engines assume all input data to be raw frames present in memory, there lacks a video store that manages large videos for analytics. The store should orchestrate four major stages on the video data path: ingestion, storage, retrieval, and consumption, as shown in Figure 1. The four stages demand multiple hardware resources, including encoder/decoder bandwidth, disk space, and CPU/GPU cycles for query execution. The resource demands are high, owing to the large video data, and demands for different resource types may conflict. Towards optimizing these stages for resource efficiency, classic video databases are inadequate [35]: they were designed for human consumers watching videos at 1×–2× the speed of video realtime; they are incapable of serving algorithmic consumers, i.e., operators, processing videos at more than 1000× video realtime. Shifting part of the query to ingestion [24] has important limitations and does not obviate the need for such a video store, as we will show in the paper.

[Figure 2. Video queries as operator cascades [34, 46]. (a) Car detector: Diff (frame diff detector) filters out similar frames; S-NN (specialized neural net) rapidly detects part of cars; NN (full neural net) analyzes the remaining frames. (b) Vehicle license plate recognition: Motion (motion detector) filters frames with little motion; License (license plate detector) spots plate regions for OCR to recognize characters.]

Towards designing a video store, we advocate taking a key opportunity: as video flows through its data path, the store should control video formats (fidelity and coding) through extensive video parameters called knobs. These knobs have significant impacts on resource costs and analytics accuracy, opening a rich space of trade-offs.

We present VStore, a system managing large videos for retrospective analytics. The primary feature of VStore is its automatic configuration of video formats. As video streams arrive, VStore saves multiple video versions and judiciously sets their storage formats; in response to queries, VStore retrieves stored video versions and converts them into consumption formats catering to the executed operators. Through configuring video formats, VStore ensures that operators meet their desired accuracies at high speed; it prevents video retrieval from bottlenecking consumption; it ensures that resource consumption respects budgets.

To decide video formats, VStore is challenged by i) an enormous combinatorial space of video knobs; ii) complex impacts of these knobs and high profiling costs; iii) optimizing for multiple resource types. These challenges were unaddressed: while classic video databases may save video contents in multiple formats, their format choices are oblivious to analytics and often ad hoc [35]; while existing query engines recognize the significance of video formats [32, 33, 67] and optimize them for query execution, they omit video coding, storage, and retrieval, which are all crucial to retrospective analytics.

To address these challenges, the key idea behind VStore is backward derivation, shown in Figure 1. In the opposite direction of the video data path, VStore passes the desired data quantity and quality from algorithmic consumers backward to retrieval, to storage, and to ingestion. In this process, VStore optimizes for different resources in a progressive manner; it elastically trades off among them to respect resource budgets. More specifically, i) from operators and their desired accuracies, VStore derives video formats for fastest data consumption, for which it effectively searches the large space of knob combinations; ii) from the consumption formats, VStore derives storage formats by coalescing them, reducing coding and storage costs; iii) from the storage formats, VStore derives a data erosion plan, which gradually deletes aging video data, trading off analytics speed for lower storage cost.

Through evaluation with two real-world queries over six video datasets, we demonstrate that VStore is capable of deriving large, complex configurations with hundreds of knobs over tens of video formats, which are infeasible for humans to tune. Following the configuration, VStore stores multiple formats for each video footage. To serve queries, it streams video data (encoded or raw) from disks through decoder to operators, running queries as fast as 362× of video realtime. As users lower the target query accuracy, VStore elastically scales down the costs by switching operators to cheaper video formats, accelerating the query by two orders of magnitude. This query speed is 150× higher compared to systems that lack automatic configuration of video formats. VStore reduces the total configuration overhead by 5×.

Contributions. We have made the following contributions.

• We make a case for a new video store for serving retrospective analytics over large videos. We formulate the design problem and experimentally explore the design space.

• To design such a video store, we identify the configuration of video formats as the central concern. We present a novel approach called backward derivation. With this approach, we contribute new techniques for searching large spaces of video knobs, for coalescing stored video formats, and for eroding aging video data.

• We report VStore, a concrete implementation of our design. Our evaluation shows promising results. VStore is, to our knowledge, the first holistic system that manages the full video lifecycle optimized for retrospective analytics.

2 Motivations

2.1 Retrospective Video Analytics

Query & operators. A video query is typically executed as a cascade of operators.