
Khronos Standard APIs for Accelerating Vision and Inferencing
Neil Trevett, Khronos President and NVIDIA VP Developer Ecosystems
22nd September 2020
© 2020 The Khronos Group

Khronos Connects Software to Silicon
- Open interoperability standards that enable software to effectively harness the power of 3D and multiprocessor acceleration: 3D graphics, XR, parallel programming, vision acceleration and machine learning
- Non-profit, member-driven, standards-defining industry consortium, open to any interested company
- All Khronos standards are royalty-free; a well-defined IP framework protects participants' intellectual property
- Founded in 2000; more than 150 members (~40% US, 30% Europe, 30% Asia)

Khronos Active Initiatives
- 3D graphics: desktop, mobile and web
- 3D assets: authoring and delivery
- Portable XR: augmented and virtual reality
- Parallel computation: vision, inferencing and machine learning, including embedded and safety-critical markets

Khronos Compute Acceleration Standards
- Higher-level languages and APIs for streamlined development and performance portability: single-source C++ programming with compute acceleration (SYCL) and graph-based vision and inferencing acceleration (OpenVX)
- Lower-level APIs for direct hardware control: GPU rendering plus compute acceleration, heterogeneous compute acceleration (OpenCL), and an Intermediate Representation (IR) supporting parallel execution and graphics (SPIR-V)
- Target hardware: CPUs, GPUs, FPGAs, DSPs, AI/tensor hardware and custom accelerators
- Industry interest in parallel compute acceleration is increasing to combat the 'End of Moore's Law'

Embedded Vision and Inferencing Acceleration
- Neural networks are trained on high-end desktop and cloud systems: training data ingestion produces trained networks, which are then compiled
- Applications link to the compiled inferencing engine code, or call a vision/inferencing API over hardware acceleration APIs
- Sensor data is processed on diverse embedded hardware: GPUs, DSPs, FPGAs and dedicated hardware

NNEF: Neural Network Exchange Format
- Before (training and inferencing fragmentation): every inferencing engine needs a custom importer from every training framework
- After (training and inferencing interoperability): frameworks export to NNEF, inferencing engines import NNEF, and common optimization and processing tools can operate on the shared format

NNEF and ONNX
- ONNX and NNEF are complementary: ONNX moves quickly to track authoring framework updates and gives software stacks flexibility, while NNEF provides a stable bridge from training into edge inferencing engines, with the stability needed for hardware deployment
- ONNX is an open source project initiated by Facebook and Microsoft; NNEF is a defined specification with multi-company governance at Khronos
- NNEF V1.0 was released in August 2018 after positive industry feedback on the provisional specification; a maintenance update was issued in September 2019, and extensions to V1.0 have been released for expanded functionality
- ONNX 1.6 was released in September 2019, introducing support for quantization; the ONNX Runtime is being integrated with GPU inferencing engines such as NVIDIA TensorRT

NNEF Open Source Tools Ecosystem
- TensorFlow and TensorFlow Lite import/export, with compound operations captured by exporting the graph Python script
- NNEF Model Zoo: now available on GitHub.
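To make the exchange format concrete, here is a small graph in the NNEF 1.0 textual syntax; the network, tensor names and shapes are illustrative examples, not taken from the slides:

```
version 1.0;

graph simple_net( input ) -> ( output )
{
    input = external(shape = [1, 3, 224, 224]);
    filter = variable(shape = [32, 3, 5, 5], label = 'conv1/filter');
    bias = variable(shape = [1, 32], label = 'conv1/bias');
    conv1 = conv(input, filter, bias);
    output = relu(conv1);
}
```

A structure description like this travels together with the tensor weight data, so any inferencing engine with an NNEF importer can rebuild and optimize the same graph.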
- The Model Zoo is useful for checking that ingested NNEF produces acceptable results on the target system
- ONNX, Caffe and Caffe2 import/export of NNEF files
- Syntax parser and validator; OpenVX ingestion and execution
- NNEF adopts a rigorous approach to its design lifecycle, which is especially important for safety-critical or mission-critical applications in automotive, industrial and infrastructure markets
- NNEF open source projects are hosted on the Khronos NNEF-Tools GitHub repository under Apache 2.0: https://github.com/KhronosGroup/NNEF-Tools

SYCL: Single-Source C++ Parallel Programming
- C++ templates and lambda functions separate host code from accelerated device code; the accelerated code is passed into device OpenCL compilers
- Standard C++ application code, C++ template libraries and ML libraries and frameworks, such as SYCL-BLAS, SYCL-DNN, SYCL-Eigen and the SYCL Parallel STL, can be directly compiled and accelerated
- Kernel fusion can give better performance on complex applications and libraries than hand-coding
- SYCL is ideal for accelerating larger C++-based engines and applications with performance portability

SYCL Implementations
- SYCL, OpenCL and SPIR-V, as open industry standards, enable flexible integration and deployment of multiple acceleration technologies
- SYCL also enables Khronos to influence ISO C++ to (eventually) support heterogeneous compute
- DPC++ (part of oneAPI): uses LLVM/Clang; targets any CPU, OpenCL + SPIR-V on Intel CPUs, GPUs and FPGAs, and CUDA + PTX on NVIDIA GPUs
- ComputeCpp: SYCL 1.2.1 on multiple hardware; targets any CPU and OpenCL + SPIR(-V) devices including Intel CPUs and GPUs, Arm Mali, IMG PowerVR and Renesas R-Car, plus experimental OpenCL + PTX on NVIDIA GPUs
- triSYCL: open source test bed; targets any CPU via OpenMP, OpenCL + SPIR/LLVM, Xilinx FPGAs, and POCL (an open source OpenCL supporting CPUs, NVIDIA GPUs and more)
- hipSYCL: SYCL 1.2.1 on CUDA and HIP/ROCm; targets any CPU via OpenMP, NVIDIA GPUs via CUDA, and AMD GPUs via ROCm (depends on driver stack)
- Multiple backends are in development, and SYCL is beginning to be supported on low-level APIs in addition to OpenCL, e.g. ROCm and CUDA
- For more information: http://sycl.tech

OpenVX: Cross-Vendor Vision and Inferencing
- High-level graph-based abstraction for portable, efficient vision processing
- A graph can contain both vision processing and neural network nodes, enabling global optimizations
- Optimized OpenVX drivers are created, optimized and shipped by processor vendors; OpenVX is implementable on almost any hardware or processor with performance portability
- Run-time graph execution needs very little host CPU interaction
- Example graph: a camera feeds vision nodes and CNN nodes in the application graph, with native camera control and downstream processing
- The NNEF Translator converts an NNEF representation into an OpenVX node graph
- Performance is comparable to hand-optimized, non-portable code, with much lower development effort: real, complex applications on real, complex hardware, with shipping hardware implementations

OpenVX 1.3: Released October 2019
- Functionality consolidated into the core specification: the Neural Net extension, NNEF kernel import, Safety Critical features, etc.
- Deployment flexibility through feature sets: conformant implementations ship one or more complete feature sets.
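OpenVX itself is a C API in which a graph is built first, then verified and executed as a whole (vxCreateGraph, vxVerifyGraph, vxProcessGraph). As a language-neutral sketch of that build-verify-execute model, here is a toy Python version in which "verification" fuses the node chain into a single function, loosely mimicking how a vendor runtime can merge common kernel sequences before execution. All names here are hypothetical, not OpenVX API:

```python
# Toy sketch of the OpenVX graph execution model (illustrative only).
# Build a graph of nodes, "verify" it (global optimization pass),
# then process data with one call and minimal host involvement.

class Node:
    def __init__(self, name, fn):
        self.name = name
        self.fn = fn  # per-element operation this node performs

class Graph:
    def __init__(self):
        self.nodes = []
        self.fused = None

    def add(self, name, fn):
        self.nodes.append(Node(name, fn))
        return self

    def verify(self):
        # Global optimization: fuse the whole node chain into one
        # composed function before any execution happens.
        fns = [n.fn for n in self.nodes]
        def fused(x):
            for f in fns:
                x = f(x)
            return x
        self.fused = fused
        return self

    def process(self, data):
        # One call executes the entire optimized graph.
        return [self.fused(x) for x in data]

# Example: a scale -> threshold pipeline on a toy 1-D "image"
graph = (Graph()
         .add("scale", lambda p: p * 2)
         .add("threshold", lambda p: 255 if p > 100 else 0)
         .verify())

result = graph.process([10, 60, 90])  # -> [0, 255, 255]
```

The point of the sketch is the separation of phases: because the runtime sees the whole graph before execution, it can optimize across node boundaries, which is exactly what per-function vision libraries cannot do.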
- Feature sets enable market-focused implementations:
  - Baseline graph infrastructure (enables the other feature sets)
  - Default vision functions
  - Enhanced vision functions (introduced in OpenVX 1.2)
  - Neural network inferencing (including tensor objects)
  - NNEF kernel import (including tensor objects)
  - Binary images
  - Safety critical (reduced features for easier safety certification)
- Open source conformance test suite: https://github.com/KhronosGroup/OpenVX-cts/tree/openvx_1.3
- Specification: https://www.khronos.org/registry/OpenVX/specs/1.3/html/OpenVX_Specification_1_3.html

OpenVX/OpenCL Interop
- Custom accelerated nodes: OpenVX user-kernels can access the OpenCL command queue and cl_mem objects to asynchronously schedule OpenCL kernel execution
- The runtime can copy or export OpenVX data objects into cl_mem buffers, and map or copy cl_mem buffers into OpenVX data objects
- Host-device operations during data exchange are fully asynchronous

Open Source OpenVX and Samples
- Open source OpenVX tutorial and code samples: https://github.com/rgiduthuri/openvx_tutorial and https://github.com/KhronosGroup/openvx-samples
- Fully conformant open source OpenVX 1.3 for Raspberry Pi: https://github.com/KhronosGroup/OpenVX-sample-impl/tree/openvx_1.3
- Runs on Raspberry Pi 3 and 4 Model B with Raspbian OS, with memory access optimization via tiling/chaining, highly optimized kernels using the multimedia instruction set, automatic parallelization for multicore CPUs and GPUs, and automatic merging of common kernel sequences

OpenCL is Widely Deployed and Used
- The industry's most pervasive, cross-vendor, open standard for low-level heterogeneous parallel programming
- Used across desktop creative applications (e.g. Modo, Vegas Pro), parallel languages, machine learning libraries and frameworks (clDNN, SYCL-DNN, TI DL Library (TIDL), Arm Compute Library, Synopsys MetaWare EV, VeriSilicon, NNAPI, Xiaomi), molecular modelling libraries (e.g. ForceBalance), vision, imaging and video libraries, math and physics libraries, linear algebra libraries (SYCL-BLAS, CLBlast) and compilers
- Accelerated implementations: https://en.wikipedia.org/wiki/List_of_OpenCL_applications

OpenCL: Low-Level Parallel Programming
- A programming and runtime framework for application acceleration: offload compute-intensive kernels onto parallel heterogeneous processors (CPUs, GPUs, DSPs, FPGAs, tensor processors)
- Kernels are written in the OpenCL C or C++ kernel languages
- Platform layer API: query, select and initialize compute devices
- Runtime API: compile, load and execute kernels across devices
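The division of labor in the OpenCL model, a host-side runtime API launching kernels that each run once per work-item over a global work size, can be sketched in Python. Real OpenCL host code is C/C++ (e.g. clEnqueueNDRangeKernel) and kernels are written in OpenCL C, so everything below is an illustrative analogy with hypothetical names:

```python
# Toy sketch of the OpenCL execution model (illustrative only).
# The host "enqueues" a kernel over a global work size; the kernel body
# runs once per work-item, indexed by its global id.

def enqueue_nd_range(kernel, global_size, *buffers):
    # A real runtime would dispatch work-items in parallel on a device;
    # here we just loop over the global ids on the host.
    for gid in range(global_size):
        kernel(gid, *buffers)

# Kernel written per work-item, in the style of an OpenCL C kernel:
# __kernel void vec_add(__global int* a, __global int* b, __global int* c)
def vec_add(gid, a, b, c):
    c[gid] = a[gid] + b[gid]

a = [1, 2, 3, 4]
b = [10, 20, 30, 40]
c = [0] * 4
enqueue_nd_range(vec_add, 4, a, b, c)  # c -> [11, 22, 33, 44]
```

The kernel never loops over the data itself; the runtime's NDRange launch supplies the parallelism, which is what lets the same kernel source scale across CPUs, GPUs, DSPs and FPGAs.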