Video4Linux2: Path to a Standardized Codec
Recommended publications
- GLSL 4.50 Spec
The OpenGL® Shading Language. Language Version: 4.50, Document Revision: 7, 09-May-2017. Editor: John Kessenich, Google. Version 1.1 Authors: John Kessenich, Dave Baldwin, Randi Rost. Copyright (c) 2008-2017 The Khronos Group Inc. All Rights Reserved.

This specification is protected by copyright laws and contains material proprietary to the Khronos Group, Inc. It or any components may not be reproduced, republished, distributed, transmitted, displayed, broadcast, or otherwise exploited in any manner without the express prior written permission of Khronos Group. You may use this specification for implementing the functionality therein, without altering or removing any trademark, copyright or other notice from the specification, but the receipt or possession of this specification does not convey any rights to reproduce, disclose, or distribute its contents, or to manufacture, use, or sell anything that it may describe, in whole or in part. Khronos Group grants express permission to any current Promoter, Contributor or Adopter member of Khronos to copy and redistribute UNMODIFIED versions of this specification in any fashion, provided that NO CHARGE is made for the specification and the latest available update of the specification for any version of the API is used whenever possible. Such distributed specification may be reformatted AS LONG AS the contents of the specification are not changed in any way. The specification may be incorporated into a product that is sold as long as such product includes significant independent work developed by the seller. A link to the current version of this specification on the Khronos Group website should be included whenever possible with specification distributions.
- Customizable Multimedia Devices in Virtual Environments
Ankur Pai, Balasubramanian Seshasayee, Himanshu Raj and Karsten Schwan, Center for Experimental Research in Computer Systems, Georgia Institute of Technology, Atlanta, Georgia 30332-0250.

Abstract: The separation of logical from physical devices provided by modern virtualization techniques can be used to enhance device functionality, including by emulating non-native device functions in software. Such logically extended devices are particularly promising in the mobile domain, for embedded machines, handhelds, or phones. Challenges arise, however, from the highly heterogeneous nature of portables and the devices with which they can interact. This suggests that device extensions should be dynamic, varied at runtime to adjust to new user needs and/or changes in environmental conditions. This paper presents a model for runtime device extension and for then using such extended devices. The model provides uniform interfaces to native and extended devices, permits the safe runtime extension of device functionality, and can be shown to operate across both stationary and mobile platforms, and arbitrary operating systems and applications.

… maintaining a consistent view of physical devices by presenting only a common minimal set of device features is too limiting and, worse, it cannot account for changes in the environmental conditions or contexts in which devices are used. We argue that device sharing and interoperation in mobile systems requires models and interfaces with which virtualization infrastructures can dynamically change device functionality to account for runtime changes in device use, conditions, and contexts. This paper presents such a model. It implements the safe, runtime extension of device functionality by enabling the virtualization layer, guest operating systems, or applications to safely specify and deploy custom code that …
- X.Org Developers Conference 2020: Report of Contributions
X.Org Developers Conference 2020, Report of Contributions (https://xdc2020.x.org/e/XDC2020).

State of text input on Wayland (Contribution ID: 1), Wednesday, 16 September 2020, 20:15 (5 minutes). Since the last impromptu talk at GUADEC 2018, text input on Wayland has become more organized and more widely adopted. As before, the three-pronged approach of text_input, input_method, and virtual keyboard still causes confusion, but increased interest in implementing it helps find problems and come closer to something that really works for many use cases. The talk will mention how a broken assumption causes a broken protocol, and why we're not done with Wayland input methods yet. It's recommended to people who want to know more about the current state of input methods on Wayland. Recommended background: the aforementioned GUADEC talk, the wayland-protocols repository, and my blog: https://dcz_self.gitlab.io/. Primary author: DCZ, Dorota. Session: Demos / Lightning talks I. Track: Lightning Talk.

IGT GPU Tools 2020 Update (Contribution ID: 2), Wednesday, 16 September 2020, 20:00 (5 minutes). A short update on IGT: what has changed in the last year, where we are right now, and what we have planned for the near future. IGT GPU Tools is a collection of tools and tests aiding development of DRM drivers. It's widely used by Intel in its public CI system.
Version 7.8-Systemd
Linux From Scratch, Version 7.8-systemd. Created by Gerard Beekmans, edited by Douglas R. Reno. Copyright © 1999-2015 Gerard Beekmans. All rights reserved. This book is licensed under a Creative Commons License. Computer instructions may be extracted from the book under the MIT License. Linux® is a registered trademark of Linus Torvalds.

Table of contents (preface excerpt): Foreword; Audience; LFS Target Architectures; LFS and Standards; Rationale for Packages in the Book; Prerequisites …
- BlackBerry QNX Multimedia Suite
PRODUCT BRIEF: The QNX Multimedia Suite is a comprehensive collection of media technology that has evolved over the years to keep pace with the latest media requirements of current-day embedded systems. Proven in tens of millions of automotive infotainment head units, the suite enables media-rich, high-quality playback, encoding and streaming of audio and video content. The multimedia suite comprises a modular, highly scalable architecture that enables building high-value, customized solutions that range from simple media players to networked systems in the car. The suite is optimized to leverage system-on-chip (SoC) video acceleration, in addition to supporting OpenMAX AL, an industry open-standard API for application-level access to a device's audio, video and imaging capabilities.

Overview: Consumers' demand for multimedia has fueled an anywhere-anytime paradigm, making multimedia ubiquitous in embedded systems. More and more embedded applications have requirements for audio, video and communication processing capabilities. For example, an infotainment system's media player enables playback of content, stored either on-board or accessed from an external drive or mobile device, or streamed over IP via a browser. Increasingly, these systems also have streaming requirements for distributing content across a network, for instance from a head unit to the digital instrument cluster or rear-seat entertainment units. Multimedia is also becoming pervasive in other markets, such as medical, industrial, and whitegoods, where user interfaces are increasingly providing users with a rich media experience.

- QNX SDK for Smartphone Connectivity (with support for Apple CarPlay and Android Auto)
- Qt distributions for QNX SDP 7
- QNX CAR Platform for Infotainment

Features at a glance (multimedia playback):
- Support for a variety of external media stores
- Software-based audio CODECs
- Hardware-accelerated video CODECs
- Camera Driver Development (slides)
CAMERA DRIVER DEVELOPMENT, Kyle Xu.

Outline: prepare the hardware; set up the environment; write a camera driver; solve remaining problems.

Prepare the hardware: TI OMAP4 Pandaboard; Aptina AR0832 image sensor; OmniVision OV5650 image sensor; adapters and connectors; datasheets (AR0832, OV5650); system setup.

Set up the environment: install Ubuntu 12.04 on the Pandaboard; use a powerful, fast server for cross-compilation (on a 32-core server a build takes 5 minutes versus 5 hours on the Pandaboard); understand the data path and interfaces.

Data path: image sensor -> camera serial interface (CSI-2) -> image subsystem (ISS).

Interface, CSI-2: N data lanes and one data clock lane; each lane is a pair of +/- lines; the control interface is standard I2C (CCI) over SCL/SDA.

Interface, hardware level: the Pandaboard camera expansion connector (J-17) carries 5 CSI-2 lanes (including one clock lane), the I2C control lines (SCL, SDA), and other lines for battery power, camera clock, GPIO, and more. The image sensor uses 2 CSI-2 data lanes plus 1 CSI-2 clock lane and the I2C control lines (as slave), and is powered from the on-board battery.

Write a camera driver: describe the camera to the board; write the driver file (a. voltage regulator and clock, b. I2C bus, c. V4L2, d. driver framework, e. complete template); implement the driver in the Linux kernel.

Describe the camera to the board: create ar0832.h to describe the camera's platform information, with struct ar0832_platform_data containing fields for the voltage regulator, clock, and power on/off methods; then edit board-omap4panda-camera.c to assign values or functions to ar0832_platform_data's fields, as sketched below.
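The slides name the structure but show no code; the following is a hedged reconstruction of what ar0832.h and the board-file hookup might look like. All field names, the helper functions, and the I2C address 0x36 are assumptions for illustration, not the actual driver's definitions.

    /*
     * ar0832.h (hypothetical sketch): platform data the board passes to the
     * sensor driver, per the slide outline (regulator, clock, power on/off).
     */
    struct v4l2_subdev;

    struct ar0832_platform_data {
            const char *reg_name;   /* voltage regulator supplying the sensor */
            int (*set_xclk)(struct v4l2_subdev *sd, unsigned int hz); /* sensor clock */
            int (*power_on)(struct v4l2_subdev *sd);  /* board power-up sequence */
            int (*power_off)(struct v4l2_subdev *sd); /* board power-down sequence */
    };

    /*
     * board-omap4panda-camera.c (hypothetical sketch): fill in the hooks and
     * register the sensor on the I2C bus.
     */
    #include <linux/i2c.h>

    static int panda_ar0832_power_on(struct v4l2_subdev *sd)
    {
            /* enable the regulator, start the clock, release reset; omitted */
            return 0;
    }

    static int panda_ar0832_power_off(struct v4l2_subdev *sd)
    {
            /* reverse of power_on; omitted */
            return 0;
    }

    static struct ar0832_platform_data panda_ar0832_pdata = {
            .reg_name  = "cam2pwr",              /* illustrative regulator name */
            .power_on  = panda_ar0832_power_on,
            .power_off = panda_ar0832_power_off,
    };

    static struct i2c_board_info panda_camera_i2c[] = {
            {
                    I2C_BOARD_INFO("ar0832", 0x36),  /* illustrative slave address */
                    .platform_data = &panda_ar0832_pdata,
            },
    };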
- The OpenCL Specification
The OpenCL Specification. Version: 2.0, Document Revision: 22. Khronos OpenCL Working Group. Editor: Aaftab Munshi. Last Revision Date: 3/18/14.

Table of contents (excerpt):
1. Introduction
2. Glossary
3. The OpenCL Architecture
   3.1 Platform Model
   3.2 Execution Model
       3.2.1 Execution Model: Mapping work-items onto an NDRange
       3.2.2 Execution Model: Execution of kernel-instances
       3.2.3 Execution Model: Device-side enqueue
       3.2.4 Execution Model: Synchronization
       3.2.5 Execution Model: Categories of Kernels
   3.3 Memory …
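The excerpt cuts off in the architecture chapter, so a small host-side sketch of the section 3.2 execution model may help make the NDRange terms concrete. This is a minimal sketch, not taken from the specification: the kernel, buffer size, and work-group size are arbitrary choices, and error handling is omitted.

    /* Minimal host-side sketch of the NDRange execution model (section 3.2). */
    #include <CL/cl.h>

    static const char *src =
        "__kernel void scale(__global float *buf, float k) {"
        "    size_t gid = get_global_id(0);"   /* work-item index in the NDRange */
        "    buf[gid] *= k;"
        "}";

    int main(void)
    {
        cl_platform_id plat; cl_device_id dev;
        clGetPlatformIDs(1, &plat, NULL);
        clGetDeviceIDs(plat, CL_DEVICE_TYPE_DEFAULT, 1, &dev, NULL);
        cl_context ctx = clCreateContext(NULL, 1, &dev, NULL, NULL, NULL);
        /* OpenCL 2.0 queue-creation entry point */
        cl_command_queue q = clCreateCommandQueueWithProperties(ctx, dev, NULL, NULL);

        float data[1024] = {0};
        cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                    sizeof data, data, NULL);

        cl_program prog = clCreateProgramWithSource(ctx, 1, &src, NULL, NULL);
        clBuildProgram(prog, 1, &dev, NULL, NULL, NULL);
        cl_kernel k = clCreateKernel(prog, "scale", NULL);
        float factor = 2.0f;
        clSetKernelArg(k, 0, sizeof buf, &buf);
        clSetKernelArg(k, 1, sizeof factor, &factor);

        /* One kernel-instance over a 1-D NDRange: 1024 work-items grouped
         * into work-groups of 64 (sections 3.2.1 and 3.2.2). */
        size_t global = 1024, local = 64;
        clEnqueueNDRangeKernel(q, k, 1, NULL, &global, &local, 0, NULL, NULL);
        clFinish(q);
        return 0;
    }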
- Linux Kernel II: 9.1 Architecture
9.1 Architecture: The Linux kernel is a Unix-like operating system kernel used by a variety of operating systems based on it, which are usually in the form of Linux distributions. The Linux kernel is a prominent example of free and open-source software.

Programming language: The Linux kernel is written in the version of the C programming language supported by GCC (which has introduced a number of extensions and changes to standard C), together with a number of short sections of code written in the assembly language (in GCC's "AT&T-style" syntax) of the target architecture. Because of the extensions to C it supports, GCC was for a long time the only compiler capable of correctly building the Linux kernel.

Compiler compatibility: GCC is the default compiler for the Linux kernel source. In 2004, Intel claimed to have modified the kernel so that its C compiler was also capable of compiling it. There was another such reported success in 2009, with a modified 2.6.22 version of the kernel. Since 2010, an effort has been underway to build the Linux kernel with Clang, an alternative compiler for the C language; as of 12 April 2014, the official kernel could almost be compiled by Clang. The project dedicated to this effort is named LLVMLinux, after the LLVM compiler infrastructure upon which Clang is built. LLVMLinux does not aim to fork either the Linux kernel or LLVM; rather, it is a meta-project composed of patches that are eventually submitted to the upstream projects.
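As a concrete taste of the GCC dialect the excerpt describes, the following standalone snippet uses two of those extensions, typeof and statement expressions, in the style of the kernel's well-known min() macro (simplified here, not copied from the kernel source):

    #include <stdio.h>

    /* The ({ ... }) block is a GCC statement expression and typeof is a GCC
     * type operator; together they let the macro evaluate each argument
     * exactly once, which plain standard-C macros cannot guarantee. */
    #define min(x, y) ({            \
            typeof(x) _x = (x);     \
            typeof(y) _y = (y);     \
            _x < _y ? _x : _y; })

    int main(void)
    {
            printf("%d\n", min(3, 7));  /* prints 3 */
            return 0;
    }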
- OpenBricks Embedded Linux Framework: User Manual
OpenBricks Embedded Linux Framework - User Manual.

Contents (excerpt):
1. OpenBricks Introduction: What is it? Who is it for? Which hardware is supported? What does the software offer? Who's using it?
2. List of supported features: Key Features; Applicative Toolkits; Graphic Extensions; Video Extensions; Audio Extensions; Media Players; Key Audio/Video Profiles; Networking Features; Supported Filesystems; Toolchain Features.
3. OpenBricks Supported Platforms: Supported Hardware Architectures; Available Platforms; Certified Platforms …
- Augmented Reality Applications in Embedded Navigation Devices
BRNO UNIVERSITY OF TECHNOLOGY (Vysoké učení technické v Brně), Faculty of Electrical Engineering and Communication, Department of Radio Electronics.

Augmented reality applications in embedded navigation devices. Bachelor's thesis, bachelor's degree programme in Electronics and Communication. Author: Martin Jaroš (ID 146847, year 3, academic year 2013/2014). Supervisor: doc. Ing. Tomáš Frýza, Ph.D. Brno, 2014.

Assignment: Analyze the hardware possibilities of the OMAP platform and design an application to effectively combine captured video data and a rendered virtual scene based on navigational data from GPS and INS sensors. Design and create a functional prototype. Examine practical use cases of the proposed navigation device and design an applicable user interface.

Recommended literature:
[1] BIMBER, O.; RASKAR, R. Spatial augmented reality: merging real and virtual worlds. Wellesley: A K Peters, 2005, 369 p. ISBN 15-688-1230-2.
[2] Texas Instruments. OMAP 4460 Multimedia Device [online]. 2012 [cited 8 November 2012]. Available: http://www.ti.com/product/omap4460.

Assignment date: 10 February 2014. Submission deadline: 30 May 2014.
- GStreamer and dmabuf
GStreamer and dmabuf: OMAP4+ graphics/multimedia update. Rob Clark.

Outline: a quick hardware overview; kernel infrastructure (drm/gem, rpmsg+dce, dmabuf); blinky s***.. putting pixels on the screen; bringing it all together in GStreamer.

A quick hardware overview:
- DMM/Tiler: like a system-wide GART. Provides a contiguous view of memory to various hardware accelerators (IVA-HD, ISS, DSS); provides tiling modes for enhanced memory-bandwidth efficiency, for initiators like IVA-HD that access memory in 2D block patterns; and provides zero-cost rotation for DSS/ISS access in 0°/90°/180°/270° orientations (with horizontal or vertical reflection).
- IVA-HD: multi-codec hardware video encode/decode: H.264 BP/MP/HP encode/decode, MPEG-4 SP/ASP encode/decode, MPEG-2 SP/MP encode/decode, MJPEG encode/decode, VC-1/WMV9 decode, etc.
- DSS (display subsystem): 4 video pipes, 3 of which support scaling and YUV; any number of video pipes can be attached to one of 3 "overlay managers" to route to a display.

Kernel infrastructure (drm/gem, rpmsg+dce, dmabuf):
- DRM (Direct Rendering Manager): started life heavily based on x86/desktop graphics-card architecture, but has more recently evolved to better support ARM and other SoC platforms.
- KMS (kernel mode setting): replaces fbdev for more advanced display management: hotplug, multiple-display support (spanning/cloning), and more recently support for overlays (planes). KMS models the display hardware as connectors (the thing the display connects to; handles DDC/EDID and hotplug detection) and encoders (which take pixel data from a CRTC and encode it to a format suitable for connectors), i.e. …
- GEM (Graphics Execution Manager): the important/useful part here is the graphics/multimedia buffer management; a sketch of exporting a GEM buffer as a dmabuf follows.
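To make the GEM/dmabuf relationship concrete, here is a minimal user-space sketch (not from the slides) that creates a dumb GEM buffer and exports it as a dmabuf file descriptor through libdrm's PRIME interface. The device path, buffer dimensions, and omitted error handling are all illustrative.

    #include <fcntl.h>
    #include <stdio.h>
    #include <xf86drm.h>

    int main(void)
    {
            int fd = open("/dev/dri/card0", O_RDWR);  /* DRM device node */

            /* Ask the kernel for a "dumb" (CPU-renderable) GEM buffer. */
            struct drm_mode_create_dumb create = {
                    .width = 640, .height = 480, .bpp = 32,
            };
            drmIoctl(fd, DRM_IOCTL_MODE_CREATE_DUMB, &create);

            /* PRIME: wrap the GEM handle in a dmabuf file descriptor that
             * other devices/APIs (v4l2, GStreamer, EGL) can import without
             * copying the pixels. */
            int prime_fd;
            drmPrimeHandleToFD(fd, create.handle, DRM_CLOEXEC, &prime_fd);

            printf("dmabuf fd %d, stride %u, size %llu\n",
                   prime_fd, create.pitch, (unsigned long long)create.size);
            return 0;
    }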
- GPU Technology Conference 2010: Sessions on Video Processing
GPU Technology Conference 2010, sessions on video processing (subject to change). IMPORTANT: Visit www.nvidia.com/gtc for the most up-to-date schedule and to enroll in sessions to ensure your spot in the most popular courses.

2095 - Building High-Density Real-Time Video Processing Systems. Learn how GPUDirect can be used to effectively build real-time, high-performance, cost-effective video processing products. We will focus especially on how to optimize bus throughput while keeping CPU load and latency minimal. Speaker: Ronny Dewaele, Barco. Topics: Video Processing, Imaging. Time: Thursday, September 23rd, 16:00-16:50.

2029 - Computer Vision Algorithms for Automating HD Post-Production. Discover how post-production tasks can be accelerated by taking advantage of GPU-based algorithms. In this talk we present computer vision algorithms for corner detection, feature-point tracking, image warping and image inpainting, and their efficient implementation on GPUs using CUDA. We also show how to use these algorithms to do real-time stabilization and temporal re-sampling (re-timing) of high-definition video sequences, both common tasks in post-production. Benchmarking of the GPU implementations against optimized CPU algorithms demonstrates a speedup of approximately an order of magnitude. Speaker: Hannes Fassold, JOANNEUM RESEARCH. Topics: Computer Vision, Video Processing. Time: Wednesday, September 22nd, 15:00-15:50.

2125 - Developing GPU-Enabled Visual Effects for Film and Video. The arrival of fully programmable GPUs is now changing the visual effects industry, which traditionally relied on CPU computation to create its spectacular imagery. Implementing the complex image-processing algorithms used by VFX is a challenge, but the payoffs in terms of interactivity and throughput can be enormous. …