Synthesis of Simulation and Implementation Code for OpenMAX Multimedia Heterogeneous Systems from UML/MARTE Models


Multimed Tools Appl (2017) 76:8195–8226
DOI 10.1007/s11042-016-3448-5

D. de la Fuente, J. Barba, J. C. López, P. Peñil, H. Posadas, P. Sánchez
University of Castilla-La Mancha, Ciudad Real, Spain

Received: 21 January 2015 / Revised: 31 January 2016 / Accepted: 15 March 2016 / Published online: 30 March 2016
© Springer Science+Business Media New York 2016

Abstract  The design of multimedia systems is becoming a more and more challenging task due to the combination of growing functionalities and strict performance requirements, along with reduced time-to-market. In this context, the OpenMAX initiative defines a standard interface for the development and interconnection of HW and SW multimedia components. However, the simulation and implementation steps required to obtain the final prototypes of such complex systems are still a challenge. To solve these problems, this paper presents a framework which enables automatic code generation from high-level UML/MARTE models. SystemC and VHDL code are synthesized according to the OpenMAX specification requirements and integrated with the application SW, derived from task-based system models. The generation of the SystemC executable specification enables easy simulation and verification of multimedia systems. After this verification stage, the framework automatically provides the VHDL code which feeds the final implementation and synthesis stage for the target platform.
To demonstrate this approach, a Sobel-based use case has been implemented with the developed framework.

Keywords: OpenMAX, UML/MARTE, SystemC, VHDL, Automatic code generation

1 Introduction

The design of multimedia embedded systems is a highly competitive field. New multimedia devices typically include a wide set of applications, with an increasing number of data-intensive processing operations needed to fulfil the new standards of audio, video or image quality. Furthermore, the development of a successful product greatly depends on being the first on the market to provide these new, complex functionalities. However, these intensive data processing operations require an increase in the computing power of the system. To cope with this complexity, high-performance features are demanded. For this purpose, some parts of the system can require implementation on heterogeneous platforms; therefore, HW/SW integration is necessary. However, common HW/SW design flows are far from easy and quick to apply. The lack of standardization in HW/SW integration has led to ad-hoc implementations, requiring in-depth knowledge and great effort from developers.

A consequence of large, complex systems is the multiple design variations that can be considered during the design process. The elements that compose the system can have different properties with implications for their behaviour. These properties can be examined, enabling a design space exploration (DSE) process to obtain the best configuration for each system element, so as to optimize them and achieve the performance requirements. Furthermore, current chips integrate multiprocessor systems, usually combining a growing number of general-purpose processors (GPPs) and different types of processing units (Digital Signal Processors and Graphics Processing Units) with configurable devices. In this way, application elements can be implemented either as SW or HW components during the design process.
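The Sobel use case mentioned above is, at its core, a 3×3 gradient convolution over a greyscale image. As a purely illustrative sketch of that kernel in C (this is the textbook operator, not the paper's HW/SW implementation; the function name and layout are our own):

```c
#include <stdlib.h>

/* 3x3 Sobel kernels for horizontal (GX) and vertical (GY) gradients */
static const int GX[3][3] = { {-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1} };
static const int GY[3][3] = { {-1,-2,-1}, { 0, 0, 0}, { 1, 2, 1} };

/* Approximate gradient magnitude |gx| + |gy| at interior pixel (x, y)
 * of a w-pixel-wide 8-bit greyscale image (no border handling). */
int sobel_at(const unsigned char *img, int w, int x, int y)
{
    int gx = 0, gy = 0;
    for (int j = -1; j <= 1; j++) {
        for (int i = -1; i <= 1; i++) {
            int p = img[(y + j) * w + (x + i)];
            gx += GX[j + 1][i + 1] * p;
            gy += GY[j + 1][i + 1] * p;
        }
    }
    return abs(gx) + abs(gy);
}
```

In the design flow discussed here, a kernel of this kind would sit behind a standardized OpenMAX component interface and be mapped to either SW or HW during the DSE process.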
Therefore, new design approaches should establish a DSE process to find the best configuration of the system elements, according to the specific characteristics which determine the correctness of the global system behaviour. In addition, these new design approaches should enable exploration of the system element mapping, taking advantage of the heterogeneous nature of current platforms in order to achieve the best system implementation with the available resources.

Different approaches have appeared to manage the design of large, complex systems. The adoption of standards supports the development of portable, flexible and reusable designs for embedded systems. At the same time, high-level methodologies also provide powerful solutions for system development and component reuse. The development of electronic system-level (ESL) design methodologies [17] provides a strategy for designing complex systems, in which the initial key activity is specification. SystemC [19, 38] is the most popular specification language for modelling systems at the functional level, and is widely accepted by the ESL community. Model-driven development (MDD) methodologies can simplify specifications and make them more understandable, which are major requirements for tackling the design challenge [14]. As an example, the use of standard languages, such as the Unified Modelling Language (UML, [42]), provides easy-to-read and portable specifications. MDD methodologies are commonly adopted to handle the design of large, complex functionalities. The latest design methodologies start from high-level UML models combined with algorithmic code (e.g. C, C++, Matlab, etc.) for the different system components [40]. In these models, the user defines the system functionality and the target platform where this functionality is executed.
The combination of both approaches exploits their potential synergies to obtain an improved result. The approach presented here combines the benefits of the OpenMAX standard with a UML-based synthesis solution. The OpenMAX standard [27] is an initiative promoted by the Khronos Group and supported by many important companies such as AMD, Intel, ARM, Sony, Nokia, NVIDIA and Samsung. OpenMAX is component-based and defines a standardized media component interface for audio, video, image and other domains (as defined in the standard itself). The OpenMAX middleware allows developers and platform providers to integrate and communicate with multimedia codecs implemented in hardware or software. Through the use of OpenMAX in embedded systems, developers can reduce the effort required to design multimedia HW/SW systems because: (a) the whole core logic can be reused when targeting a new platform (standardized interfaces); (b) it is not necessary to hand-write the drivers or the code that depends on these components; and (c) communication issues are separated from the processing primitives (the same synchronization protocols, standardized communication mechanisms, etc.).

In order to provide UML with specific semantics to support complex system design, a set of profiles has been developed. In the specific context of embedded systems, the Modelling and Analysis of Real-Time and Embedded Systems profile (MARTE [25]) adds to the UML language the concepts and semantics needed to describe real-time features at high abstraction levels. The UML Testing Profile [26] enables the definition of models that capture scenarios for system testing. Following this combined approach, this paper presents an infrastructure for automatic code generation for the simulation, and later implementation, of OpenMAX multimedia systems from UML/MARTE models. This infrastructure enables executable specifications to be obtained automatically.
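A central piece of the standardized OpenMAX IL component interface discussed above is its state model (Loaded, Idle, Executing, Pause), driven through OMX_SendCommand. The following self-contained C sketch mimics the legal direct state transitions; the enum and transition table are illustrative stand-ins written for this example, not the definitions from the standard's OMX_Core.h:

```c
#include <stdbool.h>

/* Illustrative stand-ins for the OpenMAX IL component states
 * (the real names are OMX_StateLoaded, OMX_StateIdle, etc.). */
typedef enum { ST_LOADED, ST_IDLE, ST_EXECUTING, ST_PAUSE } comp_state;

/* Legal direct transitions, as commanded in a real implementation via
 * OMX_SendCommand(handle, OMX_CommandStateSet, ...); note that a component
 * must pass through Idle on its way between Loaded and Executing. */
bool can_transition(comp_state from, comp_state to)
{
    switch (from) {
    case ST_LOADED:    return to == ST_IDLE;
    case ST_IDLE:      return to == ST_LOADED || to == ST_EXECUTING || to == ST_PAUSE;
    case ST_EXECUTING: return to == ST_IDLE   || to == ST_PAUSE;
    case ST_PAUSE:     return to == ST_IDLE   || to == ST_EXECUTING;
    }
    return false;
}
```

Separating this lifecycle logic from the processing primitives is precisely what lets the same core logic be reused across platforms, as point (a) above notes.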
These executable models are also used to simulate the OpenMAX designs captured in UML/MARTE and to decide which system configuration is the most suitable, allowing a later validation of the system requirements. For the simulation, SystemC [35] has been used in this work, since it is a modelling language applied to system-level modelling, architectural exploration, performance modelling, software development, functional verification and high-level synthesis. In addition, the automatically generated SystemC executable specification corresponds to a HW OpenMAX Integration Layer (IL) infrastructure [10]. This HW OpenMAX IL infrastructure can be executed and explored, considering different automatically generated test-benches as well. Finally, the UML/MARTE methodology captures enough information about the target platform to enable automatic VHDL code generation for the implementation of the SW/HW component interconnections, which are allocated in a Field Programmable Gate Array (FPGA).

The paper is organized as follows. In Section 2, a study of the state-of-the-art is presented. Section 3 provides the context in which this work is developed. In Section 4, the complete design flow is described. All the aspects of the UML/MARTE methodology, which are the main goal of this paper, are presented in Section 5. In Section 6, the OpenMAX-SystemC simulation process is explained. Later, in Section 7, the OpenMAX synthesis process for the generation of VHDL code is defined. A case study is presented in Section 8 and, finally, some conclusions are drawn in Section 9.

2 State-of-the-art

In order to facilitate understanding of this section, three main groups of related works will be described. These groups correspond to the three main pillars of this proposal: OpenMAX as the standard of reference for integration and multimedia platform modelling, SystemC for executable specifications, and the UML language