Overview of the High Efficiency Video Coding (HEVC) Standard
Total pages: 16. File type: PDF, size: 1,020 KB.
Recommended publications
ACM Multimedia 2007
ACM Multimedia 2007 Call for Papers (PDF). University of Augsburg, Augsburg, Germany, September 24–29, 2007, http://www.acmmm07.org. ACM Multimedia 2007 invites you to participate in the premier annual multimedia conference, covering all aspects of multimedia computing: from underlying technologies to applications, theory to practice, and servers to networks to devices. A final version of the conference program is now available. Registration will be open from 7:30am to 7pm on Monday, from 7:30am to 5pm on Tuesday, Wednesday and Thursday, and from 7:30am to 1pm on Friday. The ACM Business Meeting is scheduled for Wednesday, 9/26/2007, during lunch (Mensa); Ramesh Jain and Klara Nahrstedt will report on the SIGMM status, the SIGMM web status, the status of other conferences and journals (TOMCCAP, MMSJ, …) sponsored by SIGMM, and miscellaneous items. The chairs of MM'08 will also present their conference, contenders for MM'09 will pitch their proposals in response to the call, and Euro ACM SIGMM will be announced. Travel information covers how to get to Augsburg, how to reach the conference sites by Augsburg public transport, conference site maps and directions (excerpted from the conference program), and images of Augsburg. The technical program will consist of plenary sessions and talks with topics of interest in: (a) multimedia content analysis, processing, and retrieval; (b) multimedia networking, sensor networks, and systems support; (c) multimedia tools, end-systems, and applications; and (d) multimedia interfaces. Awards will be given to the best paper and the best student paper, as well as to the best demo, the best art program paper, and the best contributed open-source software.
Versatile Video Coding – the Next-Generation Video Standard of the Joint Video Experts Team
Mile High Video Workshop, Denver, July 31, 2018. Gary J. Sullivan, JVET co-chair. Acknowledgement: presentation prepared with Jens-Rainer Ohm and Mathias Wien, Institute of Communication Engineering, RWTH Aachen University.

1. Introduction

Video coding standardization organisations:
• ISO/IEC MPEG = "Moving Picture Experts Group" (ISO/IEC JTC 1/SC 29/WG 11 = International Organization for Standardization and International Electrotechnical Commission, Joint Technical Committee 1, Subcommittee 29, Working Group 11)
• ITU-T VCEG = "Video Coding Experts Group" (ITU-T SG16/Q6 = International Telecommunication Union – Telecommunication Standardization Sector (ITU-T, a United Nations organization, formerly CCITT), Study Group 16, Working Party 3, Question 6)
• JVT = "Joint Video Team": collaborative team of MPEG and VCEG, responsible for developing AVC (discontinued in 2009)
• JCT-VC = "Joint Collaborative Team on Video Coding": team of MPEG and VCEG, responsible for developing HEVC (established January 2010)
• JVET = "Joint Video Experts Team": responsible for developing VVC (established October 2015), previously called "Joint Video Exploration Team"

History of international video coding standardization
Binary Image Compression Using Neighborhood Coding
Tiago B. A. de Carvalho1, Tsang Ing Ren1, George D.C. Cavalcanti1, Tsang Ing Jyh2; 1Center of Informatics, Federal University of Pernambuco, Brazil; 2Alcatel-Lucent, Bell Labs, Belgium. E-mail: 1{tbac,tir,gdcc}@cin.ufpe.br, [email protected]

ABSTRACT: Compression plays an important role in the storage and transmission of digital images. Binary image compression is of essential value for document imaging. Here, we propose a novel compression technique based on the concept of neighborhood coding, which codes each pixel of an image according to the number of neighbor pixels in different directions. The proposed technique is a lossless compression scheme, which also uses run-length encoding (RLE) and Huffman coding. We evaluated and compared this method to several image file formats, using test images taken from the MPEG-7 core experiment CE-shape and the binary image

Bitmap image formats require a reasonably large amount of computer memory, since a set of bits is needed to represent each pixel, and even a relatively small image can contain millions of pixels. Even a binary image that uses just one bit per pixel can demand a large amount of disk space. There are dozens of bitmap image formats that use compression techniques, e.g., TIFF, GIF, PNG, JBIG and JPEG. The TIFF format can also use different types of compression methods such as CCITT Group 3 or Group 4. We propose a novel lossless binary image compression technique based on the neighborhood coding scheme described in [2]. The codification starts by transforming each pixel of the image into a vector.
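The abstract only hints at how the per-pixel directional coding works. Below is a minimal sketch of one plausible reading of it, assuming four directions and run counts capped at a small maximum; the function, the cap, and the direction set are illustrative choices, not the authors' actual scheme.

```python
import numpy as np

def neighborhood_vector(img, y, x, max_run=7):
    """Count consecutive foreground neighbors of pixel (y, x) in four
    directions (right, left, down, up), capped at max_run. Background
    pixels map to the all-zero vector."""
    if img[y, x] == 0:
        return (0, 0, 0, 0)
    h, w = img.shape
    counts = []
    for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
        run, ny, nx = 0, y + dy, x + dx
        while 0 <= ny < h and 0 <= nx < w and img[ny, nx] == 1 and run < max_run:
            run += 1
            ny += dy
            nx += dx
        counts.append(run)
    return tuple(counts)

# The resulting vectors would then be run-length encoded and Huffman coded,
# as the abstract describes.
img = np.array([[1, 1, 1, 0],
                [0, 1, 0, 0],
                [0, 1, 1, 1]], dtype=np.uint8)
vectors = [neighborhood_vector(img, y, x)
           for y in range(img.shape[0]) for x in range(img.shape[1])]
print(vectors)
```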
A High-Throughput and Compact Hardware Implementation for the Reconstruction Loop in HEVC Intra Encoding
IEICE Transactions on Electronics, Vol. E100-C, No. 6, June 2017. Yibo Fan, Leilei Huang, Zheng Xie, and Xiaoyang Zeng.

SUMMARY: In the newly finalized video coding standard, namely high efficiency video coding (HEVC), new notations like coding unit (CU), prediction unit (PU) and transformation unit (TU) are introduced to improve the coding performance. As a result, the reconstruction loop in intra encoding is heavily burdened to choose the best partitions or modes for them. In order to solve the bottleneck problems in cycle and hardware cost, this paper proposes a high-throughput and compact implementation for such a reconstruction loop. "High-throughput" refers to a fixed throughput of 32 pixels/cycle independent of the TU/PU size (except for 4×4 TUs). "Compact" refers to fully exploring the reusability between the discrete cosine transform (DCT) and the inverse discrete cosine transform

4×4, 8×8, 16×16, 32×32 and 64×64 with 35 possible prediction modes in intra prediction. Although several fast mode decision designs have been proposed, a considerable number of candidate PU modes, PU partitions or TU partitions still needs to be traversed by the reconstruction loop. It can be inferred that the reconstruction loop in intra prediction has become a bottleneck in cycle and hardware cost.
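To put the fixed 32 pixels/cycle figure in perspective, a quick back-of-the-envelope calculation (our own arithmetic, not a number from the paper) gives the reconstruction time per block as a function of its area:

```latex
\[
  \text{cycles}(N \times N) = \frac{N^{2}}{32},
  \qquad
  \text{cycles}(16 \times 16) = \frac{256}{32} = 8,
  \qquad
  \text{cycles}(32 \times 32) = \frac{1024}{32} = 32 .
\]
```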
Security Solutions in MPEG Family of Standards
Touradj Ebrahimi, [email protected]. NET working? Broadcast Networks and their security, 16-17 June 2004, Geneva, Switzerland.

MPEG: Moving Picture Experts Group
• MPEG-1 (1992): MP3, Video CD, first generation set-top box, …
• MPEG-2 (1994): Digital TV, HDTV, DVD, DVB, Professional, …
• MPEG-4 (1998, 99, ongoing): Coding of Audiovisual Objects
• MPEG-7 (2001, ongoing): Description of Multimedia Content
• MPEG-21 (2002, ongoing): Multimedia Framework

MPEG-1 - ISO/IEC 11172:1992
• Coding of moving pictures and associated audio for digital storage media at up to about 1,5 Mbit/s
– Part 1 Systems - Program Stream
– Part 2 Video
– Part 3 Audio
– Part 4 Conformance
– Part 5 Reference software

MPEG-2 - ISO/IEC 13818:1994
• Generic coding of moving pictures and associated audio
– Part 1 Systems - joint with ITU
– Part 2 Video - joint with ITU
– Part 3 Audio
– Part 4 Conformance
– Part 5 Reference software
– Part 6 DSM-CC
– Part 7 AAC - Advanced Audio Coding
– Part 9 RTI - Real Time Interface
– Part 10 Conformance extension - DSM-CC
– Part 11 IPMP on MPEG-2 Systems

MPEG-4 - ISO/IEC 14496:1998
• Coding of audio-visual objects
– Part 1 Systems
– Part 2 Visual
– Part 3 Audio
– Part 4 Conformance
– Part 5 Reference
Dynamic Resource Management of Network-On-Chip Platforms for Multi-Stream Video Processing
Hashan Roshantha Mendis. Doctor of Engineering, University of York, Computer Science, March 2017.

Abstract
This thesis considers resource management in the context of parallel multiple video stream decoding, on multicore/many-core platforms. Such platforms have tens or hundreds of on-chip processing elements which are connected via a Network-on-Chip (NoC). Inefficient task allocation configurations can negatively affect the communication cost and resource contention in the platform, leading to predictability and performance issues. Efficient resource management for large-scale complex workloads is considered a challenging research problem, especially when applications such as video streaming and decoding have dynamic and unpredictable workload characteristics. For these types of applications, runtime heuristic-based task mapping techniques are required. As the application and platform size increase, decentralised resource management techniques are more desirable to overcome the reliability and performance bottlenecks in centralised management. In this work, several heuristic-based runtime resource management techniques, targeting real-time video decoding workloads, are proposed. Firstly, two admission control approaches are proposed: one fully deterministic and highly predictable; the other heuristic-based, which balances predictability and performance. Secondly, a pair of runtime task mapping schemes are presented, which make use of limited known application properties, communication cost and blocking-aware heuristics. Combined with the proposed deterministic admission controller, these techniques can provide strict timing guarantees for hard real-time streams whilst improving resource usage. The third contribution in this thesis is a distributed, bio-inspired, low-overhead task re-allocation technique, which is used to further improve the timeliness and workload distribution of admitted soft real-time streams.
Parameter Optimization in H.265 Rate-Distortion by Single-Frame Semantic Scene Analysis
https://doi.org/10.2352/ISSN.2470-1173.2019.11.IPAS-262 © 2019, Society for Imaging Science and Technology. Ahmed M. Hamza, University of Portsmouth; Abdelrahman Abdelazim, Blackpool and the Fylde College; Djamel Ait-Boudaoud, University of Portsmouth.

Abstract
The H.265/HEVC (High Efficiency Video Coding) codec and its 3D extensions have crucial rate-distortion mechanisms that help determine coding efficiency. We have introduced in this work a new system of Lagrangian parameterization in RDO cost functions, based on semantic cues in an image, starting with the current HEVC formulation of the Lagrangian hyper-parameter heuristics. Two semantic scenery flag algorithms are presented and tested within the Lagrangian formulation as weighted factors. The investigation of whether the semantic gap between the coder and the image content is holding back the block-coding mechanisms as a whole from achieving greater efficiency has yielded a positive answer.

Introduction
… with Q being the quantization value for the source. The relation is derived based on several assumptions about the source probability distribution within the quantization intervals, and the nature of the rate-distortion relations themselves (constantly differentiable throughout, etc.). The value initially used for c in the literature was 0.85. This was modified and made adaptive in subsequent standards including HEVC/H.265. Our work here investigates further adaptations to the rate-distortion Lagrangian by semantic algorithms, made possible by recent frameworks in computer vision.

Adaptive Lambda Hyper-parameters in HEVC
Early versions of the rate-distortion parameters in H.263 were replaced with more sophisticated models in subsequent standards.
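For context, the relation the excerpt picks up mid-sentence is the classic rate-distortion Lagrangian used in mode decision. A compact restatement of that well-known H.263-era model (our own summary, not text from the paper) is:

```latex
% Mode decision picks the mode m minimizing the Lagrangian cost
%   J(m) = D(m) + lambda * R(m),
% and the hyper-parameter lambda is tied to the quantizer Q by
%   lambda = c * Q^2, with c initially set to 0.85 in the early model.
\[
  J(m) = D(m) + \lambda\,R(m), \qquad \lambda = c\,Q^{2}, \quad c \approx 0.85 .
\]
```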
The H.264 Advanced Video Coding (AVC) Standard
Whitepaper: The H.264 Advanced Video Coding (AVC) Standard – What It Means to Web Camera Performance

Introduction
A new generation of webcams is hitting the market that makes video conferencing a more lifelike experience for users, thanks to adoption of the breakthrough H.264 standard. This white paper explains some of the key benefits of H.264 encoding and why cameras with this technology should be on the shopping list of every business.

The Need for Compression
Today, Internet connection rates average in the range of a few megabits per second. While VGA video requires 147 megabits per second (Mbps) of data, full high definition (HD) 1080p video requires almost one gigabit per second of data, as illustrated in Table 1.

Table 1. Display Resolution Format Comparison

Format  Horizontal Pixels  Vertical Lines  Pixels     Megabits per second (Mbps)
QVGA    320                240             76,800     37
VGA     640                480             307,200    147
720p    1280               720             921,600    442
1080p   1920               1080            2,073,600  995

Video Compression Techniques
Digital video streams, especially at high definition (HD) resolution, represent huge amounts of data. In order to achieve real-time HD resolution over typical Internet connection bandwidths, video compression is required. The amount of compression required to transmit 1080p video over a three megabits per second link is 332:1! Video compression techniques use mathematical algorithms to reduce the amount of data needed to transmit or store video.

Lossless Compression
Lossless compression changes how data is stored without resulting in any loss of information. Zip files are losslessly compressed so that when they are unzipped, the original files are recovered.
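The Mbps column of Table 1 and the 332:1 figure follow from simple arithmetic. The short sketch below reproduces them; it assumes 16 bits per pixel, 30 frames per second, and a 3 Mbps link, since those values match the whitepaper's numbers.

```python
def raw_bitrate_mbps(width, height, bits_per_pixel=16, fps=30):
    """Uncompressed video bitrate in megabits per second."""
    return width * height * bits_per_pixel * fps / 1e6

formats = {"QVGA": (320, 240), "VGA": (640, 480),
           "720p": (1280, 720), "1080p": (1920, 1080)}
link_mbps = 3.0  # assumed Internet link capacity

for name, (w, h) in formats.items():
    raw = raw_bitrate_mbps(w, h)
    ratio = raw / link_mbps  # compression ratio needed to fit the link
    print(f"{name:>5}: {raw:6.0f} Mbps raw, {ratio:4.0f}:1 compression needed")
```

Running it gives roughly 37, 147, 442 and 995 Mbps for the four formats, and 332:1 for 1080p over a 3 Mbps link, matching the figures quoted above.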
LOW COMPLEXITY H.264 TO VC-1 TRANSCODER by VIDHYA VIJAYAKUMAR
Presented to the Faculty of the Graduate School of The University of Texas at Arlington in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering. The University of Texas at Arlington, August 2010. Copyright © by Vidhya Vijayakumar 2010. All Rights Reserved.

ACKNOWLEDGEMENTS
As true as it would be with any research effort, this endeavor would not have been possible without the guidance and support of a number of people whom I stand to thank at this juncture. First and foremost, I express my sincere gratitude to my advisor and mentor, Dr. K.R. Rao, who has been the backbone of this whole exercise. I am greatly indebted for all the things that I have learnt from him, academically and otherwise. I thank Dr. Ishfaq Ahmad for being my co-advisor and mentor and for his invaluable guidance and support. I was fortunate to work with Dr. Ahmad as his research assistant on the latest trends in video compression, and it has been an invaluable experience. I thank my mentor, Mr. Vishy Swaminathan, and my team members at Adobe Systems for giving me an opportunity to work in the industry and guiding me during my internship. I would like to thank the other members of my advisory committee, Dr. W. Alan Davis and Dr. William E. Dillon, for reviewing the thesis document and offering insightful comments. I express my gratitude to Dr. Jonathan Bredow and the Electrical Engineering department for purchasing the software required for this thesis and giving me the chance to work on cutting-edge technologies.
Design and Implementation of a Fast HEVC Random Access Video Encoder
ALFREDO SCACCIALEPRE. Master's Degree Project, Stockholm, Sweden, March 2014. XR-EE-KT 2014:003

Contents
1 Introduction
  1.1 Background
  1.2 Thesis work
    1.2.1 Factors to consider
  1.3 The problem
    1.3.1 C65
  1.4 Methods and thesis outline
    1.4.1 Methods
    1.4.2 Objective measurement
    1.4.3 Subjective measurement
    1.4.4 Test sequences
    1.4.5 Thesis outline
    1.4.6 Abbreviations
2 General concepts
  2.1 Color spaces
  2.2 Frames, slices and tiles
    2.2.1 Frames
    2.2.2 Slices and Tiles
  2.3 Predictions
    2.3.1 Intra
    2.3.2 Inter
  2.4 Merge mode
    2.4.1 Skip mode
  2.5 AMVP mode
    2.5.1 I, P and B frames
  2.6 CTU, CU, CTB, CB, PB, and TB
  2.7 Transforms
  2.8 Quantization
  2.9 Coding
  2.10 Reference picture lists
  2.11 Gop structure
  2.12 Temporal scalability
  2.13 Hierarchical B pictures
  2.14 Decoded picture buffer (DPB)
  2.15 Low delay and random access configurations
  2.16 H.264 and its encoders
    2.16.1 H.264
3 Preliminary tests
  3.1 Speed - quality considerations
    3.1.1 Interactive applications
    3.1.2 Entertainment applications
PDF Image JBIG2 Compression and Decompression with JBIG2 Encoding and Decoding SDK Library
JBIG2 is an image compression standard for bi-level images developed by the Joint Bi-level Image Experts Group. It is suitable for both lossless and lossy compression. According to the group's press release, in its lossless mode JBIG2 usually generates files that are one-third to one-fifth the size of Fax Group 4 files and about half the size of files produced by JBIG, the bi-level compression standard previously released by the group. JBIG2 was released as an international standard by the ITU in 2000.

JBIG2 compression
JBIG2 is an international standard for bi-level image compression. By segmenting the image into overlapping and/or non-overlapping areas of text, halftones and general content, compression techniques optimized for each content type are used:

* Text area: The text area is composed of characters that are well suited for symbol-based encoding methods. Usually, each symbol corresponds to a character bitmap, and a sub-image represents a character or piece of text. For each uppercase and lowercase character used on a page, there is usually only one character bitmap (or sub-image) in the symbol dictionary. For example, the dictionary will have an "a" bitmap, an "A" bitmap, a "b" bitmap, and so on.

* Halftone area: Halftone areas are similar to text areas because they consist of patterns arranged in a regular grid.
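The symbol-dictionary idea for text regions can be illustrated with a toy sketch: connected components are stored once in a dictionary, and the page is described as placements of dictionary symbols. This is only a conceptual sketch under simplifying assumptions; it is not the VeryUtils SDK's API, and real JBIG2 adds pattern matching, refinement coding and arithmetic coding on top.

```python
import numpy as np
from scipy import ndimage  # used here only for connected-component labeling

def symbol_dictionary_encode(page):
    """Toy JBIG2-style text-region coding: store each distinct connected
    component (symbol) once, and describe the page as (symbol_id, row, col)
    placements of those symbols."""
    labels, n = ndimage.label(page > 0)
    dictionary = {}   # (shape, bitmap bytes) -> symbol id
    placements = []   # (symbol id, top row, left col)
    for i, sl in enumerate(ndimage.find_objects(labels), start=1):
        bitmap = (labels[sl] == i).astype(np.uint8)
        key = (bitmap.shape, bitmap.tobytes())
        sym_id = dictionary.setdefault(key, len(dictionary))
        placements.append((sym_id, sl[0].start, sl[1].start))
    return dictionary, placements

# A page where the same glyph appears twice stores its bitmap only once.
page = np.zeros((8, 16), dtype=np.uint8)
page[1:4, 1:3] = 1      # first occurrence of a 3x2 "glyph"
page[4:7, 10:12] = 1    # second occurrence of the same glyph
symbols, placed = symbol_dictionary_encode(page)
print(len(symbols), placed)  # 1 symbol, 2 placements
```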
Extended Multiple Feature-Based Classifications for Adaptive Loop Filtering
SIP (2019), vol. 8, e28, page 1 of 11. © The Authors, 2019. This is an Open Access article distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/). doi:10.1017/ATSIP.2019.19

Original paper. Johannes Erfurt, Wang-Q Lim, Heiko Schwarz, Detlev Marpe and Thomas Wiegand.

Recent progress in video compression is seemingly reaching its limits, making it very hard to improve coding efficiency significantly further. The adaptive loop filter (ALF) has been a topic of interest for many years. ALF reaches high coding gains and has motivated many researchers over the past years to further improve the state-of-the-art algorithms. The main idea of ALF is to apply a classification to partition the set of all sample locations into multiple classes. After that, Wiener filters are calculated and applied for each class. Therefore, the performance of ALF essentially relies on how its classification behaves. In this paper, we extensively analyze multiple feature-based classifications for ALF (MCALF) and extend the original MCALF by incorporating sample adaptive offset filtering. Furthermore, we derive new block-based classifications which can be applied in MCALF to reduce its complexity. Experimental results show that our extended MCALF can further improve compression efficiency compared to the original MCALF algorithm.

Keywords: Video compression, Adaptive loop filtering, Multiple classifications, SAO filtering
Received 24 May 2019; Revised 06 October 2019
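The core ALF mechanism described above (classify sample locations, then derive one Wiener filter per class) can be sketched in a few lines. The sketch below is a minimal illustration assuming a simple Laplacian-activity classification and a least-squares filter fit; it is not the paper's MCALF feature set, and the thresholds and filter size are arbitrary illustrative choices.

```python
import numpy as np

def classify(rec):
    """Assign each sample to one of three activity classes using the
    absolute Laplacian of the reconstructed frame (illustrative feature)."""
    p = np.pad(rec.astype(np.float64), 1, mode="edge")
    lap = np.abs(4 * p[1:-1, 1:-1] - p[:-2, 1:-1] - p[2:, 1:-1]
                 - p[1:-1, :-2] - p[1:-1, 2:])
    return np.digitize(lap, [4.0, 16.0])  # classes 0, 1, 2

def train_class_filters(rec, orig, n_classes=3, k=5):
    """Fit one k x k least-squares (Wiener-like) filter per class so that
    filtering the reconstruction approximates the original frame."""
    cls = classify(rec)
    p = np.pad(rec.astype(np.float64), k // 2, mode="edge")
    filters = []
    for c in range(n_classes):
        ys, xs = np.nonzero(cls == c)
        if len(ys) < k * k:                  # too few samples: identity filter
            f = np.zeros(k * k)
            f[k * k // 2] = 1.0
        else:
            A = np.stack([p[y:y + k, x:x + k].ravel() for y, x in zip(ys, xs)])
            f, *_ = np.linalg.lstsq(A, orig[ys, xs].astype(np.float64), rcond=None)
        filters.append(f.reshape(k, k))
    return filters
```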