Hardware Implementation of HEVC Inverse Transform in 45nm CMOS

Total Pages: 16

File Type: PDF, Size: 1020 KB

Hardware Implementation of HEVC Inverse Transform in 45nm CMOS

IEEE Latin American Symposium on Circuits and Systems (LASCAS) 2020; San José, Costa Rica; February 2020

Richard Calusdian and Aaron Stillmaker
Electrical and Computer Engineering Department, California State University, Fresno, CA, USA
[email protected], [email protected]

Abstract—The High Efficiency Video Coding (HEVC) standard relies on the use of the inverse discrete cosine transform (IDCT) to perform video decompression. HEVC has increased the complexity of the decoder, and the inverse transform lends itself well to hardware acceleration due to repeated addition and multiplication on unit blocks. A hardware implementation of the inverse quantization and inverse transform, compliant with the HEVC standard, is presented. The design targets the 4×4 inverse quantization and transform, with a synthesis and place & route flow performed using the Nangate FreePDK45 Open Cell Library. The operational frequency of the presented design supports 4K video at up to 30 frames/sec. The core area of the presented design takes up 14 664 µm² and it can operate at a maximum frequency of 367 MHz.

Index Terms—HEVC, Inverse Discrete Cosine Transform, IDCT

I. INTRODUCTION

The same group that released the H.264 video compression standard [1] has also developed a follow-up, the High Efficiency Video Coding (HEVC) standard, also known as the H.265/MPEG-HEVC standard or simply H.265 [2]. This latest standard builds on the framework of H.264 and expands and extends the tools and features of the standard. The standard improved upon the existing coding efficiency while specifically addressing the needs of the proliferation of High Definition (HD) and Ultra High Definition (UHD) video, while also adding support for parallel architectures [3].

In order to process a video sequence, each frame of the sequence is partitioned into non-overlapping square blocks of pixels. These square blocks can then be sub-divided further down into smaller coding blocks for eventual spatial or temporal coding. The encoder algorithm will encode large blocks where possible to reduce the bit rate, and leave the small blocks only where needed to retain detail. The H.264 standard calls its primary block a macroblock, of size 16×16 pixels, whereas HEVC uses the term coding tree unit (CTU), which can be as small as 16×16 and as large as 64×64. An advantage of the larger block sizes of 32×32 or 64×64 is that for large flat areas, the HEVC standard can offer better efficiency as compared to the smaller 16×16 macroblock of H.264 [3].

The next step is to subtract the predicted block from the current block, producing a residual, or difference, signal. It is this residual signal which is then transformed into the frequency domain using an integer approximation to the discrete cosine transform (DCT). The transform coefficients are then quantized according to a particular mode to eliminate high frequency data. The resulting data is then entropy encoded and transmitted for storage, or to a decoder which performs inverse operations to reconstruct the video. The HEVC decoder uses the same steps as the encoder, but in inverse, to undo the encoding. The decoder undoes quantization using the inverse quantization and undoes the forward transformation using the inverse discrete cosine transformation (IDCT).

A. Transform

HEVC defines a 2D transform for sizes 4×4, 8×8, 16×16, and 32×32. The transform is used to change the representation of the residual signal from the spatial domain into the frequency domain. In HEVC this is accomplished by using a finite approximation to the DCT. HEVC explicitly defines the matrix values of the DCT as integer values to make the math more amenable to digital systems and to produce consistent results. Also, the integer values avoid encoder/decoder mismatches due to differing precision of representations of the DCT matrix values. The equations for the HEVC DCT and IDCT for an input residual block U and transform matrix D are shown in (1) and (2).

    DCT(U) = D × U × D^T = Y    (1)

    IDCT(Y) = D^T × Y × D = U    (2)

In actual implementation the inverse transform is computed by separating the equation in (2) into two one-dimensional transformations in succession.
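As a rough illustration of (2) and the separable evaluation just described, the following sketch applies the 4-point integer matrix D (defined later in (3)) in two 1-D passes. It is a minimal model under stated assumptions: the bit-exact HEVC inverse transform also applies intermediate shifts and clipping between stages, which are omitted here.

    # Minimal sketch of the separable 4x4 inverse transform of equation (2).
    # The matrix values are the HEVC 4-point transform matrix given in (3);
    # the standard's inter-stage shifting/clipping is intentionally omitted.
    import numpy as np

    D = np.array([[64,  64,  64,  64],
                  [83,  36, -36, -83],
                  [64, -64, -64,  64],
                  [36, -83,  83, -36]], dtype=np.int64)

    def idct_2d(Y):
        # Two 1-D passes: columns first (D^T @ Y), then rows (... @ D),
        # mirroring IDCT(Y) = D^T * Y * D from equation (2).
        cols = D.T @ Y
        return cols @ D

    # A block holding only a DC coefficient reconstructs to a flat residual,
    # matching the flat-block discussion in the text.
    Y = np.zeros((4, 4), dtype=np.int64)
    Y[0, 0] = 1024
    print(idct_2d(Y))   # constant 4x4 block (up to the omitted scaling)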
Once the DCT is performed, the coefficients of the resulting matrix represent increasing frequency components of the image, starting with a DC coefficient in the upper-left corner and increasing as we move down and to either side, with the highest frequency component in the lower-right corner. Consider the case of a flat residual block, such that the residual block has all entries of the same value. After transformation, such a block will yield a set of coefficients with only one non-zero element. The interpretation of this follows from the fact that the coefficients are arranged with the lowest frequency term positioned in the upper-left corner, and increasing frequencies placed below and to the right as we move away from this term. The highest frequency is thus positioned at the lower-right corner.

In the resulting transformed block, the upper-left corner term is called the DC coefficient and represents the average value of the residual block of pixels. It tends to be the case that video has most of its energy in the lower frequency components, and interestingly, the human eye is not sensitive to high frequency contrasts. Both of these facts allow video to be compressed by discarding the higher frequency content with little loss of detail as perceived by the human eye [4].

The previous standard, H.264, exclusively relies on the DCT, but in the case of HEVC, the use of the DCT is augmented by use of the discrete sine transform (DST) for 4×4 luma intra-prediction blocks [2]. It was found that using the DST on 4×4 luma intra-prediction blocks improved the bitstream compression by about 1% [5].

B. Quantization

Once transformed, the resulting coefficients are quantized by dividing them by an integer and rounding down. The divisor is called the quantization step and is derived from the encoded parameter, the quantization parameter QP. As a result of the division and rounding, some coefficients will be rounded down to zero and discarded. For larger values of the quantization step, more zeroes will be produced in the resulting data. Typically, for natural scenes, the higher frequency components are smaller in magnitude, and after division these values will round to zero. The larger the value of QP, the more coefficients will result in zero values and be discarded, and thus more compression will be achieved, at the cost of picture detail in the reconstructed picture. This process of quantization is a lossy step since data is being thrown away. Provided only the higher frequency components are removed, the resulting picture will appear, to the observer, to be as detailed as the original.
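To make the QP-to-step relationship concrete, here is a minimal sketch assuming the commonly cited HEVC relation Qstep ≈ 2^((QP−4)/6); the standard's bit-exact integer scaling tables and rounding offsets are omitted. It illustrates how a larger QP collapses small high-frequency coefficients to zero:

    # Minimal quantization sketch, assuming Qstep ~ 2^((QP-4)/6).
    def q_step(qp: int) -> float:
        return 2.0 ** ((qp - 4) / 6.0)

    def quantize(coeffs, qp):
        step = q_step(qp)
        # int() truncates toward zero here, standing in for the
        # round-down described in the text; small (mostly high-frequency)
        # coefficients collapse to zero, which is where the loss occurs.
        return [int(c / step) for c in coeffs]

    def dequantize(levels, qp):
        # The decoder can only multiply back by the step; the discarded
        # fractional parts are gone for good.
        step = q_step(qp)
        return [round(l * step) for l in levels]

    coeffs = [4194304, 312, -41, 7]     # one large DC term, small AC terms
    levels = quantize(coeffs, qp=32)
    print(levels)                       # small AC terms become 0 or +/-1
    print(dequantize(levels, qp=32))    # approximate reconstruction only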
II. RELATED WORK

Ma et al. [6] exploit the (anti-)symmetry property of the DCT matrix and continue the decomposition into multiple factors using sparse matrices to reduce the number of multiplications and additions when compared to the direct method. The design was done using combinational circuits only, thus no clock frequency is provided. The design also appears to use two IDCT blocks, which will impact circuit area and power.

Conceicao et al. [7] propose a fast 4-point IDCT that uses statistical information of the transformed residual data, where quality is maintained per the PSNR measurement along with a small but desirable decrease in bit rate. Their approach reduced circuit complexity and achieved high pixel throughput.

[Text missing from this excerpt] [...] of transform size. This is accomplished by having a flexible architecture that shares data in parallel across column and row processing units. These processing units are equally used for both the smallest and largest HEVC supported block sizes.

III. DESIGN

The design of the 4×4 inverse quantization and inverse transform consists of three unique design entries. The first is the RTL design, done in Verilog and then verified with ModelSim and MATLAB. The second phase consists of using Synopsys Design Compiler to synthesize the design and produce files for use in the last phase. Finally, the placement and routing uses files from the synthesis step to produce a placed and routed design. Cadence Innovus was used for placement and routing.

A. IDCT

The IDCT module takes inverse quantization data, performs the 2D transform as two separate 1D transforms, and outputs residual data.

Consider the DCT transform matrix shown in (3) and defined by the HEVC standard. This matrix and its inverse are used in the calculation of the 4×4 DCT and 4×4 IDCT. For notation purposes we will denote this matrix D.

        [ 64   64   64   64 ]
    D = [ 83   36  -36  -83 ]    (3)
        [ 64  -64  -64   64 ]
        [ 36  -83   83  -36 ]

The four-point 1D inverse transform of this matrix with an input vector x is given by D^T × x. Performing a direct transformation, this will require 16 multiplications and 12 additions. If, instead, we decompose the 1D transform of D and x using the even-odd algorithm as proposed by Budagavi et al. [11], then some savings can be realized. The even-odd decomposition will decompose the matrix into an even and an odd part, with the output y representing the 1D inverse transform result.

The HEVC standard also provides scaling of the output data after each 1D inverse transform stage. Scaling is also done during the forward transform, performed as part of the encoding process. The purpose of this scaling is to both maintain the [...]
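A minimal sketch of the even-odd (partial butterfly) idea for the 4-point inverse transform follows. The grouping below, shared products in the even part and a butterfly combine at the end, is one standard realization, shown without HEVC's inter-stage scaling; the operation counts in the comments refer to this sketch, not necessarily the paper's exact datapath.

    # Even-odd (partial butterfly) 4-point inverse transform sketch.
    # The even part reuses two products and the odd part uses four,
    # for 6 multiplications and 8 additions in this sketch, versus
    # 16 multiplications and 12 additions for the direct D^T * x.
    def inv4_even_odd(x):
        x0, x1, x2, x3 = x
        p0, p2 = 64 * x0, 64 * x2      # even-part products, shared below
        e0, e1 = p0 + p2, p0 - p2
        o0 = 83 * x1 + 36 * x3         # odd-part products
        o1 = 36 * x1 - 83 * x3
        # Butterfly combine: y = D^T * x for the matrix D in (3).
        return [e0 + o0, e1 + o1, e1 - o1, e0 - o0]

    print(inv4_even_odd([1, 0, 0, 0]))   # -> [64, 64, 64, 64], row 0 of D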
Recommended publications
  • A High-Throughput and Compact Hardware Implementation for the Reconstruction Loop in HEVC Intra Encoding
    IEICE TRANS. ELECTRON., VOL.E100-C, NO.6, JUNE 2017, p. 643, PAPER. A High-Throughput and Compact Hardware Implementation for the Reconstruction Loop in HEVC Intra Encoding. Yibo FAN (Member), Leilei HUANG, Zheng XIE, and Xiaoyang ZENG (Nonmembers). SUMMARY: In the newly finalized video coding standard, namely high efficiency video coding (HEVC), new notations like coding unit (CU), prediction unit (PU) and transformation unit (TU) are introduced to improve the coding performance. As a result, the reconstruction loop in intra encoding is heavily burdened to choose the best partitions or modes for them. In order to solve the bottleneck problems in cycle and hardware cost, this paper proposes a high-throughput and compact implementation for such a reconstruction loop. By "high-throughput", it refers to a fixed throughput of 32 pixels/cycle independent of the TU/PU size (except for 4×4 TUs). By "compact", it refers to fully exploring the reusability between discrete cosine transform (DCT) and inverse discrete cosine transform (IDCT) [...] From the introduction: [...] 4×4, 8×8, 16×16, 32×32 and 64×64, with 35 possible prediction modes in intra prediction. Although several fast mode decision designs have been proposed, still a considerable amount of candidate PU modes, PU partitions or TU partitions need to be traversed by the reconstruction loop. It can be inferred that the reconstruction loop in intra prediction has become a bottleneck in cycle and hardware cost.
  • Dynamic Resource Management of Network-On-Chip Platforms for Multi-Stream Video Processing
    DYNAMIC RESOURCE MANAGEMENT OF NETWORK-ON-CHIP PLATFORMS FOR MULTI-STREAM VIDEO PROCESSING. Hashan Roshantha Mendis, Doctor of Engineering, University of York, Computer Science, March 2017. Abstract: This thesis considers resource management in the context of parallel multiple video stream decoding, on multicore/many-core platforms. Such platforms have tens or hundreds of on-chip processing elements which are connected via a Network-on-Chip (NoC). Inefficient task allocation configurations can negatively affect the communication cost and resource contention in the platform, leading to predictability and performance issues. Efficient resource management for large-scale complex workloads is considered a challenging research problem, especially when applications such as video streaming and decoding have dynamic and unpredictable workload characteristics. For these types of applications, runtime heuristic-based task mapping techniques are required. As the application and platform size increase, decentralised resource management techniques are more desirable to overcome the reliability and performance bottlenecks of centralised management. In this work, several heuristic-based runtime resource management techniques targeting real-time video decoding workloads are proposed. Firstly, two admission control approaches are proposed: one fully deterministic and highly predictable, the other heuristic-based, balancing predictability and performance. Secondly, a pair of runtime task mapping schemes are presented, which make use of limited known application properties, communication cost and blocking-aware heuristics. Combined with the proposed deterministic admission controller, these techniques can provide strict timing guarantees for hard real-time streams whilst improving resource usage. The third contribution in this thesis is a distributed, bio-inspired, low-overhead, task re-allocation technique, which is used to further improve the timeliness and workload distribution of admitted soft real-time streams.
  • Parameter Optimization in H.265 Rate-Distortion by Single-Frame Semantic Scene Analysis
    https://doi.org/10.2352/ISSN.2470-1173.2019.11.IPAS-262 © 2019, Society for Imaging Science and Technology. Parameter optimization in H.265 Rate-Distortion by single-frame semantic scene analysis. Ahmed M. Hamza, University of Portsmouth; Abdelrahman Abdelazim, Blackpool and the Fylde College; Djamel Ait-Boudaoud, University of Portsmouth. Abstract: The H.265/HEVC (High Efficiency Video Coding) codec and its 3D extensions have crucial rate-distortion mechanisms that help determine coding efficiency. We have introduced in this work a new system of Lagrangian parameterization in RDO cost functions, based on semantic cues in an image, starting with the current HEVC formulation of the Lagrangian hyper-parameter heuristics. Two semantic scenery flag algorithms are presented and tested within the Lagrangian formulation as weighted factors. The investigation of whether the semantic gap between the coder and the image content is holding back the block-coding mechanisms as a whole from achieving greater efficiency has yielded a positive answer. Introduction: [...] with Q being the quantization value for the source. The relation is derived based on several assumptions about the source probability distribution within the quantization intervals, and the nature of the rate-distortion relations themselves (constantly differentiable throughout, etc.). The value initially used for c in the literature was 0.85. This was modified and made adaptive in subsequent standards including HEVC/H.265. Our work here investigates further adaptations to the rate-distortion Lagrangian by semantic algorithms, made possible by recent frameworks in computer vision. Adaptive Lambda Hyper-parameters in HEVC: Early versions of the rate-distortion parameters in H.263 were replaced with more sophisticated models in subsequent standards.
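    For context on the lambda heuristics discussed above: in rate-distortion
    optimization the encoder picks the candidate minimizing J = D + lambda * R.
    The sketch below assumes the classic lambda = c * Q^2 relation with the
    c = 0.85 value the text cites for H.263-era models; HEVC's adaptive lambda
    tables are more elaborate, so this is an illustration only.

    # Lagrangian mode selection sketch, assuming lambda = c * Q^2, c = 0.85.
    def rd_cost(distortion: float, rate_bits: float, q: float,
                c: float = 0.85) -> float:
        lam = c * q * q
        return distortion + lam * rate_bits

    # Candidate modes as (distortion, rate) pairs; the encoder keeps the
    # minimum-cost candidate. Values here are made up for illustration.
    candidates = [(1200.0, 18.0), (900.0, 42.0), (640.0, 95.0)]
    best = min(candidates, key=lambda dr: rd_cost(*dr, q=12.0))
    print(best)   # cheap-rate candidate wins at this Q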
  • Design and Implementation of a Fast HEVC Random Access Video Encoder
    Design and Implementation of a Fast HEVC Random Access Video Encoder. ALFREDO SCACCIALEPRE. Master's Degree Project, Stockholm, Sweden, March 2014, XR-EE-KT 2014:003.
    Contents:
    1 Introduction: 1.1 Background; 1.2 Thesis work (1.2.1 Factors to consider); 1.3 The problem (1.3.1 C65); 1.4 Methods and thesis outline (1.4.1 Methods; 1.4.2 Objective measurement; 1.4.3 Subjective measurement; 1.4.4 Test sequences; 1.4.5 Thesis outline; 1.4.6 Abbreviations)
    2 General concepts: 2.1 Color spaces; 2.2 Frames, slices and tiles (2.2.1 Frames; 2.2.2 Slices and tiles); 2.3 Predictions (2.3.1 Intra; 2.3.2 Inter); 2.4 Merge mode (2.4.1 Skip mode); 2.5 AMVP mode (2.5.1 I, P and B frames); 2.6 CTU, CU, CTB, CB, PB, and TB; 2.7 Transforms; 2.8 Quantization; 2.9 Coding; 2.10 Reference picture lists; 2.11 GOP structure; 2.12 Temporal scalability; 2.13 Hierarchical B pictures; 2.14 Decoded picture buffer (DPB); 2.15 Low delay and random access configurations; 2.16 H.264 and its encoders (2.16.1 H.264)
    3 Preliminary tests: 3.1 Speed-quality considerations (3.1.1 Interactive applications; 3.1.2 Entertainment applications) [...]
  • Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding
    RESEARCH ARTICLE: Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding. Yuan Gao, Pengyu Liu, Yueying Wu, Kebin Jia, Guandong Gao. 1 Beijing Advanced Innovation Center for Future Internet Technology, Beijing University of Technology, Beijing, China; 2 Beijing Laboratory of Advanced Information Networks, Beijing, China; 3 College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing, China. Yuan Gao and Pengyu Liu contributed equally to this work. * [email protected]. Abstract: In high efficiency video coding (HEVC), the coding tree contributes to excellent compression performance. However, the coding tree brings extremely high computational complexity. Innovative works for improving the coding tree to further reduce encoding time are stated in this paper. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting CU distribution. Eventually, a CU coding tree probability update is proposed, aiming to address probabilistic model distortion problems caused by CC. Experimental results show that the proposed low complexity CU coding tree mechanism significantly reduces encoding time by 27% for lossy coding and 42% for visually lossless coding and lossless coding. Citation: Gao Y, Liu P, Wu Y, Jia K, Gao G (2016) Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding. PLoS ONE 11(3): e0151689. doi:10.1371/journal.pone.0151689. Editor: You Yang, Huazhong University of Science and Technology, CHINA. Received: September 23, 2015.
  • The Open-Source Turing Codec: Towards Fast, Flexible and Parallel HEVC Encoding
    THE OPEN-SOURCE TURING CODEC: TOWARDS FAST, FLEXIBLE AND PARALLEL HEVC ENCODING. S. G. Blasi1, M. Naccari1, R. Weerakkody1, J. Funnell2 and M. Mrak1. 1 BBC, Research and Development Department, UK; 2 Parabola Research, UK. ABSTRACT: The Turing codec is an open-source software codec compliant with the HEVC standard and specifically designed for speed, flexibility, parallelisation and high coding efficiency. The Turing codec was designed starting from a completely novel backbone to comply with the Main and Main10 profiles of HEVC, and has many desirable features for practical codecs such as very low memory consumption, advanced parallelisation schemes and fast encoding algorithms. This paper presents a technical description of the Turing codec as well as a comparison of its performance with other similar encoders. The codec is capable of cutting the encoding complexity by an average 87% with respect to the HEVC reference implementation, at an average coding penalty of 11% higher bitrate at the same peak signal-to-noise ratio level. INTRODUCTION: The ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) combined their expertise to form the Joint Collaborative Team on Video Coding (JCT-VC), and finalised the first version of the H.265/High Efficiency Video Coding (HEVC) standard (1) in January 2013. HEVC provides the same perceived video quality as its predecessor H.264/Advanced Video Coding (AVC) at considerably lower bitrates (the MPEG final verification tests report average bitrate reductions of up to 60% (2) when coding Ultra High Definition, UHD, content). HEVC specifies larger block sizes for prediction and transform, an additional in-loop filter, and new coding modes for intra- and inter-prediction.
  • Design and Analysis of Video Compression Technique Using HEVC Intra-Frame Coding
    ISSN: 2277-9655, IJESRT INTERNATIONAL JOURNAL OF ENGINEERING SCIENCES & RESEARCH TECHNOLOGY [Reddy et al., 6(6): June, 2017]. DESIGN AND ANALYSIS OF VIDEO COMPRESSION TECHNIQUE USING HEVC INTRA-FRAME CODING. K. Sripal Reddy, Boppidi Srikanth & C. Loknath Reddy, Vardhaman College of Engineering. DOI: 10.5281/zenodo.817839. ABSTRACT: High Efficiency Video Coding (HEVC) is currently being prepared as the newest video coding standard of the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group. The main goal of the HEVC standardization effort is to enable significantly improved compression performance relative to existing standards, in the range of 50% bit-rate reduction for equal perceptual video quality. Intra-frame coding is essential in both still image and video coding. In the block-based coding scheme, the spatial redundancy can be removed by utilizing the correlation between the current pixel and its neighboring reconstructed pixels. From the differential pulse code modulation (DPCM) in the early video coding standard H.261 to the angular intra prediction in the latest H.265/HEVC [1], different intra prediction schemes are employed. Almost without exception, linear filters are used in these prediction schemes. This paper provides an overview of the intra-frame coding techniques of the HEVC standard. KEYWORDS: High Efficiency Video Coding (HEVC), intra-frame coding, angular intra prediction. I. INTRODUCTION: Along with the development of multimedia and hardware technologies, the demand for high-resolution video services with better quality has been increasing. These days, the demand for ultra-high definition (UHD) video services is emerging, and its resolution is higher than that of full high definition (FHD) by a factor of 4 or more.
  • Parallel Deblocking Filter Based on Modified Order of Accessing the Coding Tree Units for HEVC on Multicore Processor
    KSII TRANSACTIONS ON INTERNET AND INFORMATION SYSTEMS VOL. 11, NO. 3, Mar. 2017 1684 Copyright ⓒ2017 KSII Parallel Deblocking Filter Based on Modified Order of Accessing the Coding Tree Units for HEVC on Multicore Processor Haiwei Lei1, Wenyi Liu1 and Anhong Wang2 1 Key Laboratory of Instrumentation Science & Dynamic Measurement, Ministry of Education, North University of China Taiyuan, 030051- China [e-mail: [email protected]; [email protected]] 2 School of Electronic Information Engineering, Taiyuan University of Science and Technology Taiyuan, 030024 - China [e-mail: [email protected]] *Corresponding author: Wenyi Liu; Anhong Wang Received April 11, 2016; revised October 18, 2016; revised December 12, 2016; accepted January 15, 2017; published March 31, 2017 Abstract The deblocking filter (DF) reduces blocking artifacts in encoded video sequences, and thereby significantly improves the subjective and objective quality of videos. Statistics show that the DF accounts for 5–18% of the total decoding time in high-efficiency video coding. Therefore, speeding up the DF will improve codec performance, especially for the decoder. In view of the rapid development of multicore technology, we propose a parallel DF scheme based on a modified order of accessing the coding tree units (CTUs) by analyzing the data dependencies between adjacent CTUs. This enables the DF to run in parallel, providing accelerated performance and more flexibility in the degree of parallelism, as well as finer parallel granularity. We additionally solve the problems of variable privatization and thread synchronization in the parallelization of the DF. Finally, the DF module is parallelized based on the HM16.1 reference software using OpenMP technology.
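    The modified CTU access order described above exploits the fact that
    filtering a CTU depends only on already-processed neighbours. As a generic
    illustration, not the paper's exact reordering: if each CTU depends on its
    left and top neighbours, all CTUs on the same anti-diagonal are mutually
    independent and can be filtered in parallel, as the sketch below shows.

    # Wavefront-style CTU traversal sketch: each yielded wave is a set of
    # CTU coordinates whose left/top dependencies were satisfied by the
    # previous waves, so the wave can be dispatched to parallel threads.
    def wavefront_order(rows: int, cols: int):
        for d in range(rows + cols - 1):            # anti-diagonal index
            wave = [(r, d - r) for r in range(rows) if 0 <= d - r < cols]
            yield wave

    for wave in wavefront_order(3, 4):
        print(wave)   # e.g. [(0, 2), (1, 1), (2, 0)] runs concurrently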
  • Two-Dimensional Audio Compression Method Using Video Coding Schemes
    electronics Article Two-Dimensional Audio Compression Method Using Video Coding Schemes Seonjae Kim 1 , Dongsan Jun 1,*, Byung-Gyu Kim 2,*, Seungkwon Beack 3, Misuk Lee 3 and Taejin Lee 3 1 Department of Convergence IT Engineering, Kyungnam University, Changwon 51767, Korea; [email protected] 2 Department of IT Engineering, Sookmyung Women’s University, Seoul 04310, Korea 3 Electronics and Telecommunications Research Institute (ETRI), Daejeon 34129, Korea; [email protected] (S.B.); [email protected] (M.L.); [email protected] (T.L.) * Correspondence: [email protected] (D.J.); [email protected] (B.-G.K.) Abstract: As video compression is one of the core technologies that enables seamless media streaming within the available network bandwidth, it is crucial to employ media codecs to support powerful coding performance and higher visual quality. Versatile Video Coding (VVC) is the latest video coding standard developed by the Joint Video Experts Team (JVET) that can compress original data hundreds of times in the image or video; the latest audio coding standard, Unified Speech and Audio Coding (USAC), achieves a compression rate of about 20 times for audio or speech data. In this paper, we propose a pre-processing method to generate a two-dimensional (2D) audio signal as an input of a VVC encoder, and investigate the applicability to 2D audio compression using the video coding scheme. To evaluate the coding performance, we measure both signal-to-noise ratio (SNR) and bits per sample (bps). The experimental result shows the possibility of researching 2D audio encoding using video coding schemes.
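    The core pre-processing step described above maps a 1-D audio signal onto
    a 2-D plane that a video encoder can consume. The sketch below shows one
    plausible row-major mapping with a fixed width and zero padding; the
    width, padding, and sample layout are illustrative assumptions, not the
    paper's exact method.

    # Reshape 1-D PCM audio into a 2-D "picture" for a video encoder.
    import numpy as np

    def audio_to_2d(samples: np.ndarray, width: int = 256) -> np.ndarray:
        n = len(samples)
        height = -(-n // width)                  # ceiling division
        padded = np.zeros(height * width, dtype=samples.dtype)
        padded[:n] = samples                     # zero-pad the last row
        return padded.reshape(height, width)     # one sample per "pixel"

    pcm = np.random.randint(-32768, 32767, size=48000, dtype=np.int16)
    frame = audio_to_2d(pcm)
    print(frame.shape)                           # (188, 256) for 48000 samples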
  • Proceedings of the 2018 Symposium on Information Theory and Signal Processing in the Benelux, May 31 - June 1, 2018
    PROCEEDINGS of the 2018 Symposium on Information Theory and Signal Processing in the Benelux, May 31 - June 1, 2018, University of Twente, Enschede, The Netherlands. https://www.utwente.nl/en/eemcs/sitb2018/ Luuk Spreeuwers & Jasper Goseling (Editors). ISBN 978-90-365-4570-9. The symposium is organized under the auspices of Werkgemeenschap Informatie- en Communicatietheorie (WIC) & IEEE Benelux Signal Processing Chapter and supported by Gauss Foundation (sponsoring best student paper award), IEEE Benelux Information Theory Chapter, IEEE Benelux Signal Processing Chapter, Werkgemeenschap Informatie- en Communicatietheorie (WIC). Previous Symposia: 1. 1980 Zoetermeer, The Netherlands, Delft University of Technology 2. 1981 Zoetermeer, The Netherlands, Delft University of Technology 3. 1982 Zoetermeer, The Netherlands, Delft University of Technology 4. 1983 Haasrode, Belgium ISBN 90-334-0690-X 5. 1984 Aalten, The Netherlands ISBN 90-71048-01-2 6. 1985 Mierlo, The Netherlands ISBN 90-71048-02-0 7. 1986 Noordwijkerhout, The Netherlands ISBN 90-6275-272-1 8. 1987 Deventer, The Netherlands ISBN 90-71048-03-9 9. 1988 Mierlo, The Netherlands ISBN 90-71048-04-7 10. 1989 Houthalen, Belgium ISBN 90-71048-05-5 11. 1990 Noordwijkerhout, The Netherlands ISBN 90-71048-06-3 12. 1991 Veldhoven, The Netherlands ISBN 90-71048-07-1 13. 1992 Enschede, The Netherlands ISBN 90-71048-08-X 14. 1993 Veldhoven, The Netherlands ISBN 90-71048-09-8 15. 1994 Louvain-la-Neuve, Belgium ISBN 90-71048-10-1 16. 1995 Nieuwerkerk a/d IJssel, The Netherlands ISBN 90-71048-11-X 17. 1996 Enschede, The Netherlands ISBN 90-365-0812-6 18. 1997 Veldhoven, The Netherlands ISBN 90-71048-12-8 19. [...]
  • An Early CU Partition Mode Decision Algorithm in VVC Based on Variogram for Virtual Reality 360 Degree Videos
    An Early CU Partition Mode Decision Algorithm in VVC Based on Variogram for Virtual Reality 360 Degree Videos. Mengmeng Zhang* ([email protected]), Beijing Polytechnic College (1) and North China University of Technology (2); Yan Hou ([email protected]), North China University of Technology; Zhi Liu* ([email protected]), North China University of Technology. Keywords: VVC, virtual reality, empirical variogram, intra coding, Mahalanobis distance. Posted Date: May 10th, 2021. DOI: https://doi.org/10.21203/rs.3.rs-481775/v1. License: This work is licensed under a Creative Commons Attribution 4.0 International License. Abstract: 360-degree videos have become increasingly popular with the development of virtual reality (VR) technology. These videos are converted to a 2D image plane format before being encoded with standard encoders. To improve coding efficiency, a new-generation video coding standard known as Versatile Video Coding (VVC) has been launched. However, the computational complexity of VVC makes it time-consuming to compress 360-degree videos of high resolution. The diversity of CU partitioning modes in VVC greatly increases the computational complexity. Through statistical experiments on ERP videos, it is found that the probability of using horizontal partitioning for such videos is greater than that of vertical partitioning.
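    An empirical variogram measures how quickly sample values decorrelate
    with distance along a given direction, which is the kind of statistic the
    abstract above uses to reason about horizontal versus vertical CU
    partitioning. A minimal sketch follows, with an illustrative reading of
    the result rather than the paper's actual decision rule or thresholds.

    # Directional empirical variogram sketch for one block.
    import numpy as np

    def directional_variogram(block: np.ndarray, lag: int = 1):
        # gamma(h) = half the mean squared difference of samples h apart.
        horiz = np.mean((block[:, lag:] - block[:, :-lag]) ** 2) / 2.0
        vert = np.mean((block[lag:, :] - block[:-lag, :]) ** 2) / 2.0
        return horiz, vert

    block = np.tile(np.arange(8), (8, 1)).astype(float)  # horizontal gradient
    h, v = directional_variogram(block)
    # v == 0 here: rows are identical, so vertical neighbours match exactly,
    # the kind of directional statistic a partition-mode heuristic could use.
    print(h, v)   # -> 0.5 0.0 for this synthetic block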
  • Entropy Coding in HEVC
    Entropy Coding in HEVC. Citation: Sze, Vivienne, and Detlev Marpe. "Entropy Coding in HEVC." High Efficiency Video Coding (HEVC) (2014): 209-274. As published: http://dx.doi.org/10.1007/978-3-319-06895-4_8. Publisher: Springer-Verlag. Version: Author's final manuscript. Citable link: http://hdl.handle.net/1721.1/100315. Terms of use: Creative Commons Attribution-Noncommercial-Share Alike, http://creativecommons.org/licenses/by-nc-sa/4.0/. Abstract: Context-Based Adaptive Binary Arithmetic Coding (CABAC) is a method of entropy coding first introduced in H.264/AVC and now used in the latest High Efficiency Video Coding (HEVC) standard. While it provides high coding efficiency, the data dependencies in H.264/AVC CABAC make it challenging to parallelize and thus limit its throughput. Accordingly, during the standardization of entropy coding for HEVC, both aspects of coding efficiency and throughput were considered. This chapter describes the functionality and design methodology behind CABAC entropy coding in HEVC. 1 Introduction: Context-Based Adaptive Binary Arithmetic Coding (CABAC) [51] is a form of entropy coding used in H.264/AVC [3] and also in HEVC [5]. Entropy coding is a lossless compression scheme that uses statistical properties to compress data such that the number of bits used to represent the data is logarithmically proportional to the probability of the data. For instance, when compressing a string of characters, frequently used characters are each represented by a few bits, while infrequently used characters are each represented by many bits.
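    The logarithmic relationship stated above can be made concrete: an ideal
    entropy code spends about -log2(p) bits on a symbol of probability p, so
    frequent symbols get short codes. A small sketch of this principle
    follows; CABAC itself is an adaptive binary arithmetic coder, not a
    static per-symbol code, so this only illustrates the underlying idea.

    # Ideal per-symbol code lengths from empirical symbol frequencies.
    import math
    from collections import Counter

    text = "abracadabra"
    counts = Counter(text)
    total = len(text)
    for sym, cnt in counts.most_common():
        p = cnt / total
        # The frequent 'a' needs ~1.1 bits; rare symbols need ~3.5 bits.
        print(f"{sym!r}: p={p:.2f}, ideal length={-math.log2(p):.2f} bits")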