Deblocking Scheme for JPEG-Coded Images Using Sparse Representation and All Phase Biorthogonal Transform

Total Pages: 16

File Type: PDF, Size: 1020 KB

Journal of Communications, Vol. 11, No. 12, December 2016

Liping Wang, Chengyou Wang, and Xiao Zhou
School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, China
Email: [email protected]; {wangchengyou, zhouxiao}@sdu.edu.cn

Manuscript received August 28, 2016; revised December 20, 2016. This work was supported by the National Natural Science Foundation of China (No. 61201371), the promotive research fund for excellent young and middle-aged scientists of Shandong Province, China (No. BS2013DX022), and the Natural Science Foundation of Shandong Province, China (No. ZR2015PF004). Corresponding author email: [email protected]. doi: 10.12720/jcm.11.12.1095-1101

Abstract—A major drawback of compressed images is that they exhibit severe blocking artifacts at very low bit rates because of the Block-Based Discrete Cosine Transform (BDCT). In this paper, a novel deblocking scheme using sparse representation is proposed. The All Phase Biorthogonal Transform (APBT), proposed in recent years, has an energy compaction property similar to that of the Discrete Cosine Transform (DCT), together with good column properties, high-frequency attenuation, low-frequency energy aggregation, and so on. We use it to generate the over-complete dictionary for sparse coding. For Orthogonal Matching Pursuit (OMP), an adaptive residual threshold is selected by combining blind image blocking assessment. Experimental results show that the new scheme is effective for image deblocking and avoids over-blurring edges and textures, so deblocked images can be obtained at the receiver.

Index Terms—Image deblocking, sparse representation, All Phase Biorthogonal Transform (APBT), Orthogonal Matching Pursuit (OMP)

I. INTRODUCTION

With the progress of technology, the resolution of imaging devices keeps increasing, and the amount of image data grows exponentially. To resolve the contradiction between data rate and channel bandwidth, this massive amount of data must be compressed to meet the needs of image transmission and storage. The Block-Based Discrete Cosine Transform (BDCT) offers regularity and simplicity for hardware implementation, so it is widely used in international image and video coding standards such as JPEG [1], H.264/AVC [2], and H.265/HEVC [3]. At low bit rates, however, quantization error causes blocking artifacts, which degrade visual quality and also hamper further image processing. Image deblocking is therefore necessary and important, especially for highly compressed images.

Image deblocking aims to alleviate blocking artifacts and improve the visual quality of compressed images. Deblocking methods fall into two categories: in-loop processing and post-processing. In-loop deblocking not only avoids the propagation of blocking artifacts between adjacent frames but can also improve coding efficiency; H.264/AVC [2], for example, adopts an in-loop deblocking filter, and several such filters have been designed [4]. Numerous experiments show that the in-loop deblocking filter yields both objective and subjective improvements over the compressed images. Deblocking performed after decompression is called post-processing; it is more practical and is widely used for image deblocking.

To remove blocking artifacts, many post-processing approaches have been proposed, such as filtering methods [5] and Projection Onto Convex Sets (POCS) [6]. Recently, learning-based sparse representation has been applied successfully to image deblocking. Jung et al. [7] obtained an over-complete dictionary with the K-Singular Value Decomposition (K-SVD) algorithm [8] and proposed a method to automatically estimate the residual threshold for Orthogonal Matching Pursuit (OMP) [9]. Instead of processing each image patch individually, Zhang et al. [10] proposed Group-based Sparse Representation (GSR) and a deblocking method based on GSR and a Quantization Constraint (QC) prior. Similarly, Zhao et al. [11] proposed Structural Sparse Representation (SSR), which, combined with the QC prior, yields another deblocking algorithm. All of these works demonstrate that sparse representation can effectively remove blocking artifacts while preserving visual quality. Our work is inspired by these results; in this paper, we focus on a deblocking method for JPEG-coded images.

Deblocking based on sparse representation requires a proper dictionary and a sparse coding algorithm. Because the All Phase Biorthogonal Transform (APBT) [12] has good column properties, high-frequency attenuation, low-frequency energy aggregation, and so on, a dictionary based on APBT can reveal the sparsity of an image better than one based on the commonly used orthogonal bases. In this paper, we use the APBT to produce the over-complete dictionary. For the sparse coding algorithm, an adaptive residual threshold for OMP is obtained by combining blind image blocking assessment. Experimental results show that the proposed scheme outperforms other methods.

The rest of this paper is organized as follows. Section II concisely introduces the sparse representation and JPEG models. Section III presents the image deblocking scheme for JPEG-compressed images using sparse representation and APBT. Section IV presents experimental results and comparisons. Finally, conclusions and further research are given in Section V.

II. SPARSE REPRESENTATION AND JPEG MODEL

A. Sparse Representation Model

Image patches of size √n × √n are converted into column vectors x ∈ R^n in lexicographic order. To construct the sparse-land model, a dictionary D ∈ R^(n×k) (with k > n, implying that it is redundant) is defined; in general, the dictionary is assumed to be known and fixed. According to the sparse representation model, every image patch x can be represented sparsely over this dictionary D:

α̂ = arg min_α ||α||₀   subject to   Dα = x,    (1)

where ||α||₀ is the number of nonzero entries in α; in general, ||α̂||₀ ≪ n. The basic principle of sparse representation is that every signal can be considered a linear combination of a few columns of the dictionary D.

If the strict constraint Dα = x is replaced by an allowable bounded representation error ||Dα − x||₂ ≤ ε, the sparse representation model becomes more flexible and better suited to noisy signals. For a sparse-land input signal x and a noisy observation y contaminated by white Gaussian noise with standard deviation σ, the maximum a posteriori (MAP) estimator for denoising is

α̂ = arg min_α ||α||₀   subject to   ||Dα − y||₂² ≤ T,    (2)

where T is the residual threshold, determined by the noise standard deviation σ. The optimization task in (2) can also be written as

α̂ = arg min_α ||Dα − y||₂² + μ||α||₀.    (3)

For a proper value of the multiplier μ, (2) and (3) are equivalent. Finally, the denoised patch is obtained as x̂ = Dα̂. In general, (2) and (3) are hard to solve exactly, but they can be solved approximately with several available techniques, such as OMP [9], Basis Pursuit (BP) [13], and Matching Pursuit (MP) [14]. In our work, we use OMP because of its simplicity and efficiency.
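To make the pursuit step concrete, the following Python sketch implements a basic OMP solver for problem (2). It is a minimal illustration under stated assumptions, not the authors' implementation: the residual threshold is passed in as a fixed number, whereas the paper selects it adaptively from a blind blocking assessment, and the names (omp, residual_threshold) are ours.

```python
# Minimal OMP sketch for (2); not the authors' code, threshold is fixed here.
import numpy as np

def omp(dictionary, y, residual_threshold, max_atoms=None):
    """Greedy OMP: approximately minimize ||alpha||_0
    subject to ||D @ alpha - y||_2^2 <= residual_threshold,
    where D is n x k (k > n) with unit-norm columns."""
    n, k = dictionary.shape
    max_atoms = n if max_atoms is None else max_atoms
    alpha = np.zeros(k)
    residual = y.astype(np.float64).copy()
    support = []
    while residual @ residual > residual_threshold and len(support) < max_atoms:
        # Select the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(dictionary.T @ residual)))
        if j in support:  # no new atom improves the fit
            break
        support.append(j)
        # Re-fit all selected coefficients by least squares (the "orthogonal" step).
        coef, *_ = np.linalg.lstsq(dictionary[:, support], y, rcond=None)
        residual = y - dictionary[:, support] @ coef
        alpha[support] = coef
    return alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)                 # unit-norm atoms
    true_alpha = np.zeros(256)
    true_alpha[rng.choice(256, size=5, replace=False)] = rng.standard_normal(5)
    y = D @ true_alpha + 0.01 * rng.standard_normal(64)
    alpha_hat = omp(D, y, residual_threshold=64 * 0.01 ** 2)
    print(np.count_nonzero(alpha_hat), np.linalg.norm(D @ alpha_hat - y))
```

Each iteration adds the atom most correlated with the current residual and then re-fits all selected coefficients jointly, which is what distinguishes OMP from plain Matching Pursuit and keeps the residual orthogonal to the chosen atoms.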
B. JPEG Model

The image compression standard used here is JPEG [1], which contains three basic steps: DCT, DCT coefficient quantization, and Huffman entropy encoding. Decoding is the inverse of encoding. The encoding process is as follows:

Step 1. The input image is converted into the YCbCr color space and grouped into blocks of size 8×8.
Step 2. Before the BDCT is carried out, the input image data are shifted from unsigned integers to signed integers.
Step 3. Each block is transformed by the BDCT, producing 64 DCT coefficients: one DC coefficient and 63 AC coefficients.
Step 4. Each coefficient block is quantized.
Step 5. After quantization, the DC coefficient of each block is coded by differential pulse code modulation (DPCM); before entropy encoding, the 63 AC coefficients are scanned into a 1-D sequence in zig-zag order.
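The following Python sketch makes Steps 2 to 4 (and their inverse) concrete. It is a simplified illustration rather than a JPEG codec: a single uniform step q_step stands in for JPEG's 8×8 quantization table, and the DPCM, zig-zag, and entropy-coding stages of Step 5 are omitted.

```python
# Simplified sketch of Steps 2-4 and their inverse; not a JPEG codec.
import numpy as np

def dct_matrix(N=8):
    """Orthonormal DCT-II matrix C, so that Y = C @ X @ C.T is the 2-D DCT."""
    C = np.zeros((N, N))
    for m in range(N):
        scale = np.sqrt(1.0 / N) if m == 0 else np.sqrt(2.0 / N)
        for n in range(N):
            C[m, n] = scale * np.cos((2 * n + 1) * m * np.pi / (2 * N))
    return C

def encode_block(block, q_step=16):
    C = dct_matrix(8)
    shifted = block.astype(np.float64) - 128.0           # Step 2: level shift
    coeffs = C @ shifted @ C.T                            # Step 3: 8x8 BDCT
    return np.round(coeffs / q_step).astype(np.int32)     # Step 4: quantization

def decode_block(q_coeffs, q_step=16):
    C = dct_matrix(8)
    block = C.T @ (q_coeffs * float(q_step)) @ C + 128.0  # dequantize + inverse DCT
    return np.clip(np.round(block), 0, 255).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    block = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
    rec = decode_block(encode_block(block, q_step=32), q_step=32)
    # The rounding error introduced in Step 4 is what appears as blocking
    # artifacts along block boundaries when q_step is coarse (low bit rate).
    print(np.abs(rec.astype(int) - block.astype(int)).max())
```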
III. PROPOSED SCHEME

The framework of the proposed method is shown in Fig. 1. It consists of two parts: generation of the over-complete dictionary, and sparse coding with adaptive residual thresholds.

[Fig. 1. Flow chart of the proposed method. Blocks in the chart: Start → Input image → Extracting 8×8 image blocks → Changing into column vectors → Blocks matrix → Selecting 10000 columns → SubB matrix → SubB[i][j] = SubB[i][j] − means(SubB[:,j]) → Selecting optimal APBT dictionary D with error residual T, Coefs = OMP(SubB, D) → SubB[:,j] = D·Coefs + means(SubB[:,j]) → "Blocks matrix finished?" (N: loop back; Y: continue) → Image blocks restructuring → Deblocked image → End.]

A. The Over-Complete APBT Dictionary

According to all phase digital filtering, there are three kinds of APBT, based on the WHT, the DCT, and the IDCT [12]. In this paper, we focus on the APBT based on the DCT and on the IDCT, in order to compare with the DCT. Like the DCT matrix, an APBT matrix can also be used to generate the over-complete dictionary for image sparse representation. Taking the APDCBT (APBT based on DCT) as an example, the two-dimensional APBT proceeds as follows. For a signal sequence x with N points and the APDCBT matrix V of size N×N, the transform coefficients y can be denoted by (4) and (5) after the two-dimensional transform. Similarly, the APIDCBT (APBT based on IDCT) can be written as

V(m, n) = 1/N,                                   m = 0, 1, …, N−1,  n = 0,
V(m, n) = ((N − m)/N²) · cos( m(2n+1)π / (2N) ), m = 0, 1, …, N−1,  n = 1, 2, …, N−1.    (6)
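For reference, the Python sketch below builds the APIDCBT matrix exactly as written in (6) and applies the usual separable 2-D block transform Y = V X Vᵀ. The dictionary_from_transform helper only illustrates one common way of turning a transform matrix into a set of vectorized 2-D atoms (via a Kronecker product); the paper's actual over-complete APBT dictionary construction is not part of this excerpt, so that helper is an assumption of ours.

```python
# Sketch of the APIDCBT matrix from (6) and a separable 2-D block transform.
# The Kronecker-product dictionary helper is our illustration, not
# necessarily the construction used in the paper.
import numpy as np

def apidcbt_matrix(N=8):
    """APIDCBT matrix V per (6): V[m, 0] = 1/N, and for n >= 1
    V[m, n] = (N - m) / N**2 * cos(m * (2*n + 1) * pi / (2*N))."""
    V = np.zeros((N, N))
    for m in range(N):
        V[m, 0] = 1.0 / N
        for n in range(1, N):
            V[m, n] = (N - m) / N**2 * np.cos(m * (2 * n + 1) * np.pi / (2 * N))
    return V

def transform_2d(block, V):
    """Separable 2-D transform of a square block: Y = V X V^T."""
    return V @ block @ V.T

def dictionary_from_transform(V):
    """Columns are vectorized 2-D atoms built from the 1-D transform matrix."""
    return np.kron(V, V).T

if __name__ == "__main__":
    V = apidcbt_matrix(8)
    X = np.arange(64, dtype=np.float64).reshape(8, 8)
    Y = transform_2d(X, V)
    D = dictionary_from_transform(V)
    print(V.shape, Y.shape, D.shape)   # (8, 8) (8, 8) (64, 64)
```

With an 8×8 generating matrix this yields 64 atoms of size 8×8; an over-complete dictionary needs more atoms than pixels per patch, which is why the choice of the generating transform and its sampling matters in the scheme above.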