Video Compression and Communications: from Basics to H.261, H.263, H.264, MPEG2, MPEG4 for DVB and HSDPA-Style Adaptive Turbo-Transceivers

Total Pages: 16

File Type: pdf, Size: 1020 KB

Video Compression and Communications: From Basics to H.261, H.263, H.264, MPEG2, MPEG4 for DVB and HSDPA-Style Adaptive Turbo-Transceivers

by L. Hanzo, P.J. Cherriman, J. Streit
Department of Electronics and Computer Science, University of Southampton, UK

About the Authors

Lajos Hanzo (http://www-mobile.ecs.soton.ac.uk) FREng, FIEEE, FIET, DSc received his degree in electronics in 1976 and his doctorate in 1983. During his 30-year career in telecommunications he has held various research and academic posts in Hungary, Germany and the UK. Since 1986 he has been with the School of Electronics and Computer Science, University of Southampton, UK, where he holds the chair in telecommunications. He has co-authored 15 books on mobile radio communications totalling in excess of 10 000 pages, published about 700 research papers, acted as TPC Chair of IEEE conferences, presented keynote lectures and been awarded a number of distinctions. Currently he is directing an academic research team, working on a range of research projects in the field of wireless multimedia communications sponsored by industry, the Engineering and Physical Sciences Research Council (EPSRC) UK, the European IST Programme and the Mobile Virtual Centre of Excellence (VCE), UK. He is an enthusiastic supporter of industrial and academic liaison and he offers a range of industrial courses. He is also an IEEE Distinguished Lecturer of both the Communications Society and the Vehicular Technology Society (VTS). Since 2005 he has been a Governor of the VTS. For further information on research in progress and associated publications please refer to http://www-mobile.ecs.soton.ac.uk

Peter J. Cherriman graduated in 1994 with an M.Eng. degree in Information Engineering from the University of Southampton, UK.
Since 1994 he has been with the Department of Electronics and Computer Science at the University of Southampton, UK, working towards a Ph.D. in mobile video networking, which was completed in 1999. Currently he is working on projects for the Mobile Virtual Centre of Excellence, UK. His current areas of research include robust video coding, microcellular radio systems, power control, dynamic channel allocation and multiple access protocols. He has published about two dozen conference and journal papers, and holds several patents.

Jürgen Streit received his Dipl.-Ing. degree in electronic engineering from the Aachen University of Technology in 1993 and his Ph.D. in image coding from the Department of Electronics and Computer Science, University of Southampton, UK, in 1995. From 1992 to 1996 Dr. Streit was with the Department of Electronics and Computer Science, working in the Communications Research Group. His work led to numerous publications. Since then he has joined a management consultancy, working as an information technology consultant.

Other Wiley and IEEE Press Books on Related Topics¹

• R. Steele, L. Hanzo (Ed): Mobile Radio Communications: Second and Third Generation Cellular and WATM Systems, John Wiley and IEEE Press, 2nd edition, 1999, ISBN 07 273-1406-8, 1064 pages
• L. Hanzo, F.C.A. Somerville, J.P. Woodard: Voice Compression and Communications: Principles and Applications for Fixed and Wireless Channels, IEEE Press and John Wiley, 2001, 642 pages
• L. Hanzo, P. Cherriman, J. Streit: Wireless Video Communications: Second to Third Generation and Beyond, IEEE Press and John Wiley, 2001, 1093 pages
• L. Hanzo, T.H. Liew, B.L. Yeap: Turbo Coding, Turbo Equalisation and Space-Time Coding, John Wiley and IEEE Press, 2002, 751 pages
• J.S. Blogh, L. Hanzo: Third-Generation Systems and Intelligent Wireless Networking: Smart Antennas and Adaptive Modulation, John Wiley and IEEE Press, 2002, 408 pages
• L. Hanzo, C.H. Wong, M.S. Yee: Adaptive Wireless Transceivers: Turbo-Coded, Turbo-Equalised and Space-Time Coded TDMA, CDMA and OFDM Systems, John Wiley and IEEE Press, 2002, 737 pages
• L. Hanzo, L-L. Yang, E-L. Kuan, K. Yen: Single- and Multi-Carrier CDMA: Multi-User Detection, Space-Time Spreading, Synchronisation, Networking and Standards, John Wiley and IEEE Press, June 2003, 1060 pages
• L. Hanzo, M. Münster, T. Keller, B-J. Choi: OFDM and MC-CDMA for Broadband Multi-User Communications, WLANs and Broadcasting, John Wiley and IEEE Press, 2003, 978 pages
• L. Hanzo, S-X. Ng, T. Keller and W.T. Webb: Quadrature Amplitude Modulation: From Basics to Adaptive Trellis-Coded, Turbo-Equalised and Space-Time Coded OFDM, CDMA and MC-CDMA Systems, John Wiley and IEEE Press, 2004, 1105 pages
• L. Hanzo, T. Keller: An OFDM and MC-CDMA Primer, John Wiley and IEEE Press, 2006, 430 pages
• L. Hanzo, F.C.A. Somerville, J.P. Woodard: Voice and Audio Compression for Wireless Communications, John Wiley and IEEE Press, 2007, 858 pages
• L. Hanzo, P.J. Cherriman, J. Streit: Video Compression and Communications: H.261, H.263, H.264, MPEG4 and HSDPA-Style Adaptive Turbo-Transceivers, John Wiley and IEEE Press, 2007, 680 pages
• L. Hanzo, J. Blogh and S. Ni: HSDPA-Style FDD Versus TDD Networking: Smart Antennas and Adaptive Modulation, John Wiley and IEEE Press, 2007, 650 pages

¹ For detailed contents and sample chapters please refer to http://www-mobile.ecs.soton.ac.uk

Contents

About the Authors i
Other Wiley and IEEE Press Books on Related Topics i
Preface v
Acknowledgments vii

I Video Codecs for HSDPA-Style Adaptive Videophones 1

2 Fractal Image Codecs 23
2.1 Fractal Principles 23
2.2 One-Dimensional Fractal Coding 26
2.2.1 Fractal Codec Design 29
2.2.2 Fractal Codec Performance 31
2.3 Error Sensitivity and Complexity 35
2.4 Summary and Conclusions 36

3 Low Bit-Rate DCT Codecs and HSDPA-Style Videophones 39
3.1 Video Codec Outline 39
3.2 The Principle of Motion Compensation 41
3.2.1 Distance Measures 44
3.2.2 Motion Search Algorithms 46
3.2.2.1 Full or Exhaustive Motion Search 46
3.2.2.2 Gradient-Based Motion Estimation 47
3.2.2.3 Hierarchical or Tree Search 48
3.2.2.4 Subsampling Search 49
3.2.2.5 Post-Processing of Motion Vectors 50
3.2.2.6 Gain-Cost-Controlled Motion Compensation 51
3.2.3 Other Motion Estimation Techniques 52
3.2.3.1 Pel-Recursive Displacement Estimation 52
3.2.3.2 Grid Interpolation Techniques 53
3.2.3.3 MC Using Higher Order Transformations 54
3.2.3.4 MC in the Transform Domain 54
3.2.4 Conclusion 55
3.3 Transform Coding 55
3.3.1 One-Dimensional Transform Coding 55
3.3.2 Two-Dimensional Transform Coding 56
3.3.3 Quantizer Training for Single-Class DCT 60
3.3.4 Quantizer Training for Multiclass DCT 61
3.4 The Codec Outline 64
3.5 Initial Intra-Frame Coding 64
3.6 Gain-Controlled Motion Compensation 64
3.7 The MCER Active/Passive Concept 66
3.8 Partial Forced Update of the Reconstructed Frame Buffers 67
3.9 The Gain/Cost-Controlled Inter-Frame Codec 69
3.9.1 Complexity Considerations and Reduction Techniques 70
3.10 The Bit-Allocation Strategy 71
3.11 Results 73
3.12 DCT Codec Performance under Erroneous Conditions 74
3.12.1 Bit Sensitivity 75
3.12.2 Bit Sensitivity of Codec I and II 78
3.13 DCT-Based Low-Rate Video Transceivers 79
3.13.1 Choice of Modem 79
3.13.2 Source-Matched Transceiver 79
3.13.2.1 System 1 79
3.13.2.1.1 System Concept 79
3.13.2.1.2 Sensitivity-Matched Modulation 80
3.13.2.1.3 Source Sensitivity 80
3.13.2.1.4 Forward Error Correction 81
3.13.2.1.5 Transmission Format 82
3.13.2.2 System 2 84
3.13.2.2.1 Automatic Repeat Request 84
3.13.2.3 Systems 3–5 85
3.14 System Performance 86
3.14.1 Performance of System 1 86
3.14.2 Performance of System 2 89
3.14.2.1 FER Performance 89
3.14.2.2 Slot Occupancy Performance 90
3.14.2.3 PSNR Performance 92
3.14.3 Performance of Systems 3–5 93
3.15 Summary and Conclusions 97

4 Low Bit-Rate VQ Codecs and HSDPA-Style Videophones 99
4.1 Introduction 99
4.2 The Codebook Design 99
4.3 The Vector Quantizer Design 101
4.3.1 Mean and Shape Gain Vector Quantization 106
4.3.2 Adaptive Vector Quantization 107
4.3.3 Classified Vector Quantization 109
4.3.4 Algorithmic Complexity 110
4.4 Performance under Erroneous Conditions 112
4.4.1 Bit-Allocation Strategy 112
4.4.2 Bit Sensitivity 113
4.5 VQ-Based Low-Rate Video Transceivers 115
4.5.1 Choice of Modulation 115
4.5.2 Forward Error Correction 116
4.5.3 Architecture of System 1 118
4.5.4 Architecture of System 2 119
4.5.5 Architecture of Systems 3–6 120
4.6 System Performance 120
4.6.1 Simulation Environment 120
4.6.2 Performance of Systems 1 and 3 121
4.6.3 Performance of Systems 4 and 5 123
4.6.4 Performance of Systems 2 and 6 125
4.7 Joint Iterative Decoding of Trellis-Based VQ-Video and TCM 126
4.7.1 Introduction 126
4.7.2 System Overview 127
4.7.3 Compression 127
4.7.4 Vector Quantization Decomposition 128
4.7.5 Serial Concatenation and Iterative Decoding 128
4.7.6 Transmission Frame Structure 130
4.7.7 Frame Difference Decomposition 130
4.7.8 VQ Codebook 132
4.7.9 VQ-Induced Code Constraints 133
4.7.10 VQ Trellis Structure 134
4.7.11 VQ Encoding 137
4.7.12 VQ Decoding 138
4.7.13 Results 140
4.8 Summary and Conclusions 143

5 Low Bit-Rate Quad-Tree-Based Codecs and HSDPA-Style Videophones 147
5.1 Introduction 147
5.2 Quad-Tree Decomposition 147
5.3 Quad-Tree Intensity Match 150
5.3.1 Zero-Order Intensity Match 150
5.3.2 First-Order Intensity Match 152
5.3.3 Decomposition Algorithmic Issues 153
5.4 Model-Based Parametric Enhancement 156
5.4.1 Eye and Mouth Detection 157
5.4.2 Parametric Codebook Training
Recommended publications
  • How to Exploit the Transferability of Learned Image Compression to Conventional Codecs
How to Exploit the Transferability of Learned Image Compression to Conventional Codecs
Jan P. Klopp (National Taiwan University), Keng-Chi Liu (Taiwan AI Labs), Liang-Gee Chen and Shao-Yi Chien (National Taiwan University)
[email protected] [email protected] [email protected] [email protected]

Abstract. Lossy image compression is often limited by the simplicity of the chosen loss measure. Recent research suggests that generative adversarial networks have the ability to overcome this limitation and serve as a multi-modal loss, especially for textures. Together with learned image compression, these two techniques can be used to great effect when relaxing the commonly employed tight measures of distortion. However, convolutional neural network-based algorithms have a large computational footprint. Ideally, an existing conventional codec should stay in place, ensuring faster adoption and adherence to a balanced computational envelope. As a possible avenue to this goal, we propose and investigate how learned image coding can be used as a surrogate …

Lossy compression optimises the objective

L = R + λD  (1)

where R and D stand for rate and distortion, respectively, and λ controls their weight relative to each other. In practice, computational efficiency is another constraint, as at least the decoder needs to process high resolutions in real time under a limited power envelope, typically necessitating dedicated hardware implementations. Requirements for the encoder are more relaxed, often allowing even offline encoding without demanding real-time capability. Recent research has developed along two lines: evolution of existing coding technologies, such as H264 [41] or H265 [35], culminating in the most recent AV1 codec, on the one hand.
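The objective in Eq. (1) is typically used for mode selection: each candidate encoding is scored by its Lagrangian cost and the cheapest one wins. A minimal sketch (the candidate (rate, distortion) pairs are made-up numbers, not output of any real codec):

```python
# Illustrative rate-distortion selection: among hypothetical candidate
# encodings of one block, pick the one minimising L = R + lambda * D.

def rd_cost(rate_bits, distortion, lam):
    """Lagrangian cost L = R + lambda * D from Eq. (1)."""
    return rate_bits + lam * distortion

def best_mode(candidates, lam):
    """Return the (name, rate, distortion) candidate with the lowest cost."""
    return min(candidates, key=lambda c: rd_cost(c[1], c[2], lam))

candidates = [
    ("coarse quantiser", 120, 9.0),   # few bits, high distortion
    ("medium quantiser", 300, 3.0),
    ("fine quantiser",   900, 0.5),   # many bits, low distortion
]

# Small lambda weights rate heavily; large lambda penalises distortion.
print(best_mode(candidates, lam=1.0)[0])    # picks the low-rate mode
print(best_mode(candidates, lam=500.0)[0])  # picks the low-distortion mode
```

Sweeping λ traces out the rate-distortion curve of the encoder, which is how operating points are usually compared.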
  • Video Codec Requirements and Evaluation Methodology
Video Codec Requirements and Evaluation Methodology
draft-filippov-netvc-requirements-01
Alexey Filippov, Huawei Technologies (www.huawei.com)

Contents
• An overview of applications
• Requirements
• Evaluation methodology
• Conclusions

Applications
• Internet Protocol Television (IPTV)
• Video conferencing
• Video sharing
• Screencasting
• Game streaming
• Video monitoring / surveillance

Internet Protocol Television (IPTV)
• Basic requirements:
  - Random access to pictures: the Random Access Period (RAP) should be kept small enough (approximately 1–15 seconds)
  - Temporal (frame-rate) scalability
  - Error robustness
• Optional requirements:
  - Resolution and quality (SNR) scalability

Resolution              | Frame rate, fps           | Picture access mode
2160p (4K), 3840x2160   | 60                        | RA
1080p, 1920x1080        | 24, 50, 60                | RA
1080i, 1920x1080        | 30 (60 fields per second) | RA
720p, 1280x720          | 50, 60                    | RA
576p (EDTV), 720x576    | 25, 50                    | RA
576i (SDTV), 720x576    | 25, 30                    | RA
480p (EDTV), 720x480    | 50, 60                    | RA
480i (SDTV), 720x480    | 25, 30                    | RA

Video conferencing
• Basic requirements:
  - Delay should be kept as low as possible: the preferable and maximum delay values should be less than 100 ms and 350 ms, respectively.
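As a back-of-envelope check on why these IPTV formats need compression at all, the uncompressed bitrate of a few of the listed modes can be computed (24-bit RGB assumed; this helper is illustrative, not part of the draft):

```python
# Uncompressed bitrate for some of the IPTV resolutions listed above,
# assuming 24 bits per pixel (8 bits each for R, G and B).

def raw_bitrate_mbps(width, height, fps, bits_per_pixel=24):
    """Uncompressed bitrate in megabits per second."""
    return width * height * fps * bits_per_pixel / 1e6

for name, w, h, fps in [("2160p60", 3840, 2160, 60),
                        ("1080p60", 1920, 1080, 60),
                        ("720p50",  1280,  720, 50)]:
    print(f"{name}: {raw_bitrate_mbps(w, h, fps):,.0f} Mbit/s")
```

Even 720p50 needs over a gigabit per second uncompressed, which is why codecs routinely achieve compression ratios in the hundreds for broadcast delivery.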
  • Chapter 9 Image Compression Standards
Fundamentals of Multimedia, Chapter 9
Chapter 9: Image Compression Standards
9.1 The JPEG Standard
9.2 The JPEG2000 Standard
9.3 The JPEG-LS Standard
9.4 Bi-level Image Compression Standards
9.5 Further Exploration
Li & Drew © Prentice Hall 2003

9.1 The JPEG Standard
• JPEG is an image compression standard that was developed by the "Joint Photographic Experts Group". JPEG was formally accepted as an international standard in 1992.
• JPEG is a lossy image compression method. It employs a transform coding method using the DCT (Discrete Cosine Transform).
• An image is a function of i and j (or conventionally x and y) in the spatial domain. The 2D DCT is used as one step in JPEG in order to yield a frequency response which is a function F(u, v) in the spatial frequency domain, indexed by two integers u and v.

Observations for JPEG Image Compression
The effectiveness of the DCT transform coding method in JPEG relies on 3 major observations:
Observation 1: Useful image contents change relatively slowly across the image, i.e., it is unusual for intensity values to vary widely several times in a small area, for example, within an 8 × 8 image block. Much of the information in an image is repeated, hence "spatial redundancy".

Observations for JPEG Image Compression (cont'd)
Observation 2: Psychophysical experiments suggest that humans are much less likely to notice the loss of very high spatial frequency components than the loss of lower frequency components.
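The transform F(u, v) above can be sketched directly from the DCT-II definition. This naive pure-Python version (hypothetical helper names, O(N⁴), one common orthonormal normalisation, for illustration only) also demonstrates Observation 1: a slowly varying block concentrates its energy in a few low-frequency coefficients.

```python
import math

def dct2(block):
    """Naive orthonormal 2D DCT-II of an N x N block, yielding F(u, v)."""
    n = len(block)

    def c(k):  # normalisation factor
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)

    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for i in range(n):
                for j in range(n):
                    s += (block[i][j]
                          * math.cos((2 * i + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * j + 1) * v * math.pi / (2 * n)))
            out[u][v] = c(u) * c(v) * s
    return out

# A completely flat 8 x 8 block puts all its energy into the DC
# coefficient F(0, 0); every AC coefficient comes out (numerically) zero.
flat = [[128.0] * 8 for _ in range(8)]
coeffs = dct2(flat)
print(round(coeffs[0][0]))  # DC term: 8 * 128 = 1024
```

JPEG itself uses a slightly different fixed scaling for N = 8, but the separable cosine basis and the energy-compaction behaviour are the same.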
  • (A/V Codecs) REDCODE RAW (.R3D) ARRIRAW
    What is a Codec? Codec is a portmanteau of either "Compressor-Decompressor" or "Coder-Decoder," which describes a device or program capable of performing transformations on a data stream or signal. Codecs encode a stream or signal for transmission, storage or encryption and decode it for viewing or editing. Codecs are often used in videoconferencing and streaming media solutions. A video codec converts analog video signals from a video camera into digital signals for transmission. It then converts the digital signals back to analog for display. An audio codec converts analog audio signals from a microphone into digital signals for transmission. It then converts the digital signals back to analog for playing. The raw encoded form of audio and video data is often called essence, to distinguish it from the metadata information that together make up the information content of the stream and any "wrapper" data that is then added to aid access to or improve the robustness of the stream. Most codecs are lossy, in order to get a reasonably small file size. There are lossless codecs as well, but for most purposes the almost imperceptible increase in quality is not worth the considerable increase in data size. The main exception is if the data will undergo more processing in the future, in which case the repeated lossy encoding would damage the eventual quality too much. Many multimedia data streams need to contain both audio and video data, and often some form of metadata that permits synchronization of the audio and video. Each of these three streams may be handled by different programs, processes, or hardware; but for the multimedia data stream to be useful in stored or transmitted form, they must be encapsulated together in a container format.
  • Image Compression Using Discrete Cosine Transform Method
Qusay Kanaan Kadhim, International Journal of Computer Science and Mobile Computing, Vol. 5, Issue 9, September 2016, pg. 186–192
Available online at www.ijcsmc.com
International Journal of Computer Science and Mobile Computing — A Monthly Journal of Computer Science and Information Technology, ISSN 2320–088X, IMPACT FACTOR: 5.258

Image Compression Using Discrete Cosine Transform Method
Qusay Kanaan Kadhim, Al-Yarmook University College / Computer Science Department, Iraq
[email protected]

ABSTRACT: The processing of digital images has taken on wide importance in recent decades due to the rapid development of communication techniques and the need to find and develop methods that assist in enhancing and exploiting image information. Digital image compression has become an important field of digital image processing due to the need to exploit the available storage space as much as possible and to reduce the time required to transmit an image. The baseline JPEG standard technique is used to compress images with 8-bit colour depth. Basically, this scheme consists of several operations: sampling, partitioning, the transform, quantization, entropy coding and Huffman coding. First, the sampling process is used to reduce the size of the image and the number of bits required to represent it. Next, the partitioning process is applied to the image to obtain 8×8 image blocks. Then, the discrete cosine transform is used to transform the image block data from the spatial domain to the frequency domain to make the data easy to process.
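The quantization stage that follows the DCT in the baseline pipeline described above can be sketched as an element-wise divide-and-round against a quantization table. The table below is the widely cited example luminance table from Annex K of the JPEG standard; the helper names are illustrative, not from the paper.

```python
# Quantization step of the baseline JPEG scheme: each 8x8 block of DCT
# coefficients is divided element-wise by a table entry and rounded.
# Q_LUMA is the example luminance table from Annex K of the JPEG standard.

Q_LUMA = [
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
]

def quantize(dct_block, table=Q_LUMA):
    """Divide each coefficient by its table entry and round to an integer."""
    return [[round(dct_block[u][v] / table[u][v]) for v in range(8)]
            for u in range(8)]

# Large divisors in the bottom-right crush high-frequency coefficients
# toward zero, which is what makes the later entropy coding effective.
demo = [[50.0] * 8 for _ in range(8)]  # hypothetical coefficient block
q = quantize(demo)
print(q[0][0], q[7][7])  # 50/16 rounds to 3; 50/99 rounds to 1
```

The decoder multiplies back by the same table, so the quantization step (not the DCT) is where the loss in "lossy" JPEG actually happens.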
  • Task-Aware Quantization Network for JPEG Image Compression
Task-Aware Quantization Network for JPEG Image Compression
Jinyoung Choi¹ and Bohyung Han¹
Dept. of ECE & ASRI, Seoul National University, Korea
{jin0.choi, [email protected]}

Abstract. We propose to learn a deep neural network for JPEG image compression, which predicts image-specific optimized quantization tables fully compatible with the standard JPEG encoder and decoder. Moreover, our approach provides the capability to learn task-specific quantization tables in a principled way by adjusting the objective function of the network. The main challenge to realize this idea is that there exist non-differentiable components in the encoder, such as run-length encoding and Huffman coding, and it is not straightforward to predict the probability distribution of the quantized image representations. We address these issues by learning a differentiable loss function that approximates bitrates using simple network blocks: two MLPs and an LSTM. We evaluate the proposed algorithm using multiple task-specific losses (two for semantic image understanding and another two for conventional image compression) and demonstrate the effectiveness of our approach to the individual tasks.

Keywords: JPEG image compression, adaptive quantization, bitrate approximation.

1 Introduction
Image compression is a classical task to reduce the file size of an input image while minimizing the loss of visual quality. This task has two categories: lossy and lossless compression. Lossless compression algorithms preserve the contents of input images perfectly even after compression, but their compression rates are typically low. On the other hand, lossy compression techniques allow the degradation of the original images by quantization and reduce the file size significantly compared to lossless counterparts.
  • CALIFORNIA STATE UNIVERSITY, NORTHRIDGE Optimized AV1 Inter
CALIFORNIA STATE UNIVERSITY, NORTHRIDGE
Optimized AV1 Inter Prediction using Binary Classification Techniques
A graduate project submitted in partial fulfillment of the requirements for the degree of Master of Science in Software Engineering
by Alex Kit Romero
May 2020

The graduate project of Alex Kit Romero is approved:
Dr. Katya Mkrtchyan (Date)
Dr. Kyle Dewey (Date)
Dr. John J. Noga, Chair (Date)
California State University, Northridge

Dedication
This project is dedicated to all of the Computer Science professors I have come into contact with over the years, who have inspired and encouraged me to pursue a career in computer science. The words and wisdom of these professors are what pushed me to try harder and accomplish more than I ever thought possible. I would like to give a big thanks to the open source community and my fellow cohort of computer science co-workers for always being there with answers to my numerous questions and inquiries. Without their guidance and expertise, I could not have been successful. Lastly, I would like to thank my friends and family who have supported and uplifted me throughout the years. Thank you for believing in me and always telling me to never give up.

Table of Contents
Signature Page ii
Dedication iii
  • Why Compression Matters
Computational thinking for digital technologies: Snapshot 2
PROGRESS OUTCOME 6: Why compression matters

Context
Sarah has been investigating the concept of compression coding by looking at the different ways images can be compressed and the trade-offs of using different methods of compression in relation to the size and quality of images. She researches the topic by doing several web searches and produces a report with her findings.

Insight 1: Reasons for compressing files
I realised that data compression is extremely useful as it reduces the amount of space needed to store files. Computers and storage devices have limited space, so using smaller files allows you to store more files. Compressed files also download more quickly, which saves time – and money, because you pay less for electricity and bandwidth. Compression is also important in terms of HCI (human-computer interaction), because if images or videos are not compressed they take longer to download and, rather than waiting, many users will move on to another website.

Insight 2: Representing images in an uncompressed form
Most people don't realise that a computer image is made up of tiny dots of colour, called pixels (short for "picture elements"). Each pixel has components of red, green and blue light and is represented by a binary number. The more bits used to represent each component, the greater the depth of colour in your image. For example, if red is represented by 8 bits, green by 8 bits and blue by 8 bits, 16,777,216 different colours can be represented, as shown below:

8 bits corresponds to 256 possible numbers in binary
red × green × blue = 256 × 256 × 256 = 16,777,216 possible colour combinations

Insight 3: Lossy compression and human perception
There are many different formats for file compression such as JPEG (image compression), MP3 (audio compression) and MPEG (audio and video compression).
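The colour-depth arithmetic in Insight 2 generalises to any number of bits per component; a two-line sketch (illustrative helper name):

```python
# With b bits per colour component, each of R, G and B can take 2**b
# values, so a pixel can take (2**b)**3 distinct colours in total.

def colour_combinations(bits_per_component):
    values_per_component = 2 ** bits_per_component
    return values_per_component ** 3

print(colour_combinations(8))  # 256 * 256 * 256 = 16,777,216
print(colour_combinations(5))  # 15-bit "high colour": 32 * 32 * 32 = 32,768
```

Halving the bits per component from 8 to 4 cuts storage per pixel in half but shrinks the palette from ~16.8 million colours to 4,096, which is the kind of size/quality trade-off the report is about.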
  • Website Quality Evaluation Based on Image Compression Techniques
Volume 7, Issue 2, February 2017, ISSN: 2277 128X
International Journal of Advanced Research in Computer Science and Software Engineering
Research Paper, available online at: www.ijarcsse.com

Website Quality Evaluation based on Image Compression Techniques
Chandran M.¹ (MCA Department, SRMV CAS, Coimbatore, Tamilnadu, India), Ramani A.V.² (Head of CS, SRMV CAS, Coimbatore, Tamilnadu, India)
DOI: 10.23956/ijarcsse/V7I2/0146

Abstract: The Internet is a way of presenting information, and webpages play a vital role in delivering it to users. The information delivered by webpages should not be delayed and should be of high quality, i.e. the performance of webpages should be good. Webpage performance can be degraded by a number of factors. One such factor is the loading of images: images consume more loading time in a webpage and may degrade its performance. In order to avoid this degradation and to improve performance, this research work implements the concept of image optimization. Images can be optimized using several factors such as compression, choosing the right image formats, optimizing the CSS, etc. This research work concentrates on choosing the right image formats. Different image formats have been analyzed and tested in the SRKVCAS webpages. Based on the analysis, a suitable image format is suggested for use in the webpage to improve performance.
Keywords: Image, CSS, Image Compression and Lossless

I. INTRODUCTION
As images continue to be the largest part of webpage content, it is critically important for web developers to take aggressive control of image sizes and quality in order to deliver a fast-loading, responsive site for users.
  • Comparison of Video Compression Standards
International Journal of Computer and Electrical Engineering, Vol. 5, No. 6, December 2013
Comparison of Video Compression Standards
S. Ponlatha and R. S. Sabeenian

Abstract: In order to ensure compatibility among video codecs from different manufacturers and applications, and to simplify the development of new applications, intensive efforts have been undertaken in recent years to define digital video standards. Over the past decades, digital video compression technologies have become an integral part of the way we create, communicate and consume visual information. Digital video communication can be found today in many application scenarios, such as broadcast services over satellite and terrestrial channels, digital video storage, wired and wireless conversational services, etc. The data quantity is very large for digital video, and the memory of storage devices and the bandwidth of the transmission channel are not infinite, so it is not practical to store the full digital video without processing. For instance, consider a full-colour video of 720 × 480 pixels per frame, 30 frames per second, and 90 minutes in total …

… display digital pictures. Each pixel is thus represented by one R, one G, and one B component. The 2D array of pixels that constitutes a picture is actually three 2D arrays, with one array for each of the RGB components. A resolution of 8 bits per component is usually sufficient for typical consumer applications.

A. The Need for Compression
Fortunately, digital video has significant redundancies, and eliminating or reducing those redundancies results in compression. Video compression can be lossy or lossless. Lossless video compression reproduces identical video after decompression. We primarily consider lossy compression, which yields perceptually equivalent, but not identical, video compared to the uncompressed source.
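The 720 × 480, 30 fps, 90-minute example in the abstract above can be carried through to a concrete storage figure (24-bit colour assumed; the helper is illustrative):

```python
# Storage needed for the uncompressed example video in the abstract:
# 720 x 480 pixels, 3 bytes per pixel, 30 frames/s, 90 minutes.

def raw_video_bytes(width, height, fps, minutes, bytes_per_pixel=3):
    return width * height * bytes_per_pixel * fps * minutes * 60

size = raw_video_bytes(720, 480, 30, 90)
print(f"{size / 1e9:.1f} GB")  # roughly 168 GB before any compression
```

A single-layer DVD holds about 4.7 GB, so this one movie would need dozens of discs uncompressed, which is exactly the gap that MPEG-style compression closes.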
  • Discernible Image Compression
Discernible Image Compression
Zhaohui Yang1,2, Yunhe Wang2, Chang Xu3†, Peng Du4, Chao Xu1, Chunjing Xu2, Qi Tian5
1 Key Lab of Machine Perception (MOE), Dept. of Machine Intelligence, Peking University. 2 Noah's Ark Lab, Huawei Technologies. 3 School of Computer Science, Faculty of Engineering, University of Sydney. 4 Huawei Technologies. 5 Cloud & AI, Huawei Technologies.
[email protected], {yunhe.wang,xuchunjing,tian.qi1}@huawei.com, [email protected], [email protected], [email protected]

ACM Reference Format: Zhaohui Yang, Yunhe Wang, Chang Xu, Peng Du, Chao Xu, Chunjing Xu, Qi Tian. 2020. Discernible Image Compression. In 28th ACM International Conference on Multimedia (MM '20), October 12–16, 2020, Seattle, WA, USA. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3394171.3413968

ABSTRACT: Image compression, as one of the fundamental low-level image processing tasks, is very essential for computer vision. Tremendous computing and storage resources can be preserved with a trivial amount of visual information. Conventional image compression methods tend to obtain compressed images by minimizing their appearance discrepancy with the corresponding original images, but pay little attention to their efficacy in downstream perception tasks, e.g., image recognition and object detection. Thus, some compressed images could be recognized with bias. In contrast, this paper aims to produce compressed images by pursuing both appearance and perceptual consistency.

1 INTRODUCTION
Recently, more and more computer vision (CV) tasks such as image recognition [6, 18, 53–55], visual segmentation [27], object detection [15, 17, 35], and face verification [45], are well addressed by …
  • A Dataflow Description of ACC-JPEG Codec
A Dataflow Description of ACC-JPEG Codec
Khaled Jerbi1,2, Tarek Ouni1 and Mohamed Abid1
1 CES Lab, National Engineering School of Sfax, Sfax, Tunisia
2 IETR/INSA, UMR CNRS 6164, F-35043 Rennes, France
Keywords: Video Compression, Accordeon, Dataflow, MPEG RVC.

Abstract: The evolution of video codec standards raises two major problems. The first is design complexity, which makes the implementation of video coders very difficult. The second is the demand for computing capability, which requires complex and advanced architectures. To address the first problem, MPEG normalized the Reconfigurable Video Coding (RVC) standard, which allows the reuse of generic image processing modules in advanced video coders. However, the second problem remains unsolved: technology development is becoming incapable of answering the increasing algorithmic complexity of current standards. In this paper, we propose an efficient solution to both problems by applying the RVC methodology and its associated tools to a new video coding model called Accordion-based video coding. The main advantage of this video coding model is its capacity to provide high compression efficiency with low complexity, which addresses the second video coding problem.

1 INTRODUCTION
During most of two decades, MPEG has produced several video coding standards such as MPEG-2, MPEG-4, AVC and SVC. However, the past monolithic specification of such standards (usually in the form of C/C++ programs) lacks flexibility. … technology from all existing past MPEG video standards (i.e., MPEG-2, MPEG-4, etc.). CAL is used to provide the reference software for all coding tools of the entire library. The originality of this paper is the application of CAL and its associated tools to a new video coding model called Accordion-based video coding.