Signal Processing for Improved MPEG-Based Communication Systems


Signal Processing for Improved MPEG-based Communication Systems

PROEFSCHRIFT (doctoral thesis) submitted to obtain the degree of doctor at the Eindhoven University of Technology, on the authority of the rector magnificus, prof.dr.ir. F.P.T. Baaijens, to be defended in public before a committee appointed by the Doctorate Board on Wednesday 9 December 2015 at 16:00 by Onno Eerenberg, born in Zwolle.

This thesis has been approved by the promotor; the composition of the doctoral committee is as follows:
chairman: prof.dr.ir. J.W.M. Bergmans
1st promotor: prof.dr.ir. P.H.N. de With
copromotor: prof.dr. R.M. Aarts
members: prof.dr. C. Hentschel (Brandenburg Univ. of Technol. Cottbus, Germany), prof.dr. S. Sherratt (Univ. of Reading, United Kingdom), prof.dr.ir. J.J. Lukkien
advisor: dr. P. Hofman (Philips IP&S)

The research or design described in this thesis has been carried out in accordance with the TU/e Code of Scientific Conduct.

Nulla tenaci invia est via – For the tenacious, no road is impassable (Spyker Automobielen N.V.).

CIP-DATA LIBRARY TECHNISCHE UNIVERSITEIT EINDHOVEN
Onno Eerenberg
Signal Processing for Improved MPEG-based Communication Systems / by Onno Eerenberg. - Eindhoven : Technische Universiteit Eindhoven, 2015.
A catalogue record is available from the Eindhoven University of Technology Library.
ISBN: 978-90-386-3979-6
NUR: 959
Trefw. (keywords): video compression / personal video recording / MPEG-2 / H.264/MPEG4-AVC / trick play / DVB-H / video coding artifact detection.
Subject headings: conventional video navigation / advanced video navigation / MPEG-2-compliant video navigation / DVB-H link layer / mosquito and ringing artifact location detection.
Cover design: D.J.M. Frishert. Printed by: Dereumaux.
Copyright © 2015 by O. Eerenberg. All rights reserved. No part of this material may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without the prior permission of the copyright owner.

Summary

This thesis describes improvements for MPEG-based consumer communication systems and focuses on three areas. The first area addresses intra-program video navigation for disk-based storage media, where three forms of video navigation are investigated, enabling conventional as well as more advanced forms of video navigation. The second area presents an efficient and robust data link layer for DVB-H, a standard targeting battery-powered mobile television reception. The improved link layer results in a higher robustness and efficiency. The third area addresses picture quality for digital-television reception, presenting two detection systems for locating visual-coding artifact regions, which are potentially contaminated with either mosquito noise or ringing. The location information is used to attenuate the detected coding noise. The emphasis of the presented work is on embedded system solutions to be integrated into existing consumer platforms. The three areas are briefly summarized below.

In this thesis, three navigation techniques are presented for disk-based storage systems.
The first navigation technique provides full-frame fast-search and slow-motion playback and is suitable for a push-based architecture, enabling deployment in a networked client-server system setup. Networked full-frame fast-search video navigation is based on re-using intra-coded MPEG-compressed normal-play video pictures. The proposed solution divides the signal processing for navigation over both the recording and the navigation playback operation modes. It is based on extracting characteristic point information during recording, revealing the storage locations of intra-coded pictures, which are then re-used for the generation of a fast-search navigation sequence. Furthermore, in order to adapt the frame rate, refresh rate, bit rate and playback speed, the solution employs repetition pictures, which repeat normal-play reference pictures. On the basis of field-based repetition pictures, rendering control at field level is obtained, enabling efficient removal of field-based video information (interlace kill), thereby avoiding motion judder during navigation and enabling the navigation method to be applied to both progressive and interlaced video formats. Slow-motion video navigation is implemented in a similar fashion. When applying repetition pictures to control the Quality of Experience (QoE) during navigation, the refresh rate should not drop below 1/3 of the frame rate, otherwise a slide-show effect occurs, whereby the viewer loses the fast-search navigation experience. For a typical video broadcast at SD resolution and 4 Mbit/s, the DSP processing during recording requires a cycle frequency of 5 MHz, while a cycle frequency of 22 MHz is required for full-frame fast-search video playback with speed-up factor 12, and 5 MHz for slow-motion with playback speed 0.5. Both fast-search and slow-motion video navigation can be operated with a reduced refresh rate and thus a considerably lower cycle frequency.

Motivated by the drawbacks associated with full-resolution fast-search trick play, a second video navigation technique is presented, based on a hierarchical mosaic-screen representation. Such a screen is composed of re-used MPEG-compressed subpictures, avoiding transcoding of pictorial data. Hierarchical mosaic screens enable an instant overview of video information associated with a large temporal interval, thereby eliminating the individual picture refresh rate from the final video navigation rendering. Each subpicture is coded at a fixed bit cost, thereby simplifying the construction of the final mosaic screen in the compressed domain. A fixed-cost subpicture is achieved by dividing each subpicture into a set of "mini slices", which are also encoded at a fixed bit cost. Furthermore, when subpictures use P-type coding syntax, new mosaic screens can be constructed using predictive coding, based on re-used subpictures available at the MPEG decoder. Both aspects clearly reduce the complexity of the implementation. The continuous derivation of subpictures for mosaic screens requires only a small fraction of the computation complexity, because the intra-coded normal-play pictures appear at a rate of only 2 Hz, which results in a subpicture processing rate of 0.3 Hz when a scene duration of 3 seconds is used. It was found that this system can be implemented with the same architecture as the first navigation solution, because the processing required for the construction of a mosaic screen bears a strong resemblance to the fast-search full-frame navigation solution. We therefore expect that the involved playback processing for mosaic-screen navigation will show a similar throughput and DSP cycle load.
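The fixed bit cost per subpicture and per mini slice is what makes compressed-domain composition simple: every compressed fragment can be located with plain arithmetic instead of bitstream parsing. The following Python sketch illustrates only this bookkeeping; the mosaic geometry, byte costs and all names are illustrative assumptions and are not taken from the thesis.

```python
# Illustrative sketch of compressed-domain mosaic composition with
# fixed-bit-cost subpictures; all parameters below are assumed for the
# example and are not values from the thesis.
SUBPIC_COLS, SUBPIC_ROWS = 4, 4            # 4x4 mosaic of subpictures
MINI_SLICES_PER_SUBPIC = 9                 # each subpicture split into mini slices
MINI_SLICE_BYTES = 256                     # fixed compressed size of one mini slice
SUBPIC_BYTES = MINI_SLICES_PER_SUBPIC * MINI_SLICE_BYTES

def subpicture_offset(index: int) -> int:
    """Byte offset of subpicture `index` in the stored subpicture pool.
    A fixed bit cost means it can be located without parsing compressed data."""
    return index * SUBPIC_BYTES

def mini_slice_offset(index: int, slice_idx: int) -> int:
    """Byte offset of one mini slice of a subpicture inside the pool."""
    return subpicture_offset(index) + slice_idx * MINI_SLICE_BYTES

def mosaic_copy_plan(subpic_indices):
    """Stitch a mosaic picture row by row: for every subpicture row and every
    mini-slice position, concatenate the corresponding mini slices of the
    subpictures in that row - pure byte-range copies, no transcoding."""
    plan = []
    for r in range(SUBPIC_ROWS):
        for s in range(MINI_SLICES_PER_SUBPIC):
            for c in range(SUBPIC_COLS):
                idx = subpic_indices[r * SUBPIC_COLS + c]
                plan.append((mini_slice_offset(idx, s), MINI_SLICE_BYTES))
    return plan

plan = mosaic_copy_plan(list(range(SUBPIC_ROWS * SUBPIC_COLS)))
print(len(plan), "byte-range copies build one mosaic screen")   # -> 144 copies
```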
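Returning to the full-frame fast-search mode presented first, the interplay between the intra-picture rate, the speed-up factor, the repetition pictures and the 1/3-of-frame-rate QoE rule can be captured in a small back-of-the-envelope model. The sketch below assumes a 25 Hz frame rate and 2 Hz intra-coded pictures; the function and parameter names are illustrative and not taken from the thesis.

```python
def fast_search_schedule(frame_rate=25.0, intra_rate=2.0, speed_up=12.0,
                         refresh_divisor=1):
    """Estimate the picture refresh rate during fast search when normal-play
    intra-coded pictures are re-used, and how many repetition pictures are
    needed per freshly decoded picture to keep the display rate constant."""
    # Content advances `speed_up` times faster, so intra pictures become
    # available at intra_rate * speed_up per second of playback time.
    candidate_rate = intra_rate * speed_up
    # The refresh rate cannot exceed the display frame rate; it may be reduced
    # further (refresh_divisor > 1) to lower the DSP cycle frequency.
    refresh_rate = min(candidate_rate, frame_rate) / refresh_divisor
    repetitions = frame_rate / refresh_rate - 1.0   # repetition pictures per refresh
    qoe_ok = refresh_rate >= frame_rate / 3.0       # slide-show threshold from the summary
    return refresh_rate, repetitions, qoe_ok

# Speed-up factor 12 with 2 Hz intra pictures: nearly every displayed frame is new.
print(fast_search_schedule(speed_up=12))                       # -> (24.0, ~0.04, True)
# Halving the refresh rate stays above the 1/3 limit and reduces decoding work.
print(fast_search_schedule(speed_up=12, refresh_divisor=2))    # -> (12.0, ~1.08, True)
```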
Finally, an audio-enhanced dual-window video navigation technique is presented, combining normal-play audiovisual fragments with a down-scaled fast-search information signal. This representation addresses human perception, which employs both visual and auditory cues. Detailed normal-play information is rendered in a main window, while a coarse overview is provided by fast-search information rendered in a second Picture-in-Picture (PiP) window. Due to the simultaneous rendering, a viewer can switch between the two information signals, perceiving either a fast or a detailed overview, which guides the viewer to the requested audiovisual fragment.

The main navigation signal is based on re-using normal-play audio and video fragments with a typical length of 3 seconds (approx. 6 GOPs), which are fully decoded. The fast-search navigation signal is derived from normal-play intra-coded pictures, which are scalably MPEG-2 decoded already during recording. This scalable decoding downscales the pictures while decoding, at the cost of a lower picture quality. These lower-quality pictures are stored, next to the other essential Characteristic Point Information (CPI), in the metadata database. During playback, the concatenated stream fragments are modified to ensure MPEG-2-compliant formatting and to enable seamless decoding. This modification involves, amongst others, audio padding, removal of predictive pictures without a reference, and modification of the time base for navigation. Scalability is implemented by partial decoding of the intra-compressed data, which is worked out for both the MPEG-2 and H.264/MPEG4-AVC standards. The obtained picture quality for both standards is in the range of 23-32 dB, which is lower than the normal-play quality but sufficient for navigation purposes. The three proposed navigation solutions show a high commonality with respect to the required signal processing and the execution architectures, so that they can be jointly implemented in one overall architecture.

The second contribution of this thesis aims at improving the DVB-H link layer for mobile battery-powered MPEG communication. The problem of the standard solution in a receiver is that it does not provide sufficient robustness and uses the available bandwidth inefficiently. The proposed solution is based on locally obtained reliability and location information, in the form of 2-bit erasure flags, an Internet Protocol Entry Table (IPET) and a Correct Row Index Table (CRIT). The reliability information is derived on the basis of dual-stage FEC decoding, while the location information is derived from correctly received broadcast data. Hereby, the primary FEC is performed by the channel decoder, while the secondary FEC is integrated in the DVB-H link layer. The reliability information derived from the primary FEC is employed in two ways. First, it is applied for error-and-erasure decoding by the secondary FEC stage. If an MPE-FEC frame is still incorrect after this second FEC stage, this reliability information is used a second time, in combination with reliability information derived from the secondary FEC decoding stage, supplemented with the IPET and CRIT information. In this way, correctly received and corrected IP datagrams are extracted from the defective MPE-FEC frame.
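The last step of this link-layer improvement, deciding which IP datagrams can still be delivered from an MPE-FEC frame that the secondary FEC could not fully repair, can be modelled with a small bookkeeping sketch. This is a simplified illustration rather than the thesis implementation: the 2-bit erasure flags are reduced to a per-byte boolean, the Reed-Solomon decoding itself is omitted, and the table geometry handling, names and signatures are assumptions.

```python
from dataclasses import dataclass
from typing import List, Set

# In an MPE-FEC frame, IP datagrams are written column-wise into the
# application data table and every row forms one RS(255, 191) codeword.
# With column-wise filling, the byte at linear offset k sits in row k % n_rows.

@dataclass
class IpetEntry:        # IPET-like entry (illustrative): datagram start and length
    start: int          # linear byte offset in the application data table
    length: int

def salvage_datagrams(byte_reliable: List[bool],
                      corrected_rows: Set[int],   # CRIT-like: rows repaired by the RS stage
                      ipet: List[IpetEntry],
                      n_rows: int) -> List[int]:
    """Indices of IP datagrams that can be delivered although the frame as a
    whole is still defective: every byte must either have been received
    reliably (primary-FEC flag) or lie in a row corrected by the secondary FEC."""
    good = []
    for i, e in enumerate(ipet):
        if all(byte_reliable[e.start + k] or ((e.start + k) % n_rows) in corrected_rows
               for k in range(e.length)):
            good.append(i)
    return good

# Toy frame with 4 rows: datagram 0 was received reliably; datagram 1 touches
# row 3, which neither the erasure flags nor the RS stage could vouch for.
reliable = [True] * 6 + [False] * 10
ipet = [IpetEntry(0, 6), IpetEntry(6, 4)]
print(salvage_datagrams(reliable, corrected_rows={2}, ipet=ipet, n_rows=4))  # -> [0]
```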
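As a brief aside on the scalable decoding used earlier for the dual-window PiP fast-search signal: the idea of obtaining a downscaled picture by only partially decoding intra-compressed data can be illustrated with the DC coefficients of the block transform. The snippet below assumes orthonormal 8x8 DCT coefficients and ignores entropy decoding, quantization and the standard-specific transforms that the thesis actually handles; it is a conceptual sketch only.

```python
import numpy as np

def downscale_from_dc(dct_blocks: np.ndarray) -> np.ndarray:
    """Partial decoding: given the 8x8 DCT coefficient blocks of an intra-coded
    picture (shape: block_rows x block_cols x 8 x 8, orthonormal DCT-II),
    return an 8x-downscaled picture using only the DC coefficients."""
    # For an orthonormal 2-D DCT over an 8x8 block, DC = 8 * (block mean),
    # so each block collapses to a single pixel of value DC / 8.
    return dct_blocks[:, :, 0, 0] / 8.0

# Example: a 720x576 SD luma picture (90x72 blocks) reduces to a 90x72 thumbnail.
blocks = np.zeros((72, 90, 8, 8))
blocks[:, :, 0, 0] = 8 * 128.0           # blocks whose mean luma value is 128
print(downscale_from_dc(blocks).shape)   # -> (72, 90)
```

Retaining a few more low-frequency coefficients per block and applying a correspondingly small inverse transform yields intermediate scaling factors, in the same spirit as the partial decoding whose quality range (23-32 dB) is quoted above.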
Recommended publications
  • Versatile Video Coding – the Next-Generation Video Standard of the Joint Video Experts Team
    Versatile Video Coding – The Next-Generation Video Standard of the Joint Video Experts Team. Mile High Video Workshop, Denver, July 31, 2018. Gary J. Sullivan, JVET co-chair. Acknowledgement: presentation prepared with Jens-Rainer Ohm and Mathias Wien, Institute of Communication Engineering, RWTH Aachen University.
    1. Introduction
    Video coding standardization organisations:
    • ISO/IEC MPEG = "Moving Picture Experts Group" (ISO/IEC JTC 1/SC 29/WG 11 = International Organization for Standardization and International Electrotechnical Commission, Joint Technical Committee 1, Subcommittee 29, Working Group 11)
    • ITU-T VCEG = "Video Coding Experts Group" (ITU-T SG16/Q6 = International Telecommunication Union – Telecommunication Standardization Sector (ITU-T, a United Nations organization, formerly CCITT), Study Group 16, Working Party 3, Question 6)
    • JVT = "Joint Video Team": collaborative team of MPEG & VCEG, responsible for developing AVC (discontinued in 2009)
    • JCT-VC = "Joint Collaborative Team on Video Coding": team of MPEG & VCEG, responsible for developing HEVC (established January 2010)
    • JVET = "Joint Video Experts Team": responsible for developing VVC (established Oct. 2015) – previously called "Joint Video Exploration Team"
    History of international video coding standardization
  • Security Solutions in MPEG Family of Standards
    Security solutions in MPEG family of standards. Touradj Ebrahimi, [email protected]. NET working? Broadcast Networks and their security, 16-17 June 2004, Geneva, Switzerland.
    MPEG: Moving Picture Experts Group
    • MPEG-1 (1992): MP3, Video CD, first-generation set-top box, ...
    • MPEG-2 (1994): Digital TV, HDTV, DVD, DVB, Professional, ...
    • MPEG-4 (1998, 99, ongoing): Coding of Audiovisual Objects
    • MPEG-7 (2001, ongoing): Description of Multimedia Content
    • MPEG-21 (2002, ongoing): Multimedia Framework
    MPEG-1 - ISO/IEC 11172:1992: Coding of moving pictures and associated audio for digital storage media at up to about 1,5 Mbit/s. Part 1 Systems - Program Stream; Part 2 Video; Part 3 Audio; Part 4 Conformance; Part 5 Reference software.
    MPEG-2 - ISO/IEC 13818:1994: Generic coding of moving pictures and associated audio. Part 1 Systems - joint with ITU; Part 2 Video - joint with ITU; Part 3 Audio; Part 4 Conformance; Part 5 Reference software; Part 6 DSM-CC; Part 7 AAC - Advanced Audio Coding; Part 9 RTI - Real Time Interface; Part 10 Conformance extension - DSM-CC; Part 11 IPMP on MPEG-2 Systems.
    MPEG-4 - ISO/IEC 14496:1998: Coding of audio-visual objects. Part 1 Systems; Part 2 Visual; Part 3 Audio; Part 4 Conformance; Part 5 Reference
  • The H.264 Advanced Video Coding (AVC) Standard
    Whitepaper: The H.264 Advanced Video Coding (AVC) Standard. What It Means to Web Camera Performance.
    Introduction. A new generation of webcams is hitting the market that makes video conferencing a more lifelike experience for users, thanks to adoption of the breakthrough H.264 standard. This white paper explains some of the key benefits of H.264 encoding and why cameras with this technology should be on the shopping list of every business.
    The Need for Compression. Today, Internet connection rates average in the range of a few megabits per second. While VGA video requires 147 megabits per second (Mbps) of data, full high definition (HD) 1080p video requires almost one gigabit per second of data, as illustrated in Table 1.
    Table 1. Display Resolution Format Comparison
    Format   Horizontal Pixels   Vertical Lines   Pixels      Megabits per second (Mbps)
    QVGA     320                 240              76,800      37
    VGA      640                 480              307,200     147
    720p     1280                720              921,600     442
    1080p    1920                1080             2,073,600   995
    Video Compression Techniques. Digital video streams, especially at high definition (HD) resolution, represent huge amounts of data. In order to achieve real-time HD resolution over typical Internet connection bandwidths, video compression is required. The amount of compression required to transmit 1080p video over a three megabits per second link is 332:1! Video compression techniques use mathematical algorithms to reduce the amount of data needed to transmit or store video.
    Lossless Compression. Lossless compression changes how data is stored without resulting in any loss of information. Zip files are losslessly compressed so that when they are unzipped, the original files are recovered.
  • Comparative Analysis of H.264 and Motion-JPEG2000 Compression for Video Telemetry
    Comparative Analysis of H.264 and Motion-JPEG2000 Compression for Video Telemetry
    Item Type: text; Proceedings. Authors: Hallamasek, Kurt; Hallamasek, Karen; Schwagler, Brad; Oxley, Les. Publisher: International Foundation for Telemetering. Journal: International Telemetering Conference Proceedings. Rights: Copyright © held by the author; distribution rights International Foundation for Telemetering. Link to item: http://hdl.handle.net/10150/581732
    Kurt Hallamasek, Karen Hallamasek, Brad Schwagler, Les Oxley ([email protected]), Ampex Data Systems Corporation, Redwood City, CA, USA.
    ABSTRACT. The H.264/AVC standard, popular in commercial video recording and distribution, has also been widely adopted for high-definition video compression in Intelligence, Surveillance and Reconnaissance and for Flight Test applications. H.264/AVC is the most modern and bandwidth-efficient compression algorithm specified for video recording in the Digital Recording IRIG Standard 106-11, Chapter 10. This bandwidth efficiency is largely derived from the inter-frame compression component of the standard. Motion JPEG-2000 compression is often considered for cockpit display recording, due to the concern that details in the symbols and graphics suffer excessively from artifacts of inter-frame compression and that critical information might be lost. In this paper, we report on a quantitative comparison of H.264/AVC and Motion JPEG-2000 encoding for HD video telemetry. Actual encoder implementations in video recorder products are used for the comparison.
    INTRODUCTION. The phenomenal advances in video compression over the last two decades have made it possible to compress the bit rate of a video stream of imagery acquired at 24 bits per pixel (8 bits for each of the red, green and blue components) to a rate of a fraction of a bit per pixel.
  • The H.264/MPEG4 Advanced Video Coding Standard and Its Applications
    Standards Report: The H.264/MPEG4 Advanced Video Coding Standard and its Applications. Detlev Marpe and Thomas Wiegand, Heinrich Hertz Institute (HHI); Gary J. Sullivan, Microsoft Corporation.
    ABSTRACT. H.264/MPEG4-AVC is the latest video coding standard of the ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG). H.264/MPEG4-AVC has recently become the most widely accepted video coding standard since the deployment of MPEG2 at the dawn of digital television, and it may soon overtake MPEG2 in common use. It covers all common video applications ranging from mobile services and videoconferencing to IPTV, HDTV, and HD video storage. This article discusses the technology behind the new H.264/MPEG4-AVC standard, focusing on the main distinct features of its core coding technology and its first set of extensions, known as the fidelity range extensions (FRExt).
    Regarding these challenges, H.264/MPEG4 Advanced Video Coding (AVC) [4], as the latest entry of international video coding standards, has demonstrated significantly improved coding efficiency, substantially enhanced error robustness, and increased flexibility and scope of applicability relative to its predecessors [5]. A recently added amendment to H.264/MPEG4-AVC, the so-called fidelity range extensions (FRExt) [6], further broaden the application domain of the new standard toward areas like professional contribution, distribution, or studio/post production. Another set of extensions for scalable video coding (SVC) is currently being designed [7, 8], aiming at a functionality that allows the reconstruction of video signals with lower spatio-temporal resolution or lower quality from parts
  • H.264/MPEG-4 Advanced Video Coding Alexander Hermans
    Seminar Report: H.264/MPEG-4 Advanced Video Coding. Alexander Hermans, Matriculation Number: 284141, RWTH, September 11, 2012.
    Contents: 1 Introduction (1.1 MPEG-4 AVC/H.264 Overview, 1.2 Structure of this paper); 2 Codec overview (2.1 Coding structure, 2.2 Encoder, 2.3 Decoder); 3 Main differences (3.1 Intra frame prediction, 3.2 Inter frame prediction, 3.3 Transform function, 3.4 Quantization, 3.5 Entropy encoding, 3.6 In-loop deblocking filter); 4 Profiles and levels (4.1 Profiles, 4.2 Levels); 5 Evaluation; 6 Patents; 7 Summary.
    1 Introduction. Since digital images have been created, the need for compression has been clear. The amount of data needed to store an uncompressed image is large. While it still might be feasible to store a full-resolution uncompressed image, storing video sequences without compression is not. Assuming a normal frame rate of about 25 frames per second, the amount of data needed to store one hour of high-definition video is about 560 GB. Compressing each image on its own would reduce this amount, but it would not make the video small enough to be stored on today's typical storage media. To overcome this problem, video compression algorithms have exploited the temporal redundancies in the video frames. By using previously encoded, or decoded, frames to predict values for the next frame, the data can be compressed with such a rate that storage becomes feasible.
  • Introduction to the Special Issue on Scalable Video Coding—Standardization and Beyond
    IEEE Transactions on Circuits and Systems for Video Technology, Vol. 17, No. 9, September 2007. Introduction to the Special Issue on Scalable Video Coding—Standardization and Beyond.
    A FEW YEARS ago, the subject of video coding received a large amount of skepticism. The basic argument was that the 1990s generation of international standards was close to the asymptotic limit of compression capability—or at least was good enough that there would be little market interest in something else. Moreover, there was the fear that a new video codec design created by an open committee process would inevitably contain so many compromises and take so long to be completed that it would fail to meet real industry needs or to be state-of-the-art in capability when finished. The Joint Video Team (JVT) then proved those views to be wrong and revitalized the field of video coding with the 2003 creation of the H.264/AVC standard (ITU-T Rec.
    […] of scalability, this one is perhaps the most difficult to design in a way that is both effective (from a compression capability point of view) and practical for implementation in terms of computational resource requirements. Finally, there is quality scalability, sometimes also referred to as SNR or fidelity scalability, in which the enhancement layer provides an increase in video quality without changing the picture resolution. For example, 1080p50/60 enhancement of today's 720p50/60 or 1080i25/30 HDTV can be deployed as a premium service while retaining full compatibility with deployed receivers, or HDTV can be deployed as an enhancement of SDTV video services. Another example is low-delay adaptation for multi-
  • Lossy-to-Lossless 3D Image Coding through Prior Coefficient Lookup Tables
    Lossy-to-Lossless 3D Image Coding through Prior Coefficient Lookup Tables. Francesc Aulí-Llinàs, Michael W. Marcellin, Joan Serra-Sagristà, and Joan Bartrina-Rapesta.
    Abstract—This paper describes a low-complexity, high-efficiency, lossy-to-lossless 3D image coding system. The proposed system is based on a novel probability model for the symbols that are emitted by bitplane coding engines. This probability model uses partially reconstructed coefficients from previous components together with a mathematical framework that captures the statistical behavior of the image. An important aspect of this mathematical framework is its generality, which makes the proposed scheme suitable for different types of 3D images. The main advantages of the proposed scheme are competitive coding performance, low computational load, very low memory requirements, straightforward implementation, and simple adaptation to most sensors.
    Index Terms—3D image coding, entropy coding, bitplane image coding, JPEG2000.
    JPEG2000 [12] attracts the most attention due to its advanced features and due to its inclusion in DICOM (Digital Imaging and Communications in Medicine). Hence, compression of 3D medical images with JPEG2000 has been extensively explored [5], [9], [13], [14]. In the remote sensing field, several bitplane coding schemes have been adapted to multi-dimensional data [15], [16]. The Karhunen-Loève transform has proven to be especially suitable to decorrelate spectral information [17]–[19]. Meanwhile, efficient approaches to exploit the intrinsic nature of remote sensing images have been proposed, including pixel classification [20] and models of anomalous pixels [21]. Again, JPEG2000 is widely known in the community due to the provision of advanced features such as multi-component transforms
  • Intra-Frame JPEG-2000 Vs. Inter-Frame Compression Comparison: the Benefits and Trade-Offs for Very High Quality, High Resolution Sequences
    SMPTE Technical Conference and Exhibition: Media Networks, the Reality!
    Intra-frame JPEG-2000 vs. Inter-frame Compression Comparison: The benefits and trade-offs for very high quality, high resolution sequences. Michael Smith and John Villasenor.
    For the past several decades, most of the research on image and video compression has focused on addressing highly bandwidth-constrained environments. However, some of the most compelling applications of image and video compression involve extremely high-quality, high-resolution image sequences where the primary constraints are related to quality and flexibility. The interest in flexibility, in particular the potential to generate lower resolution extractions from a single higher-resolution master, was a primary driver behind the recent evaluation and consideration of JPEG2000 compression for digital cinema sequences. JPEG2000 is an intra-frame coding method where a single frame is coded at a time and no frame-to-frame motion compensation is applied. While the efficiency of exploiting inter-frame motion compensation is well established for lower bit rates, far less attention has been given to this issue for digital cinema resolutions. This paper addresses the specific coding efficiency differences between inter- and intra-frame coding for very high quality, high-resolution image sequences and at a wide variety of bit rate ranges. The information presented here will be useful to system designers aiming to balance the flexibility and benefits of intra-frame approaches against the bandwidth advantages of inter-frame methods that exploit frame-to-frame correlation.
    Michael Smith ([email protected]) is a consultant in the area of digital imaging. He received his B.S.
  • Tutorial: the H.264 Advanced Video Compression Standard
    Tutorial: The H.264 Advanced Video Compression Standard. By: Sriram Sethuraman, Ittiam Systems (Pvt.) Ltd., Bangalore. IEEE Multimedia Compression Workshop, October 27, 2005, Bangalore.
    Overview: Motivation – comparison against other standards; AVC in the market; Standards history & tools progression; AVC coding tools – how they work; Fidelity Range Extension (FRExt) tools; Profiles & levels; SEI and VUI; JM – brief description; Implementation aspects; Carriage of AVC (MPEG-2 TS / RTP); Storage of AVC (file format); Scalable Video Coding; References.
    AVC in the market: 50+ companies have already announced products, spanning a broad range of product categories. DVB broadcasting (HD/SD): Harmonic, Tandberg. Digital Multimedia Broadcast (DMB) or ISDB-T: several companies in Korea, Japan. IPTV/VoD: Skystream, Minerva. Portable media players: Sony PSP, Apple's video iPod. ASICs: Broadcom, Conexant, Sigma Designs, ST Micro. STBs: Pace, Scientific Atlanta, Amino, Sentivision, Ateme, etc. Video conferencing systems: Polycom. DSP IP: Ittiam, Ateme, Ingenient, Sentivision, etc. RTL IP: Sciworx, Amphion, etc. PC based: QuickTime 7, Main Concept, Elecard. Analysis tools: Tektronix (Vprov), Interra.
    H.264 – Compression Advantage: [rate-distortion chart: Y-PSNR (dB) versus bitrate (kbps), H.264 vs. MPEG-4 SP]. H.264 with CABAC, no B-frames, 1 reference frame; sequence used was Foreman CIF, 240 frames.
    (H.264 Tutorial, presented at the IEEE Multimedia Compression Workshop, Bangalore, October 27, 2005. © Ittiam Systems Pvt. Ltd., 2003-2005. All Rights Reserved.)
  • Deep Neural Network Based Video Compression for Next Generation MPEG Video Codec Standardization
    Deep Neural Network based Video Compression for Next Generation MPEG Video Codec Standardization. This article introduces a brief status of DNNVC-related activities, current issues in the DNNVC MPEG ad-hoc group, and its standardization efforts. Tae Meon Bae, Ofinno • ofinno.com • October 2020.
    1. Introduction. Video transmission is expected to dominate 79% of current global internet traffic with an annual growth rate of 31% by 2020 [1]. With this trend, the advancement of video compression technology becomes more important to alleviate radically increasing internet traffic. The Moving Picture Experts Group (MPEG) is one of the most important standardization groups working on video compression technologies. MPEG is formed by the International Organization for Standardization/International Electrotechnical Commission (ISO/IEC) and has been leading audio and visual compression and transmission standardizations since the 1980s. The video codecs standardized by MPEG such as MPEG-2, MPEG-4, MPEG-4 Advanced Video Coding (AVC, or H.264), and
    […] in April 2020. Since winning the ImageNet Challenge (ILSVRC) for image classification in 2012, DNN has attracted the attention of engineers and researchers [4]. DNN illustrated outstanding performance in prediction and classification. Because prediction and classification are also particularly important in video compression, codec experts started paying attention to DNN as a promising candidate for the next generation video coding approach. This article introduces a brief status of DNNVC-related activities, current issues in the DNNVC MPEG ad-hoc group, and its standardization efforts. To describe the latest DNNVC approaches, we referenced papers of Computer Vision and Pattern Recognition (CVPR) 2020, and a special section on learning-based image and video coding in IEEE Transactions on Circuits and Systems for Video Technology (TCSVT) 2020.