MPEG-2 in a Stream Programming Language


Matthew Drake, Hank Hoffmann, Rodric Rabbah, and Saman Amarasinghe
MIT Computer Science and Artificial Intelligence Laboratory

Abstract

Image and video codecs are prevalent in multimedia applications, ranging from embedded systems, to desktop computers, to high-end servers such as HDTV editing consoles. It is not uncommon, however, for developers to create (from scratch) and customize their codec implementations for each of the architecture targets they intend their coders and decoders to run on. This practice is time consuming and error prone, leading to code that is not malleable or portable. In this paper we describe an implementation of the MPEG-2 codec using the StreamIt programming language. StreamIt is an architecture-independent stream language that aims to improve programmer productivity while concomitantly exposing the inherent parallelism and communication topology of the application. We describe why MPEG is a good match for the streaming programming model, and illustrate the malleability of the implementation using a simple modification to the decoder to support alternate color compression formats. StreamIt allows for modular application development, which also reduces the complexity of the debugging process, since stream components can be verified independently. This in turn leads to greater programmer productivity. We implement a fully functional MPEG-2 decoder in StreamIt. The decoder was developed in eight weeks by a single student programmer who did not have any prior experience with MPEG or other video codecs. Many of the MPEG-2 components were subsequently reused to assemble a JPEG codec.

1. Introduction

Image compression, whether for still pictures or motion pictures (e.g., video), plays an important role in Internet and multimedia applications, digital appliances such as HDTV, and handheld devices such as digital cameras and mobile phones. Compression allows one to represent images and video with a much smaller amount of data and negligible quality loss. The reduction in data decreases storage requirements (important for embedded devices) and provides higher effective transmission rates (important for Internet-enabled devices).

Unfortunately, implementing a compression scheme can be especially difficult. For performance reasons, implementations are typically not portable, as they are tuned to specific architectures. And while image and video compression is needed on embedded systems, desktop PCs, and high-end servers, writing a separate implementation for every architecture is not cost effective. Furthermore, compression standards are continuously evolving, and thus compression programs must be easy to modify and update.

A typical compression algorithm involves three types of operations: data representation, lossy compression, and lossless compression. These operations are semi-autonomous, exhibit data and pipeline parallelism, and fit easily into a sequence of distinct processing kernels. As such, image and video compression is a good match for the streaming model of computation, which affords certain advantages in terms of programmability, robustness, and achieving high performance.

Our goal is to implement well-known still image and motion picture compression standards—such as JPEG and MPEG-2—in StreamIt [36], a high-level architecture-independent language for the streaming domain. This will result in clean, malleable, and portable codes. In addition, using the stream-aware StreamIt compiler, we can produce highly optimized codes that are competitive with hand-tuned implementations. Our architecture targets include conventional processors, as well as new and emerging wire-exposed and multi-core architectures [12, 24, 34, 35, 5].

This work is in the context of the StreamIt programming language [36], an architecture-independent stream language that aims to improve programmer productivity within the streaming domain. StreamIt provides an intuitive programming model, allowing the programmer to build an application by connecting components together into a stream graph, where the nodes represent actors that carry out the computation, and the edges represent FIFO communication channels between actors. As a result, the parallelism and communication topology of the application are exposed, empowering the compiler to perform many stream-aware optimizations [1, 11, 19, 30] that elude other languages.

2. MPEG-2 Video Coding and Decoding

MPEG-2 [14] is a popular coding and decoding standard for digital video data. The scheme is a subset of both the DVD-Video [33] standard for storing movies and the Digital Video Broadcasting specifications for transmitting HDTV and SDTV [10]. It is used by a wide variety of multimedia applications and appliances, such as the TiVo Digital Video Recorder [38] and the DirecTV satellite broadcast service [6].

MPEG-2 encoding uses both lossy compression and lossless compression. Lossy compression permanently eliminates information from a video based on a human perception model. Humans are much better at discerning changes in light intensity (luminance information) than changes in color (chrominance information). Humans are also much more sensitive to low-frequency image components, such as a blue sky, than to high-frequency image components, such as a plaid shirt. Details which humans are likely to miss can be thrown away without affecting the perceived video quality.

Lossless compression eliminates redundant information while allowing for its later reconstruction. Similarities between adjacent video pictures are encoded using motion prediction, and all data is Huffman compressed [13]. The amount of lossy and lossless compression depends on the video data. Common compression ratios range from 10:1 to 100:1. For example, HDTV, with a resolution of 1280x720 pixels and a streaming rate of 59.94 frames per second, has an uncompressed data rate of 1.33 Gigabits per second. It is compressed at an average ratio of 66:1, reducing the required streaming rate to 20 Megabits per second [40].

2.1. MPEG Coding

The encoder operates on a sequence of pictures. Each picture is made up of pixels arranged in 16x16 arrays known as macroblocks. A macroblock consists of a 2x2 array of blocks (each of which contains an 8x8 array of pixels). There is a separate series of macroblocks for each color channel, and the macroblocks for a given channel are sometimes downsampled to a 2x1 or 1x1 block matrix. The compression in MPEG is achieved largely via motion estimation, which detects and eliminates similarities between macroblocks across pictures. Specifically, the motion estimator calculates a motion vector that represents the horizontal and vertical displacement of a given macroblock (i.e., the one being encoded) from a matching macroblock-sized area in a reference picture. The matching macroblock is removed (subtracted) from the current picture on a pixel-by-pixel basis, and a motion vector is associated with the macroblock describing its displacement relative to the reference picture. The result is a residual predictive-coded (P) picture, which represents the difference between the current picture and the reference picture. Reference pictures encoded without the use of motion prediction are intra-coded (I) pictures. In addition to forward motion prediction, it is also possible to encode new pictures using motion estimation from both previous and subsequent pictures; these are bidirectionally predictive-coded (B) pictures.

The residual data is encoded using a discrete cosine transform (DCT), which separates the picture into parts with varying visual importance. The input to the DCT is one block, and the output is an 8x8 matrix of frequency coefficients. The upper left corner of the matrix represents low frequencies, whereas the lower right corner represents higher frequencies. The latter are often small and can be neglected without sacrificing human visual perception.

The DCT coefficients are quantized to reduce the number of bits needed to represent them. Following quantization, many coefficients are effectively reduced to zero. The DCT matrix is then run-length encoded by emitting each non-zero coefficient, followed by the number of zeros that precede it, along with the number of bits needed to represent the coefficient, and its value. The run-length encoder scans the DCT matrix in a zig-zag order (Figure 2) to consolidate the zeros in the matrix.

Finally, the output of the run-length encoder, the motion vector data, and other information (e.g., the type of picture) are Huffman coded to further reduce the average number of bits per data item. The compressed stream is sent to the output device.

2.2. MPEG Decoding

An MPEG-2 input stream is organized as a Group of Pictures (GOP), which contains all the information needed to reconstruct a video. The GOP contains the three kinds of pictures produced by the encoder, namely I, P, and B pictures. I pictures are intended to assist scene cuts, random access, fast forward, or fast reverse playback [14, p. 14]. A typical I:P:B picture ratio in a GOP is 1:3:8, and a typical picture pattern is a repetition of the following logical sequence: I1 B2 B3 P4 B5 B6 P7 B8 B9 P10 B11 B12, where the subscripts denote positions in the original video. However, to simplify the decoder, the encoder reorders the pictures to produce the following pattern: I1 P4 B2 B3 P7 B5 B6 P10 B8 B9 B11 B12. Under this configuration, if the decoder encounters a P picture, its motion prediction is with respect to the previously decoded I or P picture; if the decoder encounters a B picture, its motion prediction is with respect to the two previously decoded I or P pictures.

As with the encoding process, pictures are divided up into 16x16-pixel macroblocks, themselves composed of 8x8 blocks. Macroblocks specify colors using a luminance channel to represent light intensity and two chrominance channels to represent color. MPEG-2 streams specify a chroma format which allows the chrominance data to be sampled at a lower rate. The most common chroma format is 4:2:0, which represents a macroblock using four blocks for the luminance channel and one block for each of the two chrominance channels.
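The stream-graph model described here — actors connected by FIFO channels — can be illustrated with a small Python sketch. The `Channel` and `Actor` names are hypothetical stand-ins for exposition only; StreamIt itself expresses this with `filter`, `pipeline`, and `splitjoin` constructs rather than explicit queue objects:

```python
from collections import deque

class Channel:
    """FIFO communication edge between two actors."""
    def __init__(self):
        self.q = deque()
    def push(self, v):
        self.q.append(v)
    def pop(self):
        return self.q.popleft()
    def __len__(self):
        return len(self.q)

class Actor:
    """A node in the stream graph; work() fires while input is available."""
    def __init__(self, fn, inp, out):
        self.fn, self.inp, self.out = fn, inp, out
    def work(self):
        while len(self.inp):
            self.out.push(self.fn(self.inp.pop()))

# A two-stage pipeline (scale, then offset), mimicking chained filters.
a, b, c = Channel(), Channel(), Channel()
scale = Actor(lambda x: 2 * x, a, b)
offset = Actor(lambda x: x + 1, b, c)

for v in [1, 2, 3]:
    a.push(v)
scale.work()
offset.work()
print([c.pop() for _ in range(len(c))])  # [3, 5, 7]
```

Because each actor touches only its own channels, stages can be tested in isolation — the modularity and exposed parallelism the text attributes to the streaming model.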
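The frequency behavior described above — a block's energy gathering in the upper-left (low-frequency) corner, with high-frequency coefficients quantized away — can be sketched with a direct implementation of the standard 2-D DCT-II. This is a naive reference under the orthonormal definition, not the fast factorization a real codec would use, and the quantizer step of 16 is an arbitrary illustrative choice:

```python
import math

def dct8x8(block):
    """Naive O(N^4) 2-D DCT-II with orthonormal scaling (reference only)."""
    N = 8
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            cu = math.sqrt(1.0 / N) if u == 0 else math.sqrt(2.0 / N)
            cv = math.sqrt(1.0 / N) if v == 0 else math.sqrt(2.0 / N)
            out[u][v] = cu * cv * s
    return out

# A flat 8x8 block puts all its energy into the upper-left (DC) coefficient;
# a coarse uniform quantizer then reduces every other coefficient to zero.
coeffs = dct8x8([[100] * 8 for _ in range(8)])
q = [[round(coeffs[u][v] / 16) for v in range(8)] for u in range(8)]
print(q[0][0])  # 50; all other quantized coefficients are 0
```

A visually busy block would instead spread energy toward the lower-right, where quantization discards exactly the detail the perception model says viewers are least likely to miss.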
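The zig-zag scan and run-length step can be sketched as follows. This simplified version emits only (preceding-zeros, value) pairs; the actual MPEG-2 coder also records the bit width of each coefficient, and the function names here are illustrative, not from any codec API:

```python
def zigzag_indices(n=8):
    """Visit matrix positions diagonal by diagonal, alternating direction."""
    order = []
    for s in range(2 * n - 1):          # s = row + col along one diagonal
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        if s % 2 == 0:
            diag.reverse()              # even diagonals run bottom-left to top-right
        order.extend(diag)
    return order

def run_length_encode(matrix):
    """Emit (zeros_preceding, value) for each non-zero coefficient in zig-zag order."""
    pairs, run = [], 0
    for r, c in zigzag_indices(len(matrix)):
        v = matrix[r][c]
        if v == 0:
            run += 1
        else:
            pairs.append((run, v))
            run = 0
    return pairs

block = [[0] * 8 for _ in range(8)]
block[0][0], block[0][1], block[2][0] = 50, -3, 7
print(run_length_encode(block))  # [(0, 50), (0, -3), (1, 7)]
```

The zig-zag order visits low-frequency positions first, so after quantization the trailing high-frequency zeros collapse into long runs — which is precisely what makes the subsequent Huffman stage effective.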
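The display-to-transmission reordering described above follows a simple rule: every reference (I or P) picture is emitted before the B pictures that are predicted from it. A minimal sketch (the helper name is mine, and pictures are modeled as plain strings):

```python
def to_transmission_order(display):
    """Reorder pictures so each B picture follows both of its reference pictures."""
    out, pending_b = [], []
    for pic in display:
        if pic[0] in "IP":        # a reference picture closes the pending B run
            out.append(pic)
            out.extend(pending_b)
            pending_b = []
        else:                     # B pictures wait for their next reference
            pending_b.append(pic)
    return out + pending_b

gop = ["I1", "B2", "B3", "P4", "B5", "B6",
       "P7", "B8", "B9", "P10", "B11", "B12"]
print(" ".join(to_transmission_order(gop)))
# I1 P4 B2 B3 P7 B5 B6 P10 B8 B9 B11 B12
```

This reproduces the pattern quoted in the text, and shows why the reorder simplifies the decoder: by the time a B picture arrives, both pictures it predicts from have already been decoded.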
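The chroma-format arithmetic can be made concrete: a macroblock always carries four 8x8 luminance blocks (16x16 pixels), while the number of chrominance blocks depends on how Cb and Cr are subsampled. A small sketch of the block counts for the three MPEG-2 chroma formats (the table and function are mine, but the per-format counts follow the standard definitions):

```python
# 8x8 blocks contributed by EACH chrominance channel (Cb or Cr), per format.
CHROMA_BLOCKS = {
    "4:2:0": 1,  # quarter-resolution chroma: one 8x8 block per channel
    "4:2:2": 2,  # half-resolution chroma: two 8x8 blocks per channel
    "4:4:4": 4,  # full-resolution chroma: four 8x8 blocks per channel
}

def blocks_per_macroblock(chroma_format):
    luma = 4                                    # 16x16 pixels = 2x2 grid of 8x8 blocks
    chroma = 2 * CHROMA_BLOCKS[chroma_format]   # two chrominance channels
    return luma + chroma

for fmt in CHROMA_BLOCKS:
    print(fmt, blocks_per_macroblock(fmt))
# 4:2:0 6
# 4:2:2 8
# 4:4:4 12
```

A decoder parameterized this way needs only the table entry to switch formats — the kind of localized change behind the paper's "alternate color compression formats" modification.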