United States Patent US 7,016,547 B1 (Smirnov), Date of Patent: Mar. 21, 2006

(12) United States Patent          (10) Patent No.: US 7,016,547 B1
     Smirnov                       (45) Date of Patent: Mar. 21, 2006

(54) ADAPTIVE ENTROPY ENCODING/DECODING FOR SCREEN CAPTURE CONTENT

(75) Inventor: Sergey Smirnov, Redmond, WA (US)

(73) Assignee: Microsoft Corporation, Redmond, WA (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 768 days.

(21) Appl. No.: 10/186,639

(22) Filed: Jun. 28, 2002

(51) Int. Cl.: G06K 9/36 (2006.01)

(52) U.S. Cl.: 382/245; 382/166; ...

(58) Field of Classification Search: 382/162, 166, 209, 218, 219, 173, 244, 232, 233, 234, 245; 341/87, 106, 55, 67; 345/545, 538; 375/240.1, 241, 253; 719/328; 358/426.13. See application file for complete search history.

(56) References Cited

U.S. PATENT DOCUMENTS

4,698,672 A   10/1987  Chen
4,813,056 A    3/1989  Fedele
4,901,075 A    2/1990  Vogel
5,043,919 A    8/1991  Callaway et al.
5,089,818 A    2/1992  Mahieux et al.
5,128,758 A    7/1992  Azadegan
5,179,442 A    1/1993  Azadegan
5,227,788 A    7/1993  Johnston
5,266,941 A   11/1993  Akeley et al.
5,394,170 A    2/1995  Akeley et al.
5,400,075 A    3/1995  Savatier
5,457,495 A   10/1995  Hartung
5,461,421 A   10/1995  Moon
5,467,134 A   11/1995  Laney
5,481,553 A    1/1996  Suzuki
5,504,591 A    4/1996  Dujari
5,544,286 A    8/1996  Laney
5,559,557 A    9/1996  Kato et al.
5,579,430 A   11/1996  Grill et al.
5,654,706 A    8/1997  Jeong
5,661,755 A    8/1997  Van de Kerkhof et al.
5,717,821 A    2/1998  Tsutsui
5,825,830 A   10/1998  Kopf
5,835,144 A   11/1998  Matsumura
5,883,633 A    3/1999  Gill et al.
5,982,437 A   11/1999  Okazaki
5,990,960 A   11/1999  Murakami
6,002,439 A   12/1999  Murakami
6,054,943 A *  4/2000  Lawrence  341/87
6,097,759 A    8/2000  Murakami
6,100,825 A    8/2000  Sedluk
6,111,914 A    8/2000  Bist
6,148,109 A   11/2000  Boon
6,215,910 B1   4/2001  Chaddha
6,223,162 B1   4/2001  Chen
6,226,407 B1 * 5/2001  Zabih et al.  382/209
6,233,017 B1   5/2001  Chaddha
6,253,165 B1   6/2001  Malvar
6,292,588 B1   9/2001  Shen
6,300,888 B1  10/2001  Chen
6,304,928 B1  10/2001  Mairs et al.
6,337,881 B1   1/2002  Chaddha
6,341,165 B1   1/2002  Gbur
6,345,123 B1   2/2002  Boon
6,377,930 B1   4/2002  Chen
6,404,931 B1   6/2002  Chen
6,421,738 B1   7/2002  Ratan et al.  719/328
6,477,280 B1  11/2002  Malvar
6,542,631 B1   4/2003  Ishikawa  382/162
6,573,915 B1 * 6/2003  Sivan et al.  715/781
6,646,578 B1  11/2003  Au
6,721,700 B1   4/2004  Yin
6,728,317 B1   4/2004  Demos
6,766,293 B1   7/2004  Herre
6,771,777 B1   8/2004  Gbur
2004/0136457 A1  7/2004  Funnell et al.

FOREIGN PATENT DOCUMENTS

EP  0540350    5/1993
EP  0910927    1/1998
EP  0966793    9/1998
EP  0931386    1/1999
JP  5-292481  11/1993
JP  7-274171  10/1995

OTHER PUBLICATIONS

Brandenburg, "ASPEC Coding," AES 10th International Conference, pp. 81-90 (1991).
De Agostino et al., "Parallel Algorithms for Optimal Compression using Dictionaries with the Prefix Property," Proc. Data Compression Conference '92, IEEE Computer Society Press, pp. 52-62 (1992).
Gill et al., "Creating High-Quality Content with Microsoft Windows Media Encoder 7," 4 pp. (2000) [downloaded from the World Wide Web on May 1, 2002].
Li et al., "Optimal Linear Interpolation Coding for Server Based Computing," Proc. IEEE Int'l Conf. on Communications, 5 pp. (Apr.-May 2002).
Matthias, "An Overview of Microsoft Windows Media Screen Technology," 3 pp. (2000) [downloaded from the World Wide Web on May 1, 2002].
OPTX International, "OPTX Improves Technology-Based Training with ScreenWatch™ 3.0. Versatile Screen Capture Software Adds High Color and Live Webcast Support," 1 p., document marked Feb. 15, 2001 [downloaded from the World Wide Web on Sep. 22, 2005].
OPTX International, "OPTX International Marks One Year Anniversary of ScreenWatch With Release of New 2.0 Version," 1 p., document marked May 16, 2000 [downloaded from the World Wide Web on Sep. 22, 2005].
OPTX International, "New ScreenWatch™ 4.0 Click and Stream™ Wizard From OPTX International Makes Workplace Communication Effortless," 1 p., document marked Sep. 24, 2001 [downloaded from the World Wide Web on Sep. 22, 2005].
Palmer et al., "Shared Desktop: A Collaborative Tool for Sharing 3-D Applications Among Different Window Systems," Digital Tech. Journal, v. 9, no. 3, pp. 42-49 (1997).
Schaar-Mitrea et al., "Hybrid Compression of Video with Graphics in DTV Communication Systems," IEEE Trans. on Consumer Electronics, pp. 1007-1017 (2000).
Shamoon et al., "A Rapidly Adaptive Lossless Compression Algorithm for High Fidelity Audio Coding," IEEE Data Compression Conf. 1994, pp. 430-439 (Mar. 1994).
TechSmith Corporation, "TechSmith Camtasia Screen Recorder SDK," 2 pp. (2001).
TechSmith Corporation, "Camtasia Feature of the Week: Quick Capture," 2 pp. (downloaded from the World Wide Web on May 9, 2002; document dated Jan. 4, 2001).
TechSmith Corporation, "Camtasia Screen Recorder SDK DLL API User Guide," version 1.0, 66 pp. (2001).
TechSmith Corporation, "Camtasia v3.0.1 README.TXT," 19 pp. (Jan. 2002).

* cited by examiner

Primary Examiner: Anh Hong Do
(74) Attorney, Agent, or Firm: Klarquist Sparkman, LLP

(57) ABSTRACT

Adaptive entropy encoding and decoding techniques are described. For example, a screen capture encoder and decoder perform adaptive entropy encoding and decoding of palettized screen capture content in screen capture video. The encoder selects between different entropy encoding modes (such as an arithmetic coding mode and a combined run length/Huffman encoding mode, which allow the encoder to emphasize bitrate reduction at some times and encoding speed at other times). The run length encoding can use run value symbols adapted to common patterns of redundancy in the content. When the encoder detects series of pixels that could be encoded with different, alternative runs, the encoder selects between the alternative runs based upon efficiency criteria. The encoder also performs adaptive Huffman encoding, efficiently parameterizing Huffman code tables to reduce overall bitrate while largely preserving the compression gains of the adaptive Huffman encoding. A decoder performs converse operations.

15 Claims, 19 Drawing Sheets

DRAWING SHEETS (as recoverable from this text extract)

Figure 2 (Sheet 2 of 19): Computing environment 200, including processing unit 210, memory 220, display card 230, storage 240, input device(s) 250, output device(s) 260, communication connection(s) 270, and software 280 implementing the adaptive screen capture entropy encoder and/or decoder.

Figure 6 (Sheet 6 of 19), technique 600 (blocks 610, 630, 640): start next frame; run length encode runs for the frame and Huffman encode the results of the run length encoding, or arithmetic encode data for the frame.

Figure 8 (Sheet 6 of 19), technique 800: start next frame; Huffman decode data for runs for the frame and run length decode runs for the frame, or arithmetic decode data for the frame.

Figure 10 (Sheet 8 of 19), technique 1000: start next frame; start alternative run(s) when applicable; grow each extant run by adding a pixel; when each run has ended, select the longest run; encode pixels for the selected run; encode any skipped pixels.

(Figure content on the other drawing sheets did not survive text extraction.)

Figure 15 (Sheet 12 of 19), listing 1500, "Deterministically assign codes based on an incomplete code length table." The C code below is reconstructed from the garbled extract; the listing is cut off in the source.

    // Deterministically assign codes based on an incomplete code length table
    void GenerateVLCArray(CodeLen *codeLen, int nReservedSyms, int *vlcArray, int nSyms)
    {
        int iSym, len, nextAvail, nOtherSyms;

        *vlcArray++ = nSyms;
        for (iSym = 0; iSym < nSyms; iSym++)
            vlcArray[iSym * 2 + 1] = 0;

        // assign codes according to the code length table
        nextAvail = 0;
        len = 1;
        for (iSym = 0; iSym < nReservedSyms; iSym++)
        {
            if (codeLen[iSym].len != len)
            {
                nextAvail <<= (codeLen[iSym].len - len);
                len = codeLen[iSym].len;
            }
            vlcArray[codeLen[iSym].sym * 2] = nextAvail++;
            vlcArray[codeLen[iSym].sym * 2 + 1] = len;
            Assert(nextAvail < (1 << len));
        }

        // assign (almost) equal length codes for symbols not in the table
        nOtherSyms = nSyms - nReservedSyms;
        int nExcessCodes;
        do
        {
            len++;
            nextAvail <<= 1;
            nExcessCodes = (1 << len) - nextAvail - nOtherSyms;
        } while (nExcessCodes < 0);
        Assert(nExcessCodes < nOtherSyms);
        for (iSym = 0; iSym < nSyms; iSym++)
        {
            if (vlcArray[iSym * 2 + 1] == 0)
            {
                if (nExcessCodes)
                {
                    // use shorter codes while they last
                    vlcArray[iSym * 2] = nextAvail >> 1;
                    // ... (remainder of the listing is truncated in the source)
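To make the Figure 15 listing concrete, here is a small, self-contained sketch (not taken from the patent) of the canonical code assignment idea that the listing's first loop appears to implement for its reserved symbols: codes are handed out in increasing numeric order, and the running counter is left-shifted whenever the code length grows. The sketch applies it to a complete length table so it stands alone; the shape of CodeLen and the [nSyms, code, length, ...] output layout are assumptions based on how the listing indexes its arguments.

    #include <assert.h>
    #include <stdio.h>

    typedef struct { int sym; int len; } CodeLen;   /* assumed shape of the patent's CodeLen */

    /* Assign canonical prefix codes for a complete, length-sorted table.
       Output layout mirrors the Figure 15 listing: out[0] = nSyms, then a
       (code, length) pair per symbol. */
    static void AssignCanonicalCodes(const CodeLen *codeLen, int nSyms, int *out)
    {
        int nextAvail = 0, len = 1;
        *out++ = nSyms;
        for (int i = 0; i < nSyms; i++) {
            if (codeLen[i].len != len) {
                nextAvail <<= (codeLen[i].len - len);  /* lengths must be nondecreasing */
                len = codeLen[i].len;
            }
            out[codeLen[i].sym * 2] = nextAvail++;
            out[codeLen[i].sym * 2 + 1] = len;
            assert(nextAvail <= (1 << len));
        }
    }

    int main(void)
    {
        /* Complete length table (Kraft sum 1/2 + 1/4 + 1/8 + 1/8 = 1), sorted by length. */
        CodeLen table[] = { {2, 1}, {0, 2}, {1, 3}, {3, 3} };
        int out[1 + 2 * 4];

        AssignCanonicalCodes(table, 4, out);
        for (int s = 0; s < 4; s++)
            printf("symbol %d -> code %d (%d bits)\n", s, out[1 + 2 * s], out[1 + 2 * s + 1]);
        return 0;
    }

Running this yields the prefix-free codes 0, 10, 110, 111: the same deterministic assignment any decoder can rebuild from the code lengths alone, which is what lets the encoder parameterize Huffman code tables cheaply instead of transmitting the codes themselves.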
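Stepping back to the encoder flow summarized for Figure 6: the abstract describes a per-frame selection between an arithmetic coding mode and a combined run length/Huffman encoding mode. The sketch below, with invented type and function names, shows roughly how that switch and the first (run length) stage might look for one row of palettized pixels; it illustrates the described flow and is not code from the patent.

    #include <stdio.h>

    /* Illustrative names only; the patent does not define this API. */
    typedef enum { MODE_ARITHMETIC, MODE_RLE_HUFFMAN } EntropyMode;

    typedef struct { unsigned char value; int length; } Run;

    /* First stage of the run length/Huffman path: turn a row of palettized
       pixels into (value, run length) pairs.  The second stage (not shown)
       would Huffman encode the resulting run symbols; the arithmetic coding
       mode would bypass both stages and code the frame data directly. */
    static int RunLengthEncodeRow(const unsigned char *px, int n, Run *out)
    {
        int nRuns = 0;
        int i = 0;
        while (i < n) {
            int j = i + 1;
            while (j < n && px[j] == px[i])
                j++;
            out[nRuns].value = px[i];
            out[nRuns].length = j - i;
            nRuns++;
            i = j;
        }
        return nRuns;
    }

    int main(void)
    {
        /* A toy row of palettized screen content: flat background, one detail pixel. */
        unsigned char row[] = { 7, 7, 7, 7, 7, 3, 7, 7, 7, 7, 7, 7 };
        Run runs[sizeof row];

        /* Per-frame mode choice, e.g. favoring encoding speed for this frame. */
        EntropyMode mode = MODE_RLE_HUFFMAN;

        if (mode == MODE_RLE_HUFFMAN) {
            int n = RunLengthEncodeRow(row, (int)(sizeof row), runs);
            for (int r = 0; r < n; r++)
                printf("run: value %d x %d\n", runs[r].value, runs[r].length);
            /* ...Huffman encode the run symbols here... */
        } else {
            /* ...arithmetic encode the frame data here... */
        }
        return 0;
    }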
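Figure 10 describes growing alternative candidate runs in parallel and keeping the most efficient one. The sketch below illustrates only that selection idea with two made-up candidate run types, a "repeat the current value" run and a "copy from the row above" run; the patent's actual run value symbols and efficiency criteria may differ, and the "encode any skipped pixels" step is omitted.

    #include <stdio.h>

    /* Length of a run that repeats the value at position i within the row. */
    static int GrowRepeatRun(const unsigned char *row, int n, int i)
    {
        int len = 1;
        while (i + len < n && row[i + len] == row[i])
            len++;
        return len;
    }

    /* Length of a run that copies pixels from the row above. */
    static int GrowCopyAboveRun(const unsigned char *row, const unsigned char *above,
                                int n, int i)
    {
        int len = 0;
        while (i + len < n && row[i + len] == above[i + len])
            len++;
        return len;
    }

    int main(void)
    {
        unsigned char above[] = { 1, 1, 2, 2, 2, 2, 2, 9 };
        unsigned char row[]   = { 1, 1, 2, 2, 2, 2, 2, 4 };
        int n = (int)(sizeof row);

        for (int i = 0; i < n; ) {
            /* Grow both candidate runs, then keep the longer one (a simple
               stand-in for the patent's efficiency criteria). */
            int repeatLen = GrowRepeatRun(row, n, i);
            int copyLen   = GrowCopyAboveRun(row, above, n, i);
            if (copyLen >= repeatLen && copyLen > 0) {
                printf("pos %d: copy-above run of %d pixels\n", i, copyLen);
                i += copyLen;
            } else {
                printf("pos %d: repeat run of %d pixels (value %d)\n", i, repeatLen, row[i]);
                i += repeatLen;
            }
        }
        return 0;
    }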