Lecture 5: Lossless Coding (II)
Shujun LI (李树钧): INF-10845-20091 Multimedia Coding
May 20, 2009

Outline
• Review
• Arithmetic Coding
• Dictionary Coding
• Run-Length Coding
• Lossless Image Coding
• Data Compression: Hall of Fame
• References

Image and video encoding: A big picture
[Block diagram: an encoder pipeline from input image/video to encoded image/video, comprising pre-processing (A/D conversion, color space conversion, pre-filtering, partitioning, …), lossy coding (quantization, transform coding, model-based coding, …), lossless coding (entropy coding, dictionary-based coding, run-length coding, and predictive coding such as differential coding, motion estimation and compensation, context-based coding, …), and post-processing (post-filtering).]

The ingredients of entropy coding
• A random source (X, P).
• A statistical model (X, P′) as an estimation of the random source.
• An algorithm to optimize the coding performance, i.e., to minimize the average codeword length.
• At least one designer …

FLC, VLC and V2FLC
• FLC = fixed-length coding/code(s)/codeword(s): each symbol x_i emitted from a random source (X, P) is encoded as an n-bit codeword, where |X| ≤ 2^n.
• VLC = variable-length coding/code(s)/codeword(s): each symbol x_i emitted from a random source (X, P) is encoded as an n_i-bit codeword. FLC can be considered a special case of VLC, where n_1 = … = n_{|X|}.
• V2FLC = variable-to-fixed length coding/code(s)/codeword(s): a symbol or a string of symbols is encoded as an n-bit codeword. V2FLC can also be considered a special case of VLC.

Static coding vs. Dynamic/Adaptive coding
• Static coding = the statistical model P′ is static, i.e., it does not change over time.
• Dynamic/Adaptive coding = the statistical model P′ is dynamically updated, i.e., it adapts itself to the context (i.e., it changes over time). Dynamic/Adaptive coding ⊂ Context-based coding.
• Hybrid coding = Static + Dynamic coding: a codebook is maintained at the encoder side, and the encoder dynamically chooses a code for a number of symbols and informs the decoder about the choice.

A coding taxonomy
• Shannon coding
• Shannon-Fano coding
• Huffman coding
• Arithmetic coding (Range coding)
  – Shannon-Fano-Elias coding
• Universal coding
  – Exp-Golomb coding (H.264/MPEG-4 AVC, Dirac)
  – Elias coding family, Levenshtein coding, …
• Non-universal coding
  – Truncated binary coding, unary coding, …
  – Golomb coding ⊃ Rice coding
• Tunstall coding ⊂ V2FLC
• See David Salomon, Variable-Length Codes for Data Compression, Springer, 2007.

Shannon-Fano Code: An example
• X = {A, B, C, D, E}, P = {0.35, 0.2, 0.19, 0.13, 0.13}, Y = {0, 1}.
• Recursive splitting into near-equal-probability halves: {A, B} (0.35+0.2 = 0.55) vs. {C, D, E} (0.19+0.13+0.13 = 0.45); then {C} vs. {D, E} (0.13+0.13 = 0.26).
• A possible code: A→00, B→01, C→10, D→110, E→111.

Universal coding (code)
• A code is called universal if L ≤ C1(H + C2) for all possible values of H, where C1, C2 ≥ 1.
• You may see a different definition somewhere, but the basic idea remains the same: a universal code works like an optimal code, except that there is a bound defined by the constant C1.
• A universal code is called asymptotically optimal if C1 → 1 when H → ∞.

Coding positive/non-negative integers
• Naive binary coding: |X| = 2^k — each integer is encoded by its k-bit binary representation.
• Truncated binary coding: |X| = 2^k + b, X = {0, 1, …, 2^k+b−1}, where 0 ≤ b < 2^k.
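As a quick illustration, truncated binary coding can be sketched in a few lines of Python (the function name and the bit-string representation are our own, not from the slides):

```python
def truncated_binary(x, n):
    """Encode integer x in [0, n) with a truncated binary code,
    where n = 2**k + b and 0 <= b < 2**k.  Returns a bit string."""
    k = n.bit_length() - 1       # largest k with 2**k <= n
    b = n - (1 << k)             # number of "extra" symbols beyond 2**k
    if x < (1 << k) - b:         # first 2**k - b symbols: k-bit codewords
        return format(x, f'0{k}b') if k > 0 else ''
    # remaining 2*b symbols: (k+1)-bit codewords, offset by 2**k - b
    return format(x + (1 << k) - b, f'0{k+1}b')
```

For |X| = 5 (k = 2, b = 1) this yields the codewords 00, 01, 10, 110, 111 — the same codeword lengths as in the Shannon-Fano example above.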
• The first 2^k − b symbols x = 0, …, 2^k−b−1 are encoded as their plain k-bit binary representations; the remaining 2b symbols x = 2^k−b, …, 2^k+b−1 are encoded as the (k+1)-bit binary representations of x + 2^k − b.

Unary code ("Stone-age" binary coding)
• |X| = ∞: X = Z+ = {1, 2, …}.
• f(x) = 0⋯0 1 (x−1 zeros followed by a single 1), or equivalently 1⋯1 0 (x−1 ones followed by a single 0).

Golomb coding and Rice coding
• Golomb coding = unary coding + truncated binary coding.
• An integer x is divided into two parts (quotient and remainder) according to a parameter M: q = ⌊x/M⌋, r = mod(x, M) = x − q·M.
• Golomb code = unary code of q + truncated binary code of r.
• When M = 1, Golomb coding = unary coding.
• When M = 2^k, Golomb coding = Rice coding.
• The Golomb code is the optimal code for the geometric distribution Prob(x = i) = (1−p)^{i−1}·p, where 0 < p < 1.

Exp-Golomb coding (Universal)
• Exp-Golomb coding ≠ Golomb coding.
• Exp-Golomb coding of order k = 0 is used in some video coding standards such as H.264.
• The encoding process (|X| = ∞: X = {0, 1, 2, …}):
  – Calculate q = ⌊x/2^k⌋ + 1 and n_q = ⌊log2 q⌋.
  – Exp-Golomb code = unary code of n_q + the n_q LSBs of q + the k-bit representation of r = mod(x, 2^k) = x − (q−1)·2^k.

Huffman code: An example
• X = {1, 2, 3, 4, 5}, P = [0.4, 0.2, 0.2, 0.1, 0.1].
• Repeatedly merge the two least probable nodes: p_{4+5} = 0.2, p_{3+4+5} = 0.4, p_{1+2} = 0.6, p_{1+2+3+4+5} = 1.
• A possible code: 1→00, 2→01, 3→10, 4→110, 5→111.

Huffman code: An optimal code
• Relation between the Huffman code and the Shannon code: H ≤ L_Huffman ≤ L_Shannon-Fano ≤ L_Shannon < H + 1.
• A stronger result (Gallager, 1978):
  – When p_max ≥ 0.5: L_Huffman ≤ H + p_max < H + 1.
  – When p_max < 0.5: L_Huffman ≤ H + p_max + log2(2(log2 e)/e) ≈ H + p_max + 0.086 < H + 0.586.
• Huffman's rules of optimal codes imply that the Huffman code is optimal.
• When each p_i is a negative power of 2, the Huffman code reaches the entropy.

Huffman code: Small X problem
Problem
• When |X| is small, the coding performance gain is less obvious.
• As a special case, when |X| = 2, Huffman coding cannot compress the data at all: no matter what the probabilities are, each symbol has to be encoded as a single bit.
Solutions
• Solution 1: Work on X^n rather than X.
• Solution 2: Dual tree coding = Huffman coding + Tunstall coding.

Huffman code: Variance problem
Problem
• There are multiple choices for the two smallest probabilities whenever more than two nodes have the same probability during a step of the coding process.
• Among the resulting codes, those with a larger variance of codeword lengths may cause trouble for data transmission over a CBR (constant bit rate) channel: a larger buffer is needed.
Solution
• Merge shorter subtrees first. (A single node's height is 0.)

Modified Huffman code
Problem
• If |X| is too large, constructing the Huffman tree takes too long and the memory used for the tree becomes too demanding.
Solution
• Divide X into two sets X1 = {s_i | p(s_i) > 2^−v} and X2 = {s_i | p(s_i) ≤ 2^−v}.
• Perform Huffman coding for the new set X3 = X1 ∪ {X2}.
• Append f(X2) as the prefix of the naive binary representation of every symbol in X2.

Huffman's rules of making optimal codes
Source statistics: P = P0 = [p1, …, pm], where p1 ≥ … ≥ pm−1 ≥ pm.
• Rule 1: L1 ≤ … ≤ Lm−1 = Lm.
• Rule 2: If L1 ≤ … ≤ Lm−2 < Lm−1 = Lm, then f(xm−1) and f(xm) differ from each other only in the last bit, i.e., f(xm−1) = b0 and f(xm) = b1, where b is a sequence of Lm − 1 bits.
• Rule 3: Each possible bit sequence of length Lm − 1 must be either a codeword or the prefix of some codeword.

Justify Huffman's rules
• Rule 1: If Li > Li+1, we can swap the two codewords to get a smaller average codeword length (when pi = pi+1 the swap changes nothing, though). If Lm−1 < Lm, then L1, …, Lm−1 < Lm and the last bit of f(xm) is redundant.
• Rule 2: If f(xm−1) and f(xm) do not share the same parent node, then the last bits of both codewords are redundant.
• Rule 3: If there were an unused bit sequence of length Lm − 1, we could use it in place of the length-Lm codeword.
• Answers: read Section 5.2.1 (pp. 122–123) of Yun Q. Shi and Huifang Sun, Image and Video Compression for Multimedia Engineering, 2nd Edition, CRC Press, 2008.

Arithmetic Coding

Why do we need a new coding algorithm?
Problems with Huffman coding:
• Each symbol in X is represented by at least one bit.
• Coding for X^n: a Huffman tree with |X|^n nodes needs to be constructed, so the value of n cannot be too large.
• Encoding with a Huffman tree is quite easy, but decoding can be difficult, especially when X is large.
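To make the Huffman discussion concrete, here is a minimal Python sketch of the tree construction and of bit-by-bit decoding (our own illustrative code, not from the lecture; ties between equal probabilities are broken arbitrarily by insertion order, which is exactly the variance issue noted above):

```python
import heapq

def huffman_code(probs):
    """Build a Huffman code for {symbol: probability}.
    Returns {symbol: bit string}."""
    # Each heap entry: (probability, tie_breaker, {symbol: partial codeword})
    heap = [(p, i, {s: ''}) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)   # two least probable nodes
        p1, _, c1 = heapq.heappop(heap)
        merged = {s: '0' + w for s, w in c0.items()}
        merged.update({s: '1' + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, tie, merged))
        tie += 1
    return heap[0][2]

def decode(bits, code):
    """Bit-by-bit prefix matching -- illustrates why decoding
    is more work than table-lookup encoding."""
    inv = {w: s for s, w in code.items()}
    out, w = [], ''
    for bit in bits:
        w += bit
        if w in inv:          # a complete codeword has been read
            out.append(inv[w])
            w = ''
    return out
```

For the example P = [0.4, 0.2, 0.2, 0.1, 0.1] above, this implementation produces codeword lengths {2, 2, 2, 3, 3}, i.e., an average of 2.2 bits against an entropy of about 2.12 bits.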