Adaptive Golomb Code for Joint Geometrically Distributed Data and Its Application in Image Coding

Jian-Jiun Ding*, Soo-Chang Pei**, Wei-Yi Wei†, Hsin-Hui Chen††, and Tzu-Heng Lee†††
Department of Electrical Engineering, National Taiwan University, Taiwan, R.O.C.
Email: [email protected]*, [email protected]**, [email protected]†, [email protected]††, [email protected]†††

Abstract—The Golomb code is a special case of the Huffman code that is optimal for data with a geometric distribution. In this paper, we generalize the Golomb code by using the joint probability. In image coding, there are many situations in which a datum not only has a geometric distribution but its probability also depends on another datum. Based on this idea, we improve the Golomb code by considering the joint probability, making it more flexible and efficient. The proposed method based on the joint probability is called the adaptive Golomb code. Simulations show that, if we use the adaptive Golomb code instead of the Huffman code or the original Golomb code in JPEG, the compression rate can be improved.

I. INTRODUCTION

As is well known, the Huffman coding algorithm [1] can achieve the minimal codeword length when the probability distribution of the data is known to the encoder. However, the Huffman code has two problems. First, it needs a codeword table for encoding and decoding. Moreover, it cannot be applied to data with infinitely many possible values, since forming the coding tree for such data would require infinite memory.

A well-known source model with infinitely many symbols is the geometric distribution source, whose distribution has the form

    \mathrm{Prob}(y = a) = (1 - p)\, p^{a},    (1)

where p ∈ [0, 1) is the ratio and a ∈ {0, 1, …, ∞}. In 1966, Golomb [2] proposed an efficient entropy coding algorithm, known as the Golomb coding algorithm, to encode the geometric distribution source. Gallager and Van Voorhis [3] generalized Golomb's initial algorithm and proved that the Golomb code achieves the optimal coding efficiency when the information source is geometrically distributed. Moreover, the Golomb coding algorithm can convert a symbol into its codeword directly through a tunable parameter p. This makes the Golomb code quite efficient, because no codeword table is required for encoding and decoding.

For these reasons, the Golomb code is useful and efficient for data compression. In this paper, we find that using the concept of joint probability can further improve the performance of the Golomb code.

In practice, many data not only have geometric distributions but also depend strongly on other data. For example, in video processing, the difference between the velocities of an object at t = t1 and t = t2 is approximately geometrically distributed and its distribution can be expressed in the form of (1). However, the velocity difference also depends on t2 − t1. That is, the value of p in (1) is not a constant: when t2 − t1 is very small, the value of p in (1) is near 0, and when t2 − t1 is large, the value of p is also large. Thus, we can improve the performance of the Golomb code using the joint probability, i.e., by using the information of another datum to adjust the ratio p in (1). The details of the algorithm are given in Section III. Our simulations for image compression show that using the Golomb code together with the joint probability achieves a higher compression rate than the Huffman code or the original Golomb code; see Section IV.

II. REVIEW OF THE ORIGINAL GOLOMB CODE

Suppose that a datum y has the geometric distribution in (1). In 1975, Gallager and Van Voorhis [3] found that if

    p^{m} + p^{m+1} \le 1 < p^{m} + p^{m-1},

i.e.,

    m = \left\lceil \frac{-\log(1 + p)}{\log p} \right\rceil,    (2)

where ⌈·⌉ denotes the round-up (ceiling) operation, then, apart from the first several layers, each level of the coding tree should have m leaf nodes. The Golomb coding algorithm proceeds as follows:

(i) First, determine m from p by (2).
(ii) Then, regard a in (1) as the dividend and m as the divisor, and let q and r be the quotient and the remainder of a/m, respectively.
(iii) Convert q into the prefix. The prefix consists of q "1" bits followed by a "0" bit.
(iv) Convert r into the suffix using a binary code. The suffix has either ⌊log₂ m⌋ or ⌈log₂ m⌉ bits. To determine its length, we use a threshold parameter τ(m), defined as τ(m) = 2^⌈log₂ m⌉ − m. If r < τ(m), the suffix has ⌊log₂ m⌋ bits. Otherwise, we update r to r + τ(m) and encode it into a suffix of ⌈log₂ m⌉ bits.

For example, assume that a = 9 and the parameter m determined from (2) is 5. Then the quotient q of a/m is 1 and the remainder r is 4. Since q = 1, the prefix is '10'. On the other hand, the remainder r = 4 is larger than τ(m) = 2³ − 5 = 3. Therefore, we update r to r + τ(m) = 7, and the suffix has ⌈log₂ m⌉ = 3 bits. The suffix is thus '111' and the entire codeword is '10111'.
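To make steps (i)–(iv) concrete, the following Python sketch implements the encoder described above: m is chosen from p by (2), and the suffix length is controlled by the threshold τ(m). This is an illustrative sketch written for this text, not code from the paper; it assumes a is a non-negative integer and p ∈ (0, 1).

```python
import math

def golomb_parameter(p):
    """Eq. (2): m = ceil(-log(1 + p) / log(p)), i.e. the integer m satisfying
    p**m + p**(m+1) <= 1 < p**m + p**(m-1)."""
    return math.ceil(-math.log(1.0 + p) / math.log(p))

def golomb_encode(a, m):
    """Encode a non-negative integer a with Golomb parameter m.

    Prefix: q '1' bits followed by one '0' bit, where q = a // m (step (iii)).
    Suffix: truncated binary code of r = a % m (step (iv)); it uses
    floor(log2 m) bits when r < tau(m) = 2**ceil(log2 m) - m, and otherwise
    encodes r + tau(m) in ceil(log2 m) bits.
    """
    q, r = divmod(a, m)
    prefix = '1' * q + '0'
    if m == 1:                          # degenerate case: the code is purely unary
        return prefix
    k = math.ceil(math.log2(m))         # ceil(log2 m)
    tau = 2 ** k - m                    # threshold tau(m)
    if r < tau:
        return prefix + format(r, 'b').zfill(k - 1)    # floor(log2 m) bits
    return prefix + format(r + tau, 'b').zfill(k)      # ceil(log2 m) bits

# The worked example above: a = 9, m = 5 gives prefix '10' and suffix '111'.
assert golomb_encode(9, 5) == '10111'
```

Running the worked example reproduces the codeword '10111'; for m = 5, the remainders r = 0, 1, 2 receive 2-bit suffixes, while r = 3, 4 are shifted by τ(5) = 3 and receive the 3-bit suffixes '110' and '111'.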
Thus, using the Golomb coding algorithm, it is easy to convert a datum into a codeword, and no codeword table is required. Instead, we only have to record the tunable parameter m in (2). If m is known, we can reconstruct the original data.

TABLE I
COMPARISON AMONG THE HUFFMAN CODE, THE ORIGINAL GOLOMB CODE, AND THE PROPOSED ADAPTIVE GOLOMB CODE

                     Without codeword table    Flexibility and adaptation
  Huffman            NO                        GOOD
  Golomb             YES                       MIDDLE
  Adaptive Golomb    YES                       GOOD

III. PROPOSED ADAPTIVE GOLOMB CODE BY JOINT PROBABILITY

In theory, the Huffman code can achieve the optimal coding efficiency. However, it needs extra memory to record the codeword table, which is unfavorable for compression. By contrast, as described in Section II, the Golomb code requires no codeword table. Although the Golomb coding algorithm approximates the distribution of the data by a geometric series and therefore sacrifices optimality, in practice, since the codeword table is saved, it can achieve a higher compression ratio (i.e., a lower data rate) than the Huffman code (see our simulations in Section IV). However, we believe that the flexibility and the performance of the Golomb code can be further improved.

In this paper, we modify the Golomb coding algorithm by using the joint probability. As in the original Golomb coding algorithm, we assume that the data y approximately follow a geometric distribution as in (1). However, the value of p is not a constant and may vary with another datum x; i.e., (1) is modified as

    \mathrm{Prob}(y = a) = \bigl(1 - p(x)\bigr)\, p(x)^{a}.    (3)

In nature, it very often happens that a datum y not only approximately follows a geometric distribution but also depends on another datum x. In addition to the example of time difference vs. velocity difference described in Section I, there are many other examples:

• x is the number of days and y is the variation of the price of a commodity after x days.
• x is the height of a person and y is the difference between the standard weight and the actual weight of the person.
• x is the area of a pattern and y is the difference between the circumference of the pattern and 2√(πx).
• x is the distance between two pixels in an image and y is the difference between the intensities of the two pixels.
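As a concrete illustration of the joint model (3), the sketch below (a hypothetical example, not taken from the paper) draws geometric samples whose ratio depends on a companion value x through an assumed function p_of_x, and then recovers a separate ratio for each x from the sample mean, the same mean-based estimate that (9) and (12) use in the procedure that follows.

```python
import random
from collections import defaultdict

def sample_geometric(p):
    """Draw one sample from Prob(y = a) = (1 - p) * p**a, a = 0, 1, 2, ..."""
    a = 0
    while random.random() < p:
        a += 1
    return a

def p_of_x(x):
    """Hypothetical dependence of the geometric ratio on the companion datum x."""
    return x / (x + 10.0)

random.seed(0)
samples = defaultdict(list)
for _ in range(30000):
    x = random.choice([1, 5, 20])                  # hypothetical side information
    samples[x].append(sample_geometric(p_of_x(x)))

for x in sorted(samples):
    mean = sum(samples[x]) / len(samples[x])
    p_est = mean / (mean + 1.0)                    # mean-based estimate, cf. (9), (12)
    print(f"x = {x:2d}: true p(x) = {p_of_x(x):.3f}, estimated = {p_est:.3f}")
```

A single global ratio would sit between the recovered values, which is exactly the mismatch the adaptive Golomb code removes by letting the parameter vary with x.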
Suppose that the dependence of y on x can be characterized by a function f, in the sense that the expected magnitude of y[τ] scales with |f(x[τ])| (cf. (6) and (13)). Then we can use the following procedure to encode y; this is the process of our proposed modified Golomb coding algorithm.

Step 1: Choose η as a large value. For example, we can choose

    \eta = \max_{\tau} \bigl| f(x[\tau]) \bigr|.    (5)

Step 2: Scale the value of y[τ] as

    \hat{y}[\tau] = \frac{y[\tau]\,\eta}{\bigl| f(x[\tau]) \bigr|}.    (6)

Step 3: Find the best value of p̂ such that the probability of ŷ[τ] can be approximated by the geometric distribution

    P\bigl(\hat{y}[\tau] = k\bigr) \approx (1 - \hat{p})\,\hat{p}^{k}.    (7)

If (7) is satisfied, then

    E\bigl(\hat{y}[\tau]\bigr) = \sum_{k=0}^{\infty} k\,(1 - \hat{p})\,\hat{p}^{k} = \frac{\hat{p}}{1 - \hat{p}},    (8)

so we can estimate the value of p̂ from

    \hat{p} = \frac{E\bigl(\hat{y}[\tau]\bigr)}{E\bigl(\hat{y}[\tau]\bigr) + 1}.    (9)

Step 4: The probability that |y[τ]| = k can be approximated by

    P\bigl(\lvert y[\tau] \rvert = k\bigr) \approx \bigl(1 - p(x[\tau])\bigr)\, p(x[\tau])^{k},    (10)

where the adjustable ratio p(x[τ]) is estimated from

    p(x[\tau]) = \frac{1}{\dfrac{\eta}{\lvert f(x[\tau]) \rvert}\left(\dfrac{1}{\hat{p}} - 1\right) + 1}.    (11)

The proof of (11) is as follows. Analogous to (9),

    p(x[\tau]) = \frac{E\bigl(y[\tau]\bigr)}{E\bigl(y[\tau]\bigr) + 1}.    (12)

Then, from (6) and (8),

    E\bigl(y[\tau]\bigr) = \frac{\lvert f(x[\tau]) \rvert}{\eta}\, E\bigl(\hat{y}[\tau]\bigr) = \frac{\lvert f(x[\tau]) \rvert}{\eta} \cdot \frac{\hat{p}}{1 - \hat{p}}.    (13)

Writing (12) as 1/(1 + 1/E(y[τ])) and substituting (13) for E(y[τ]), we obtain (11).

Step 5: Then, from (2), we can determine the tunable parameter m[τ] for each datum y[τ] by

    m[\tau] = \left\lceil \frac{-\log\bigl(1 + p(x[\tau])\bigr)}{\log p(x[\tau])} \right\rceil.    (14)

Step 6: After m[τ] is determined, we can encode y[τ] by the Golomb coding process of Section II, with m replaced by m[τ].

Furthermore, since y[τ] can sometimes be positive or negative, …
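To tie Steps 1–6 together, here is a minimal end-to-end sketch. It assumes non-negative integer data y[τ] that are not all zero (the handling of signed values is not covered here), a known function f with f(x[τ]) ≠ 0 whose magnitude tracks the expected size of y[τ], and it reuses golomb_encode from the Section II sketch; the list-based interface is an illustrative choice, not the paper's.

```python
import math
# golomb_encode is the routine from the Section II sketch above.

def adaptive_golomb_encode(y, fx):
    """Adaptive Golomb coding, Steps 1-6.

    y  : non-negative integer data y[t] (not all zero)
    fx : side values f(x[t]), assumed non-zero, whose magnitude tracks
         the expected size of y[t]
    """
    n = len(y)
    eta = max(abs(v) for v in fx)                            # Step 1, eq. (5)
    y_scaled = [y[t] * eta / abs(fx[t]) for t in range(n)]   # Step 2, eq. (6)
    mean = sum(y_scaled) / n
    p_hat = mean / (mean + 1.0)                              # Step 3, eq. (9)

    codewords = []
    for t in range(n):
        scale = eta / abs(fx[t])
        p_t = 1.0 / (scale * (1.0 / p_hat - 1.0) + 1.0)      # Step 4, eq. (11)
        m_t = max(1, math.ceil(-math.log(1.0 + p_t)
                               / math.log(p_t)))             # Step 5, eq. (14)
        codewords.append(golomb_encode(y[t], m_t))           # Step 6
    return codewords
```

Since each m[τ] in (14) is derived only from x[τ], η, and p̂, a decoder that holds the same side information and those two scalars can recompute it, so no codeword table is needed, consistent with Table I.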
