
Image Encoding & Compression
Digital Image Processing, Lectures 25 & 26
M.R. Azimi, Professor
Department of Electrical and Computer Engineering, Colorado State University

Area 4: Image Encoding and Compression

Goal: To exploit the redundancies in an image in order to reduce the number of bits needed to represent it, or a sequence of images (e.g., video).

Applications:
- Image transmission: e.g., HDTV, 3DTV, satellite/military communication, and teleconferencing.
- Image storage: e.g., document storage and retrieval, medical image archives, weather maps, and geological surveys.

Categories of Techniques:
1. Pixel encoding: PCM, run-length encoding, bit-plane encoding, Huffman encoding, entropy encoding.
2. Predictive encoding: delta modulation, 2-D DPCM, inter-frame methods.
3. Transform-based encoding: DCT-based, WT-based, zonal encoding.
4. Others: vector quantization (clustering), neural network-based, and hybrid encoding.

Encoding System

There are three steps involved in any encoding system (Fig. 1):
a. Mapping: removes redundancies in the image; should be invertible.
b. Quantization: the mapped values are quantized using uniform or Lloyd-Max quantizers.
c. Coding: optimal codewords are assigned to the quantized values.

Figure 1: A typical image encoding system.

Before discussing several types of encoding systems, we first review some basic results from information theory.

Measure of Information & Entropy

Assume there is a source (e.g., an image) that generates a discrete set of independent messages (e.g., grey levels) $r_k$ with probability $P_k$, $k \in [1, L]$, where $L$ is the number of messages (or number of levels).

Figure 2: Source and message.

The information associated with $r_k$ is

$$I_k = -\log_2 P_k \ \text{bits}$$

Clearly, $\sum_{k=1}^{L} P_k = 1$. For equally likely levels (messages), the information can be transmitted as an $n$-bit binary number:

$$P_k = \frac{1}{L} = \frac{1}{2^n} \ \rightarrow \ I_k = n \ \text{bits}$$

For images, the $P_k$'s are obtained from the histogram.

As an example, consider a binary image with $r_0 = \text{Black}$, $P_0 = 1$ and $r_1 = \text{White}$, $P_1 = 0$; then $I_0 = 0$, i.e., the always-occurring message carries no information.

Entropy: the average information generated by the source is

$$H = \sum_{k=1}^{L} P_k I_k = -\sum_{k=1}^{L} P_k \log_2 P_k \ \text{avg. bits/pixel}$$

Entropy also represents a measure of redundancy. Let $L = 4$, $P_1 = P_2 = P_3 = 0$ and $P_4 = 1$; then $H = 0$, i.e., the most certain case and thus maximum redundancy. Now let $L = 4$, $P_1 = P_2 = P_3 = P_4 = 1/4$; then $H = 2$, i.e., the most uncertain case and hence the least redundant. Maximum entropy occurs when the levels are equally likely, $P_k = \frac{1}{L}$, $k \in [1, L]$:

$$H_{max} = -\sum_{k=1}^{L} \frac{1}{L} \log_2 \frac{1}{L} = \log_2 L$$

Thus, $0 \le H \le H_{max}$.
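For a digital image the $P_k$'s come from the normalized histogram, so $H$ is easy to estimate directly. Below is a minimal Python sketch (our own illustration, not from the lecture; the function name is ours, and an 8-bit grey-level image and NumPy are assumed):

```python
import numpy as np

def entropy_bits_per_pixel(image, levels=256):
    """Estimate H = -sum_k P_k log2 P_k from the image histogram."""
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    P = hist / hist.sum()      # P_k estimated as relative frequency
    P = P[P > 0]               # drop empty bins: 0*log2(0) is taken as 0
    return -np.sum(P * np.log2(P))

# A constant image gives H = 0 (maximum redundancy); a large image with
# roughly equally likely levels gives H close to H_max = log2(256) = 8.
flat = np.full((64, 64), 128)
noisy = np.random.default_rng(0).integers(0, 256, size=(512, 512))
print(entropy_bits_per_pixel(flat), entropy_bits_per_pixel(noisy))
```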
Entropy and Coding

Entropy represents the lower bound on the number of bits required to code the coder inputs; i.e., for a set of coder inputs $v_k$, $k \in [1, L]$, with probabilities $P_k$, it is guaranteed that they cannot be coded using fewer than $H$ bits on the average. If we design a code with codewords $C_k$, $k \in [1, L]$, and corresponding word lengths $\beta_k$, the average number of bits required by the coder is

$$R(L) = \sum_{k=1}^{L} \beta_k P_k$$

Figure 3: Coder producing codewords $C_k$ with lengths $\beta_k$.

Shannon's Entropy Coding Theorem (1949): the average length $R(L)$ is bounded by

$$H \le R(L) \le H + \epsilon, \quad \epsilon = 1/L$$

i.e., it is possible to encode a source with entropy $H$ without distortion using an average of $H + \epsilon$ bits/message, or to encode it with distortion using $H$ bits/message. The optimality of a coder depends on how close $R(L)$ is to $H$.

Example: Let $L = 2$, $P_1 = p$ and $P_2 = 1 - p$, $0 \le p \le 1$. The entropy is

$$H = -p \log_2 p - (1 - p) \log_2 (1 - p)$$

Figure: Binary entropy $H$ as a function of $p$.

Clearly, since the source is binary, we can always use 1 bit/pixel; this corresponds to $H_{max} = 1$ at $p = 1/2$. However, if $p = 1/8$, then $H \approx 0.54$, i.e., there is more redundancy, and it is possible to find a coding scheme that uses only about 0.54 bits/pixel.

Remark: The maximum achievable compression is

$$C = \frac{\text{average bit rate of original raw data } (B)}{\text{average bit rate of encoded data } (R(L))}$$

Thus

$$\frac{B}{H + \epsilon} \le C \le \frac{B}{H}$$

Since a certain amount of distortion is inevitable in any image transmission, it is necessary to find the minimum number of bits that encode the image while allowing a certain level of distortion.

Rate Distortion Function

Let $D$ be a fixed distortion between the actual values $x$ and the reproduced values $\hat{x}$. The question is: allowing distortion $D$, what is the minimum number of bits required to encode the data? If we consider $x$ to be a Gaussian r.v. with variance $\sigma_x^2$ and define the distortion as

$$D = E[(x - \hat{x})^2]$$

then the rate distortion function is

$$R_D = \begin{cases} \frac{1}{2} \log_2 \frac{\sigma_x^2}{D} & 0 \le D \le \sigma_x^2 \\ 0 & D > \sigma_x^2 \end{cases} = \max\left[0, \ \frac{1}{2} \log_2 \frac{\sigma_x^2}{D}\right]$$

At maximum distortion, $D \ge \sigma_x^2$, we get $R_D = 0$, i.e., no information needs to be transmitted.

Figure 4: Rate distortion function $R_D$ versus $D$.

$R_D$ gives the number of bits/pixel required for distortion $D$; the corresponding number of quantization levels is $N = 2^{R_D} = (\sigma_x^2 / D)^{1/2}$, so $D$ is considered to be the quantization noise variance. This variance can be minimized using the Lloyd-Max quantizer. In the transform domain we can assume that $x$ is white (e.g., due to the KL transform).
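As a quick numerical illustration (our own sketch, not part of the lecture; the function name `rate_distortion` is ours), the trade-off between allowed distortion and required rate for a unit-variance Gaussian source:

```python
import numpy as np

def rate_distortion(D, sigma2):
    """Gaussian rate distortion function: R_D = max[0, 0.5*log2(sigma_x^2/D)]."""
    return max(0.0, 0.5 * np.log2(sigma2 / D))

sigma2 = 1.0
for D in (0.25, 0.5, 1.0, 2.0):
    R = rate_distortion(D, sigma2)
    # N = 2**R_D is the equivalent number of quantization levels
    print(f"D = {D}: R_D = {R:.2f} bits/pixel, N = {2**R:.2f} levels")
```

As expected, $R_D$ drops to 0 once $D \ge \sigma_x^2$: when the allowed distortion reaches the source variance, nothing needs to be transmitted.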
Pixel-Based Encoding

Encode each pixel individually, ignoring inter-pixel dependencies. Among the methods are:

1. Entropy Coding: Every block of an image is entropy encoded based upon the $P_k$'s within that block. This produces a variable-length code for each block, depending on the spatial activity within the block.

2. Run-Length Encoding: Scan the image horizontally or vertically and, while scanning, assign each group of pixels with the same intensity to a pair $(g_i, l_i)$, where $g_i$ is the intensity and $l_i$ is the length of the "run". This method can also be used for detecting edges and boundaries of an object. It is mostly used for images with a small number of grey levels and is not effective for highly textured images.

Example 1: Consider the following 8 × 8 image.

4 4 4 4 4 4 4 0
4 5 5 5 5 5 4 0
4 5 6 6 6 5 4 0
4 5 6 7 6 5 4 0
4 5 6 6 6 5 4 0
4 5 5 5 5 5 4 0
4 4 4 4 4 4 4 0
4 4 4 4 4 4 4 0

The run-length codes using the vertical (continuous top-down) scanning mode are

(4,9) (5,5) (4,3) (5,1) (6,3) (5,1) (4,3) (5,1) (6,1) (7,1) (6,1) (5,1) (4,3) (5,1) (6,3) (5,1) (4,3) (5,5) (4,10) (0,8)

i.e., a total of 20 pairs = 40 numbers. Horizontal scanning would instead lead to 34 pairs = 68 numbers, which is more than the actual number of pixels (i.e., 64). (These counts are verified by a short code sketch at the end of these notes.)

Example 2: Let the transition probabilities for run-length encoding of a binary image (0: black, 1: white) be $p_0 = P(0|1)$ and $p_1 = P(1|0)$. Assuming all runs are independent, find (a) the average run lengths, (b) the entropies of the white and black runs, and (c) the compression ratio.

Solution: A run of length $l \ge 1$ can be represented by a geometric r.v. $X_i$ with PMF

$$P(X_i = l) = p_i (1 - p_i)^{l-1}, \quad i = 0, 1$$

which corresponds to the first occurrence of a 0 (or a 1) after $l$ independent trials. (Note that $1 - P(0|1) = P(1|1)$ and $1 - P(1|0) = P(0|0)$.) Thus, for the average run length we have

$$\mu_{X_i} = \sum_{l=1}^{\infty} l \, P(X_i = l) = \sum_{l=1}^{\infty} l \, p_i (1 - p_i)^{l-1}$$

which, using the series $\sum_{n=1}^{\infty} n a^{n-1} = \frac{1}{(1-a)^2}$, reduces to $\mu_{X_i} = \frac{1}{p_i}$.
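The result $\mu_{X_i} = 1/p_i$ in part (a) is easy to sanity-check numerically. A minimal simulation sketch (our own, with arbitrary example probabilities; NumPy's geometric sampler uses exactly the PMF above):

```python
import numpy as np

rng = np.random.default_rng(0)
for p in (0.1, 0.5):                       # example transition probabilities
    runs = rng.geometric(p, size=200_000)  # P(l) = p*(1-p)**(l-1), l >= 1
    print(f"p = {p}: simulated mean run length = {runs.mean():.2f}, 1/p = {1/p:.2f}")
```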
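Finally, the run-length counts claimed in Example 1 can be verified with the sketch below (our own illustration; `run_length_encode` is a hypothetical helper, not a function from the lecture):

```python
import numpy as np

def run_length_encode(seq):
    """Map a 1-D scan into (intensity g_i, run length l_i) pairs."""
    pairs = []
    for v in seq:
        if pairs and pairs[-1][0] == v:
            pairs[-1][1] += 1            # extend the current run
        else:
            pairs.append([v, 1])         # start a new run
    return pairs

img = np.array([[4, 4, 4, 4, 4, 4, 4, 0],
                [4, 5, 5, 5, 5, 5, 4, 0],
                [4, 5, 6, 6, 6, 5, 4, 0],
                [4, 5, 6, 7, 6, 5, 4, 0],
                [4, 5, 6, 6, 6, 5, 4, 0],
                [4, 5, 5, 5, 5, 5, 4, 0],
                [4, 4, 4, 4, 4, 4, 4, 0],
                [4, 4, 4, 4, 4, 4, 4, 0]])

vert = run_length_encode(img.flatten(order="F"))   # continuous top-down columns
horz = run_length_encode(img.flatten(order="C"))   # row-by-row
print(len(vert), "pairs vertically")               # 20 pairs = 40 numbers
print(len(horz), "pairs horizontally")             # 34 pairs = 68 numbers
```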