Digital Image Processing ee.sharif.edu/~dip
Image Compression
• Goal:
  – Reduce the amount of data required to represent a digital image.
    • Transmission
    • Archiving
  – Mathematical definition:
    • Transform the 2-D pixel array into a statistically uncorrelated data set.
E. Fatemizadeh, Sharif University of Technology, 2011
• Two main categories:
  – Lossless: TIFF (LZW, ZIP)
  – Lossy: JPEG and JPEG2000
• Example: Original vs. JPEG2000
  – CR = 13.31, RD = 0.925
  – Original: 1,085,496 bytes; best compressor: 510,576 bytes; JPEG2000: 81,503 bytes
• Fundamentals:
– Raw image: a set of n1 bits
– Compressed image: a set of n2 bits
– Compression ratio: CR = n1 / n2
– Relative data redundancy: RD = 1 − 1/CR
– Example: n1 = 100 KB and n2 = 10 KB, then CR = 10 and RD = 90%
– Special cases:
  • n1 >> n2 → CR → ∞, RD → 1
  • n1 ≈ n2 → CR ≈ 1, RD ≈ 0
  • n1 << n2 → CR ≈ 0, RD → −∞
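The two definitions above can be checked numerically; a minimal sketch (the function names are mine):

```python
def compression_ratio(n1, n2):
    # CR = n1 / n2, with n1 the raw size and n2 the compressed size.
    return n1 / n2

def relative_redundancy(n1, n2):
    # RD = 1 - 1/CR
    return 1 - 1 / compression_ratio(n1, n2)

# The slide's example: n1 = 100 KB, n2 = 10 KB -> CR = 10, RD = 90%.
cr = compression_ratio(100_000, 10_000)
rd = relative_redundancy(100_000, 10_000)
```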
• Three basic data redundancies:
  – Coding redundancy
  – Spatial/Temporal (inter-pixel) redundancy
  – Irrelevant/Psychovisual redundancy
• Coding redundancy:
  – Type of coding (# of bits for each gray level)
  – Image histogram:
    • rk: the gray levels of an image
    • pr(rk): probability of occurrence of rk:
      pr(rk) = nk / n,  k = 0, 1, 2, ..., L−1
    • l(rk): number of bits used to represent rk
    • Lavg: average number of bits required to represent each pixel:
      Lavg = Σ_{k=0}^{L−1} l(rk) pr(rk)
• Example (1): Variable Length Coding
Lavg = 0.25(2) + 0.47(1) + 0.25(3) + 0.03(3) = 1.81 bits (Code 2)
CR = (256 × 256 × 8) / (256 × 256 × 1.81) = 8 / 1.81 = 4.42
RD = 1 − 1/4.42 = 0.774 = 77.4%
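The arithmetic on this slide can be reproduced directly; the probabilities and code lengths below are taken from the slide's Code 2:

```python
probs   = [0.25, 0.47, 0.25, 0.03]   # p_r(r_k) from the slide
lengths = [2, 1, 3, 3]               # l(r_k) for Code 2

L_avg = sum(l * p for l, p in zip(lengths, probs))  # average bits/pixel
CR = 8 / L_avg        # fixed 8-bit code vs. the variable-length code
RD = 1 - 1 / CR
```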
• Example (2): Spatial Redundancy
  – Uniform intensity
  – Highly correlated in the horizontal direction
  – No correlation in the vertical direction
  – Coding using run-length pairs: (g1, RL1), (g2, RL2), …
• Irrelevant/Psychovisual Redundancy:
  – Human perception of images:
    • Normally does NOT involve quantitative analysis of every pixel value.
  – Quantization eliminates psychovisual redundancy.
• Image compression using psychovisual redundancy:
  – A simple method: discard the four lower-order bits.
  – IGS (Improved Gray Scale) quantization:
    • Exploits eye sensitivity to edges.
    • Add a pseudo-random number (the lower-order bits of the neighboring sum) to each pixel, then discard the lower-order bits.
  – Rounding example:
    169 ÷ 16 = 10.56 → 11 → 11 × 16 = 176
    167 ÷ 16 = 10.43 → 10 → 10 × 16 = 160
• IGS Example:
Pixel   Gray Level   Sum         IGS Code
i−1     N/A          0000 0000   N/A
i       0110 1100    0110 1100   0110
i+1     1000 1011    1001 0111   1001
i+2     1000 0111    1000 1110   1000
i+3     1111 0100    1111 0100   1111
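The table can be reproduced with a short sketch of the IGS rule: add the previous sum's low nibble to the pixel, unless the pixel's high nibble is all 1s, then keep the high nibble as the code.

```python
def igs_quantize(pixels):
    # IGS quantization to 4 bits; `pixels` are 8-bit values.
    codes, s = [], 0
    for p in pixels:
        # Add the low nibble of the previous sum, unless the pixel's
        # high nibble is all 1s (which would overflow past 255).
        s = p if (p & 0xF0) == 0xF0 else p + (s & 0x0F)
        codes.append(s >> 4)          # keep the 4 high-order bits
    return codes

# Pixels i..i+3 from the table above.
codes = igs_quantize([0b01101100, 0b10001011, 0b10000111, 0b11110100])
```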
• Sample results: original vs. simple bit discarding vs. IGS
• Image information measure:
  – Self-information: I(E) = −log P(E)
    • log base m: m-ary units; log base 2: bits
    • (P(E) = 0.5 → I(E) = 1 bit)
• Information Channel:
  – A source of statistically independent random events
  – Discrete possible events a1, a2, ..., aJ with associated probabilities P(a1), P(a2), ..., P(aJ)
  – Average information per source output (entropy):
    H = −Σ_{j=1}^{J} P(aj) log P(aj)
  – For a "zero-memory" (intensity source) imaging system, using the histogram:
    H̃ = −Σ_{k=0}^{L−1} pr(rk) log2 pr(rk)  bits/pixel
  – This is a lower bound on the average code length.
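The histogram-based entropy estimate can be sketched as:

```python
import math

def entropy_bits_per_pixel(hist):
    # H = -sum p_r(r_k) log2 p_r(r_k), with p_r(r_k) = n_k / n.
    n = sum(hist)
    return -sum(c / n * math.log2(c / n) for c in hist if c)

# Example: a 4-level image with counts 12, 4, 4, 12 (the histogram of the
# LZW introductory example later in these slides) gives about 1.81 bits/pixel.
H = entropy_bits_per_pixel([12, 4, 4, 12])
```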
• Example: H̃ = 1.6614 bits/pixel
• Shannon's First Theorem (Noiseless Coding Theorem):
  – For n-symbol groups with Lavg,n the average number of code symbols per group:
    lim_{n→∞} [Lavg,n / n] = H
• Fidelity criteria:
  – Objective (quantitative)
    • A number (or vector) is calculated.
  – Subjective (qualitative)
    • The judgments of different subjects, on a limited rating scale, are used.
• Objective criteria:
  – f(x,y): input image; f̂(x,y): approximated image
  – e(x,y) = f̂(x,y) − f(x,y)
  – E_abs = (1/MN) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} | f̂(x,y) − f(x,y) |
  – E_rms = [ (1/MN) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} ( f̂(x,y) − f(x,y) )² ]^{1/2}
  – SNR_ms = Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} f̂(x,y)² / Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} ( f̂(x,y) − f(x,y) )²
• Objective criteria:
  – PSNR (Peak SNR):
    PSNR = 10 log10 [ L² / ( (1/MN) Σ_{x=0}^{M−1} Σ_{y=0}^{N−1} ( f̂(x,y) − f(x,y) )² ) ]
    L: maximum level of the images (255 for 8 bits)
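The PSNR formula in code (a sketch over flat pixel lists; `psnr` is my name):

```python
import math

def psnr(f, fhat, L=255):
    # PSNR = 10 log10( L^2 / MSE ); f and fhat are equal-length pixel lists.
    mse = sum((a - b) ** 2 for a, b in zip(f, fhat)) / len(f)
    return math.inf if mse == 0 else 10 * math.log10(L ** 2 / mse)

value = psnr([0, 0, 0, 0], [16, 16, 16, 16])   # MSE = 256
```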
• Subjective Test
• Example:
  – Original and three approximations:
    • Erms = 5.17
    • Erms = 15.67
    • Erms = 14.17
• Image Compression Model:
– Encoder: creates a set of symbols from the image data.
  • Source encoder: reduces input redundancy.
  • Channel encoder*: increases noise immunity (noise robustness).
– Decoder: reconstructs the output image from the transmitted symbols.
  • Source decoder: reverse of the source encoder.
  • Channel decoder*: error detection/correction.
• Source Encoder–Decoder:
  – Mapper: maps input data to a format suitable for reducing interpixel redundancies (e.g., run-length coding)
  – Quantizer: reduces the accuracy of the mapper output according to a fidelity criterion (psychovisual redundancy)
  – Symbol encoder: creates fixed- or variable-length symbols (coding redundancy)
• Channel Encoder–Decoder*:
  – Adds controlled redundancy to the source-encoded data
  – Increases noise robustness
  – Hamming (7,4) as an example:
    • 4-bit binary input data: b3 b2 b1 b0
    • 7-bit binary output codeword h1 h2 h3 h4 h5 h6 h7, with h3 = b3, h5 = b2, h6 = b1, h7 = b0
    • Parity bits:
      h1 = b3 ⊕ b2 ⊕ b0
      h2 = b3 ⊕ b1 ⊕ b0
      h4 = b2 ⊕ b1 ⊕ b0
    • Error check (syndrome):
      c1 = h1 ⊕ h3 ⊕ h5 ⊕ h7
      c2 = h2 ⊕ h3 ⊕ h6 ⊕ h7
      c4 = h4 ⊕ h5 ⊕ h6 ⊕ h7
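The parity and check equations can be exercised directly; the list convention (h1 first) and function names below are mine:

```python
def hamming74_encode(b3, b2, b1, b0):
    # Codeword h1..h7 with data in h3, h5, h6, h7 and even parity in h1, h2, h4.
    h1 = b3 ^ b2 ^ b0
    h2 = b3 ^ b1 ^ b0
    h4 = b2 ^ b1 ^ b0
    return [h1, h2, b3, h4, b2, b1, b0]

def hamming74_syndrome(h):
    # c4 c2 c1 read as a binary number gives the position of a single-bit error.
    h1, h2, h3, h4, h5, h6, h7 = h
    c1 = h1 ^ h3 ^ h5 ^ h7
    c2 = h2 ^ h3 ^ h6 ^ h7
    c4 = h4 ^ h5 ^ h6 ^ h7
    return 4 * c4 + 2 * c2 + c1

cw = hamming74_encode(1, 0, 1, 1)
```

A clean codeword has syndrome 0; flipping any single bit yields that bit's position.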
• Image Compression Standards
• See Tables 8.3 and 8.4 for more details
• Error-Free Compression:
  – ZIP, RAR, TIFF (LZW/ZIP)
    • Compression ratio: 2–10
  – Steps:
    • Alternative representation to reduce interpixel redundancy
    • Coding the representation to reduce coding redundancy
• Variable-Length Coding:
  – Assign a variable bit length to each symbol based on its probability:
    • Low probability (high information) → more bits
    • High probability (low information) → fewer bits
• Huffman Coding:
  – Uses the frequencies (probabilities) of symbols in a string to build a variable-rate prefix code.
  – Each symbol is mapped to a binary string.
  – More frequent symbols have shorter codes.
  – No code is a prefix of another (uniquely decodable).
  – It is optimal!
  – Used in CCITT, JBIG2, JPEG, MPEG (1, 2, 4), H.261–H.264
• Huffman Coding:
  – Example:
    • We have four symbols: a, b, c, and d.
• Huffman Coding:
  – A source string: aabddcaa
    • Fixed-length coding: 16 bits (ordinary coding)
      – 00 00 01 11 11 10 00 00
    • Variable-length coding: 14 bits (Huffman coding)
      – 0 0 100 11 11 101 0 0
    • Uniquely decodable:
      0 0 100 11 11 101 0 0 → a a b d d c a a
• Huffman Coding:
  – Consider an information source: S = {s1, s2, ..., sm}
  – Assume symbols are ranked in decreasing order of occurrence probability:
    p(s1) ≥ p(s2) ≥ ... ≥ p(s_{m−1}) ≥ p(s_m)
  – For an optimum code, the lengths of the codewords assigned to the source symbols should satisfy:
    l1 ≤ l2 ≤ ... ≤ l_{m−1} ≤ l_m
• Huffman Coding:
  – Huffman (1951) derived the following rules:
    • The two least probable source symbols have equal-length codewords: l_{m−1} = l_m
    • These two codewords are identical except for the last bit, which is 0 and 1, respectively.
    • These two sources can be combined to form a new symbol:
      – Its occurrence probability is P(s_m) + P(s_{m−1})
      – Its codeword is the common prefix of length l_m − 1 of the two codewords assigned to s_m and s_{m−1}.
• Huffman Coding:
  – The new set of source symbols thus generated is referred to as the first auxiliary source alphabet, which has one source symbol fewer than the original source alphabet.
  – In the first auxiliary source alphabet, rearrange the source symbols in nonincreasing order of their occurrence probabilities.
  – Apply the same procedure to this newly created source alphabet.
  – The second auxiliary source alphabet will again have one source symbol fewer than the first.
  – Repeat until a single source symbol with probability 1 is formed.
  – Start from the source symbol in the last auxiliary source alphabet and trace back to each source symbol in the original source alphabet to find the codewords.
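The merge-and-trace-back procedure above is what a heap-based Huffman construction does in practice; a sketch (not the book's tabular method, but equivalent):

```python
import heapq
from itertools import count

def huffman(probs):
    # Repeatedly merge the two least probable entries; prepend 0/1 while merging.
    tick = count()                      # tie-breaker so the heap never compares dicts
    heap = [(p, next(tick), {s: ''}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, c1 = heapq.heappop(heap)
        p2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + w for s, w in c1.items()}
        merged.update({s: '1' + w for s, w in c2.items()})
        heapq.heappush(heap, (p1 + p2, next(tick), merged))
    return heap[0][2]

code = huffman({'a': 0.4, 'b': 0.3, 'c': 0.2, 'd': 0.1})
```

The resulting code is prefix-free and satisfies the Kraft equality with lengths 1, 2, 3, 3 for this distribution.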
• Huffman Coding: – Source Reduction sequence.
• More Illustration:

  Symbol   Probability   Codeword
  a        0.05          0000
  b        0.05          0001
  c        0.1           001
  d        0.2           01
  e        0.3           10
  f        0.2           110
  g        0.1           111

  [Huffman tree: the leaves (0.05, 0.05, 0.1, 0.2, 0.3, 0.2, 0.1) are merged pairwise up to the root probability 1.0, with 0/1 labels on the branches.]
• Example: Lavg = 2.2 bits/symbol
• Example:
  – 8-bit monochrome image
    • Estimated entropy: 7.3838 bits/pixel
  – Huffman coding:
    • MATLAB: 7.428 bits/pixel, CR = 1.077, RD = 0.0715
• Near Optimal
• Huffman drawbacks:
  – For a large number of symbols, construction of the optimal Huffman code is nontrivial.
  – Huffman is block coding:
    • A codeword with an integral number of bits is assigned to each source symbol.
    • A message is encoded by concatenating the relevant codewords. This block-based approach is responsible for the limitations of Huffman codes.
    • Computationally inefficient for a message over all possible symbols.
• Golomb Coding (1):
  – Non-negative integer inputs
  – Exponentially decaying probability distribution
  – Near-optimal codes for such inputs (simpler than Huffman)
  – Used in JPEG-LS and AVS
  – Notation:
    ⌊x⌋: largest integer less than or equal to x (floor)
    ⌈x⌉: smallest integer greater than or equal to x (ceil)
• Golomb Coding (2):
  – Goal: Golomb coding of a non-negative integer n with respect to m > 0: Gm(n)
    • Combination of the unary coding of the quotient ⌊n/m⌋ and a binary representation of the remainder n mod m
    • Unary coding of an integer q: q 1s followed by a 0
• Golomb Coding (3):
  – Steps:
    • Form the unary code of ⌊n/m⌋
    • With k = ⌈log2 m⌉, c = 2^k − m, r = n mod m, compute the truncated remainder:
      r' = r truncated to k−1 bits,       if 0 ≤ r < c
      r' = r + c truncated to k bits,     otherwise
    • Concatenate the results of the two steps
• Golomb Coding Example (1):
  – Goal: G4(9)
  – Step #1: ⌊9/4⌋ = 2 → unary code 110
  – Step #2: k = ⌈log2 4⌉ = 2, c = 2² − 4 = 0, r = 9 mod 4 = 1 → (00)01 truncated to k = 2 bits: 01
  – Step #3: concatenate 110 and 01: 11001
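A sketch of Gm(n) following the three steps; the truncated-remainder branch only triggers when m is not a power of two:

```python
def golomb(n, m):
    # Unary code of the quotient, then the (possibly truncated) remainder.
    q, r = divmod(n, m)
    unary = '1' * q + '0'
    k = (m - 1).bit_length()          # ceil(log2 m); 0 when m == 1
    c = 2 ** k - m
    if k == 0:                        # m == 1: no remainder bits at all
        return unary
    if r < c:                         # only possible when m is not a power of two (k >= 2)
        return unary + format(r, '0%db' % (k - 1))
    return unary + format(r + c, '0%db' % k)

code = golomb(9, 4)   # the slide's G4(9)
```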
• Golomb Coding Example (2):
• Golomb Coding (4):
  – Easy (binary shift operations) for m = 2^k: then c = 0 and r' = r = n mod m
    • Golomb–Rice or Rice codes
  – Optimality condition:
    • Non-negative geometric distribution: P(n) = (1 − ρ) ρ^n, 0 < ρ < 1
    • With: m = ⌈ log2(1 + ρ) / log2(1/ρ) ⌉
• Usability (1):
  – Intensity image distributions are NOT geometric.
  – Difference-image distributions are.
  – How to handle negative values: map to even and odd non-negative integers:
    M(n) = 2n        n ≥ 0
    M(n) = 2|n| − 1  n < 0
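The even/odd mapping M(n) in code (interleaving positives and negatives onto 0, 1, 2, ...):

```python
def golomb_map(n):
    # M(n) = 2n for n >= 0, 2|n| - 1 for n < 0:
    # 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ...
    return 2 * n if n >= 0 else 2 * abs(n) - 1

mapped = [golomb_map(v) for v in (0, -1, 1, -2, 2)]
```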
• Usability (2):
  – Geometric distribution(s)
  – Difference-image histogram
  – Mapped histogram
• Using Golomb coding on an image:
  – Consider P(n − μ) instead of P(n), where μ is the mean intensity
  – Map the negative values.
  – Code using Gm(n)
• Golomb Image Coding Example (1):
Mapping
• Golomb Image Coding Results (1):
  – G1(n):
    • C = 4.5
    • 88% of the theoretical compression ratio (4.5/5.1)
    • 96% of the Huffman coding compression (both in MATLAB)
      – With less computational effort
• Golomb Image Coding Example (2):
  – G1(n):
    • C = 0.0922 (!)
    • Data expansion
      – Golomb coding is model-based
• Exponential-Golomb coding of order k:
  – Step #1: find the non-negative integer i such that:
    Σ_{j=0}^{i−1} 2^{j+k} ≤ n < Σ_{j=0}^{i} 2^{j+k}
    and form the unary code of i.
  – Step #2: truncate the binary representation of
    n − Σ_{j=0}^{i−1} 2^{j+k}
    to its k + i least significant bits.
  – Step #3: concatenate the results of steps #1 and #2.
  – k = 0 is known as the Elias gamma code.
• Example:
  – Let's compute G⁰exp(8):
    • Step #1: i = 3, since 2⁰ + 2¹ + 2² = 7 ≤ 8 < 15 → unary code 1110
    • Step #2: 8 − Σ_{j=0}^{2} 2^j = 8 − 7 = 1 → 0001, truncated to 3 + 0 = 3 bits: 001
    • Step #3: 1110001
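The three steps translate directly to code; a sketch of the order-k exponential-Golomb encoder used above:

```python
def exp_golomb(n, k=0):
    # Step 1: find i with sum_{j<i} 2^(j+k) <= n < sum_{j<=i} 2^(j+k).
    i, base = 0, 0
    while n >= base + 2 ** (i + k):
        base += 2 ** (i + k)
        i += 1
    unary = '1' * i + '0'
    # Step 2: binary of (n - base), truncated to its k + i low-order bits.
    if k + i == 0:
        return unary
    return unary + format(n - base, '0%db' % (k + i))

code = exp_golomb(8, 0)   # the slide's example
```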
• Arithmetic Coding:
  – Stream-based: a string of source symbols is encoded as a string of code symbols.
  – Free of the integer number-of-bits-per-symbol restriction, hence more efficient.
  – Arithmetic coding may reach the theoretical bound on coding efficiency specified by the noiseless source coding theorem, for any information source.
  – Used in JBIG1, JBIG2, JPEG-2000, H.264, MPEG-4 AVC
• Arithmetic Coding, basic idea:
  – With n bits, the interval [0,1] is divided into 2^n parts of length 2^−n.
  – Conversely, an interval of length A = 2^−n can be coded using −log2 A bits.
    (1 bit: 0 | 1;  2 bits: 00 | 01 | 10 | 11;  3 bits: 000 | 001 | ... | 111)
• Arithmetic Coding, basic idea:
  – Represent the entire input stream as a small interval within [0,1].
  – Divide the interval [0,1] into subintervals according to the probability distribution.
  – Take the first symbol's interval and divide it again into subintervals with lengths proportional to the probabilities.
  – Take the second symbol's interval and continue …
• Arithmetic Encoding by example:
  – Stream is baca

  Si  Pi    Range        after b           after a            after c
  a   0.5   [0.0, 0.5)   [0.5, 0.625)      [0.5, 0.5625)      [0.59375, 0.609375)
  b   0.25  [0.5, 0.75)  [0.625, 0.6875)   [0.5625, 0.59375)  [0.609375, 0.6171875)
  c   0.25  [0.75, 1.0)  [0.6875, 0.75)    [0.59375, 0.625)   [0.6171875, 0.625)
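The interval refinement in the table can be reproduced with a toy floating-point encoder (a sketch; practical coders use integer ranges):

```python
def arith_encode(stream, model):
    # model maps each symbol to its (low, high) slice of [0, 1).
    lo, hi = 0.0, 1.0
    for s in stream:
        s_lo, s_hi = model[s]
        lo, hi = lo + (hi - lo) * s_lo, lo + (hi - lo) * s_hi
    return lo, hi   # any number in [lo, hi) encodes the stream

model = {'a': (0.0, 0.5), 'b': (0.5, 0.75), 'c': (0.75, 1.0)}
interval = arith_encode('baca', model)
```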
• Arithmetic Decoding by example:
  – Code is 0.59375
    • 0.5 ≤ 0.59375 < 0.75 → b (b section of the main interval)
    • 0.5 ≤ 0.59375 < 0.625 → a (a section of subinterval b)
    • 0.59375 ≤ 0.59375 < 0.625 → c (c section of subinterval a)
    • 0.59375 ≤ 0.59375 < 0.609375 → a (a section of subinterval c)
  – Loop over all the symbols:
    • range = high_range of the symbol − low_range of the symbol
    • number = number − low_range of the symbol
    • number = number / range
    • Stop when number = 0
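The decoding loop above as a sketch; the symbol count is passed explicitly, since "stop when number = 0" is ambiguous when a valid stream itself maps to 0:

```python
def arith_decode(number, model, n_symbols):
    # Locate the subinterval containing `number`, emit its symbol, rescale, repeat.
    out = []
    for _ in range(n_symbols):
        for s, (lo, hi) in model.items():
            if lo <= number < hi:
                out.append(s)
                number = (number - lo) / (hi - lo)
                break
    return ''.join(out)

model = {'a': (0.0, 0.5), 'b': (0.5, 0.75), 'c': (0.75, 1.0)}
decoded = arith_decode(0.59375, model, 4)
```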
• Implementation:
  – Replace [0,1] with the integer range [0000H, FFFFH]
  – Scale the probability values accordingly
• Arithmetic Coding
• Lempel-Ziv-Welch (LZW) Coding:
  – Reduces spatial and coding redundancy
  – Assigns fixed-length codewords to variable-length sequences of source symbols
  – Requires no a priori knowledge of the intensity probabilities
  – Used in GIF, TIFF, PNG, PDF
• Introductory Example:
    21 21 21 95 169 243 243 243
    21 21 21 95 169 243 243 243
    21 21 21 95 169 243 243 243
    21 21 21 95 169 243 243 243
  – First-order entropy: 1.81 bits/pixel

    Gray level   Count   Probability
    21           12      0.375
    95            4      0.125
    169           4      0.125
    243          12      0.375
• Introductory Example:
  – Second-order entropy: 1.25 bits/pixel

    Gray-level pair   Count   Probability
    (21, 21)          8       0.250
    (21, 95)          4       0.125
    (95, 169)         4       0.125
    (169, 243)        4       0.125
    (243, 243)        8       0.250
    (243, 21)         4       0.125
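Both entropy figures follow from the same histogram formula, applied once to single pixels and once to horizontal pairs (the pair entropy is halved to get bits per pixel):

```python
import math

def entropy(counts):
    # -sum p log2 p over the normalized histogram.
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

H1 = entropy([12, 4, 4, 12])           # first-order: ~1.81 bits/pixel
H2 = entropy([8, 4, 4, 4, 8, 4]) / 2   # pair entropy halved: 1.25 bits/pixel
```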
• Meta Algorithm (Wikipedia):
  1. Initialize the dictionary to contain all strings of length one.
  2. Find the longest string W in the dictionary that matches the current input.
  3. Emit the dictionary index for W to output and remove W from the input.
  4. Add W followed by the next symbol in the input to the dictionary.
  5. Go to Step 2.
• Example (Wikipedia):
  – String to be coded: TOBEORNOTTOBEORTOBEORNOT#
  – 1) Initial dictionary: # = 0, A = 1, B = 2, ..., Z = 26
• Example: TOBEORNOTTOBEORTOBEORNOT#

  Output   Sequence   New dictionary entry
  20       T          27: TO
  15       O          28: OB
  2        B          29: BE
  5        E          30: EO
  15       O          31: OR
  18       R          32: RN
  14       N          33: NO
  15       O          34: OT
  20       T          35: TT
  27       TO         36: TOB
  29       BE         37: BEO
  31       OR         38: ORT
  36       TOB        39: TOBE
  30       EO         40: EOR
  32       RN         41: RNO
  34       OT         42: OT#
  0        #          —
0. Initialize the dictionary with all possible gray values (0–255).
1. Input the current pixel.
2. If the current pixel combined with the previous pixels forms an existing dictionary entry, then
   2.1 Move to the next pixel and repeat Step 1.
   Else
   2.2 Output the dictionary location of the currently recognized sequence (which does not include the current pixel).
   2.3 Create a new dictionary entry by appending the current pixel to the currently recognized sequence from 2.2.
   2.4 Move to the next pixel and repeat Step 1.
• LZW for image coding:
  – Initialize the dictionary with all possible gray values (0–255)
  – (1) Input the current pixel
  – (2) If (current pixel concatenated with previous pixels) exists in the dictionary, then
    • (2-1) Move to the next pixel and repeat Step 1
  – Else
    • (2-2) Output the dictionary location of the currently recognized sequence (which does not include the current pixel)
    • (2-3) Create a new dictionary entry by appending the current pixel to the currently recognized sequence from (2-2)
    • (2-4) Move to the next pixel and repeat Step 1
Dictionary: 0–255 (initial); input sequence: 39 39 126 126, repeated four times.

  Currently recognized   Encoded output (9 bits)   New dictionary entry
  39                     39                        256: 39-39
  39                     39                        257: 39-126
  126                    126                       258: 126-126
  126                    126                       259: 126-39
  39-39                  256                       260: 39-39-126
  126-126                258                       261: 126-126-39
  39-39-126              260                       262: 39-39-126-126
  ...
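A sketch of the encoder, which reproduces the 39/126 example above:

```python
def lzw_encode(seq, first_free=256):
    # Dictionary starts with all single values 0..first_free-1.
    d = {(i,): i for i in range(first_free)}
    w, out = (), []
    for px in seq:
        wk = w + (px,)
        if wk in d:
            w = wk                     # keep growing the recognized sequence
        else:
            out.append(d[w])           # emit code for the recognized sequence
            d[wk] = len(d)             # new entry: recognized sequence + pixel
            w = (px,)
    if w:
        out.append(d[w])
    return out

codes = lzw_encode([39, 39, 126, 126] * 4)
```

Ten 9-bit codes replace sixteen 8-bit pixels: 90 bits instead of 128.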
• Binary image compression:
  – A very simple approach: pack 8 pixels into one byte (coding redundancy)
  – Constant Area Coding (CAC):
    • The image is divided into p×q blocks:
      – White, black, and mixed
    • Coding:
      – The most probable category (W/B/M) is coded with one bit (0)
      – The other two are coded with two bits (10/11)
      – The mixed code is a prefix of the block's pixel data
• Binary image compression:
  – White Block Skipping (WBS):
    • 0 for all-white blocks
    • 1 as a prefix for mixed or all-black blocks (less probable)
  – Symbol-based (token-based) coding:
    • The image is modeled as a collection of frequently occurring sub-images (symbols or tokens)
    • (xi, yi, ti): (xi, yi) is the position of the symbol and ti its position in the dictionary
  – JBIG2 (text, halftone, generic regions); see pages 561–562
• Symbol-based (token-based) coding:
  – The image is modeled as a collection of frequently occurring sub-images (symbols or tokens)
  – (xi, yi, ti): (xi, yi) is the position of the symbol and ti its position in the dictionary
• One-dimensional Run-Length Coding (RLE):
  – Basic idea:
    • Code each run of white (1) and black (0) pixels by its length (or each run of constant gray level).
    • Two approaches:
      – Specify the value of the first run of each row.
      – Assume each row begins with a white run (whose length may be zero).
    • Variable-length coding may be applied to the black and white run lengths themselves.
  – Used in facsimile (FAX) and BMP
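The second approach above (every row starts with a white run, possibly of length zero) can be sketched as:

```python
from itertools import groupby

def rle_row(row):
    # Run lengths for one row of a binary image, assuming it starts with a
    # white (1) run; a zero-length white run is emitted for rows starting black.
    runs = [(v, sum(1 for _ in g)) for v, g in groupby(row)]
    if runs and runs[0][0] != 1:
        runs.insert(0, (1, 0))
    return [n for _, n in runs]

codes = rle_row([1, 1, 0, 0, 0, 1])   # white 2, black 3, white 1
```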
• Run-Length Coding implementation:
  – One-dimensional CCITT
  – Two-dimensional CCITT
    • Codes black-to-white and white-to-black transitions
  – Read pages 555–559
• Bit-Plane Coding:
  – Ordinary (binary) representation:
    A = a_{m−1} 2^{m−1} + a_{m−2} 2^{m−2} + ... + a_1 2^1 + a_0 2^0
  – We could code 8 binary images, but there is a drawback:
    • Small changes in gray level can have a significant effect on the bit planes: for 127 = 01111111 and 128 = 10000000, all eight planes differ!
  – Gray code:
    g_i = a_i ⊕ a_{i+1},  i = 0, 1, ..., m−2
    g_{m−1} = a_{m−1}
  – 127 → 01000000 and 128 → 11000000 (a single plane differs)
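The Gray-code mapping collapses to a one-line bit trick, which confirms the 127/128 case:

```python
def to_gray(a):
    # g_i = a_i XOR a_{i+1} for i < m-1, g_{m-1} = a_{m-1}  <=>  a ^ (a >> 1)
    return a ^ (a >> 1)

g127 = format(to_gray(127), '08b')
g128 = format(to_gray(128), '08b')
```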
• Bit-Plane Coding Example
• Example: bit planes 7–4, binary vs. Gray code (the Gray-code images are less complex)
  [Figure: bit planes 7, 6, 5, 4]
• Example: bit planes 3–0, binary vs. Gray code (the Gray-code images are less complex)
  [Figure: bit planes 3, 2, 1, 0]
• JBIG2 Results:
• Block Transform Coding:
  – A reversible linear transform maps the image to a set of coefficients.
  – These coefficients are then quantized and coded (locally/globally).
• Transform Selection:
  – Compression is achieved during quantization, NOT during the transform.
  – The type of transform affects the achievable compression ratio.
    T(u,v) = Σ_{x=0}^{n−1} Σ_{y=0}^{n−1} g(x,y) r(x,y,u,v)
    g(x,y) = Σ_{u=0}^{n−1} Σ_{v=0}^{n−1} T(u,v) s(x,y,u,v)
  – Separable transform: r(x,y,u,v) = r1(x,u) r2(y,v)
  – Symmetric transform: r(x,y,u,v) = r1(x,u) r1(y,v)
• Transform Coding:
  – DFT (M = N):
    r(x,y,u,v) = e^{−j2π(ux+vy)/n}
    s(x,y,u,v) = (1/n²) e^{j2π(ux+vy)/n}
    • Fast
    • Complex-valued
    • Gibbs phenomenon
• DFT Basis Function:
• Walsh-Hadamard Transform (WHT):
  r(x,y,u,v) = s(x,y,u,v) = (1/n) (−1)^{ Σ_{i=0}^{m−1} [ b_i(x) p_i(u) + b_i(y) p_i(v) ] },  n = 2^m
  b_k(z): the kth bit (right to left) of the binary representation of z
  p_0(u) = b_{m−1}(u)
  p_1(u) = b_{m−1}(u) + b_{m−2}(u)
  p_2(u) = b_{m−2}(u) + b_{m−3}(u)
  ...
  p_{m−1}(u) = b_1(u) + b_0(u)
  (sums performed modulo 2)
• Walsh-Hadamard Basis Function:
• Discrete Cosine Transform (DCT):
  r(x,y,u,v) = s(x,y,u,v) = α(u) α(v) cos[ (2x+1)uπ / 2n ] cos[ (2y+1)vπ / 2n ]
  α(u) = sqrt(1/n)   u = 0
  α(u) = sqrt(2/n)   u = 1, 2, ..., n−1
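The normalization α(u) makes the basis vectors orthonormal, which a quick numerical check confirms (a sketch; the 2-D kernel is the separable product of two such vectors):

```python
import math

def dct_basis(u, n):
    # 1-D DCT basis vector with the alpha(u) normalization from the slide.
    a = math.sqrt(1 / n) if u == 0 else math.sqrt(2 / n)
    return [a * math.cos((2 * x + 1) * u * math.pi / (2 * n)) for x in range(n)]

n = 8
dot = sum(p * q for p, q in zip(dct_basis(2, n), dct_basis(3, n)))   # ~0
norm = sum(p * p for p in dct_basis(2, n))                           # ~1
```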
• DCT Basis Function:
• Example:
  – n = 8, truncating 50% of the coefficients
  – RMS errors: DFT (2.32), WHT (1.78), DCT (1.13)
• Transform Coding details:
  – Consider a sub-image (block-wise processing):
    g(x,y) = Σ_{u=0}^{n−1} Σ_{v=0}^{n−1} T(u,v) s(x,y,u,v)
  – Matrix formulation:
    G = Σ_{u=0}^{n−1} Σ_{v=0}^{n−1} T(u,v) S_uv
    where S_uv is the n×n basis image with elements s(i,j;u,v), i,j = 0, ..., n−1
• Transform Coding details:
  – Transform masking function:
    χ(u,v) = 0  if T(u,v) is in a predefined (discarded) region
    χ(u,v) = 1  otherwise
    Ĝ = Σ_{u=0}^{n−1} Σ_{v=0}^{n−1} χ(u,v) T(u,v) S_uv   (compressed image)
• Transform Coding details:
  – Error analysis:
    e²_ms = E{ ||G − Ĝ||² }
          = E{ || Σ_u Σ_v T(u,v) S_uv − Σ_u Σ_v χ(u,v) T(u,v) S_uv ||² }
          = E{ || Σ_u Σ_v T(u,v) [1 − χ(u,v)] S_uv ||² }
          = Σ_u Σ_v σ²_{T(u,v)} [1 − χ(u,v)]
    by the orthonormality of the basis images S_uv, where σ²_{T(u,v)} is the variance of coefficient T(u,v).
• Transform Compression:
  – How to select the transform:
    • The one that packs more information into fewer coefficients.
  – Best transform:
    • KL (Karhunen-Loève), but it is data dependent!
  – Most used transform:
    • DCT:
      – Single-chip implementations
      – Packs the most information into the fewest coefficients (among fixed transforms)
      – Minimum blocking artifact
• Periodicity implicit in the DFT and DCT:
  – The Gibbs phenomenon causes erroneous boundary values
    • Blocking artifacts
  – [Figure: boundary discontinuities implied by the DFT vs. the DCT]
• Block size effect:
  – Small blocks:
    • Faster
    • More blocking artifacts (less use of the correlation between neighboring pixels)
    • Low compression ratio
  – Large blocks:
    • Slower
    • Better compression in "flat" regions
    • Fewer blocking artifacts
  – Use a power of 2 for fast implementation.
• Block-Size and Transform Effects:
• Block size effect for the DCT:
  – Reconstruction using 25% of the DCT coefficients
  – Original vs. 2×2, 4×4, and 8×8 blocks
• Bit Allocation:
  – Bit allocation: truncation, quantization, coding
  – How to select the retained coefficients:
    • Maximum magnitude (threshold coding)
    • Maximum variance (zonal coding)
  – Threshold coding:
    • Keep the P largest-magnitude coefficients.
  – Zonal coding:
    • The DCT coefficients are treated as random variables; all statistics are computed over the ensemble of transformed subimages.
    • Keep the P largest-variance coefficients (the same χ(u,v) for all subimages).
• Threshold vs. Zonal Coding: – Keep coefficients 8 out of 64
RMS=4.5
RMS=6.5
• Zonal Coding Implementation:
– Ensemble average over the MN/n² subimage transforms; select the nonzero χ(u,v) positions (those with the highest variances): the zonal mask.
• 1 or 0 for each position in the transform domain: zonal mask
• Number of bits used to code each coefficient: zonal bit allocation
– Quantization of the retained coefficients:
• Fixed length (uniformly quantized)
• Optimal quantizer:
– A model for the DCT coefficients:
» Rayleigh for the DC component (a positive random variable)
» Laplacian or Gaussian for the others (central limit theorem)
• Zonal Coding Example:
Z-Coding # of bits
• Threshold Coding Implementation:
– An adaptive mask selects the nonzero χ(u,v); the locations of the maximum-magnitude coefficients vary from one subimage to another.
• Reorder the 2-D DCT coefficients into a 1-D run-length-coded sequence with a zigzag scan.
• Why the zigzag arrangement:
– It places the coefficients in order of increasing spatial frequency,
– which improves the effectiveness of the subsequent run-length encoding.
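The zigzag reordering can be generated rather than tabulated. A sketch; the direction convention matches the usual JPEG scan, (0,0) → (0,1) → (1,0) → (2,0) → (1,1) → (0,2) → …:

```python
def zigzag_order(n=8):
    """Return the (row, col) pairs of an n x n block in zigzag order:
    traverse anti-diagonals, alternating direction, so coefficients come
    out roughly in order of increasing spatial frequency."""
    order = []
    for s in range(2 * n - 1):           # s = row + col (anti-diagonal index)
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        # odd diagonals run top-right -> bottom-left, even ones the reverse
        order.extend(diag if s % 2 else diag[::-1])
    return order

zz = zigzag_order(4)
print(zz[:6])  # [(0, 0), (0, 1), (1, 0), (2, 0), (1, 1), (0, 2)]
```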
• Threshold Coding Example:
T-Coding Zigzag tracing
• Thresholding:
– A single global threshold for all subimages:
• CR varies from image to image
– A different threshold for each subimage:
• N-largest coding (the same number of coefficients is discarded per subimage)
• Same CR for all images
– A threshold that varies as a function of location:
• CR varies from image to image
• Combined quantization and thresholding
• Location-Variant Thresholding:
– Compression: $\hat{T}(u,v) = \mathrm{round}\left[\dfrac{T(u,v)}{Z(u,v)}\right]$, where the $Z(u,v)$ are the elements of the transform normalization array.
– Decompression: $\dot{T}(u,v) = Z(u,v)\,\hat{T}(u,v)$, followed by the inverse transform to obtain $\hat{f}(x,y)$.
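The compression/decompression pair above is easy to exercise. A minimal sketch: the sample values are loosely modeled on the textbook's worked block, and note that Python's round() rounds halves to even, which is fine for illustration:

```python
def normalize(T, Z):
    # Compression step: T_hat(u,v) = round(T(u,v) / Z(u,v))
    return [[round(t / z) for t, z in zip(tr, zr)] for tr, zr in zip(T, Z)]

def denormalize(T_hat, Z):
    # Decompression step: T_dot(u,v) = Z(u,v) * T_hat(u,v)
    return [[t * z for t, z in zip(tr, zr)] for tr, zr in zip(T_hat, Z)]

# Hypothetical coefficients and normalization entries
T = [[-415, -30], [-61, 27]]
Z = [[16, 11], [12, 12]]

T_hat = normalize(T, Z)        # [[-26, -3], [-5, 2]]
T_dot = denormalize(T_hat, Z)  # [[-416, -33], [-60, 24]]
# The reconstruction error is at most half a quantization step:
assert all(abs(t - td) <= z / 2
           for tr, tdr, zr in zip(T, T_dot, Z)
           for t, td, z in zip(tr, tdr, zr))
print(T_hat, T_dot)
```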
• Thresholding–Quantization:
– Left: the quantization curve for a constant Z(u,v) = c
– Right: a typical normalization array Z(u,v)

$\hat{T}(u,v) = \mathrm{round}\left[\dfrac{T(u,v)}{Z(u,v)}\right]$

– For Z(u,v) = c: $\hat{T}(u,v) = k$ when $kc - \dfrac{c}{2} \le T(u,v) < kc + \dfrac{c}{2}$; in particular, $\hat{T}(u,v) = 0$ for $-\dfrac{c}{2} \le T(u,v) < \dfrac{c}{2}$.
• Quantization Effect: Z(u,v), 2Z(u,v), 4Z(u,v), 8Z(u,v), 16Z(u,v), 32Z(u,v)
• JPEG
– Based on the textbook
– Three different coding systems:
• Lossy baseline coding system (most applications)
• Extended coding for greater compression / progressive reconstruction (Web)
• Lossless independent coding (reversible compression)
• Baseline system (sequential baseline system):
– Input and output precision: 8 bits
– Quantized DCT coefficient precision: 11 bits
– Performed in three steps: DCT, quantization, variable-length coding
• Preprocessing:
– Level shifting (subtract $2^{k-1}$ for bit depth k)
• Coding:
– Different variable-length codes for the DC and the nonzero AC coefficients
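The three encoder steps (level shift, DCT, normalize/quantize) and their decoder mirror can be sketched for one 8×8 block. The ramp data and the flat Z array are stand-ins for illustration, not the real JPEG quantization table:

```python
import numpy as np

N = 8
# Orthonormal DCT-II matrix
C = np.array([[np.sqrt((1.0 if u == 0 else 2.0) / N)
               * np.cos((2 * x + 1) * u * np.pi / (2 * N))
               for x in range(N)] for u in range(N)])

# Hypothetical 8-bit block: each row is the same horizontal ramp 64..176
f = np.tile(np.arange(64, 192, 16, dtype=float), (N, 1))

g = f - 128.0                          # 1) level shift by 2^(k-1) = 128
T = C @ g @ C.T                        # 2) forward 2-D DCT
Z = np.full((N, N), 16.0)              # flat normalization array (a stand-in
                                       # for a real quantization table)
T_hat = np.round(T / Z)                # 3) normalize and quantize
# ---- decoder mirror ----
T_dot = Z * T_hat                      # denormalize
g_rec = C.T @ T_dot @ C                # inverse DCT
f_rec = np.clip(np.round(g_rec) + 128.0, 0, 255)  # undo the level shift

print("max |f - f_rec| =", np.abs(f - f_rec).max())
```

The residual error comes entirely from the round() in step 3; everything else in the chain is invertible.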
• Sample subimages (8×8)
• Level shifting (-128)
• Forward DCT Transform:
• Normalization: $\hat{T}(u,v) = \mathrm{round}\left[\dfrac{T(u,v)}{Z(u,v)}\right]$, e.g. $\hat{T}(0,0) = -26$
• Zig-Zag Scanning:
• Two different codes for the DC and AC coefficients – see Tables A-3, A-4, and A-5 for the codebooks
• Quantized Coefficients:
• Denormalization: $\dot{T}(u,v) = Z(u,v)\,\hat{T}(u,v)$, e.g. $\dot{T}(0,0) = -416$ (the original $T(0,0) = -415$)
• Inverse DCT Transform:
• Level Shifting (+128):
• Difference Image:
• Example:
1:25
1:52
• JPEG (Joint Photographic Expert Group) – Color Models – Image Subsampling – Coding (DC and AC)
• YUV Color model:
• YIQ Color Model:
• Chromatic components subsampling:
• Chrominance decimation (the eye's lower sensitivity to chrominance detail):
• Compression Block Diagram
• Decompression Block Diagram
• DCT on image blocks:
– Each image is divided into 8×8 blocks. The 2-D DCT is applied to each block f(i, j), producing the DCT coefficients F(u, v) of that block.
• Quantization:
– F(u,v) is a DCT coefficient, Q(u,v) is a "quantization matrix" entry, and $\hat{F}(u,v)$ is the quantized DCT coefficient that JPEG uses in the succeeding entropy coding.
• The quantization step is the main source of loss in JPEG compression.
• The entries of Q(u,v) tend to have larger values towards the lower right corner, which introduces more loss at the higher spatial frequencies.
• Quantization Table:
– The default Q(u,v) values were obtained from psychophysical studies, with the goal of maximizing the compression ratio while minimizing perceptual loss in JPEG images.
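For concreteness, the default luminance table below is the one published in Annex K of the JPEG standard (ITU-T T.81). The scale_table "quality" knob is a simplification of my own for illustration, not the IJG quality formula:

```python
# Luminance quantization table from Annex K of the JPEG standard (ITU-T
# T.81); larger entries toward the lower right coarsen the high frequencies.
Q = [
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
]

def scale_table(Q, factor):
    # Scaling Q is the usual "quality" knob: smaller factors keep more detail.
    return [[max(1, round(q * factor)) for q in row] for row in Q]

half = scale_table(Q, 0.5)
print(half[0][0])  # 8
```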
• DCT + Quantization:
Encoder
Decoder
• DPCM: Differential Pulse Code Modulation – Used for DC components compression: • DC coefficient is unlikely to change drastically within a short distance so we expect DPCM codes to have small magnitude and variance.
– Example: DC values 150, 155, 149, 152, 144 → DPCM codes 150, 5, −6, 3, −8
– Entropy coding of DC components: • Huffman
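Using the slide's DC sequence, DPCM coding and its inverse can be sketched as follows (sending the first value verbatim is an assumption; conventions differ on how the first prediction is seeded):

```python
def dpcm_encode(dc_values):
    """Code each DC coefficient as its difference from the previous one;
    the first value is sent as-is (previous = 0)."""
    prev, codes = 0, []
    for v in dc_values:
        codes.append(v - prev)
        prev = v
    return codes

def dpcm_decode(codes):
    prev, values = 0, []
    for c in codes:
        prev += c
        values.append(prev)
    return values

dc = [150, 155, 149, 152, 144]      # DC values from the slide's example
print(dpcm_encode(dc))               # [150, 5, -6, 3, -8]
```

The differences are small in magnitude and variance, exactly what the subsequent Huffman coding exploits.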
• RLC: Run-Length Coding
– Used for AC component compression:
• RLC is effective when the source symbols tend to form continuous groups; here the quantized DCT coefficients tend to form continuous runs of 0s.
– Used in conjunction with the zigzag scan
– Entropy coding of the AC components:
• Huffman
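The (run, value) pairing used for the AC coefficients can be sketched as below. This is a simplification: baseline JPEG also caps zero runs at 15 with a ZRL symbol, which is omitted here:

```python
def rlc_encode(ac):
    """Encode a zigzag-ordered AC sequence as (zero_run, value) pairs,
    with (0, 0) as the end-of-block (EOB) marker."""
    pairs, run = [], 0
    # trailing zeros are summarized by the EOB marker, so drop them first
    last = len(ac)
    while last > 0 and ac[last - 1] == 0:
        last -= 1
    for v in ac[:last]:
        if v == 0:
            run += 1
        else:
            pairs.append((run, v))
            run = 0
    pairs.append((0, 0))  # EOB
    return pairs

ac = [-3, 0, 0, 5, -1, 0, 0, 0, 2] + [0] * 54
print(rlc_encode(ac))  # [(0, -3), (2, 5), (0, -1), (3, 2), (0, 0)]
```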
• RLC: Run Length Coding
• Lossless Predictive Coding:
– Code only the new information in each pixel.
– New information: the difference between two successive pixels.
– $f_n$: new incoming pixel value
– $\hat{f}_n$: prediction of $f_n$ based on a limited past history
– $e_n = f_n - \hat{f}_n$ is coded with a variable-length code (symbol encoder)
• Prediction:
– An optimal (local/global) and adaptive predictor:

1-D: $\hat{f}_n = \mathrm{round}\left[\sum_{i=1}^{m}\alpha_i f_{n-i}\right]$, or along a row: $\hat{f}(x,y) = \mathrm{round}\left[\sum_{i=1}^{m}\alpha_i f(x, y-i)\right]$

– A nearest-integer operator is needed.
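A lossless 1-D predictive codec with the rounded previous-pixel predictor can be sketched as follows; because encoder and decoder apply the identical integer predictor, reconstruction is exact. The α = 0.97 value and the scan line are illustrative choices:

```python
def predict(prev, alpha=0.97):
    # previous-pixel predictor, rounded to the nearest integer
    return round(alpha * prev)

def encode(pixels, alpha=0.97):
    """e_n = f_n - round(alpha * f_{n-1}); the first prediction uses 0."""
    errors, prev = [], 0
    for f in pixels:
        errors.append(f - predict(prev, alpha))
        prev = f
    return errors

def decode(errors, alpha=0.97):
    pixels, prev = [], 0
    for e in errors:
        prev = predict(prev, alpha) + e
        pixels.append(prev)
    return pixels

row = [118, 120, 121, 119, 140, 141]   # hypothetical scan line
e = encode(row)
print(e)
assert decode(e) == row                # lossless: exact reconstruction
```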
• Coder and Decoder:
• Lossless Predictive Coding:
– For 2-D images: $\hat{f}(x,y) = \mathrm{round}\left[\sum_{i}\sum_{j}\alpha_{i,j}\, f(x-i, y-j)\right]$ over a causal neighborhood
– Example: $\hat{f}(x,y) = \mathrm{round}\left[\alpha f(x, y-1)\right]$
• A zero difference is coded as 128
• Negative/positive errors are coded with darker/lighter gray levels
• Example:
– Prediction residual (α = 1): $e(x,y) = f(x,y) - \hat{f}(x,y)$
– CR = 8/3.99 ≈ 2
– Residual PDF: $p_e(e) = \dfrac{1}{\sqrt{2}\,\sigma_e}\, e^{-\sqrt{2}\,|e|/\sigma_e}$
• Example (Temporal Redundancy):
$\hat{f}(x,y,t) = \mathrm{round}\left[\alpha f(x,y,t-1)\right]$, with α = 1; $e(x,y,t) = f(x,y,t) - \hat{f}(x,y,t)$
• Motion Compensated Prediction Residuals – Self study (589-596)
• Lossy Compression: – JPEG, JPEG2000, GIF – 1:10 to 1:50
• Lossy Predictive Coding:
– Map the prediction error into a limited range of outputs (controls the compression ratio)
– Reconstruction: $\dot{f}_n = \dot{e}_n + \hat{f}_n$
• Delta Modulation:
$\hat{f}_n = \alpha \dot{f}_{n-1}$
$\dot{e}_n = +\zeta$ for $e_n > 0$; $-\zeta$ otherwise
– Single-bit code (1 bit/pixel)
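Delta modulation per the equations above can be sketched directly; the step size ζ = 4 is an arbitrary choice, and the trace shows the slope-overload behavior on a sudden jump:

```python
def dm_encode(pixels, alpha=1.0, zeta=4.0):
    """Delta modulation: predict alpha * (previous reconstruction), then
    send a single bit per pixel: 1 -> +zeta, 0 -> -zeta."""
    bits, recon = [], 0.0
    for f in pixels:
        pred = alpha * recon
        bit = 1 if f - pred > 0 else 0
        recon = pred + (zeta if bit else -zeta)
        bits.append(bit)
    return bits

def dm_decode(bits, alpha=1.0, zeta=4.0):
    out, recon = [], 0.0
    for b in bits:
        recon = alpha * recon + (zeta if b else -zeta)
        out.append(recon)
    return out

line = [14, 15, 14, 15, 29, 37, 45]   # hypothetical scan line
bits = dm_encode(line)
rec = dm_decode(bits)
print(rec)  # the +-zeta staircase cannot track the jump: slope overload
```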
• Optimal Prediction:
– Minimize $E\{e_n^2\} = E\left\{\left[f_n - \hat{f}_n\right]^2\right\}$ with $\hat{f}_n = \sum_{i=1}^{m}\alpha_i f_{n-i}$:

$\sigma_e^2 = E\left\{\left[f_n - \sum_{i=1}^{m}\alpha_i f_{n-i}\right]^2\right\}, \qquad \boldsymbol{\alpha} = R^{-1} r$

– $R$: image autocorrelation matrix, $R_{i,j} = E\{f_{n-i} f_{n-j}\}$, $i, j = 1, \dots, m$
– $r_k = E\{f_n f_{n-k}\}$; the resulting error variance is $\sigma_e^2 = \sigma^2 - \sum_{i=1}^{m}\alpha_i E\{f_n f_{n-i}\}$
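The normal-equation solution α = R⁻¹r can be checked on a synthetic first-order Markov (AR(1)) line, where the optimal two-tap predictor is known to be (ρ, 0):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic first-order Markov line: f_n = rho * f_{n-1} + noise
rho, n = 0.95, 20000
f = np.zeros(n)
for i in range(1, n):
    f[i] = rho * f[i - 1] + rng.normal()

m = 2  # predictor order

def corr(k):
    # Sample estimate of E{f_n f_{n-k}}
    return np.mean(f[: n - k] * f[k:]) if k else np.mean(f * f)

R = np.array([[corr(abs(i - j)) for j in range(1, m + 1)]
              for i in range(1, m + 1)])
r = np.array([corr(k) for k in range(1, m + 1)])
alpha = np.linalg.solve(R, r)   # alpha = R^{-1} r
print(alpha)  # close to [rho, 0] for an AR(1) source
```

The second coefficient is essentially zero: once the previous pixel is used, the pixel before it carries no extra predictive information for this source.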
• Optimal Prediction:
– Computing this image by image is hard.
– Global coefficients are computed by assuming a simple image model:

$E\{f(x,y)\, f(x-i, y-j)\} = \sigma^2 \rho_v^{\,i} \rho_h^{\,j}$

$\hat{f}(x,y) = \alpha_1 f(x, y-1) + \alpha_2 f(x-1, y-1) + \alpha_3 f(x-1, y) + \alpha_4 f(x-1, y+1)$

$\alpha_1 = \rho_h, \quad \alpha_2 = -\rho_v \rho_h, \quad \alpha_3 = \rho_v, \quad \alpha_4 = 0$

– $\rho_h$ and $\rho_v$: horizontal and vertical correlation coefficients
– Dynamic range constraint: $\sum_{i=1}^{m}\alpha_i \le 1$
• Results of Prediction-Error Coding:

$\hat{f}(x,y) = 0.97 f(x, y-1)$
$\hat{f}(x,y) = 0.5 f(x, y-1) + 0.5 f(x-1, y)$
$\hat{f}(x,y) = 0.75 f(x, y-1) + 0.75 f(x-1, y) - 0.5 f(x-1, y-1)$
$\hat{f}(x,y) = \begin{cases} 0.97 f(x, y-1) & \Delta h \le \Delta v \\ 0.97 f(x-1, y) & \text{otherwise} \end{cases}$
with $\Delta h = |f(x-1,y) - f(x-1,y-1)|$ and $\Delta v = |f(x,y-1) - f(x-1,y-1)|$
• Example: prediction errors for the four predictors
– rms = 11.1, rms = 9.8, rms = 9.1, rms = 9.7
• Optimal Quantizer:
– $t = q(s)$, with $q(s) = -q(-s)$
– Map $s$ in $(s_i, s_{i+1}]$ to $t_i$
– What are the best $s_i$ and $t_i$?
• Lloyd–Max Quantizer
– For an even p(s), $E\{(s - t_i)^2\}$ is minimized under these conditions:

$\int_{s_{i-1}}^{s_i} (s - t_i)\, p(s)\, ds = 0, \qquad i = 1, 2, \dots, \frac{L}{2}$

$s_i = \begin{cases} 0 & i = 0 \\ \dfrac{t_i + t_{i+1}}{2} & i = 1, 2, \dots, \frac{L}{2} - 1 \\ \infty & i = \frac{L}{2} \end{cases}$

$s_{-i} = -s_i, \qquad t_{-i} = -t_i$
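One way to compute the sᵢ and tᵢ numerically is to alternate the two conditions over a large sample set (a Lloyd iteration); the sketch below uses synthetic unit-variance Laplacian samples, matching the slide's model:

```python
import math
import random

random.seed(1)
# Unit-variance Laplacian samples (a common model for prediction errors)
samples = [random.expovariate(math.sqrt(2)) * random.choice((-1, 1))
           for _ in range(20000)]

def lloyd_max(samples, L, iters=25):
    """Alternate the two optimality conditions: each level t_i becomes the
    centroid of its decision interval; each boundary s_i the midpoint of
    adjacent levels. A sample-based version of the Lloyd-Max design."""
    lo, hi = min(samples), max(samples)
    t = [lo + (hi - lo) * (i + 0.5) / L for i in range(L)]  # initial levels
    for _ in range(iters):
        s = [-math.inf] + [(a + b) / 2 for a, b in zip(t, t[1:])] + [math.inf]
        for i in range(L):
            cell = [x for x in samples if s[i] < x <= s[i + 1]]
            if cell:
                t[i] = sum(cell) / len(cell)
    return t

levels = lloyd_max(samples, 4)
print([round(v, 2) for v in levels])  # roughly +-0.42 and +-1.83 (Max's table)
```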
• Lloyd–Max Quantizer
– L = 2, 4, 8
– Laplacian pdf (σ = 1)
– $t_i - t_{i-1} = s_i - s_{i-1} = \theta$
• Wavelet Coding
• Wavelet Selection: Haar, Daubechies, Symlet, Cohen
• Wavelet Selection: – Truncating the transforms below 1.5:
• Decomposition Level Selection:
– More scales → more computational effort
– An experiment:
• Biorthogonal wavelet;
• Fixed global threshold (25);
• Truncate only the details
• JPEG 2000 (pp. 607–614)
– DC level shift
– Convert using the color video transform (RGB → YCbCr)
– Divide the whole image into tile components
– Use different filters for lossy or lossless compression
– Quantize the coefficients
– Arithmetic coding
• Example: – C=25, 52, 75, 105.
• Image Watermarking – Skipped
• MATLAB Commands
– huffmandeco, huffmandict, huffmanenco
– arithdeco, arithenco
– dpcmdeco, dpcmenco
– lloyds, quantiz
– wcompress (wavelet image compression)