Chapter 7 Image Compression

Digital Image Processing, Prof. Zhengkai Liu and Dr. Rong Zhang

Preview
• 7.1 Introduction
• 7.2 Fundamental concepts and theories
• 7.3 Basic coding methods
• 7.4 Predictive coding
• 7.5 Transform coding
• 7.6 Introduction to international standards

7.1 Introduction

7.1.1 Why code data?
• To reduce storage volume: one colour image of 760 × 580 pixels, with 3 channels of 8 bits each, occupies about 1.3 Mbytes.
• To reduce transmission time: video data at the same resolution and 25 frames per second amounts to about 33 Mbytes/second.

7.1.2 Classification
• Lossless compression: reversible
• Lossy compression: irreversible

7.1.3 Compression methods
• Entropy coding: Huffman coding, arithmetic coding
• Run-length coding: G3, G4
• LZW coding: arj (DOS), zip (Windows), compress (UNIX)
• Predictive coding: linear prediction, non-linear prediction
• Transform coding: DCT, KLT, Walsh transform, WT (wavelet)
• Others: vector quantization (VQ) and fractal coding (the second-generation coding methods)

7.2 Fundamental concepts and theories

7.2.1 Data redundancy

If $n_1$ denotes the original data volume and $n_2$ the compressed data volume, the data redundancy can be defined as
$$R_D = 1 - \frac{1}{C_R}$$
where $C_R$, commonly called the compression ratio, is
$$C_R = \frac{n_1}{n_2}$$
For example, $n_2 = n_1$ gives $C_R = 1$ and $R_D = 0$, while $n_2 \ll n_1$ gives $C_R \to \infty$ and $R_D \to 1$.

The compression ratio can also be expressed through the bit rate: an $M \times N$ image coded with $n_2$ bits in total has bit rate
$$L = \frac{n_2}{M \times N} \text{ bits/pixel}$$
so that, for an 8-bit original,
$$C_R = \frac{8 \text{ bits/pixel}}{L \text{ bits/pixel}}$$

There are three basic types of image redundancy: coding redundancy, interpixel redundancy, and psychovisual redundancy.

Coding redundancy. If $r_k$ represents the gray levels of an image and each $r_k$ occurs with probability $p_r(r_k)$, then
$$p_r(r_k) = \frac{n_k}{n}, \qquad k = 0, 1, \ldots, L-1$$
where $L$ is the number of gray levels, $n_k$ is the number of times the $k$-th gray level appears in the image, and $n$ is the total number of pixels in the image.

If the number of bits used to represent each value of $r_k$ is $l(r_k)$, then the average number of bits required to represent each pixel is
$$L_{avg} = \sum_{k=0}^{L-1} l(r_k)\, p_r(r_k)$$

For example:

r_k    p_r(r_k)    Code 1    l_1(r_k)    Code 2    l_2(r_k)
r_0    0.19        000       3           11        2
r_1    0.25        001       3           01        2
r_2    0.21        010       3           10        2
r_3    0.16        011       3           001       3
r_4    0.08        100       3           0001      4
r_5    0.06        101       3           00001     5
r_6    0.03        110       3           000001    6
r_7    0.02        111       3           000000    6

For Code 2,
$$L_{avg} = \sum_{k=0}^{7} l_2(r_k)\, p_r(r_k) = 2(0.19) + 2(0.25) + 2(0.21) + 3(0.16) + 4(0.08) + 5(0.06) + 6(0.03) + 6(0.02) = 2.7 \text{ bits}$$
so
$$C_R = \frac{3}{2.7} = 1.11, \qquad R_D = 1 - \frac{1}{1.11} = 0.099$$
that is, about 9.9% of the data carried by the fixed-length Code 1 is redundant.

Interpixel redundancy. Neighbouring pixels tend to be strongly correlated; this can be measured by the autocorrelation coefficients along an image line,
$$\gamma(\Delta n) = \frac{A(\Delta n)}{A(0)}$$
where
$$A(\Delta n) = \frac{1}{N - \Delta n} \sum_{y=0}^{N-1-\Delta n} f(x, y)\, f(x, y + \Delta n)$$
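These coefficients are easy to compute directly from the definition. Below is a minimal Python sketch; the function name `autocorrelation` and the smoothly varying test row are illustrative choices, not part of the course material:

```python
import numpy as np

def autocorrelation(line, dn):
    """gamma(dn) = A(dn) / A(0) for one image line, with
    A(dn) = (1 / (N - dn)) * sum_y line[y] * line[y + dn]."""
    line = np.asarray(line, dtype=np.float64)
    N = len(line)
    A = lambda d: np.dot(line[:N - d], line[d:]) / (N - d)
    return A(dn) / A(0)

# A smoothly varying row: neighbouring pixels are nearly identical,
# so gamma(1) is close to 1, and the correlation falls off as the
# pixel separation grows.
row = 128 + 64 * np.cos(np.linspace(0, 4 * np.pi, 256))
print(autocorrelation(row, 1))    # close to 1: strong interpixel redundancy
print(autocorrelation(row, 64))   # clearly smaller at half-period separation
```

On natural images $\gamma(1)$ is typically close to 1; this is exactly the interpixel redundancy that the decorrelation stage of a source encoder, and predictive coding in particular, sets out to remove.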
Psychovisual redundancy. Psychovisual redundancy is associated with real or quantifiable visual information, but that information is not essential for normal visual processing, so it can be reduced without seriously impairing the perceived image quality.

[Figure: the same image quantized at 8 bits/pixel, 4 bits/pixel and 2 bits/pixel]

7.2.2 Fidelity criteria: objective

Let $f(x, y)$ represent an input image and $\hat{f}(x, y)$ denote the decompressed image. For any values of $x$ and $y$, the error $e(x, y)$ between $f(x, y)$ and $\hat{f}(x, y)$ can be defined as
$$e(x, y) = \hat{f}(x, y) - f(x, y)$$
so that the total error between the two images is
$$\sum_{x=0}^{M-1} \sum_{y=0}^{N-1} [\hat{f}(x, y) - f(x, y)]$$

The root-mean-square error $e_{rms}$ between $f(x, y)$ and $\hat{f}(x, y)$ is
$$e_{rms} = \left\{ \frac{1}{MN} \sum_{x=0}^{M-1} \sum_{y=0}^{N-1} [\hat{f}(x, y) - f(x, y)]^2 \right\}^{1/2}$$

The mean-square signal-to-noise ratio is defined as
$$SNR_{ms} = 10 \lg \frac{\sum_{x=0}^{M-1} \sum_{y=0}^{N-1} \hat{f}(x, y)^2}{\sum_{x=0}^{M-1} \sum_{y=0}^{N-1} [\hat{f}(x, y) - f(x, y)]^2}$$

The signal-to-noise ratio is defined as
$$SNR = 10 \lg \frac{\sum_{x=0}^{M-1} \sum_{y=0}^{N-1} [f(x, y) - \bar{f}]^2}{\sum_{x=0}^{M-1} \sum_{y=0}^{N-1} [\hat{f}(x, y) - f(x, y)]^2}$$
where $\bar{f}$ is the mean of $f(x, y)$.

The peak signal-to-noise ratio is defined as
$$PSNR = 10 \lg \frac{f_{max}^2}{e_{rms}^2}$$
where $f_{max}$ is the peak signal value (255 for an 8-bit image). (These criteria are implemented in the code sketch at the end of this section.)

7.2.2 Fidelity criteria: subjective

Side-by-side comparisons of the original and decompressed images, rated by human observers.

7.2.3 Image compression models

Source encoder model:
f(x, y) → decorrelation → quantizer → symbol encoder → compressed data
The decorrelation stage removes interpixel redundancy, the quantizer removes psychovisual redundancy, and the symbol encoder removes coding redundancy.

7.2.4 Elements of information theory

Terms. In 1948 Claude Shannon published "A Mathematical Theory of Communication". A message is the data; information is the content the data conveys. An information source emits symbols and may be a memoryless source or a Markov source.

Self-information. A random event $E$ that occurs with probability $P(E)$ is said to contain $I(E)$ units of information, where
$$I(E) = \log \frac{1}{P(E)} = -\log P(E)$$
For example, $P(E) = 1$ gives $I(E) = 0$, while $I(E) \to \infty$ as $P(E) \to 0$.
Units: base 2, bits (binary digits); base e, nats (natural units); base 10, Hartleys.

Entropy of the source: definition. Suppose $X = \{x_0, x_1, \ldots, x_{N-1}\}$ is a discrete random variable and the probability of $x_j$ is $P(x_j)$, namely
$$\begin{bmatrix} X \\ P(X) \end{bmatrix} = \begin{bmatrix} x_0 & x_1 & \cdots & x_{N-1} \\ p(x_0) & p(x_1) & \cdots & p(x_{N-1}) \end{bmatrix}$$
Then
$$H(X) = \sum_{j=0}^{N-1} p(x_j) \log \frac{1}{p(x_j)} = -\sum_{j=0}^{N-1} p(x_j) \log p(x_j)$$
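Both the fidelity criteria of 7.2.2 and the entropy just defined reduce to a few lines of NumPy. The sketch below is illustrative only; the function names, the default 8-bit peak of 255, and the use of base-2 logarithms (entropy in bits) are my assumptions:

```python
import numpy as np

def fidelity(f, f_hat, f_max=255.0):
    """Objective fidelity criteria of 7.2.2: root-mean-square error,
    mean-square SNR (dB) and peak SNR (dB) for two same-shape images."""
    f = np.asarray(f, dtype=np.float64)
    f_hat = np.asarray(f_hat, dtype=np.float64)
    mse = np.mean((f_hat - f) ** 2)   # assumes f_hat differs from f somewhere
    e_rms = np.sqrt(mse)
    snr_ms = 10 * np.log10(np.sum(f_hat ** 2) / np.sum((f_hat - f) ** 2))
    psnr = 10 * np.log10(f_max ** 2 / mse)
    return e_rms, snr_ms, psnr

def entropy(p):
    """H(X) = -sum_j p_j log2 p_j for a probability vector p."""
    p = np.asarray(p, dtype=np.float64)
    p = p[p > 0]                      # terms with p = 0 contribute nothing
    return -np.sum(p * np.log2(p))

# A uniform 8-symbol source attains the upper bound of property (3)
# below: H(X) = log2(8) = 3 bits/symbol.
print(entropy(np.full(8, 1 / 8)))
```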
Entropy of the source: properties.
(1) $H(X) \ge 0$
(2) $H(1, 0, \ldots, 0) = H(0, 1, \ldots, 0) = \cdots = H(0, 0, \ldots, 1) = 0$
(3) $H(X) = H(x_0, x_1, \ldots, x_{N-1}) \le \log N$, with equality when $P(x_j) = 1/N$ for all $j$.

Example: for $X = \{1, 2, 3, 4, 5, 6, 7, 8\}$ with $p_j = 1/8$ for each $j$,
$$H(X) = \sum_{j=1}^{8} \frac{1}{8} \log_2 8 = 3 \text{ bits/symbol}$$

Conditional entropy: definitions. Summing over all symbol combinations,
$$H_0(X) = \sum_i p(x_i) \log \frac{1}{p(x_i)}$$
$$H_1(X) = \sum p(x_i, x_{i-1}) \log \frac{1}{p(x_i \mid x_{i-1})}$$
$$H_2(X) = \sum p(x_i, x_{i-1}, x_{i-2}) \log \frac{1}{p(x_i \mid x_{i-1}, x_{i-2})}$$
$$\vdots$$
$$H_m(X) = \sum p(x_i, x_{i-1}, \ldots, x_{i-m}) \log \frac{1}{p(x_i \mid x_{i-1}, \ldots, x_{i-m})}$$

Property: for an $m$-th order Markov source,
$$H_\infty = \cdots = H_m < H_{m-1} < \cdots < H_2 < H_1 < H_0$$
so conditioning on more context never increases the entropy.

Using information theory. Consider a simple 8-bit image:

21 21 21 95 169 243 243 243
21 21 21 95 169 243 243 243
21 21 21 95 169 243 243 243
21 21 21 95 169 243 243 243

Probability of the source symbols:

Gray level    Count    Probability
21            12       3/8
95            4        1/8
169           4        1/8
243           12       3/8

Zero-order entropy:
$$H_0(X) = \sum_i p(x_i) \log_2 \frac{1}{p(x_i)} = 1.81 \text{ bits/pixel}$$

Relative frequency of pairs of pixels (each pixel paired with its right neighbour, wrapping from the end of a row to its start):

Pair          Count    Probability
(21, 21)      8        2/8
(21, 95)      4        1/8
(95, 169)     4        1/8
(169, 243)    4        1/8
(243, 243)    8        2/8
(243, 21)     4        1/8

First-order entropy estimate, computed as half the entropy of the pair distribution:
$$H_1(X) = \frac{1}{2} \sum_{\text{pairs}} p \log_2 \frac{1}{p} = 1.25 \text{ bits/pixel}$$
The drop from 1.81 to 1.25 bits/pixel reflects the interpixel redundancy of this image. (Both values are reproduced in the first code sketch below.)

Noiseless coding theorem. Also called Shannon's first theorem, it defines the minimum average code word length per source symbol that can be achieved:
$$H(X) \le L_{avg}$$
For a memoryless (zero-memory) source,
$$H(X) = \sum_i p(x_i) \log \frac{1}{p(x_i)}$$
But image data are usually Markov sources; how should such sources be coded?
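The two entropy estimates of the worked example can be checked numerically. A minimal sketch; following the table above, pair counting wraps from the end of each row to its start, which is how the pair (243, 21) arises:

```python
import numpy as np
from collections import Counter

img = np.array([[21, 21, 21, 95, 169, 243, 243, 243]] * 4)

# Zero-order entropy: pixels treated as independent symbols.
_, counts = np.unique(img, return_counts=True)
p = counts / counts.sum()
H0 = -np.sum(p * np.log2(p))
print(round(H0, 2))   # 1.81 bits/pixel

# First-order estimate: entropy of horizontal pixel pairs, halved.
pairs = Counter()
for row in img:
    for a, b in zip(row, np.roll(row, -1)):
        pairs[(a, b)] += 1
q = np.array(list(pairs.values()), dtype=np.float64)
q /= q.sum()
H1 = -np.sum(q * np.log2(q)) / 2
print(round(H1, 2))   # 1.25 bits/pixel: interpixel redundancy lowers it
```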

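Finally, the noiseless coding theorem can be checked against the coding-redundancy example of 7.2.1: Code 2 there is exactly the kind of variable-length code that Huffman's algorithm produces. A minimal heap-based sketch, assuming the probabilities from that table; only code lengths are tracked, which is all $L_{avg}$ needs (tie-breaking may give individual lengths different from Code 2, but the average is the same):

```python
import heapq
import math

def huffman_lengths(probs):
    """Code lengths of a Huffman code for a symbol -> probability dict."""
    # Heap entries: (probability, tie-break id, leaves of the subtree).
    heap = [(p, i, [s]) for i, (s, p) in enumerate(probs.items())]
    heapq.heapify(heap)
    lengths = dict.fromkeys(probs, 0)
    tiebreak = len(heap)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:   # merging pushes every leaf one level deeper
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, tiebreak, s1 + s2))
        tiebreak += 1
    return lengths

# Probabilities from the coding-redundancy table in 7.2.1.
probs = {'r0': 0.19, 'r1': 0.25, 'r2': 0.21, 'r3': 0.16,
         'r4': 0.08, 'r5': 0.06, 'r6': 0.03, 'r7': 0.02}
lengths = huffman_lengths(probs)
L_avg = sum(probs[s] * lengths[s] for s in probs)
H = -sum(p * math.log2(p) for p in probs.values())
print(L_avg)      # 2.70 bits/symbol, matching the Code 2 example
print(H)          # ~2.65 bits/symbol: H(X) <= L_avg, as the theorem states
print(3 / L_avg)  # ~1.11: compression ratio over the 3-bit fixed-length code
```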