
AN ANALYSIS OF ALIASING AND IMAGE RESTORATION PERFORMANCE FOR DIGITAL IMAGING SYSTEMS

Thesis

Submitted to

The School of Engineering of the

UNIVERSITY OF DAYTON

In Partial Fulfillment of the Requirements for

The Degree of

Master of Science in

By

Iman Namroud

UNIVERSITY OF DAYTON

Dayton, Ohio

May, 2014

AN ANALYSIS OF ALIASING AND IMAGE RESTORATION PERFORMANCE FOR DIGITAL IMAGING SYSTEMS

Name: Namroud, Iman

APPROVED BY:

Russell C. Hardie, Ph.D.
Advisor, Committee Chairman
Professor, Department of Electrical and Computer Engineering

John S. Loomis, Ph.D.
Committee Member
Professor, Department of Electrical and Computer Engineering

Eric J. Balster, Ph.D. Committee Member Assistant Professor, Department of Electrical and Computer Engineering

John G. Weber, Ph.D.
Associate Dean
School of Engineering

Tony E. Saliba, Ph.D.
Dean, School of Engineering
& Wilke Distinguished Professor

© Copyright by

Iman Namroud

All rights reserved

2014

ABSTRACT

AN ANALYSIS OF ALIASING AND IMAGE RESTORATION PERFORMANCE FOR

DIGITAL IMAGING SYSTEMS

Name: Namroud, Iman
University of Dayton

Advisor: Dr. Russell C. Hardie

It is desirable to obtain high image quality when designing an imaging system. The design depends on many factors such as the optics, the detector pitch, and the cost. The effort to enhance one aspect of the image may reduce the chances of enhancing another, due to tradeoffs. No imaging system is capable of producing an ideal image, since the system itself introduces degradation in the image. When designing an imaging system, some tradeoffs favor aliasing, such as the desire for a wide field of view (FOV) and a high signal to noise ratio (SNR). The reason is that aliasing is less disturbing visually when compared against noise and blur. Some previous research attempted to define the combination of the optics and detector pitch that would result in the best image quality that can be achieved practically. However, those studies may not have considered the post processing that can be conducted inside the imaging system. In this work, we reinspect the optics of the imaging system by taking post image processing into account. Among the optics parameters, we are most concerned with the f-number. Varying the f-number controls the aperture and the focal length, which affect the number of passing photons, the width of the FOV, and the shutter speed. Optimizing the f-number impacts the amount of noise, blur, and aliasing contained in an image. To simulate the post processing, various restoration methods are used: the adaptive Wiener filter (AWF), the Wiener filter, Lanczos interpolation, and bicubic interpolation. We mainly focus on the AWF and its performance, since it is a super resolution (SR) algorithm designed to restore images that are sampled below the Nyquist rate. Although the AWF is an SR algorithm, it was built to expect multiple low resolution (LR) images as input, and it has never been used to restore an image from only one LR image.
So, we employ the AWF as a single frame SR algorithm for the first time, and compare its performance against the other three methods, in order to find the f-number that yields the best image quality available.

To my parents, my husband, and my two children.

You enrich my life.

ACKNOWLEDGMENTS

I would like to thank all of the professors who taught me during my studies at the

University of Dayton. It was a pleasure to be in each of their classes and to learn from each of them.

I want to thank Russell C. Hardie, Ph.D. for his inspiration, patience, and cooperation.

Working under his supervision was very challenging, yet very beneficial for me and my career. I was able to present this work because of his instructions and guidance.

Finally, I thank my parents, my husband, and my entire family for their ongoing support.

TABLE OF CONTENTS

ABSTRACT

DEDICATION

ACKNOWLEDGMENTS

LIST OF FIGURES

LIST OF TABLES

I. INTRODUCTION

II. OBSERVATION MODEL

2.1 The Point Spread Function
2.2 The Discrete Model
2.3 The Impact of the F-Number on the Degradation Process
2.4 The Ratio of λf/p

III. ADAPTIVE WIENER FILTER

3.1 The AWF Algorithm

IV. EXPERIMENTAL RESULTS

4.1 Degradation Results
4.2 Restoration Comparison
4.3 Desired and Restored Images Comparison

V. CONCLUSION

BIBLIOGRAPHY

LIST OF FIGURES

2.1 Observation model.

2.2 Uniform detector array.

2.3 Observation model.

2.4 A 3D plot of the impulse invariant PSF models for different f-numbers. (a) PSF for f/4; (b) PSF for f/8; (c) PSF for f/12; (d) PSF for f/16.

2.5 Undersampling vs. f-number.

3.1 The block diagram of the original AWF algorithm.

4.1 Ideal test images. (a) Motocross bikes; (b) Chirp; (c) Aerial; (d) River.

4.2 Degradation example. (a) Desired image; (b) Image degraded at f/1; (c) Image degraded at f/4.5; (d) Image degraded at f/8; (e) Image degraded at f/11.5; (f) Image degraded at f/15.

4.3 The degradation and restoration of motocross bikes at f/2. (a) Motocross bikes; (b) degraded; (c) restored using the bicubic filter; (d) restored using the Lanczos filter; (e) restored using the Wiener filter; (f) restored using the AWF.

4.4 The degradation and restoration of motocross bikes at f/6. (a) Motocross bikes; (b) degraded; (c) restored using the bicubic filter; (d) restored using the Lanczos filter; (e) restored using the Wiener filter; (f) restored using the AWF.

4.5 The degradation and restoration of motocross bikes at f/10. (a) Motocross bikes; (b) degraded; (c) restored using the bicubic filter; (d) restored using the Lanczos filter; (e) restored using the Wiener filter; (f) restored using the AWF.

4.6 The degradation and restoration of the aerial image at f/1. (a) Aerial; (b) degraded; (c) restored using the bicubic filter; (d) restored using the Lanczos filter; (e) restored using the Wiener filter; (f) restored using the AWF.

4.7 The degradation and restoration of the aerial image at f/8. (a) Aerial; (b) degraded; (c) restored using the bicubic filter; (d) restored using the Lanczos filter; (e) restored using the Wiener filter; (f) restored using the AWF.

4.8 The degradation and restoration of the aerial image at f/15. (a) Aerial; (b) degraded; (c) restored using the bicubic filter; (d) restored using the Lanczos filter; (e) restored using the Wiener filter; (f) restored using the AWF.

4.9 The restored images of the motocross bikes image at f/4. (a) restored using the bicubic filter; (b) restored using the Lanczos filter; (c) restored using the Wiener filter; (d) restored using the AWF.

4.10 The degradation and restoration of the chirp image at f/4. (a) The chirp; (b) degraded; (c) restored using the bicubic filter; (d) restored using the Lanczos filter; (e) restored using the Wiener filter; (f) restored using the AWF.

4.11 Error between the desired and the restored images of the aerial image. The four curves are for the filters: bicubic (green), Lanczos (magenta), Wiener (red), and AWF (blue). (a) MAE; (b) MSE; (c) SSIM.

4.12 Error between the desired and the restored images of the river image. The four curves are for the filters: bicubic (green), Lanczos (magenta), Wiener (red), and AWF (blue). (a) MAE; (b) MSE; (c) SSIM.

4.13 The mean square error (MSE) between the desired and the restored images of motocross bikes. The four curves are for the filters: bicubic (green), Lanczos (magenta), Wiener (red), and AWF (blue).

4.14 The mean square error (MSE) between the desired and the restored images of the chirp. The four curves are for the filters: bicubic (green), Lanczos (magenta), Wiener (red), and AWF (blue).

4.15 MSE between the desired and the restored images of the river image. (a) Using scaled noise; (b) Using constant noise.

LIST OF TABLES

4.1 The values of Q that introduce the minimum MSE for all test images using the bicubic interpolation, Lanczos, Wiener, and AWF methods.

CHAPTER I

INTRODUCTION

In imaging systems design, it is preferable to achieve a wide field of view (FOV), a faster shutter speed, more depth of field, and high image resolution. Although it depends on the application, this is true in most cases. The optics of the camera and its detector pitch control such features. The attempt to design the optics to improve one of these qualities might reduce the chances to boost another. For example, to get better resolution, the number of detectors on the focal plane array (FPA) needs to be increased and the size of the detectors needs to be decreased, but small detectors would not be able to collect enough light photons. The attempt to optimize all of the imager properties is physically limited and expensive.

All images acquired from any imaging system are not ideal. The images are always susceptible to blur, noise, and undersampling caused by the camera. In order to overcome the distortion, the image can be processed later to restore the information that was lost during the process of image acquisition. There are many restoration methods that are used to process images, such as linear filters. One of the most commonly used linear filters is the Wiener filter. The Wiener filter attempts to find an estimate of a desired image by minimizing the mean square error between them. The Wiener filter finds the estimated image by filtering an observed noisy image. It is based on assumptions about the original image and the additive noise, and it integrates those in the restoration process [1].

It seems that the human eye can tolerate some aliasing more than blur and noise. Therefore, most imagers are designed to be somewhat undersampled. In the presence of aliasing, linear filters, including the Wiener filter, are no longer valid theoretically. Those filters are based on the assumption that the image is Nyquist sampled. Another discipline of image processing, super resolution (SR), is introduced to restore undersampled images. SR techniques restore images to a higher resolution than what the imaging system can provide. SR can be divided into two categories: multi-frame SR and single-frame SR.

Multiframe SR algorithms use many low resolution (LR) images as input to produce one high resolution (HR) image. The LR images would all be images of the same scene with sub pixel shifts. The sub pixel shift is a main concept for these algorithms because it allows every LR image pixel to offer more information to restore the corresponding pixel in the HR image. Most of the research done in the SR field is about multiframe SR, such as [2] and [3], which take in a set of LR images or a video sequence to produce a HR image or images. A comparison of several multiframe SR algorithms is presented in [4].

Unlike multiframe SR, single frame SR utilizes only one LR image as input to generate a HR image. A classical single frame SR algorithm is the example based method, where a database of example LR patches and their corresponding HR patches is saved, and then used when processing a LR image by comparing the patches of this image to the database [5]. This approach is still exploited by recent research such as [6] and [7]. This process seems to yield good results, but it needs a great deal of memory. The more examples the database includes, the better the results the method produces, and the more memory it requires. In addition, there might still be some patch examples that are not included in the database. Moreover, there are no guarantees that recovered details represent the true missing ones. Some other proposed methods, like [8], [5], and [9], exploit the similarities within the image itself in order to increase its resolution without the need of a database, which requires much less memory. [8], [5], and [9] exhibit some good results of their algorithms.

Most multiframe SR algorithms produce better results compared to single frame SR because they exploit more information to extract the HR image. On the other hand, single frame SR reflects a more common scenario than multiframe SR, since a single LR image is always available, while multiple images of the same scene with suitable motion are not.

A previously introduced multiframe SR algorithm, the adaptive Wiener filter (AWF), is employed here to process a single LR image. This algorithm was introduced in [10] and used to process multiple frames, and it has never been used for single frame SR. This work attempts to evaluate the performance of the AWF in the single frame case by comparing its outcomes against other restoration methods. Along with the AWF, other methods, such as the Wiener filter, Lanczos [11], and bicubic interpolation, are utilized to explore the impact of the f-number on image restoration.

In this thesis, we are most interested in one aspect, the f-number. The f-number can be defined as the ratio of the focal length to the aperture diameter. The aperture and the focal length are two fundamental concepts in an imaging system, and both parameters are related through the dimensionless f-number. A small f-number, for a given aperture, means a short focal length, and therefore a wide FOV. Also, a small f-number, for a given focal length, means a big aperture, more light, and a faster shutter speed. On the other hand, a high f-number gives more depth of field, and it is also less expensive. In addition, high f-numbers give a smaller cutoff frequency, which implies less undersampling at those high values. The choice of the right f-number when designing an imaging system can be a controversial issue. Most previous studies that attempted to optimize the imaging system properties to produce a decent image quality did not consider the post processing that can be done within the camera itself. In this work, we investigate whether the choice of the optimal f-number would be different from what previous studies suggest when post processing is taken into account.

The organization of the thesis is as follows. Chapter II demonstrates the observation model considered in the practical work of the study, and it also underlines the influence of the f-number. Chapter III gives a brief explanation of the AWF algorithm. Some of the results obtained in the study are illustrated in Chapter IV. Chapter V presents the conclusion of the thesis.

CHAPTER II

OBSERVATION MODEL

In this chapter we discuss the observation model that describes the forward process of forming an observed image g(m, n) from a desired image d(x, y), where x and y are the continuous spatial coordinates. The block diagram of the observation model is shown in Figure 2.1. This model is a single frame version of the observation model used in [10].

As can be seen in the block diagram, the desired image d(x, y) is first convolved with the point spread function (PSF) to produce the degraded image f(x, y). Then, the degraded image is sampled to obtain f(m, n). Next, noise is added to the sampled noise free image to produce the observed image g(m, n). So, the observed image can be expressed in terms of the sampled degraded image as

g(m, n) = f(m, n) + η(m, n). (2.1)

2.1 The Point Spread Function

The PSF is the impulse response of an optical system. It is a simulation of the degradation process that happens during image acquisition. There are many types of PSF that can be modeled, but the one used in this work is the same theoretical PSF explained in [12]. It was designed to include two components: spatial detector integration and optics diffraction.


Figure 2.1: Observation model.

The degradation effects caused by the detector integration are related to the shape and size of the detectors on the focal plane array (FPA). Those detectors integrate photons of light during the image acquisition, and then produce a numerical value that represents the pixels' values in a digital image. That is why their characteristics affect the quality of the image. For example, as the size of the detectors gets smaller, the pixel size gets smaller, which leads to a higher resolution. However, very small detectors will be unable to collect enough photons.

Diffraction is the phenomenon that occurs as light passes through holes or near objects. In imaging, diffraction manifests as a loss of sharpness in an image. For a large aperture (small f-number), diffraction can be negligible because the ratio of the non diffracted light to the diffracted light is high. This means that most of the light waves get away without being bent, since they are far enough from the edge, which results in sharper images. On the other hand, as the aperture decreases, more light gets diffracted. This causes more waves to travel different distances and take different times to reach the FPA. It will also cause them to interfere with each other, which will add up, subtract, or even cancel some waves.

That will produce less sharpness in the images. In addition, the interference will cause the appearance of a central spot and rings around it. The size of this spot, called the Airy disk, depends on the f-number and the light wavelength. Notice that, although the amount of diffraction depends on the f-number, it is always present.

Figure 2.2: Uniform detector array.

According to the block diagram in Figure 2.1, the desired image d(x, y) is convolved with the continuous system PSF h(x, y) to apply the degradation effects,

f(x, y) = d(x, y) ∗ h(x, y), (2.2)

where f(x, y) is the degraded noiseless version of d(x, y). In order to model the two degradation processes of diffraction and detector integration, the test images are convolved with a discrete equivalent of the PSF,

H(u, v) = FT{h(x, y)}, (2.3)

where H(u, v) is the continuous frequency response of the PSF.

H(u, v) = FT {h(x, y)}, (2.3) where H(u, v) is the continuous frequency response of the PSF. 7 As it was described in [12], the spatial detector integration can be simulated by per-

forming a with an impulse response obtained based on the detector shape.

If this PSF denoted as hdet(x, y), its effective continuous Fourier transform is denoted

as Hdet(u, v). Also, the optical transfer function (OTF) of the diffraction is denoted as

Hdif (u, v). The impulse response of the whole system would be

h(x, y) = hdif (x, y) ∗ hdet(x, y). (2.4)

So, the frequency response of the system (its OTF) will simply be the product of the two OTFs, and can be expressed as

H(u, v) = Hdif(u, v) Hdet(u, v). (2.5)

The system that is taken into account, as explained in [12], has a detector array of rectangular detectors. Each detector has dimensions of ∆x in the x direction and ∆y in the y direction. An illustration of those uniform detectors is shown in Figure 2.2. The shades in the figure indicate the detectors' active areas. Since rectangular detectors are considered, the detector PSF is given by

hdet(x, y) = (1/(ab)) rect(x/a, y/b) = { 1/(ab) if |x/a| < 1/2 and |y/b| < 1/2; 0 otherwise }, (2.6)

where a and b are the physical dimensions of the active area of a single detector, measured in millimeters (mm). As a result, the OTF of such a detector array can be defined as

Hdet(u, v) = sinc(au) sinc(bv) = [sin(πau)/(πau)] [sin(πbv)/(πbv)], (2.7)

where u and v are the frequencies in the horizontal and vertical directions, respectively, measured in cycles/mm.

The OTF of diffraction for optics with a circular exit pupil can be expressed as [13]

Hdif(u, v) = (2/π) [ cos⁻¹(ρ/ρc) − (ρ/ρc) √(1 − (ρ/ρc)²) ] for ρ < ρc, and Hdif(u, v) = 0 otherwise, (2.8)

where ρ = √(u² + v²) is the radial frequency of the OTF, and ρc is the radial cutoff frequency. ρc is given by

ρc = 1/(λ f/#), (2.9)

where λ is the wavelength of light in millimeters, and f/# is the f-number of the optics. The f-number (f stop) is defined as the ratio of the focal length to the aperture diameter.

Next, the blurred image is sampled spatially based on the detector pitch dimensions, ∆x and ∆y. The sampling frequencies in the x and y directions are 1/∆x and 1/∆y, respectively. In this work, the sampling is generally below the Nyquist rate. According to the Nyquist criterion, for the sampling to be at or above the Nyquist rate, the sampling frequencies should satisfy 1/∆x > 2ρc and 1/∆y > 2ρc to guarantee no aliasing. Since the sampling here does not necessarily meet this condition, aliasing is introduced. The last step is to corrupt the blurred undersampled image with additive noise. This noise is modeled as Gaussian noise.

2.2 The Discrete Model

Figure 2.1 presents a fine description of the physical degradation process caused by the optics which occurs during the acquisition of the image. However, in practice, it is required to have a discrete model to describe the forward process that relates the desired image to the observed one. Figure 2.3 illustrates the observation model of such a system. Note that the observation models of Figure 2.1 and Figure 2.3 are equivalent. The difference between the two models is that in the discrete model, the continuous d(x, y) is first sampled according to the Nyquist criterion, before blurring or adding noise, to obtain d(m, n).

Figure 2.3: Observation model.

The goal now is to use the observed image to get an estimate of the HR image d(m, n) that contains no aliasing, blur, or noise.

Let d(m, n) be of size ML1 by NL2, where L1 and L2 are positive integers. This image will be blurred using a convolution operation with the equivalent discrete system PSF h(m, n),

f̂(m, n) = d(m, n) ∗ h(m, n). (2.10)

Now, the blurred image will be subsampled to the same resolution as the observed image,

f(m, n) = f̂(mL1, nL2). (2.11)

This subsampling step is what introduces aliasing to the image. As a final step in the observation model, additive noise, denoted as η(m, n), will be added to this undersampled blurred image, yielding

g(m, n) = f(m, n) + η(m, n). (2.12)
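The three steps of the discrete observation model (Eqs. 2.10 through 2.12) can be sketched directly in Python. This is a minimal sketch, assuming the discrete PSF h is given with its origin at index (0, 0) and using circular (FFT-based) convolution; the thesis does not specify the boundary handling.

```python
import numpy as np

def degrade(d, h, L1, L2, sigma_n, rng=None):
    """Sketch of the discrete observation model:
    blur with the discrete PSF (Eq. 2.10), subsample by (L1, L2) (Eq. 2.11),
    and add zero-mean Gaussian noise (Eq. 2.12).

    d : desired HR image, shape (M*L1, N*L2)
    h : discrete PSF with its origin at index (0, 0); zero-padded to d.shape
    """
    rng = np.random.default_rng(0) if rng is None else rng

    # Eq. 2.10: f_hat = d * h, via the FFT (circular convolution assumed)
    H = np.fft.fft2(h, s=d.shape)
    f_hat = np.real(np.fft.ifft2(np.fft.fft2(d) * H))

    # Eq. 2.11: subsample to the detector resolution (introduces aliasing)
    f = f_hat[::L1, ::L2]

    # Eq. 2.12: add independent zero-mean Gaussian noise
    return f + rng.normal(0.0, sigma_n, f.shape)
```

With an impulse PSF and zero noise, `degrade` reduces to pure subsampling, which is a convenient sanity check on the implementation.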

2.3 The Impact of the F-Number on the Degradation Process

When inspecting the observation model, it can be seen that the image degradation is caused mainly by the PSF and the noise. Both of them are directly affected by the aperture, and therefore the f-number. The value of the f-number controls the amount and the direction of the light that falls on the FPA. How the light is received by the detectors of the FPA translates into more or less blur and distortion. In addition, the undersampling process affects the quality of the image, and causes a loss in the information contained in the image. The undersampling is another aspect that is influenced by the value of the f-number.

The impulse response of the imaging system, the PSF, is affected by the f-number. As mentioned before, the PSF represents the distortion, which includes the optics diffraction. Therefore, varying the value of the f-number has an impact on the size of the PSF: a higher f-number leads to a wider PSF, and a smaller f-number produces a narrower PSF. Examples of different PSFs are exhibited in Figure 2.4, which shows a 3D plot of the PSF evaluated for several f-numbers. The impact of the f-number on the size of the PSF follows from Equations 2.8 and 2.9.

Every device that processes a signal induces noise. Cameras are such devices, and the signal they process is light. When light hits the camera sensor, the light photons are converted into electric charge. The amount of the electric charge decides the value of the corresponding pixel. This electric charge consists of electrons, which are subject to the environmental conditions around them, such as temperature. Such ambient conditions cause an unwanted flow of free electrons near the sensor that is generated without incoming light. This will produce noise, as it will eventually add to the pixel values. Also, the structure of the sensor is not perfect, which causes noise too. Such noise occurs when capturing the image.

For the observation model used, independent noise is assumed. Additive white Gaussian noise is used to model such noise. Gaussian noise is additive and independent of the pixel intensity value, and it is independent from one pixel to another. This random noise adds values drawn from a zero mean Gaussian distribution to each pixel in the image.

Figure 2.4: A 3D plot of the impulse invariant PSF models for different f-numbers. (a) PSF for f/4; (b) PSF for f/8; (c) PSF for f/12; (d) PSF for f/16.

The noise is another element that is affected by the f-number. Big f-numbers mean less incoming light and darker images. Images that are taken in low light tend to have more noise in them. As the f-number gets higher, the aperture gets smaller, which scales down the signal (the image in this case) and makes it noisier. In practice, the image can be scaled according to the f-number, to model the scaling that happens as the image is being captured. Another option would be to scale the noise instead of the signal, which is an equivalent process and produces the same results. The scaling factor S follows from

S = Ai/A, (2.13)

where Ai is the aperture at the current f-number, and A is the aperture at the lowest f-number. The aperture can be obtained as

A = π (l/(2 f/#))², (2.14)

where l is the focal length. The scaling factor S is then used to scale the variance of the Gaussian noise at the lowest f-number, σn², to produce the variance of the Gaussian noise at the current f-number, σi², such that

σi² = σn²/S. (2.15)
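This scaling is straightforward to compute; a sketch follows (the function name and the example values are illustrative assumptions, and note that the focal length cancels in the aperture ratio):

```python
import numpy as np

def noise_variance(f_number, f_min, focal_length, var_n):
    """Scale the noise variance from the lowest f-number to the current one
    (Eqs. 2.13-2.15); var_n is assumed to be the variance at f_min."""
    aperture = lambda fn: np.pi * (focal_length / (2.0 * fn)) ** 2  # Eq. 2.14
    S = aperture(f_number) / aperture(f_min)                        # Eq. 2.13
    return var_n / S                                                # Eq. 2.15

# Doubling the f-number quarters the aperture area (S = 1/4), so the
# effective noise variance grows by a factor of 4 (~0.04 here).
print(noise_variance(2.0, 1.0, 50.0, 0.01))
```

Since S shrinks quadratically with the f-number, noise grows rapidly as the aperture is stopped down, which is the cost side of the aliasing-versus-noise tradeoff studied in this chapter.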

Another process that is influenced by the value of the f-number is undersampling. Contrary to the PSF and noise, where high f-numbers increase the image distortion, undersampling favors those high f-numbers. As previously mentioned, aliasing occurs when the sampling goes below the Nyquist sampling rate (undersampling). Let the undersampling factor be defined as the ratio of the Nyquist sampling rate to the sampling frequency set by the detector pitch dimensions ∆x or ∆y. This means that as the required Nyquist rate gets lower relative to the sampling frequency, the undersampling will be smaller, and the image will contain less aliasing. According to the Nyquist criterion stated in Section 2.1, for the required Nyquist rate to be lower, the cutoff frequency has to be smaller. It can be seen from Equation 2.9 that the cutoff frequency is inversely related to the f-number: bigger f-numbers give a smaller cutoff frequency, and therefore a lower required Nyquist rate, which leads to less undersampling and less aliasing.

For more insight, a plot was generated by varying the f-number while maintaining the same values for the detector pitch dimensions, to visually study how the undersampling changes. For this plot we assume square detectors, where ∆x = ∆y = 5.6 µm, and a wavelength λ = 0.55 µm. The resulting plot is shown in Figure 2.5.
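The curve in Figure 2.5 can be reproduced in essence with a few lines of NumPy; here the undersampling factor is taken as the required Nyquist rate 2ρc divided by the sampling frequency 1/p, using the pitch and wavelength stated above (the exact axis scaling of the thesis figure may differ):

```python
import numpy as np

p = 5.6e-3     # detector pitch (mm), i.e., 5.6 um
lam = 0.55e-3  # wavelength (mm), i.e., 0.55 um

f_numbers = np.arange(1.0, 21.0)
rho_c = 1.0 / (lam * f_numbers)   # radial cutoff frequency (Eq. 2.9)
undersampling = 2.0 * rho_c * p   # Nyquist rate / sampling frequency (1/p)

# Values above 1 indicate aliasing. The system is heavily undersampled at
# low f-numbers (about 20x at f/1, about 5x at f/4) and only approaches
# Nyquist sampling near f/# = 2p/lam (about 20.4 with these values).
```

This makes the tradeoff concrete: only very high f-numbers remove aliasing, but Sections 2.3 showed that those same f-numbers widen the PSF and raise the noise.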

Since the f-number has an impact on the PSF, the noise, and the undersampling process, it has an impact on the image quality. Using different f-numbers will result in different amounts of image distortion. Eventually, this will influence the restoration process and its efficiency.

2.4 The Ratio of λf/p

According to [14], the ratio λf/p can be used to characterize the image quality of an imaging system. The image quality would be controlled by modifying the parameters of this ratio. Following [14], if the focal length is denoted as l and the aperture as A, then this ratio can be defined as the ratio of the sampling frequency to the optical bandpass limit (the cutoff frequency) of the system,

(1/p) / (1/[λ(l/A)]) = λf/p, (2.16)

14 Figure 2.5: Undersampling Vs. f-number.

where f is the f-number, λ is the wavelength, and p is the detector sampling pitch, with p = ∆x in the x direction and p = ∆y in the y direction. We refer to this ratio here as Q for convenience.

A study was conducted in [14] to find the ratio that yields the best image quality obtainable from an imaging system. As a consequence, [14] looked into the impact of this ratio on many aspects that may affect the image quality, and how changing each parameter may influence the choice of the best value for this ratio. The Nyquist criterion is met at Q = λf/p = 2, and λf/p < 2 will introduce aliasing to the image. Although decreasing λf/p introduces aliasing, it produces sharper edges. In addition to image sharpness, decreasing Q improves the signal to noise ratio (SNR) and the modulation transfer function (MTF). The MTF of the system is very close to zero near Nyquist, which makes image motion, system noise, and optical aberration affect the HR details. Reducing Q can increase the MTF response near Nyquist, which leads to sharper images, and that would be a big advantage. As the SNR increases, the noise and the image smear will be reduced, and the image quality is enhanced by reducing the integration time.

Each parameter in the ratio λf/p can change the performance of the imaging system and the image quality [14]. For Q = 2, the image will be Nyquist sampled, but looking at other factors may make it preferable to have Q < 2. [14] suggests that reducing the ratio Q = λf/p from 2 to 1 would improve the image quality. However, going below 1 might end up in very noticeable aliasing artifacts, which could be undesirable.

The work introduced in [14] did not take into account the post processing that can occur within the imaging system. For this thesis, we anticipate that considering the post processing inside the imaging system may lead to another value for the ratio, which might go below 1.

CHAPTER III

ADAPTIVE WIENER FILTER

The adaptive Wiener filter (AWF) is an SR method which was mainly designed to work on multiple LR frames of the same scene to produce one HR image. The AWF is exploited and tested within the current work for single frame SR. Within this work, there is only one LR image available as input for the AWF, and the goal is to produce a HR image. Since the AWF shows some flexibility, it was used here with no changes made to the original algorithm. This chapter provides an overview of the AWF algorithm presented originally in [10], and it is repeated here for the reader's convenience. For more information and a detailed explanation, the reader can refer to [10].

3.1 The AWF Algorithm

Figure 3.1 shows the block diagram of the original AWF algorithm. The first step is the frame registration. Here, all the frames are registered on a common grid, called the HR grid, and all the transformation parameters are saved. However, for this particular work, the registration is not a concern, because the available input is only one LR frame, whose coordinates are also the HR image coordinates; there are no transformations involved and no pixel alignment needed.


Figure 3.1: The block diagram of the original AWF algorithm.

In the second step, a moving observation window is moved across the LR image in steps of Dx and Dy in the horizontal and vertical directions, respectively. The window is of size Wx in the horizontal direction and Wy in the vertical direction, where 1 < Dx < Wx and 1 < Dy < Wy. Each observation window contains a number of LR pixels. In this work in particular, this number is constant at each position of the observation window, and it can be expressed as

K = WxWy / (LxLy), (3.1)

where Lx and Ly are the ratios between the sampling rate of the estimated image and the sampling rate in the observation model in the horizontal and vertical directions, respectively. All K LR pixels are stored in an observation vector gi, where i is the index of the observation window location. The LR pixels inside the observation window are used to find estimated pixel values d̂i of the desired HR pixels di. Those HR pixel estimates d̂i are combined later to form an estimate ẑ of the desired HR image z.

Each group of estimates d̂_i lies within a smaller subwindow inside the observation window, called the estimation window, of size Dx by Dy. A weighted sum is applied to the K LR pixels such that

d̂_i = W_i^T g_i,  (3.2)

where d̂_i is a DxDy × 1 vector and g_i is a K × 1 vector. To find the HR pixel estimates d̂_i, the matrix W_i of size K × DxDy, which contains a set of weights, needs to be calculated. The elements of W_i are found so as to minimize the mean square error between the observed image g and the desired image d, giving

W_i = R_i^{-1} P_i,  (3.3)

where R_i = E{g_i g_i^T} is the autocorrelation matrix of the observation vector, and P_i = E{g_i d_i^T} is the cross-correlation between the observation vector and the corresponding desired vector. Each column of the weight matrix is normalized so that all of its elements sum to 1.

Now, both Ri and Pi need to be calculated, so that the weights matrix can be evaluated.

To achieve that, let f_i be the observation vector without the noise. In other words,

g_i = f_i + n_i,  (3.4)

where n_i represents the noise within the i-th observation window. Assuming that the noise is independent with zero mean and variance σ_n², then

R_i = E{f_i f_i^T} + σ_n² I,  (3.5)

and

P_i = E{f_i d_i^T}.  (3.6)

If the desired image has a wide sense stationary autocorrelation function r_dd(x, y), the autocorrelation and cross-correlation functions of the noise-free observation vector can be expressed as

r_ff(x, y) = r_dd(x, y) ∗ h(x, y) ∗ h(−x, −y),  (3.7)

and

r_df(x, y) = r_dd(x, y) ∗ h(x, y),  (3.8)

respectively, where h(x, y) is the PSF used to degrade the desired image d(x, y) into the noiseless observed image f(x, y). When using the AWF algorithm, the autocorrelation of the desired image can either be obtained empirically from training images or set as a parametric model. Most of the results included in [10] were obtained using the circularly symmetric parametric autocorrelation model

r_dd(x, y) = σ_d² ρ^√(x² + y²),  (3.9)

where σ_d² is the variance of the desired image, and ρ is a tuning parameter.
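For illustration, Equations 3.7 through 3.9 can be evaluated on a small discrete grid. In the sketch below (Python with numpy), a made-up 3×3 smoothing kernel stands in for the PSF h(x, y), and the convolutions are done directly:

```python
import numpy as np

def conv2_same(a, k):
    """Direct 2-D 'same' convolution (small kernels, zero padding)."""
    kh, kw = k.shape
    ap = np.pad(a, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.zeros_like(a)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = np.sum(ap[i:i + kh, j:j + kw] * k[::-1, ::-1])
    return out

sigma_d2, rho = 1.0, 0.75   # hypothetical variance and tuning parameter

# r_dd(x, y) = sigma_d^2 * rho^sqrt(x^2 + y^2)  (Eq. 3.9), on a 7x7 grid
n = 7
x = np.arange(n) - n // 2
X, Y = np.meshgrid(x, x)
r_dd = sigma_d2 * rho ** np.sqrt(X**2 + Y**2)

# A made-up 3x3 kernel standing in for h(x, y), normalized to sum to 1
h = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float)
h /= h.sum()

r_df = conv2_same(r_dd, h)                  # Eq. 3.8
r_ff = conv2_same(r_df, h[::-1, ::-1])      # Eq. 3.7; h(-x,-y) is h flipped
```

The resulting r_ff and r_df grids are what populate the entries of R_i and P_i at the appropriate spatial offsets.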

Once σ_d² and ρ are acquired, the autocorrelation matrix R_i and the cross-correlation matrix P_i can be found. Following that, the weight matrix can be calculated, which yields the estimates of the HR pixels. The whole process of finding the weights and the estimated HR pixels is repeated for every observation window.

The autocorrelation model r_dd(x, y) assumed above is wide sense stationary, which does not hold for many images. To overcome this obstacle, the autocorrelation model should change with the position of the observation window, so that it is affected by the local intensity statistics of the samples. The model in Equation 3.10 achieves that:

r_dd(x, y) = σ_di² ρ^√(x² + y²),  (3.10)

where σ_di² represents the variance of the local area in the desired image that corresponds to the i-th observation vector.

Using σ_di² instead of σ_d² makes the variance change with the position of the observation window. This implies that a σ_di² is measured for each observation window. The variances of the desired and degraded images are related as

σ_di² = (1 / c(ρ)) σ_fi²,  (3.11)

where

c(ρ) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} ρ^√(x² + y²) h̃(x, y) dx dy,  (3.12)

and

h̃(x, y) = h(x, y) ∗ h(−x, −y).  (3.13)
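The integral in Equation 3.12 can be approximated by a discrete sum over a sampled h̃(x, y). The sketch below (Python with numpy) uses a made-up 3×3 kernel for h̃ and unit sample spacing:

```python
import numpy as np

rho = 0.75   # hypothetical tuning parameter

# A made-up separable kernel that sums to 1, standing in for the sampled
# PSF autocorrelation h~(x, y) of Eq. 3.13.
k1 = np.array([1.0, 2.0, 1.0]) / 4.0
h_tilde = np.outer(k1, k1)      # 3x3, sums to 1

x = np.arange(3) - 1
X, Y = np.meshgrid(x, x)

# Riemann-sum approximation of Eq. 3.12 with unit sample spacing
c_rho = np.sum(rho ** np.sqrt(X**2 + Y**2) * h_tilde)
```

Given a measured σ_fi², Equation 3.11 then gives σ_di² = σ_fi² / c(ρ).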

Throughout this work, the noise is assumed to be independent, so the variance of the noiseless observation vector can be obtained from the observation vector using

σ_fi² = σ_gi² − σ_n².  (3.14)

Now σ_fi² can be used to find σ_di². To reduce the computational complexity, σ_di² can be quantized to V levels, so that only V weight matrices need to be computed for every unique spatial pattern.
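These two steps, the noise-variance subtraction of Equation 3.14 and the quantization of σ_di² to V levels, can be sketched as follows (Python with numpy; the noise variance, c(ρ), and the sample variances are made-up values):

```python
import numpy as np

sigma_n2 = 0.01        # hypothetical known noise variance
c_rho = 0.79           # hypothetical c(rho) from Eq. 3.12
V = 8                  # number of quantization levels for sigma_di^2

# Sample variances of the observation vectors g_i (one per window, made up)
rng = np.random.default_rng(1)
sigma_g2 = rng.uniform(0.02, 1.0, size=200)

# Eq. 3.14: remove the noise contribution (clip at 0 for robustness)
sigma_f2 = np.maximum(sigma_g2 - sigma_n2, 0.0)

# Eq. 3.11: local desired-image variance
sigma_d2 = sigma_f2 / c_rho

# Quantize sigma_d2 to V levels so only V weight matrices are needed
edges = np.linspace(sigma_d2.min(), sigma_d2.max(), V + 1)
levels = 0.5 * (edges[:-1] + edges[1:])            # bin midpoints
idx = np.clip(np.digitize(sigma_d2, edges) - 1, 0, V - 1)
sigma_d2_q = levels[idx]
```

With only V distinct σ_di² values, the per-window weight computation reduces to a table lookup per spatial pattern.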

If the wide sense stationary autocorrelation model with σ_d² = 1 is used, then R_i and P_i can be found for any σ_di² as

R_i = E{g_i g_i^T} = σ_di² E{f_i f_i^T} + σ_n² I,  (3.15)

and

P_i = E{g_i d_i^T} = σ_di² E{f_i d_i^T}.  (3.16)

The weight matrix then becomes

W_i = ( E{f_i f_i^T} + (σ_n² / σ_di²) I )^{−1} E{f_i d_i^T}.  (3.17)

The loading term σ_n² / σ_di² is the noise-to-signal ratio (NSR). As σ_di² varies with the observation window position, the weights also vary with the NSR across the HR grid.

CHAPTER IV

EXPERIMENTAL RESULTS

In this chapter, some results of using the AWF as a single frame SR algorithm are explored. The AWF restored images are compared against other methods: bicubic interpolation, Lanczos, and the Wiener filter. Also, the impact of the f-number on the degradation and restoration processes is examined. The study was performed using different images; Figure 4.1 shows a sample of them. The four images in Figure 4.1 are the motocross bikes image from the Kodak database [15] in Figure 4.1a, the chirp image in Figure 4.1b, the aerial image in Figure 4.1c, and the river image in Figure 4.1d. All the results presented in this chapter are obtained using one of these four images. When collecting the data, all the optics variables were set to constant values, except the f-number. The wavelength of light is λ = 0.55 µm, the pitch dimensions are ∆x = ∆y = 2.6 µm, the active area is a = b = 2.0 µm, the undersampling factors are L1 = L2 = 6, and the f-number varies over a range from 1 to 16. Note that these same values were used to obtain all the figures included in this chapter.

Figure 4.1: Ideal test images. (a) Motocross bikes; (b) Chirp; (c) Aerial; (d) River.

4.1 Degradation Results

As explained in Chapter 2, the f-number has a direct impact on the degradation process.

The distortion in the image was modeled to include blur, noise, and undersampling. All three processes are influenced by the value of the f-number used. For a better illustration, Figure 4.2 exhibits the original image along with degraded versions acquired at different f-numbers. Figure 4.2a shows what is considered an ideal image, followed by the degraded images found at f-numbers 1, 4.5, 8, 11.5, and 15, shown in Figures 4.2b, 4.2c, 4.2d, 4.2e, and 4.2f, respectively.

When looking at Figure 4.2, it can be recognized visually how the images get noisier and blurrier as the f-number gets higher. This simulates what would happen in a real camera at large f-numbers. Since high f-numbers correspond to a smaller aperture and less light, the lack of light causes the image to contain more blur and more noise. On the other hand, the cutoff frequency ρc, which obeys Equation 2.9, gets smaller as the f-number gets higher. This leads to less undersampling at high f-numbers and more undersampling at low f-numbers, but to the human eye the noise and blur at high f-numbers appear more unacceptable and disturbing than the undersampling at low f-numbers. Note that all of these degraded images were computed with the same sampling factors and the same pitch dimensions; the only varying parameter was the f-number.
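The trend described above can be imitated with a toy simulation. The sketch below (Python with numpy) is not the thesis's degradation model from Chapter 2: a Gaussian blur whose width grows with the f-number stands in for the diffraction PSF, and the additive noise standard deviation is simply scaled with the f-number, mimicking the scaled-noise assumption:

```python
import numpy as np

def degrade(img, f_number, L=6, rng=None):
    """Toy degradation: f-number-dependent Gaussian blur, downsampling by L,
    and additive noise scaled with the f-number. A sketch only; the thesis
    uses a diffraction-based PSF and detector model."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Blur width grows with f-number (made-up proportionality constant)
    sigma = 0.3 * f_number
    r = int(3 * sigma) + 1
    x = np.arange(-r, r + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    # Separable Gaussian blur, applied along each axis
    blurred = np.apply_along_axis(lambda v: np.convolve(v, g, 'same'), 0, img)
    blurred = np.apply_along_axis(lambda v: np.convolve(v, g, 'same'), 1, blurred)
    # Undersample by L, then add noise whose strength grows with f-number
    lr = blurred[::L, ::L]
    noise_std = 0.002 * f_number
    return lr + rng.normal(0.0, noise_std, lr.shape)

img = np.ones((60, 60))            # flat toy scene
low = degrade(img, 1.0)            # mild blur and noise, heavy undersampling
high = degrade(img, 15.0)          # strong blur and noise
```

Even in this crude model, the f/15 output deviates far more from the ideal scene than the f/1 output does.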

4.2 Restoration Comparison

In this section, the restoration results are compared with respect to the f-number and with respect to the various restoration methods. As discussed in the previous section, choosing different f-numbers in the degradation process affects the amount of distortion present in the image. The more distortion contained in an image, the harder it gets to


Figure 4.2: Degradation example. (a) Desired image; (b) Image degraded at f/1; (c) Image degraded at f/4.5; (d) Image degraded at f/8; (e) Image degraded at f/11.5; (f) Image degraded at f/15.

retrieve the original image. In addition, the different methods used to restore the image produce different image quality compared to the original image.

For a start, one can look at Figure 4.3. In this figure, the original motocross bikes image is shown in the upper left corner, with its degraded version next to it in the same row. The restored motocross bikes images are in the second row, using bicubic interpolation on the left and Lanczos on the right. The third row has the image restored using the Wiener filter on the left and the AWF on the right. The degraded and restored images in Figure 4.3 were all obtained for f/2.

Figures 4.4 and 4.5 also show the ideal motocross bikes image followed by its degraded and restored images, in the same order as in Figure 4.3. However, the images in Figure 4.4 were obtained for f/6, and the images in Figure 4.5 were found at f/10. It can be noticed that the Wiener filter and the AWF produce much better images than the Lanczos and bicubic interpolation, regardless of the value of the f-number.

When examining the three figures, the influence of varying the f-number is somewhat noticeable on both the degraded and restored images. The difference between the three figures is not very obvious because the three chosen f-numbers, 2, 6, and 10, are not very far from each other. However, those values were chosen to illustrate the difference in performance between the different restoration algorithms.

In order to make the impact of the f-number more obvious, another set of f-numbers, 1, 8, and 15, is applied to the aerial image. Those values are far from each other, which exposes the difference in more extreme cases. Consider Figures 4.6, 4.7, and 4.8. These three figures illustrate the desired aerial image in Figures 4.6a, 4.7a, and 4.8a. Its degraded versions at f-numbers 1, 8, and 15 are shown in Figures

Figure 4.3: The degradation and restoration of motocross bikes at f/2. (a) Motocross bikes; (b) degraded; (c) restored using the bicubic filter; (d) restored using the Lanczos filter; (e) restored using the Wiener filter; (f) restored using the AWF.

Figure 4.4: The degradation and restoration of motocross bikes at f/6. (a) Motocross bikes; (b) degraded; (c) restored using the bicubic filter; (d) restored using the Lanczos filter; (e) restored using the Wiener filter; (f) restored using the AWF.

Figure 4.5: The degradation and restoration of motocross bikes at f/10. (a) Motocross bikes; (b) degraded; (c) restored using the bicubic filter; (d) restored using the Lanczos filter; (e) restored using the Wiener filter; (f) restored using the AWF.

4.6b, 4.7b, and 4.8b, respectively. The images restored using bicubic interpolation are in Figures 4.6c, 4.7c, and 4.8c. Next, the Lanczos restored images are shown in Figures 4.6d, 4.7d, and 4.8d. The Wiener filter results are shown in Figures 4.6e, 4.7e, and 4.8e, and the AWF restored images are shown in Figures 4.6f, 4.7f, and 4.8f.

Figures 4.6, 4.7, and 4.8 may not show the difference between the restoration methods very well, but they present the impact of varying the f-number on the restoration process, and show how changing the f-number can affect the quality of the retrieved image.

The best restored images can be obtained at or near the f-number that produces the minimum error. To take a closer look at the difference between the outcomes of the restoration methods, Figure 4.9 illustrates a patch from the original motocross bikes image (upper left corner), its corresponding degraded patch at f/4 (upper right corner), the bicubic interpolation of that patch (middle left), the corresponding patch from the image restored using Lanczos (middle right), the corresponding patch from the image restored using the Wiener filter (bottom left corner), and the corresponding patch from the image restored using the AWF (bottom right corner). This f-number lies near the f-numbers that introduce the minimum square errors for the Wiener filter and the AWF.

A very interesting case is the chirp. The original image is in fact an ideal image. Because of the nature of this image, the restoration algorithms suffer more aliasing with the chirp than with the other test images. Figure 4.10 shows the chirp along with the degraded and restored images at f/4.

4.3 Desired and Restored Images Comparison

A statistical comparison is included in this section to evaluate the performance of the

AWF against the other three restoration methods, and how they behave when varying the

Figure 4.6: The degradation and restoration of the aerial image at f/1. (a) Aerial; (b) degraded; (c) restored using the bicubic filter; (d) restored using the Lanczos filter; (e) restored using the Wiener filter; (f) restored using the AWF.

Figure 4.7: The degradation and restoration of the aerial image at f/8. (a) Aerial; (b) degraded; (c) restored using the bicubic filter; (d) restored using the Lanczos filter; (e) restored using the Wiener filter; (f) restored using the AWF.

Figure 4.8: The degradation and restoration of the aerial image at f/15. (a) Aerial; (b) degraded; (c) restored using the bicubic filter; (d) restored using the Lanczos filter; (e) restored using the Wiener filter; (f) restored using the AWF.

Figure 4.9: The restored images of the motocross bikes image at f/4. (a) restored using the bicubic filter; (b) restored using the Lanczos filter; (c) restored using the Wiener filter; (d) restored using the AWF.

Figure 4.10: The degradation and restoration of the chirp image at f/4. (a) The chirp; (b) degraded; (c) restored using the bicubic filter; (d) restored using the Lanczos filter; (e) restored using the Wiener filter; (f) restored using the AWF.

f-number. In order to examine the difference between the desired and restored images, three error criteria were used: the mean absolute error (MAE), the mean square error (MSE), and the structural similarity (SSIM) index.

The SSIM is a method proposed in [16] to measure the similarity between two images. It considers one of the images to be a perfect quality image and compares the other image to this perfect one. In the case where the two images are identical, the SSIM equals 1. A MATLAB function was also provided for use by [17]. We refer the reader to [16] and [17] for more information and resources about the SSIM method.

Figures 4.11 and 4.12 show the error curves for the aerial and river images, respectively. In each figure, there are three subfigures in one column: the MAE comparison is at the top, the MSE in the middle, and the SSIM at the bottom. Every subfigure has four curves; each represents the difference calculated between the desired image and the image restored by one of the four restoration methods. The blue curve is for the AWF, the red curve for the Wiener filter, the green curve for the bicubic interpolation, and the magenta curve for the Lanczos.

Looking at all the curves in general, it seems that all the restoration methods favor low f-numbers. This can be seen in all three error metrics. As discussed earlier, low f-numbers cause less blur and noise, and more undersampling. It seems that some undersampling is acceptable and causes less error than the blur and noise do. Keep in mind that the noise applied here is a scaled noise, which is affected by the f-number; this makes the three error criteria lean even more towards lower f-numbers.
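The three criteria can be computed as follows (Python with numpy). The SSIM here is the simplified single-window, global-statistics form of the index defined in [16], not the locally windowed MATLAB implementation provided by [17]:

```python
import numpy as np

def mae(a, b):
    """Mean absolute error."""
    return float(np.mean(np.abs(a - b)))

def mse(a, b):
    """Mean square error."""
    return float(np.mean((a - b) ** 2))

def ssim_global(a, b, L=1.0):
    """Global (single-window) SSIM over the whole image; a simplification
    of the locally windowed index of [16]. L is the dynamic range."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + C1) * (2 * cov + C2)) / \
           ((mu_a**2 + mu_b**2 + C1) * (var_a + var_b + C2))

# Toy reference image and a noisy version of it
rng = np.random.default_rng(2)
ref = rng.uniform(0, 1, (32, 32))
noisy = np.clip(ref + rng.normal(0, 0.05, ref.shape), 0, 1)
```

Sweeping these metrics over a range of f-numbers, one curve per restoration method, reproduces the kind of comparison plotted in Figures 4.11 and 4.12.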

Figure 4.11: Error between the desired and the restored images of the aerial image. The four curves are for the bicubic (green), Lanczos (magenta), Wiener (red), and AWF (blue) filters. (a) MAE; (b) MSE; (c) SSIM.

Figure 4.12: Error between the desired and the restored images of the river image. The four curves are for the bicubic (green), Lanczos (magenta), Wiener (red), and AWF (blue) filters. (a) MAE; (b) MSE; (c) SSIM.

According to the MAE, MSE, and SSIM plots, although the small f-numbers cause more undersampling, the blur and noise cause more error and a greater difference between the desired and restored images than the undersampling does. Based on these observations from Figures 4.11, 4.12, and 4.2, blur and noise cause more noticeable distortion and error, both visually and statistically, than undersampling does, which makes a low f-number the preferable choice when designing the optics of the camera.

The previous study presented in [14] suggested that the best choice for the optics when designing the camera is the combination of the wavelength of light λ, the f-number f/#, and the pitch dimensions ∆x and ∆y that makes Q = 1. At Q = 1 the image would be sampled at twice the Nyquist rate, but that study did not assume any processing of the image within the camera. Q is shown on the upper x axis in Figures 4.11 and 4.12. If processing is available inside the camera itself, then, by looking at those plots, Q does not have to be 1. The best Q value depends on the restoration method used for post processing. The average best Q for the AWF and the Wiener filter was around Q = 0.8. This is also confirmed in Figures 4.13 and 4.14, which show the MSE plots for the motocross bikes and the chirp. At Q = 0.8, reducing the value of Q becomes advantageous, since the post processing reduces the aliasing artifacts that reducing Q introduces. Other methods that may not handle aliasing properly, but are able to improve the SNR and reduce the blur in an image, may favor Q > 1 to have less aliasing while they process the blur and noise. The optimum values of Q that produce the minimum MSE for the four test images, for all the restoration methods used in this work, are presented in Table 4.1.
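The Q axis in these figures can be reproduced from the chapter's constants, assuming Q is the λ(f/#)/p ratio of [14] (an assumption, but one consistent with the data: λ = 0.55 µm and p = 2.6 µm give Q ≈ 0.95 at f/4.5, matching the AWF optimum in Table 4.1):

```python
# Q = lambda * (f/#) / p, following the lambda*FN/p metric of [14]
wavelength = 0.55e-6   # meters
pitch = 2.6e-6         # meters

def q_of_fnumber(f_number):
    """Map an f-number to the sampling parameter Q for this sensor."""
    return wavelength * f_number / pitch

# The f-number at which Q = 1 for these constants
f_for_q1 = pitch / wavelength
```

Over the f-number range 1 to 16 used in this chapter, Q spans roughly 0.21 to 3.4, which matches the span of the table's entries.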

One more point of interest is the performance of the AWF as a single frame SR algorithm. The AWF has been used as a multiframe SR method in many applications. It was interesting to investigate its performance without the information usually provided by the multiple

Image name   Motocross bikes   Chirp   Aerial   River
Bicubic           0.21          0.31    0.21     0.21
Lanczos           0.21          0.63    0.31     0.21
Wiener            0.63          0.74    0.63     0.63
AWF               0.95          0.95    0.95     0.84

Table 4.1: The values of Q that introduce the minimum MSE for all test images using the bicubic interpolation, Lanczos, Wiener, and AWF methods.

Figure 4.13: The mean square error (MSE) between the desired and the restored images of motocross bikes. The four curves are for the bicubic (green), Lanczos (magenta), Wiener (red), and AWF (blue) filters.

frames. Given only one input image, the AWF was able to function as a single frame SR algorithm. Its restored images were visually much better than those of the Lanczos and bicubic interpolation, and very close to those restored using the Wiener filter. Furthermore, the AWF error curves, indicated by blue lines in Figures 4.13, 4.14, 4.11, and 4.12, recorded the minimum error in every case shown.

Figure 4.14: The mean square error (MSE) between the desired and the restored images of the chirp. The four curves are for the bicubic (green), Lanczos (magenta), Wiener (red), and AWF (blue) filters.

The performance of the AWF and the other three algorithms can be evaluated by observing the MSE plots shown in Figures 4.13 and 4.14. The MSE plots give a good perception and a fair comparison to some extent, since both the Wiener filter and the AWF are based on minimizing the MSE between the desired and estimated images. The AWF restored images with quality superior to that of the images restored by the Lanczos and bicubic interpolation. In addition, the performance of the AWF was either equivalent or slightly better than that of the Wiener filter. For very low f-numbers, the AWF surpasses the Wiener filter, due to the fact that at those f-numbers the image is sampled below the Nyquist rate. The Wiener filter, unlike the AWF, was not originally designed to handle such undersampling. On the other hand, for high f-numbers, the performance of the Wiener filter and the AWF is almost equivalent, since the image is sampled at or above the Nyquist rate.

Figure 4.15: MSE between the desired and the restored images of the river image. (a) Using scaled noise; (b) Using constant noise.

One last comparison is shown in Figure 4.15, which illustrates the MSE plots obtained from restoring the river image using the scaled noise, along with the MSE plots for the same image using a constant noise. In the first case a constant integration time is assumed, while in the second the integration time can be varied to allow enough light to pass through to the detector array.

CHAPTER V

CONCLUSION

In this thesis, we employed different image restoration methods to optimize the f-number. By optimizing the f-number, the value of the λf/p ratio can be improved together with the image quality. The restoration methods, the AWF in particular, are used here to deal with aliasing, and this had an effect on the chosen value of λf/p.

The value of the ratio λf/p can be reduced to less than 1 when post processing is considered. The ratio can be as low as 0.8, according to the outcomes obtained using the Wiener filter and the AWF. Reducing the value of this ratio can result in sharper images with less noise and less smear, which is a good advantage. It is true that reducing the ratio below 1 produces more aliasing, but the post processing can compensate for that and improve the image quality.

The different restoration methods favor low f-numbers. The error curves had their minimum values for f-numbers ranging from f/3 to f/6. It seems that aliasing is more acceptable, both visually and statistically, when compared against blur and noise. Therefore, all the restoration methods used have better outcomes at low f-numbers, where the image has more aliasing and less blur and noise, than at high f-numbers.

In general, the performance of the AWF was better than that of the Wiener filter, Lanczos, and bicubic interpolation. Although the AWF was not designed to restore images using only one LR image, it was able to overcome that issue and produce the minimum error between the original and restored images compared to the other methods. The AWF has the advantage of being modeled to deal with aliasing. That is why it could handle the undersampling at low f-numbers and generate much lower error than the other three algorithms at those f-numbers. On the other hand, at high f-numbers, the error curves of both the Wiener

filter and the AWF were very close, since the image was oversampled.

For this thesis, we were mainly interested in the f-number and its impact on the restoration process and the final image quality. Another aspect that can be considered in later work is the detector pitch and its geometry: the choice of pitch size and shape would also affect the image quality and the imaging system cost. Furthermore, this work considered only one SR method. Other SR algorithms may yield different, and perhaps better, results. Some single frame SR methods could be considered; they would have the advantage of being trained to rely on only one input LR image.

BIBLIOGRAPHY

[1] R. C. Gonzalez and R. E. Woods, Digital Image Processing, 3rd ed. Prentice Hall, 2007.

[2] S. Farsiu, M. D. Robinson, M. Elad, and P. Milanfar, “Fast and robust multiframe super resolution,” Image Processing, IEEE Transactions on, vol. 13, no. 10, pp. 1327–1344, 2004.

[3] Y.-R. Li, D.-Q. Dai, and L. Shen, “Multiframe super-resolution reconstruction using sparse directional regularization,” Circuits and Systems for Video Technology, IEEE Transactions on, vol. 20, no. 7, pp. 945–956, 2010.

[4] K. Nelson, A. Bhatti, and S. Nahavandi, “Performance evaluation of multi-frame super-resolution algorithms.” in DICTA, 2012, pp. 1–8.

[5] C.-Y. Yang, J.-B. Huang, and M.-H. Yang, “Exploiting self-similarities for single frame super-resolution,” in Computer Vision–ACCV 2010. Springer, 2011, pp. 497–510.

[6] K. I. Kim and Y. Kwon, “Example-based learning for single-image super-resolution,” in Pattern Recognition. Springer, 2008, pp. 456–465.

[7] W. T. Freeman, T. R. Jones, and E. C. Pasztor, “Example-based super-resolution,” Computer Graphics and Applications, IEEE, vol. 22, no. 2, pp. 56–65, 2002.

[8] N. Suetake, M. Sakano, and E. Uchino, “Image super-resolution based on local self- similarity,” Optical review, vol. 15, no. 1, pp. 26–30, 2008.

[9] D. Glasner, S. Bagon, and M. Irani, “Super-resolution from a single image,” in Computer Vision, 2009 IEEE 12th International Conference on. IEEE, 2009, pp. 349–356.

[10] R. Hardie, “A fast image super-resolution algorithm using an adaptive wiener filter,” Image Processing, IEEE Transactions on, vol. 16, no. 12, pp. 2953–2964, 2007.

[11] Lanczos resampling. [Online]. Available: http://en.wikipedia.org/wiki/Lanczos_resampling

[12] R. C. Hardie, K. J. Barnard, J. G. Bognar, E. A. Watson, and E. E. Armstrong, “High-resolution image reconstruction from a sequence of rotated and translated frames and its application to an infrared imaging system,” Optical Engineering, vol. 37, no. 1, pp. 247–260, 1998.

[13] J. Goodman, Introduction to Fourier Optics, 2nd ed. McGraw-Hill, 1996.

[14] R. D. Fiete, “Image quality and λFN/p for remote sensing systems,” Optical Engineering, vol. 38, no. 7, pp. 1229–1240, 1999.

[15] Kodak lossless true image suite. [Online]. Available: http://r0k.us/graphics/kodak/

[16] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600–612, April 2004.

[17] Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli. The SSIM index for image quality assessment. [Online]. Available: https://ece.uwaterloo.ca/~z70wang/research/ssim/
