
University of Dayton eCommons
Electrical and Computer Engineering Faculty Publications
Department of Electrical and Computer Engineering

2003

Digital Image Processing

Russell C. Hardie, University of Dayton, [email protected]

Majeed M. Hayat, University of New Mexico - Main Campus

eCommons Citation
Hardie, Russell C. and Hayat, Majeed M., "Digital Image Processing" (2003). Electrical and Computer Engineering Faculty Publications. 88.
https://ecommons.udayton.edu/ece_fac_pub/88

This Encyclopedia Entry is brought to you for free and open access by the Department of Electrical and Computer Engineering at eCommons. It has been accepted for inclusion in Electrical and Computer Engineering Faculty Publications by an authorized administrator of eCommons. For more information, please contact [email protected], [email protected].

Digital Image Processing

Russell C. Hardie University of Dayton, Dayton, Ohio, U.S.A.

Majeed M. Hayat University of New Mexico, Albuquerque, New Mexico, U.S.A.

Encyclopedia of Optical Engineering, DOI: 10.1081/E-EOE-120009509. Copyright © 2003 by Marcel Dekker, Inc. All rights reserved.

INTRODUCTION

In recent years, digital images and digital image processing have become part of everyday life. This growth has been primarily fueled by advances in digital technology and the advent and growth of the Internet. Furthermore, commercially available digital cameras, scanners, and other equipment for acquiring, storing, and displaying digital imagery have become very inexpensive and increasingly powerful. An excellent treatment of digital images and digital image processing can be found in Ref. [1].

A digital image is simply a two-dimensional array of finite-precision numerical values called picture elements (or pixels). Thus a digital image is a spatially discrete (or discrete-space) signal. In grayscale visible images, for example, each pixel represents the intensity of a corresponding region in the scene. The grayscale values must be quantized into a finite precision format. Typical resolutions include 8 bit (256 gray levels), 12 bit (4096 gray levels), and 16 bit (65536 gray levels).

Visible color images are most frequently represented by tristimulus values. These are the quantities of red, green, and blue required, in the additive color system, to produce the desired color. Thus a so-called "RGB" color image can be thought of as a set of three "grayscale" images, the first representing the red component, the second the green, and the third the blue.

Digital images can also be nonvisible in nature. This means that the physical quantity represented by the pixel values is something other than visible light intensity or color. These include radar cross-sections of an object, temperature profile (infrared imaging), X-ray images, gravitation field, etc. In general, any two-dimensional array of information can be the basis for a digital image.

As in the case of any digital data, the advantage of this representation is in the ability to manipulate the pixel values using a digital computer or digital hardware. This offers great power and flexibility. Furthermore, digital images can be stored and transmitted far more reliably than their analog counterparts. Error protection coding of digital imagery, for example, allows for virtually error-free transmission.

OPTICS AND SYSTEMS

Considering that optical images are our main focus here, it is highly beneficial to review the basics of optical image acquisition. During acquisition, significant degradation can occur. Thus in order to design and properly apply various image processing algorithms, knowledge of the acquisition process may be essential.

The optical digital-image acquisition process is perhaps most simply broken up into three stages. The first stage is the formation of the continuous optical image in the focal plane of a lens. We rely on linear systems theory to model this. This step is characterized by the system point spread function (PSF), as in the case of an analog photographic camera. Next, this continuous optical image is typically sampled by a detector array, referred to as the focal-plane array (FPA). Finally, the values from each detector are quantized to form the final digital image.

In this section, we address the incoherent optical image formation through the system PSF. In the section "Resolution and Sampling," we address sampling and quantization. There are two main contributors to the system PSF, one of which is the spatial integration of the finite detector size. A typical FPA is illustrated in Fig. 1. This effect is spatially invariant for a uniform detector array. Spatial integration can be included in an overall system PSF by modeling it with a convolution mask, followed by ideal spatial sampling. More details will be supplied about this presently. Another contributor is diffraction due to the finite size of the aperture in the optics. Other factors such as lens aberrations[2,3] and atmospheric turbulence[4] can also be included in the image acquisition model.

Let us examine a uniform detector array and provide a mathematical model for the associated imaging system. We will closely follow the analysis given in Ref. [5]. The effect of the integration of light intensity over the span of the detectors can be modeled as a linear convolution operation with a PSF determined by the shape of a single detector. Let this detector PSF be denoted by d(x, y). Applying the Fourier transform to d(x, y) yields

the effective continuous frequency response resulting from the spatial integration of the detectors:

    D(u, v) = F{d(x, y)}   (1)

where F{·} represents the continuous Fourier transform. Next, define the incoherent optical transfer function (OTF) of the optics to be H0(u, v). The overall system OTF is given by the product of these, yielding

    H(u, v) = D(u, v) H0(u, v)   (2)

The overall continuous system PSF is then given by

    hc(x, y) = F^{-1}{H(u, v)}   (3)

where F^{-1}{·} represents the inverse Fourier transform. If we intend to correct for this blurring in our digital image with postprocessing, it is most convenient to have the equivalent discrete PSF (the impulse-invariant system). The impulse-invariant discrete system PSF,[6] denoted as hd(n1, n2), is obtained by sampling the continuous PSF such that

    hd(n1, n2) = hc(n1 T1, n2 T2)   (4)

where T1 and T2 are the horizontal and vertical detector spacings, respectively. This accurately represents the continuous blurring when the effective sampling frequency, 1/T1, exceeds two times the horizontal cutoff frequency of H(u, v) and 1/T2 exceeds the vertical cutoff frequency by twofold.[6]

Let us now specifically consider a system with uniform rectangular detectors, as shown in Fig. 1. The shaded areas represent the active region of each detector. The detector PSF in this case is given by

    d(x, y) = (1/(ab)) rect(x/a, y/b) = { 1/(ab)  for |x/a| < 1/2 and |y/b| < 1/2;  0  otherwise }   (5)

Fig. 1 Uniform detector array illustrating critical dimensions. (From Ref. [5].)

Let the active region dimensions, a and b, be measured in millimeters (mm). Thus the effective continuous frequency response resulting from the detector is

    D(u, v) = sinc(au, bv) = [sin(πau) sin(πbv)] / (π² a b u v)   (6)

where u and v are the horizontal and vertical frequencies measured in cycles/mm. The incoherent OTF of diffraction-limited optics with a circular exit pupil can be found[2] as

    H0(u, v) = (2/π) [ cos^{-1}(ρ/ρc) − (ρ/ρc) √(1 − (ρ/ρc)²) ]  for ρ < ρc,  and 0 otherwise   (7)

where ρ = √(u² + v²). The parameter ρc is the radial system cutoff frequency given by

    ρc = 1/(λ f/#)   (8)

where f/# is the f-number of the optics and λ is the wavelength of light considered. Because the cutoff of H0(u, v) is ρc, so is the cutoff of the overall system H(u, v).

Fig. 2 shows an example of D(u, v), H0(u, v), H(u, v), and hc(x, y) for a particular imaging system. The system considered happens to be a forward-looking infrared (FLIR) imager. The FLIR camera uses a 128 x 128 Amber AE-4128 infrared FPA. The FPA is composed of indium antimonide (InSb) detectors with a response in the 3-5 μm wavelength band. This system has square detectors of size a = b = 0.040 mm. The imager is equipped with 100 mm f/3 optics. The center wavelength, λ = 0.004 mm, is used in the OTF calculation. Fig. 2a shows the effective modulation transfer function (MTF) of the detectors, |D(u, v)|. The diffraction-limited OTF for the optics, H0(u, v), is shown in Fig. 2b. Note that the cutoff frequency is 83.3 cycles/mm. The overall system MTF, |H(u, v)|, is plotted in Fig. 2c. Finally, the normalized continuous system PSF, hc(x, y), is plotted in Fig. 2d. The continuous PSF is sampled at the detector spacing to yield the impulse-invariant discrete system impulse response.

Fig. 2 (a) Effective MTF of the detectors in the FLIR imager; (b) diffraction-limited OTF of the optics; (c) overall system MTF; (d) overall continuous system PSF. (From Ref. [5].)
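The impulse-invariant sampling of Eq. 4 can likewise be sketched in a few lines. Here a Gaussian stands in for the continuous PSF hc(x, y) purely for illustration (the article's hc comes from Eq. 3); the 0.040-mm spacing matches the FLIR example:

```python
import numpy as np

T1 = T2 = 0.040   # detector spacings in mm (FLIR example values)

def hc(x, y, sigma=0.05):
    """Stand-in continuous PSF (illustrative only, not the actual FLIR hc)."""
    return np.exp(-(x**2 + y**2) / (2 * sigma**2))

# Eq. 4: hd(n1, n2) = hc(n1*T1, n2*T2), sampled on a small index grid.
n1, n2 = np.meshgrid(np.arange(-4, 5), np.arange(-4, 5), indexing="ij")
hd = hc(n1 * T1, n2 * T2)
hd = hd / hd.sum()            # normalize to unit DC gain

print(hd.shape)   # (9, 9)
```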

RESOLUTION AND SAMPLING

Resolution is generally thought of as the ability of an imaging system to preserve contrast in small, closely spaced objects. Thus resolution is tied to the ability of a system to preserve information necessary to resolve point sources at various separations. In this sense, resolution is affected by the three stages of image acquisition described above. We must consider the optical cut-off frequency, which limits the resolution of the optical image in the focal plane. Next, we must consider the ability of the sampling and quantization to preserve this optical resolution in our digital image.

The cut-off frequency of the optics and detector size integration is a good means of quantifying the resolution of the optical image in the focal plane (prior to the spatial sampling and quantization). Another related means of quantifying optical resolution involves the Rayleigh distance.[1] The Rayleigh distance is related to the width of the PSF spot. For a diffraction-limited system with a circular aperture, the first zero of the PSF occurs at

    r0 = 1.22 λd/a   (9)

where λ is the optical wavelength, d is the focal length, and a is the aperture diameter. Thus the Rayleigh distance, r0, basically describes the width of the PSF spot (ignoring detector spatial integration). According to the Rayleigh criterion, two point sources can be "resolved" in the focal plane image if they are separated by r0 or greater.
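For concreteness, the Rayleigh distance of Eq. 9 can be evaluated for the FLIR example above (100 mm f/3 optics, λ = 0.004 mm), along with the two detector-spacing criteria discussed in the surrounding text. This is an illustrative sketch, not code from the article:

```python
# Rayleigh distance (Eq. 9) and detector-spacing criteria for the FLIR
# example.  Mapping f/3 to an aperture diameter of d/3 is the usual
# f-number definition (f/# = focal length / aperture diameter).
wavelength = 0.004           # mm
focal_length = 100.0         # mm (d in Eq. 9)
aperture = focal_length / 3  # mm, from the f/3 optics

# Eq. 9: first zero of the diffraction-limited PSF.
r0 = 1.22 * wavelength * focal_length / aperture

# Nyquist spacing: 1/(2 fc) with optical cutoff fc = a/(lambda*d).
nyquist_spacing = wavelength * focal_length / (2 * aperture)

# Rayleigh spacing: half the Rayleigh distance.
rayleigh_spacing = 0.5 * r0

print(r0)                                   # ~0.0146 mm
print(rayleigh_spacing / nyquist_spacing)   # 1.22, i.e., 22% coarser
```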

"resolved" in the focal plane image if they are separated IMAGE DEGRADATION AND NOISE by r0 or greater. Because of sampling and quantization, the diffraction­ In almost all imaging systems, the very proce of limited resolution does not tell the whole story for a recording or acquiring of an image is accompanied by one system. Another important aspect is the or more forms of degradation and noise. This section detector spacing which determines the spatial sampling deals with some of the prevalent sources of degradation rate. [?J The optical cut-off frequency and the Rayleigh and noise encountered in modern imaging systems. distance lead to two different detector spacing criteria for There are many sources of degradation in imaging "proper" sampling. The Nyquist criterion specifies that systems, and they can be categorized according to their the sampling frequency must exceed two times the optical spatial and temporal characteristics. Point degradations cut-off frequency. Note that the detector spacing in the often refer to types of degradation where the values of focal plane determines the sampling frequency. Thus for a individual pixels are affected without introducing any blur. typical camera system with a cut-off frequency of fc = In this case, the pixel values undergo a simple pointwise ai(Ad), the detector spacing for Nyquist sampling would transformation. A good example of point degradation is be (Ad)!(2a). Sampling at this rate means that the data the pixel saturation effect exhibited by visible and infrared are free from alias ing. Consequently, with proper inter­ cameras for which an excessive level of intensity will set polation, it is possible to recover the continuous image the detector output to a saturation level. Practical imaging from its samples without error. Thus it is fair to say that the systems also introduce some form of image blur, in which full optical resolution is preserved by Nyquist sampling. 
case, the gray level of a pixel is affected by the gray levels The Rayleigh sampling criterion is less strict than the of the neighboring pixels. These types of degradation are Nyquist criterion. According to the Rayleigh criterion, we commonly referred to as spatial degradations. A prime space the detectors at one half of the Rayleigh distance, example is the optical (or Rayleigh type) blur, which is 0.5r0 . By doing so, two resolvable adjacent spots in the particularly pronounced in small-aperture optical imaging focal plane image will remain resolvable in the sampled systems. Another related example of spatial degradation data. Thus for our typical camera system with a cut-off is the blur introduced by the finite size of the active area frequency of fc = ai(Ad), the detector spacing for Ray­ of each detector in the array. leigh sampling would be 0.6Ud!a. Note that this is 22% Other types of degradation involve chromatic or larger than the spacing for Nyquist sampling. As a result, temporal effects. In some cases in hyperspectral imaging, some ali asing will be present in the imagery, and it is not for example, the PSF of the system can be strongly possible to perfectly reconstruct the continuous image dependent on the optical wavelength of the incoming light, from its samples. Many camera-system designs utilize a which may cause image deformation and additional blur. sampling rate lower than the Nyquist. This is partly Another form of degradation occurs when imaging is attributed to the fact that little energy tends to exist at performed in the presence of atmospheric turbulence. In the hi gher aliased frequencies, and fewer detectors means this situation, the system PSF randomly fluctuates in time, less data (for the same field of view) and lower cost. according to the changes in the refractive index of the In infrared imaging systems, commerciall y available column of air between the object and the camera. 
This type FPA arrays tend to have fewer detectors than in visible of degradation may result in blur, , or other aberration systems. This is attributed to the cost and fabrication depending on the time of the camera and complexity. This, coupled with the desire for a wide field atmospheric conditions.f41 Yet another important form of of view (small focal length), means that is a degradation involves the geometric deformation of the significant problem in these systems. A number of image, which can be attributed, for example, to aberration algorithms have been developed which attempt to associated with the optical elements (e.g., lenses). compensate for the ali asing in all types of images by post The above sources of degradation are all operational in . h . [5 8-I5] processing a sequence of frames wit motion. · · nature. In other words, they can be thought of as some Related to these aliasing red uction algorithms are super- operator (or system) acting on the true image. This 6 9 reso ]uti · on tee h mques· . fi· - I ·lTh e term · " super-res ol u t·o"1 n, representation becomes particularly useful in image is generally used to refer to algorithms that use post­ restoration, where the goal is to perform counter processing to recover frequency information beyond that operations in an effort to "undo" degradation. However, of the cut-off of the optics. This requires creating a new such degradations are not the only cause of reduction of effective sampling grid denser than the Nyquist criterion image quality because recording or acquisition of an dictates. Clearly, this is a far more ambitious goal than image is always accompanied by measurement noise. simply trying to realize the fu ll resolution afforded by As in the case of image degradation, there are many the optics. Such techniques require a priori information sources of noise that can seriously degrade the qua­ about the scene and its dynamics in some form . lity of images. 
In principle, noise can be classified into two categories: signal-independent and signal-dependent. Thermal noise (also known as Johnson noise), which is inherent in all electronic measurements, is an example of signal-independent noise. This type of noise is added to each pixel value and can become a limiting factor in cases when the object illumination is weak. Thermal noise is often modeled by a Gaussian process whose variance is dependent on the temperature, the resistance, and the temporal bandwidth of the electronic circuitry.[20] In array sensors, however, there is additional noise because of the nonidentical responsivities of the individual detectors. In particular, any two detectors may respond slightly differently to the same intensity level. This results in a spatial pattern, which appears atop the image, causing visual degradation and reducing the gray-level accuracy. This type of noise is commonly referred to as fixed-pattern noise; it can be a limiting factor in high-precision applications, particularly in mid- to far-infrared imagers.[21-23] Fixed-pattern noise is a form of signal-dependent noise. For example, fluctuations in the gains of detectors are affected (in a multiplicative form) by the level of intensity. A number of techniques which attempt to correct for this nonuniformity[21,23-31] have been proposed.

The most fundamental source of noise in optical measurement, however, is quantum noise, which arises from the photon nature of light. Quantum noise represents the uncertainty in the number of photons collected in any time interval.[3,32] This random fluctuation in the photon number increases with the optical energy (or signal) per measurement time. Interestingly, the magnitude of quantum noise, relative to the mean photon number, is inversely related to the optical energy. Quantum noise therefore plays an important role in limiting the accuracy of optical imaging and vision in situations when the density of photons per measurement time is small, i.e., at low intensities of light.[33] These situations occur, for example, when imaging faint astronomical objects and in medical radiographic imaging.[34] Moreover, this type of signal-dependent noise is also important in high-intensity imaging applications whenever high levels of measurement accuracy are desired. For fully coherent light, the photon number distribution obeys a Poisson distribution. The Poisson distribution is also used as an approximation for partially coherent and thermal light as long as the measurement time and area (i.e., detector integration time and active area) are greater than the coherence time and area of the light, respectively.[35]

In any imaging application, the harm resulting from image degradation and measurement noise can be better overcome if the sources of degradation and noise are identified and properly modeled. Image restoration and model-based enhancement techniques often require a mathematical model for pertinent image-degrading effects.

OVERVIEW OF IMAGE PROCESSING TECHNIQUES

Digital image processing generally refers to the manipulation of the pixel values using a digital computer. The goals of such processing vary widely. In this work we describe six major categories of image processing algorithms. Each is addressed in detail in subsequent subsections.

Image Restoration

One important class of image processing problems is image restoration. In image restoration, the goal is to estimate an image free from some corruptive process based on corrupted observations. The corruptive process may include blur, noise, or aliasing, for example. Image restoration tends to be quantitative (rather than qualitative) and requires some knowledge of the corruptive process. The corruptive process modeled may include the imaging system itself.

Image restoration techniques can be divided into linear and nonlinear methods. Typical linear methods tend to be well suited for the restoration of images corrupted by a linear blur and additive Gaussian noise. Linear filters enjoy the benefits of having a well-established and rich theoretical framework. Furthermore, real-time implementation of linear filters is relatively easy because they employ only standard operations (multiplication and addition), and can also be implemented using fast Fourier transforms. In many cases, however, the restriction of linearity can lead to highly suboptimal results. In such cases, it may be desirable to employ a nonlinear filter.[36-38] Furthermore, as digital hardware becomes increasingly more sophisticated and capable, complex nonlinear operations can be realized in real time. For these reasons, the field of nonlinear filters has grown, and continues to grow rapidly.

While many applications benefit from the use of nonlinear methods, there exist broad classes of problems that are fundamentally suited to nonlinear methods and which have motivated the development of many nonlinear algorithms. Included in these classes of problems are: suppression of heavy-tailed noise processes and shot noise; processing of nonstationary signals; superresolution frequency extension; and modeling and inversion of nonlinear physical systems.

Image Enhancement

Another class of image processing techniques is image enhancement. Enhancement is the process of subjectively improving image quality. These techniques tend to be more qualitative than quantitative.

Enhancement includes point operations which may modify the histogram of an image. A histogram of a grayscale image is simply a plot of the number of times each gray level is observed in the image. Changing the brightness or contrast using a linear scaling of each pixel value, for example, will change the histogram of the image. Enhancement includes many other operations, including spatial smoothing and spatial sharpening. These can be accomplished with linear low-pass and high-boost filters, respectively. In a broader sense, enhancement can include edge detection, image resizing through interpolation, and other geometric transformations.
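A minimal example of such a point operation is a linear contrast stretch, which remaps a narrow range of gray levels onto the full 8-bit range and correspondingly spreads out the histogram. The image here is synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.integers(60, 120, size=(32, 32))   # low-contrast 8-bit image

# Linear scaling of each pixel value: map [min, max] onto [0, 255].
lo, hi = image.min(), image.max()
stretched = ((image - lo) * 255.0 / (hi - lo)).astype(np.uint8)

# The histogram before and after shows the spread of gray levels.
hist_before, _ = np.histogram(image, bins=256, range=(0, 256))
hist_after, _ = np.histogram(stretched, bins=256, range=(0, 256))

print(image.min(), image.max())          # original narrow range
print(stretched.min(), stretched.max())  # stretched to the full 0..255 range
```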

Multispectral and Hyperspectral Image Processing

A visible multispectral image is a collection of images of the same scene at different optical wavelengths. Each image or "band" typically corresponds to a narrow-spectral-band image of the scene. The term "hyperspectral" is generally reserved for multispectral images where hundreds of spectral bands are used. In a broader sense, some nonvisible images are described as multispectral when they are composed of a set of images, each measuring a different physical quantity. For example, in magnetic resonance imaging (MRI), the modality of the imager can be changed to collect several different types of images of the same area.

It may be that visible multispectral and hyperspectral images have most often found application in remote sensing. An excellent treatment of the subject can be found in Ref. [39]. The multiple wavelength information can provide a wealth of information about the scene. In the case of hyperspectral imagery, each pixel provides significant information about the spectral reflectance pattern or signature for the scene in that area. Such a spectral reflectance pattern can be used to distinguish the various materials in the scene with far more precision than with a single band. This is carried out by using multivariate clustering techniques. In some cases, it is even possible to perform material identification for each pixel based on the spectral reflectance signatures inferred from a hyperspectral image deck. This is referred to as pixel classification.

Image Segmentation

Segmentation is the process of grouping pixels in an image into similar classes.[40] This creates a set of nonoverlapping regions in the image. Segmentation is used in a number of ways. Perhaps the primary application is to isolate groups of pixels with similar properties so that these groups can then be classified together. This is performed extensively in remote sensing.[39] It is also carried out with nonvisible images such as MRI images. In the case of MRI, the different tissue types are segmented and may then be classified by a radiologist or by an automated system.

In automatic target recognition (ATR), described below, it is often necessary to isolate a single target object from the background prior to attempting to identify the object. This is carried out through image segmentation. Another use for segmentation is to allow one to perform different processing techniques on different groups of pixels (segments).[41] Segmentation can be accomplished using gray-level information, color or other spectral characteristics, local texture, boundaries, etc.

Automatic Target Recognition

ATR using image data remains an important problem in image processing. ATR refers to the process of identifying an object in image data, based solely on an automated system (no human in the loop).[1] ATR capabilities are an important aspect of many autonomous vehicles, surveillance systems, and military systems. ATR is also important in a number of industrial applications such as automatic sorting of objects and quality control in an assembly line.

ATR is generally broken down into the steps of segmentation, feature extraction, and classification (pattern recognition).[42] Segmentation, described above, is used to isolate objects from one another and the background. Feature extraction is the process of converting the observed pixel data in the segment of interest into a compact and concise set of numerical observations. These "features" should be designed to offer maximum class separability with a minimum of redundant information. Such features can include shape analysis parameters, Fourier descriptors, edge locations, object symmetry parameters, size, color (spectral characteristics), etc. Thus the features can be both spatial and spectral in nature. It is generally believed that a combination of these factors offers the most promise for many applications. Note that if only spatial information is used, this generally requires very high spatial resolution. If high-resolution spectral information is available, it may be possible to perform ATR with only one hyperspectral pixel on the object. Thus there is clearly an interesting trade-off between spatial and spectral resolution.

The pattern recognition step is the process of classifying (or naming) the objects based on the extracted features for those objects. This process can be accomplished using a number of frameworks including Bayesian analysis, neural networks, k nearest neighbor analysis, etc.[42]

Image sequences can aid ATR. For example, moving targets are easily segmented from the background in an image sequence. Furthermore, the motion parameters themselves could be used as features for ATR. Multiple looks from an image sequence from different viewing angles can also be a big aid, especially in the case of partially hidden or obscured objects.
For example, moving . . · !39 1 I · I performed extensive ly 111 remote sensi ng. t .'s a so targets are easily segmented from the background in an carri ed out with nonvisibl e im ages such as MRI Im ages. image sequence. Furthermore, the motion parameters In the case of MRI, the different ti ssue types are seg- themselves could be used as features for ATR. Multiple Digital Image Processing 409 looks from an image sequence from different viewing 7. Voll merhausen, R.H.; Driggers, R.G. Analysis of Sampled angles can also be a big aid, especially in the case of Imaging Systems; SPIE: Bellingham, W A, 2000. [iJ 8. Tsai, R.; Huang, T. Multiframe Image Restoration and partially hidden or obscured object . I Regisu·ation. In Advances in and Image Processing; JAI Press Inc., 1984; Vol. I, 3 17-339. Image Sequence Processing 9. Kim, S.; Bose, N.; Vale nzuela, H. Recursive reconstruction of high resolution image from noisy undersampled multi­ With recent advances in video hardware and desktop frames. IEEE Trans. Acoust. Speech Signal Process. digital video product , full-motion digital video and other June 1990, 38, 1013- 1027. I 0. Stark, H. ; Oskoui, P. High-resolution image recovery from image sequence data are becoming increasingly prevalent. image-plane arrays, using convex projections. J. Opt. Soc. Image sequences offer a wealth of information which can Am., A 1989, 6 ( II ), 1715-1726. be exploited in image processing algorithms. Image I l. Patti, A.J.; Sezan, M .l.; Tekalp, A.M. Superresolution re toration, enhancement, segmentation, and automatic video reconstruction with arbitrary sampling lattices and target recognition can all benefit from image sequence nonzero aperture time. IEEE T rans. Image Process. Aug. information.143 1 Consider the case of temporal noise, for 1997, 6, 1064- 1076. example. Such noise can often be greatly reduced with a 12. Mann, S .; Picard, R.W. 
Virtual Bell ows: Constructing 43 motion-compensated temporal-averaging filter.1 1 Re­ High Quality Stills from Video. In Proceedings of IEEE dundant information in sequences can also be exploited International Conference on /m.age Processing, Austin, for dramatic video compression, as seen in the Moving TX; Nov. 1994. 13. Schultz, R.R.; Stevenson, R.L. Extraction of high-resolu­ Picture Experts Group (MPEG) tandard. In many image tion frames from video sequences. IEEE Trans. Image sequence processing algorithms, scene motion parameters Process. June 1996,5,996- 1011. are frequently required. This may involve rigid object 14. Cheeseman, P.; Kanefsky, B.; Kraft, R.; Stutz, J.; motion or full deformable optical flow . Thus motion Hanson, R. Super-Resolved Surface Reconstruction from estimation and optical flow estimation are fundamental M ultiple Images. In NASA Technical Report FIA-94-12; problems associated with image sequence processing. NASA Ames Research Center: Moffett Field, CA, Dec. 1994. 15. Irani, M.; Peleg, S. Improving resolution by image CONCLUSION registration. CVGTP, Graph. Models Image Process. 1991, 53, 23 1-239. In this article, we have attempted to convey the basics of 16. Hunt, B.R. A vector quantizer for image restoration. Int. J. digital image representation and some aspects of visible Imaging Syst. Techno!. 1995, 6, 11 9- 124. 17. Sheppard, D.G.; Bilgin, A.; Nadar, M.S.; Hunt, B:R.; image acquisition. We have also highlighted six major Marcellin, M.W. A vector quantizer for 1mage restoratiOn. areas within the broad field of digital image processing, IEEE Trans. Im age Process. Jan. 1998, 7, 11 9-124. which are discussed in more detai l in related articles. We 18. Sementilli, P.J. ; Hunt, B.R.; Nadar, M.S. Analysis of the hope that the reader will benefit from the references limit to superresolu tion in incoherent imaging. J. Opt. Soc. I cited in this work, which provide a fuller treatment of Am., A Nov. 1993, 10, 2265-2276. these subjects. 19. 
Gerchberg, R.W. Super-resolution through error energy reduction. Opt. Acta 1974, 21, 709-720. . 20. der Ziel, A.V. Noise in Dev1ces and REFERENCES Circuits; W il ey-Interscience: New York, 1986. 2 1. Mi lton, A.F.; Barone, F.R.; Kruer, M.R. Influence of nonuniformity on in frared focal plane array performance. 1. Castleman, K.R. Digica/lmage Processing; Prentice-Hall : Englewood C li ffs, NJ, 1996. Opt. Eng. 1985, 24, 855-862. . 22. Holst, G.C. CCD Arrays, Cam.eras, and Dtsplays; SPIE 2. Goodman, J. Introduction to ; McGraw-Hill , Opt. Eng. Press: Bellingham, 1996. 1968. 23. Scribner, D.A.; Sarkay, K.A.; Caulfield, J.T.; Kruer, M.R. ; 3. Goodman, J.W. Statistical Optics; John Wiley and Sons: Katz, G.; Gridley, C .J . Nonuniformity correcti o n for New York, 1985. staring focal pl ane arrays usin g scene-based techniques. 4. Roggemann, M.C.; Welsh, B. Imaging Through Tur­ SPIE Proc. Tech. Symp. Opt. E ng. Photonics Aerosp. bulence; CRC Press: New York, 1996. 5. Hardie, R.C.; Barnard, K.J.; Bognar, J.G.; Armstrong, E.E.; Sens. 1990, 1308, 224-233. 24. Perry, D.L.; Dereniak, E.L. Linear theory of nonuniformity Watson, E.A. Hi gh resolution image reconstruction from correcti on in infrared staring sensors. Opt. Eng. 1993, a sequence of rotated and translated frames and its appli cation to an infrared imaging system. Opt. Eng. 32, 1853- 1859. 25. Schulz, M.; Caldwe ll , L. Nonuniformity correcti o n and Jan. 1998, 37, 247-260. correctability of infrared focal pl ane arrays. Infrared 6. Oppenheim, A.V.; Schafer, R.W. Discrete-Time Signal Processing; Prentice Hall , New Jersey, 1989. Phys. Techno!. 1995, 36, 763-777. 410 Digital Image Processing

26. Scribner, D.; Sarkady, K.; Kruer, M.; Caulfield, J.; Hunt, J.; Colbert, M.; Descour, M. Adaptive nonuniformity correction for infrared focal plane arrays using neural networks. Proc. SPIE Int. Symp. Opt. Appl. Sci. Eng. 1991, 1541, 100-110.
27. Narendra, P.M.; Foss, N.A. Shutterless fixed pattern noise correction for infrared imaging arrays. Proc. SPIE Int. Soc. Opt. Eng., Tech. Issues Focal Plane Dev. 1981, 282, 44-51.
28. Harris, J.G. Continuous-Time Calibration of VLSI Sensors for Gain and Offset Variations. In Proceedings of the SPIE International Symposium on Aerospace and Dual-Use Photonics, Smart Focal Plane Arrays and Focal Plane Array Testing; Wigdor, M., Massie, M.A., Eds.; 1995; Vol. 2474, 23-33.
29. Harris, J.G.; Chiang, Y.-M. Nonuniformity Correction Using Constant Average Statistics Constraint: Analog and Digital Implementations. In Proceedings of the SPIE Aerospace/Defense Sensing and Controls, Infrared Technology and Applications XXIII; Anderson, B.F., Strojnik, M., Eds.; 1997; Vol. 3061, 895-905.
30. Hayat, M.M.; Torres, S.N.; Armstrong, E.; Cain, S.C.; Yasuda, B. Statistical algorithm for nonuniformity correction in focal-plane arrays. Appl. Opt. Mar. 1999, 38, 772-780.
31. Hardie, R.C.; Hayat, M.M.; Armstrong, E.; Yasuda, B. Scene-based nonuniformity correction using video sequences and registration. Appl. Opt. Mar. 2000, 39, 1241-1250.
32. Saleh, B.E.A. Photoelectron Statistics; Springer: Berlin, 1978.
33. Cornsweet, T.N. Visual Perception; Academic Press: New York, 1970.
34. Snyder, D.L.; Miller, M.I. Random Point Processes in Time and Space; Springer-Verlag: New York, 1991.
35. Saleh, B.E.A. Real-Time Optical Processing: Quantum Noise in Optical Processing; Academic Press: New York, 1994.
36. Mitra, S.K.; Sicuranza, G.L. Nonlinear Image Processing; Academic Press, 2001.
37. Astola, J.; Kuosmanen, P. Fundamentals of Nonlinear Digital Filtering; CRC Press: New York, 1997.
38. Pitas, I.; Venetsanopoulos, A.N. Nonlinear Digital Filters: Principles and Applications; Kluwer Academic Publishers, 1990.
39. Richards, J.A. Remote Sensing Digital Image Analysis; Springer-Verlag, 1986.
40. Haralick, R.M.; Shapiro, L.G. Survey: Image segmentation techniques. Comput. Vis. Graph. Image Process. 1985, 29, 100-132.
41. Barner, K.E.; Sarhan, A.M.; Hardie, R.C. Partition-based weighted sum filters for image restoration. IEEE Trans. Image Process. May 1999, 8 (5).
42. Fukunaga, K. Introduction to Statistical Pattern Recognition; Academic Press, Inc.: San Diego, 1990.
43. Tekalp, A.M. Digital Video Processing; Prentice Hall, 1995.