
University of Dayton eCommons
Electrical and Computer Engineering Faculty Publications, Department of Electrical and Computer Engineering, 2003

eCommons Citation: Hardie, Russell C. and Hayat, Majeed M., "Digital Image Processing" (2003). Electrical and Computer Engineering Faculty Publications. 88. https://ecommons.udayton.edu/ece_fac_pub/88

Digital Image Processing

Russell C. Hardie, University of Dayton, Dayton, Ohio, U.S.A.
Majeed M. Hayat, University of New Mexico, Albuquerque, New Mexico, U.S.A.

Encyclopedia of Optical Engineering, DOI: 10.1081/E-EOE 120009509. Copyright © 2003 by Marcel Dekker, Inc. All rights reserved.

INTRODUCTION

In recent years, digital images and digital image processing have become part of everyday life. This growth has been fueled primarily by advances in digital computers and by the advent and growth of the Internet. Furthermore, commercially available digital cameras, scanners, and other equipment for acquiring, storing, and displaying digital imagery have become very inexpensive and increasingly powerful. An excellent treatment of digital images and digital image processing can be found in Ref. [1].

A digital image is simply a two-dimensional array of finite-precision numerical values called picture elements (or pixels). Thus a digital image is a spatially discrete (or discrete-space) signal. In visible grayscale images, for example, each pixel represents the intensity of a corresponding region in the scene. The grayscale values must be quantized into a finite-precision format. Typical resolutions include 8 bit (256 gray levels), 12 bit (4096 gray levels), and 16 bit (65,536 gray levels). Color visible images are most frequently represented by tristimulus values: the quantities of red, green, and blue light required, in the additive color system, to produce the desired color. Thus a so-called "RGB" color image can be thought of as a set of three "grayscale" images, the first representing the red component, the second the green, and the third the blue.

Digital images can also be nonvisible in nature, meaning that the physical quantity represented by the pixel values is something other than visible light intensity or color. Examples include radar cross-sections of an object, temperature profiles (infrared imaging), X-ray images, gravitational fields, etc. In general, any two-dimensional array of information can be the basis for a digital image.

As in the case of any digital data, the advantage of this representation lies in the ability to manipulate the pixel values using a digital computer or digital hardware. This offers great power and flexibility. Furthermore, digital images can be stored and transmitted far more reliably than their analog counterparts. Error-protection coding of digital imagery, for example, allows for virtually error-free transmission.

OPTICS AND IMAGING SYSTEMS

Considering that optical images are our main focus here, it is highly beneficial to review the basics of optical image acquisition. During acquisition, significant degradation can occur. Thus, in order to design and properly apply various image processing algorithms, knowledge of the acquisition process may be essential.

The optical digital-image acquisition process is perhaps most simply broken up into three stages. The first stage is the formation of the continuous optical image in the focal plane of a lens. We rely on linear systems theory to model this step, which is characterized by the system point spread function (PSF), as in the case of an analog photographic camera. Next, this continuous optical image is sampled, typically by a detector array referred to as the focal-plane array (FPA). Finally, the values from each detector are quantized to form the final digital image.

In this section, we address incoherent optical image formation through the system PSF. In the section "Resolution and Sampling," we address sampling and quantization. There are two main contributors to the system PSF. One is the spatial integration over the finite detector size. A typical FPA is illustrated in Fig. 1. This effect is spatially invariant for a uniform detector array. Spatial integration can be included in an overall system PSF by modeling it with a convolution mask, followed by ideal spatial sampling; more details will be supplied about this presently. Another contributor is diffraction due to the finite size of the aperture in the optics. Other factors, such as lens aberrations[2,3] and atmospheric turbulence,[4] can also be included in the image acquisition model.

Fig. 1 Uniform detector array illustrating critical dimensions. (From Ref. [5].)

Let us examine a uniform detector array and provide a mathematical model for the associated imaging system. We will closely follow the analysis given in Ref. [5]. The effect of the integration of light intensity over the span of the detectors can be modeled as a linear convolution operation with a PSF determined by the geometry of a single detector. Let this detector PSF be denoted by d(x, y). Applying the Fourier transform to d(x, y) yields the effective continuous frequency response resulting from the spatial integration of the detectors[1,2,5]

  D(u, v) = \mathcal{F}\{d(x, y)\}   (1)

where \mathcal{F}\{\cdot\} represents the continuous Fourier transform. Next, define the incoherent optical transfer function (OTF) of the optics to be H_0(u, v). The overall system OTF is given by the product of these, yielding

  H(u, v) = D(u, v)\, H_0(u, v)   (2)

The overall continuous system PSF is then given by

  h_c(x, y) = \mathcal{F}^{-1}\{H(u, v)\}   (3)

where \mathcal{F}^{-1}\{\cdot\} represents the inverse Fourier transform.

If we intend to correct for this blurring in our digital image with postprocessing, it is most convenient to have the equivalent discrete PSF (the impulse-invariant system). The impulse-invariant discrete system PSF,[6] denoted as h_d(n_1, n_2), is obtained by sampling the continuous PSF such that

  h_d(n_1, n_2) = h_c(n_1 T_1, n_2 T_2)   (4)

where T_1 and T_2 are the horizontal and vertical detector spacings, respectively. This accurately represents the continuous blurring when the effective sampling frequency 1/T_1 exceeds two times the horizontal cutoff frequency of H(u, v) and 1/T_2 exceeds the vertical cutoff frequency by twofold.[6]

Let us now specifically consider a system with uniform rectangular detectors, as shown in Fig. 1. The shaded areas represent the active region of each detector. The detector PSF in this case is given by

  d(x, y) = \frac{1}{ab}\,\mathrm{rect}\!\left(\frac{x}{a}, \frac{y}{b}\right)
          = \begin{cases} \frac{1}{ab} & \text{for } |x/a| < 1/2 \text{ and } |y/b| < 1/2 \\ 0 & \text{otherwise} \end{cases}   (5)

Let the active region dimensions, a and b, be measured in millimeters (mm). Thus the effective continuous frequency response resulting from the detector is

  D(u, v) = \mathrm{sinc}(au, bv) = \frac{\sin(\pi a u)\,\sin(\pi b v)}{\pi^2 a b\, u v}   (6)

where u and v are the horizontal and vertical frequencies measured in cycles/mm.

The incoherent OTF of diffraction-limited optics with a circular exit pupil can be found[2] as

  H_0(u, v) = \begin{cases} \dfrac{2}{\pi}\left[\cos^{-1}\!\left(\dfrac{\rho}{\rho_c}\right) - \dfrac{\rho}{\rho_c}\sqrt{1 - \left(\dfrac{\rho}{\rho_c}\right)^2}\,\right] & \text{for } \rho < \rho_c \\ 0 & \text{otherwise} \end{cases}   (7)

where \rho = \sqrt{u^2 + v^2}. The parameter \rho_c is the radial system cutoff frequency, given by

  \rho_c = \frac{1}{\lambda f/\#}   (8)

where f/# is the f-number of the optics and \lambda is the wavelength of light considered. Because the cutoff of H_0(u, v) is \rho_c, so is the cutoff of the overall system H(u, v).

Fig. 2 shows an example of D(u, v), H_0(u, v), H(u, v), and h_c(x, y) for a particular imaging system. The system considered happens to be a forward-looking infrared (FLIR) imager. The FLIR camera uses a 128 × 128 Amber AE-4128 infrared FPA. The FPA is composed of indium-antimonide (InSb) detectors with a response in the 3-5 μm wavelength band. This system has square detectors of size a = b = 0.040 mm. The imager is equipped with 100 mm f/3 optics. The center wavelength, λ = 0.004 mm, is used in the OTF calculation. Fig. 2a shows the effective modulation transfer function (MTF) of the detectors, |D(u, v)|. The diffraction-limited OTF of the optics, H_0(u, v), is shown in Fig. 2b. Note that the cutoff frequency is 83.3 cycles/mm. The overall system MTF, |H(u, v)|, is plotted in Fig. 2c. Finally, the normalized continuous system PSF, h_c(x, y), is plotted in Fig. 2d. The continuous PSF is sampled at the detector spacing to yield the impulse-invariant discrete system impulse response.

Fig. 2 (a) Effective MTF of the detectors in the FLIR imager; (b) diffraction-limited OTF of the optics; (c) overall system MTF; (d) overall continuous system PSF.
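Eqs. 2 and 6-8 are straightforward to evaluate numerically. The sketch below (Python with NumPy; the function names and frequency grid are our own, not from the article) computes the detector MTF, the diffraction-limited OTF, and their product for the FLIR parameters quoted above, reproducing the 83.3 cycles/mm cutoff:

```python
import numpy as np

def detector_mtf(u, v, a=0.040, b=0.040):
    """Detector frequency response D(u, v) = sinc(au, bv) of Eq. 6.
    a, b are the active detector dimensions in mm; u, v are in cycles/mm.
    np.sinc(x) = sin(pi x)/(pi x), so the separable product matches Eq. 6."""
    return np.sinc(a * u) * np.sinc(b * v)

def diffraction_otf(u, v, wavelength=0.004, f_number=3.0):
    """Incoherent diffraction-limited OTF with a circular exit pupil, Eq. 7."""
    rho = np.sqrt(u ** 2 + v ** 2)
    rho_c = 1.0 / (wavelength * f_number)      # radial cutoff frequency, Eq. 8
    r = np.clip(rho / rho_c, 0.0, 1.0)
    h0 = (2.0 / np.pi) * (np.arccos(r) - r * np.sqrt(1.0 - r ** 2))
    return np.where(rho < rho_c, h0, 0.0)

# Overall system OTF is the product of the two (Eq. 2).
u = np.linspace(-100.0, 100.0, 401)            # cycles/mm
U, V = np.meshgrid(u, u)
H = detector_mtf(U, V) * diffraction_otf(U, V)

rho_c = 1.0 / (0.004 * 3.0)                    # FLIR example: 0.004 mm, f/3
print(round(rho_c, 1))                         # → 83.3
```

At zero frequency both factors are unity, and H(u, v) vanishes identically beyond ρ_c, consistent with the cutoff discussion above.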
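One way to obtain the impulse-invariant discrete PSF of Eq. 4 without an analytic h_c(x, y) is to sample H(u, v) on the discrete frequency grid implied by the detector pitch and take an inverse DFT. The sketch below is only an approximation (the finite grid periodizes h_c, and it ignores the aliasing the article warns about when 1/T_1 is below twice the cutoff); the pitch T and grid size N are assumed values, not figures from the article:

```python
import numpy as np

def system_otf(u, v, a=0.040, wavelength=0.004, f_number=3.0):
    """Overall OTF H = D * H0 (Eqs. 2, 6, 7) for square detectors of size a (mm)."""
    rho = np.sqrt(u ** 2 + v ** 2)
    rho_c = 1.0 / (wavelength * f_number)      # Eq. 8
    r = np.clip(rho / rho_c, 0.0, 1.0)
    h0 = np.where(rho < rho_c,
                  (2.0 / np.pi) * (np.arccos(r) - r * np.sqrt(1.0 - r ** 2)),
                  0.0)
    return np.sinc(a * u) * np.sinc(a * v) * h0

T = 0.050    # hypothetical detector pitch T1 = T2 in mm (not given in the article)
N = 64       # number of discrete PSF taps per axis (assumed)

# Sample H on the DFT frequency grid and invert: the result approximates
# h_d(n1, n2) = h_c(n1*T, n2*T) of Eq. 4, periodized by the finite grid.
f = np.fft.fftfreq(N, d=T)                     # cycles/mm
U, V = np.meshgrid(f, f)
h_d = np.fft.fftshift(np.real(np.fft.ifft2(system_otf(U, V))))
h_d /= h_d.sum()                               # normalize to unit DC gain
```

After the fftshift, the PSF peak sits at the center of the N × N array, ready to be used as a convolution mask in restoration experiments.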
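Given a discrete PSF, the linear-convolution acquisition model described above can be sketched as follows; the 3 × 3 uniform mask here is a stand-in for illustration, not a PSF from the article:

```python
import numpy as np

def blur(scene, psf):
    """Model acquisition blur as a (circular) 2-D convolution via the DFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(scene) *
                                np.fft.fft2(psf, s=scene.shape)))

psf = np.full((3, 3), 1.0 / 9.0)   # stand-in uniform detector PSF
scene = np.zeros((8, 8))
scene[4, 4] = 1.0                  # a point source
observed = blur(scene, psf)
```

Blurring a point source reproduces the PSF at the source location, which is exactly the defining property of a spatially invariant imaging system.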