Interpolation and Decimation

C. A. Bouman: Digital Image Processing - January 20, 2021

1-D Rate Conversion

• Decimation
  – Reduce the sampling rate of a discrete-time signal.
  – A lower sampling rate reduces storage and computation requirements.
• Interpolation
  – Increase the sampling rate of a discrete-time signal.
  – A higher sampling rate preserves fidelity.

1-D Periodic Subsampling

• Time-domain subsampling of x(n) with period D:
  y(n) = x(Dn)        [block diagram: x(n) → ↓D → y(n)]
• Frequency-domain representation:
  Y(e^{jω}) = (1/D) Σ_{k=0}^{D−1} X(e^{j(ω−2πk)/D})
• Problem: frequencies above π/D will alias.
  [Figure: example for D = 2, showing aliased signal energy near ω = ±π]
• Solution: remove frequencies above π/D before subsampling.

Decimation System

[Block diagram: x(n) → H(e^{jω}) → ↓D → y(n)]

• Apply the filter H(e^{jω}) to remove high frequencies.
  – For |ω| < π:
    H(e^{jω}) = rect(Dω / (2π))
  – For all ω:
    H(e^{jω}) = Σ_{k=−∞}^{∞} rect(D(ω − 2πk) / (2π)) = prect_{2π/D}(ω)
  – Impulse response:
    h(n) = (1/D) sinc(n/D)
• Frequency-domain representation:
  Y(e^{jω}) = (1/D) Σ_{k=0}^{D−1} H(e^{j(ω−2πk)/D}) X(e^{j(ω−2πk)/D})

Graphical View of Decimation for D = 2

• [Figure: spectral content of the signal X(e^{jω})]
• [Figure: spectral content of the filtered signal H(e^{jω}) X(e^{jω})]
• Spectral content of the decimated signal (peak amplitude 1/D):
  Y(e^{jω}) = (1/D) Σ_{k=0}^{D−1} H(e^{j(ω−2πk)/D}) X(e^{j(ω−2πk)/D})

Decimation for Images

• The extension to decimation of images is direct.
• Apply a 2-D filter:
  f(i, j) = h(i, j) ∗ x(i, j)
• Subsample the result:
  y(i, j) = f(Di, Dj)
• The ideal choice of filter is
  h(m, n) = (1/D²) sinc(m/D) sinc(n/D)
• Problems:
  – The filter has infinite extent.
  – The filter is not strictly positive.

Alternative Filters for Image Decimation

• Direct subsampling: h(m, n) = δ(m, n)
  – Advantages/disadvantages:
    ∗ Low computation
    ∗ Excessive aliasing
• Block averaging (shown for D = 2):
  h(m, n) = δ(m, n) + δ(m + 1, n) + δ(m, n + 1) + δ(m + 1, n + 1)
  – Advantages/disadvantages:
    ∗ Low computation
    ∗ Some aliasing
• Sinc function: h(m, n) = (1/D²) sinc(m/D, n/D)
  – Advantages/disadvantages:
    ∗ Optimal, if the signal is band limited
    ∗ High computation

Decimation Filters

• [Figure: 2-D impulse response of the block-averaging filter]
• [Figure: 2-D sinc filter used for ideal subsampling]

Original Image

• Full resolution.

Image Decimation by 4 using Subsampling

• Severe aliasing.

Image Decimation by 8 using Subsampling

• More severe aliasing.

Image Decimation by 4 using Block Averaging

• Sharp, but with some aliasing.

Image Decimation by 4 using Sinc Filter

• Theoretically optimal, but not necessarily the best visual quality.
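The NumPy sketch below illustrates the three decimation filters compared above: direct subsampling, block averaging, and a separable sinc anti-aliasing filter followed by subsampling. It is an illustrative implementation rather than code from the notes; the function names, the tap count, the Hamming window, and the DC renormalization of the truncated sinc are assumptions made to keep the ideal (infinite-extent) filter practical.

```python
import numpy as np

def decimate_subsample(x, D):
    """Direct subsampling, h(m,n) = delta(m,n): keep every D-th pixel."""
    return x[::D, ::D]

def decimate_block_average(x, D):
    """Average each D x D block, then subsample (block-averaging filter, normalized by 1/D^2)."""
    H, W = x.shape
    H, W = H - H % D, W - W % D                  # trim so the image tiles evenly
    return x[:H, :W].reshape(H // D, D, W // D, D).mean(axis=(1, 3))

def decimate_sinc(x, D, taps=33):
    """Separable anti-aliasing filter h(n) = (1/D) sinc(n/D), truncated and
    Hamming-windowed (an assumption; the ideal filter has infinite extent)."""
    n = np.arange(taps) - taps // 2
    h = np.sinc(n / D) / D * np.hamming(taps)
    h /= h.sum()                                 # restore unit DC gain after windowing
    f = np.apply_along_axis(lambda r: np.convolve(r, h, mode="same"), 1, x)
    f = np.apply_along_axis(lambda c: np.convolve(c, h, mode="same"), 0, f)
    return f[::D, ::D]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((256, 256))
    for fn in (decimate_subsample, decimate_block_average, decimate_sinc):
        print(fn.__name__, fn(img, 4).shape)     # each prints (64, 64)
```

As the example images suggest, direct subsampling is cheapest but aliases badly, block averaging trades a little blur for reduced aliasing, and the (windowed) sinc comes closest to the ideal band-limited solution at the highest computational cost.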
1-D Up-Sampling

• Up-sampling by L:
  y(n) = x(n/L) if n = kL for some integer k, and y(n) = 0 otherwise,
  or in the alternative form
  y(n) = Σ_{m=−∞}^{∞} x(m) δ(n − mL)
• [Figure: example for L = 3 — the discrete-time signal x(n) and x(n) up-sampled by 3]

Up-Sampling in the Frequency Domain

• Up-sampling by L:
  y(n) = Σ_{m=−∞}^{∞} x(m) δ(n − mL)
• In the frequency domain:
  Y(e^{jω}) = X(e^{jωL})
• [Figure: example for L = 3 — DTFT of x(n) and DTFT of y(n)]

Interpolation in the Frequency Domain

[Block diagram: x(n) → ↑L → H(e^{jω}) → y(n)]

• The interpolating filter gives
  Y(e^{jω}) = H(e^{jω}) X(e^{jωL})
• [Figure: example for L = 3 — DTFT of y(n) and the interpolated signal in frequency]

Interpolating Filter

• In the frequency domain:
  H(e^{jω}) = L prect_{2π/L}(ω)
• In the time domain:
  h(n) = sinc(n/L)
• [Figure: interpolating filter in time and in frequency]

Interpolation in the Time Domain

• The overall result in the frequency domain:
  Y(e^{jω}) = H(e^{jω}) X(e^{jωL})
• [Figure: example for L = 3 — the discrete-time signal x(n) and the interpolated signal in time]

2-D Interpolation

[Block diagram: x(m, n) → ↑(L, L) → H(e^{jµ}, e^{jν}) → y(m, n)]

• Up-sampling:
  y(m, n) = Σ_{k=−∞}^{∞} Σ_{l=−∞}^{∞} x(k, l) δ(m − kL, n − lL)
• In the frequency domain:
  H(e^{jµ}, e^{jν}) = prect_{2π/L}(µ) prect_{2π/L}(ν)
  h(m, n) = sinc(m/L, n/L)
• Problems with the sinc impulse response:
  – Infinite support, so infinite computation.
  – Negative sidelobes, which cause ringing artifacts at edges.

Pixel Replication

• Impulse response of the filter:
  h(m, n) = 1 for 0 ≤ m ≤ L − 1 and 0 ≤ n ≤ L − 1, and 0 otherwise
• Replicates each pixel L² times.

Bilinear Interpolation

• Impulse response of the filter:
  h(m, n) = Λ(m/L) Λ(n/L)
• Results in linear interpolation of the intermediate pixels.
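To make the interpolation pipeline concrete, here is a minimal NumPy sketch of 2-D up-sampling by L followed by separable filtering, with the pixel-replication (box) and bilinear (triangle, Λ(n/L)) kernels described above. The function names, the "same"-size convolution, and the border handling are implementation assumptions, not part of the notes.

```python
import numpy as np

def upsample(x, L):
    """Insert L-1 zeros between samples in each direction:
    y(m,n) = x(m/L, n/L) when L divides m and n, and 0 otherwise."""
    y = np.zeros((x.shape[0] * L, x.shape[1] * L))
    y[::L, ::L] = x
    return y

def interp_kernel(kind, L):
    """1-D kernels; the 2-D filter is their separable product."""
    if kind == "replicate":                      # box of length L -> pixel replication
        return np.ones(L)
    if kind == "bilinear":                       # triangle Lambda(n/L) -> bilinear
        n = np.arange(-(L - 1), L)
        return 1.0 - np.abs(n) / L
    raise ValueError(kind)

def interpolate(x, L, kind="bilinear"):
    """Up-sample by L, then filter rows and columns with the chosen kernel."""
    h = interp_kernel(kind, L)
    y = upsample(x, L)
    y = np.apply_along_axis(lambda r: np.convolve(r, h, mode="same"), 1, y)
    y = np.apply_along_axis(lambda c: np.convolve(c, h, mode="same"), 0, y)
    return y

if __name__ == "__main__":
    x = np.arange(16, dtype=float).reshape(4, 4)
    print(interpolate(x, 3, "replicate").shape)  # (12, 12)
    print(interpolate(x, 3, "bilinear").shape)   # (12, 12)
```

Because each nonzero up-sampled sample is isolated by L − 1 zeros, the box kernel simply copies it into an L × L neighborhood, while the triangle kernel linearly blends neighboring copies, matching the pixel-replication and bilinear behaviors discussed above.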