MATH 262/CME 372: Applied Fourier Analysis and Elements of Modern Signal Processing, Winter 2021
Lecture 6 — January 28, 2021
Prof. Emmanuel Candès. Scribe: Carlos A. Sing-Long, edited by E. Bates.

1 Outline

Agenda:
1. Aliasing example
2. Discrete convolutions
3. Fourier series
4. Another look at Shannon's sampling theorem

Last Time: We introduced the Dirac comb and proved the Poisson summation formula, which states that the Fourier transform of a Dirac comb of period $T$ is also a Dirac comb, but with period $2\pi/T$ (modulo a multiplicative factor $2\pi/T$). We discussed the problem of sampling an analog signal $f(t)$ by measuring its values at time intervals of length $T$, and we represented the discretized signal $f_d$ as a superposition of Dirac delta functions. The Poisson summation formula was instrumental in proving the aliasing formula, which states that the Fourier transform $\hat{f}_d$ of the discretized signal is the $2\pi/T$-periodization of the spectrum $\hat{f}$ of the analog signal. From this calculation, we saw that when $f$ is bandlimited and $T$ is sufficiently small, no aliasing occurs in the periodization. When this is the case, we can recover the original signal from its samples via Shannon's sampling theorem.

Today: We will first revisit the topic of aliasing by observing it for an actual signal. Then, in pursuit of studying discrete signals further, we shall introduce Fourier series. Understanding these discrete sums as a special case of the Fourier integrals we know well, we will survey the properties Fourier series have in analogy with Fourier transforms. In particular, the Poisson summation formula will be employed once more to show that complex exponentials again form an orthonormal basis of $L^2$-space. The machinery of this basis will actually provide a second proof of Shannon interpolation.

2 A case of aliasing

To make our study of aliasing concrete, let us consider the very simple signal
$$f(t) = \cos\left(\frac{3\pi}{2}t\right) = \frac{1}{2}\left[e^{i\frac{3\pi}{2}t} + e^{-i\frac{3\pi}{2}t}\right].$$
Its Fourier transform is
$$\hat{f}(\omega) = \frac{1}{2}\left[\delta\left(\omega + \frac{3\pi}{2}\right) + \delta\left(\omega - \frac{3\pi}{2}\right)\right],$$
meaning $f$ is bandlimited to $\left[-\frac{3\pi}{2}, \frac{3\pi}{2}\right]$. From last lecture we know that the Nyquist rate is then $\frac{3}{2}$ (three samples for every two units of time). Suppose we did not know this and instead guessed that $f$ were bandlimited to $[-\pi, \pi]$. We would then believe samples were only needed every 1 unit of time. But this sampling rate is below Nyquist and thus too slow to prevent aliasing. Let us see why: the Fourier transform of the discretization $f_d = \sum_{n \in \mathbb{Z}} f(n)\,\delta_n$ with $T = 1$ is
$$\hat{f}_d(\omega) = \sum_{k \in \mathbb{Z}} \hat{f}(\omega - 2\pi k) = \frac{1}{2}\sum_{k \in \mathbb{Z}}\left[\delta\left(\omega + \frac{3\pi}{2} - 2\pi k\right) + \delta\left(\omega - \frac{3\pi}{2} - 2\pi k\right)\right], \qquad (1)$$
which is a spike train with spacing $\pi$. What we then see in the frequency range $[-\pi, \pi]$ is not $\hat{f}$, but rather
$$\hat{f}_d(\omega)\,\mathbb{1}\{|\omega| \le \pi\} = \hat{g}(\omega) = \frac{1}{2}\Big[\underbrace{\delta\left(\omega + \frac{\pi}{2}\right)}_{\text{from } k = -1} + \underbrace{\delta\left(\omega - \frac{\pi}{2}\right)}_{\text{from } k = 1}\Big].$$

Figure 1: Shown above is the portion of the Dirac comb $\hat{f}_d$ in $[-\pi, \pi]$, which has contributions from five different copies of $\hat{f}$, corresponding to $k = 0$, $\pm 1$, and $\pm 2$ in (1). Part of each of the $k = \pm 1$ copies is aliased into the interval $\left[-\frac{3\pi}{2}, \frac{3\pi}{2}\right]$, giving artifact delta functions (displayed with solid lines) in the supposed frequency band $[-\pi, \pi]$. Such aliasing produces low-frequency artifacts in signal reconstruction.

This is the Fourier transform of
$$g(t) = \cos\left(\frac{\pi}{2}t\right),$$
a cosine wave with frequency three times smaller than that of $f$. Indeed, if Shannon's formula is naively applied to $\hat{f}_d$, the signal recovered is
$$\sum_{n \in \mathbb{Z}} f(n)\,\mathrm{sinc}(t - n) = \sum_{n \in \mathbb{Z}} \cos\left(\frac{3\pi}{2}n\right)\mathrm{sinc}(t - n) = \sum_{n \in \mathbb{Z}} \cos\left(\frac{\pi}{2}n\right)\mathrm{sinc}(t - n) = \sum_{n \in \mathbb{Z}} g(n)\,\mathrm{sinc}(t - n) = g(t),$$
since $g$ is bandlimited to $[-\pi, \pi]$ and $f$ is not.

In other words, by sampling $f$ at too low a rate, we cannot detect the difference between the signal $f$ and the signal $g$: their values at the integers are identical. Shannon interpolation can only return the unique function that gives those samples and is bandlimited to $[-\pi, \pi]$. In this case, that function is $g(t)$.

Figure 2: When sampled at the integers, the functions $f(t) = \cos\left(\frac{3\pi}{2}t\right)$ and $g(t) = \cos\left(\frac{\pi}{2}t\right)$ return identical values. The higher frequency of $f$ (solid, black) is aliased into the lower frequency band $[-\pi, \pi]$, giving a reconstruction of $g$ rather than of $f$.
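The aliasing is easy to observe numerically. Below is a minimal Python/NumPy sketch that samples $f$ at the integers, checks that the samples coincide with those of $g$, and carries out a truncated version of the naive sinc interpolation above; the truncation range and evaluation grid are arbitrary choices made for illustration.

```python
import numpy as np

# The two signals from this section.
f = lambda t: np.cos(1.5 * np.pi * t)   # bandlimited to [-3*pi/2, 3*pi/2]
g = lambda t: np.cos(0.5 * np.pi * t)   # bandlimited to [-pi/2, pi/2]

# Sample at the integers (T = 1), i.e., below the Nyquist rate 3/2 required for f.
n = np.arange(-200, 201)                # truncated index set, for illustration only
samples = f(n)

# The samples of f and g are identical: at this rate the two signals cannot be told apart.
assert np.allclose(samples, g(n))

# Naive Shannon interpolation with T = 1: sum_n f(n) sinc(t - n),
# where np.sinc(x) = sin(pi x) / (pi x) is the normalized sinc used in the text.
t = np.linspace(-4, 4, 801)
recon = samples @ np.sinc(t[None, :] - n[:, None])

print(np.max(np.abs(recon - g(t))))     # small (truncation error only)
print(np.max(np.abs(recon - f(t))))     # order one: the reconstruction is g, not f
```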
3 Discrete convolutions

Now we consider discrete signals $\{f(n)\}_{n \in \mathbb{Z}}$, or more precisely, discrete-time signals. As in Lecture 1, we are interested in linear maps $L : f \mapsto Lf$. We define for $\tau \in \mathbb{Z}$ the translation of $f$ as
$$f_\tau(n) = f(n - \tau), \quad n \in \mathbb{Z},$$
and we say $L$ is time-invariant if $(Lf)_\tau = L f_\tau$. The discrete impulse $\delta$ is defined as
$$\delta(n) = \begin{cases} 1 & n = 0, \\ 0 & \text{otherwise}, \end{cases}$$
and the discrete convolution between $f$ and $g$ as
$$(f * g)(n) = \sum_{m \in \mathbb{Z}} f(m)\,g(n - m).$$
As before, $\delta$ acts as the identity under convolution:
$$(f * \delta)(n) = \sum_{m \in \mathbb{Z}} f(m)\,\delta(n - m) = f(n)\,\delta(0) = f(n).$$
Consequently, for any time-invariant map $L$,
$$(Lf)(n) = \sum_{m \in \mathbb{Z}} f(m)\,(L\delta_m)(n) = \sum_{m \in \mathbb{Z}} f(m)\,(L\delta)_m(n) = \sum_{m \in \mathbb{Z}} f(m)\,(L\delta)(n - m).$$
If we let $h := L\delta$, we see that $Lf = f * h$. The kernel $h$ is called the impulse response of $L$, and we conclude that every time-invariant map can be written as convolution with its impulse response. It is easy to check that the converse is true, and so a map is time-invariant if and only if it is convolution with a fixed function.

We saw in the continuous-time case that complex exponentials are eigenvectors of time-invariant operators. The same is true in the discrete setting, but now the exponentials are evaluated only at integers. Indeed, write $e_\omega(n) = e^{i\omega n}$, $n \in \mathbb{Z}$. For a time-invariant map $L$,
$$(Le_\omega)(n) = (h * e_\omega)(n) = \sum_{m \in \mathbb{Z}} h(m)\,e^{i\omega(n - m)} = e^{i\omega n}\sum_{m \in \mathbb{Z}} h(m)\,e^{-i\omega m} = e_\omega(n)\sum_{m \in \mathbb{Z}} h(m)\,\overline{e_\omega(m)}.$$
We once more have a transfer function
$$\hat{h}(\omega) := \langle h, e_\omega \rangle = \sum_{m \in \mathbb{Z}} h(m)\,\overline{e_\omega(m)} = \sum_{m \in \mathbb{Z}} h(m)\,e^{-i\omega m},$$
defined to return the eigenvalue for the eigenfunction $e_\omega$:
$$Le_\omega = \hat{h}(\omega)\,e_\omega.$$
We call $\hat{h}$ the Fourier series of $h$. In fact, this is just a special case of the Fourier transform. Considering the discretization
$$f_d(t) = \sum_{n \in \mathbb{Z}} f(n)\,\delta(t - n)$$
of a continuous-time signal $f(t)$, we have
$$\hat{f}_d(\omega) = \sum_{n \in \mathbb{Z}} f(n)\,e^{-i\omega n},$$
which is the Fourier series of the discrete-time signal $\{f(n)\}_{n \in \mathbb{Z}}$.
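The eigenvalue relation $Le_\omega = \hat{h}(\omega)\,e_\omega$ can be checked directly for any finitely supported impulse response. The sketch below uses a hypothetical three-tap averaging filter (an arbitrary choice, not one appearing in the notes), applies $L$ via the discrete convolution defined above, and compares the output with $\hat{h}(\omega)\,e_\omega$.

```python
import numpy as np

# A hypothetical impulse response: three equal taps supported on m in {-1, 0, 1}.
support = np.array([-1, 0, 1])
h = np.array([1/3, 1/3, 1/3])

def apply_L(x, n):
    """Time-invariant map Lx = x * h:  (Lx)(n) = sum_m h(m) x(n - m)."""
    return sum(hm * x(n - m) for m, hm in zip(support, h))

def h_hat(omega):
    """Transfer function (Fourier series of h):  sum_m h(m) exp(-i omega m)."""
    return np.sum(h * np.exp(-1j * omega * support))

omega = 0.7                                  # any frequency; arbitrary choice
e_omega = lambda n: np.exp(1j * omega * n)   # discrete complex exponential

n = np.arange(-10, 11)
assert np.allclose(apply_L(e_omega, n), h_hat(omega) * e_omega(n))
# e_omega is an eigenvector of L with eigenvalue h_hat(omega), as claimed.
```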
4 Fourier series

What separates Fourier series from Fourier transforms is their periodicity. Since only integer values of $m$ are considered,
$$\hat{h}(\omega) = \sum_{m \in \mathbb{Z}} h(m)\,e^{-i\omega m}$$
is necessarily $2\pi$-periodic in $\omega$. This fact is one we already know, since we learned in Lecture 5 that discretization in time is equivalent to periodization in frequency.

Fourier transforms, of course, need not be periodic. After all, the inverse Fourier transform told us that every $L^2$ function is a superposition of complex exponentials with frequencies in $\mathbb{R}$. Once again drawing an analogy with the continuous case, we can ask: is every periodic $L^2$ function (with period $2\pi$) a superposition of exponentials with frequencies in $\mathbb{Z}$? Perhaps less surprising now, but no less amazing, the answer is yes. To properly develop this answer, we consider a periodic domain $S = \mathbb{R}/2\pi\mathbb{Z}$ (read "$\mathbb{R}$ modulo $2\pi\mathbb{Z}$"), which we call the circle. To say $f$ is a function on $S$ just means $f$ is $2\pi$-periodic: $f(t) = f(t + 2\pi)$ for every $t \in \mathbb{R}$.

Instead of working in the vector space $L^2(\mathbb{R})$, we now work in $L^2(S)$, the vector space of square-integrable functions on the circle:
$$L^2(S) = \left\{ f : S \to \mathbb{C} \;:\; \int_{-\pi}^{\pi} |f(t)|^2\,dt < \infty \right\}.$$
$L^2(S)$ is a Hilbert space under the inner product
$$\langle f, g \rangle = \int_{-\pi}^{\pi} \overline{f(t)}\,g(t)\,dt,$$
and complex exponentials with integer frequencies form a basis.

Theorem 1 (Riesz–Fejér). The sequence $\{e_n\}_{n \in \mathbb{Z}}$, where
$$e_n(t) = \frac{1}{\sqrt{2\pi}}\,e^{int},$$
is a complete orthonormal set for $L^2(S)$.

The Riesz–Fejér theorem is equivalent to Shannon's sampling theorem. By this, we mean that if we know Riesz–Fejér, then Shannon's theorem is an easy consequence, and vice versa. Since we proved Shannon's theorem, we will show how Riesz–Fejér follows easily from it. Historically, Riesz–Fejér came first.

Recall from linear algebra that if $\{v_1, v_2, \ldots, v_d\}$ is an orthonormal basis of a $d$-dimensional inner product space, any vector $v$ has the representation
$$v = \sum_{n=1}^{d} \langle v_n, v \rangle\,v_n.$$
The analogous statement is true for infinite-dimensional Hilbert spaces, namely that if $\{v_n\}_{n \in \mathbb{Z}}$ is a complete orthonormal set, then
$$v = \sum_{n \in \mathbb{Z}} \langle v_n, v \rangle\,v_n. \qquad (2)$$
So Theorem 1 states that for $f$ square-integrable on $[-\pi, \pi]$ with $f(-\pi) = f(\pi)$, we can write
$$f = \sum_{n \in \mathbb{Z}} c_n e_n \quad \text{with} \quad c_n = \langle e_n, f \rangle, \qquad (3)$$
which means
$$f(t) = \frac{1}{\sqrt{2\pi}} \sum_{n \in \mathbb{Z}} c_n e^{int} \quad \text{with} \quad c_n = \frac{1}{\sqrt{2\pi}} \int_{-\pi}^{\pi} f(t)\,e^{-int}\,dt.$$
Here the right-hand side of the first expression is the Fourier series of $f$, and we understand the equality to hold in the $L^2$ sense.
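As a numerical illustration of (3), the sketch below computes the coefficients $c_n$ for one concrete $2\pi$-periodic function, the sawtooth $f(t) = t$ on $(-\pi, \pi)$ (an arbitrary test case), by approximating the integral with the trapezoid rule, and confirms that the truncated series approaches $f$ in the $L^2$ sense as more terms are kept.

```python
import numpy as np

# A 2*pi-periodic test function (arbitrary choice): the sawtooth f(t) = t on (-pi, pi).
f = lambda t: t

# Dense grid on [-pi, pi], used both for the integrals and for measuring the L2 error.
t = np.linspace(-np.pi, np.pi, 4001)
f_vals = f(t)

def c(n):
    """c_n = <e_n, f> = (1 / sqrt(2 pi)) * integral_{-pi}^{pi} f(t) exp(-i n t) dt."""
    return np.trapz(f_vals * np.exp(-1j * n * t), t) / np.sqrt(2 * np.pi)

def partial_sum(N):
    """Truncated Fourier series (1 / sqrt(2 pi)) * sum_{|n| <= N} c_n exp(i n t) on the grid."""
    return sum(c(n) * np.exp(1j * n * t) for n in range(-N, N + 1)) / np.sqrt(2 * np.pi)

for N in (5, 20, 80):
    l2_error = np.sqrt(np.trapz(np.abs(partial_sum(N) - f_vals) ** 2, t))
    print(N, l2_error)   # decreases with N: convergence holds in L2, not pointwise at t = +/- pi
```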