Aliasing, Image Sampling and Reconstruction


https://youtu.be/yr3ngmRuGUc
Many of the slides are taken from Thomas Funkhouser's course slides; the rest are from various sources over the web.

Recall: a pixel is a point…
• It is NOT a box, a disc or a teeny wee light
• It has no dimension and occupies no area
• It can have a coordinate
• More than a point, it is a SAMPLE

Image Sampling
• An image is a 2D rectilinear array of samples
• Quantization arises from limited intensity resolution; sampling arises from limited spatial and temporal resolution
• Pixels are infinitely small point samples

Imaging devices area-sample
• In a video camera, the CCD array takes an area integral over each pixel
• The eye: photoreceptors
J. Liang, D. R. Williams and D. Miller, "Supernormal vision and high-resolution retinal imaging through adaptive optics," J. Opt. Soc. Am. A 14, 2884-2892 (1997)

Aliasing: Good Old Aliasing
• A reconstruction artefact
Slide © Rosalee Nerheim-Wolfe

Sampling and Reconstruction
• Sampling, then reconstruction

Sources of Error
• Intensity quantization: not enough intensity resolution
• Spatial aliasing: not enough spatial resolution
• Temporal aliasing: not enough temporal resolution
• Under-sampling (Figure 14.17 FvDFH)

Aliasing (in general)
• In general: artifacts due to under-sampling or poor reconstruction
• Specifically, in graphics: spatial aliasing and temporal aliasing

Sampling & Aliasing
• The real world is continuous; the computer world is discrete
• Mapping a continuous function to a discrete one is called sampling
• Mapping a continuous variable to a discrete one is called quantization
• To represent or render an image using a computer, we must both sample and quantize

Spatial Aliasing
• Artifacts due to limited spatial resolution: "jaggies"
• Can be a serious problem… (compare aliased vs. anti-aliased renderings)
Slide © Rosalee Nerheim-Wolfe

Temporal Aliasing
• Artifacts due to limited temporal resolution: strobing, flickering

Staircasing or Jaggies
• The raster aliasing effect: a very serious problem! Its removal is called antialiasing
Slide © Rosalee Nerheim-Wolfe; images by Don Mitchell

Sampling and artifacts
• Blurring doesn't work well: it removes the jaggies, but also all the detail!
• Reduction in resolution
• Unweighted area sampling, weighted area sampling, and weighted area sampling with overlap (Figure 19.9 FvDFH)

Antialiasing: how is antialiasing done?
• Sample at a higher rate: not always possible, and doesn't always solve the problem
• Pre-filter to form a bandlimited signal (low-pass filter): trades aliasing for blurring, and must consider sampling theory!
• We need some mathematical tools to analyse the situation and find an optimum solution
• Tools we will use: the Fourier transform, convolution theory and sampling theory
• We need to understand the behavior of the signal in the frequency domain

Spectral Analysis / Fourier Transforms
• The spectral representation treats the function as a weighted sum of sines and cosines
• Every function has two representations: the spatial (time) domain is the normal representation, and the frequency domain is the spectral representation
• The Fourier transform converts between the spatial and frequency domains; it has real and imaginary components, and the forward and reverse transforms are very similar
• Recall the Euler formula: e^(it) = cos t + i sin t
• Forward transform: F(ω) = ∫ f(x) e^(−iωx) dx
• Inverse transform: f(x) = (1/2π) ∫ F(ω) e^(iωx) dω
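As an aside not found in the original slides, the short NumPy sketch below illustrates the two representations: it samples a signal built from two sinusoids (the frequencies and the sample count are arbitrary demo choices), moves to the frequency domain with the discrete Fourier transform, and then inverts the transform to recover the samples.

```python
import numpy as np

# Illustrative sketch: a 1D signal viewed in the spatial and frequency domains.
# The signal, sample count and frequencies are arbitrary choices for this demo.
n = 256                        # number of samples
x = np.linspace(0.0, 1.0, n, endpoint=False)
f = np.sin(2 * np.pi * 5 * x) + 0.5 * np.sin(2 * np.pi * 12 * x)   # spatial domain

F = np.fft.fft(f)              # frequency domain (complex: real and imaginary parts)
freqs = np.fft.fftfreq(n, d=1.0 / n)

# The spectrum is concentrated at the two frequencies present in the signal.
peaks = freqs[np.argsort(np.abs(F))[-4:]]
print(sorted(abs(int(p)) for p in peaks))      # -> [5, 5, 12, 12]

# The inverse transform recovers the original samples (up to rounding error).
f_back = np.fft.ifft(F).real
print(np.allclose(f, f_back))                  # -> True
```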
Sampling Theory
• How many samples are required to represent a given signal without loss of information?
• What signals can be reconstructed without loss for a given sampling rate?

Spectral Analysis
• Spatial domain: function f(x); filtering is convolution
• Frequency domain: function F(u); filtering is multiplication
• Any signal can be written as a sum of periodic functions

Fourier Transform
Figures 2.5 and 2.6, Wolberg

Sampling Theorem
• A signal can be reconstructed from its samples if the original signal has no frequencies above 1/2 the sampling frequency (Shannon)
• The minimum sampling rate for a bandlimited function is called the "Nyquist rate"
• A signal is bandlimited if its highest frequency is bounded; that frequency is called the bandwidth

Convolution
• Convolution of two functions (= filtering): g(x) = f(x) ∗ h(x) = ∫ f(λ) h(x − λ) dλ
• Convolution in the frequency domain is the same as multiplication in the spatial domain, and vice versa: f ∗ g ↔ F · G and F ∗ G ↔ f · g

Image Processing
• Image processing is a resampling problem

Antialiasing in Image Processing
• General strategy: pre-filter the transformed image via convolution with a low-pass filter to form a bandlimited signal, then resample
• Rationale: prefer blurring over aliasing

Filtering in the frequency domain
• Transform the image with the Fourier transform, apply the filter in the frequency domain, and transform back: Image → Fourier transform → filter → inverse Fourier transform → Image
• Lowpass filter and highpass filter

Low and High Pass Filtering
• Low pass vs. high pass

How can we represent sampling?
• Multiply the signal by a regular train of delta functions
• Sampling viewed in the frequency domain

Sampling: the Comb function
• The Fourier transform of a regular comb of delta functions is again a comb; its spacing is inversely proportional to the sample spacing
• Multiple copies of the spectrum appear at regularly increasing values of f

Reconstruction in the frequency domain
• The original signal is recovered with a bandpass filter, thanks to the regular array of pixels

Undersampling leads to aliasing
• The spectral copies are too close together and overlap, producing spurious components: the cause of aliasing

The Sampling Theorem
• This result is known as the Sampling Theorem and is due to Claude Shannon, who published it in 1949
• A signal can be reconstructed from its samples without loss of information if the original signal has no frequencies above 1/2 the sampling frequency
• For a given bandlimited function, the rate at which it must be sampled is called the Nyquist frequency
• Sampling at the Nyquist frequency: adequate sampling vs. inadequate sampling

Aliasing in the frequency domain
• Compare an aliased example with a perfect bandpass filter and no aliasing

How do we remove aliasing?
• Removing aliasing is called antialiasing
• The perfect solution is to prefilter with a perfect bandpass (low-pass) filter, but this is difficult or impossible to do in the frequency domain
• Equivalently, convolve with the sinc function in the space domain: the optimal filter, better than area sampling
• But the sinc function is infinite!! It is computationally expensive

Pre-filtering: the 'sinc' function
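Before turning to the sinc filter, here is a small sketch that makes the Nyquist condition concrete (this demo is not from the slides; the 9 Hz signal and the sampling rates are arbitrary demo values): sampled above the Nyquist rate the sinusoid keeps its frequency, while sampled below it the energy shows up at a spurious alias frequency.

```python
import numpy as np

# Illustrative sketch: sampling a sinusoid above and below the Nyquist rate.
# The signal frequency (9 Hz) and the sampling rates are arbitrary demo values.
f_signal = 9.0                          # Hz; the Nyquist rate is 2 * 9 = 18 Hz

def sample(rate_hz, n_samples):
    """Sample the 9 Hz sinusoid at the given rate for n_samples points."""
    t = np.arange(n_samples) / rate_hz
    return np.sin(2 * np.pi * f_signal * t)

# Adequate sampling: 50 Hz > 18 Hz, so the spectrum peaks at 9 Hz as expected.
s = sample(50.0, 50)                    # one second of samples
spectrum = np.abs(np.fft.rfft(s))
print(np.fft.rfftfreq(50, 1.0 / 50.0)[np.argmax(spectrum)])   # -> 9.0

# Inadequate sampling: 12 Hz < 18 Hz, so the 9 Hz signal masquerades as a
# spurious 12 - 9 = 3 Hz component, which no reconstruction can undo.
s = sample(12.0, 12)
spectrum = np.abs(np.fft.rfft(s))
print(np.fft.rfftfreq(12, 1.0 / 12.0)[np.argmax(spectrum)])   # -> 3.0
```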
The Sinc Filter
• The sinc filter is the Fourier transform of the square (box) function:
  ∫ square(x) e^(−iωx) dx = ∫ from −1/2 to 1/2 of e^(−iωx) dx
    = (1/(iω)) (e^(iω/2) − e^(−iω/2)) = sin(ω/2) / (ω/2) = sinc(ω/2)
• Recall Euler's formula: e^(it) = cos t + i sin t

Common Filters / Sample-and-Hold

Image Reconstruction
• Re-create a continuous image from the samples
• Example: a cathode ray tube reconstructs the image by displaying pixels with finite area (a Gaussian spot)

End…

Adjusting Brightness
• Simply scale the pixel components
• Must clamp to the valid range (e.g., 0 to 255)
• Original vs. brighter

Adjusting Contrast
• Compute the mean luminance L̄ over all pixels, with luminance = 0.30*r + 0.59*g + 0.11*b
• Scale the deviation from L̄ for each pixel component
• Must clamp to the valid range (e.g., 0 to 255)
• Original vs. more contrast

Image Processing
• Quantization: uniform quantization, random dither, ordered dither, Floyd-Steinberg dither
• Pixel operations: add random noise, add luminance, add contrast, add saturation
• Filtering: blur, detect edges
• Warping: scale, rotate, warps
• Combining: morphs, composite

Adjust Blurriness
• Convolve with a filter whose entries sum to one; each pixel becomes a weighted average of its neighbors
• Example blur filter (each entry divided by 16):
      1 2 1
      2 4 2
      1 2 1

Edge Detection
• Convolve with a filter that finds differences between neighbor pixels
• Example edge filter (entries sum to zero):
     -1 -1 -1
     -1  8 -1
     -1 -1 -1

Scaling
• Resample with a triangle or Gaussian filter to avoid aliasing
• Original, 1/4x resolution, 4x resolution

Summary
• Image processing is a resampling problem
• Avoid aliasing by filtering
• More on this next lecture!

Triangle Filter
• Convolution with a triangle filter; convolution with a Gaussian filter
Figure 2.4 Wolberg
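As a worked illustration of the blur and edge-detection filters above, here is a minimal sketch (not part of the slides; the 3x3 helper and the tiny test image are hypothetical demo choices) that applies both kernels to a grayscale image by direct 3x3 correlation, which coincides with convolution for these symmetric kernels.

```python
import numpy as np

def convolve3x3(image, kernel):
    """Apply a 3x3 kernel to a 2D grayscale image with zero padding at the borders.
    Implemented as cross-correlation, which equals convolution for symmetric kernels."""
    padded = np.pad(image.astype(float), 1, mode="constant")
    out = np.zeros(image.shape, dtype=float)
    for dy in range(3):
        for dx in range(3):
            out += kernel[dy, dx] * padded[dy:dy + image.shape[0], dx:dx + image.shape[1]]
    return out

# Blur: entries sum to one, so each pixel becomes a weighted average of its neighbors.
blur = np.array([[1, 2, 1],
                 [2, 4, 2],
                 [1, 2, 1]], dtype=float) / 16.0

# Edge detection: entries sum to zero, so flat regions map to zero and edges stand out.
edge = np.array([[-1, -1, -1],
                 [-1,  8, -1],
                 [-1, -1, -1]], dtype=float)

# Tiny test image: a bright square on a dark background (values are arbitrary).
img = np.zeros((8, 8))
img[2:6, 2:6] = 255.0

blurred = np.clip(convolve3x3(img, blur), 0, 255)
edges = np.clip(convolve3x3(img, edge), 0, 255)

print(img[3, 1], blurred[3, 1])    # 0.0 63.75 : blur bleeds across the square's boundary
print(edges[3, 3], edges[2, 2])    # 0.0 255.0 : zero in flat regions, strong at the edge
```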