Summary of Spectral Estimation


Examples from the research of Kyoung Hoon Lee, Aaron Hastings, Don Gallant, Shashikant More, and Weonchan Sung (Herrick graduate students).

Estimation: Bias, Variance and Mean Square Error

Let $\phi$ denote the quantity we are trying to estimate, and let $\hat{\phi}$ denote the result of an estimation based on one data set with N pieces of information. Each data set used for estimation yields a different estimate of $\phi$.

Bias: $b(\hat{\phi}) = \phi - E[\hat{\phi}]$, the true value minus the average of all possible estimates formed from N data points.

Variance: $\sigma^2 = E[(\hat{\phi} - E[\hat{\phi}])^2]$, a measure of the spread of the estimates about the mean of all estimates.

Mean square error: $\mathrm{m.s.e.} = E[(\hat{\phi} - \phi)^2] = b^2 + \sigma^2$.

Estimation: Some Definitions

An estimate is consistent if the mean square error is reduced when we use more data to form the estimate. If we have two ways of estimating the same thing, the estimator that leads to the smaller mean square error is said to be more efficient than the other.

(Figure: a cloud of estimates scattered about the mean of all estimates, which is offset by the bias from the true value $\phi = (a, b)$.)

Examples

Bias and variance of an estimate of the mean, $\hat{\mu} = \frac{1}{N}\sum_{n=1}^{N} X_n$:

$E[\hat{\mu}] = E\left[\frac{1}{N}\sum_{n=1}^{N} X_n\right] = \frac{1}{N}\sum_{n=1}^{N} E[X_n] = \frac{1}{N}\sum_{n=1}^{N} \mu = \mu \quad \text{(unbiased)}$

Derivation of the variance, assuming the samples $X_n$ are independent of one another:

$\sigma_{\hat{\mu}}^2 = E\left[(\hat{\mu} - E[\hat{\mu}])^2\right] = E\left[\left(\frac{1}{N}\sum_{n=1}^{N} X_n - \mu\right)^2\right] = \frac{1}{N^2}\, E\left[\sum_{n=1}^{N}\sum_{m=1}^{N} (X_m - \mu)(X_n - \mu)\right]$

Separating this into the $(N^2 - N)$ terms where $n \neq m$ and the $N$ terms where $n = m$, and using independence (so $E[(X_n - \mu)(X_m - \mu)] = E[X_n - \mu]\, E[X_m - \mu] = 0$ for $n \neq m$):

$\sigma_{\hat{\mu}}^2 = \frac{1}{N^2}\left\{(N^2 - N)\, E[X_n - \mu]\, E[X_m - \mu] + N\, E\left[(X_n - \mu)^2\right]\right\} = \frac{1}{N^2}\, N\, E\left[(X_n - \mu)^2\right] = \frac{\sigma_x^2}{N}$

Examples (continued)

Biased estimate of the variance of a set of N measurements:

$\frac{1}{N}\sum_{n=1}^{N} (X_n - \hat{\mu})^2$

Unbiased estimates of the variance of a set of N measurements:

$\frac{1}{N-1}\sum_{n=1}^{N} (X_n - \hat{\mu})^2 \quad \text{and} \quad \frac{1}{N}\sum_{n=1}^{N} (X_n - \mu)^2$

In the first, the mean is estimated and that estimate is used in the calculation, so one degree of freedom has been lost; the second is the special case where the mean is known and does not need to be estimated from the data.

Estimation of Autocovariance Functions

Two methods of estimating $R_{xx}(\tau)$ from T seconds of data (i.e., of calculating the average value of $x(t)\,x(t+\tau)$):

1. Dividing by the integration time, $T - |\tau|$: the estimate is unbiased but has very high variance, particularly when $\tau$ is close to T.
2. Dividing by the total time, T: the estimate is biased (asymptotically unbiased). This is equivalent to multiplying the first estimate by a triangular window $(T - |\tau|)/T$, which attenuates the high-variance estimates.

(Figure: records of $x(t)$ and $x(t+\tau)$, overlapping over $T - |\tau|$ of the T seconds of data.)

Estimation of Cross Covariance

The same issues arise as for autocovariance: the bigger $\tau$, the less averaging is possible for a finite record length T.
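The equivalence between the two autocovariance normalizations and the triangular window can be checked numerically. A minimal discrete-time sketch (not from the original notes; numpy only, with the record length N standing in for T seconds):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 1024                      # number of samples, standing in for T seconds
x = rng.normal(size=N)        # a zero-mean record

def autocov(x, k, divisor):
    """Lag-k autocovariance estimate of a zero-mean record, with a chosen
    normalization (the discrete analogue of dividing by T-|tau| or by T)."""
    prods = x[:len(x) - k] * x[k:]
    return prods.sum() / divisor

k = 100
r_unbiased = autocov(x, k, N - k)   # Method 1: divide by the integration time
r_biased = autocov(x, k, N)         # Method 2: divide by the total time

# Method 2 equals Method 1 multiplied by the triangular window (N-|k|)/N.
print(np.isclose(r_biased, r_unbiased * (N - k) / N))
```

The identity holds lag by lag, which is exactly the triangular-window statement above: the total-time estimate tapers the high-variance large-lag values toward zero.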
(Figure: records of $x(t)$ and $y(t)$ over T seconds, offset by the lag $\tau$.)

Here $x(t)$ and $y(t)$ are zero-mean, weakly stationary random processes, and we calculate the average value of $x(t)\,y(t+\tau)$. An additional problem: T must be made large enough to accommodate any system delays.

Estimation of Covariance via Spectra

With fast computation of spectra, covariance functions are now more usually estimated by inverse Fourier transforming the power and cross spectral density estimates. The inverse transform of a raw PSD or CSD estimate is equivalent to Method 2 above (dividing by the total time, i.e., the triangular window) applied to a record of length $T_r$.

Power Spectral Density Estimation

Definition:

$S_{xx}(f) = \lim_{T \to \infty} E\left[\frac{X_T^* X_T}{T}\right] = \int_{-\infty}^{+\infty} R_{xx}(\tau)\, e^{-j 2\pi f \tau}\, d\tau$

Estimation:

1. We could Fourier transform the autocorrelation function estimate (not computationally efficient).
2. We could use the frequency-domain definition directly:

$\text{Raw estimate} = \hat{S}_{xx}(f) = \frac{X_T^* X_T}{T}$

No averaging! This has extremely poor variance characteristics: the variance is $S_{xx}^2(f)$ and is unaffected by T, the length of data used.

Power Spectral Density Estimation (Continued)

Smoothed estimate from segment averaging:

1. Break the signal into $N_{seg}$ segments, each $T_r$ seconds long.
2. For each segment:
   a. Apply a window $w(t)$ to smooth the transitions at the ends of the segment.
   b. Fourier transform the windowed segment to obtain $X_{T_r}(f)$.
   c. Calculate a raw power spectral density estimate $|X_{T_r}(f)|^2 / T_r$.
3. Average the results from the segments to get the smoothed estimate, with a power compensation for the window used:

$\tilde{S}_{xx}(f) = \frac{1}{N_{seg}\, w_{comp}} \sum_{i=1}^{N_{seg}} \hat{S}_{xx_i}(f), \qquad w_{comp} = \frac{1}{T_r} \int w^2(t)\, dt$

Overlap: for some windows, segment overlap makes sense. With a Hann window and 50% overlap, data de-emphasized in one windowed segment is strongly emphasized in the next (and vice versa).

Bias: the bias of the PSD estimate is controlled by the window length $T_r$, which sets the frequency resolution $1/T_r$.
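The segment-averaging recipe can be sketched in a few lines. This is a hypothetical helper, not the original authors' code; it assumes a Hann window with 50% overlap, numpy's FFT conventions, and a one-sided PSD normalized so that its integral over frequency equals the signal power:

```python
import numpy as np

def smoothed_psd(x, fs, seg_len):
    """Segment-averaged PSD estimate following the recipe above: window each
    segment, form a raw |X|^2 / T estimate, average over the segments, and
    compensate for the power of the window."""
    w = np.hanning(seg_len)                   # Hann window
    wcomp = np.mean(w ** 2)                   # discrete (1/T) * integral w^2 dt
    step = seg_len // 2                       # 50% overlap
    raw = []
    for start in range(0, len(x) - seg_len + 1, step):
        seg = w * x[start:start + seg_len]
        X = np.fft.rfft(seg)
        raw.append(np.abs(X) ** 2 / (fs * seg_len))   # raw one-segment PSD
    S = np.mean(raw, axis=0) / wcomp          # average + window compensation
    S[1:-1] *= 2                              # one-sided: fold negative freqs
    return np.fft.rfftfreq(seg_len, 1 / fs), S

# Sanity check: unit-variance white noise should integrate to total power ~1.
rng = np.random.default_rng(1)
fs = 1000.0
f, S = smoothed_psd(rng.normal(size=1 << 16), fs, seg_len=1024)
total_power = np.sum(S) * (f[1] - f[0])       # close to 1 for this input
```

With a fixed record, shortening `seg_len` gives more segments (lower variance) but coarser frequency resolution (more bias), which is the trade-off discussed below.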
A larger window gives smoother transitions, hence less power leakage and less bias.

Power Spectral Density (PSD) Estimation (Continued)

We argued that the distribution of the smoothed PSD estimate is related to that of a chi-squared random variable $\chi_\nu^2$ with $\nu = 2 N_{seg}$ degrees of freedom, provided $T_r$ is large enough that bias errors can be ignored. Since the variance of a $\chi_\nu^2$ variable is $2\nu$:

$\text{Variance}\left[\frac{2 N_{seg}\, \tilde{S}_{xx}}{S_{xx}}\right] = \frac{4 N_{seg}^2}{S_{xx}^2}\, \text{Variance}\left[\tilde{S}_{xx}\right] = 2\,(2 N_{seg})$

and rearranging:

$\text{Variance}\left[\tilde{S}_{xx}\right] = \frac{S_{xx}^2}{N_{seg}}$

Therefore we can control the variance by averaging more segments. Note that shorter segments mean larger bias, so for a fixed T seconds of data there is a trade-off between the segment length $T_r$, which controls the bias, and the number of segments $N_{seg}$, which controls the variance: $T = T_r\, N_{seg}$.

Cross Spectral Density (CSD)

Definition:

$S_{xy}(f) = \lim_{T \to \infty} E\left[\frac{X_T^* Y_T}{T}\right] = \int_{-\infty}^{+\infty} R_{xy}(\tau)\, e^{-j 2\pi f \tau}\, d\tau$

Estimation: we could Fourier transform the cross-correlation function estimate (not computationally efficient), or use the frequency-domain definition directly:

$\text{Raw estimate} = \hat{S}_{xy}(f) = \frac{X_T^* Y_T}{T}$

As with the PSD, this has extremely poor variance characteristics, so: divide the time histories into segments, generate a raw estimate from each windowed segment, and average to reduce the variance and produce a smoothed estimate:

$\hat{S}_{xy_i}(f) = \frac{X_{T_r}^*(f)\, Y_{T_r}(f)}{T_r} \quad \text{(raw estimate from the } i\text{-th segment)}, \qquad \tilde{S}_{xy}(f) = \frac{1}{N_{seg}} \sum_{i=1}^{N_{seg}} \hat{S}_{xy_i}(f)$

Issues with Cross Spectral Density Estimates

1. Reduce bias by choosing the segment length $T_r$ as large as possible. (Bias is greatest where the phase changes rapidly.)
2. Reduce variance by averaging many segments.
3. A large amount of averaging may be required to reduce noise effects. Consider a measured output $y_m(t) = y(t) + n(t) = h(t) * x(t) + n(t)$, where $x(t)$ and $n(t)$ are zero-mean, weakly stationary, uncorrelated random processes. Then

$\tilde{S}_{x y_m} \approx H(f)\, \tilde{S}_{xx} + \tilde{S}_{xn} \to H(f)\, \tilde{S}_{xx}, \qquad \text{SNR} = \frac{S_{yy}}{S_{n_y n_y}} = \frac{|H(f)|^2\, S_{xx}}{S_{n_y n_y}}$

since the cross spectrum between $x$ and the uncorrelated noise averages toward zero, and

$\text{Var}\{\tilde{S}_{xy}\} \propto \frac{1}{N_{seg}} \left[1 + \frac{1}{\gamma_{xy}^2}\right]$

4. Time delays between x and y cause problems if the delay $t_0$ is greater than a small fraction of the segment length $T_r$. We can estimate $t_0$ and offset the y segments accordingly, but this requires $T + t_0$ seconds of data.

Cross Spectral Density Estimation: Segment Averaging with System Delays

Offsetting the y segments by the estimated delay essentially removes most of the delay from the estimated frequency response function. The delay effects can be put back in by multiplying the estimate of H(f) by $e^{-j 2\pi f \hat{t}_0}$.

Coherence Function Estimation

Definition, and estimate formed by substituting the smoothed spectral density estimates:

$\gamma_{xy}^2 = \frac{|S_{xy}|^2}{S_{xx} S_{yy}}; \qquad \tilde{\gamma}_{xy}^2 = \frac{|\tilde{S}_{xy}|^2}{\tilde{S}_{xx}\, \tilde{S}_{yy}}$

Coherence takes values in the range 0 to 1.

- Substituting raw (unaveraged) spectral density estimates into the formula gives 1 at all frequencies, so a measured coherence equal to 1 at all frequencies should be treated with a high degree of suspicion.
- The estimate is highly sensitive to bias in the spectral density estimates, which is particularly bad where the phase of the cross spectral density changes rapidly (at maxima and minima of $|S_{xy}|$).
- Coherence drops toward 0 because of: noise on the input and output, nonlinearity, and bias errors in estimation.

Example: a system with some nonlinearities (cubic stiffness) and noisy measurements. The nonlinearity spreads energy around the frequency of the nonlinear mode and around 3x and 5x that frequency, and causes broad dips in the coherence function; poor output signal-to-noise ratio also pulls the coherence down.
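The warning about raw estimates can be demonstrated directly: with no averaging the coherence estimate is identically 1 at every frequency, while segment averaging recovers a meaningful value. A minimal numpy sketch (assumed setup, not from the notes: output = input plus independent noise of equal power, so the true coherence is 0.5):

```python
import numpy as np

rng = np.random.default_rng(2)
seg_len, nseg = 256, 64
w = np.hanning(seg_len)

Sxx = Syy = Sxy = 0.0
for k in range(nseg):
    x = rng.normal(size=seg_len)
    y = x + rng.normal(size=seg_len)      # output = input + independent noise
    X, Y = np.fft.rfft(w * x), np.fft.rfft(w * y)
    sxx, syy, sxy = np.abs(X) ** 2, np.abs(Y) ** 2, np.conj(X) * Y
    if k == 0:
        # From a single raw estimate, |Sxy|^2 = Sxx * Syy identically,
        # so the coherence "estimate" is 1 at every frequency.
        gamma2_raw = np.abs(sxy) ** 2 / (sxx * syy)
    Sxx, Syy, Sxy = Sxx + sxx, Syy + syy, Sxy + sxy

# The segment-averaged (smoothed) estimate recovers the true coherence,
# here 0.5 at all frequencies for equal signal and noise power.
gamma2 = np.abs(Sxy) ** 2 / (Sxx * Syy)
```

Averaging matters because the raw cross spectrum has the same random phase and magnitude fluctuations as the raw autospectra; only after averaging do the incoherent parts of $S_{xy}$ cancel while the coherent parts add.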