Observational Astrophysics II

JOHAN BLEEKER
SRON Netherlands Institute for Space Research
Astronomical Institute Utrecht

FRANK VERBUNT
Astronomical Institute Utrecht

January 18, 2007

Contents

1 Radiation Fields
  1.1 Stochastic processes: distribution functions, mean and variance
  1.2 Autocorrelation and autocovariance
  1.3 Wide-sense stationary and ergodic signals
  1.4 Power spectral density
  1.5 Intrinsic stochastic nature of a radiation beam: Application of Bose-Einstein statistics
      1.5.1 Intermezzo: Bose-Einstein statistics
      1.5.2 Intermezzo: Fermi-Dirac statistics
  1.6 Stochastic description of a radiation field in the thermal limit
  1.7 Stochastic description of a radiation field in the quantum limit

2 Astronomical measuring process: information transfer
  2.1 Integral response function for astronomical measurements
  2.2 Time filtering
      2.2.1 Finite exposure and time resolution
      2.2.2 Error assessment in sample average μT
  2.3 Data sampling in a bandlimited measuring system: Critical or Nyquist frequency

3 Indirect Imaging and Spectroscopy
  3.1 Coherence
      3.1.1 The Visibility function
      3.1.2 Young's dual beam interference experiment
      3.1.3 The mutual coherence function
      3.1.4 Interference law for a partially coherent radiation field: the complex degree of coherence
  3.2 Indirect spectroscopy
      3.2.1 Temporal coherence
      3.2.2 Longitudinal correlation
  3.3 Indirect imaging
      3.3.1 Quasi-monochromatic point source: spatial response function of a two-element interferometer
      3.3.2 Quasi-monochromatic extended source: spatial or lateral coherence
      3.3.3 The Van Cittert-Zernike theorem
      3.3.4 Etendue of coherence
  3.4 Aperture synthesis
      3.4.1 Quasi-monochromatic point source: spatial response function (PSF) and optical transfer function (OTF) of a multi-element interferometer
      3.4.2 Earth Rotation Aperture Synthesis (ERAS)
      3.4.3 The Westerbork Radio Synthesis Telescope (WSRT)
      3.4.4 Maximum allowable bandwidth of a quasi-monochromatic source
      3.4.5 The general case: arbitrary baselines and a field of view in arbitrary direction

4 Radiation sensing: a selection
  4.1 General
  4.2 Heterodyne detection
  4.3 Frequency limited noise in the thermal limit: signal to noise ratio and limiting sensitivity
  4.4 Photoconductive detection
      4.4.1 Operation principle
      4.4.2 Responsivity of a photoconductor
      4.4.3 Temporal frequency response of a photoconductor
  4.5 Frequency limited noise in the quantum limit
      4.5.1 Frequency limited shot noise in the signal-photon limit
      4.5.2 Example: Shot noise in the signal-photon limit for a photoconductor
      4.5.3 Shot noise in the background-photon limit for a photoconductor: the normalized detectivity D*
  4.6 Charge Coupled Devices
      4.6.1 Operation principle
      4.6.2 Charge storage in a CCD
      4.6.3 Charge transport in a CCD
      4.6.4 Charge capacity and transfer speed in CCD structures
      4.6.5 Focal plane architectures
      4.6.6 Wavelength response of CCDs

5 Fitting observed data
  5.1 Errors
      5.1.1 Accuracy versus precision
      5.1.2 Computing the various distributions
  5.2 Error propagation
      5.2.1 Examples of error propagation
  5.3 Errors distributed as a Gaussian and the least squares method
      5.3.1 Weighted averages
      5.3.2 Fitting a straight line
      5.3.3 Non-linear models: Levenberg-Marquardt
      5.3.4 Optimal extraction of a spectrum
  5.4 Errors distributed according to a Poisson distribution and Maximum likelihood
      5.4.1 A constant background
      5.4.2 Constant background plus one source
  5.5 General Methods
      5.5.1 Amoebe
      5.5.2 Genetic algorithms
  5.6 Exercises

6 Looking for variability and periodicity
  6.1 Fitting sine-functions: Lomb-Scargle
  6.2 Period-folding: Stellingwerf
  6.3 Variability through Kolmogorov-Smirnov tests
  6.4 Fourier transforms
      6.4.1 The discrete Fourier transform
      6.4.2 From continuous to discrete
      6.4.3 The statistics of power spectra
      6.4.4 Detecting and quantifying a signal
  6.5 Exercises

Chapter 1   Radiation Fields

1.1 Stochastic processes: distribution functions, mean and variance

A stochastic process is defined as an infinite series of stochastic variables, one for each value of the time t. For a specific value of t, the stochastic variable X(t) possesses a certain probability density distribution. The numerical value of the stochastic variable X(t) at time t corresponds to a particular draw (outcome) from this probability distribution at time t. The time series of draws represents a single time function and is commonly referred to as a realisation of the stochastic process. The full set of all realisations is called the ensemble of time functions (see Fig. 1.1).

Figure 1.1: The ensemble of time functions of a stochastic process.

At each time t, the stochastic variable X(t) describing the stochastic process is distributed by a momentary cumulative distribution, called the first-order distribution:

    F(x; t) = P{X(t) ≤ x}                                              (1.1)

which gives the probability that the outcome at time t will not exceed the numerical value x. The probability density function, or first-order density, of X(t) follows from the derivative of F(x; t):

    f(x; t) ≡ ∂F(x; t)/∂x                                              (1.2)

This probability function may be a binomial, Poisson or normal (Gaussian) distribution.
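As a concrete illustration, the short Python sketch below draws a toy ensemble of realisations (for definiteness, Gaussian noise around a slowly varying mean, an assumption made purely for this example) and estimates the first-order distribution F(x; t) and density f(x; t) at one fixed time by averaging across the ensemble; all names in the fragment are illustrative.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

n_real, n_t = 5000, 256        # ensemble size, time samples per realisation
t = np.arange(n_t)

# Toy ensemble: Gaussian noise around a slowly varying mean, so that the
# first-order distribution F(x; t) genuinely depends on the time t.
ensemble = 2.0 * np.sin(2.0 * np.pi * t / n_t) + rng.normal(0.0, 1.0, size=(n_real, n_t))

t0 = 100                       # fixed time: look "vertically" across the ensemble
x_grid = np.linspace(-4.0, 6.0, 200)

# Empirical first-order (cumulative) distribution F(x; t0) = P{X(t0) <= x}, cf. eq. (1.1)
F_emp = np.array([np.mean(ensemble[:, t0] <= x) for x in x_grid])

# Empirical first-order density f(x; t0), an estimate of dF/dx, cf. eq. (1.2)
f_emp, bin_edges = np.histogram(ensemble[:, t0], bins=40, density=True)
```

Each row of ensemble is one realisation (one time function); fixing the column t0 and averaging over the rows is exactly the ensemble average used in the definitions above.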
For the determination of the statistical properties of a stochastic process it suffices in many applications to consider only certain averages, in particular the expected values of X(t) and of X²(t). The mean or average μ(t) of X(t) is the expected value of X(t) and is defined as

    μ(t) = E{X(t)} = ∫_{-∞}^{+∞} x f(x; t) dx                          (1.3)

The variance of X(t) is the expected value of the square of the difference of X(t) and μ(t) and is, by definition, equal to the square of the standard deviation:

    σ²(t) = E{(X(t) − μ(t))²} = E{X²(t)} − μ²(t)                       (1.4)

(in case of real functions X(t) and μ(t)).

1.2 Autocorrelation and autocovariance

This distribution can be generalized to higher-order distributions, for example the second order distribution of the process X(t) is the joint distribution

    G(x1, x2; t1, t2) = P{X(t1) ≤ x1, X(t2) ≤ x2}                      (1.5)

The corresponding second derivative follows from

    g(x1, x2; t1, t2) ≡ ∂²G(x1, x2; t1, t2) / (∂x1 ∂x2)                (1.6)

The autocorrelation R(t1, t2) of X(t) is the expected value of the product X(t1)·X(t2), or X(t1)·X*(t2) if we deal with complex quantities:

    R(t1, t2) = E{X(t1)·X*(t2)} = ∫_{-∞}^{+∞} ∫_{-∞}^{+∞} x1 x2* g(x1, x2; t1, t2) dx1 dx2     (1.7)

with the asterisk (*) indicating the complex conjugate in case of complex variables (see Fig. 1.2). The value of R(t1, t2) on the diagonal t1 = t2 = t represents the average power of the signal at time t:

    R(t) = R(t, t) = E{X²(t)} = E{|X(t)|²}                             (1.8)

Figure 1.2: The autocorrelation of a stochastic process.

Since the average of X(t) is in general not zero, another quantity is introduced, the autocovariance C(t1, t2), which is centered around the averages μ(t1) and μ(t2):

    C(t1, t2) = E{(X(t1) − μ(t1))·(X(t2) − μ(t2))*}                    (1.9)

For t1 = t2 = t (verify yourself)

    C(t) = C(t, t) = R(t, t) − |μ(t)|² = σ²(t)                         (1.10)

C(t) represents the average power contained in the fluctuations of the signal around its mean value at time t.

1.3 Wide-sense stationary and ergodic signals

Next, consider a signal for which the average does not depend on time, and for which the autocorrelation only depends on the time difference τ ≡ t2 − t1. This is called a wide-sense stationary (wss) signal. The following relations hold:

    signal average      μ(t) = μ = constant                            (1.11)
    autocorrelation     R(t1, t2) = R(τ)                               (1.12)
    autocovariance      C(t1, t2) = C(τ) = R(τ) − μ²                   (1.13)

For τ = 0:

    R(0) = μ² + C(0) = μ² + σ²                                         (1.14)

which expresses that the total power in the signal equals the power in the average signal plus the power in its fluctuations around the average value. Note that both the autocorrelation and the autocovariance are even functions, i.e. R(−τ) = R(τ) and C(−τ) = C(τ). Finally, a wss-stochastic process X(t) is considered ergodic when the momentaneous