Advanced Cosmology : Primordial Non-Gaussianities
Eugene A. Lim

This is a set of lectures given to the Cambridge DAMTP Maths Tripos 2012 Advanced Cosmology class (Lent Term). What it really needs right now are figures.

I. GAUSSIAN RANDOM FIELDS AND CHARACTERIZATION OF NON-GAUSSIAN FIELDS

Consider a field f(x) living on some space S, where x is a coordinate on S. We can Fourier transform this:

    f(\mathbf{x}) = \int \frac{d^3k}{(2\pi)^3} f(\mathbf{k}) e^{i\mathbf{k}\cdot\mathbf{x}}, \qquad f(\mathbf{k}) = \int d^3x \, f(\mathbf{x}) e^{-i\mathbf{k}\cdot\mathbf{x}},    (1)

where, without loss of generality, we can parameterize the Fourier coefficients as

    f(\mathbf{k}) = a_\mathbf{k} + i b_\mathbf{k},    (2)

with amplitude |f(\mathbf{k})| = \sqrt{a_\mathbf{k}^2 + b_\mathbf{k}^2}. Reality of f(\mathbf{x}) imposes a_\mathbf{k} = a_{-\mathbf{k}} and b_\mathbf{k} = -b_{-\mathbf{k}}. In words, for a given configuration in real space f(x), a set of real numbers (a_\mathbf{k}, b_\mathbf{k}), parameterized by the vector \mathbf{k}, describes it completely and uniquely. Different configurations of f(x) are described by different sets of numbers.

Suppose now we want to randomly generate a field configuration f(x). One way to do it is to prescribe a probability distribution function for the set (a_\mathbf{k}, b_\mathbf{k}). If, furthermore, we want f(x) to be a Gaussian random field configuration, this puts a tight constraint on the PDF for (a_\mathbf{k}, b_\mathbf{k}). Let's begin by studying the PDF for one particular mode \mathbf{k}.

Definition : A Gaussian distribution for a mode is one such that a_\mathbf{k} and b_\mathbf{k} are drawn from the following Gaussian distribution in k-space, with zero mean and k-dependent variance \sigma_k^2:

    P(a_\mathbf{k}, b_\mathbf{k}) = \frac{1}{\pi \sigma_k^2} \exp\left[ -\frac{a_\mathbf{k}^2 + b_\mathbf{k}^2}{\sigma_k^2} \right],    (3)

where we have normalized this distribution such that

    \int_{-\infty}^{\infty} da_\mathbf{k} \int_{-\infty}^{\infty} db_\mathbf{k} \, \frac{1}{\pi \sigma_k^2} \exp[-|f(\mathbf{k})|^2/\sigma_k^2] = 1.    (4)

Note that the integral is over all possible values of a_\mathbf{k} and b_\mathbf{k} for the mode \mathbf{k}. The double Gaussian integral results in the normalization 1/\pi\sigma_k^2 instead of 1/\sqrt{\pi}\sigma_k.

Generalizing to all the modes, the formal definition of the PDF is via the functional

    P[f(\mathbf{k})] = \frac{1}{\pi \sigma_k^2} \exp[-|f(\mathbf{k})|^2/\sigma_k^2].    (5)

We have made quite a large leap between Eqn. (3) and Eqn. (5), so let's decode it a bit.
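The prescription of Eqn. (3) translates directly into a numerical recipe: draw a_\mathbf{k} and b_\mathbf{k} as zero-mean Gaussians with variance \sigma_k^2/2 each, so that \langle |f(\mathbf{k})|^2 \rangle = \sigma_k^2. A minimal 1D sketch (assuming NumPy; the power-law choice of \sigma_k^2 is purely illustrative, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)

n = 256                                  # grid points in a 1D toy model
k = np.fft.fftfreq(n) * n                # integer wavenumbers, positive and negative
kmag = np.abs(k)
sigma2 = np.where(kmag > 0, 1.0 / np.maximum(kmag, 1.0)**3, 0.0)  # illustrative sigma_k^2

# The FFT of real white noise automatically has Gaussian a_k, b_k obeying
# the reality conditions a_k = a_{-k}, b_k = -b_{-k}; multiplying by
# sigma_k then sets the per-mode variance, exactly as in Eq. (3).
white = rng.normal(size=n)
fk = np.fft.fft(white) * np.sqrt(sigma2)
fx = np.fft.ifft(fk).real                # a real-space Gaussian random field
```

Filtering white noise this way sidesteps having to enforce the conjugate symmetry of f(\mathbf{k}) by hand.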
A functional, as you may have learned, is a map that eats a function and spits out a scalar. Eqn. (5) is just a high-brow way of saying that all the fields f(\mathbf{k}) are drawn from a distribution that obeys Eqn. (3).

Notice that, in general, we can have a directional dependence in the variance, i.e. \sigma_\mathbf{k}^2 instead of \sigma_k^2. But if we assume statistical isotropy, the variance becomes just a function of the amplitude k = |\mathbf{k}|. This also, of course, means that we can have non-isotropic but still Gaussian random fields.

Now we encounter the first instance of the much-abused angled brackets: \langle \, \rangle. What is \langle Q \rangle? Here what we are doing is "taking the expectation value" of some observable Q over an ensemble of possible realizations of this observable. Consider the following simple example. A plane drops many parachutists from the sky, who will land on a line x on the ground. The parachutists are scattered according to some normalized distribution \rho(x). The expectation value of x is then defined to be

    \langle x \rangle = \int_{-\infty}^{\infty} x \rho(x) \, dx.    (6)

In words, the expectation value is the average of a large (formally infinite) number of drawings from this distribution, i.e. our "best guess" value. If you like, for example, \rho(x) could be some normalized Gaussian distribution with mean x_0 and width set by \sigma:

    \rho(x) = \sqrt{\frac{1}{\pi \sigma^2}} \exp[-(x - x_0)^2/\sigma^2].    (7)

We can also calculate the expectation value of any function of x, say g(x), i.e.

    \langle g(x) \rangle = \int_{-\infty}^{\infty} g(x) \rho(x) \, dx.    (8)

Given a functional distribution P[\phi(\mathbf{k})] for some scalar field \phi(\mathbf{k}), we can generalize the idea: the expectation value of some functional of a field configuration \phi(\mathbf{k}), Q[\phi(\mathbf{k})], is given by (remember, a functional spits out a scalar)

    \langle Q[\phi(\mathbf{k})] \rangle = \int D\phi \, Q[\phi(\mathbf{k})] P[\phi(\mathbf{k})],    (9)

where the integral D\phi is over all possible field configurations in k-space. (Of course, here we are working in Fourier space; in general it can be any space that \phi lives in, including real/configuration space x.)
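The "average of a large number of drawings" reading of Eqns. (6) and (8) can be checked by brute-force Monte Carlo: sample \rho(x) many times and average. A small sketch (assuming NumPy; note that with the convention of Eqn. (7) the Gaussian has variance \sigma^2/2, not \sigma^2):

```python
import numpy as np

rng = np.random.default_rng(42)

x0, sigma = 3.0, 1.5
# With rho(x) ~ exp[-(x - x0)^2 / sigma^2] as in Eq. (7), the
# distribution is Gaussian with mean x0 and variance sigma^2 / 2.
samples = rng.normal(x0, sigma / np.sqrt(2.0), size=200_000)

mean_x = samples.mean()         # Monte Carlo estimate of <x>   -> x0
mean_x2 = (samples**2).mean()   # estimate of <x^2> -> x0^2 + sigma^2/2
```

Here g(x) = x^2 plays the role of the generic observable in Eqn. (8); its exact expectation is x_0^2 + \sigma^2/2 = 10.125 for these numbers.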
With all these definitions, we can now define the notion of a Gaussian random field: Let P[f(\mathbf{k})] be a PDF of the form Eqn. (5). Such a distribution is called Gaussian random, and such a field f(\mathbf{k}) is called a Gaussian random field. The expectation value of some functional Q[f(\mathbf{k})] is given by

    \langle Q[f(\mathbf{k})] \rangle = \prod_\mathbf{k} \int da_\mathbf{k} \int db_\mathbf{k} \, Q[f(\mathbf{k})] \, \frac{1}{\pi \sigma_k^2} \exp[-(a_\mathbf{k}^2 + b_\mathbf{k}^2)/\sigma_k^2].    (10)

Note that the integral over all possible configurations has become \int Df \to \prod_\mathbf{k} \int da_\mathbf{k} \int db_\mathbf{k}, since a_\mathbf{k} and b_\mathbf{k} parameterize the configurations. Sometimes you hear the words "both the amplitudes and the phases of f(\mathbf{k}) are drawn from a Gaussian PDF with variance \sigma_k^2". Now you can see this is clearly wrong: |f(\mathbf{k})|^2 = a_\mathbf{k}^2 + b_\mathbf{k}^2, so for each mode in a GRF the amplitude is drawn from the Gaussian weight \exp[-|f(\mathbf{k})|^2/\sigma_k^2], while the phase is essentially drawn from a flat distribution. This is kind of obvious, since you can't really define a Gaussian (with infinite support) over a compact phase, but these words are uttered with such religious fervor that we tend to repeat them without thinking.

Despite the scary-looking Eqn. (10), with all its integrals and product sums, it is actually very easy to use. Consider the simple example Q[f(\mathbf{k})] = a_\mathbf{q} a_{\mathbf{q}'}; then

    \langle a_\mathbf{q} a_{\mathbf{q}'} \rangle = \prod_\mathbf{k} \int da_\mathbf{k} \int db_\mathbf{k} \, a_\mathbf{q} a_{\mathbf{q}'} \, \frac{1}{\pi \sigma_k^2} \exp[-(a_\mathbf{k}^2 + b_\mathbf{k}^2)/\sigma_k^2]
                                                = \frac{\sigma_q^2}{2} \delta(\mathbf{q} + \mathbf{q}') + (\mathbf{q}' \leftrightarrow -\mathbf{q}').    (11)

The point is that since the distribution is even under both a_\mathbf{k} \to -a_\mathbf{k} and b_\mathbf{k} \to -b_\mathbf{k}, all odd products of a's and b's vanish, while, except for the modes \mathbf{q} and \mathbf{q}', the rest are simply trivial Gaussian integrals. Even-ness means that \mathbf{q} = -\mathbf{q}' gives us the delta function (recall that a_\mathbf{k} = a_{-\mathbf{k}} by reality, so \mathbf{q} = \mathbf{q}' contributes the second term).
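The statements extracted from Eqn. (10) (odd products vanish, distinct modes decorrelate, \langle a_\mathbf{k}^2 \rangle = \sigma_k^2/2, flat phases) are easy to verify numerically by replacing the ensemble average with an average over many draws. A sketch, assuming NumPy, for a single mode with an arbitrarily chosen \sigma_k^2 = 2:

```python
import numpy as np

rng = np.random.default_rng(1)

sigma2 = 2.0                  # sigma_k^2 for a single mode (illustrative)
n = 500_000                   # size of the ensemble of realizations
a = rng.normal(0.0, np.sqrt(sigma2 / 2), n)    # a_k: variance sigma_k^2 / 2
b = rng.normal(0.0, np.sqrt(sigma2 / 2), n)    # b_k: variance sigma_k^2 / 2
a2 = rng.normal(0.0, np.sqrt(sigma2 / 2), n)   # a_{k'} for an unrelated mode

same = (a * a).mean()     # <a_k a_k>   -> sigma_k^2 / 2 = 1
cross = (a * a2).mean()   # <a_k a_k'>  -> 0 when k' != +-k
odd = (a**3).mean()       # odd products vanish by evenness of the PDF
phases = np.arctan2(b, a) # the phase of f(k): flat on (-pi, pi], mean ~ 0
```

The delta functions in Eqn. (11) are the continuum version of `same` versus `cross`: only a mode paired with itself (or its reality-conjugate partner) survives.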
Let's do a simple but important example: what is the expectation of the two-point correlation function[1] Q[f(\mathbf{k})] = f(\mathbf{k}) f(\mathbf{k}')?

    \langle f(\mathbf{k}) f(\mathbf{k}') \rangle = \langle a_\mathbf{k} a_{\mathbf{k}'} \rangle - \langle b_\mathbf{k} b_{\mathbf{k}'} \rangle = \sigma_k^2 \delta(\mathbf{k} + \mathbf{k}'),    (12)

where we have used the fact that the cross terms vanish, reality imposes b_\mathbf{k} = -b_{-\mathbf{k}}, and

    \langle a_\mathbf{k} a_{\mathbf{k}'} \rangle = \langle b_\mathbf{k} b_{-\mathbf{k}'} \rangle = \frac{\sigma_k^2}{2} \delta(\mathbf{k} + \mathbf{k}').    (13)

[1] Recall that the expectation value of any variable x, or any function of variables f(x), given its PDF P(x), is \langle f(x) \rangle = \int dx \, f(x) P(x).

This definition carries through to configuration space: to compute something like the power spectrum \langle f(\mathbf{x}) f(\mathbf{x}') \rangle, Fourier transform f(\mathbf{x}) and plug into Eqn. (10). Let's do a famous example:

    \langle f(\mathbf{x}_1) f(\mathbf{x}_2) \rangle = \int \frac{d^3k_1}{(2\pi)^3} \frac{d^3k_2}{(2\pi)^3} e^{i\mathbf{k}_1 \cdot \mathbf{x}_1 + i\mathbf{k}_2 \cdot \mathbf{x}_2} \left( \langle a_{\mathbf{k}_1} a_{\mathbf{k}_2} \rangle - \langle b_{\mathbf{k}_1} b_{\mathbf{k}_2} \rangle \right)    (14)
                                                    = \int \frac{d^3k_1}{(2\pi)^3} \frac{d^3k_2}{(2\pi)^3} e^{i\mathbf{k}_1 \cdot \mathbf{x}_1 + i\mathbf{k}_2 \cdot \mathbf{x}_2} \, \delta(\mathbf{k}_1 + \mathbf{k}_2) \, \sigma_{k_1}^2    (15)
                                                    = \int \frac{d^3k}{(2\pi)^3} e^{i\mathbf{k} \cdot (\mathbf{x}_1 - \mathbf{x}_2)} \frac{\sigma_k^2}{(2\pi)^3},    (16)

which is of course the two-point correlation function in real space. You probably have seen Eqn. (16): it is the definition of the power spectrum, P(k) \equiv \sigma_k^2/(2\pi)^3, via the two-point correlation function. \sigma_k^2 is a k-dependent power spectrum, and tells us about the amplitude of the correlations at k. Let us state a few famous true facts using our high-brow tool:

• Scale invariance : A rescaling is defined to be \mathbf{x} \to \lambda \mathbf{x}, where \lambda > 0 is some constant. A scale-invariant power spectrum then obeys the following relation:

    \langle f(\mathbf{x}) f(\mathbf{x}') \rangle = \langle f(\lambda \mathbf{x}) f(\lambda \mathbf{x}') \rangle.    (17)

Using \delta(\lambda(\mathbf{k} + \mathbf{k}')) = \lambda^{-3} \delta(\mathbf{k} + \mathbf{k}'), it follows that for a power spectrum to be scale invariant, it must obey

    P(k) = \frac{\sigma_k^2}{(2\pi)^3} \propto \frac{1}{k^3}.    (18)

This should be familiar to you.

• White noise : On the other hand, if \sigma_k^2 = const for all k, the spectrum is known as white noise. Note it is not scale invariant!

• Correlations vs Gaussianity : Notice that Gaussianity is not a statement about two-point correlations!
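The chain from realizations to power spectrum can also be run in reverse: generate an ensemble of Gaussian fields with a prescribed \sigma_k^2 and estimate \langle |f(\mathbf{k})|^2 \rangle by averaging, recovering the input spectrum. A 1D toy sketch, assuming NumPy (the 1/k^3 choice echoes the scale-invariant form of Eqn. (18), though the 3D delta-function measure of course does not apply in 1D):

```python
import numpy as np

rng = np.random.default_rng(7)

n, nreal = 128, 400
kmag = np.abs(np.fft.fftfreq(n) * n)
# Prescribed spectrum sigma_k^2 ~ 1/k^3 (illustrative), zero at k = 0
sigma2 = np.where(kmag > 0, 1.0 / np.maximum(kmag, 1.0)**3, 0.0)

# Estimate <|f(k)|^2> over an ensemble of filtered-white-noise realizations
est = np.zeros(n)
for _ in range(nreal):
    fk = np.fft.fft(rng.normal(size=n)) * np.sqrt(sigma2)
    est += np.abs(fk)**2
est /= nreal

# With this discrete FFT convention, <|f(k)|^2> = n * sigma_k^2,
# so the ratio below should hover around 1 for every mode.
ratio = est[1:n // 2] / (n * sigma2[1:n // 2])
```

Setting `sigma2` to a constant instead would produce the white-noise case: flat measured power, and visibly not of the scale-invariant 1/k^3 form.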
For example, if I take a random Gaussian sky (say the CMB) and rearrange the cold and hot spots in such a way that it looks like Mickey Mouse, the sky will then be highly correlated in a very Mickey Mouse way, but it will still be completely Gaussian random. What I have changed in my magical rearrangement is the power spectrum \sigma_\mathbf{k}^2, not the underlying distribution P[f(\mathbf{k})]. Mickey Mouse is of course not very isotropic, hence the direction-dependent power spectrum.

It is clear that since the Gaussian PDF Eqn. (3) is even under parity around the mean 0, any odd expectation value vanishes,

    \langle f(\mathbf{k}_1) f(\mathbf{k}_2) f(\mathbf{k}_3) \rangle = 0,    (19)

while any even expectation value is simply a product of the respective power spectra, which we can compute. For example, for a 4-point function,

    \langle f(\mathbf{k}_1) f(\mathbf{k}_2) f(\mathbf{k}_3) f(\mathbf{k}_4) \rangle = \sigma_{k_1}^2 \sigma_{k_3}^2 \, \delta(\mathbf{k}_1 + \mathbf{k}_2) \delta(\mathbf{k}_3 + \mathbf{k}_4) + (1 \leftrightarrow 3) + (1 \leftrightarrow 4).    (20)

You can do the algebra here to convince yourself (Example Sheet), but this kind of "contraction" algebra occurs very often, so it's good to practice doing the combinatorics.
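The contraction combinatorics of Eqns. (19) and (20) show up already for a single Gaussian variable: the 3-point function vanishes, and the 4-point function picks up one term per pairing, three in all, each equal to \langle x^2 \rangle^2. A minimal numerical check, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(3)

x = rng.normal(0.0, 1.0, 1_000_000)   # one Gaussian "mode" with <x^2> = 1

three_pt = (x**3).mean()   # odd correlator: vanishes, cf. Eq. (19)
four_pt = (x**4).mean()    # Wick: 3 pairings x (<x^2>)^2 = 3, cf. Eq. (20)
```

The factor of 3 in `four_pt` is exactly the count of the three contraction terms written out in Eqn. (20).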