Advancing precision cosmology with 21 cm intensity mapping
by
Kiyoshi Wesley Masui
A thesis submitted in conformity with the requirements for the degree of Doctor of Philosophy
Graduate Department of Physics
University of Toronto
© Copyright 2013 by Kiyoshi Wesley Masui

Abstract
Advancing precision cosmology with 21 cm intensity mapping
Kiyoshi Wesley Masui
Doctor of Philosophy
Graduate Department of Physics
University of Toronto
2013
In this thesis we make progress toward establishing the observational method of 21 cm intensity mapping as a sensitive and efficient method for mapping the large-scale structure of the Universe.

In Part I we undertake theoretical studies to better understand the potential of intensity mapping. This includes forecasting the ability of intensity mapping experiments to constrain alternative explanations to dark energy for the Universe’s accelerated expansion. We also consider how 21 cm observations of the neutral gas in the early Universe (after recombination but before reionization) could be used to detect primordial gravity waves, thus providing a window into cosmological inflation. Finally we show that scientifically interesting measurements could in principle be performed using intensity mapping in the near term, using existing telescopes in pilot surveys or prototypes for larger dedicated surveys.

Part II describes observational efforts to perform some of the first measurements using 21 cm intensity mapping. We develop a general data analysis pipeline for analyzing intensity mapping data from single dish radio telescopes. We then apply the pipeline to observations using the Green Bank Telescope. By cross-correlating the intensity mapping survey with a traditional galaxy redshift survey we put a lower bound on the amplitude of the 21 cm signal. The auto-correlation provides an upper bound on the signal amplitude and we thus constrain the signal from both above and below. This pilot survey represents a pioneering effort in establishing 21 cm intensity mapping as a probe of the Universe.
Dedication
For Kaito’s generation,
that you may reach a better understanding of nature.
Acknowledgements
First and foremost, I want to thank my thesis advisor Ue-Li Pen. Early on in the program I was told that no single decision I made in my career would be as important as the match between student and advisor. I think we managed to hit the sweet spot between me having the freedom to pursue creative research and you pushing me to accomplish as much as possible. I have learnt so much from you.
I would also like to acknowledge the efforts of all my collaborators, without whom this thesis would not have been possible. A special thank you to the two post-docs that did all the work: Eric Switzer and Pat McDonald. I am also indebted to the many faculty, post-docs, staff, and grad students at CITA for the countless bits of help, tidbits of advice, and allowing me to bounce ideas off of you relentlessly. This is especially true of Richard Shaw, as well as my office mates who have contributed innumerable snippets of code.
Beyond the professional, I would like to thank the many friends and family who are responsible for me growing to love Toronto. Life here has been wonderful because of you.
Thank you to my parents and brother for making me who I am.
Thank you Maggie, for being my partner through all of this. I can’t wait for what adventures may come.
Contents
1 Introduction
  1.1 Background
    1.1.1 Cosmology and large-scale structure
    1.1.2 Redshift surveys using the 21 cm line
  1.2 Formalism
    1.2.1 The background expansion
    1.2.2 Perturbations
  1.3 Overview
    1.3.1 Outline
    1.3.2 Summary of contributions

I The potential of 21 cm cosmology

2 Constraining modified gravity
  2.1 Summary
  2.2 Introduction
  2.3 Modified Gravity Models
    2.3.1 f(R) Models
    2.3.2 DGP Braneworld
  2.4 Observational Signatures
    2.4.1 Baryonic acoustic oscillation expansion history test
    2.4.2 Weak Lensing
    2.4.3 External Priors from Planck
  2.5 Results
  2.6 Discussion

3 Detecting primordial gravity waves
  3.1 Summary
  3.2 Introduction
  3.3 Mechanism
  3.4 Tests of inflation
  3.5 Statistical detection in LSS
  3.6 Discussion
  3.7 Addendum

4 Forecasts for near term experiments
  4.1 Summary
  4.2 Introduction
  4.3 Redshift Space Distortions
  4.4 Baryon Acoustic Oscillations
  4.5 Forecasts
  4.6 Discussion

II Pioneering 21 cm cosmology

5 Data analysis pipeline
  5.1 Introduction
  5.2 Time ordered data
    5.2.1 Pipeline design
    5.2.2 Radio frequency interference
    5.2.3 Calibration
  5.3 Map-making
    5.3.1 Formalism
    5.3.2 Noise model and estimation
    5.3.3 An efficient time domain map-maker
  5.4 Conclusions

6 21 cm cross-correlation with an optical galaxy survey
  6.1 Summary
  6.2 Introduction
  6.3 Observations
  6.4 Analysis
    6.4.1 From data to maps
    6.4.2 From maps to power spectra
  6.5 Results and discussion

7 21 cm auto-correlation
  7.1 Summary
  7.2 Introduction
  7.3 Observations and Analysis
    7.3.1 Foreground Cleaning
    7.3.2 Instrumental Systematics
    7.3.3 Power Spectrum Estimation
  7.4 Results
  7.5 Discussion and Conclusions

8 Conclusions and outlook
  8.1 Conclusions
  8.2 Future work

Bibliography
List of Tables
2.1 Projected constraints on f(R) models for various combinations of observational techniques, for a 200 m telescope. Constraints are the 95% confidence level upper limits and include forecasts for Planck. The nonlinear results (column marked NL WL) are for the HS model with n = 1. Results that make use of weak lensing with constraints above 10^-3 are only order of magnitude accurate. The linear regime is taken to be ℓ < 140, with the nonlinear constraints extending up to ℓ = 600.
List of Figures
1.1 NASA/WMAP Science Team depiction of the evolution of the Universe in the Λ-CDM model. The creation of the cosmic microwave background is shown as well as the era of structure formation. Inflation and the accelerated expansion are depicted on the left and right sides of the figure respectively.

1.2 From Springel et al. [2005]. Large-scale structure at redshift z = 0, as seen in the Millennium Simulation. The colour map represents density with the brightest colours representing the densest regions. The bright spot near the middle of the image is a galaxy supercluster, containing of order 10 000 galaxies.

1.3 Large-scale structure in the VIPERS survey [Guzzo et al., 2013]. This figure contains roughly half of the 55 000 galaxies in the total survey. The 3D position of each galaxy is represented by a black dot. The figure is then collapsed along one of the angular dimensions (which has a thickness of ≈ 1°). Large-scale structure is clearly visible, especially at z ≈ 0.7 where the mean galaxy density is highest.

1.4 Proper time t and conformal time η as a function of redshift z. The magnitude of the proper time can be interpreted as the distance travelled by a photon observed today and emitted by a source at redshift z, while the magnitude of η is the current/comoving distance to that source. These differ because the source recedes with the Hubble flow as the photon is in transit. t and η are given in terms of the Hubble time 1/H0 = 14.6 Gyr or alternately the Hubble distance c/H0 = 3.00 Gpc/h where h = H0/(100 km/s/Mpc) = 0.671.
2.1 The weak lensing convergence power spectra for ΛCDM and the HS f(R) model with n = 1 and fR0 = 10^-4. Galaxy distribution function is flat between z = 1 and z = 2.5.
2.2 Projected constraints on the HS f(R) model with n = 1 using several combinations of observational techniques, for a 200 m telescope. All curves include forecasts for Planck. Allowed parameter values are shown in the fR0 − h plane at the 68.3% and 95.4% confidence level. Results are not shown for “WL” which were calculated much less accurately (see text).
2.3 Same as Figure 2.2 but for a 100 m cylindrical telescope.
2.4 Ratio of the coordinate dA(z) (top) and the Hubble parameter H(z) (bottom) as predicted by the best fit DGP model to the fiducial model. Error bars are from 21 cm BAO predictions. Fit includes BAO data available from the 200 m telescope and CMB priors on θs and ωm.
2.5 Weak lensing spectra for DGP and a smooth dark energy model with the same expansion history. DGP parameters are h = 0.665, ωm = 0.116, ωk = 0 and ωrc = 0.06. Error bars represent the expected accuracy of the 200 m telescope.
3.1 Primordial tensor power spectrum obeying the consistency relation for r = 0.1. The solid line is the tensor power spectrum. Error bars represent the reconstruction uncertainty on the binned power spectrum for a noiseless experiment, surveying 200 (Gpc/h)^3 and resolving scalar modes down to kmax = 168 h/Mpc. The dashed, nearly vertical, line is the reconstruction noise power. The non-zero slope of the solid line is the deviation from scale-free.
4.1 Baryon acoustic oscillations averaged over all directions. To show the BAO we plot the ratio of the full matter power spectrum to the wiggle-free power spectrum of Eisenstein and Hu [1998]. The error bars represent projections of the sensitivity possible with 4000 hours observing time on GBT at 0.54 < z < 1.09.
4.2 Ability of GBT to measure the BAO and redshift space distortions as a function of survey area at fixed observing time. Presented survey is between z = 0.54 and z = 1.09 and observing time is 1440 hours. A factor of 10 has been removed from the Aw curve.
4.3 Roughly optimized survey area as a function of telescope time on GBT. Redshift range is between z = 0.54 and z = 1.09.
4.4 Forecasts for fractional error on redshift space distortion and baryon acoustic oscillation parameters for intensity mapping surveys on the Green Bank Telescope (GBT). Frequency bins are approximately 200 MHz wide and correspond to available GBT receivers. Uncertainties on D should not be trusted unless the uncertainty on Aw is less than 50% (see text).
4.5 Forecasts for fractional error on redshift space distortion and baryon acoustic oscillation parameters for intensity mapping surveys on a prototype cylindrical telescope. Frequency bins are 200 MHz wide corresponding to the capacity of the correlators which will likely be available. These results also apply to the aperture telescope but with the observing time reduced by a factor of 14. Uncertainties on D should not be trusted unless the uncertainty on Aw is less than 50% (see text). Observing time does not account for lost time due to foreground obstruction.
5.1 Data before (left) and after (right) RFI flagging. Colour scale represents perturbations in the power, P/⟨P⟩_t − 1. Frequency axis has been rebinned from 4096 bins to 256 bins after flagging, which fills in many of the gaps in the data left by the flagging. Any remaining gaps are assigned a value of 0 for plotting.
5.2 Noise power spectrum, averaged over all spectral channels (δ_{νν′} P_{νν′ω}/c_ν), as measured in the GBT 800 MHz receiver. Units of the vertical axis are normalized such that pure thermal noise would be a horizontal line at unity. Individual time samples are 0.131 s long and spectral bins are 3.12 MHz wide. The telescope is pointing at the north celestial pole to minimize changes in the sky temperature. Descending the various coloured lines corresponds to removing additional noise eigenmodes, V_{νq}, from the noise power spectrum. It is seen that after removing 7 of the 64 possible modes the noise is significantly reduced and is approaching the thermal value on all time scales. The modes removed from each subsequent line are shown in Figure 5.3.
5.3 The modes V_{νq} removed from the noise power spectra to produce the curves in Figure 5.2. Each mode is offset vertically for clarity, with mode number increasing from bottom to top. The nth mode in this figure is the dominant remaining mode in the nth curve in Figure 5.2.
6.1 Maps of the GBT 15 hr field at approximately the band-center. The purple circle is the FWHM of the GBT beam, and the color range saturates in some places in each map. Top: The raw map as produced by the map-maker. It is dominated by synchrotron emission from both extragalactic point sources and smoother emission from the galaxy. Bottom: The raw map with 20 foreground modes removed per line of sight relative to 256 spectral bins, as described in Sec. 6.4.2. The map edges have visibly higher noise or missing data due to the sparsity of scanning coverage. The cleaned map is dominated by thermal noise, and we have convolved by GBT’s beam shape to bring out the noise on relevant scales.
6.2 Cross-power between the 15 hr and 1 hr GBT fields and WiggleZ. Negative points are shown with reversed sign and a thin line. The solid line is the mean of simulations based on the empirical-NL model of Blake et al. [2011] processed by the same pipeline.
7.1 Temperature scales in our 21 cm intensity mapping survey. The top curve is the power spectrum of the input 15 hr field with no cleaning applied (the 1 hr field is similar). Throughout, the 15 hr field results are green and the 1 hr field results are blue. The dotted and dash-dotted lines show thermal noise in the maps. The power spectra avoid noise bias by crossing two maps made with separate datasets. Nevertheless, thermal noise limits the fidelity with which the foreground modes can be estimated and removed. The points below show the power spectrum of the 15 hr and 1 hr fields after the foreground cleaning described in Sec. 7.3.1. Negative values are shown with thin lines and hollow markers. Any residual foregrounds will additively bias the auto-power. The red dashed line shows the 21 cm signal expected from the amplitude of the cross-power with the WiggleZ survey (for r = 1) and based on simulations processed by the same pipeline.
7.2 Comparison with the thermal noise limit. The dark and light shaded regions are the 68% and 95% confidence intervals of the measured 21 cm fluctuation power. The dashed line shows the expected 21 cm signal implied by the WiggleZ cross-correlation if r = 1. The solid line represents the best upper 95% confidence level we could achieve given our error bars, in the absence of foreground contamination. Note that the auto-correlation measurements, which constrain the signal from above, are uncorrelated between k bins, while a single global fit to the cross-power (in Masui et al. [2013]) is used to constrain the signal from below. Confidence intervals do not include the systematic calibration uncertainty, which is 18% in this space.
7.3 The posterior distribution for the parameter Ω_HI b_HI coming from the WiggleZ cross-power spectrum, 15 hr field and 1 hr field auto-powers, as well as the joint likelihood from all three datasets. The individual distributions from the cross-power and auto-powers are dependent on the prior on Ω_HI b_HI while the combined distribution is essentially insensitive. The distributions do not include the systematic calibration uncertainty of 9%.
Chapter 1
Introduction
The question of the origin of the Universe is arguably as ancient as all of human civilization. It is only within the last hundred years that we have begun to form an accurate understanding of the Universe on large scales and only within the last twenty years that we have been able to make precise statements about the Universe.

It is thought that using the 21 cm line from neutral hydrogen to map the large-scale structure of the Universe is a promising technique for making precise cosmological measurements [Chang et al., 2008]. Such measurements would allow for the detection of subtle effects, ultimately leading to a better understanding of the Universe and continuing humanity’s push to answer fundamental questions about its origin. This thesis represents a significant contribution to the pioneering of 21 cm cosmology, both theoretically and observationally.
1.1 Background
Here we give a brief overview of the field of cosmology, how the field has evolved, and why the 21 cm line is anticipated to be a powerful probe of the Universe and an important part of cosmology’s future. Some of the text in this section has been adapted from research proposals and other similar unpublished documents.
1.1.1 Cosmology and large-scale structure
Observationally, physical cosmology has seen tremendous progress in the past two decades. It has matured from an imprecise field, in which a dearth of observations left the Universe poorly understood, to one in which a richness of data is allowing for precise measurements.
Figure 1.1: NASA/WMAP Science Team depiction of the evolution of the Universe in the Λ-CDM model. The creation of the cosmic microwave background is shown as well as the era of structure formation. Inflation and the accelerated expansion are depicted on the left and right sides of the figure respectively.
We now have a standard cosmological model, the Λ-cold dark matter model (Λ-CDM), depicted in Figure 1.1, which explains all observations of the cosmic microwave background (CMB), large-scale structure (LSS), and the Universe’s expansion rate. Nevertheless, several fundamental mysteries remain unexplained within Λ-CDM. One of the most compelling is the observed accelerated expansion of the Universe, the so-called dark energy problem, initially discovered by observations of distant type Ia supernovae [Riess et al., 1998, Perlmutter et al., 1999]. Within Λ-CDM, this is parameterized by a cosmological constant; however, its exact nature remains a mystery. Equally intriguing is the question of what set the initial conditions for the Universe’s evolution. The leading theory is cosmological inflation [Guth, 1981], however again the physical nature of inflation remains a mystery.

The rapid advancement of cosmology has been driven by observations of the CMB, led by the space observatories: the Cosmic Background Explorer (COBE) [Mather et al., 1990, Smoot et al., 1992], followed by the Wilkinson Microwave Anisotropy Probe (WMAP) [Bennett et al., 2012, Hinshaw et al., 2012], and finally the ongoing Planck mission [Planck Collaboration et al., 2013a]. While this program has been tremendously
Figure 1.2: From Springel et al. [2005]. Large-scale structure at redshift z = 0, as seen in the Millennium Simulation. The colour map represents density with the brightest colours representing the densest regions. The bright spot near the middle of the image is a galaxy supercluster, containing of order 10 000 galaxies.
successful, the bulk of the information that can be extracted from the CMB has already been retrieved. While there remains information to be extracted from the CMB polarization signal, as well as various secondary effects such as weak lensing, new probes of the Universe will become increasingly important.

The field of large-scale structure studies how the Universe’s matter is distributed in three dimensional space. Figure 1.2 shows a map of the LSS as simulated in a large N-body simulation. LSS is a potentially powerful probe of the Universe because it is a three dimensional field—in contrast to the CMB which is two dimensional—yielding far more independent observables. There are of order 10^18 observable LSS modes in the Universe [Pen, 2004], while the primordial CMB has at most of order 10^8 observable modes.

The large-scale structure is sensitive to a large variety of physical processes throughout the evolution of the Universe, and can thus be used to learn about these processes. The LSS grew, through gravitational collapse instability, from tiny perturbations that existed in the first instants of the Universe’s evolution. The structure in the late Universe maintains information about the precise statistics of these initial perturbations. If inflation is responsible for generating these perturbations, then the LSS can be used to
Figure 1.3: Large-scale structure in the VIPERS survey [Guzzo et al., 2013]. This figure contains roughly half of the 55 000 galaxies in the total survey. The 3D position of each galaxy is represented by a black dot. The figure is then collapsed along one of the angular dimensions (which has a thickness of ≈ 1°). Large-scale structure is clearly visible, especially at z ≈ 0.7 where the mean galaxy density is highest.

study it.

To study the expansion history of the Universe, and thus gain insight into the anomalous acceleration, the baryon acoustic oscillations (BAO) can be used. The BAO result from sound waves that propagated in the early Universe. They imprint a characteristic scale in the statistics of the LSS and thus act as a standard ruler on the sky, which expands with the expansion of the Universe. Thus a precise measurement of this scale as a function of redshift can be used to measure the rate of expansion.
1.1.2 Redshift surveys using the 21 cm line
Traditionally, large-scale structure has been measured using galaxy surveys. These involve painstakingly measuring the 3D location of many galaxies. Redshift, measured by performing spectroscopy on each galaxy to identify the shift in spectral lines, is used as a proxy for radial distance, giving these surveys their colloquial names of redshift surveys. The galaxies are then catalogued and accumulated into three dimensional maps that reveal large-scale structure. This is illustrated in Figure 1.3. Because the individual galaxies are much smaller than the structures being studied, they are hard to detect and a very sensitive telescope must be used. Over two million galaxies have been surveyed in this way [Eisenstein et al., 2011], but despite this massive effort, only a small fraction of the observable Universe has been mapped to date.

Recently it has been proposed that the large-scale structure could be mapped much more efficiently using the 21 cm line from the spin flip transition in neutral hydrogen
[Barnes et al., 2001, Loeb and Zaldarriaga, 2004], which lies in the radio part of the electromagnetic spectrum. This has several advantages over optical redshift surveys. The 21 cm line is by far the brightest line in this part of the spectrum, leaving little chance for line confusion. As such, each observing frequency corresponds unambiguously with a redshift. This eliminates the need to detect individual galaxies at high significance. The 21 cm brightness from each region of space is taken to be a proxy for the total amount of hydrogen in that region, which in turn is assumed to be a biased tracer for the total density. This procedure is referred to as 21 cm intensity mapping [Chang et al., 2008, Loeb and Wyithe, 2008, Ansari et al., 2012a, Mao et al., 2008, Seo et al., 2010, Mao, 2012].

The 21 cm signal is, in principle, measurable at all redshifts up to z ∼ 50, potentially providing a probe of the early Universe. This is in contrast to optical surveys which are difficult in certain redshift ranges and are impossible above a redshift of z ∼ 6 due to a lack of sources.

The use of the 21 cm line for cosmology is not without disadvantages. The most concerning is the presence of bright foregrounds. In particular, synchrotron emission from both galactic sources and extra-galactic sources is of order 10^4 times brighter than the signal from neutral hydrogen. However, all known foreground contaminants are expected to be spectrally smooth. The 21 cm signal on the other hand is a spectral line and even when combining the line emission over all redshifts, the signal from hydrogen is modulated by the large-scale structure. It is thus expected that the foregrounds can be separated from the signal, or at the very least that foregrounds will contaminate only a small number of the signal modes along the line of sight.
In practice, instrumental effects, such as imperfect spectral calibration, frequency dependence of the instrumental beam, and contamination of the unpolarized channel by polarized emission, all make foreground subtraction more difficult than might be initially expected. As such, 21 cm redshift surveys are very challenging.

The first detection of the 21 cm signal from large-scale structure above z = 0.1 was presented in Chang et al. [2010]. There, data from the Green Bank Telescope (GBT) in West Virginia was cross-correlated against a traditional optical galaxy survey at a redshift of z ≈ 0.8. This confirmed the existence of the 21 cm signal from large-scale structure, but did not require the perfect removal of foregrounds, since residual contamination does not correlate with the optical survey.

Intensity mapping is both competitive with, and complementary to, traditional galaxy surveys. Technological advances in radio frequency communications and digital signal processing now make it possible to design intensity mapping systems that are capable of surveying large volumes of the Universe at relatively modest cost. This makes intensity mapping especially attractive for measuring large-scale features in the Universe, such as the BAO. In cases where intensity mapping surveys overlap with galaxy surveys, there are several synergies that would benefit both surveys. The most basic of these is a cross-check of results, where experiments with independent systematic errors validate one another. Going beyond this, the fact that galaxies and neutral hydrogen are different tracers of the same cosmic structure allows for the disentanglement of key uncertainties in both surveys, greatly improving the precision of some measurements [McDonald and Seljak, 2009].
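The spectral-smoothness argument can be illustrated with a toy numerical sketch (all numbers below are assumed for illustration; this is not the Part II pipeline): a power-law foreground roughly 10^4 times brighter than a fluctuating signal occupies only a few spectral eigenmodes, which can be projected out of each line of sight.

```python
import numpy as np

# Toy sketch of foreground-mode removal (assumed numbers, not the thesis
# pipeline): a spectrally smooth power-law foreground ~10^4 times brighter
# than the signal is confined to a few spectral eigenmodes.
rng = np.random.default_rng(0)
nfreq, nlos = 64, 2000                     # spectral channels, lines of sight
freq = np.linspace(700.0, 900.0, nfreq)    # MHz (illustrative band)

# Synchrotron-like foreground: power law with slightly varying amplitude
# and spectral index per line of sight.
amp = 1e4 * (1.0 + 0.1 * rng.standard_normal(nlos))
index = -2.7 + 0.1 * rng.standard_normal(nlos)
foreground = amp * (freq[:, None] / 800.0) ** index

signal = rng.standard_normal((nfreq, nlos))   # spectrally rough 21 cm stand-in
data = foreground + signal

# Estimate the frequency-frequency covariance and project the brightest
# spectral eigenmodes out of every line of sight.
cov = data @ data.T / nlos
eigval, eigvec = np.linalg.eigh(cov)       # eigenvalues in ascending order
modes = eigvec[:, -3:]                     # the 3 brightest spectral modes
cleaned = data - modes @ (modes.T @ data)

# The foreground-dominated rms drops to roughly the signal level.
print(f"rms before: {np.std(data):.3g}, after: {np.std(cleaned):.3g}")
```

Because the foreground is smooth in frequency while the signal is not, only a handful of modes along the line of sight are sacrificed, which is the basis of the cleaning applied in Part II.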
1.2 Formalism
Here we review some of the basic cosmological theory and concepts that will be required for understanding the following chapters. This formalism is covered in more detail in Hartle [2003], Dodelson [2003] and Liddle and Lyth [2000].
1.2.1 The background expansion
To zeroth order, the Universe is assumed to be homogeneous and isotropic, which on very large scales agrees well with observations. The metric for a homogeneous and isotropic Universe is the Friedmann-Lemaître-Robertson-Walker (FLRW) metric, which can be written as
ds^2 = -dt^2 + a(t)^2 \left[ dr^2 + S_k(r)^2 \left( d\theta^2 + \sin^2\theta \, d\phi^2 \right) \right], \qquad (1.1)

with

S_k(r) = \begin{cases} \sin(\sqrt{k}\,r)/\sqrt{k} & \text{if } k > 0, \\ r & \text{if } k = 0, \\ \sinh(\sqrt{-k}\,r)/\sqrt{-k} & \text{if } k < 0. \end{cases} \qquad (1.2)

Here, we use units where the speed of light, c, is unity. By assumption of homogeneity and isotropy, all matter-energy components must be on average mutually at rest in these coordinates. The spatial coordinates in the above metric are thus dubbed comoving coordinates (with distances in these coordinates referred to as comoving distance), and t is the proper time of comoving observers. k > 0, k = 0, and k < 0 correspond to a closed, flat, and open Universe respectively. We are free to rescale the coordinates such that a = 1 at the present epoch. Likewise we choose t = 0 at the present epoch and r = 0 at approximately earth’s location. With these definitions, k may be interpreted as the
Gaussian curvature of the Universe at t = 0. All observations are currently consistent with the Universe being flat, and as such we will henceforth take k = 0.
It is common to use the alternate time coordinate η, related to t by
a dη = dt, (1.3)
and hence

\eta = \int_0^t \frac{d\tilde{t}}{a(\tilde{t})}. \qquad (1.4)

η is referred to as the conformal or comoving time. It has an especially simple interpretation, in that a light ray arriving at earth today, and emitted at time η, originated at a distance r = −cη, where the factor of c has been included for clarity. Since all observations of the Universe involve light arriving at the earth at the present epoch, this correspondence between time and distance is especially convenient, and η is often used to represent either in an observational context (with factors of c omitted).
In our Universe, the scale factor, a(t), has been increasing monotonically with time, with a = 0 corresponding to the Big Bang and a = 1 corresponding to the present day. This gives yet another way to refer to a time in the Universe’s evolution. More commonly used is the redshift, z ≡ 1/a − 1. This is an observationally convenient quantity since, for light emitted by a distant object receding with the Hubble flow, the wavelength shift is ∆λ = zλ0 where λ0 is the rest frame wavelength of the light. For z ≪ 1 the recessional velocity is v = cz.
An important quantity is the Hubble parameter, defined as H ≡ a′/a = ȧ/a², where the prime represents the derivative with respect to proper time and the over-dot represents the derivative with respect to the conformal time (a′ ≡ da/dt, ȧ ≡ da/dη). A time scale and a length scale can then be defined as 1/H and c/H respectively, corresponding to the Hubble time (also called the expansion time) and the Hubble distance. The Hubble distance is the proper separation beyond which two points recede at a speed greater than the speed of light. Points separated by more than this distance at a given time are said to not be causally connected at that time¹. A more relevant scale might be the comoving equivalent: c/(aH) = c(ȧ/a)⁻¹.
¹This is not intended to be a precise statement; it simply sets a length scale for causality.

Relating these time measures requires the Einstein equations, which for the FLRW metric yield the Friedmann equations:
\left( \frac{a'}{a} \right)^2 = \frac{8\pi G}{3}\,\bar{\rho}_{\rm tot}, \qquad (1.5)

\frac{a''}{a} = -\frac{4\pi G}{3}\,(\bar{\rho}_{\rm tot} + 3\bar{p}_{\rm tot}), \qquad (1.6)
where ρ̄_tot is the total density and p̄_tot is the total pressure. The over-bar, ¯, indicates that the quantity is spatially averaged. The total density and pressure get contributions from matter (m), radiation (r), and dark energy (Λ, assumed to be a cosmological constant). Differentiating the first equation and substituting it into the second gives
\bar{\rho}\,'_{\rm tot} = -3H\,(\bar{\rho}_{\rm tot} + \bar{p}_{\rm tot}). \qquad (1.7)
This is the energy conservation equation for an FLRW Universe. If we assume that energy is not interchangeable between the different constituents, then this equation can be used to solve for the a dependence of the densities. We also require the equation of state for each constituent. The equations of state for each component, along with the inferred evolutions of the densities are
p_r = \rho_r/3, \quad \bar{\rho}_r = \bar{\rho}_{r0}/a^4; \qquad (1.8)

p_m = 0, \quad \bar{\rho}_m = \bar{\rho}_{m0}/a^3; \qquad (1.9)

p_\Lambda = -\rho_\Lambda, \quad \bar{\rho}_\Lambda = \bar{\rho}_{\Lambda 0}. \qquad (1.10)
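These scalings can be checked directly against the conservation equation (1.7). Writing ρ̄′ = (dρ̄/da) a′ and H = a′/a, Eq. (1.7) with p = wρ̄ becomes dρ̄/da = −3(1 + w)ρ̄/a, which is solved by ρ̄ ∝ a^{-3(1+w)} for any expansion history. A minimal numerical check (our own script, not part of the thesis):

```python
import numpy as np

# Check Eqs. (1.8)-(1.10) against the conservation law (1.7): with
# p = w * rho, Eq. (1.7) is equivalent to d(rho)/da = -3 (1 + w) rho / a,
# independent of a(t), and rho = a^{-3(1+w)} should solve it.
a = np.linspace(0.1, 1.0, 10001)

for w, name in [(1.0 / 3.0, "radiation"), (0.0, "matter"), (-1.0, "Lambda")]:
    rho = a ** (-3.0 * (1.0 + w))              # proposed scaling, rho_X0 = 1
    drho_da = np.gradient(rho, a, edge_order=2)  # numerical derivative
    rhs = -3.0 * (1.0 + w) * rho / a
    assert np.allclose(drho_da, rhs, rtol=1e-3)
    print(name, "scaling satisfies Eq. (1.7)")
```

The w = −1 case is trivially constant, reflecting the fact that a cosmological constant is not diluted by the expansion.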
To make this dependence on a explicit, we define the dimensionless present day density constants as

\Omega_X \equiv \frac{8\pi G}{3H_0^2}\,\bar{\rho}_{X0}, \qquad (1.11)
where we have defined the Hubble constant, H_0 ≡ H(t = 0). Evaluating the first Friedmann equation at t = 0 leads to the constraint that Ω_r + Ω_m + Ω_Λ = 1, resulting in the interpretation that Ω_X is the present day energy fraction of component X. The first Friedmann equation can then be written as
\left( \frac{a'}{a} \right)^2 = H_0^2\,(\Omega_r/a^4 + \Omega_m/a^3 + \Omega_\Lambda), \qquad (1.12)

or

a'/a = H_0 \sqrt{\Omega_r/a^4 + \Omega_m/a^3 + \Omega_\Lambda}. \qquad (1.13)
Given the parameters H_0, Ω_r, Ω_m, and Ω_Λ, this equation can be integrated numerically
Figure 1.4: Proper time t and conformal time η as a function of redshift z. The magnitude of the proper time can be interpreted as the distance travelled by a photon observed today and emitted by a source at redshift z, while the magnitude of η is the current/comoving distance to that source. These differ because the source recedes with the Hubble flow as the photon is in transit. t and η are given in terms of the Hubble time 1/H0 = 14.6 Gyr or alternately the Hubble distance c/H0 = 3.00 Gpc/h where h = H0/(100 km/s/Mpc) = 0.671.
to obtain the full expansion history. Current best-fit values for the parameters are H0 = 67.1 km/s/Mpc, Ωr = 8.24 × 10^−5, Ωm = 0.318, and ΩΛ = 0.682 [Planck Collaboration et al., 2013d], with uncertainties at the percent level. Figure 1.4 shows the expansion history calculated from the above equation and these parameters.
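The numerical integration is straightforward. Below is a minimal sketch (illustrative, not code from this thesis; function names are my own) that computes the lookback time |t| and comoving distance |η| of Figure 1.4 in Hubble units, using the Planck parameter values quoted above.

```python
import numpy as np

# Illustrative sketch (not thesis code): integrate Equation 1.13 to obtain
# the lookback time |t| and comoving distance |eta| to redshift z, in units
# where H0 = c = 1.  Parameter values are the Planck best-fit numbers
# quoted in the text.
OMEGA_R, OMEGA_M, OMEGA_L = 8.24e-5, 0.318, 0.682

def hubble_rate(a):
    """Dimensionless Hubble rate E(a) = H(a)/H0 from Equation 1.13."""
    return np.sqrt(OMEGA_R / a**4 + OMEGA_M / a**3 + OMEGA_L)

def _integrate(f, a_min, n):
    """Trapezoid rule on a uniform grid of scale factor from a_min to 1."""
    a = np.linspace(a_min, 1.0, n)
    y = f(a)
    return 0.5 * np.sum((y[1:] + y[:-1]) * np.diff(a))

def lookback_time(z, n=200000):
    """|t| to redshift z: integral of da / (a E(a))."""
    return _integrate(lambda a: 1.0 / (a * hubble_rate(a)), 1.0 / (1 + z), n)

def comoving_distance(z, n=200000):
    """|eta| to redshift z: integral of da / (a^2 E(a))."""
    return _integrate(lambda a: 1.0 / (a**2 * hubble_rate(a)), 1.0 / (1 + z), n)
```

Multiplying `lookback_time` by 1/H0 = 14.6 Gyr, or `comoving_distance` by c/H0 = 3.00 Gpc/h, recovers the physical units used in the figure.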
1.2.2 Perturbations
While on large scales, above ∼ 100 Mpc, the Universe is homogeneous and isotropic to a good approximation, on smaller scales there are perturbations to the mean background expansion. The evolution of these perturbations is normally calculated using linear perturbation theory, although below ∼ 10 Mpc, where the perturbations approach order unity, simulations are required to treat the full non-linear evolution. While the assumptions of homogeneity and isotropy are broken by the perturbations, the Universe is still assumed to be statistically homogeneous and isotropic. That is, each location in the Universe is assumed to be statistically equivalent, even if the exact values of any fields differ from place to place. This assumption is often referred to as the cosmological principle.
Basics
In the field of large-scale structure, the primary observable is the matter density field, or some proxy such as the galaxy number density or 21 cm brightness. We define the density perturbations as
\delta(\vec x, t) = \frac{\rho(\vec x,t) - \bar\rho(t)}{\bar\rho(t)}. \quad (1.14)
Cosmology does not make predictions about the precise density at a given location. It instead predicts the statistical correlations between densities. The most important quantity predicted by theory is the 2-point function, which can be written as the correlation function:
\langle\delta(\vec x,t)\delta(\vec x+\vec r)\rangle = \xi(r), \quad (1.15)
which is independent of \vec x by assumption of statistical homogeneity and independent of \hat r (the direction of \vec r) by assumption of statistical isotropy. The perturbations are much more easily described in the spatial Fourier domain, defined by
\delta(\vec k) = \mathcal{F}[\delta(\vec x)] = \int \delta(\vec x)\, e^{-i\vec k\cdot\vec x}\, d^3\vec x. \quad (1.16)
This has the advantage that all Fourier modes evolve independently. This results from our assumption of statistical homogeneity and is true only at linear order in perturbation theory. In the Fourier domain, the two-point statistic is
\langle\delta(\vec k)\delta^*(\vec k')\rangle = (2\pi)^3\delta^{(3)}(\vec k - \vec k')P(k), \quad (1.17)
where \delta^{(3)}(\vec k) is the 3D Dirac delta function and is not to be confused with \delta(\vec k). P(k) is the power spectrum and is related to the correlation function by P(\vec k) = \mathcal{F}[\xi(\vec r)] (where the vector signs over k and r remind us that, while P and ξ only depend on the magnitudes of their arguments, the Fourier transform must be performed in 3D). The initial power spectrum of perturbations is set by physical processes in the first instants of the Universe's evolution, the dominant theory for which is cosmological inflation. Inflation predicts a nearly scale-invariant primordial power spectrum, with P_p(k) \sim k^{-3}. This is in good agreement with observations.
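In practice the power spectrum is estimated from gridded data: the field is Fourier transformed and |δ(k)|² is averaged in spherical shells of |k|. A minimal sketch (illustrative, not the thesis pipeline; the grid size, box length, and white-noise test field used below are arbitrary choices):

```python
import numpy as np

# Illustrative power spectrum estimator for a field on a cubic grid (not
# the thesis pipeline).  Conventions follow Equations 1.16 and 1.17:
# delta(k) is approximated by FFT times the cell volume, and
# P(k) = <|delta(k)|^2> / V, averaged in spherical shells of |k|.

def power_spectrum(delta, box_len, n_bins=16):
    n = delta.shape[0]
    cell = box_len / n
    delta_k = np.fft.fftn(delta) * cell**3            # discretized Eq. 1.16
    pk3d = (np.abs(delta_k)**2 / box_len**3).ravel()  # |delta(k)|^2 / V
    kfreq = 2 * np.pi * np.fft.fftfreq(n, d=cell)
    kx, ky, kz = np.meshgrid(kfreq, kfreq, kfreq, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kz**2).ravel()
    edges = np.linspace(kmag[kmag > 0].min(), kmag.max(), n_bins + 1)
    which = np.digitize(kmag, edges) - 1              # shell index per mode
    k_cen = 0.5 * (edges[:-1] + edges[1:])
    p_of_k = np.array([pk3d[which == i].mean() for i in range(n_bins)])
    return k_cen, p_of_k
```

A useful sanity check: for a white-noise field of unit variance per cell, the estimator should return a flat spectrum equal to the cell volume.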
The initial power spectrum can be related to that observed in the large-scale structure through linear perturbation theory. Perturbation theory describes the evolution of the perturbations over cosmic time. As mentioned above, when perturbation amplitudes reach order unity, perturbation theory ceases to be valid, and simulations must be employed.
Evolution equations
The governing equations for linear perturbation theory come from two sources. The first is the conservation conditions, which derive from the fact that the stress-energy tensor has zero divergence. The second is Einstein’s equations. Here we quote these equations for an arbitrary perfect fluid, which includes most of the important cases in cosmology. We use η as our primary time coordinate.
The fluid is described by three variables: the density perturbations, δ; the pressure perturbations, π; and the velocity perturbations, θ. These are defined by
\delta(\vec x,\eta) = \frac{\rho(\vec x,\eta)-\bar\rho(\eta)}{\bar\rho(\eta)} \quad (1.18)
\pi(\vec x,\eta) = \frac{p(\vec x,\eta)-\bar p(\eta)}{\bar p(\eta)} \quad (1.19)
\theta(\vec k,\eta) = \frac{-i\vec k}{k}\cdot\vec v(\vec k,\eta), \quad (1.20)
where ~v is the 3D velocity field, and all quantities (e.g. ρ, p) refer to the fluid. We note that the curling part of the velocity field does not couple to the density perturbations at linear order and can thus be ignored.
In addition, metric perturbations are represented by the two fields Φ and Ψ, which contribute to the metric in the following manner:
g_{00}(\vec x,\eta) = -a(\eta)^2\,[1 + 2\Psi(\vec x,\eta)] \quad (1.21)
\sum_{i=1}^{3} g_{ii}(\vec x,\eta) = 3a(\eta)^2\,[1 + 2\Phi(\vec x,\eta)]. \quad (1.22)
Ψ is recognizable as the Newtonian potential, and Φ is the spatial curvature perturbation.
With all the ingredients in place, we can now write down the equations of motion governing the perturbations. The fact that the stress-energy tensor has zero divergence yields two conservation equations:
\frac{d}{d\eta}\!\left(a^3\bar\rho\,\delta\right) + a^3(\bar\rho+\bar p)(k\theta + 3\dot\Phi) + 3\bar p\,a^2\dot a\,\pi = 0 \quad (1.23)
\dot\theta + aH\left(1 - 3\dot{\bar p}/\dot{\bar\rho}\right)\theta - k\Psi - k\,\frac{\bar p}{\bar\rho+\bar p}\,\pi = 0. \quad (1.24)
The first of these equations is the mass continuity equation and the second is the Euler fluid equation. Einstein’s equations yield an additional two equations:
k^2\Phi = 4\pi a^2\bar\rho\left(\delta + 3\,\frac{aH}{k}\,\frac{\bar\rho+\bar p}{\bar\rho}\,\theta\right) \quad (1.25)
k^2(\Phi + \Psi) = 0. \quad (1.26)
The first of these is recognizable as Poisson's equation, while the second conveniently allows for the elimination of Φ.
When studying large-scale structure after recombination, when the baryonic matter has decoupled from the radiation, the matter fluid can be treated as being pressureless. The evolution is described by the three reduced equations:
\dot\delta + 3\dot\Phi + k\theta = 0 \quad (1.27)
\dot\theta + aH\theta + k\Phi = 0 \quad (1.28)
k^2\Phi = 4\pi a^2\bar\rho\left(\delta + 3\,\frac{aH}{k}\,\theta\right). \quad (1.29)
On scales much smaller than the Hubble scale, where k ≫ aH, the system can be easily reduced to a single equation for the evolution of the density perturbations:
\ddot\delta + aH\dot\delta - 4\pi a^2\bar\rho\,\delta = 0. \quad (1.30)
The fact that k does not appear in this equation means that in this limit, structure undergoes scale-independent growth. The linearity of this equation means that the fractional growth of perturbations depends only on the background expansion, a(η). This is an important result when studying structure at late times, say z ≲ 50, when all modes of interest are well within the horizon. This is also the regime in which all observations of large-scale structure are made.
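The scale-independent growth described by Equation 1.30 is straightforward to compute numerically. A hedged sketch (illustrative, not thesis code): the growth equation is recast in derivatives with respect to the scale factor a, radiation is neglected, and the ODE is integrated with a classical RK4 stepper.

```python
import numpy as np

# Illustrative integration of the scale-independent growth implied by
# Equation 1.30, recast as an ODE in the scale factor a (radiation
# neglected, a simplification valid at the late times considered here):
#   D'' + (3/a + dlnE/da) D' = 3 Om_m D / (2 a^5 E^2),   E(a) = H(a)/H0.

def growth_function(a_end, om_m, om_l, n_steps=4000):
    """Return g = D(a_end)/a_end, normalized so that g = 1 when om_m = 1."""
    def e2(a):
        return om_m / a**3 + om_l
    def rhs(a, y):
        d, dp = y
        dln_e = -1.5 * om_m / a**4 / e2(a)            # dlnE/da
        dpp = -(3.0 / a + dln_e) * dp + 1.5 * om_m * d / (a**5 * e2(a))
        return np.array([dp, dpp])
    a = 1e-2                                           # deep in matter domination
    y = np.array([a, 1.0])                             # D = a, dD/da = 1
    h = (a_end - a) / n_steps
    for _ in range(n_steps):                           # classical RK4 steps
        k1 = rhs(a, y)
        k2 = rhs(a + h / 2, y + h / 2 * k1)
        k3 = rhs(a + h / 2, y + h / 2 * k2)
        k4 = rhs(a + h, y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        a += h
    return y[0] / a_end
```

For a matter-dominated Universe (Ωm = 1) the perturbations grow as D ∝ a and the routine returns unity; dark energy suppresses the late-time growth below that value.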
Summary
The evolution of an individual mode at late times is normally split into two components such that
\delta(\vec k,\eta) = g(\eta)\,\frac{a}{a_i}\,\frac{9}{10}\,T_i(k)\,\delta_p(\vec k) \qquad (\eta > \eta_i). \quad (1.31)
Here, the subscript i refers to an intermediate time, when all modes of interest are well within the horizon, but before we intend to make observations of the large-scale
structure. A reasonable choice would be zi = 20. g(η) is the growth function, which
describes the scale-independent growth of perturbations at late times (η > ηi), as given
by Equation 1.30. The factor of (a/ai) is removed such that g(η) is unity for a matter
dominated Universe (Ωm = 1). Ti(k) is the transfer function, which describes the scale-dependent growth of the perturbations from the primordial value (δp(~k)) to the value at
the intermediate time. The factor of 9/10 is removed such that Ti(k) asymptotes to unity at large scales (k ≪ a_iH_i). The transfer function is calculated using the full scale-dependent evolution equations. The dark matter component is well described by the pressureless versions, but prior to recombination at z ∼ 1000 the baryonic component of the matter is tightly coupled to the photons. The perturbations of multiple coupled fluids must then be considered, greatly complicating the calculation. This is generally done numerically. With these definitions the matter power spectrum, which we hope to observe in our redshift surveys, is then
P(k,\eta) = \left[g(\eta)\,\frac{a}{a_i}\right]^2\left[\frac{9}{10}\,T_i(k)\right]^2 P_p(k) \qquad (\eta > \eta_i). \quad (1.32)
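As noted above, the transfer function must in general be computed numerically, but fitting formulas capture its broad shape. As an illustration only (an assumed stand-in, not what this thesis uses), the classic BBKS formula for the cold dark matter transfer function:

```python
import numpy as np

# Illustrative stand-in for T_i(k): the BBKS (Bardeen et al. 1986) fitting
# formula for the cold dark matter transfer function.  The thesis computes
# the transfer function numerically; this approximation omits the baryon
# acoustic features.  k is in h/Mpc; Gamma = Omega_m h is the usual shape
# parameter, with parameter values from the text.

def bbks_transfer(k, omega_m=0.318, h=0.671):
    q = k / (omega_m * h)
    return (np.log(1.0 + 2.34 * q) / (2.34 * q)
            * (1.0 + 3.89 * q + (16.1 * q)**2
               + (5.46 * q)**3 + (6.71 * q)**4) ** -0.25)
```

As required of Ti(k) in Equation 1.32, the formula asymptotes to unity at small k and monotonically suppresses small-scale (large-k) power.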
1.3 Overview
Here each chapter is summarized in the broader context of this thesis. In addition I state my contributions to each chapter within my collaborations.
1.3.1 Outline
This thesis is divided into two parts. Part I contains entirely theoretical work concerning measurements that could in principle be performed using 21 cm intensity mapping. Part II contains entirely observational work, where the Green Bank Telescope was used to perform one of the first large-scale structure surveys using 21 cm intensity mapping. This work represents a pioneering effort to establish intensity mapping as an efficient
technique for learning about the Universe.
Part I
In Chapter 2, originally published in Masui et al. [2010b], we considered the ability of 21 cm intensity mapping experiments to constrain modified gravity models. Modifications to Einstein's theory of gravity, General Relativity, are sometimes invoked as an alternative to dark energy to explain the observed accelerating expansion of the Universe. We show that experiments designed to measure the properties of dark energy are also able to tightly constrain modified gravity models, through the observational probes of baryon acoustic oscillations and weak lensing. This chapter involved a relatively straightforward calculation, using well established techniques. The project represents my introduction to statistical analysis in modern cosmology. In Chapter 3, originally published in Masui and Pen [2010], we discovered a new effect by which gravity waves created in cosmological inflation leave a distinct signature in the large-scale structure of the Universe. The effect could be used to gain rare insight into the inflationary era if observed. We considered the feasibility of making a detection of the effect using 21 cm observations of the early Universe at redshift z ∼ 12. We concluded that while such a detection would be very difficult, the reward would be sufficient that searching for the effect using a futuristic experiment would still be very compelling. Chapter 4, originally published in Masui et al. [2010a], considers what measurements could in principle be made using instruments that either currently exist or will be constructed in the near future. In this way, it differs significantly from the previous two chapters, which each consider measurements that would be performed using 'the ultimate' intensity mapping survey. We showed that even without building a dedicated experiment, intensity mapping could be used to make interesting measurements.
In particular, the Green Bank Telescope would be capable of performing a large-scale structure survey that would have the sensitivity to detect the Kaiser redshift-space distortions, settling a long-standing controversy about the abundance of neutral hydrogen in the Universe.
Part II
Chapter 5, which forms the basis of an intended future publication, gives an overview of the analysis pipeline used to analyze survey data from the Green Bank Telescope. It describes the formalism for the various parts of the data analysis including radio frequency interference mitigation, calibration, noise estimation and map-making. It also describes some details of the software modules that implement the analysis. Chapter 6, originally published in Masui et al. [2013], presents the cross-correlation power spectrum of the intensity mapping survey at the Green Bank Telescope with a traditional galaxy survey. The cross-correlation was detected with a statistical significance of 7.4σ, far exceeding the significance of previous measurements and putting a lower limit on the amplitude of the 21 cm brightness fluctuations. Chapter 7, submitted to Monthly Notices of the Royal Astronomical Society: Letters and available in pre-print as Switzer et al. [2013], represents the first use of the auto-correlation power spectrum from the GBT survey to make an astrophysical measurement. A Bayesian analysis is used to combine the lower limit from the cross-correlation and the upper limit from the auto-correlation into a determination of the 21 cm signal amplitude. We discuss future directions and conclude in Chapter 8.
1.3.2 Summary of contributions
In Chapter 2, Patrick McDonald calculated the BAO error bars for the 21 cm experiments as well as the constraints from the Planck mission. I calculated the predicted BAO signal for the modified gravity models and combined these two ingredients into the projected constraints on the modified gravity models. Likewise, Fabian Schmidt calculated the weak lensing spectrum for the modified gravity models, and the error bars for the intensity mapping experiments were taken from a calculation performed by Ting Ting Lu [Lu et al., 2010]. Again I combined these into the constraints on the models. I also led the project, produced the figures, and did the majority of the writing for its publication. All calculations, figures and writing for Chapter 3 were prepared by myself, under the guidance of Ue-Li Pen. In Chapter 4, Patrick McDonald wrote the software that calculates the sensitivity for a general 21 cm survey. I used this software to perform forecasts for the surveys under consideration and to optimize the surveys. I also did the majority of the writing and created all the plots. For the observations and data analysis that form the basis of Chapters 5, 6 and 7, I took the lead on the survey planning and data analysis. I made significant contributions to all proposals to the GBT telescope allocation committees, led the planning of all observations, wrote the vast majority of the telescope control scripts, and performed roughly one quarter of the observations. I designed the software framework for the data analysis pipeline and made contributions to all the data analysis software up to the map making. This includes preprocessing the data, radio frequency interference mitigation (written principally by Liviu-Mihai Calin), and calibration (written principally by Tabitha Voytek). I was the sole author of all noise estimation and map making software.
While I led the development of the pipeline, the data was actually run through the software by collaborators, mostly Tabitha Voytek. In early versions of the data analysis, I wrote the software that performed the foreground subtraction and power spectrum (then correlation function) estimation. Responsibility for this part of the pipeline has since been transferred to my collaborators, principally Eric Switzer with contributions from Yi-Chao Li. While I have remained a consulting party throughout, I have made no subsequent contributions to writing the software for the parts of the pipeline subsequent to map-making. Eric Switzer and Yi-Chao Li wrote all the software that dealt with the WiggleZ galaxy catalogues and cross-correlating them with the intensity mapping survey. In Chapters 6 and 7, most of the writing was roughly evenly split between myself and Eric Switzer. In Chapter 7, I performed the Bayesian analysis to arrive at the final conclusions of the paper and produced all plots.

Part I
The potential of 21 cm cosmology
Chapter 2
Projected Constraints on Modified Gravity Cosmologies from 21 cm Intensity Mapping
A version of this chapter was published in Physical Review D as “Projected constraints on modified gravity cosmologies from 21 cm intensity mapping”, Masui, K. W., Schmidt, F., Pen, U.-L. and McDonald, P., Vol. 81, Issue 6, 2010. Reproduced here with the permission of the APS.
2.1 Summary
We present projected constraints on modified gravity models from the observational technique known as 21 cm intensity mapping, where cosmic structure is detected without resolving individual galaxies. The resulting map is sensitive to both BAO and weak lensing, two of the most powerful cosmological probes. It is found that a 200 m × 200 m cylindrical telescope, sensitive out to z = 2.5, would be able to distinguish the Dvali, Gabadadze and Porrati (DGP) model from most dark energy models, and constrain the Hu & Sawicki f(R) model to |fR0| < 9 × 10^−6 at 95% confidence. The latter constraint makes extensive use of the lensing spectrum in the nonlinear regime. These results show that 21 cm intensity mapping is not only sensitive to modifications of the standard model's expansion history, but also to structure growth. This makes intensity mapping a powerful and economical technique, achievable on much shorter time scales than optical experiments that would probe the same era.
2.2 Introduction
One of the greatest open questions in cosmology is the cause of the observed late time acceleration of the universe. Within the context of normal gravity described by Einstein's General Relativity, this phenomenon can only be explained by an exotic form of matter with negative pressure. Another possible explanation is that on cosmological scales, General Relativity fails and must be replaced by some theory of modified gravity. Several approaches have been proposed to modify gravity at late times to explain the apparent acceleration of the universe. The challenge in these modifications is to preserve the successful predictions of the CMB at z ≈ 1000, and also the precision tests at the present epoch in the solar system. A generic class of theories operates with the Chameleon effect, where at sufficiently high densities General Relativity (GR) is restored, thus applying both in the solar system and the early universe. To further understand the nature of gravity would require probing gravity on cosmological scales. Large scales mean large volume, requiring large fractions of the sky. Gravity can be probed by gravitational lensing, which measures geodesics and thus the gravitational curvature of space, and is a sensitive probe of the growth of structure in the Universe [Knox et al., 2006, Jain and Zhang, 2008, Tsujikawa and Tatekawa, 2008, Schmidt, 2008]. In working out predictions for cosmology, the theoretical challenge posed by these theories is the nonlinear mechanism in each model, necessary in order to restore Einstein Gravity locally to satisfy Solar System constraints. We present quantitative results from nonlinear calculations for a specific f(R) model, and forecasted constraints for future 21 cm experiments. An upcoming class of experiments proposes the observation of the 21 cm spectral line at low resolution over a large fraction of the sky and a large range of redshifts [Peterson et al., 2009].
Large scale structure is detected in three dimensions without the detection of individual galaxies. This process is referred to as 21 cm intensity mapping. These experiments are sensitive to structures in a redshift range that is difficult to observe for ground-based optical experiments due to a lack of spectral lines. Yet these experiments are extremely economical since they only require limited resolution and no moving parts [Seo et al., 2010]. Intensity mapping is sensitive to both the Baryon Acoustic Oscillations (BAO) and to weak lensing, two of the most powerful observational methods to determine cosmological parameters. It has been shown that BAO detections from 21 cm intensity mapping are powerful probes of dark energy, comparing favourably with Dark Energy Task Force
Stage IV projects within the figure of merit framework [Chang et al., 2008, Albrecht et al., 2006]. In this paper we present projected constraints on modified gravity models from 21 cm intensity mapping. In Section 2.3 we describe the modified gravity models considered. In Section 2.4 we discuss the observational signatures accessible to 21 cm intensity mapping, and calculate the effects of modified gravity on these signatures. In Section 2.5 we present statistical analysis and results and we conclude in Section 2.6. We assume a fiducial ΛCDM cosmology with WMAP5 cosmological parameters:
Ωm = 0.258, Ωb = 0.0441, ΩΛ = 0.742, h = 0.719, ns = 0.963 and log10 As = −8.65 [Komatsu et al., 2009]. We will follow the convention that \omega_x \equiv h^2\Omega_x.
2.3 Modified Gravity Models
Here we describe some popular modified gravity models for which projected constraints will later be derived. Throughout we will use units in which G = c = ~ = 1 and will be using a metric with mostly negative signature: (+, −, −, −).
2.3.1 f(R) Models
In the f(R) paradigm, modifications to gravity are introduced by changing the standard Einstein-Hilbert action, which is linear in R, the Ricci scalar. The modifications are made by adding an additional nonlinear function of R [Starobinsky, 1980, Capozziello, 2002, Carroll et al., 2004]
S = \int d^4x\,\sqrt{-g}\left[\frac{R + f(R)}{16\pi} + \mathcal{L}_m\right], \quad (2.1)
where Lm is the matter Lagrangian. See Sotiriou and Faraoni [2010] for a comprehensive review of f(R) theories of gravity. The choice of the function f(R) is arbitrary, but in practice it is highly constrained by precise solar system and cosmological constraints, as well as stability criteria [Nojiri and Odintsov, 2003, Sawicki and Hu, 2007] (see below). In this paper, we choose parameterizations of f(R) such that it asymptotes to a constant for a certain choice of parameters and thus approaches the fiducial ΛCDM. In general, f(R) models have enough freedom to mimic exactly the ΛCDM expansion history and yet still impose a significant modification to gravity [Nojiri and Odintsov, 2006, Song et al., 2007]. As such, probes of the expansion history are less constraining
than probes of structure growth, which will be evident in the constraints presented in later sections.
Variation of the above action yields the modified Einstein Equations
G_{\mu\nu} + f_R R_{\mu\nu} - \left(\frac{f}{2} - \Box f_R\right)g_{\mu\nu} - \nabla_\mu\nabla_\nu f_R = 8\pi T_{\mu\nu}, \quad (2.2)
where fR ≡ df(R)/dR, a convention that will be used throughout. f(R) gravity is equivalent to a scalar-tensor theory [Nojiri and Odintsov, 2003, Chiba, 2003] with the
scalar field fR having a mass and potential determined by the functional form of f(R). The field has a Compton wavelength given by its inverse mass
\lambda_C = \frac{1}{m_{f_R}} = \sqrt{3 f_{RR}}. \quad (2.3)
The main criterion for stability of the f(R) model is that the mass squared of the fR
field is positive, i.e. fRR > 0. In most cases, this simply corresponds to a sign choice for
the field fR (specifically for the model we consider below, fR0 is constrained to be less than 0).
On scales smaller than λC, gravitational forces are enhanced by a factor of 4/3, while they reduce
to unmodified gravity on larger scales. The reach of the modified forces λC generically leads to a scale-dependent growth in f(R) models.
While the dynamics are significantly changed in f(R), the relation between matter and the lensing potential is unchanged up to a rescaling of the gravitational constant by the linear contribution in f. The fractional change is of order the background field value f̄_R ≡ f_R(R̄) ≪ 1, where R̄ is the background curvature scalar.
Proceeding further requires a choice of the functional form for f. We consider a functional form which is representative of many other cases.
Hu and Sawicki [2007] (HS) proposed a simple functional form for f(R), which can be written as
f(R) = -R_0\,\frac{c_1(R/R_0)^n}{c_2(R/R_0)^n + 1}, \quad (2.4)
where we have used the value of the scalar curvature in the background today, R_0 \equiv \bar R|_{z=0}, for convenience. This three-parameter model passes all stability criteria for positive n, c_1 and c_2. One parameter can be fixed by demanding the expansion history to be close (within observational limits) to ΛCDM. In this case, Equation 2.4 can be conveniently
reparametrized and approximated by
f(R) \approx -2\Lambda - \frac{f_{R0}}{n}\,R_0\left(\frac{R_0}{R}\right)^n. \quad (2.5)
Here Λ and fR0—the value of the fR field in the background today—have been used
to parameterize the function in lieu of c1 and c2. This approximation is valid as long
as |fR0| ≪ 1, which is necessary to satisfy current observational constraints [Hu and Sawicki, 2007, Schmidt et al., 2009b]. While Λ is conceptually different from vacuum energy, it is mathematically identical and will thus be absorbed into the right hand side
of the Friedmann equation and parameterized by ΩΛ. In quoting constraints, we will marginalize over this parameter as it is of no use in identifying signatures of modified gravity. The parameter fR0 can be thought of as controlling the strength of modifications to gravity today, while higher n pushes these modifications to later times. The effects of changing these parameters are discussed in greater detail in Hu and Sawicki [2007].
Allowed f(R) models exhibit the so-called chameleon mechanism: the fR field becomes very massive in dense environments and effectively decouples from matter. This effect is active whenever the Newtonian potential is of order the background fR field. Since cosmological potential wells are typically of order 10^−5 for massive halos, the chameleon effect becomes important if |fR| ≲ 10^−5. If the background field is ∼ 10^−7 or smaller, a large fraction of the collapsed structures in the universe are chameleon-screened, so that the model becomes observationally indistinguishable from ΛCDM. Since the chameleon effect will affect the formation of structure, standard fitting formulas based on ordinary GR simulations, such as those mapping the linear to the nonlinear power spectrum, cannot be used for these models. Recently, however, self-consistent N-body simulations of f(R) gravity have been performed which include the chameleon mechanism [Oyaizu, 2008, Oyaizu et al., 2008, Schmidt et al., 2009a]. We will use the simulation results for forecasts of weak lensing in the nonlinear regime below. It should be noted that f(R) models are not without difficulties. In particular, an open issue is the problem of potential unprotected singularities [Abdalla et al., 2005, Frolov, 2008, Nojiri and Odintsov, 2008].
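The quantities above are simple to evaluate numerically. A hedged sketch (illustrative, not analysis code from this chapter): the functions below implement the approximate form of Equation 2.5 rather than the exact Equation 2.4, measure curvature in units of R0, and take the background value R0 ≈ 9 H0² appropriate to the fiducial ΛCDM parameters.

```python
import numpy as np

# Illustrative evaluation of the Hu & Sawicki model in the high-curvature
# approximation (Equation 2.5).  Here r = R/R0 and Lambda is in units of
# R0; stability requires fR0 < 0.

def f_hs(r, f_r0, n, lam):
    """f(R)/R0 from Equation 2.5; approaches -2*Lambda/R0 at high curvature."""
    return -2.0 * lam - (f_r0 / n) * r**-n

def f_r(r, f_r0, n):
    """The scalar field fR = df/dR; equals fR0 in the background today (r = 1)."""
    return f_r0 * r ** -(n + 1)

def compton_wavelength(r, f_r0, n, r0):
    """lambda_C = sqrt(3 fRR) of Equation 2.3, in units of 1/H0.

    r0 is the background curvature today in units of H0^2 (about 9 for
    the fiducial LCDM parameters)."""
    f_rr = -(n + 1) * f_r0 * r ** -(n + 2) / r0
    return np.sqrt(3.0 * f_rr)
```

For |fR0| = 10^−6 and n = 1 this gives λC below 10^−3 c/H0 (a few Mpc), and fR drops steeply with increasing curvature, the behaviour underlying the chameleon screening discussed above.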
2.3.2 DGP Braneworld
A theory of gravity proposed by Dvali, Gabadadze and Porrati (DGP) assumes that our four dimensional universe sits on a brane in five dimensional Minkowski space [Dvali et al., 2000]. On small scales gravity is four dimensional but on larger scales it becomes fully
five dimensional. Here we parameterize DGP by rc, the scale at which gravity crosses over in dimensionality. The DGP model has two branches depending on the embedding of the brane in 5D space. In the self-accelerating branch, the universe accelerates without
need for a cosmological constant if rc ∼ 1/H0 [Deffayet, 2001, Deffayet et al., 2002]. In this branch, assuming a spatially flat Universe for now, the modified Friedmann equation is given by
H^2 - \frac{H}{r_c} = \frac{8\pi}{3}\,\bar\rho, \quad (2.6)
which clearly differs from ΛCDM. Thus, in contrast to the other models considered here, DGP without a cosmological constant does not reduce to ΛCDM and it is possible to completely rule out this scenario (whereas the others can only be constrained). In fact DGP (without a cosmological constant) has been shown to be in conflict with current data [Fang et al., 2008]. It is presented here largely for illustrative purposes.
On scales much smaller than rc, gravity is four-dimensional but not GR. On these scales, DGP can be described as an effective scalar-tensor theory [Koyama and Maartens, 2006, Koyama and Silva, 2007, Scoccimarro, 2009]. The massless scalar field, the brane- bending mode, is repulsive in the self-accelerating branch of DGP. Hence, structure for- mation is slowed in DGP when compared to an effective smooth Dark Energy model with the same expansion history. While the growth of structure is thus modified in DGP
even on scales much smaller than rc, gravitational lensing is unchanged. In other words, the relation between matter overdensities and the lensing potential is the same as in GR [Lue et al., 2004]. As in f(R), the DGP model contains a nonlinear mechanism to restore GR locally. This Vainshtein mechanism is due to self-interactions of the scalar brane-bending mode which generally become important as soon as the density field becomes of order unity. In the Vainshtein regime, second derivatives of the field saturate, and thus modified gravity effects are highly suppressed in high-density regions [Lue et al., 2004, Koyama and Silva, 2007, Schmidt, 2009]. We will only consider linear predictions for the DGP model here.
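In the spatially flat case the positive root of Equation 2.6 gives H(a) in closed form, which makes the expansion history trivial to evaluate. A sketch in units of H0 (illustrative; radiation is neglected, and Ω_rc ≡ 1/(4 rc² H0²) is an assumed shorthand fixed by requiring H(a = 1) = H0):

```python
import numpy as np

# Illustrative expansion history for flat, self-accelerating DGP
# (Equation 2.6 with zero curvature, radiation neglected).  In units of
# H0, Omega_rc = 1/(4 rc^2 H0^2) is fixed by the condition E(1) = 1,
# which gives Omega_rc = ((1 - Omega_m)/2)^2.

def dgp_hubble(a, omega_m):
    """E(a) = H(a)/H0 from the positive root of Equation 2.6."""
    omega_rc = ((1.0 - omega_m) / 2.0) ** 2
    return np.sqrt(omega_rc) + np.sqrt(omega_rc + omega_m / a**3)
```

Note the self-acceleration: as a → ∞, E(a) tends to the constant 2√Ω_rc = 1 − Ωm, i.e. the expansion approaches a de Sitter phase with H = 1/rc, with no cosmological constant in sight.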
2.4 Observational Signatures
In this Section we describe the observational signatures available to 21 cm intensity mapping. We also give details on calculating the observables within modified gravity models. We consider two types of measurements: the Baryon Acoustic Oscillations and weak gravitational lensing. For the fiducial survey, we assume a 200 m × 200 m cylindrical telescope, as in Chang
et al. [2008]. We will also present limited results for a 100 m×100 m cylindrical telescope to illustrate effects of reduced resolution and collecting area on the results. This latter case is representative of first generation projects [Seo et al., 2010]. In the 200 m case we assume 4000 receivers, and in the 100 m case 1000 receivers. We assume either telescope covers 15000 sq. deg. over 4 years. We assume neutral hydrogen fraction and the bias
remain constant with ΩHI = 0.0005 today and b = 1. The object number density is assumed to be n̄ = 0.03 per cubic h^−1 Mpc (effectively no shot-noise, as should be the case in practice [Chang et al., 2008]).
2.4.1 Baryonic acoustic oscillation expansion history test
Acoustic oscillations in the primordial photon-baryon plasma have ubiquitously left a distinctive imprint in the distribution of matter in the universe today. This process is understood from first principles and gives a clean length scale in the universe’s large scale structure, largely free of systematic uncertainties and calibrations. This can be used to measure the global cosmological expansion history through the angular diameter
distance, dA, and Hubble parameter, H, vs redshift relation. The detailed expansion and acceleration will differ between pure cosmological constant and modified gravity models. We use essentially the method of Seo and Eisenstein [2007] for estimating distance errors obtainable from a BAO measurement, including 50% reconstruction of nonlinear degradation of the BAO feature. We assume the frequency range corresponding to z < 2.5 is covered (the lower z end should be covered by equivalent galaxy redshift surveys if not a 21 cm survey). For the sky area and redshift range surveyed, the 200 m telescope is nearly equivalent to a perfect BAO measurement. The limited resolution and collecting area of the 100 m telescope substantially degrades the measurement at the high-z end. The expansion history for modified gravity models can be calculated in an analogous way to that in General Relativity. The Friedmann equation in DGP, Equation 2.6, can be written as
H^2 = -\frac{k}{a^2} + \left(\frac{1}{2r_c} + \sqrt{\frac{1}{4r_c^2} + \frac{8\pi\bar\rho}{3}}\right)^2, \quad (2.7)
where k is the curvature, and rc is the crossover scale. It is convenient to introduce the parameter \omega_{r_c} \equiv 1/(4r_c^2), which stands in for rc. This equation can be solved numerically to calculate the observable quantities. We now calculate the expansion history in the HS f(R) model using a perturbative
framework which is well suited for calculating constraints on fR0. Working in the conformal gauge and mostly negative signature, we start with the modified Einstein's Equation (2.2). At zeroth order the left hand side of the 00 component contains the modified
Friedmann equation
H^2 = \frac{8\pi\bar\rho}{3} + f_{R0}\,g_n(a,\dot a,\ddot a,\dddot a), \quad (2.8)
where ρ̄ is the average density (including contributions from Λ), the over-dot represents a conformal derivative and
g_n \equiv -\frac{1}{f_{R0}\,a^2}\left[\frac{(\bar f+2\Lambda)a^2}{6} + f_R\left(\frac{\dot a^2}{a^2} - \frac{\ddot a}{a}\right) + 6 f_{RR}\left(\frac{\dddot a\,\dot a}{a^4} - \frac{3\ddot a\,\dot a^2}{a^5}\right)\right]. \quad (2.9)
For verifiability we quote
g_1 = \frac{a^2 R_0^2\left(2a\ddot a^2 - 7\ddot a\dot a^2 + 2\dddot a\,a\dot a\right)}{36\,\ddot a^3}. \quad (2.10)
Evaluating Equation 2.8 at the present epoch yields the modified version of the standard constraint
h^2 = \omega_m + \omega_r + \omega_k + \omega_\Lambda + f_{R0}\,g_{n0}. \quad (2.11)
Note that the modified version of the Friedmann equation is third order instead of first order; however, it has been shown that the expansion history stably approaches that of ΛCDM for vanishing fR0 [Hu and Sawicki, 2007]. For observationally allowed cosmologies, |fR0| ≪ 1, we expand
f_{R0}\,g_n = f_{R0}\,g_n(\tilde a,\dot{\tilde a},\ddot{\tilde a},\dddot{\tilde a}) + O(f_{R0}^2), \quad (2.12)
where ã is the solution to the standard GR Friedmann equation.
By using Equation 2.12 in Equation 2.8 and keeping only terms linear in f_{R0}, the expansion history can be calculated in the regular way, along with the observable quantities d_A(z) and H(z). For small f_{R0} this agrees well with the calculation in Hu and Sawicki [2007], where the full third-order differential equation was integrated.

In calculating the Fisher matrix, this treatment is exact because the Fisher matrix depends only on the first derivative of the observables with respect to the model parameters, evaluated at the fiducial model.
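The statement that the Fisher matrix needs only first derivatives at the fiducial model can be made concrete. A minimal sketch, assuming uncorrelated Gaussian errors on the observables (this is an illustration of the formalism, not the thesis code):

```python
def fisher_matrix(model, theta0, sigmas, steps):
    """Gaussian Fisher matrix F_ij = sum_a (dO_a/dtheta_i)(dO_a/dtheta_j)/sigma_a^2,
    using central finite differences about the fiducial parameters theta0.
    `model` maps a parameter list to a list of observables; `sigmas` are the
    per-observable errors; `steps` are the finite-difference step sizes."""
    n = len(theta0)
    derivs = []
    for i in range(n):
        up = list(theta0); up[i] += steps[i]
        dn = list(theta0); dn[i] -= steps[i]
        o_up, o_dn = model(up), model(dn)
        derivs.append([(u - d) / (2 * steps[i]) for u, d in zip(o_up, o_dn)])
    return [[sum(derivs[i][a] * derivs[j][a] / sigmas[a] ** 2
                 for a in range(len(sigmas)))
             for j in range(n)]
            for i in range(n)]
```

For example, a single observable O = θ² with σ = 1 at fiducial θ = 2 gives dO/dθ = 4 and hence F = 16; the inverse Fisher matrix then gives the forecast parameter covariance.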
2.4.2 Weak Lensing
A second class of observables measures the spatial perturbations in the gravitational metric. Modified gravity will change the strength of gravity on large scales and thus modify the growth of cosmological structure. Weak gravitational lensing, the gravitational bending of source light by intervening matter, is a probe of this effect.

Weak lensing measures the distortion of background structures as their light propagates to us. Here, the background structure is the 21 cm emission from unresolved sources. While light rays are deflected by gravitational forces, this deflection is not directly observable, since we do not know the primary unlensed 21 cm sky. However, weak lensing will induce correlations in the measured 21 cm background, since neighbouring rays pass through common lens planes. While the deflection angles themselves are small (of order arcseconds), the deflections are coherent over scales of arcminutes. In this way, the lensing signal can be extracted statistically using quadratic estimators [Lu et al., 2010]. Given the smallness of the lensing effect, high resolution (a high equivalent number density of "sources") is necessary to detect the effect.

The weak lensing observable that is predicted by theory is the power spectrum of the convergence κ. It is given by
C^{\kappa\kappa}(\ell) = \left(\frac{3}{2}\Omega_m H_0^2\right)^2 \int_0^{\chi_s} \frac{d\chi}{\chi^2}\, \frac{W_L^2(\chi)}{a^2(\chi)}\, \epsilon^2(\chi)\, P(\ell/\chi; \chi), \qquad (2.13)

where χ denotes comoving distance, P(k, χ) is the (linear or nonlinear) matter power spectrum at the given redshift, and we have assumed flat space. The lensing weight function W_L(χ) is given by:
W_L(\chi) = \int_{z(\chi)}^{\infty} dz_s\, \frac{\chi(z_s) - \chi}{\chi(z_s)}\, \frac{dN}{dz}(z_s). \qquad (2.14)
Here, dN/dz is the redshift distribution of source galaxies, normalized to unity. The factor ε(χ) in Equation 2.13 encodes possible modifications to the Poisson equation relating the lensing potential to matter (Section 2.3). In f(R), it is given by ε(χ) = (1 + f_R(χ))^{-1}, while ε = 1 for GR as well as DGP. Note that for viable f(R) models, ε − 1 ≲ 0.01, so the effect of ε on the lensing power spectra is very small.

The CAMB Sources module [Lewis et al., 2000, Lewis and Challinor, 2007] was used to calculate the lensing convergence power spectrum in flat ΛCDM models. The HALOFIT [Smith et al., 2003] interface for CAMB was used for calculations that include lensing at nonlinear scales.

For the modified gravity models in the linear regime, the convergence power spectra were calculated using the Parametrized Post-Friedmann (PPF) approach [Hu and Sawicki, 2007] as in Schmidt [2008]. Briefly, the PPF approach uses an interpolation between super-horizon scales and the quasi-static limit. On super-horizon scales (k ≪ aH), specifying the background expansion history, together with a relation between the two metric potentials, already determines the evolution of metric and density perturbations. On small scales (k ≫ aH), time derivatives in the equations for the metric perturbations can be neglected with respect to spatial derivatives, leading to a modified Poisson equation for the metric potentials. The PPF approach uses a simple interpolation scheme between these limits, with a few fitting parameters adjusted to match the full calculations [Hu and Sawicki, 2007]. The full calculations are reproduced to within a few percent accuracy. We use the transfer function of Eisenstein and Hu [1998] to calculate the ΛCDM power spectrum at an initial redshift of z_i = 40, where modified gravity effects are negligible, and evolve forward using the PPF equations.

For the f(R) model, we also calculate predictions in the nonlinear regime. For these, we use simulations of the HS model with n = 1 and f_{R0} values ranging from 10^{-6} to 10^{-4}. We use the deviation ΔP(k)/P(k) of the nonlinear matter power spectrum measured in f(R) simulations from that of ΛCDM simulations with the same initial conditions [Oyaizu et al., 2008]. This deviation is measured more precisely than P(k) itself. We then spline-interpolate the measurements of ΔP(k)/P(k) for k = 0.04–3.1 h/Mpc and at scale factors a = 0.1, 0.2, ..., 1.0, and multiply the standard nonlinear ΛCDM prediction (HALOFIT) by this value. For values of k > 3.1 h/Mpc, we simply set ΔP(k) = 0. However, for the angular scales and redshifts considered here (ℓ < 600, see below), such high values of k do not contribute significantly.

One might be concerned that this mixing of methods for calculating the lensing spectrum might artificially exaggerate the effects of modified gravity if these methods do not agree perfectly.
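The multiplicative ΔP/P construction described above can be sketched as follows. We use piecewise-linear rather than spline interpolation for brevity, and the tabulated values below are placeholders, not the Oyaizu et al. [2008] measurements:

```python
def interp_linear(x, xs, ys):
    """Piecewise-linear interpolation of (xs, ys); clamps to the endpoints
    outside the tabulated range (the thesis uses a spline instead)."""
    if x <= xs[0]:
        return ys[0]
    if x >= xs[-1]:
        return ys[-1]
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])

def p_fofr(k, p_lcdm_nl, k_tab, dp_over_p_tab, k_max=3.1):
    """Nonlinear f(R) power: the HALOFIT-style LCDM prediction times
    (1 + Delta P / P), with the enhancement set to zero for k > k_max
    (in h/Mpc), as described in the text."""
    if k > k_max:
        return p_lcdm_nl
    return p_lcdm_nl * (1.0 + interp_linear(k, k_tab, dp_over_p_tab))
```

In practice the enhancement table is two-dimensional (in k and scale factor a); the sketch suppresses the a-dependence for clarity.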
While the spectra calculated for the fiducial ΛCDM model differed by up to a percent between these methods, presumably due to slight differences in the transfer function, this should have no effect on the results. Any direct comparisons between spectra (for example, finite-difference derivatives) are made between spectra calculated in the same manner. Note that the Fisher matrix depends only on the first derivative of the observables with respect to the parameters, and no cross derivatives are needed.

The lensing spectra were not calculated for non-flat models, but it is expected that the CMB and BAO are much more sensitive to the curvature, and as such the lensing spectra are relatively unaffected. Formally, we are assuming that

\frac{\sigma_{\omega_k}}{\sigma_{C^{\kappa\kappa}}} \left|\frac{\partial C^{\kappa\kappa}}{\partial \omega_k}\right| \ll 1.
Reconstructing weak lensing from 21 cm intensity maps involves the use of quadratic estimators to estimate the convergence and shear fields. The accuracy with which this can be done increases with information in the source maps; however, this information saturates at small scales due to nonlinear evolution. As such, one cannot improve the lensing measurement indefinitely by increasing resolution, and the experiments considered here extract much of the available information within the redshift range considered.

The accuracy with which the convergence power spectrum can be reconstructed from 21 cm intensity maps was derived in Lu et al. [2010], where the effective lensing galaxy density was calculated at redshifts 1.25, 3 and 5 (see Figure 7 and Table 2 therein). The effective volume galaxy density was corrected for the finite resolution of the experiment considered here. It was then interpolated, using a piecewise power law, and integrated from redshift 1 to 2.5 to obtain an effective area galaxy density of n_g/\sigma_e^2 = 0.37 arcmin^{-2}. The parameter \sigma_e^2 is the variance in the intrinsic galaxy ellipticity, which is only used here for comparison with optical weak lensing surveys. From the effective galaxy density, the error on the convergence power is given by
\Delta C^{\kappa\kappa}(\ell) = \sqrt{\frac{2}{(2\ell + 1) f_{\rm sky}}} \left( C^{\kappa\kappa}(\ell) + \frac{\sigma_e^2}{n_g} \right), \qquad (2.15)

where f_sky is the fraction of the sky surveyed.

The galaxy distribution function dN/dz used to calculate the theoretical curves (from Equation 2.13) should follow the effective galaxy density. Instead, for simplicity, a flat step function was used, with the distribution function constant from redshift 1 to 2.5 and zero elsewhere. While the difference between these distributions would have an effect on the lensing spectra, the effect on differences of spectra when varying parameters is expected to be negligible. Our approximation is also conservative, since the proper distribution function is more heavily weighted toward high redshift. Rays travelling from high redshift will be affected by more intervening matter and thus experience more lensing. This would increase the lensing signal, allowing a more precise measurement.

Figure 2.1 shows the lensing spectra for the fiducial cosmology and a modified gravity model, including both linear and nonlinear calculations. The linear regime is taken to be up to ℓ = 140 for projected constraints. For calculations including weak lensing in the nonlinear regime, C^{κκ}(ℓ) up to ℓ = 600 is used for the larger telescope. Beyond this scale the model used for lensing error bars is not considered accurate at the shallowest redshifts in the source window [Lu et al., 2010]. This cut-off coincides with the scale at which information in the source structures saturates due to nonlinear evolution in standard gravity (although it is also not far from the resolution limit of the experiment). We speculate that a similar phenomenon would occur in modified gravity, and smaller scales are not expected to carry significant additional information. Note that it is the source structures in which information saturates; at smaller scales the lensing spectrum would continue to carry information [Dore et al., 2009] if it could be reconstructed. For the smaller telescope the scale is limited to ℓ < 425 by the resolution at the high end of the redshift window. If the redshift window were subdivided into narrower bins, it would be possible to use information at scales down to ℓ ≈ 1000 in the centre bins, as at these redshifts the telescope resolutions are better and structures are less nonlinear. However, considering tomographic information is beyond the scope of this work. It is noted that these scales are very large by weak lensing standards, where optical surveys typically make detections down to ℓ of order 10^5.

Figure 2.1: The weak lensing convergence power spectra for ΛCDM and the HS f(R) model with n = 1 and f_{R0} = 10^{-4}, showing both linear and nonlinear calculations. The galaxy distribution function is flat between z = 1 and z = 2.5.
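Equation 2.15 is straightforward to evaluate. A small sketch follows; the f_sky value and the arcmin-to-steradian unit handling are our assumptions for illustration:

```python
import math

def delta_cl_kappa(ell, cl_kappa, f_sky=0.5, n_eff=0.37):
    """Per-multipole error on the convergence power spectrum (cf. Eq. 2.15):
    sqrt(2 / ((2l+1) f_sky)) * (C_l + sigma_e^2 / n_g).
    n_eff is the effective density n_g / sigma_e^2 in arcmin^-2, so the
    shot-noise term sigma_e^2 / n_g is 1 / n_eff, converted to steradians."""
    arcmin2_per_sr = (180.0 * 60.0 / math.pi) ** 2
    shot = 1.0 / (n_eff * arcmin2_per_sr)  # sigma_e^2 / n_g in sr
    return math.sqrt(2.0 / ((2 * ell + 1) * f_sky)) * (cl_kappa + shot)
```

In a real forecast one would further bin in ℓ, which reduces the error by the square root of the number of multipoles per band.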
2.4.3 External Priors from Planck
While the CMB is not sensitive to the late-time effects of modified gravity (except through the integrated Sachs-Wolfe effect), it is invaluable for constraining other parameters and breaking degeneracies. As such, projected information from the Planck experiment is included. The Planck covariance matrix used here is given in McDonald and Eisenstein [2007, Table II]. All late-time cosmological parameters (including the curvature) are marginalized over, removing information contained in the ISW effect and ensuring that sensitivity to f(R) is entirely from the 21 cm tests below. The only remaining parameter that is related to the late-time expansion is θ_s, the angular size of the sound horizon, which is then used as a constraint on the parameter sets of the modified gravity models.
2.5 Results
To quantify the projected constraints on f(R) models, the Fisher matrix formalism is employed. The HS f(R) model reduces to the fiducial model for vanishing f_{R0} and any value of n. Thus the Fisher matrix formalism is used to project constraints on f_{R0} for given values of n. In the case of DGP, which does not reduce to the fiducial model, it is shown that a measurement consistent with the fiducial model cannot be consistent with DGP for any parameter set. Unless otherwise noted, we account for freedom in the full cosmological parameter set h, ω_m, ω_b, ω_k, A_s and n_s, representing the Hubble parameter; the physical matter, baryon and curvature densities; and the amplitude and spectral index of primordial scalar fluctuations, respectively.

Within the f(R) models, the fiducial model is a special point in the parameter space, as there are no modifications to gravity. As such, one cannot in general expect perturbations to observables to be linear in the f(R) parameter f_{R0}, an assumption implicit in the Fisher matrix formalism. This assumption does seem to hold for the expansion history, where our first-order perturbative calculation agrees with the full solution to the modified Friedmann Equations calculated in Hu and Sawicki [2007]. However, this is not the case for weak lensing. For each f(R) model, the lensing spectrum was calculated for several values of f_{R0}. It was observed that enhancements to the lensing power spectrum go as
C^{\kappa\kappa}(\ell) - C^{\kappa\kappa}_{\rm fiducial}(\ell) \sim (f_{R0})^{\alpha(\ell)},

with α(ℓ) in the 0.5–0.7 range. This is because the reach of the enhanced forces in f(R) is a power law in f_{R0}, following Equation (2.3), and the enhancement of the power spectrum for a given mode k roughly scales with the time that this mode has been within the reach of the enhanced force. Because of this behaviour, the constraints derived within the
Fisher matrix formalism depend on the step size in f_{R0} used for finite differences. To correct for this, we use a step size that is dependent on the final constraint. The weak lensing Fisher matrices were calculated for f_{R0} step sizes of 10^{-3}, 10^{-4} and 10^{-5}. These were then interpolated, using a power law, such that the ultimate step size used for finite differences is roughly the quoted constraint on the modified gravity
95% confidence HS |f_{R0}| upper limits

             n = 1      NL WL      n = 2      n = 4
  BAO        1.5e-02    ~          1.8e-02    3.0e-02
  WL         2.3e-03    4.3e-05    4.0e-03    8.6e-03
  BAO+WL     5.0e-05    8.9e-06    9.7e-05    4.6e-04

Table 2.1: Projected constraints on f(R) models for various combinations of observational techniques, for a 200 m telescope. Constraints are the 95% confidence level upper limits and include forecasts for Planck. The nonlinear results (column marked NL WL) are for the HS model with n = 1. Results that make use of weak lensing with constraints above 10^{-3} are only order-of-magnitude accurate. The linear regime is taken to be ℓ < 140, with the nonlinear constraints extending up to ℓ = 600.
parameter. For instance, when the 95% confidence constraint on f_{R0} is quoted, the step size for finite differences is \Delta f_{R0} \approx 2\sigma_{f_{R0}}, where \sigma_{f_{R0}} is calculated from the interpolated Fisher matrix. This is expected to be valid down to step sizes at the 10^{-6} level, where the chameleon mechanism is important. As such, for constraints below 10^{-5} a step size of 10^{-5} is always used. Note that this is conservative, because an oversized finite-difference step always underestimates the derivative of a power law with a power less than unity. For constraints above the 10^{-3} level a step size of 10^{-3} is used, which is the largest modification to gravity simulated. These constraints are considered unreliable due to these difficulties. We reiterate that this only affects results that include weak lensing information. Likelihood contours remain perfect ellipses in this procedure (which is clearly inaccurate); however, the spacing between contours at different confidence levels is altered.

Figure 2.2 shows the projected constraints on the HS f(R) model with n = 1 for various combinations of observational techniques, and a (200 m)² telescope. The elements in the lensing Fisher matrix associated with the curvature are taken to be zero for the reasons given in Section 2.4.2. While this assumption is not conservative, it is expected to be valid, as the angular diameter distance as measured by the BAO is very sensitive to the curvature. In total three f(R) models were considered: HS with n = 1, 2, 4. The results are summarized in Table 2.1.

It was found that while weak lensing, in the linear regime, is very sensitive to the modifications to gravity, it is only barely capable of constraining f(R) models without separate information about the expansion history. Even with the inclusion of Planck forecasts, degeneracies with h and ω_k, the mean curvature, drastically increase the uncertainties on the modified gravity parameters.
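The constraint-dependent step-size prescription described above amounts to a fixed-point iteration: choose a step, forecast the constraint with it, set the step to roughly twice the resulting σ, and repeat. A schematic single-parameter version (the function, bounds, and tolerance are illustrative, not the thesis implementation):

```python
import math

def self_consistent_step(sigma_of_step, step0=1e-3, step_min=1e-5,
                         step_max=1e-3, tol=1e-3, max_iter=50):
    """Iterate the finite-difference step until step ~ 2*sigma(step), where
    sigma_of_step returns the forecast 1-sigma constraint computed with a
    given step.  The step is clamped to [step_min, step_max], mirroring the
    1e-5 floor and 1e-3 ceiling used in the text."""
    step = step0
    for _ in range(max_iter):
        new = min(max(2.0 * sigma_of_step(step), step_min), step_max)
        if abs(math.log(new / step)) < tol:
            return new
        step = new
    return step
```

Because the derivative of a power law with exponent below one is underestimated by an oversized step, clamping at step_min keeps the resulting constraint conservative.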
Indeed, these three parameters are more than 95% correlated (depending on the exact model and confidence interval). This of course brings into question the neglect of the ω_k terms in the weak lensing Fisher matrix.

Figure 2.2: Projected constraints on the HS f(R) model with n = 1 using several combinations of observational techniques (BAO, BAO+WL, NL WL, BAO+NL WL), for a 200 m telescope. All curves include forecasts for Planck. Allowed parameter values are shown in the |f_{R0}|–h plane at the 68.3% and 95.4% confidence levels. Results are not shown for "WL", which were calculated much less accurately (see text).

Figure 2.3: Same as Figure 2.2 but for a 100 m cylindrical telescope.

However, it is noted that in these cases the predicted limits on the curvature are
|ω_k| < 0.025 at 95% confidence. The current, model-independent, limits on the curvature using WMAP, SDSS and HST data are approximately half this value [Komatsu et al., 2011]. Our neglect of any direct probes of the expansion history for the Planck+WL constraints is clearly unrealistic; however, the constraints illustrate what is actually measured by weak lensing. In any case these degeneracies are broken once BAO measurements are included, and in this final case the modified gravity parameters are correlated with the other parameters by at most 35%. Also, considering lensing in the nonlinear regime breaks the degeneracy to a certain extent.

First-generation cylindrical telescopes will likely be smaller than the one considered above. To illustrate the differences in constraining ability, we now present a few results for a cylindrical radio telescope that is 100 m on a side. Reducing the resolution of the experiment degrades measurements in a number of ways. BAO measurements become less than ideal in the higher redshift bins. The smallest scale that can be considered for weak lensing drops to about ℓ = 425. A more important effect is that the lensing spectra cannot be as accurately reconstructed, dropping the effective galaxy density to n_g/\sigma_e^2 = 0.22 arcmin^{-2}. Figure 2.3 shows results analogous to Figure 2.2 but for a telescope with half the resolution.
Figure 2.4: Ratio of the coordinate d_A(z) (top) and the Hubble parameter H(z) (bottom) as predicted by the best-fit DGP model to the fiducial model. Error bars are from 21 cm BAO predictions. The fit includes BAO data available from the 200 m telescope and CMB priors on θ_s and ω_m.
To show that a set of measurements consistent with the fiducial model would be inconsistent with DGP we first fit DGP to the fiducial model’s CMB and BAO expansion history by minimizing
\tilde{\chi}^2 = (r_{\rm DGP} - r_{\rm fiducial})^T C^{-1} (r_{\rm DGP} - r_{\rm fiducial}), \qquad (2.16)
where r_DGP and r_fiducial are vectors of observable quantities as calculated in the DGP and fiducial models, and C is the covariance matrix. Here r includes the BAO d_A(z) and H(z), as well as Planck priors on ω_m and θ_s. Note that \tilde{\chi}^2 is not truly chi-squared distributed, since r_fiducial contains fiducial model predictions and is not randomly distributed like a real data set.
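The quadratic form in Equation 2.16 can be evaluated without forming C^{-1} explicitly, by solving the linear system C x = (r_DGP − r_fiducial). A small self-contained sketch (a real analysis would use a standard linear-algebra library):

```python
def chi2_tilde(r_model, r_fid, cov):
    """Quadratic form (r_model - r_fid)^T C^{-1} (r_model - r_fid), cf.
    Eq. 2.16, via Gauss-Jordan elimination on the system C x = d."""
    n = len(r_fid)
    d = [a - b for a, b in zip(r_model, r_fid)]
    # Augmented matrix [C | d] for solving C x = d.
    m = [list(cov[i]) + [d[i]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        div = m[col][col]
        m[col] = [v / div for v in m[col]]
        for row in range(n):
            if row != col:
                fac = m[row][col]
                m[row] = [a - fac * b for a, b in zip(m[row], m[col])]
    x = [m[i][n] for i in range(n)]
    return sum(di * xi for di, xi in zip(d, x))
```

Minimizing this quantity over the DGP parameter set then yields the best-fit values quoted below.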
Performing the fit yields DGP parameters: h = 0.677, ωm = 0.112, ωk = −0.0086 and
\omega_{r_c} = 0.067. Figure 2.4 shows the deviation of H and d_A, respectively, for the best-fit DGP model compared to the fiducial model. The fit gives \tilde{\chi}^2 = 332.8 despite there being only 16 degrees of freedom, and as such a measurement consistent with the fiducial model would thoroughly rule out DGP.

Figure 2.5: Weak lensing spectra for DGP and a smooth dark energy model with the same expansion history. DGP parameters are h = 0.665, ω_m = 0.116, ω_k = 0 and \omega_{r_c} = 0.06. Error bars represent the expected accuracy of the 200 m telescope.

In the case that expansion history measurements are consistent with DGP, the question arises as to whether DGP could be distinguished from a smooth dark energy model that had the same expansion history. The additional information in linear perturbations as measured by weak lensing allows DGP to be distinguished even from a dark energy model with an identical expansion history. Figure 2.5 shows the lensing spectra for a DGP cosmology similar to the best fit discussed above, as well as the dark energy model with the same expansion history, as in Fang et al. [2008].

In principle one should consider the small amount of freedom within the DGP parameter set that could be used to make the DGP spectrum better fit the dark energy spectrum. However, this is unlikely to significantly change the spectrum, as all relevant parameters are tightly constrained by the CMB and BAO. For example, it is clear from Figure 2.5 that the lensing spectra of the two models would agree better if the amplitude of primordial scalar perturbations were increased in the DGP model. However, Planck measurements would only allow an increase of order half a percent, while the disagreement is of order 10%. This is justified by the lack of correlations found in the f(R) Fisher matrices once all three observational techniques are included. In addition, we have not considered information from weak lensing in the nonlinear regime. Adding nonlinear scales would
only make our conclusion that DGP and smooth dark energy are distinguishable with these observations more robust.
2.6 Discussion
We have shown that the first generation of 21 cm intensity mapping instruments will be capable of constraining the HS f(R) model (with n = 1) down to a field value of |f_{R0}| ≲ 2 × 10^{-5} at 95% confidence (Figure 2.3). This is an order of magnitude tighter than constraints currently available from galaxy cluster abundance [Schmidt et al., 2009b]. Furthermore, model parameters in this regime are not ruled out by Solar System tests. In comparing Figures 2.2 and 2.3, it is clear that a more advanced experiment, with resolution improved by a factor of two, would further halve the allowed value of |f_{R0}|. It should be noted, however, that halving the allowed parameter space does not correspond to a factor of four increase in information. Deviations in the lensing spectrum scale sublinearly in the f(R) parameters, enhancing the narrowing of constraints as information is added (see Section 2.5).

While we have concentrated on a particular f(R) model, many viable functional forms for f(R) have been proposed in the literature [Nojiri and Odintsov, 2007, Starobinsky, 2007, Appleby and Battye, 2007]. The predictions for the growth of structure in these
different models agree qualitatively: the gravitational force is enhanced by a factor of 4/3 within λ_C, enhancing the growth on small scales. However, there are quantitative differences in the
model predictions due to the different evolution of λ_C over cosmic time. Our results for the HS model with different values of n should thus cover a range of different functional forms for f(R). Table 2.1 shows that our constraints do not depend very sensitively on the value of n. This is because the weak lensing measurements cover a wide range of scales as well as redshifts. Furthermore, it is straightforward to map the enhancement in the linear P(k, z) at given k and z from the HS model considered here to any other given model, to obtain approximate constraints for that model.

Future cluster constraints will almost certainly improve on the current limits of |f_{R0}| ≲ few × 10^{-4} [Schmidt et al., 2009b]. However, for smaller field values, the main effect of f(R) gravity shifts to lower mass halos, since the highest mass halos are chameleon-screened (see Fig. 2 in Schmidt et al. [2009a]). Hence, future cluster constraints will depend on the ability to accurately measure the halo abundance at masses around a few × 10^{14} M_⊙ and less. Furthermore, the constraints from cluster abundances depend sensitively on knowledge of the cluster mass scale, and are already systematics-dominated [Schmidt et al., 2009b]. Weak lensing constraints have a completely independent set of observational systematics, and are in principle less sensitive to baryonic or astrophysical effects. Thus, the forecasted constraints on modified gravity presented here are quite complementary to constraints from cluster abundances.

The processes that produce the BAO feature in the matter power spectrum are understood from first principles. In addition, the BAO length scale can be extracted even in the presence of large uncertainties in biases and mass calibrations. Likewise, weak lensing on large scales is well understood, with baryonic physics being much less important than on smaller scales [Zhan and Knox, 2004].
In addition, the dominant systematics present in optical weak lensing surveys are instrumental in nature and not intrinsic to the quantities being measured. While 21 cm intensity mapping is as yet untested, its instrumental systematics will be very different from those that affect the optical.

For this study, and more generally for cosmological models which substantially modify structure formation, the motivation for higher resolution comes not from improved BAO measurements but from better weak lensing reconstruction. Higher resolution not only makes weak lensing information available at higher multipoles, but improves the accuracy with which lensing can be reconstructed on all scales. The inclusion of lensing information in the nonlinear regime was crucial, and largely responsible for the competitiveness of these forecasts. As seen in Figure 2.1, much of the constraining power comes from multipoles in the nonlinear regime. It should be noted that for the higher resolution experiment considered, the minimum scale is limited not by the resolution at high redshift, but by the saturation of information in nonlinear source structures at low redshift [Lu et al., 2010].

Our constraints from lensing are conservative, since only one wide source redshift bin was considered, limited to ℓ < 600 as described above. To maximize information, the source redshift range could be split into multiple bins, properly considering the correlation in the lensing signal between them; a process known as lensing tomography. The low redshift bin would be limited as above, and the high redshift bin would be limited by the resolution to ℓ ≈ 850 at z = 2.5. However, in intermediate bins the lensing signal could be reliably reconstructed above ℓ ≈ 1000.

Unlike most smooth dark energy models, such as quintessence, constraints on the models considered here are chiefly sensitive to structure formation, as is clear from Figure 2.2.
These forecasts show that 21 cm intensity mapping is sensitive not only to a cosmology's expansion history through the BAO, but also to structure growth through weak lensing. The weak lensing measurements cannot compete with far-off space-based surveys like Euclid or JDEM, which will have galaxy densities of order 100 arcmin^{-2} [Albrecht et al., 2006] and resolution to far greater ℓ. However, cylindrical 21 cm experiments are realizable on a much shorter time scale and at a fraction of the cost. In addition, the measurements considered here are approaching the limit at which f(R) models can be tested. For |f_{R0}| much less than 10^{-6} the chameleon mechanism becomes important before there are observable modifications to structure growth, reducing the motivation to further study these models.

It has also been shown that, for these experiments, a BAO measurement consistent with ΛCDM would definitively rule out DGP without a cosmological constant as a cosmological model. Even in the case that a BAO measurement consistent with DGP is made, the model is still distinguishable from an exotic smooth dark energy model through structure growth. The former result is not surprising given that DGP is now in conflict with current data [Fang et al., 2008]. However, it is illustrative that a single experiment can precisely probe both structure formation and expansion history. Even a dark energy model that conspires to mimic DGP is, to a large extent, distinguishable.

We have studied the effects of modified gravity theories on observational quantities for future 21 cm surveys. Because these surveys measure the distribution of galaxies on large angular scales over large parts of the sky, they are well suited to measure the expected deviations relative to standard general relativity. We have computed the predictions of modified gravity in the linear and nonlinear regimes, and compared to the sensitivity of future surveys.
We find that a large part of parameter space can be tested.
Acknowledgements
We would like to thank Tingting Lu for helpful discussions. KM is supported by an NSERC Canadian Graduate Scholars-M scholarship. FS is supported by the Gordon and Betty Moore Foundation at Caltech. PM acknowledges support of the Beatrice D. Tremaine Fellowship.

Chapter 3
Primordial gravity wave fossils and their use in testing inflation
A version of this chapter was published in Physical Review Letters as “Primordial Gravity Wave Fossils and Their Use in Testing Inflation”, Masui, K. W. and Pen, U.-L., Vol. 105, Issue 16, 2010. Reproduced here with the permission of the APS.
3.1 Summary
A new effect is described by which primordial gravity waves leave a permanent signature in the large scale structure of the Universe. The effect occurs at second order in perturbation theory and is sensitive to the order in which perturbations on different scales are generated. We derive general forecasts for the detectability of the effect with future experiments, and consider observations of the pre-reionization gas through the 21 cm line. It is found that the Square Kilometre Array will not be competitive with current cosmic microwave background constraints on primordial gravity waves from inflation. However, a more futuristic experiment could, through this effect, provide the highest ultimate sensitivity to tensor modes and possibly even measure the tensor spectral index. It is thus a potentially quantitative probe of the inflationary paradigm.
3.2 Introduction
It has been proposed that redshifted 21 cm radiation, from the spin-flip transition in neutral hydrogen, might be a powerful probe of the early universe. The era before the first luminous objects reionized the universe, around redshift 10, contains most of
the observable volume of the universe, and 21 cm radiation is the only known probe of these so-called dark ages (see Furlanetto et al. [2006] for a review). The density of the hydrogen could be mapped in 3D, analogous to how the cosmic microwave background (CMB) is mapped in 2D. The wealth of obtainable statistical information may allow for the detection of many subtle effects which could probe the early universe. In particular, the primordial gravity wave background, also referred to as tensor perturbations, is of considerable cosmological interest.

Inflation robustly predicts the production of tensor perturbations with a nearly scale-free spectrum; however, their amplitude is essentially unconstrained theoretically. The amplitude of the tensor power spectrum is quantified by r, the tensor-to-scalar ratio. The current upper limit is r < 0.24 at 95% confidence [Komatsu et al., 2011]; however, upcoming CMB measurements will be sensitive down to r of a few percent [Burigana et al., 2010]. The current limits on r correspond to a characteristic primordial shear on the order of 10^{-5} per logarithmic interval of wavenumber.

Several probes of gravity waves using the pre-reionization 21 cm signal have been proposed. These include polarization [Lewis and Challinor, 2007] and redshift space distortions [Bharadwaj and Sarkar, 2009]. Dodelson et al. [2003] considered the weak lensing signature of gravity waves and found that the signal is sensitive to the so-called metric shear. This is closely related to the present work.

Here we describe a mechanism by which primordial gravitational waves may leave an imprint in the statistics of the large scale structure (LSS) of the universe. This signature becomes observable when the gravity wave enters the horizon and begins to decay.
3.3 Mechanism
In the following, Greek indices run from 0 to 3 and lower-case Latin indices from 1 to 3. Latin indices are always raised and lowered with Kronecker deltas. Commas denote partial derivatives, and an over-dot ($\dot{\ }$) represents a derivative with respect to the cosmological conformal time. Finally, we adopt a mostly positive metric signature $(-1, 1, 1, 1)$.

We start with an inflating universe with some distribution of previously generated tensor modes that are now super horizon (have wavelength much longer than the horizon scale). Scalar, vector, and smaller-scale tensor modes may exist, but their contribution to the metric is ignored. The line element is given by
$$ds^2 = a(\eta)^2 \left[ -d\eta^2 + (\delta_{ij} + h_{ij})\, dx^i dx^j \right], \qquad (3.1)$$
where $a$ is the scale factor, $\eta$ the conformal time, and a spatially flat background geometry has been assumed. The metric perturbations $h_{ij}$ are assumed to be transverse and traceless and thus contain only tensor modes. The elements of $h_{ij}$ are also assumed to be small, such that only leading-order terms need be retained. The assumption that all tensor modes under consideration are super horizon implies that $k_h \ll \dot{a}/a$, where $k_h$ denotes the wave numbers of the tensor modes. The frame in which the line element takes the form of Eq. 3.1 will hereafter be referred to as the cosmological frame (CF).
By the equivalence principle, it is possible to perform a coordinate transformation such that the space-time appears locally Minkowski at a point. New coordinates are defined in which the tensor modes are gauged away at the origin:
$$\tilde{x}^\alpha = x^\alpha + \frac{1}{2} h^\alpha{}_\beta\, x^\beta, \qquad (3.2)$$
where the elements $h_{0\alpha}$ are taken to be zero. The metric now takes the form (up to first order in $h_{ij}$)

$$ds^2 = a^2\left[-d\eta^2 + \delta_{ij}\,d\tilde{x}^i d\tilde{x}^j - \tilde{x}^c\,\partial_\alpha h_{\beta c}\,d\tilde{x}^\alpha d\tilde{x}^\beta\right]. \qquad (3.3)$$
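To make the gauge transformation concrete, the following sketch (Python with NumPy; the plus-polarization matrix and the amplitude are illustrative choices, not taken from the text) checks numerically that applying Eq. 3.2 at the origin removes the first-order perturbation from the spatial metric, leaving only an $O(h^2)$ residual:

```python
import numpy as np

# Spatial part of the CF metric at the origin: g_ij = delta_ij + h_ij, with
# h_ij a small, constant, transverse-traceless perturbation (illustrative).
eps = 1e-6
h = eps * np.array([[1.0,  0.0, 0.0],
                    [0.0, -1.0, 0.0],
                    [0.0,  0.0, 0.0]])  # plus polarization, propagation along z

g_cf = np.eye(3) + h

# Transformation (3.2): x~^i = x^i + (1/2) h^i_j x^j, so at the origin
# dx^i/dx~^j = delta^i_j - (1/2) h^i_j to first order in h.
jac = np.eye(3) - 0.5 * h

# Metric in the new (LFF) coordinates: g~_ab = (dx/dx~)^T g (dx/dx~)
g_lff = jac.T @ g_cf @ jac

# The residual is O(h^2), far below the O(h) = 1e-6 perturbation removed.
residual = np.abs(g_lff - np.eye(3)).max()
print(residual)
```

The check confirms that in the tilde coordinates the spatial metric is Kronecker delta to first order, as claimed below Eq. 3.3.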
This frame will be loosely referred to as the locally Friedmann frame (LFF), because in these coordinates the metric is locally that of an unperturbed FLRW Universe. We will give quantities in these coordinates a tilde ($\tilde{\ }$) to distinguish them from their counterparts in the CF. It is seen from Eq. 3.3 that the local effects of gravity waves are suppressed not only by the smallness of $h_{ij}$ but also by $k_h/k$, where $k = L^{-1}$ and $L$ is some length scale of interest. This will be important in justifying some later assumptions. Note that for super horizon gravity waves, temporal derivatives are much smaller than spatial ones.
On small scales, inflation generates scalar perturbations which are then carried to larger scales by the expansion. By the equivalence principle, physical processes on small scales cannot know about the long-wavelength tensor modes. As such, these small-scale scalar modes must be uncorrelated with the long-wavelength tensor modes. We assume statistical homogeneity and isotropy in the LFF, as would be expected from inflation. The power spectrum of scalar perturbations can then be written as a function of only the magnitude of the wave number, i.e., $\tilde{P}(\tilde{k}_a) = \tilde{P}(\tilde{k})$. This applies only within the local patch near the point where the tensor mode was gauged away. The average in the definition of the scalar power spectrum is over realizations of the scalar map, but not the tensor map.
In the CF, the isotropy is broken. Transforming back to cosmological coordinates maps $\tilde{k}_i \to k_i - k_j h^j{}_i/2$. The power spectrum becomes sheared:

$$P(k_a) = \tilde{P}(k) - \frac{k_i k_j h^{ij}}{2k}\,\frac{d\tilde{P}}{dk} + O\!\left(\frac{k_h}{k}\, h_{ij}\right) + O(h_{ij}^2). \qquad (3.4)$$
If the metric perturbations are not assumed to be traceless, the right-hand side of this equation gains an additional term proportional to the trace. This deviation from isotropy is not observable, since any possible observation would take place in the LFF. It is noted that the leading-order correction to the CF power spectrum is not suppressed by $k_h/k$. It is therefore not expected that the residual terms in the LFF metric (Eq. 3.3) can break isotropy to undo the CF anisotropy. If, however, it were the CF in which the power spectrum was isotropic, then there would be observable anisotropy in the LFF. This would be a violation of the equivalence principle, since an experiment local in both space and time would be able to detect the super horizon tensor modes by measuring the power spectrum of the locally generated scalar perturbations.

We would now like to evolve the system to some later time when observations can be made. Ignoring the internal dynamics of the scalar perturbations, we solve for their evolution as if they were embedded in a sea of test particles. This is trivial, since an object at coordinate rest in the CF will remain at rest for any time dependence of $h_{ij}$ (this is true at all orders). At some point well after inflation, when the universe is in its deceleration phase, the horizon will become larger than the length scale of the tensor modes. The tensor modes will then decay by redshifting, and after some period of time the metric perturbations $h_{ij}$ become negligible. The CF and LFF then become equivalent, and both correspond to the frame in which observations can be made. The distribution of test particles is the same as it initially was in the CF. As such, the initially physically isotropic power spectrum now contains a measurable local anisotropy given by Eq. 3.4. The values of the initial metric perturbations can be determined by measuring this distortion at any time in the future, constituting a fossil of the initial tensor modes.
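The shearing of Eq. 3.4 can be verified numerically. In this sketch (Python/NumPy), a hypothetical power-law spectrum stands in for $\tilde{P}(k)$, and the exact spectrum evaluated at the sheared wave vector is compared against the first-order expansion; the specific amplitude, spectral slope, and wave vector are illustrative:

```python
import numpy as np

# Toy isotropic spectrum in the LFF: P~(k) = A k^n (hypothetical power law)
A, n = 1.0, -2.0
P_iso = lambda k: A * k**n
dP_dk = lambda k: n * A * k**(n - 1)

eps = 1e-5
h = eps * np.array([[1.0, 0.0, 0.0],
                    [0.0, -1.0, 0.0],
                    [0.0, 0.0, 0.0]])  # plus polarization along z

k_vec = np.array([0.3, 0.5, 0.4])
k = np.linalg.norm(k_vec)

# Exact CF spectrum: P~ evaluated at the mapped wave vector k_i - h_ij k_j / 2
k_sheared = k_vec - 0.5 * (h @ k_vec)
P_exact = P_iso(np.linalg.norm(k_sheared))

# First-order expansion, Eq. (3.4), dropping the O(h^2) and O(k_h/k) terms
P_approx = P_iso(k) - (k_vec @ h @ k_vec) / (2 * k) * dP_dk(k)

print(abs(P_exact - P_approx) / P_exact)  # residual is O(h^2)
```

The fractional residual between the exact and expanded forms is second order in $h_{ij}$, confirming the quadrupolar shear term is the complete first-order effect.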
The scalar perturbations remain Gaussian but become non-stationary, and the trispectrum gains the corresponding terms. This is analogous to the apparent distortions expected in the CMB and 21 cm fields induced by gravitational lensing. Similarly, the bispectra of mixed scalars and tensors were calculated in Maldacena [2003], employing similar methodology to that presented here.

The effect described here is a second-order perturbation theory effect, in that it is a small effect due to tensor modes on the already small scalar perturbations. This coupling occurs in the initial conditions, not between the dynamics of the scalars and tensors. The simple argument presented above avoided the complication of a full second-order calculation, but it is expected that such calculations would yield the same results. Specifically, an expression agreeing with Eq. 3.4, to relevant order, was derived in Giddings and Sloth [2011a, Eq. 4.5] as part of a longer calculation.
3.4 Tests of inflation
The above arguments relied on perturbations on large scales being generated before perturbations on small scales. This is the case in any conceivable model of inflation; however, it need not be the case in all scenarios. As an illustrative example, in the cosmic defect scenario perturbations are generated on small scales and then causally transported to larger scales as the universe evolves. It is argued that in this scenario, tensor perturbations leave no fossils, i.e. the described effect does not occur. A detection of primordial tensors by another means (CMB B-modes, for example) with an observed lack of the corresponding fossils would provide a serious challenge to inflation.

The most specific prediction of single field inflation is the power spectrum of tensor modes, defined by

$$(2\pi)^3\, \delta(\mathbf{k}_a - \mathbf{k}'_a)\, P_h(k_a) \equiv \langle h_{ij}(\mathbf{k}_a)\, h^{ij}(\mathbf{k}'_a) \rangle. \qquad (3.5)$$
Given the amplitude of the scalar power spectrum $A_s$, the tensor power spectrum is fixed by a single parameter, the tensor-to-scalar ratio $r$. The shape of the spectrum is then nearly scale-free:

$$P_h = \frac{2\pi^2\, r A_s}{k^3} \left(\frac{k}{k_0}\right)^{n_t}. \qquad (3.6)$$
We follow the WMAP conventions for defining $P_h$, $A_s$ and $r$ [Komatsu et al., 2009]. The spectral index is fixed by the consistency relation, $n_t = -r/8$ [Liddle and Lyth, 2000]. The pivot scale is taken to be $k_0 = 0.002\ \mathrm{Mpc}^{-1}$ and we assume the WMAP7 central value for $A_s$ of $2.46 \times 10^{-9}$.

Because $r$ is likely small, any deviation from a scale-free spectrum will be difficult to measure, making the verification of the consistency relation correspondingly difficult. The CMB is sensitive primarily to large-scale tensor modes, with smaller-scale modes having decayed by recombination. Cosmic variance and lensing contamination will likely prevent a measurement of $n_t$ from the CMB, unless the lensing can be cleaned from the signal [Zhao and Baskaran, 2009]. Conversely, the amplitude of the fossil signal does not decay as the universe expands. It may thus be possible to make a measurement of the spectral index, provided $r$ is sufficiently large.
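For concreteness, the spectrum of Eq. 3.6 together with the consistency relation and the parameter values quoted above can be evaluated as follows (Python; the value $r = 0.1$ is purely illustrative, since $r$ is unconstrained theoretically):

```python
import numpy as np

# Tensor power spectrum of Eq. (3.6) with the single-field consistency
# relation n_t = -r/8.  A_s and k0 follow the values quoted in the text.
A_s = 2.46e-9          # WMAP7 scalar amplitude
k0  = 0.002            # pivot scale in Mpc^-1
r   = 0.1              # tensor-to-scalar ratio (illustrative, below r < 0.24)
n_t = -r / 8.0         # consistency relation

def P_h(k):
    """Tensor power spectrum P_h(k), Eq. (3.6); k in Mpc^-1."""
    return 2 * np.pi**2 * r * A_s / k**3 * (k / k0)**n_t

# At the pivot scale the dimensionless spectrum k^3 P_h / (2 pi^2) is r * A_s.
Delta2_pivot = k0**3 * P_h(k0) / (2 * np.pi**2)
print(Delta2_pivot, r * A_s)  # both 2.46e-10
```

The red tilt $n_t = -0.0125$ for this $r$ illustrates why the deviation from scale-freeness is so difficult to measure.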
3.5 Statistical detection in LSS
In practice, the tensor gravity wave fossils could be reconstructed by applying quadratic estimators to the density field. Aside from the increased dimensionality, this is identical to the manner in which lensing shear is reconstructed [Zaldarriaga and Seljak, 1999, Lu and Pen, 2007]. Rather than considering the statistics of such estimators, here we follow a simpler line of reasoning to approximate the accuracy to which the tensor parameter can be measured.
We begin by asking how well a long-wavelength tensor mode can be reconstructed from its effects on the scalar power spectrum (Eq. 3.4). The metric perturbations are assumed to be spatially constant and take the form
$$h_{ij} = h_+\, e^+_{ij}(\hat{z}) + h_\times\, e^\times_{ij}(\hat{z}), \qquad (3.7)$$
where $e^+_{ij}$ and $e^\times_{ij}$ are the polarization tensors, and the $\hat{z}$ direction of propagation is chosen for convenience. The uncertainty on the scalar power spectrum is
$$\left[\Delta P(k_a)\right]^2 = 2\left[P(k_a) + N\right]^2, \qquad (3.8)$$
where $N$ is the noise power. We use a Fisher matrix analysis to sum this information over all $k_a$ to determine the corresponding uncertainty on the shear $h_+$ and $h_\times$. Assuming an experiment whose noise is subdominant to sample variance ($N \ll P$), the resulting variance is inversely proportional to the number of modes surveyed: