FIRST GALAXY CLUSTERS DISCOVERED VIA THE

SUNYAEV-ZEL'DOVICH EFFECT

by

Zak Staniszewski

Submitted in Partial Fulfillment of the Requirements

For the Degree of Doctor of Philosophy

Dissertation Adviser: John E. Ruhl

Department of Physics

Case Western Reserve University

May 2010

CASE WESTERN RESERVE UNIVERSITY

SCHOOL OF GRADUATE STUDIES

We hereby approve the thesis/dissertation of

______

candidate for the ______degree *.

(signed)______(chair of the committee)

______

______

______

______

______

(date) ______

*We also certify that written approval has been obtained for any proprietary material contained therein.

To past, present, and future South Pole "Winter-Overs"

Contents

Contents i

List of Figures v

List of Tables viii

1 Introduction 3

1.1 Cosmological Introduction ...... 3
1.2 Big Bang Nucleosynthesis ...... 5
1.3 Cosmic Microwave Background ...... 6
1.4 Dark Energy ...... 8
1.5 The SZ Effect ...... 9
1.6 The South Pole Telescope ...... 13
1.7 Thesis Outline ...... 15

2 Telescope 17
2.1 Telescope Site ...... 17
2.2 Optics ...... 18
2.3 Cold Stop and Baffle ...... 20
2.4 Ground Shields ...... 22
2.5 Primary ...... 22
2.6 Secondary Mirror ...... 22
2.7 Receiver Cabin and Optics Cryostat ...... 23
2.8 Metrology and Pointing Hardware ...... 24

3 Receiver 26
3.1 Introduction ...... 26
3.2 Detectors ...... 26
3.3 Focal Plane Construction ...... 28
3.4 Readout ...... 28

4 Cold Secondary Cryostat 31
4.1 Baffle Overview ...... 32
4.2 Optical Design ...... 32
4.2.1 Absorber Testing ...... 33
4.2.2 Scattering ...... 35
4.3 Baffle and Optics Cryostat Mechanical Design ...... 41
4.3.1 Baffle Design ...... 42
4.3.2 Baffle and Radiation Shield Assembly ...... 45
4.3.3 Optics Cryostat Assembly ...... 49
4.3.4 Secondary Mirror Mount ...... 51
4.3.5 Pulse Tube ...... 52
4.3.6 Assembly Along With Receiver ...... 55
4.3.7 Vacuum Window and Filters ...... 56
4.4 Cold Stop Cryogenic Design ...... 59
4.4.1 Heat Loads ...... 60
4.4.2 First Stage Heat Loads ...... 61
4.4.3 Second Stage Loads ...... 63
4.4.4 Heat Load Results ...... 64
4.4.5 Gradients ...... 65
4.4.6 Cooling Time ...... 66
4.4.7 Temperature Oscillations ...... 67
4.5 Calibrator ...... 67
4.5.1 Calibrator Tasks ...... 69
4.5.2 Calibrator Hardware ...... 69
4.5.3 Calibrator Thermometry ...... 72
4.6 Summary ...... 75

5 Data Selection, Processing, and Map Making 76
5.1 Observations ...... 76
5.2 From Raw Data to Final Maps ...... 77
5.2.1 Pointing Reconstruction ...... 78
5.2.2 Beam Measurement ...... 80
5.2.3 Relative and Absolute Calibration ...... 80
5.2.4 Data Selection ...... 83
5.2.5 Time Stream Processing ...... 84
5.2.6 Mapmaking ...... 86

6 Cluster Finding and First Results 88
6.1 The Matched Filter ...... 88
6.2 SZ Cluster Templates ...... 89
6.3 Noise and Foreground Estimates ...... 90
6.4 Matched Filter Construction ...... 93
6.5 Matched Filter Application ...... 97
6.6 First Results ...... 98
6.6.1 Optical Confirmation and X-ray Counterparts ...... 100
6.7 Conclusions ...... 103

7 MCMC Cluster Finder 104
7.1 Introduction ...... 104
7.2 Algorithm ...... 105
7.2.1 Likelihood Expression ...... 106
7.2.2 MCMC Sampler ...... 108
7.3 Implementation ...... 109
7.3.1 Computational Considerations ...... 110
7.3.2 Evaluation Details ...... 111
7.3.3 Cluster Candidates ...... 113
7.4 Simulated Maps ...... 113
7.5 Performance ...... 114
7.5.1 Cluster Identification and Performance ...... 114
7.5.2 Flux Estimates ...... 117
7.6 Future ...... 119
7.6.1 Computational Optimizations ...... 121
7.6.2 Multiple Frequency MCMC Finder ...... 124
7.7 Conclusion ...... 125

8 Conclusion 126

Bibliography 128

A Thermometer Locations and Nominal Temperatures 137

List of Figures

1.1 Hubble diagram ...... 4

1.2 CMB blackbody from FIRAS and others ...... 7
1.3 CMB power spectrum ...... 9
1.4 SN1a data ...... 10
1.5 Cluster dN/dz ...... 12
1.6 SZ spectrum ...... 14

2.1 The South Pole Telescope ...... 19
2.2 Primary illumination pattern ...... 21
2.3 Secondary surface accuracy ...... 24

3.1 SPT wedge, bolometer and TES ...... 27
3.2 Focal plane assembly ...... 29
3.3 Frequency multiplexing schematic ...... 30

4.1 Optics cryostat section view ...... 33
4.2 Absorber emissivity test ...... 34
4.3 Absorber emissivity results ...... 35
4.4 Baffle scattering simulation ...... 37
4.5 Zemax detector plot ...... 37
4.6 Zemax model of 3 baffles ...... 39
4.7 Baffle scattering grazing incidence ...... 40
4.8 10 K cone ...... 43
4.9 Unrolling the cones ...... 44
4.10 Cone assembly jig ...... 45
4.11 HR10 installation ...... 46
4.12 Baffle and radiation shield assembly photo ...... 47
4.13 G10 cylinder supports ...... 48
4.14 G10 fin supports ...... 49
4.15 Baffle assembly hanging ...... 50
4.16 Baffle and radiation shield geometry ...... 51
4.17 Cold stop optics with receiver ...... 52
4.18 Truss rod ball joints ...... 53
4.19 Pulse tube heat straps ...... 54
4.20 Receiver baffle and shield test fit jig ...... 55
4.21 Filter, window and interleaving shrouds ...... 56
4.22 Window test setup ...... 57
4.23 Window test results ...... 58
4.24 Optics cryostat filter mounts ...... 59
4.25 Optics cryostat cooldown curves ...... 68
4.26 SPT calibrator ...... 70
4.27 Calibrator hardware ...... 71
4.28 Calibrator source ...... 72
4.29 Calibrator and optics cryostat ...... 73
4.30 Calibrator thermometers ...... 74

5.1 RCW38 template ...... 79
5.2 SPT beams ...... 80
5.3 Beam and filtering window function ...... 86
5.4 SPT sum and difference maps ...... 87

6.1 Matched filter 1: Beam filtered β model cluster ...... 91
6.2 Matched filter 2: CMB power ...... 93
6.3 Matched filter 3: Atmospheric and instrumental noise ...... 94
6.4 Matched filter 4: Noise covariance and matched filter ...... 95
6.5 Matched filter 5: Generalized 2D matched filter ...... 96
6.6 Filtering with different θcore ...... 97
6.7 Matched filter results: SPT cluster discoveries ...... 99
6.8 BCS images of SPT cluster candidates ...... 102

7.1 β profile ...... 106
7.2 Likelihood evaluation time benchmark ...... 111
7.3 Noise covariance interpolation ...... 112
7.4 Composite map ...... 115
7.5 Cluster detection ...... 116
7.6 Parameter constraints with MCMC method ...... 118
7.7 Amplitude recovery from MCMC ...... 120
7.8 θcore recovery from MCMC ...... 121

List of Tables

4.1 Scattering table 1 ...... 40
4.2 Scattering table 2 ...... 41
4.3 First stage heat loads ...... 65
4.4 Second stage heat loads ...... 65

5.1 Calibration uncertainties ...... 82
5.2 Data cuts ...... 85

6.1 Cluster Detections ...... 100

A.1 2008 Diodes and nominal temperatures ...... 138 A.2 2008 Cernox thermometers ...... 138 A.3 2008 auxiliary sensors ...... 138

A.4 2007 Diodes thermometers ...... 139 A.5 2007 Cernox thermometers ...... 139 A.6 2007 auxiliary sensors ...... 139

FIRST GALAXY CLUSTERS DISCOVERED VIA THE

SUNYAEV-ZEL'DOVICH EFFECT

Abstract

by

Zak Staniszewski

We are currently living in one of the most dramatic times, both technologically and cosmologically. We now realize that the Universe is expanding in an accelerating manner due to the effects of dark energy. These effects were hidden until recent times because dark energy has only recently dominated the energy budget of the Universe.

Also, we finally have telescopes powerful enough to execute surveys that can place limits on the equation of state of dark energy. The primary goal of the recently built South Pole Telescope (SPT) is to understand the nature of this dark energy by measuring the structure formation history of the Universe. We are doing so through the construction of a large, mass limited, catalog of galaxy clusters, whose numbers as a function of redshift are critically dependent on the expansion history of the Universe.

This dissertation explains critical design choices made by the SPT to enable such an ambitious survey. In addition, we describe the design, construction, and testing of the cold secondary optics which represents the hardware contribution unique to this dissertation.

We also describe two analysis techniques used to identify cluster candidates in the SPT data. One such method, the matched filter, was used to obtain the SPT's first results, which are the first clusters discovered with an SZ experiment. The second method explored is an alternative cluster finder that could be used in concert with, or as a replacement for, the matched filter.

Acknowledgments

There are many people I would like to take the time to thank.

I will always feel incredibly lucky and fortunate to have been able to work on such a wonderful experiment as SPT. For this, I am forever thankful to John Ruhl. John, thank you for providing the right kind of leadership that allowed me to grow into an independent, and confident scientist. It is unfathomable to think how fun, hard-working, and supportive the rest of the SPT team has been. Somehow, two day collaboration meetings, with all of us crammed in a giant room, working through lunch and into the evening, always seemed like a vacation. It could have been that I knew there would be a tamale as a reward at the end of the night, but I suspect I was just enjoying the science and the friendship. I benefited immensely from the selflessness on the part of Kathryn, Tom C., Brad, Laurie, and Christian, who were the analysis work horses of the project, yet deferred much of the glory to us younger graduate students (read: old graduate students). The senior leadership, particularly John C., Bill, and John R., always had our best interests in mind, and that has always been greatly appreciated.

I would like to thank Steve Padin for many reasons. First, in your absence, the telescope would have never been built on time or correctly and I would be a really old graduate student with no data. Also, thank you for being a mentor and friend during our winter together and in the present.

My contemporary SPT graduate students, Joaquin, Tom P., Martin, and Erik: we've had some crazy times. From the crazy months in Berkeley before the first deployment, to the equally crazy first and second deployments, you all made it seem like a blast, even though they were also the most stressful times in our lives. It's very nice to hang out in more civilized situations at the collaboration meetings. Erik, thanks for slaving in fab for years to make our experiment truly kick butt.

Thanks to all members of Ruhl lab(!) during my tenure, John Ruhl, Jon G, Tom Montrizzle, Ted, Kecheng, Wizzle, Poopoo, Rick, Sean, J.T., and Craigatron. The atmosphere in the lab was always so hilarious because of Jon G, who was our mentor as well as really good friend. Too bad you had to go off and get a real job. Tom Montroy, thanks for coming back and kicking butt again. Thanks Craig for your slave labor during the building of the cones; it was well beyond the call of duty for a first year undergrad. And thank you Rick, for making all of our lives brighter and easier with your help. I think our class at Case was the best. Thankfully, my best friends of the bunch stayed nearly as long as I did, and even lived with me for a long time. George and Jonathan were undoubtedly my favorite thing about my graduate experience. Min, Mao, and Wenyang were also great colleagues and friends who made the first year a blast. Thank you Nathan, Todd head, roommate and buddy Adam, Francesc, and Irit for your friendship during grad school. I would also like to thank all of the faculty and staff at Case, especially Betty for bending over backward to help us with whatever we needed to get our science done. Finally, I'd like to thank my family, especially my parents, and wife Margaret for supporting me through this journey.

Chapter 1

Introduction

This thesis describes the construction of the South Pole Telescope (SPT) and presents the project’s first results. The SPT was built to probe the nature of dark energy through the construction of a large galaxy cluster survey, and recently became the first telescope to discover galaxy clusters with the Sunyaev-Zel’dovich effect. With an ever growing sample of clusters, the SPT is poised to reconstruct the expansion history of the Universe and constrain the equation of state of dark energy. Here we describe the motivation for building such an instrument. First, we outline why we think dark energy exists, and how it fits into our current cosmological paradigm. Next, we describe how galaxy clusters imprint their signatures in the cosmic microwave background, and how we use these objects to trace the structure formation history of the Universe.

1.1 Cosmological Introduction

We live in an expanding Universe that originated with a hot big bang 13.7 billion years ago[1]. The first evidence for this came with Edwin Hubble's discovery in 1929 that nebulae are moving away at a rate proportional to their distance from us (see Figure 1.1). We now refer to this as Hubble's law, which relates the recessional velocities of astronomical objects to their distance[2],

v = H0 d,    (1.1)

where v is the velocity, d is the distance, and H0 is Hubble's constant. The notion of expansion inevitably leads one to contemplate a time early on when the Universe's contents were extremely dense and hot.

Figure 1.1: Radial velocities in km/s of nebulae plotted as a function of distance. Distances were estimated using a series of standard candles known as Cepheid variables. The slope of the lines shown gives his estimate of H0, now known as Hubble's constant. He came up with the value of 500 km/(s Mpc). The current best fit value is H0 ≈ 70 km/(s Mpc). Plot taken from Hubble's seminal paper[2].

At the time of Hubble's discovery, Einstein had developed the theory of General Relativity, which could be used to describe the evolution of the Universe. His formulation included an ad hoc cosmological constant to keep the Universe static. Realizing the significance of the Hubble measurements, and the expanding nature of the Universe, Einstein declared the cosmological constant his "biggest blunder." We now realize the Universe is indeed expanding, and in an accelerating manner. This, once again, brings the need for something that behaves like a cosmological constant.
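Equation 1.1 can be made concrete with a short numerical sketch (added here for illustration; the example distance is arbitrary and not tied to any particular measurement):

    # Recession velocity from Hubble's law, v = H0 * d.
    # H0 ~ 70 km/(s Mpc) is the current best-fit value quoted above; the
    # example distance is arbitrary and purely illustrative.
    H0 = 70.0        # km / (s Mpc)
    d = 100.0        # distance in Mpc (illustrative)

    v = H0 * d       # km/s
    print(f"A galaxy at {d:.0f} Mpc recedes at roughly {v:.0f} km/s")
    # With Hubble's original estimate of 500 km/(s Mpc), the same galaxy would
    # appear to recede about seven times faster, which is why his distance
    # calibration mattered so much.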

1.2 Big Bang Nucleosynthesis

Hubble's law alone suggests a hot big bang, but our confidence in this is strengthened greatly by two seminal predictions and their accompanying measurements. The first is a prediction of the abundances of the light elements through a mechanism known as big bang nucleosynthesis (BBN). This explains the abundances of the light elements as created during the expansion after the big bang. In the hot, dense, early Universe, all the constituents were in thermal equilibrium because species conversion and scattering were occurring at rapid rates. As the Universe expanded, certain reactions no longer took place quickly enough to maintain thermal equilibrium and constituents began to freeze-out. Nucleosynthesis began immediately after the big bang when the Universe was dominated by radiation. Weak interactions occurred rapidly enough to keep protons and neutrons in thermodynamic equilibrium. During this time, the relative number of protons and neutrons evolved simply as a function of the temperature of the Universe as it rapidly expanded and cooled. One second after the big bang, the expansion had decreased the density enough to essentially halt these weak interactions. At this point, the number of protons and neutrons froze out and remained nearly constant except for a slow decrease in the number of neutrons due to free neutron decays. The precise ratio of protons-to-neutrons is an important prediction of BBN, and ultimately dictates the relative abundances of the light elements.

Until roughly 3 minutes after the big bang, the thermal energy of the plasma was greater than the nuclear binding energy of deuterium, so heavy nuclei could not form. During this period neutrons continued to slowly decay into protons, decreasing the total number of neutrons in the Universe. Three minutes after the big bang, the Universe had finally expanded and cooled enough for nuclei to form without being blown apart by the hot radiation. First, deuterium formed, and it did so quickly. The unstable deuterium quickly found additional protons and neutrons and captured nearly all of the neutrons into 4He. Therefore, the relative abundance of 4He to protons is dictated by the amount of time that elapsed before deuterium could form. At the same time that 4He was forming, other heavier elements were forming with their relative abundances being dictated by Boltzmann suppression. These relative abundances evolved with the temperature of the Universe until a few minutes later, when the expansion of the Universe became faster than the rate at which species were interacting. At this time, BBN ended and the primordial abundances of the light elements were fixed. The abundance of helium froze out at a mass fraction of one quarter that of hydrogen, and the rest of the light elements did so in much smaller proportions.

In a big bang Universe, the relative abundances depend on only one cosmological parameter, η, which is the photon-to-baryon ratio. BBN correctly predicts all of the light element abundances for a single value of η. Because of its precise predictions, and excellent agreement with measured abundances for one value of η, BBN is one of the most important pieces of evidence for the current big bang theory of the Universe.

1.3 Cosmic Microwave Background

The second important measurement supporting the big bang model is that of the Cosmic Microwave Background (CMB) radiation. 700,000 years after the big bang, another important decoupling occurred. At that time, the Universe had cooled to the point where protons and electrons could form neutral atoms. At earlier times, Compton scattering tightly coupled photons and free electrons. When the free electrons and protons disappeared, the photons no longer had charged particles to scatter off of and they streamed freely through the Universe. This bath of photons remains today as the CMB, unchanged except for being redshifted to a blackbody temperature of 2.73 K.
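As a quick side calculation (not taken from the original text; it simply locates the blackbody peak), Wien's law in its frequency form, ν_peak ≈ 2.821 kT/h, shows why this relic radiation is brightest at millimeter wavelengths:

    # Peak frequency of a blackbody via Wien's law in frequency form,
    # nu_peak = 2.821 * k_B * T / h, evaluated for the 2.725 K CMB.
    k_B, h = 1.381e-23, 6.626e-34   # J/K, J s
    T_cmb = 2.725                   # K
    nu_peak = 2.821 * k_B * T_cmb / h
    print(f"The CMB spectrum peaks near {nu_peak / 1e9:.0f} GHz")
    # ~160 GHz, i.e. right in the millimeter-wave bands used by CMB experiments.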

In 1946, Gamow predicted that a big bang Universe would have a characteristic temperature of ∼50 K [4]. This prediction was subsequently associated with a background radiation and revised to ∼5 K by Alpher and Herman[4]. By the 1960's, these predictions had been somewhat forgotten. In 1965, Penzias and Wilson of Bell Labs built a sensitive instrument to measure faint radio signals. During their measurements, they found an excess noise signal, uniform across the sky, that was 100 times the noise they expected. When Penzias and Wilson heard that their measurement could be consistent with a measurement of the CMB, they invited Dicke, who was building an experiment to look for the CMB, to look at their instrument and measurements. Dicke's team verified that this measurement was, serendipitously, the first detection of the CMB, and in 1965 Penzias and Wilson reported it as such[5]. Since its discovery, measurements of the CMB have become a fruitful pursuit which have provided many key insights into the nature of our Universe. One of the most significant early measurements was that of the blackbody spectrum by the Far-InfraRed Absolute Spectrophotometer (FIRAS) experiment on the COBE satellite[6, 7], which shows that the CMB is the most perfect blackbody in nature (see Figure 1.2).

Figure 1.2: The CMB blackbody is shown for the best fit temperature of 2.725 K. FIRAS data, from CMB for Pedestrians[3].

The temperature of the CMB is nearly uniform across the sky to the level of 100 parts per million. At the same time the FIRAS measurements were made, the COBE Differential Microwave Radiometer (DMR) became the first to detect anisotropies in the CMB. We now believe that the slight fluctuations in temperature were the seeds of the structure that we now see in many forms, including clusters of galaxies. We think that quantum fluctuations in the early Universe created an initial spectrum of density perturbations in the photon-baryon fluid. These overdensities led to oscillations with gravity as the driving force and radiation pressure as the restoring force. These oscillations became imprinted in the CMB at decoupling.

We now measure the statistical properties of the temperature anisotropies by plotting their power spectrum. At the time of the decoupling, when these anisotropies were imprinted on the CMB, some oscillations had reached their maximum density. These oscillations represent the first peak in the CMB power spectrum. Higher order peaks are due to oscillations that had enough time to collapse, expand, and collapse again some integer number of times. The locations and relative heights of the peaks are sensitive to the background cosmology, including the density of each constituent in the Universe and the overall geometry of the Universe.

Precision measurements of the shape of the CMB power spectrum have now been made. A state of the art compilation of current measurements is shown in Figure 1.3.

1.4 Dark Energy

Evidence for dark energy first came in the 1990's with measurements of the dimming of supernovae in an expanding Universe[9, 10]. The supernovae were measured to be dimmer than they would be in a non-accelerating Universe. With some care, type Ia supernovae can be used as excellent standard candles, and, because they are so bright, they are visible at moderately high redshifts. Using measurements of the host galaxy redshift, one can determine how a supernova brightness should be dimmed as a function of redshift. Data from both the Supernova Cosmology Project and the High-z SN Search show that distant supernovae are ∼0.25 magnitudes dimmer than they would be in a decelerating Universe[12]. This implies that the expansion is accelerating due to dark energy that is, or behaves similarly to, a cosmological constant. Figure 1.4 shows a compilation of supernova measurements which provide evidence for dark energy.

Figure 1.3: The WMAP7 power spectrum combined with the ACBAR and QUAD data, from Komatsu et al. 2010[8]. This plot shows the excellent agreement between the data points from different experiments and the primary anisotropy power spectrum at low ell. The line is the best fit of a 6 parameter ΛCDM cosmology, fit to the WMAP7 data alone. The excellent full sky map of WMAP provides great measurements of the first few peaks in the CMB power spectrum. Higher order peaks have been mapped more accurately with telescopes with smaller beams where WMAP ceases to resolve small angular scale anisotropies.

1.5 The SZ Effect

Type Ia supernovae observations provided the first evidence for dark energy, but there are other ways to characterize its properties. We plan to measure the equation of state of dark energy by measuring the density of galaxy clusters using the Sunyaev-Zel'dovich effect. Galaxy clusters are the largest collapsed bodies in the Universe, and are tracers of the largest dark matter halos. Their number density and mass distribution are critically dependent on the expansion history of the Universe, and hence, dark energy. The observed cluster number density is

dN/(dΩ dz) = dV/(dΩ dz) × n(z).    (1.2)

Figure 1.4: Taken from the Frieman et al.[11] review on dark energy. Current SN1a measurements. The figure is adapted from current data, with the plot based on the work by Perlmutter et al. and Riess et al.[9, 10] The figure shows how the supernova data are better fit by a cosmology with a non-zero vacuum energy density.

The first factor, dV/(dΩ dz), describes the geometry of space. It is the volume per unit solid angle and redshift as seen by an observer. Therefore, it depends on the expansion rate of the Universe. In a geometrically flat Universe, the expansion is dictated by ΩM, ΩDE, and the equation of state of dark energy, w. For a fixed density of dark energy (ΩDE), decreasing w decreases the amount of volume per unit solid angle, and results in a smaller cluster count.

The second factor, n(z), is the number density of clusters. It depends on ΩDE, ΩM, the normalization of the matter power spectrum (σ8), and w. Increasing ΩDE reduces the number of clusters at a given redshift by allowing expansion to fight the collapse of bound objects. n(z) exponentially suppresses higher mass clusters because higher over-densities in a Gaussian field become more and more rare. Combining the volume and evolution effects, we obtain a prediction of the number of clusters above a given mass as a function of redshift (see figure 1.5). The volume factor dominates for lower redshifts, and growth factor effects dominate at higher redshift.
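The geometric factor in Equation 1.2 can be illustrated with a short numerical sketch (the cosmology below is a generic flat model with round parameter values, not the result of any fit in this thesis):

    import numpy as np

    # Comoving volume element dV/(dOmega dz) for a flat Universe, showing how
    # the geometry factor in Equation 1.2 responds to the expansion history.
    # Parameter values are illustrative (roughly a concordance cosmology).
    c = 299792.458                 # km/s
    H0 = 70.0                      # km/(s Mpc)
    Omega_M, Omega_DE = 0.3, 0.7

    def E(z, w=-1.0):
        """Dimensionless expansion rate H(z)/H0 for a constant equation of state w."""
        return np.sqrt(Omega_M * (1 + z)**3 + Omega_DE * (1 + z)**(3 * (1 + w)))

    def dV_dOmega_dz(z, w=-1.0, nstep=2000):
        """Volume per unit solid angle and redshift, (c/H0) * D_C(z)**2 / E(z), in Mpc^3/sr."""
        zgrid = np.linspace(0.0, z, nstep)
        D_C = (c / H0) * np.trapz(1.0 / E(zgrid, w), zgrid)   # comoving distance, Mpc
        return (c / H0) * D_C**2 / E(z, w)

    for w in (-1.0, -0.8):
        print(f"w = {w:+.1f}: dV/dOmega/dz at z = 1 is {dV_dOmega_dz(1.0, w):.3e} Mpc^3/sr")
    # Changing w at fixed Omega_DE shifts the volume element, and that shift
    # propagates directly into the predicted cluster counts.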

The primary goal of SPT is to count the number of galaxy clusters as a function of redshift and thereby constrain the equation of state of dark energy. We find clusters by measuring distortions in the CMB due to the presence of galaxy clusters. CMB photons are scattered by hot electrons in a cluster, resulting in a small spectral distortion known as the Sunyaev-Zel'dovich effect[14]. The large, dark matter dominated, potential well of the cluster creates a density profile of ionized gas that is at temperatures > 10^6 K. For a massive cluster, roughly 1% of the CMB photons passing through the hot ionized gas are scattered, preferentially up in energy, leading to a distortion of the blackbody spectrum.

Following Rephaeli 95, we can express the change in intensity of the CMB as,

ΔIT = I0 x³/(e^x − 1) [Φ(x, Te) − 1] τ,    (1.3)

where T0 is the CMB temperature, x = hν/kT0, I0 = 2(kT0)³/(hc)², and τ = ∫ neσT dl [15, 16]. Here, τ is the optical depth of the cluster and Φ(x, Te) is the integral over electron velocities and scattering directions. In the non-relativistic limit, [Φ(x, Te) − 1] becomes separable in terms of a spectral shape for the SZ effect multiplied by the electron temperature. We commonly absorb the electron temperature into the optical depth term and rewrite the SZ signal as being proportional to

ΔT/TCMB ∝ f(ν) ∫ neTe dl.    (1.4)

Figure 1.5: Top: The number of clusters expected for varying values of w. Here the three models are normalized to yield the same total number of clusters. Bottom: the difference of the different models with respect to w = −1.0. Plot taken from Mohr 2005[13].

The term f(ν) contains the frequency dependence of the SZ effect, and the integral contains the cluster property terms that determine the intensity of the SZ signal. The SZ effect relative to the nominal CMB blackbody is shown in Figure 1.6. The unique spectrum of the SZ signal helps distinguish it from the primordial temperature anisotropies. Galaxy clusters are typically of order 10^14 − 10^15 solar masses, and ∼2 Megaparsec in diameter. Sensitivity to clusters varies as a function of redshift because clusters of a given mass are more dense at earlier redshifts. Setting a threshold for mass turns out to correspond roughly to a fixed comoving cluster size. For clusters with diameter 1.8 h^−1 Mpc, their angular size reaches a minimum of almost exactly one arcminute for reasonable, flat cosmologies. The surface brightness of the SZ signal is essentially independent of redshift because it is a fractional distortion of the already redshifted CMB. The SZ brightness is also a good proxy for cluster mass. An SZ cluster survey is therefore mass limited as long as the telescope has sufficient angular resolution. For this reason, SZ surveys are uniquely positioned to produce unbiased catalogs of clusters to constrain cosmology.

Many experiments have made SZ measurements of previously known clusters[17, 18]. Several tens of clusters have been mapped, providing detailed information on cluster density profiles, and telling us how SZ inferred mass measurements correspond to X-ray derived cluster mass. These measurements give us confidence that a large SZ survey can use SZ flux as a proxy for cluster mass while trying to constrain cosmology. To place useful constraints on the equation of state of dark energy, we need a catalog of many hundreds or thousands of clusters. This thesis represents the beginning of a cluster survey by showing the first clusters ever discovered with an SZ experiment.
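The spectral signature behind these detections, the f(ν) term of Equation 1.4, has the standard non-relativistic closed form f(x) = x coth(x/2) − 4 with x = hν/kTCMB. The sketch below evaluates it near the SPT bands (illustrative band centers; the relativistic corrections shown in Figure 1.6 are ignored):

    import numpy as np

    # Non-relativistic thermal SZ spectral function f(x) = x*coth(x/2) - 4,
    # with x = h*nu / (k_B * T_CMB).  Relativistic corrections are ignored.
    h, k_B = 6.626e-34, 1.381e-23
    T_cmb = 2.725

    def f_sz(nu_ghz):
        x = h * nu_ghz * 1e9 / (k_B * T_cmb)
        return x / np.tanh(x / 2.0) - 4.0

    for nu in (95.0, 150.0, 220.0):      # approximate SPT band centers, GHz
        print(f"{nu:5.0f} GHz: f = {f_sz(nu):+.2f}")
    # The 95 and 150 GHz bands see a decrement (f < 0), while the band near the
    # ~217 GHz null sees almost no thermal SZ signal; this distinctive shape is
    # what lets a multi-band survey separate clusters from CMB anisotropies.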

1.6 The South Pole Telescope

The SPT is designed to survey a large region of the sky at high angular resolution to discover galaxy clusters and map fine scale CMB anisotropies. By looking for the SZ effect in our maps, we should produce a catalog of hundreds to thousands of newly discovered galaxy clusters, which will allow us to constrain σ8 (the amplitude of the matter power spectrum on 8 h^−1 Mpc scales), ΩM, ΩDE, and w. We will also characterize the secondary-dominated CMB power spectrum out to l ∼10,000. We could also provide a precise measurement of the primary anisotropy damping tail, which could improve constraints on the spectral index of initial density perturbations and may yield information about the inflationary potential. The SPT was installed at the geographic South Pole between November 2006 and February 2007 and has been successfully carrying out observations since then. The SPT is equipped with a 960 bolometer detector array with a frequency domain multiplexed SQUID readout. We simultaneously observe in three frequency bands, 95, 150 and 225 GHz, which allows us to use the SZ spectrum to distinguish clusters from CMB anisotropies. We have partnered with the Blanco Cosmology Survey (BCS) team to obtain photometric redshift estimates for our clusters that lie within their ∼80 square degrees of survey data, and are obtaining redshifts for all clusters in our catalog through programs involving BCS, Magellan, Spitzer, and the Dark Energy Survey (DES). The SPT has now mapped many hundreds of square degrees. Here we present the first results from a 40 square degree region which overlaps the BCS field.

Figure 1.6: The spectrum of the Sunyaev-Zel'dovich effect (including relativistic corrections) along with the three observing bands for the SPT. Here we show the change in intensity of the radiation relative to the nominal CMB blackbody spectrum for a 10 keV cluster.

1.7 Thesis Outline

This thesis concentrates on three main topics: (1) the design of a cold secondary optics system; (2) the first scientific results of our project; and (3) the development of a new analysis technique for extracting clusters from SPT data. Chapter 2 describes the overall design of the telescope. We illustrate the benefits of the South Pole site, and describe the telescope optics design. This provides motivation for building an off-axis design with a cooled secondary and baffle. Chapter 3 highlights new innovations needed to make the SPT focal plane work, which include the deployment of a large format ∼1000 bolometer array and a frequency multiplexed read-out scheme developed for the SPT.

Chapter 4 describes the successful design and deployment of the cryogenic cold stop and baffle for the secondary optics. This represents the bulk of the hardware contribution that is unique to this dissertation. We begin by describing the optical requirements for the cold stop and baffle. An explanation of the mechanical design and construction of the baffle follows. This chapter concludes with a discussion of the cryogenic requirements and performance of the system. Chapter 5 explains the observations and data processing steps required to make SPT maps, describes the matched filter technique, and applies the method to our maps to yield cluster candidates. Chapter 6 presents the first scientific results from the SPT. These are the first clusters discovered with any SZ survey, and we describe their statistical significance. A discussion of the follow up observations that confirm our detections is also presented.

Chapter 7 explores a new method for finding clusters within SPT data. We begin with an introduction to Bayesian statistics and Markov Chain Monte Carlo methods. We follow with an application of these methods to the problem of object detection. Fake data sets are used to quantify the performance of both this and the matched filter algorithm. We finish with a discussion of the performance of the new method and possible extensions and refinements that can be done in the future.

Chapter 2

Telescope

The South Pole Telescope (SPT) is a 10m diameter, off-axis Gregorian telescope located at the geographic South Pole. It is designed to make sensitive measurements in millimeter wave bands with a wide field of view using transition edge sensor bolometers. The primary goal of the SPT is to map the distribution of galaxy clusters in the universe by making maps of the Cosmic Microwave Background radiation that are large, sensitive, and at high angular resolution. The only way to meet all of these requirements simultaneously is to build a large telescope with a large field of view and an extremely clean optics design. Two of the most unique parts of the telescope design are the cold stop around the secondary and the cryogenic cooling of the large secondary mirror. The design and construction of the cold stop and secondary mirror cryogenics is a unique contribution completed for this thesis and is the topic of Chapter 4. In this chapter I describe the telescope optics and explain the motivation for the cold stop which represents the hardware portion of my thesis.

2.1 Telescope Site

The South Pole site is an ideal location for mm and sub-mm wave experiments. The atmosphere is extremely transparent and stable in the millimeter and sub-millimeter observing bands. The South Pole site sits on an ice pack that is over 2 km thick, and the pressure altitude is ∼3300 m in the winter. This, combined with cold, stable temperatures and a mean precipitable water vapor of ∼0.25 mm[19], results in ideal observing conditions for over half of the year. The median sub-mm brightness fluctuations (in CMB units) are an order of magnitude better than at other established ground based sites[20].

Although the weather is stable in the winter, temperatures are extreme and reach −80 C. This, combined with its remote location, presents unique engineering and logistical problems. One of the more unique design aspects of the SPT was to have a receiver cabin that docks to the control room building, so that personnel do not need to go outside to access receiver hardware.

2.2 Optics

The ambitious SZ survey planned for the SPT established a few critical requirements for the telescope:

• arcminute angular resolution to resolve typical galaxy clusters,

• high sensitivity detectors in the 95, 150, and 225 GHz observing windows to help distinguish the spectra of SZ sources from CMB anisotropies,

• low scattering to reduce loading and scan synchronous signals that could drown out SZ signals,

• high throughput so that we can observe with a thousand detectors simultaneously, increasing our mapping speed.

To achieve this, we designed SPT as an off-axis Gregorian with a section of a parabola as the primary. The offset nature of this design minimizes scattering while allowing for a sufficiently large field of view. An on-axis design for a telescope this large is unattractive because it would require hefty secondary supports that would create scattering problems.

18 Figure 2.1: The South Pole Telescope. Below and on both sides of the primary mirror are the co-moving ground shields. These help redirect spilled over radiation toward the sky.

We require arcminute resolution to be able to resolve typical clusters regardless of their redshift. The angular resolution of a telescope is dictated by the size of the primary optic, and is approximately θres ≈ 1.22 λ/D, where λ is the wavelength and D is the diameter of the primary. At 150 GHz, this implies that we need an 8 m primary. We built a 10 m primary and under-illuminate it out to 8 m to reduce spillover while simultaneously achieving the desired angular resolution.
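A quick check of this resolution requirement (round numbers only, not a calculation from the actual SPT optics model):

    import math

    # Diffraction-limited resolution, theta ~ 1.22 * lambda / D, for an 8 m
    # illuminated aperture observing at 150 GHz.
    c = 3.0e8                      # m/s
    nu = 150e9                     # Hz
    D = 8.0                        # illuminated diameter, m

    wavelength = c / nu            # ~2 mm
    theta_rad = 1.22 * wavelength / D
    theta_arcmin = math.degrees(theta_rad) * 60.0
    print(f"lambda = {wavelength * 1e3:.1f} mm, theta = {theta_arcmin:.2f} arcmin")
    # ~1 arcminute, matching the requirement to resolve typical clusters.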

CMB experiments like the SPT must scan rapidly across the sky to modulate the CMB hot and cold spots at a timescale faster than changes induced by the atmosphere and other noise terms. To achieve this, the SPT rapidly scans the entire telescope. We chose to scan the entire telescope and to design an off-axis telescope with only two mirrors, one of which is cryogenically cooled. Because we only have two mirrors, the alignment of our optics is easier and the scattering and internal loading are lower. In order to survey a large field quickly, we needed to design a telescope with a large field of view with room for a large number of detectors. The ambitious survey planned for the SPT requires roughly one thousand background limited bolometers observing simultaneously. We describe the design and assembly of our kilopixel bolometer array in Chapter 3.

2.3 Cold Stop and Baffle

Our optics design is enabled by the cold stop and baffle that surround the secondary. This dramatically reduces the amount of radiation that spills over the primary onto the ground or telescope structure. The way this is achieved is best described by pretending the telescope is broadcasting radiation instead of accepting it. This is allowed because of time reversal symmetry. To estimate the telescope’s response on the sky, we use physical optics. We start by projecting the initial detector or feed response onto the next optical element using ray optics. We then calculate the electric field on this element and project its response outward.

We continue this exercise until we come up with the optics system's response on the sky. Thinking in broadcast mode, one could avoid spilling radiation over the primary by under-illuminating it with a narrow Gaussian beam so that the power has fallen off significantly by the time it gets to the edge of the mirror. This narrow beam on our primary would result in a large beam on the sky which would be too large to resolve galaxy clusters. Instead, we create an aperture stop at the secondary, and surround it with an absorber. Thinking in broadcast mode again, we strongly over-illuminate the secondary such that a large fraction of the Gaussian beam is spilled over intentionally. The result is that the primary illumination is high out to 4 m in radius and then is nearly zero for the last meter of the primary. The beam created by truncating at the cold stop is well behaved in that less than 1% of the total power seen by the detectors hits the outer 1 meter, and the power is down by −30 dB by the time it gets to the edge of the primary. The shape of the beam on the primary is shown in Figure 2.2.

Figure 2.2: The primary illumination pattern. Solid curve: An analytic solution for the illumination pattern on the primary with a stop at the edge of the secondary mirror. Dashed curve: The best fit truncated Gaussian beam shape. Thick solid line: The outer radius of the 10 meter diameter primary mirror. Figure courtesy of Steve Padin[21].

The surface quality of the outer 1 meter of the mirror does not need to be high because we use it mostly to reduce spillover. We did, however, make the guard ring section the same surface accuracy as the rest of the mirror to make the telescope flexible for future receivers. We intentionally spill over 20, 30 and 50% of the radiation off the secondary mirror onto the cold stop for the 220, 150 and 90 GHz detectors, respectively. Because this is a large fraction of our total power, we need to absorb the spilled-over radiation with a cold baffle or else our loading will increase. We also need to trap the radiation on a surface with a stable temperature to reduce varying offset signals. A thorough description of the cold stop's scattering performance, cryogenic performance, and mechanical design are the topics of Chapter 4.
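For an idealized Gaussian feed illumination, the spilled fraction past a circular stop and the edge taper at that stop are related in closed form; the sketch below uses that idealization, so the numbers are only indicative of the real, truncated illumination shown in Figure 2.2:

    import math

    # For a Gaussian beam with intensity I(r) ~ exp(-2 r^2 / w^2), the fraction
    # of power falling outside radius r equals I(r)/I(0), so the spillover past
    # a stop and the edge taper there are related by
    #   spill_fraction = 10**(-edge_taper_dB / 10).
    def edge_taper_db(spill_fraction):
        """Edge taper (dB below beam center) that gives a chosen spilled fraction."""
        return -10.0 * math.log10(spill_fraction)

    # Intentional spillover fractions quoted above for the three SPT bands.
    for band_ghz, spill in ((220, 0.20), (150, 0.30), (90, 0.50)):
        print(f"{band_ghz:3d} GHz: {spill:.0%} spillover -> ~{edge_taper_db(spill):.1f} dB edge taper at the stop")
    # The lower-frequency feeds illuminate the secondary more broadly (smaller
    # edge taper), which is why they dump a larger fraction of their power onto
    # the cold stop.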

2.4 Ground Shields

In addition to having an off axis design with only a few optical elements, and a cold stop and baffle, the SPT has a co-moving ground shield. The co-moving ground shield sits on both sides and below the telescope primary mirror redirecting scattered radiation toward the sky (see Figure 2.1). It helps redirect scattered radiation up toward the cold sky rather than letting it be moved around on the hot ground and telescope structure.

2.5 Primary

The primary mirror is made of 218 individual panels, each of which can be adjusted with 6 adjustment screws. Gaps between the panels are ∼1-2 mm wide depending on the ambient temperature and account for 1% of the total area of the primary. Each panel is mounted to the carbon fiber reinforced backing structure of the telescope. The surface accuracy of the primary was measured the first season using a method called photogrammetry, which uses a digital camera and six reflectors per panel to reconstruct the shape of the mirror. This resulted in a measured surface accuracy of 40 microns R.M.S. for the entire 10 meter mirror[22]. The following season, we made further adjustments of the mirror coupled with holography measurements to obtain a surface accuracy of 20 microns R.M.S.[22]. This is sufficiently accurate not to degrade the beam shape at 220 GHz, and should be sufficient for future submillimeter measurements.

2.6 Secondary Mirror

We designed the optics to have a secondary that was small enough to be made on a CNC mill out of a solid piece of metal. The secondary mirror is made of aluminum 7075-T6, and its weight is 40 pounds. It was machined from a solid piece of aluminum, and is lightweighted on its back surface with a honeycomb structure.

The secondary mirror accuracy is dictated by machining tolerance and stress induced deformations during machining and thermal cycling. Stresses induced during machining are non uniform, and cryogenic cycling results in differential thermal contraction which causes warping. To avoid this, we thermally cycled the secondary mirror before the final cut. We measured the surface error of the secondary with a holographic technique. We placed an 89 GHz source at prime focus and a receiver diode at the Gregory focus near the location of the focal plane. By measuring phase errors at the focal plane we deduce the shape of the secondary mirror. The surface profile errors are shown in Figure 2.3, and the measurement is described more thoroughly in Padin et al. 2008[23]. These measurements show a small potato chip shape deformation feature that dominates the inaccuracy of the surface[23]. We now suspect that the surface was still accurate after machining and that the thermal link between the secondary mirror and backing structure was not sufficient during the first integrated thermal cycle. This resulted in the backing structure cooling and shrinking at a rate much faster than the cooling and shrinking of the mirror. The backing structure most likely shrunk enough to induce stresses needed to permanently deform the secondary to the level that we measure it. The secondary surface is good enough for the current observations, but may need to be replaced sometime in the future for a shorter wavelength receiver.

2.7 Receiver Cabin and Optics Cryostat

The receiver, optics cryostat, and read out electronics sit in a warm receiver cabin that moves with the telescope. The Gregory design of the telescope provides a relatively compact prime focus that is an excellent location for radiation to enter the receiver cabin and optics cryostat. At the entrance of the cabin, near prime focus, sits a 1 inch thick foam window that keeps the cold outside air and snow out of the cabin. Directly behind it is the vacuum window which provides an entrance to the cold secondary cryostat.

Figure 2.3: The surface accuracy of the secondary mirror. Measurements from the holography setup show the surface error in microns. Figure from Padin et al. 2008[23].

2.8 Metrology and Pointing Hardware

In order to take advantage of the angular resolution of the SPT we need to make sure that our pointing accuracy is significantly smaller than the size of our beam. A few factors make such requirements difficult. One factor is that the ice pad at the South Pole shifts and tilts to a small degree day-to-day. Another is that different parts of the telescope structure flex and bend under gravity and differential thermal contraction. We use a myriad of tools including optical star pointing cameras, dedicated mm-wave pointing observations, and a metrology system to help us reconstruct our pointing. There are three star cameras mounted to different parts of the telescope structure. One is on the receiver boom and two are on different locations on the primary mirror support structure. By mapping out the positions of a large number of stars we can solve for degrees of freedom in the telescope structure.

Similarly, we occasionally spend many hours mapping radio sources spread over the sky to measure the remaining pointing parameters and time varying ones that are specific to our mm-wave receiver. We also bracket our cluster observations with radio pointing observations of galactic HII regions to figure out how our pointing is changing on short timescales. Optical pointing provides our best measure of azimuth tilt, while HII observations are used to constrain elevation-tilt and telescope flexure. Finally, the SPT is outfitted with three metrology subsystems to measure azimuth, elevation, and tilt of the telescope. These include azimuth and elevation encoders, biaxial tilt meters, and linear displacement sensors. Tilt meters measure the tilt of the telescope bearing relative to the ice pad and telescope base, and linear-displacement-devices measure tilts and deflections of the upper telescope structure. The linear-displacement-devices are made of four linear carbon fiber sensors which connect the azimuth bearing to locations near the elevation encoders. With these four sensors we measure how the structure is shrinking, tilting or twisting. There are also roughly twenty temperature sensors on the telescope structure that we use to correlate with measurements from the linear sensors. With a large-format array, and large sky coverage, we can reconstruct pointing after the fact. Therefore, we do not need real time pointing accuracy at the ∼few arcsecond level. In the first data release all pointing corrections were done after the data was taken and none were applied in real time as the telescope moved. We used a combination of the tools described above to reconstruct our pointing. We are confident that our final pointing jitter is less than 10 arcseconds and our absolute pointing compared to other high resolution catalogs is also roughly 10 arcseconds. The pointing accuracy that we are achieving is satisfactory and is not in any way a limiting factor for detecting clusters.

Chapter 3

Receiver

3.1 Introduction

The SPT receiver consists of 960 transition edge sensor bolometers which operate in either the 90, 150 or 220 GHz atmospheric windows. The detectors are read out with a frequency domain SQUID multiplexing scheme, and are cooled to sub-Kelvin temperatures. Both the detectors and readout were designed and fabricated by our collaborators at U.C. Berkeley and Lawrence Berkeley Lab. There were a few key technologies that the SPT team needed to develop or master in order to deploy the ambitious receiver needed to reach our science goals. One was the jump from individual bolometers to bolometer arrays. Another was to deploy a multiplexed readout system capable of reading out roughly a thousand bolometers. Here we describe the design of the receiver and technological innovations made in the course of its development.

3.2 Detectors

The SPT detectors consist of a spider-web-shaped silicon nitride absorber and a transition edge sensor (TES) thermometer. The absorber is a thin mesh of silicon nitride coated with gold, and is 3 mm in diameter and 1 μm thick. The TES is made of an aluminum-titanium bilayer which has a superconducting transition near 0.5 K and a normal resistance of ∼1 Ω. We AC voltage bias the TES to create strong, negative electrothermal feedback. The voltage bias adds an electrical power term which is added to the optical power incident on the bolometer. Because it is voltage biased, the electrical power dissipated is of the form P = V²/R. As the optical power on the bolometer is increased, the temperature of the superconductor rises, and therefore raises the resistance. This decreases the electrical power, nearly balancing out the rise in optical power. This is the nature of the negative feedback, which is strong because of the steep nature of the superconducting transition.

The bolometer arrays are fabricated on 100 mm-diameter silicon wafers, and metalized on their back sides to create a λ/4 backshort to increase optical coupling. The backshort creates a boundary condition that requires the electric field to be zero, which means the field is near maximum at the absorber. Each array has 160 bolometers on it, and the space between bolometers is used to route the electrical leads from all the TES sensors. In some cases, our electrical time constant is too fast and can cause instability, so we add a gold pad near the TES to increase the heat capacity. Three views of the bolometer arrays are shown in Figure 3.1.

Figure 3.1: The SPT bolometers. The wedge has 160 individual detectors in the wedge shape shown. Zooming in, the bolometer absorber, which is of the traditional spider web geometry; the TES is in the middle. Figure courtesy of Erik Shirokoff.
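The electrothermal feedback described above can be illustrated with a toy power-balance sketch (the transition model, bias voltage, and power levels below are assumed round numbers, not measured SPT detector parameters):

    import numpy as np

    # Toy model of negative electrothermal feedback in a voltage-biased TES.
    # The power dissipated on the island is P_optical + V**2 / R(T); a steep
    # R(T) transition means a small temperature rise sharply raises R and cuts
    # the electrical power, nearly cancelling the optical increase.
    V = 2.0e-6            # bias voltage, volts (assumed)
    R_n = 1.0             # normal resistance, ohms
    T_c = 0.5             # transition temperature, K

    def R(T, width=0.002):
        """Illustrative smooth resistance transition around T_c."""
        return R_n / (1.0 + np.exp(-(T - T_c) / width))

    for T in (0.498, 0.500, 0.502):
        P_elec = V**2 / R(T)
        print(f"T = {T:.3f} K: R = {R(T):.3f} ohm, electrical power = {P_elec * 1e12:.1f} pW")
    # Raising the island temperature by a few mK substantially reduces the bias
    # power, which is the sense of the feedback: extra optical power is offset
    # by reduced electrical power.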

3.3 Focal Plane Construction

The focal plane is made of six single frequency wedges that can be configured to populate the focal plane with 90, 150, and 220 GHz detectors. Wedge shaped arrays of smooth walled horns sit on top of the bolometer arrays. They were machined from a solid piece of aluminum and were gold plated after fabrication to increase their reflectivity. These have short sections of circular wave guide between the horn and the detector which define the low frequency cutoff of the observing band. The high end of the band is defined by wedge shaped low-pass metal mesh filters that are mounted on top of the horn arrays.
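The waveguide's role in setting the band edge follows from the TE11 cutoff of a circular guide, fc = 1.841 c / (π d); the diameters below are illustrative guesses chosen to land near the SPT bands, not the as-built dimensions:

    import math

    # TE11 cutoff frequency of a circular waveguide, f_c = 1.841 * c / (pi * d).
    # Frequencies below f_c do not propagate, which defines the low band edge.
    c = 2.998e8  # m/s

    def te11_cutoff_ghz(diameter_m):
        return 1.841 * c / (math.pi * diameter_m) / 1e9

    for band, d_mm in ((90, 2.3), (150, 1.4), (220, 0.95)):   # illustrative diameters
        print(f"{band:3d} GHz band: d = {d_mm:.2f} mm -> cutoff ~ {te11_cutoff_ghz(d_mm * 1e-3):.0f} GHz")
    # The metal mesh low-pass filters above the horns then set the high edge of
    # each band.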

The TES leads on the arrays end at the outer edge of the wedge where they are wire bonded to circuit boards, called the LC boards, which carry the bolometer signals out to the read out electronics. Each bolometer connects to a line with both a capacitor and inductor in series which create a notch filter along with the TES resistance.
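Each series LC pair places its bolometer at a unique resonance, f0 = 1/(2π√(LC)), which becomes that channel's AC bias frequency in the readout described in the next section. The component values below are assumed, illustrative numbers, not the actual SPT hardware values:

    import math

    # Series LC resonance f0 = 1 / (2*pi*sqrt(L*C)); each bolometer's capacitor
    # picks out its own bias tone.  L and C values are illustrative only.
    L = 16e-6  # henries, one common inductor value assumed for a module

    for ch, C_nF in enumerate((16.0, 12.0, 9.0, 7.0, 5.5, 4.5, 3.7), start=1):
        f0 = 1.0 / (2.0 * math.pi * math.sqrt(L * C_nF * 1e-9))
        print(f"channel {ch}: C = {C_nF:5.1f} nF -> f0 ~ {f0 / 1e3:6.0f} kHz")
    # Seven tones like these can share one pair of cryogenic wires and one SQUID
    # amplifier, which is what cuts the wiring heat load in Section 3.4.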

3.4 Readout

The heat loading on our cryogenic system would be too large if we used one pair of wires per detector. To ameliorate this problem, we multiplex in the frequency domain and read out seven bolometers for each SQUID amplifier[24]. SQUIDs are the most sensitive magnetic field sensors and are commonly used to read out transition edge sensors. We use them as amplifiers by coupling the time-varying current in our bolometer feedback circuit to an inductor which creates a time-varying magnetic field through the SQUID. To multiplex, we create a comb of AC bolometer bias signals, and have each bolometer select one bias signal using its series notch filter. All the bolometer signals in the comb are combined and amplified with one SQUID before they are demodulated to recover the bolometer signals. To increase the dynamic range of the readout, we remove most of the carrier signal with a negative bias comb before amplification. This prevents the SQUIDs from saturating while leaving the information stored in the side-bands. Figure 3.3 shows a schematic of the readout electronics.

The SPT readout system was designed and built by our collaborators at U.C. Berkeley and Lawrence Berkeley Lab, and the SQUIDs were made by NIST. Each SQUID is a 100-element SQUID array that results in an amplifier that acts like one large SQUID, but with 100 times the amplification and only √100 times the noise[25, 24].

The bolometers sit on the ∼250 mK stage, and, as mentioned previously, the bolometer leads are connected to the LC boards that contain the inductors and capacitors. These boards are flexible and bend to bring the bolometer signals onto the back side of the focal plane. Stripline wires are used to connect between the LC boards and the SQUIDs, which are located on the 4 K stage. We heat sink the striplines on the intermediate stage of the cooler, which is kept below one Kelvin. The SQUIDs are surrounded by a high-mu cryoperm shield, which shields the SQUIDs from stray magnetic fields. Just outside the vacuum jacket, on the cryostat, sit a series of SQUID controller boards that are used to bias and set up the SQUIDs. These, in turn, are connected through warm cables to boards that generate the bias combs and demodulate the signals. These demodulated signals are recorded by the readout computer and piped into the main data stream.

Figure 3.2: The 2007 SPT focal plane assembly. The six detector wedges sit below a filter stack and horn array. The figure on the right shows the horn array which sits on top of the detector array. The filters sit atop the horn arrays. The LC boards on the right contain the inductors and capacitors that form the LC notch filter for the readout circuit. The LC boards are wire bonded to the detector wedges.

Figure 3.3: A schematic of the frequency domain multiplexing scheme used by the SPT[24]. All 8 detector bias voltages are generated at once. The resistance of the bolometer along with each LC combination determines the bolometer's AC bias frequency. All of the bolometer currents are added together and are coupled to one SQUID via the input inductor. Here, the variable resistors represent the bolometers, and the SQUID is shown in the feedback loop as a circle with two x's which represent Josephson junctions. Figure from Trevor Lanting's thesis[24].

Chapter 4

Cold Secondary Cryostat

The cold stop is the most unique part of the telescope optics design. In Section 2.3 we described how we truncated the detector feed response at the secondary to create the desired beam on the sky. In doing so, we created the need for a cold stop and baffle which are described in this chapter. A stop is an optical element that limits the rays that can pass through an optical system. Ours is a cold stop, created by surrounding the secondary with a cooled microwave absorber. Stopping the rays limits the usually wide angle response of the detectors. Cooling it decreases detector loading. The cold stop absorption is imperfect and the detector response is wide, so we surrounded the entire optics chain between prime focus and the detectors with a baffle, which captures spillover in a stable, cold environment, preventing it from exiting the cryostat window. The cold stop and baffle functions are performed by the same millimeter wave absorber. For consistency, we will refer to this as the baffle, where its cold stop functionality is implied. We required the baffle to meet two specifications: (1) it had to contain 99% of the spillover so that the spilled-over power would not be absorbed on a hot surface outside the cryostat; (2) the baffle needed to cool to ∼10 K to reduce the optical loading to be comparable to that from the atmosphere. These specifications, which were loose guidelines, dictated decisions regarding the final mechanical and cryogenic design of the baffle.

We begin this chapter with an introduction to the baffle geometry and components within the cryostats. We follow with a description of its optical design in Section 4.2, followed by a review of its mechanical design and assembly in Section 4.3. We finish by providing a description of the cryogenic design of the baffle and radiation shield assembly in Section 4.4.

4.1 Baffle Overview

The SPT baffle is contained within two cryostats that share the same vacuum space. The receiver cryostat contains the detectors, band defining filters, lens, and a small snout that forms the end of the baffle. It shares vacuum with the optics cryostat, which contains the majority of the baffling, the secondary mirror, heat blocking filters and vacuum window.

The baffle and its relation to the secondary mirror and detector array are illustrated in Figure 4.1. We surrounded the secondary with the absorber and extended it by coating the inside of an aluminum shroud just outside the limiting rays. As was done in Chapter 2, we think in broadcast mode to illustrate the telescope optics. In the case of the SPT optics with the baffle, we start by broadcasting rays from the detector feeds toward the secondary mirror. The detector's response peaks at the center of the secondary mirror and falls off toward its edge, where it is stopped down by the baffle before it falls to zero power. Most of the spillover is absorbed, and the remaining fraction that is scattered is contained by the baffle. The main beam is redirected toward prime focus, where the filter stack and foam window are located. After exiting the window, the beam hits the primary mirror and is broadcast onto the sky.

4.2 Optical Design

We explored different baffle geometries and absorber materials while trying to minimize the amount of spillover that could exit the window and potentially hit the hot ground or telescope structure. In Subsection 4.2.1 we explain the material scattering tests that motivated our choice of absorber. In Subsection 4.2.2 we describe the scattering performance of different baffle geometries with absorption properties similar to our tested materials.

Figure 4.1: A section view of the baffle, which is covered with a millimeter-wave absorber, HR-10. Prime focus is located near the window and filter stacks.

4.2.1 Absorber Testing

We tested a series of absorber materials to be used as the inner surface of our baffle [26]. The materials tested were chosen because they are flexible and easy to epoxy onto complicated surfaces. The materials, which were manufactured by Emerson & Cuming, included AN72, HR10, GDS and BSR-1. We measured the specular reflectance as a function of angle of incidence in a narrow band between 130 and 140 GHz for each, using a Gunn oscillator and a diode detector. Figure 4.2 shows the setup that was used. All measurements were done while backing the absorber with aluminum, and our measurements are normalized to the reflectance of a bare aluminum sheet. The real baffle required a metal backing support, so the measurements were a good approximation to reality. We should point out that we were only measuring specular reflection and not radiation scattered at other angles.

Figure 4.2: The millimeter material reflectivity test setup. The Gunn oscillator source operates at 150 GHz. The intensity of the reflected beam is measured by a horn and diode detector which are mounted to an optical table and protractor.

Transmission was zero because of the metal backing. For each material, we measure reflectance = I_reflected,absorber / I_reflected,metal and seek the material that has the lowest value of reflectance over all angles. Here, I is the raw voltage measured at the lock-in amplifier on the diode detector. Figure 4.3 shows the results from our measurements, with HR10 performing much better than other common absorbers such as ECCOSORB AN72. The reflectance was shown to be low at all angles of incidence, rising to a few percent near grazing incidence. Assuming zero transmission and scattering, the aluminum-backed HR10 absorber has an emissivity greater than 0.95 at 150 GHz. We therefore used HR10 as the cold secondary baffle absorber and assume its emissivity is somewhere between 0.95 and 0.99. While designing the baffle, we had to be careful to minimize the number of reflections with grazing incidence, because the reflectance rises steeply at shallow angles.
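The reduction behind these numbers is just a normalization and a subtraction. The sketch below assumes each data point is the lock-in voltage measured with the absorber-covered and bare-aluminum samples at the same angle; with the metal backing forcing the transmission to zero and diffuse scattering neglected, the emissivity follows as 1 − R. The angles and voltages are made-up placeholders, not the measured data.

# Normalize absorber reflections to the bare-aluminum reference and infer emissivity,
# assuming zero transmission (metal backing) and negligible non-specular scattering.
angles_deg = [10, 20, 30, 40, 50, 60, 70]          # hypothetical measurement angles
V_absorber = [0.8, 0.9, 1.0, 1.2, 1.6, 2.5, 6.0]   # lock-in voltages, illustrative only
V_metal    = [210, 208, 205, 200, 195, 190, 185]   # bare-aluminum reference voltages

for theta, va, vm in zip(angles_deg, V_absorber, V_metal):
    R = va / vm                 # specular reflectance relative to aluminum
    emissivity = 1.0 - R        # valid only if transmission and diffuse scattering are ~0
    print(f"{theta:3d} deg: R = {R:.3f}, emissivity ~ {emissivity:.3f}")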

Figure 4.3: Results from the millimeter-wave reflection tests. These show that HR10 performs better than the other materials tested. Reflectance is below 1% for most angles and only rises at angles above 60 degrees (toward grazing incidence). Data and plot from W. Lu.

4.2.2 Scattering

We simulated the containment of the spillover with different baffle geometries, and changed the baffle shape to minimize the fraction of power that escaped the window. In these simulations, we broadcast ray bundles from detector positions into the SPT optics system.

We used the ZEMAX Non-Sequential Components (NSC) optics software package [27], which was ideal for these simulations because it did not need to know the order in which rays hit different elements. NSC is a ray tracing package that incorporates reflection, scattering and absorption, but not diffraction. Simulated rays were allowed to bounce off any number of surfaces in any order, reflecting specularly, scattering, or splitting at each surface, where the power of the split rays was divided to conserve energy. The ZEMAX program provides simulated sources which launch rays, and detector arrays that count the energy of rays that hit different surfaces. A bundle of rays was generated by drawing from a user-defined angular distribution and assigning a power for each ray that depended on the number of rays generated. Each simulation broadcast rays from one particular bolometer location. The rays were traced through the system until they were absorbed, escaped, or hit one of the simulated detectors. ZEMAX filter surfaces were used to ignore rays that hit the secondary first, so we were only investigating the spilled-over radiation. We then counted the total amount of power that hit a simulated detector at the window exit. Figure 4.4 shows the setup for one of the simulations, where a bundle of rays was traced through the system until they were absorbed or escaped through the window.

The total power that exited the window, and its distribution on the ZEMAX detector placed at the window, are shown in Figure 4.5 [28, 29]. Fixing the SPT optics design, we were free to change the baffle geometry and its absorptivity. The baffle size was constrained to fit in a cryostat which would fit in the receiver cabin. 3-D CAD designs of different baffle shapes were imported and incorporated into the optics model. The beam was taken to be a Gaussian with a 6 dB truncation at the edge of the secondary for a center pixel, which corresponds to 75% of the radiated power hitting the secondary.

We investigated a number of different geometries before settling on a two-cone design. The three designs explored most thoroughly were the box, two-cone, and two-cone-with-rings models, shown in Figure 4.6. The two-cone baffle design was based on the shape of the limiting rays as they travel from the window to the secondary and then toward the focal plane. We called it a two-cone design because the limiting rays, emanating from the feed horn, form a conical shape with its base near the secondary. The limiting rays then travel toward prime focus, forming the second conical shape. The closeness of this baffle to the limiting rays helped keep its size small so that it fit in a relatively small cryostat. We opened the angle of the baffle up to increase its size near the secondary. This gave us extra space at the secondary to surround the mirror with an absorbing shelf and also created baffle angles that contained spillover more efficiently.

Figure 4.4: One simulated ray bundle from the ZEMAX scattering simulations. All rays from this bundle were absorbed before they could exit through the optics cryostat window.

Figure 4.5: The incoherent irradiance pattern on the ZEMAX detector at the window. The pattern shows where rays exited the window. This particular simulation was for the box baffle geometry shown in Figure 4.6.

The two-cones-with-rings design added sets of baffling rings near the focal plane to eliminate the largest trouble spot for the two-cone design, where rays hit the baffle very near the window at shallow angles. Large simulations, in which we launched tens of thousands of rays, were used to quantify the performance of these three baffle designs. We kept the bolometer position fixed near the center of the focal plane for all of these simulations, and we repeated them for different absorber emissivities for each baffle geometry. Because we were using a metal backing, we assumed transmission was zero, which implied R = 1 − ε. The mirror emissivity was taken to be ε = 0, and we varied the absorber emissivity between 0.3 and 0.99. These simulations helped quantify the performance of each baffle and show how performance changed as emissivity dropped. Table 4.1 shows results from simulations where we compared the performance of different baffles as a function of emissivity. For ε ≥ 0.8, all the proposed geometries beat our loose specification by a fair margin and contained much more than the required 98% of the spilled-over power. While it would have been nice to gain extra room for error by picking the best performing geometry, we picked the simpler, more elegant two-cone baffle. Results show that this design beat our specification by two orders of magnitude for a conservative HR10 emissivity of 0.9, as long as there were not a large number of rays with grazing incidence. Note that this result assumes a constant emissivity, which is not true for grazing incidence. The two-cone design simulations had assumed uniform emissivity regardless of incidence angle, but we know that the HR10 emissivity is only greater than 0.9 for certain angles. Therefore, we ran separate simulations for the two-cone model where we investigated the number of rays that hit at grazing angles, ensuring they were scattered at least one more time at a more favorable angle before exiting the cryostat. To investigate this, we used additional simulations to broadcast small ray bundles in particular directions. These simulations traced the paths the rays took until they were absorbed, and helped locate the poorest performing parts of our baffle. Figure 4.7 shows a simulation for the most problematic ray bundle, where rays hit the absorbing walls very near the cryostat window. These rays reflect at a fairly shallow angle, but one that is not greater than 80 degrees. After the first reflection, they hit the absorber at least one more time before exiting the window.

Figure 4.6: The three proposed baffles. Top: A box design that was similar in shape to the originally proposed cryostat. It did not include any inner cryostat elements, such as the mirror support, and was therefore an unrealistic shape. Middle: The two-cone baffle based on the limiting rays from the edge pixels. Bottom: The two-cone-with-rings model. It was an extension of the two-cone model, and included additional baffling rings near the array.

Figure 4.7: A simulated ray bundle where the angle is fairly shallow for the first reflection. We show that such rays are required to hit the absorber surface at least one more time before exiting the cryostat window.

ε       Two Cone    Two Cone w/ rings    Box
0.3     4e-2        5e-3                 4e-3
0.5     2e-2        1e-3                 1e-3
0.8     3e-3        1e-4                 1e-4
0.9     8e-4        2e-5                 4e-5
0.99    9e-6        3e-7                 2e-6

Table 4.1: The fraction of spilled-over power that escapes for different baffle geometries and absorber emissivities ε. All three geometries meet our specification that we collect greater than 98% of the spilled-over radiation for emissivities higher than 0.9. The two-cone baffle design is what we chose to build [29].

We also ran the large simulations for different focal plane positions and demonstrated that we meet our stray-ray containment specification for all bolometers on the focal plane. Here, we assumed the emissivity was 0.98, which is approximately true for angles up to 60 degrees from normal. Table 4.2 shows that our two-cone baffle performs well for the extreme positions on the focal plane.

xpos [mm]    ypos [mm]    Power Fraction
0            0            2e-5
20           0            2e-5
100          0            3e-4
0            100          6e-4
50           50           5e-4
75           75           8e-4

Table 4.2: The fraction of spilled-over power that escaped as a function of focal plane position. The points tested represent the full extent of the focal plane. This was for the two-cone geometry with an emissivity of 0.98. Both the emissivity and baffle geometry are a good approximation to what is installed in the SPT.

All simulations show that the two-cone baffle contains the required amount of spilled-over radiation, leading us to choose the simple two-cone design because it minimized the size of the cryostat needed to house it, and it simplified the mechanical design. The box model performed well, but was an unrealistic design as it did not contain mirror support structures and it was excessively large. Also, the extra rings in the two-cone-with-rings design would have helped for scattering purposes, but would have required making the baffle larger to the point that they would have interfered with the cryostat.

4.3 Baffle and Optics Cryostat Mechanical Design

Here we describe the mechanical design of the baffle and how it is assembled in the optics cryostat. We also describe the other components inside the optics cryostat that are needed to make the baffle work as desired. This includes the radiation shield, secondary mirror, mirror support, and filters, all of which are cooled by a pulse tube cooler. At the end of the section, we illustrate how the optics cryostat and receiver connect together, share vacuum space, and form the final baffle shape. Thermal issues will be discussed separately in Section 4.4.

4.3.1 Baffle Design

The two-cone baffle geometry introduced in Section 4.2.2 was shown to effectively contain spillover and scattered radiation. The baffle, shown in Figure 4.1, surrounds the mirror and extends toward both the window and focal plane. Most of the baffle is contained inside the optics cryostat, and is supported by an aluminum shroud which keeps it rigid and in the desired shape. The shroud serves other crucial roles: one is to help cool the absorber surface uniformly to ∼10 K; another is to provide rigid surfaces at the ends of the baffle on which to mount filters. The aluminum shroud, shown in Figure 4.8, consists of two rolled conic sections that are welded to a flange and then to a cylindrical piece that extends it behind the secondary to a mount. The shroud is strengthened by placing flanges at the small openings as well, and the smaller of the two is used to mount the 10 K filter. We designed the baffle and modeled its attachment within the assembly using the Solidworks [30] 3-D CAD software. It was designed around the cold optics as though it were at cryogenic temperatures. We accounted for the thermal expansion of the baffle from 10 to 300 K by scaling up each of the shroud components when we machined them. We used a similar trick to expand the mirror support structure to see where the cold baffle moves to when it warms up. The Solidworks design package provided tools to unroll sheet metal parts which took into account the stretching of the material that would occur during the rolling process. After designing the baffle and enlarging it, we used this to flatten the conic sections and exported the flat designs to be fabricated with a CNC water jet cutter. The shroud was made from 1.6 mm thick, dead soft, annealed aluminum 1100, which is soft enough to bend by hand. The 1/4 inch strengthening flanges and the conical shape of the baffle made it quite rigid when it was fully assembled.
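The oversize factor used when machining the warm parts can be estimated from the integrated thermal contraction of aluminum between room temperature and ∼10 K, which is roughly 0.4%; that handbook value is assumed here and is not a number quoted in this thesis.

# Scale a dimension designed at its cold (operating) size up to the size it should be
# machined at room temperature, using the integrated thermal contraction of aluminum
# from ~300 K down to ~10 K (about 0.4%, a handbook value assumed here).
ALUMINUM_CONTRACTION = 4.0e-3   # (L_300K - L_10K) / L_300K, approximate

def warm_machining_size(cold_size_mm, contraction=ALUMINUM_CONTRACTION):
    """Return the room-temperature dimension that shrinks to cold_size_mm when cold."""
    return cold_size_mm / (1.0 - contraction)

cold_diameter_mm = 1000.0   # e.g., a one-meter cold diameter (illustrative)
print(f"machine at {warm_machining_size(cold_diameter_mm):.1f} mm "
      f"to get {cold_diameter_mm:.1f} mm when cold")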

The locations of the flanges had to be accurate to 2 mm to prevent interference with the cryostat or a thermal short to a different temperature stage (see Figure 4.21). This machining tolerance was difficult to achieve for a large structure made from one of the softest aluminum alloys. The rolled aluminum was very malleable, and welding it typically induces distortions, which added to the difficulty of constructing the baffle within specifications. Therefore, we constructed a jig, shown in Figure 4.10, to hold the end flanges in place and then wrapped the conical shapes to intersect the flanges. The sheet metal sections were welded to the flanges to hold them in the correct place. They were then welded together at only a few points to reduce weld-induced stresses while providing an ample thermal path. We coated the inner surface of the baffle with 1 cm thick HR-10, a millimeter-wave absorber made by Emerson & Cuming. This is the material that performed best in the scattering tests described in Subsection 4.2.1. It is an open-cell polyurethane foam constructed to be electrically conductive, which makes it a broad-band absorber. We coated the inside of the aluminum baffle with Stycast 2850 FT mixed with catalyst 9 to attach the HR-10 sheets. Because the sheets were flexible, they conformed to the conical shape inside the baffle. Figure 4.11 shows the optics cryostat section of the baffle partially coated with the absorber. We also coated the side of the mirror with the absorber material and created a shelf that extended the absorber up to the side of the mirror so no broadcast rays can get behind the secondary. The small conic sections of the baffle that reside inside the receiver cryostat were also covered with the HR-10 absorber.

Figure 4.8: The 10 K baffle shroud. It consists of two conic sections with a cylindrical extension at the bottom and strengthening flanges at the top. The flange on the smaller of the two openings is used to mount a filter stack.

Figure 4.9: Here we show how we unroll one section of the cones to create a flat model that can be cut with a CNC machine.

Figure 4.10: Here we show the jig used to assemble the baffle. The disks at the top keep the flanges in position while welding.

4.3.2 Baffle and Radiation Shield Assembly

We surrounded the baffle with a nearly identical, larger shroud to block the radiative heat load from the cryostat walls. This radiation shield was kept at ∼60 K by the first stage of the pulse tube cooler. The radiation shield extended beyond the mirror and baffle such that it enclosed both. We welded the filter flange at the end of this shroud. This filter stack reduced the heat load on the second stage cryogenics and helped reduce the detectors' optical loading. The baffle and radiation shield were connected together to form the assembly shown in Figure 4.12. The 10 K baffle was held in place on a mirror mount described in Subsection 4.3.4, and the radiation shield was hung from the baffle. Using sets of G10 standoffs, we thermally isolated the baffle and radiation shield while providing mechanical support. At the bottom, we had 1.25 inch long, 1 inch diameter, 0.01 inch thick, molded tubes in recessed box mounts (see Figures 4.13 and 4.12). The set of four G10 cylinders are stiff in all directions and constrain the position and rotation of the outer radiation shield relative to the inner baffle structure. Signals would be created in the detector timestreams if the filters or baffle moved relative to the beam as the telescope moves. To keep this from happening, we installed two sets of G10 fins, which constrain the baffle and radiation shield in the radial direction near the window port. The fins are thin, flat sheets which are 0.01 inches thick, three inches wide, and one inch long. The fins are pulled taut to hold the surfaces in tension, thus setting the radial positions of the cone ends. One set of fins connects the radiation shield to the cryostat shell and the other connects the radiation shield to the baffle. They are thin and intentionally weak along the axis of the cones (perpendicular to the mirror surface) to allow movements of one shroud relative to the other while cooling. This axial movement is required because the two shrouds shrink by different amounts while they cool to different final temperatures. A schematic view of the G10 fins is shown in Figure 4.14.

Figure 4.11: Installation of the HR10 absorber in the baffle. Individual 2x2 foot sheets of the absorber are epoxied into the aluminum 1100 shroud.

Figure 4.12: The cold stop baffle and radiation shield assembly. The baffle two-cone structure is pictured with its inside coated with the mm-wave absorber. The radiation shield sits just outside the baffle with the same general shape as the baffle. The bottom G10 support boxes can clearly be seen. The larger of the two openings is where the baffle intersects with the small part of the baffle that is inside the receiver cryostat. The smaller opening is near the window.

Diode and Cernox thermometers are placed throughout the baffle and shield structure. We use them to monitor the absorber's temperature and to investigate heat loads and temperature gradients. Their locations are listed and illustrated in Appendix A, and some of the thermometers are visible in Figure 4.12. They are installed before putting together or installing the baffle/shield assembly into the optics cryostat, as they are not accessible after wrapping the aluminum surfaces with superinsulation. We attach the thermometers with small screws and Stycast them in place for better thermal sinking. After installing all baffle and shield thermometers, we wrap both shields with nine layers of superinsulation. This greatly reduces the heat loads on each temperature stage. See Equation 4.1 and the heat load calculations that follow in Subsection 4.4.2 for a description of how superinsulation decreases heat loads. A small amount of superinsulation needs to be applied with the assembly inside the cryostat, but most of it is permanently attached to the cone assembly. The superinsulation is NRC-2 Cryolam [31], which we cut to match sections of the baffle. One large cutout is made for the conical shape that includes the receiver opening and another for the window port. We overlap the two superinsulation sections at the weldment seams by roughly 4 inches on either side, and we continue wrapping and taping until we build up nine layers of superinsulation on both the baffle and radiation shield. We also apply superinsulation to the back plates behind the secondary, which are attached after the baffle assembly is installed. The superinsulated assembly, ready to install in the cryostat, is shown in Figure 4.15.

Figure 4.13: One of the four G10 cylinder supports for the radiation shield located near the mirror end of the baffle. Left: a cut through the G10 support, showing how the G10 cylinder connects the two structures together. The recessed box increases the thermal path. Right: a 3-D view of the assembly with the top flange invisible.

Figure 4.14: The cold stop and radiation shield fin supports near the receiver mounting flange. The G10 fin on the right is one of three that constrain the spacing between the shield and stop. The radiation shield has been made transparent to illustrate the mounting details. The G10 fin on the left is one of four that support the radiation shield and connect the shield to the cryostat near the receiver flange. In both cases, the fin consists of a sheet of G10 that is epoxied into an aluminum strip that slips into a slot in the shroud. The fin is pulled and held in tension by connecting the other end to an aluminum bracket. Figure from Padin et al. 2008 [23].

4.3.3 Optics Cryostat Assembly

The baffle and radiation shield assembly are housed in the optics cryostat, which also holds the secondary mirror. The receiver cryostat, with the other small section of the baffle, connects to the optics cryostat, completing the optics chain. The baffle is split between the two cryostats in a way that allows different receivers to plug into the telescope without changing the optics cryostat configuration. This requires us to break both the baffle and radiation shield in two, with one section in the optics cryostat, and the other in the receiver cryostat.

Figure 4.15: The baffle and radiation shield assembly, ready to be installed in the optics cryostat. The truss rods have been attached and superinsulation has been installed. The cover plates were installed for protection and were used to lift the baffle assembly into the cryostat.

The ∼4−10 K baffle sections must overlap to block infrared radiation originating from the hot cryostat walls or radiation shield that would warm and saturate the detectors. The radiation shields also have to overlap to block radiation from the cryostat walls, which would thermally load the second stage cryogenics. At the same time, none of the shields or baffles can touch, because that would thermally load the receiver pulse-tube cooler. Space constraints required the baffles and shields to be very close together and made the interleaving very difficult. The interleaving geometry near the filters is shown in Figures 4.21 and 4.17. The interleaving and overlapping occur while bolting the two cryostats together, and this is done blind, where we cannot watch for interferences. These thermal and IR-blocking constraints result in our requirement that all shroud ends be fabricated to 2 mm tolerances.

Figure 4.16: The baffle and radiation shield assembly and their relation to the secondary mirror and receiver. The optics cryostat and receiver cryostat flanges are shown. Near this interface, the baffle and shields are split in two to allow the receiver to be separated. The shrouds were required to be close near the overlap to block IR radiation, but were not allowed to touch each other.

4.3.4 Secondary Mirror Mount

The mirror and baffle assembly are attached to a back plate and are held in place by a kinematic mirror mount which is formed by six truss rods that connect the mirror mount to the cryostat shell. The truss rod tubes are 1 mm thick, 25 mm in outside diameter and approximately 1 m in length, and have ball joints at their ends. The locations of the ball joints are constrained, but they are allowed to rotate. The lengths of the six rods define the location and angle of the secondary mirror and baffle. As the assembly cools, the truss rods shrink by ∼2 mm, finally moving the mirror and baffle into their nominal positions when completely cold. The mirror is supported on the back plate with a three-point mount that allows it to contract freely without developing stress [32]. One of the three points is held in place with a bolt and spherical washer set and is not allowed to move. The others sit on bearings and are allowed to float. The baffle assembly also mounts to the mirror back plate. The bottom flange of the 10 K baffle bolts to the plate, which extends beyond the radius of the mirror, and the radiation shield passes over the mirror and back plate. The baffle, radiation shield, secondary, back plate and truss rods are assembled as a unit (see Figure 4.15) before being inserted into the optics cryostat.

Figure 4.17: The optics and receiver cryostat components. The receiver and optics cryostat assemblies connect together to form the baffle. Shown is a cross section of all cryostat components and the cryostat shells.

4.3.5 Pulse Tube

Heat straps connect the pulse tube stages to the back plates of the baffle and radiation shield. They are shown in Figure 4.19. The second stage strap connects to the back of the mirror mount, and passes through a hole in the back plate of the radiation shield. The geometry and construction of the heat strap allow for relative motions between the pulse tube and back plates that occur as the truss rods cool and shrink. The pulse tube moves roughly a quarter of an inch toward the back plate while the cryostat is being pumped out and the vibration-isolating bellows contracts. The flexible heat strap also helps damp vibrations from the pulse tube. The heat straps are made of oxygen-free high-conductivity (OFHC) copper and contain a rigid piece and a flexible piece. The rigid piece is machined from a solid block of OFHC copper, and the flexible piece is made from 12 individual sheets which total a quarter inch in thickness. The individual sheets and the curled-up shape of the heat straps allow for the pulse tube movement. Our flexible heat strap also provides stiffness along the scan direction and helps keep the mirror in place to reduce scan-induced offset signals.

Figure 4.18: An illustration of the rod ball joints. The mirror mount is supported by a similar joint at each of the three corners of the triangular back plate. Similar joints also connect the other ends of the truss rods to the cryostat shell near the receiver flange. Each ball joint is allowed to rotate around the ball, but the mount is rigid when all six rods are in place. This forms a semi-kinematic mount for the mirror. A steel disk presses the ball into a cylindrical hole when the bolts are tightened. The ball, disk, cylindrical piece and rods are made of steel.

Figure 4.19: The optics cryostat heat straps. The first and second stages of the pulse tube cooler are attached to the back plates of the radiation shield and baffle, respectively. The second stage of the pulse tube cooler cools the baffle, mirror back plate and mirror. The heat straps consist of two main parts, an L-bracket and a flexible strap, both of which are made of OFHC copper. The flexible strap is made of twelve individual sheets of copper which allow for compliance along one direction while remaining stiff along the telescope scan directions. The flexibility allows for the vibration-isolating bellows to contract as the cryostat is evacuated.

Figure 4.20: The jig used to measure clearances of the receiver baffle and radiation shield and to monitor for interference problems. The jig provides a mounting surface for the receiver baffle and shield that is at the same location as in the receiver.

4.3.6 Assembly Along With Receiver

We fashioned a jig to mount the conical sections of the receiver baffle and radiation shield before attaching the receiver cryostat (see Figure 4.20). We must do this because clearances are tight and this mating is done blind without being able to investigate for thermal shorts. This jig is used to mount one receiver cone at a time to investigate touches and overlap.

Figure 4.21: The baffle and radiation shields near the cryostat window and receiver. The 250 mm clear aperture filter stacks are shown. The filter stacks sit at different temperatures and cannot touch each other. Near the receiver opening, the optics baffle and shield must interleave and overlap those from the receiver. The space between the optics baffle and radiation shield is less than 3/4 of an inch. The window sags under vacuum pressure and will come within 1/2 inch of the 300 K filter on top.

4.3.7 Vacuum Window and Filters

Light enters our optics chain through a vacuum window. The window and a set of heat-blocking filters reduce the heat load on the various cryogenic stages while passing the majority of the in-band radiation on to the detectors. A schematic of the window, filters and baffle is shown in Figure 4.21. The window has a ∼10 inch clear aperture, and is made from 4 inch thick Zotefoam Propazote PPA-30 [33] foam. This is typically made in separate ∼25 mm thick sheets and heat laminated into a thicker sheet by the manufacturer. We cut the sheet into a 300 mm disk with a hot wire cutter and epoxied it into a custom window frame using Stycast 2850FT epoxy. The window material deforms under pressure when we evacuate the cryostats. We cannot make it thicker because it would interfere with the cabin roof, and it cannot be moved down or deform more than ∼2.5 inches without touching and potentially breaking the 300 K heat-blocking filter. The filters must be close to the window because they cannot be made with a clear aperture much larger than 250 mm and the beam is diverging quickly near prime focus. We tested the window strength and long-term deflections of similar windows using a vacuum chamber with a window in the side to view the underside of the foam window [34]. Figure 4.22 shows the window test setup with the glass viewing window in the side and a laser sight to measure bottom deflection. Long-term deflection tests show that the window will come no closer than 1/2 inch from the 300 K filter. The top and bottom of the window deflect a similar amount, with the window material compressing a small amount. It was feared that the window would stretch such that the bottom surface would be lower than the top surface would indicate. This was not and should not be an issue. The results from an extended deflection test are shown in Figure 4.23.

Figure 4.22: Window test setup. The clear vacuum window allows us to measure the window deflection at the bottom of the window. The SPT window holder is shown bolted on top of the window chamber.


Figure 4.23: Pump-down results for a similar, but larger and thinner, window than what we eventually deployed in the SPT. This shows that a twelve inch window will sag more than the 2.5 inches of space that we have between the window and the filter. This window was 3.6 inches thick. Top deflection is measured easily with a ruler. Bottom deflection is measured with a sight glass and laser. Measurement errors are approximately 1/32 inch and 1/8 inch for the top and bottom, respectively. The top and bottom deflection are nearly identical and consistent within measurement errors. There may be slight compression between layers rather than expansion. This measurement shows us that the top and bottom of the window deflect by a similar amount. Subsequent pump-down tests of the real SPT window show lower, ∼2 inch deflections.

Directly below the window (as shown in Figure 4.21) sit sets of filter stacks. They consist of a 200 cm−1 low-pass IR shader at 300 K, a 12 cm−1 (360 GHz) low-pass filter sandwiched between two 200 cm−1 IR shaders at 70 K, and a 9 cm−1 low-pass filter at 10 K. These form a series of heat-blocking low-pass radiation filters that block out-of-band radiation coming from the sky or the back of the foam window. The 300 K filter is mounted to the back of the foam window holder, near prime focus, with a series of standoffs. The ∼70 K filters are mounted as a stack and are bolted to a flange at the end of the radiation shield shroud. The ∼10 K filter is bolted to a flange at the end of the 10 K baffle. Figure 4.24 shows the filter opening of the optics cryostat.

Figure 4.24: The optics cryostat openings. The radiation shield filters can be seen covering the top opening, which is near prime focus. The foam window and 300 K filter mount above the filters shown. The secondary mirror can be seen through the large receiver opening. The receiver cryostat attaches to the large bolt circle around this opening.

4.4 Cold Stop Cryogenic Design

The cold secondary and baffle cryogenic system was designed to meet a few crucial requirements. First, we required that the baffle and mirror cool to roughly 10 K to reduce the optical load on the detectors. Second, to avoid limiting the time to repair or replace components in the receiver, we needed the baffle to cool at least as fast as the four days required to cool the receiver. The third consideration was to eliminate time-varying loading on the bolometers. We therefore had to attenuate the 40 mK peak-to-peak temperature fluctuations created by the pulse tube cooler before they reached the absorber.

4.4.1 Heat Loads

The cooling capacity of the two stages of the pulse tube cooler sets the magnitudes of the acceptable heat loads. In order to effectively buffer the second stage from heat loads, we needed to cool the radiation shield, which is connected to the first stage. The first stage of the Cryomech PT410 delivers roughly 30 Watts of cooling power at a desirable 50 K temperature. We required the baffle to cool to 10 K, and, to accomplish this, we needed the second stage of the pulse tube to be a bit colder to accommodate a temperature gradient across the heat strap and baffle structure. At 5 K, the second stage of the Cryomech PT410 has roughly 2 Watts of cooling power [35]. Therefore, we required heat loads of less than roughly 30 and 2 Watts on the first and second stages, respectively. These heat load requirements drove many design parameters, such as the length and thickness of the truss rods and G10 support tubes, the number of layers of superinsulation, and the gauge and length of cryogenic wiring.

One load we anticipate is the radiative transfer between two metal surfaces, which we typically wrap in superinsulation. We see this load from the cryostat shell onto the radiation shield, and from the radiation shield onto the baffle. The radiative load between plane, parallel, superinsulated plates of area A and temperatures T1 and T2 takes the form

Q = \frac{\epsilon}{n+1}\,\sigma A \left( T_1^4 - T_2^4 \right), \qquad (4.1)

where ε is the emissivity of the surfaces, A is the area, T1 and T2 are the temperatures, σ is the Stefan-Boltzmann constant, and n is the number of layers of superinsulation between the surfaces. Because both surfaces are covered with the same superinsulation, we can assume they share the same value of ε. Superinsulation reduces the load by a factor of 1/(n + 1), and we chose to use 9 layers so that we reduce the load by a factor of 10.

A second type of radiative load enters through the filter stacks near the cryostat window. We install stacks of filters that are transparent in our observing bands, and emissive otherwise, to block heat while letting the in-band radiation pass. These are effective because our observing bands are very far into the Rayleigh-Jeans limit of the 4-300 K blackbodies in question. In this regime of the blackbody spectrum, there is little power being radiated.

These filter stacks allow our detectors to look out into the world, but do result in some additional loading. At the back of the window we have a filter that sits near 200 K. We also have a filter stack on our radiation shield (at the top of the ∼50 K cone) that absorbs the 200 K blackbody radiation while allowing the in-band radiation through. The final filter stack at the top of the ∼10 K baffle blocks the ∼50 K radiation and allows the in-band radiation to pass. To calculate these loads, we use an equation from Ekin 2006 [36] that writes the power in the form

Q = \sigma A \left( T_1^4 - T_2^4 \right) \frac{\epsilon_1 \epsilon_2}{\epsilon_1 + \epsilon_2 - \epsilon_1 \epsilon_2}, \qquad (4.2)

where all the variables are the same as before except that we allow the emissivities of the two surfaces to be different. Conductive loads are approximated as two surfaces, at temperatures T1 and T2, connected by a material of length l, uniform cross section a, and temperature-averaged thermal conductivity κ:

Q = \kappa\,\frac{a}{l} \left( T_1 - T_2 \right). \qquad (4.3)
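For reference, Equations 4.1-4.3 are transcribed directly below as small functions (SI units throughout); the detailed load budget in the following subsections is what actually matters, and these are only a convenience for checking it.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def radiative_load_mli(eps, area, T1, T2, n_layers):
    """Eq. 4.1: radiative load between superinsulated parallel plates of equal emissivity."""
    return eps * SIGMA * area * (T1**4 - T2**4) / (n_layers + 1)

def radiative_load_two_emissivities(eps1, eps2, area, T1, T2):
    """Eq. 4.2: radiative load between two surfaces of differing emissivity (Ekin 2006)."""
    return SIGMA * area * (T1**4 - T2**4) * eps1 * eps2 / (eps1 + eps2 - eps1 * eps2)

def conductive_load(kappa, area, length, T1, T2):
    """Eq. 4.3: conduction through a uniform cross section with averaged conductivity."""
    return kappa * area * (T1 - T2) / length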

4.4.2 First Stage Heat Loads

To cool the radiation shield sufficiently, we required the first stage load to be less than 30 Watts. We designed our cryostat to mitigate heat loads, the largest of which came from the cryostat wall. We wrapped 9 layers of superinsulation around the radiation shield to decrease the load by a factor of 10 according to Equation 4.1. To estimate this load, we set T1 = 300 K, T2 = 77 K, A = 4 m², n = 9, and ε = 0.05. The emissivity of highly polished metals is much less than 1, and is typically taken to be between 0.01 and 0.05 [36, 23].

The resultant load was 9 Watts. We also expected a few Watts of radiative power from the window on the 77 K filter stack. The level of this heat load depended greatly on the assumed emissivity of the window material and on the final temperature of the back side of the window, both of which are unknown. We believe that the window cools radiatively, but have no good measurement of its final temperature. For a rough estimate of the heat load, we assume it cools to T = 200 K with emissivities ε1 = ε2 = 0.8 for both the window and filter. Using Equation 4.2 and setting A = 0.05 m², we obtained ∼3.5 Watts.

We also calculated the conductive loads from cryogenic wiring. We had roughly 15 thermometers on each stage, for a total of ∼60 wires per stage. We used Lake Shore Cryotronics QT-36 Quad-Twist wire to connect the thermometers [37]. The wire is made of 36 gauge (0.127 mm diameter) phosphor bronze. The total wiring load on the first stage was calculated using the phosphor bronze conductivity in this temperature range, κ = 35 W/(m K) [37], with l = 0.5 m, ΔT = (300 − 40) K = 260 K, and the cross-sectional area derived from the 36 AWG wire thickness. This estimated heat load, shown to be 2 mW, was negligibly small compared to the rest of the first stage heat loads.

We installed heat straps on the truss rods to divert some of the 300 K parasitic load from the second stage onto the first stage. The installation of the straps was difficult, and resulted in poor thermal contact between the straps and the radiation shield. Measuring the temperature gradient across the truss rods, it is clear that the straps do little to divert power to the first stage. This is also clear because the truss rods read nearly 200 K at the location where they are supposed to be sunk to 70 K. Because these straps proved insufficient, all of the load ended up on the second stage. We therefore take the first stage load from the truss rods to be zero.

The radiative loads were obviously the dominant loads on the first stage, but both had large errors associated with them. Cracks in the superinsulation and parts that were very hard to superinsulate could easily account for another few Watts of radiative load. Similarly, our estimate of the temperature or emissivity of the window could be off by a large factor as well. Our estimated total heat load on the first stage was therefore a lower limit of 12.5 Watts, well below our requirement of 30 W.
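The two first-stage radiative estimates above are straightforward to reproduce; the arithmetic below simply plugs the quoted numbers into Equations 4.1 and 4.2, with the window term hinging on the assumed 200 K window temperature and 0.8 emissivities.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

# Eq. 4.1: 300 K cryostat wall onto the superinsulated radiation shield (taken at 77 K)
eps, A, T1, T2, n = 0.05, 4.0, 300.0, 77.0, 9
Q_wall = eps * SIGMA * A * (T1**4 - T2**4) / (n + 1)

# Eq. 4.2: foam window (assumed ~200 K, emissivity 0.8) onto the 77 K filter stack
eps1 = eps2 = 0.8
A_filt, T_window, T_filt = 0.05, 200.0, 77.0
Q_window = (SIGMA * A_filt * (T_window**4 - T_filt**4)
            * eps1 * eps2 / (eps1 + eps2 - eps1 * eps2))

print(f"wall -> shield: {Q_wall:.1f} W")        # ~9 W, as quoted above
print(f"window -> filters: {Q_window:.1f} W")   # a few W; sensitive to the assumptions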

4.4.3 Second Stage Loads

We required the heat load on the second stage to be lower than 2 Watts in order to cool the pulse tube head to 5 K and the baffle below 10 K. The second stage load was dominated by the conductive load from the truss rods, which connected directly between 300 K and the ∼7 K mirror backing structure. We minimized this load by making the six supporting rods very thin and long. They are thin-wall stainless truss rods of approximately 1 meter in length. Their outer diameter is 0.02 m and the wall thickness is 0.001 m. Using Equation 4.3, and conductivity numbers from the NIST website [38], we calculate the load to be a total of ∼1 Watt from the 6 rods. There are other conductive loads from the G10 tube and fin supports. Each of the G10 tube pieces is 1 inch in diameter, ∼1 inch long, and 0.01 inch thick. We used thermal conductivities published by NIST [38]. Using Equation 4.3 we estimate 15 mW for the four tubes, which is negligible compared to the truss rod heat load [39]. The three G10 fins contributed a similarly negligible heat load, as the length and cross section of the material were similar. We also calculated a heat load due to the calibrator light pipe described in Section 4.5.

The light pipe connected directly between 300 K and the back of the ∼10 K mirror. It was made in two sections, the first of which was 3/8 inch o.d. copper pipe, and the second of which was made from 3/8 inch o.d. thin-wall stainless steel. The room-temperature end is the copper section, which is ∼3 feet long. The cold stainless section is roughly 8 inches long.

For reference, a copper-only light pipe would have resulted in a heat load of ∼6 Watts. To estimate the load of the two-section pipe, we first calculated an upper limit by assuming the copper did nothing to reduce the temperature gradient between 300 K and 10 K. In this scenario, the stainless pipe is 300 K at one end and 10 K at the other. Using Equation 4.3 with a = 1 × 10⁻⁵ m², T1 = 300 K, T2 = 10 K, l = 0.25 m, and the temperature-averaged conductivity of stainless steel (giving κa/l ≈ 3 × 10⁻⁴ W/K), we came up with a load of 100 mW. Next, we used this 100 mW load to calculate the actual temperature at the cold end of the copper. We do this by setting the power equal to 100 mW and solving Equation 4.3 for T2. Here, we came up with a temperature of 296 K, which implied that the copper does not do much to reduce the temperature at the hot end of the stainless pipe. We conclude that the heat load from the light pipe is similar to our worst-case limit of 100 mW.

We also use QT-36 phosphor bronze wires to connect the thermometers between the first and second stages. We calculate the load using the ∼10 K thermal conductivity value provided by Lake Shore [37], κ = 15 W/(m K), and use l = 0.06 m, ΔT = (40 − 10) K = 30 K, and the cross-sectional area derived from the 36 AWG wire thickness. This yields a 5 mW wiring load, which is sub-dominant compared to the other loads. The largest radiative load is from the radiation shield, which sits parallel to the baffle.

We kept this load small by cooling the radiation shield below 77 K, and by surrounding the baffle shroud with nine layers of superinsulation. Using Equation 4.1 with A = 4 m², n = 9, T1 = 60 K, T2 = 10 K, and ε = 0.05, we calculate a load of 15 mW. Combining all of the loads described above, we arrived at a total second stage heat load of 1.15 Watts. The breakdown of the loads is summarized in Table 4.4.
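The dominant second-stage terms can be checked the same way. The phosphor-bronze conductivity below matches the value quoted above, while the stainless-steel value is an assumed temperature-averaged number (of order 10 W/(m K)) rather than one taken from the NIST tables directly, so the rod total should only be read at the tens-of-percent level.

import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def conductive_load(kappa, area, length, dT):
    return kappa * area * dT / length   # Eq. 4.3

# Six thin-wall stainless truss rods: 1 m long, 20 mm OD, 1 mm wall, 300 K to ~7 K.
a_rod = math.pi * (0.010**2 - 0.009**2)   # cross-sectional area of one tube [m^2]
Q_rods = 6 * conductive_load(kappa=10.0, area=a_rod, length=1.0, dT=293.0)

# ~60 phosphor-bronze thermometer wires (36 AWG) between the 40 K and 10 K stages.
a_wire = math.pi * (0.127e-3 / 2) ** 2
Q_wires = 60 * conductive_load(kappa=15.0, area=a_wire, length=0.06, dT=30.0)

# Radiation shield (60 K) onto the superinsulated baffle (10 K), Eq. 4.1.
Q_shield = 0.05 * SIGMA * 4.0 * (60.0**4 - 10.0**4) / (9 + 1)

print(f"truss rods ~ {Q_rods:.2f} W, wiring ~ {Q_wires*1e3:.1f} mW, "
      f"shield ~ {Q_shield*1e3:.1f} mW")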

4.4.4 Heat Load Results

The mirror, baffle and radiation shield cool to temperatures within our specifications: the mirror and baffle cool to ∼8 K, while the radiation shield cools to ∼60 K. Final temperatures of all thermometers are provided in Table A.1. We reach ∼5 K and ∼30 K on the second and first pulse tube stages respectively, which imply heat loads of 2 and 20 Watts respectively. These loads are both roughly a factor of two above the predicted loads presented in the previous section and summarized in Tables 4.3 and 4.4, but still low enough to achieve our specifications. Gaps in the superinsulation could account for a large fraction of this discrepancy on the first stage, as could a misestimate of the foam window temperature.

Type          Load description                             Load [Watts]
Radiative     300 K cryostat walls to radiation shield     9
Radiative     Window                                       3.5
Conductive    Wiring                                       0.002
Total                                                      12.5
Measured                                                   20

Table 4.3: First stage heat loads.

Type          Load description                             Load [W]
Conductive    Truss rods                                   1
Conductive    Calibrator light pipe                        0.1
Conductive    G10 tubes                                    0.015
Conductive    G10 fins                                     0.015
Conductive    Wiring                                       0.005
Radiative     Shield to cones                              0.015
Total                                                      1.15
Measured                                                   2

Table 4.4: Second stage heat loads.

4.4.5 Gradients

The final heat loads allow us to cool the first and second stage pulse tube heads to 30 K and 5 K, respectively. For the baffle to cool below 10 K, we must minimize temperature gradients between the pulse tube head and the absorber. Temperature gradients are induced across any material with heat flowing through it, and can be estimated by solving Equation 4.3 for the temperature difference T1 − T2. We need both low heat loads and high thermal conductance to have small temperature gradients. We have already illustrated how we minimized heat loads in the previous sections; now we discuss how we minimized gradients. The final temperature of the baffle is ∼10 K, but the pulse tube head is at ∼5 K. Roughly 1 K of this gradient occurs in the pulse tube heat strap, 1 K of it is along the baffle itself, and the rest of it occurs on the back plate and mirror support structure.

To choose the appropriate alloy for the baffle and back plate, we calculated the gradients that would be incurred for modest heat loads. Here, we assumed the bottom end of the baffle would be 4 K and calculated the temperature at the top of the cone as a function of heat load. We approximate the baffle as a cylinder, one meter in diameter, one meter tall, and 1/16 inch thick. For aluminum 6061, a gradient of 8 K would result from a modest load of 0.5 Watts. This would have been unacceptable since, in practice, the bottom of the cone is actually 8.4 K; such a large gradient would raise the final temperature of the baffle above our specification. The same calculation for an aluminum 1100 baffle shows a modest gradient of 1.5 K for a heat load of 0.5 Watts. Therefore, we manufactured the baffle cones and back plate from the high-conductivity, soft, annealed aluminum 1100. For reference, soft annealed aluminum 1100 has a 4 K conductivity a couple of times better than that of normal aluminum 1100. An additional temperature increase is expected across the thickness of the HR10. We epoxied a thermometer to the top of the HR10 surface using Stycast 2850 FT, and placed the thermometer where we expect the highest radiative load from the window. We placed another thermometer at the same location, but on the metal shroud, and measured that the HR10 surface was never more than 1 K warmer than the metal shroud.
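The alloy comparison follows from Equation 4.3 applied to the thin-walled cylinder described above. The low-temperature conductivities used below are assumed, representative values (of order 10 W/(m K) for 6061 and several times higher for soft annealed 1100 near these temperatures), chosen to illustrate why the predicted gradients differ so strongly; they are not values quoted in this chapter.

import math

def gradient_K(Q_watts, kappa, diameter_m, wall_m, length_m):
    """Temperature rise along a thin-walled cylinder carrying heat Q (Eq. 4.3 solved for dT)."""
    area = math.pi * diameter_m * wall_m   # thin-wall cross section
    return Q_watts * length_m / (kappa * area)

wall = (1.0 / 16.0) * 0.0254   # 1/16 inch in meters
for alloy, kappa in [("Al 6061 (assumed ~12 W/m/K)", 12.0),
                     ("Al 1100, annealed (assumed ~65 W/m/K)", 65.0)]:
    dT = gradient_K(Q_watts=0.5, kappa=kappa, diameter_m=1.0, wall_m=wall, length_m=1.0)
    print(f"{alloy}: dT ~ {dT:.1f} K for 0.5 W")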

4.4.6 Cooling Time

The baffle and mirror cool to their base temperatures in roughly three and a half days, which is faster than the receiver components cool. The second stage of the pulse tube reaches 5.3 K, the secondary mirror 8.4 K, and the baffle 9.4 K. The remaining base temperatures are presented in Table A.1.

The optics cryostat cooldown curves are presented for the final test done at Case before deployment, and are shown in Figure 4.25. Subsequent cooldowns behave similarly, with the exception that their base temperatures were slightly lower, because the tests done at Case were done without the receiver cryostat. Without the receiver, we loosely bolt on radiation caps and shields, which remain somewhat hotter than the receiver.

4.4.7 Temperature Oscillations

Temperature variations of the cold stop will show up in the detector time-streams, and the pulse tube head temperature oscillates at 1.4 Hz, with peak-to-peak variations of 40 mK. We need to attenuate these fluctuations before they couple to the absorber and drive time-varying optical loads. The thermal conductance of the heat straps and the large heat capacity of the secondary, mirror mount and shroud act as a low-pass filter that largely attenuates the fluctuations. We permanently mount Cernox thermometers at various stages to measure these oscillations, the most relevant of which is located midway up the baffle shroud. In one measurement, we sampled this Cernox sensor for two hours while all temperatures were at their base values. The amplitude of the 1.4 Hz fluctuations at this point was not detected above a noise level of 1 μK, so the optical signal in the receiver is no more than a few hundred nK. This is a conservative upper limit because the time constant of the HR10 absorber attenuates these oscillations further.
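Treating the heat-strap conductance and the heat capacity of the mirror, mount and shroud as a single-pole thermal low-pass filter, the non-detection quoted above translates directly into a lower limit on the effective time constant; the short calculation below is only that single-pole estimate, not a model of the actual thermal network.

import math

f = 1.4            # pulse tube oscillation frequency [Hz]
A_in = 40e-3       # peak-to-peak temperature swing at the pulse tube head [K]
A_out = 1e-6       # upper limit on the swing at the baffle shroud [K]

# Single-pole low-pass: |H| = 1 / sqrt(1 + (2*pi*f*tau)^2); invert for tau.
attenuation = A_out / A_in
tau_min = math.sqrt(1.0 / attenuation**2 - 1.0) / (2.0 * math.pi * f)
print(f"required attenuation {attenuation:.1e} implies tau > ~{tau_min/60:.0f} minutes")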

4.5 Calibrator

We use a chopped thermal source to characterize our detectors' short- and long-term behavior. This calibrator is permanently coupled to our optics system and is run many times per day to assess performance and to help relatively calibrate our detectors. The calibrator consists of a few basic hardware components which couple a chopped thermal source into the bolometers' optical path without interfering with normal observation when not in use. Here we describe the purpose of the calibrator and its construction.

Figure 4.25: Housekeeping thermometer readings for a cooldown of the optics cryostat with the receiver cryostat connected. The temperatures reach steady state after roughly 80 hours. Note that we stopped taking continuous data at hour 70. Final temperatures are reached at hour 80, and final temperatures for the final configuration are shown in Appendix A. The contents of the optics cryostat cool faster than the receiver, so we do not limit the system turnaround time when switching out focal plane components. The final heat loads on the stages were estimated using the final temperatures of the pulse tube stages and a heat load map provided by Cryomech.

4.5.1 Calibrator Tasks

We use the calibrator to check the performance of our detectors. The calibrator is used to check for live and well-performing detectors immediately after biasing is completed. A quick ∼10 minute schedule is run at the beginning of each day to check for live detectors. We measure each detector’s response to the calibrator at one or more chopping frequencies, and compare the level of this response to the noise. The amplitude of this response is monitored and used to correct for gain variations.

We need to know the bolometers' temporal response so we can deconvolve it during our time-stream processing before map making. We expect the bolometer response to fit a single-pole filter response and to be roughly independent of loading. To measure the response, we perform what we call a calibrator sweep by chopping the hot calibrator source between 4 and 60 Hz. We monitor the response functions by measuring them roughly once a month, and also measure them at various elevations and loading conditions.
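A calibrator sweep amounts to measuring the single-pole amplitude response |H(f)| = 1/sqrt(1 + (2*pi*f*tau)^2) between 4 and 60 Hz and fitting for the time constant. The sketch below fits that model to synthetic sweep data; the 15 ms time constant used to generate the fake data is arbitrary, and the real sweep analysis in the SPT pipeline is not reproduced here.

import numpy as np
from scipy.optimize import curve_fit

def single_pole(f, tau, amp):
    """Amplitude response of a single-pole low-pass bolometer."""
    return amp / np.sqrt(1.0 + (2.0 * np.pi * f * tau) ** 2)

# Synthetic calibrator sweep: chop frequencies between 4 and 60 Hz, fake 15 ms time constant.
rng = np.random.default_rng(0)
freqs = np.linspace(4.0, 60.0, 15)
true_tau, true_amp = 0.015, 1.0
response = single_pole(freqs, true_tau, true_amp) * (1 + 0.02 * rng.standard_normal(freqs.size))

(tau_fit, amp_fit), _ = curve_fit(single_pole, freqs, response, p0=(0.01, 1.0))
print(f"fitted time constant: {tau_fit * 1e3:.1f} ms (truth {true_tau * 1e3:.1f} ms)")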

4.5.2 Calibrator Hardware

The layout of the calibrator is shown in Figure 4.26, and the calibrator itself is pictured in Figure 4.27. It consists of a hot source and an ambient (300 K) source that we can chop between at high frequencies. Radiation from the calibrator is broadcast into a light pipe that ends at a hole in the secondary mirror, and therefore radiates onto the detectors. The chopper lets the calibrator alternate between one source and the other rapidly, and a shutter completely blocks the radiation when the calibrator is not in use. The calibrator consists of commercially available hardware put together in a custom assembly that couples the signal into our optics chain. The hot source is an LC-IR-12 thermal source from Boston Electronics. It has a ceramic element surrounded by a filament and connected to a built-in thermocouple. The hot source is meant to operate at ∼1000 Kelvin with a lifetime of many years. It has two leads which are to be connected to any DC power supply. The built-in type K thermocouple has leads that we connect to our custom thermocouple readout. We epoxied the IR source to a custom flange that holds it in place and couples it to our custom light pipe. The first section of the light pipe and its flange are made of stainless steel to help thermally isolate it from the rest of the light pipe. The light pipe runs directly to the cold mirror, so we thermally isolate it to reduce its heat load. Our custom light pipe is intentionally broken at 45 degrees by a slit that allows a quickly spinning chopper blade to pass through. The chopper is a model C-995 from Terahertz Technologies Inc. The chopper spins a blade with alternating closed and open patterns to create chopped signals between 4 and 500 Hz with an accuracy of 0.01% of the frequency setting. It outputs a reference signal at the chop frequency, which we feed into our data acquisition system. We are able to measure our response to the chopped optical signal by looking at the height of the PSD of the signal, or by reading out the amplitude and phase of the signal with respect to the chopper's phase. We had the chopper wheel gold-plated so it would have high mm-wave reflectance.

Figure 4.26: The layout of the calibrator. The light pipe is pictured in blue, the hot source in red, and the other parts are in black. The calibrator consists of a hot and an ambient source that we can chop between at high frequencies. We also include a shutter so that we can eliminate any signals from the hot source during normal observations.

Figure 4.27: The calibrator hardware.

When the blade is out of the beam, the detectors see the room temperature source, which is a hollow conical section coated with a millimeter-wave absorber. The temperature of the ambient source is monitored with an AD592 temperature sensor and read out with our custom electronics box. We installed a shutter mechanism in another slot in the light pipe to stabilize the optical load when we are not using the calibrator signal. This allows us to leave the hot source on, which would otherwise take time to reach equilibrium. The shutter is the commercially available SH-10 shutter from Electro-Optical Products Corp. It has a 15 millisecond reaction and closing time and can be used to investigate the bolometer time constant at frequencies slower than 4 Hz.

Figure 4.28: The IR-12 calibrator source from Boston Electronics [40] and its custom stainless steel housing. The filament heats to ∼1000 K, and a built-in thermocouple helps read out its temperature.

A small vacuum window made of plastic, embedded in a short section of the light pipe, feeds the radiation from the calibrator into the inside of the cryostat. On the vacuum side of that window is the bent copper light pipe. The copper is easy to shape, and has only a few shallow bends before it reaches the back of the secondary. The copper light pipe connects to a thin-walled stainless pipe which is attached to the back of the secondary. This stainless section is the last part of the light pipe. It is used to thermally isolate the conductive copper, which sits at 300 K, from the cold secondary mirror.

4.5.3 Calibrator Thermometry

The calibrator is outfitted with three thermometers in addition to the thermocouple that comes attached to the IR-12 thermal source. These additional sensors are Analog Devices AD592 temperature transducers. The output current of the AD592 is proportional to its temperature. The thermocouple is an accurate temperature sensor near ∼1000 K, and the AD592 sensors are inexpensive sensors used to measure the nearly room-temperature components. One AD592 monitors the temperature of the ambient load, one monitors the reference junction of the thermocouple, and the final sensor is located near the hot source to monitor the temperature of its stainless steel mount; it was used only as a check of our design. The amount of signal seen by the bolometers depends on the values read out by the thermocouple and the first two AD592s.

Figure 4.29: The calibrator connected to the optics cryostat. The calibrator box is shown attached to a port on the back of the optics cryostat. The calibrator light pipe is also pictured extending to the back of the secondary mirror.

We read out the temperatures with a custom electronics box. Thermocouples exploit the fact that a junction of two dissimilar metals produces a voltage that depends on the temperature where the metals touch; this is called the Seebeck, or thermoelectric, effect. The IR-12 sensor is a type-K thermocouple made of Chromel-Alumel. Using a lookup table for a type-K thermocouple, we convert the output voltage of the sensor into temperature. The complication arises when connecting these Chromel-Alumel wires to normal copper wires: the two additional junctions create a similar Seebeck effect in the reverse direction. This additional effect is easily resolved by measuring the temperature of the reference junction, Tref, with a low cost, room temperature sensor. The real temperature of the filament is obtained by converting the thermocouple voltage into temperature with the lookup table and correcting for the reference junction temperature measured by the AD592. We chop between the room temperature source and the hot source, so the chopped radiation is proportional to the difference between the ambient source temperature and the reference-junction-corrected thermocouple temperature.

Figure 4.30: The calibrator thermometers.
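As an illustration of the reference-junction correction described above, here is a hedged Python sketch that uses standard cold-junction compensation (add the Seebeck voltage corresponding to the reference-junction temperature, then invert the type-K lookup table); the calibration points below are coarse placeholders, not the values used in the actual readout.

    import numpy as np

    # Coarse, illustrative type-K calibration points (temperature in K vs. mV);
    # the real lookup table is much finer.
    TYPE_K_T_K = np.array([273.0, 473.0, 673.0, 873.0, 1073.0, 1273.0])
    TYPE_K_MV = np.array([0.000, 8.138, 16.397, 24.905, 33.275, 41.276])

    def filament_temperature(v_measured_mv, t_ref_K):
        """Convert a measured type-K voltage to filament temperature.
        The thermocouple senses the hot junction relative to the reference
        junction, so we add back the reference junction's equivalent voltage
        (measured via the AD592) before inverting the lookup table."""
        v_ref_mv = np.interp(t_ref_K, TYPE_K_T_K, TYPE_K_MV)
        return np.interp(v_measured_mv + v_ref_mv, TYPE_K_MV, TYPE_K_T_K)

    def chop_contrast(v_measured_mv, t_ref_K, t_ambient_K):
        """Temperature contrast seen by the detectors when chopping between
        the hot filament and the ambient load."""
        return filament_temperature(v_measured_mv, t_ref_K) - t_ambient_K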

4.6 Summary

The mirror and baffle cool to 10 K with small gradients along the baffle, meeting all of our cryogenic requirements. The final temperatures on the first and second stages of the pulse tube cooler are 5 K and 36 K respectively, which imply loads of 2 and 20 Watts. The entire optics cryogenic system reaches base temperature in less than three and a half days, which is less time than it takes to cool the focal plane. We concluded this chapter with a description of the calibrator functionality and hardware.

Chapter 5

Data Selection, Processing, and Map Making

The SPT is currently mapping large regions of the southern sky in search of galaxy clusters. Here we present the first results from a 40 square-degree area centered roughly at right ascension 5h30m, declination -53 degrees (J2000). We refer to this field, which has also been targeted by the Blanco Cosmology Survey (BCS) optical survey, as the BCS5h30 field. Here, we report the four most significant SPT detections within this field and the confirmation of those detections with the BCS optical data as was presented in Staniszewski et al[41]. These first detections demonstrate the capabilities of the SPT and the usefulness of SZ surveys in general for discovering galaxy clusters.

5.1 Observations

We present data from one of two fields that have overlapping BCS coverage because it is our first field to have coverage in all three SPT observing bands, as well as optical coverage. The 90 GHz data was taken in 2007, when our focal plane configuration was more heavily weighted toward 90 GHz detectors as compared to the 2008 focal plane. The 150 and 220 GHz data were taken in early 2008.

For each map, we build up a composite map from many individual, ∼2 hour observation maps. Each ∼2 hr map is a fair fraction of the size of the composite map, but with much higher noise. The composite 90 GHz map is made from 270 individual ∼40 square degree maps, and the 150 and 220 GHz maps are made from 314 ∼100 square degree maps.

Each observation is carried out by sequencing constant-elevation scans across the field, where we sweep the entire telescope azimuthally across the field and then back before stepping up in elevation. Our scans were performed at constant velocities between 0.44 and 0.84 degrees of azimuth per second, and the elevation steps were 0.07 degrees for the 2007 observations and 0.125 degrees in 2008.

Dedicated pointing and calibration observations are done between 2 hour maps to monitor and correct for changes in our pointing or calibration. These observations, which are discussed further in Subsections 5.2.1 and 5.2.3, include scans across the galactic HII regions RCW38 and Mat5A, ∼2 degree elevation nods, and measurements of the internally chopped thermal source.

5.2 From Raw Data to Final Maps

Combining individual detectors' data into composite maps requires thorough characterization of the instrument and a series of processing choices. This section describes the major steps, which include

1. Pointing reconstruction

2. Beam measurement

3. Relative and absolute calibration

4. Data selection and notch filtering

5. Time stream processing

6. Mapmaking

5.2.1 Pointing Reconstruction

Our ability to add many detectors' time ordered data to a composite map requires a detailed understanding of where each detector is pointing throughout each observation. We achieve this by separating the pointing reconstruction problem into two questions: 1) Where are each detector's beams located with respect to the boresight? 2) Where is the boresight pointing? We address the first question by calculating each pixel's pointing offset from our galactic HII region observations. We do so iteratively by fitting each individual detector's RCW38 map to a high signal-to-noise RCW38 template, as sketched below. First, we estimate pixel offsets from an optical model of the telescope optics. These provide initial guesses for the pixel offsets, which we use to coadd one observation of RCW38 into a template map. We then iterate on this process, this time solving for pixel offsets by moving each detector's pixel offset to give the best fit to the template. The new pixel offsets replace the old ones and provide a still better template. We continue iterating until the pixel offsets converge, and create one RCW38 template per 160-bolometer wedge. Using these templates, we verify that there is little to no variation in pixel offsets from day to day, and use one solution for the entire observing season. An RCW38 template is shown in Figure 5.1.
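The iterate-and-coadd scheme can be sketched as follows. This is a simplified, hypothetical Python illustration: it fits only integer-pixel shifts by FFT cross-correlation, whereas the real pipeline fits each detector map to the template with sub-pixel precision.

    import numpy as np

    def fit_offset(det_map, template):
        """Integer-pixel (dy, dx) shift that best aligns one detector's
        RCW38 map with the current template, via FFT cross-correlation."""
        xcorr = np.fft.ifft2(np.fft.fft2(det_map) *
                             np.conj(np.fft.fft2(template))).real
        dy, dx = np.unravel_index(np.argmax(xcorr), xcorr.shape)
        ny, nx = det_map.shape
        # Wrap the shifts into the range [-n/2, n/2).
        return ((dy + ny // 2) % ny) - ny // 2, ((dx + nx // 2) % nx) - nx // 2

    def iterate_offsets(det_maps, n_iter=5):
        """Alternate between coadding a template with the current offsets and
        refitting each detector's offset against that improved template."""
        offsets = [(0, 0)] * len(det_maps)   # initial guesses, e.g. from the optical model
        for _ in range(n_iter):
            template = np.mean([np.roll(m, (-dy, -dx), axis=(0, 1))
                                for m, (dy, dx) in zip(det_maps, offsets)], axis=0)
            offsets = [fit_offset(m, template) for m in det_maps]
        return offsets, template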

The second task is to reconstruct the boresight pointing. A rough estimate of the boresight pointing is calculated in real time by the telescope's pointing computer, which uses the elevation and azimuth encoders, the location of the telescope, and the GPS time. Additional off-line pointing corrections are made by modeling the changes of the telescope structure and ice pad over time with the aid of other data. These data include tilt meters, metrology sensors and temperature sensors placed throughout the telescope, ∼monthly optical pointing runs, and ∼few-per-day observations of RCW38 and Mat5A. The optical pointing data come from star cameras attached to the primary mirror and receiver cabin. RCW38 and

Mat5A observations are done frequently to monitor calibration, but are also useful for reconstructing pointing because these two sources are at slightly different elevations and we measure them multiple times, a few hours apart, as they circle the southern sky.

Figure 5.1: RCW38 images similar to what were used as calibration templates. Each image is centered at the same position and is 30 arcminutes by 30 arcminutes. The panels show the 90, 150, and 220 GHz maps; the color scale is dT in K-RJ. The top row shows the images with a linear scale, and the bottom shows the same images on a logarithmic scale. The amount of the map used for calibration is smaller than the map shown and is described in the text. The 90 GHz map was made from 2007 data, and the 150 and 220 GHz maps were made from 2008 data. Each map consists of ∼10 30 minute observations totaling roughly 5 hours of integration time. Figures from Carlstrom et al. 2009[22].

Combining all of these data, we construct a pointing model whose success can be independently checked after the fact for single-observation or composite maps. We measure final pointing errors for uncorrected and corrected boresight pointing by measuring the apparent smearing of point sources that are visible in single-observation maps. A point source viewed by SPT should appear beam-smoothed to the width of our beam; any additional extent of the point sources is attributed to pointing jitter. Uncorrected pixel errors of roughly 20 arcseconds RMS were reduced to less than 8 arcseconds RMS by using the off-line pointing model.

Finally, we register our overall pointing to a number of point sources within the SUMSS catalog [42, 43]. This astrometry is accurate to 10 arcseconds in the final coadded maps.

Figure 5.2: The 150 GHz SPT beam window function and the best fit Gaussian window function used for cluster searching. The best fit Gaussian beam has been multiplied by 0.75 to match its calibration to the small angular scales of the real beam function.

5.2.2 Beam Measurement

We need accurate measurements of our beams to calculate their effects on the SZ and the CMB. The beams used in this analysis are the best fit elliptical Gaussian functions that fit our maps of Mars. The best fit beams had Gaussian full widths at half maximum of 1.5, 1.2 and 1.1 arcminutes for 95, 150, and 225 GHz. The SPT beam window function for 150 GHz is shown in Figure 5.2.

5.2.3 Relative and Absolute Calibration

A great deal of effort went into understanding relative and absolute calibrations needed to combine data from many detectors into a single map, and to convert them to absolute temperature units on the sky. The calibrator injects a signal into the optics chain through

a small hole in the secondary mirror. The detector response to this signal is sensitive to changes in detector responsivity, which could result from changing the detector or SQUID biasing, or from changes in loading. The relative gains of the detectors, and their variations over time, are estimated using measurements of the calibrator before each observation. We apply these relative calibration values before our timestream filtering, which includes the removal of a common mode signal. After timestream filtering, we apply a second calibration factor which accounts for variations in illumination of the thermal source across the focal plane; this is measured by calculating the ratio of each detector's response to the calibrator to its response to RCW38, averaged over many measurements. In all subsequent data releases, we included the illumination correction before the common-mode removal. We finally convert the RCW38-corrected relative calibration into an absolute calibration by comparing our measured RCW38 fluxes with those published by the ACBAR and BOOMERANG experiments. For the 150 and 225 GHz channels, we compared to the

RCW38 flux reported by ACBAR[44], which was linked to the WMAP5 calibration at 150 GHz by ACBAR[45]. In order to do this comparison, we smooth our RCW38 map to match the ∼5 arcminute resolution of ACBAR, and compare our flux within the inner 8 arcminutes to the ACBAR flux. We use a separate BOOMERANG-derived RCW38 calibration for our 95 GHz maps[46], smoothing our SPT RCW38 map to match the ∼10 arcminute BOOMERANG beam and comparing the flux within twice the Gaussian width (sigma) of the BOOMERANG beam. Our derived calibration uncertainties for the 95, 150 and 225 GHz bands are 13%, 8%, and 16% respectively. Calibration errors are described below and their contributions to the overall calibration budget are listed in Table 5.1.

1. Calibrator brightness variation - This error is estimated by how much the calibrator brightness varies during a ∼few hour observation. This includes variations in detector responsivity and also in the brightness of the calibrator source.

2. Lack of atmospheric opacity variation correction - Estimated as the average opacity we expect at 150 GHz. We corrected all observations for the mean opacity of the RCW38 observations over the season, but in principle the opacity varied over the observations and we did not explicitly correct for it.

Source of Calibration error    90 GHz [%]   150 GHz [%]   220 GHz [%]
Cal Brightness                 11           4             13
Atmospheric opacity            3.5          3.4           3.5
RCW38 variability              3.4          3.4           4.8
TCMB to TRJ                    1.0          5.0           1.5
RCW38 measurement              3            3             3
ACBAR                          5            5             5
Total                          13.1         8.2           15.9

Table 5.1: Calibration uncertainty budget.

3. RCW38 variability - Estimated as the day-to-day variation in the measured RCW38 brightness. In principle, the brightness of RCW38 could vary from day to day. We measure this variation, but it is mostly due to variation in the responsivity of the detectors, with some contribution from the inherent variation of the HII region itself. Most likely this means we are conservatively double-counting the atmospheric opacity uncertainty.

4. TCMB to TRJ conversion - Pass band spectra for the 2008 data were estimated from 2007 spectra, but had not yet been measured at the time we published our first results. Therefore, we assumed a conservative range of expected bands to derive an uncertainty. This source of calibration error vanishes when calibrating directly from WMAP.

5. RCW38 flux systematic error - Small variations in the RCW38 calibration due to the sensitivity to the radius of integration were used to estimate a calibration uncertainty.

6. ACBAR RCW38 calibration uncertainty - Estimated in Reichardt et al.[45]. This includes ACBAR's calibration uncertainty from RCW38 to WMAP and the intrinsic variability of RCW38 during ACBAR's observations.

Future data releases will use a calibration based on WMAP. By observing a ∼1000 square degree region of the sky, we can cross calibrate relative to WMAP without using an intermediate like ACBAR and BOOMERANG. Calibrating directly to WMAP improves our calibration uncertainty greatly. For example, the calibration uncertainties for the Lueker et al. 2009 power spectrum paper[47] are 3.6% (7.2%) for 150 (220) GHz.

5.2.4 Data Selection

Before processing and combining our data into a final map, we implement a series of cuts to improve the overall quality of our final data product. Here we describe the motivation and criteria for these cuts. A summary of the data quality cuts, and the fraction of data removed for the 2008 150 GHz data, is presented below in Table 5.2.

1. Unresponsive and high-noise bolometers - Our first cut identifies good detectors by running the calibrator and checking each detector's response to the chopped source, and then taking noise data with the telescope stationary. Next, we perform an el-nod, where we scan the telescope up and down in elevation to see the change in optical loading in the response of live detectors. We cut detectors whose response to the calibrator and elevation nod are small relative to the noise in the bandwidth where cluster signals are prevalent (∼few Hz). For the observations included in this analysis, we keep a median of 129 out of 280 detectors in our 95 GHz maps, 322 out of 420 detectors for our 150 GHz maps, and 170 out of 280 detectors for our 220 GHz maps.

2. Scan cuts - We also cut data when there are abnormally large pointing variations, pointing dropouts, or other data dropouts. These cuts are done on a scan by scan basis and account for a 6% cut in data.

3. Bolometers cut for one scan - Within a given back and forth scan, we cut an individual bolometer's entire timestream if it exhibits a spike whose amplitude is 7σ above the mean of the polynomial-filtered scan data, as it would if it were hit by a cosmic ray. We also remove a bolometer's data for that scan if its DC SQUID value is a factor of ten higher than normal. These account for another 6% cut.

4. Line notching - We allow ourselves to notch out anomalous line power seen in our detector PSDs. If any line is too wide (more than 0.007 Hz), or if a detector has more than 40 lines, we cut the bolometer's data for the entire observation. If, however, there are only a few lines with widths less than 0.007 Hz, then we notch out those lines and continue processing the bolometer's data. The bandwidth removed is less than 1% of the total bandwidth.

5. Bad observation - We also cut based on observation-wide criteria. Most frequently, we cut an entire observation if the cryogenic cycle ended part way through it, or if the total number of included detectors was less than 50. For the 2008 150 GHz data, we kept 314 out of 377 observations, which represents a 17% cut in data.

5.2.5 Time Stream Processing

Additional time stream processing is required before we can combine individual bolometer timestreams into maps. The primary goal of this processing is to remove 1/f noise from the instrument and atmospheric signals before adding timestreams into the coadded map. We filter the data by fitting and subtracting low order Legendre polynomials from each detector's time stream. We subtract the best fit 19th order polynomial from the ten-degree-wide-scan 2008 timestreams and an 11th order polynomial from the shorter six-degree-wide-scan 2007 timestreams. These two filters remove information at scales larger than a half degree.
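A minimal Python sketch of this per-scan polynomial subtraction (the function and argument names are hypothetical, not the SPT pipeline's):

    import numpy as np
    from numpy.polynomial import legendre

    def poly_filter_scan(scan_data, order=19):
        """Fit and subtract a low-order Legendre polynomial from one
        detector's data for a single constant-elevation scan (e.g. order 19
        for the ten-degree 2008 scans, 11 for the shorter 2007 scans)."""
        x = np.linspace(-1.0, 1.0, scan_data.size)   # scan coordinate mapped onto [-1, 1]
        coeffs = legendre.legfit(x, scan_data, order)
        return scan_data - legendre.legval(x, coeffs)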

Because atmospheric signals are correlated between detectors, we also subtract the instantaneous array mean from each detector. First we subtract the best fit polynomial, and then we subtract the common mode. This common-mode subtraction acts as a further high-pass filter at the one degree scale, which is the size of the detector array. To avoid creating ghost images when passing over a bright point source, we create a point source mask with the first iteration of the map making and reprocess the data while masking out the brightest point sources from each detector timestream as the common-mode mean is calculated.

Cut description             Percent of data cut
Unresponsive bolometers     23% of bolometers
Scan cuts                   6% of scans
Bolos cut for one scan      6% of remaining bolos within a given scan
Line notching               less than 1% of bandwidth
Bad observation             17% of observations

Table 5.2: The amount of data lost from different cuts.

We also filter the data with a 40 Hz low-pass for the 2007 data and a 25 Hz low-pass for the 2008 data, where the different cutoffs are chosen to match the spatial scales in filtering for each season, as their scan speeds are different. This low pass filtering removes information on spatial scales of < 0.5 arcminutes, which does not significantly affect the SZ signal, but helps act as an anti-aliasing filter before we bin our data into 0.25 arcminute pixel maps. We need an accurate measure of the effects of this processing on the CMB, SZ, and other sources to create a matched filter for cluster detection. To measure these effects, we insert a delta function in our time stream and send it through our analysis pipeline. The filtering and beam transfer function is shown in Figure 5.3, and the filtering's effects on the SZ, CMB, and other noise signals are calculated in Sections 6.2 and 6.3. The final timestream processing step is to deconvolve the bolometer temporal response function described in Subsection 4.5.1. We measure the bolometer time constants by stepping the calibrator chopper frequency and measuring each bolometer's amplitude and phase response. We find that the response fits a single pole low-pass filter well enough to reduce the residual amplitude of a point source in a right-left difference map to less than 1%. Our measured time constants range from 5-10 milliseconds in 2007 and 10-20 milliseconds for all detectors in 2008. In the timestream processing, bolometer deconvolution and filtering are done in the same Fourier operation.
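The single-pole deconvolution can be written as a division in the Fourier domain. The following is a hedged Python sketch (in the actual processing this division is combined with the low-pass and other filtering in one Fourier operation):

    import numpy as np

    def deconvolve_time_constant(timestream, tau, fsamp):
        """Remove a single-pole low-pass bolometer response with time
        constant tau [s] from a timestream sampled at fsamp [Hz] by dividing
        by the response in the Fourier domain."""
        freqs = np.fft.rfftfreq(timestream.size, d=1.0 / fsamp)
        single_pole = 1.0 / (1.0 + 2.0j * np.pi * freqs * tau)
        spectrum = np.fft.rfft(timestream) / single_pole
        # In practice a low-pass filter is applied in the same step to avoid
        # amplifying high-frequency noise.
        return np.fft.irfft(spectrum, n=timestream.size)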

Figure 5.3: The 150 GHz ell-space filtering and beam transfer function. At large angular scales (low kx), the polynomial filtering dominates. On small angular scales (high kx), signals are filtered by beam smoothing. The non-scan-direction transfer function lacks the filtering at low kx, but is similar at high kx.

5.2.6 Mapmaking

We use our calibrated, filtered, deconvolved timestreams along with pointing information to create maps: we compute a pointing timestream for each bolometer using the boresight pointing and pixel offsets, and add the data to a map with 0.25 arcminute pixels. The brightness value of each pixel is calculated using inverse-variance weighting based on the RMS of each detector's processed and relative-gain-scaled time ordered data. Our ∼100 square degree coadded map is shown in Figure 5.4.
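A schematic version of this weighted binning, with hypothetical names and no attempt to reproduce the pipeline's bookkeeping, might look like:

    import numpy as np

    def bin_into_map(samples, pix_index, weights, npix):
        """Accumulate processed timestream samples into map pixels using
        inverse-variance weights (weight = 1 / RMS**2 of each detector's
        processed, relative-gain-scaled data).

        samples   : concatenated samples from all detectors
        pix_index : flat map-pixel index of each sample (boresight + offsets)
        weights   : per-sample weight
        npix      : total number of pixels in the map
        """
        wsum = np.bincount(pix_index, weights=weights, minlength=npix)
        dsum = np.bincount(pix_index, weights=weights * samples, minlength=npix)
        with np.errstate(invalid="ignore", divide="ignore"):
            pix_values = np.where(wsum > 0, dsum / wsum, 0.0)
        return pix_values, wsum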

Figure 5.4: Left: The final map that we use to search for clusters. These maps are roughly 100 square degrees and we only search in the 40 square degrees where we overlap with the BCS field. Right: Our final left-right difference map. We use this to estimate the noise in our maps and to double check that there are no problems with our beam or time constant measurements.

Chapter 6

Cluster Finding and First Results

There are currently two cluster finding pipelines being used by the SPT team. Here we describe the matched filter approach, which was used for our first results. This method helps remove noise from our minimally filtered, composite maps, and allows cluster candidates to be identified. It uses knowledge of the shapes of galaxy clusters and of the different noise contributions to suppress noise relative to the signals in which we are interested. This is possible because galaxy clusters are small, and their power (in the Fourier domain) is at small angular scales where noise power from sources like the CMB has fallen appreciably.

6.1 The Matched Filter

Matched filters were introduced by Haehnelt and Tegmark[48] to extract the kinetic SZ effect signal and have since been expanded and refined to search for clusters of galaxies using the thermal SZ effect[49, 50]. The matched filter, which maximizes the signal to noise of a cluster detection, is defined as

\psi = \frac{S^{T} N^{-1}}{\sqrt{S^{T} N^{-1} S}}. \qquad (6.1)

Here, N is the noise covariance, including all non-SZ foregrounds, and S is the SZ cluster template. As is done frequently in the literature, we choose to apply the matched

filter in the Fourier domain. The matched filter, defined above, is computed by comparing the power spectral density (PSD) of the signal with the PSD of the total noise. The difficulty in constructing the filter lies in accurately modeling the SZ template and noise contributions as they are seen by our particular instrument. Specifically, we must account for the effects of time stream filtering and the finite beam. The construction of the source template and noise PSDs is described in the next sections.

6.2 SZ Cluster Templates

The SZ cluster template S in Equation 6.1 informs the matched filter of the angular scales where cluster signals are important. Cluster profiles are usually very peaked at the center and fall off quickly, but best fit profiles differ from cluster to cluster, and experts have not yet agreed on a universal density model. Therefore, we try three well motivated profile types: 1) modified NFW profiles [51, 52, 53]; 2) isothermal β models with β between 2/3 and 4/3; 3) simple Gaussian profiles. Each of these profiles has a parameter that changes the effective size of the cluster. In practice, we create an array of filters, where each is tailored to look for clusters of a particular characteristic size. We find that our list of candidates remains the same regardless of profile type, as long as we vary the characteristic size from near zero to a few arcminutes. Because we are insensitive to the cluster profile, we pick one, the isothermal β profile, whose SZ surface brightness profile is given by

\Delta T_{SZ}(\theta) = \Delta T_0 \left(1 + \theta^2/\theta_{core}^2\right)^{(1-3\beta)/2}, \qquad (6.2)

where θ is the angular distance from the line of sight through the center of the cluster, θcore is the angular core radius, and ΔT0 is the peak signal. For a given β, the filter is changed by varying θcore.

β and θcore are roughly degenerate given our data, so we keep β fixed with β = 1, and search for different size clusters by varying θcore. We construct the array of filters and study

the dependence of the detection signal to noise as a function of filter θcore, varying it from 0.25' to 3.5' in 0.25' steps. Timestream filtering and beam smoothing, described in Subsection 5.2.5, affect the way these profiles appear in our synthesized maps, and must be accounted for before using them in the matched filter. To calculate these effects, we convolve the SZ cluster template with our filter and beam transfer functions, which are shown in Figure 5.3. The effects of the transfer functions are illustrated in Figure 6.1, where we have multiplied the Fourier transform representation of a β profile by the transfer function. We insert the filtered signal, S, into Equation 6.1 when calculating the matched filter.

6.3 Noise and Foreground Estimates

We model the noise in our maps as a combination of

1. Instrumental readout noise,

2. Atmospheric noise,

3. The Cosmic Microwave Background,

4. Unresolved point sources, and

5. Unresolved, diffuse SZ signals

While constructing the matched filter from Equation 6.1, we express the noise covariance

N as the combination N = NCMB + Nnoise + NPS + NSZ, which consists of everything in our maps that is not resolved SZ cluster signal. Here, NCMB is the Cosmic Microwave

Background, NPS is the point source contribution, and NSZ is the unresolved SZ. These three are stationary in the sky. In addition, we have an instrument specific term, Nnoise, which is a combination of the SPT specific readout noise and atmospheric noise, which is not fixed on the sky.

Figure 6.1: Fourier transform band power for an isothermal β model in the scan direction. The Legendre polynomial filtering acts as a high-pass filter along the kx direction. The non-scan direction (ky) profile remains unfiltered.

The noise covariance in the Fourier domain is the sum of the power spectra of the individual components. We adopt this strategy because of its benefits in calculating all the noise terms. In real-space, neighboring pixels are correlated for obvious reasons. The atmosphere and CMB are extended objects compared to our beam and pixel sizes, and our scanning produces correlations that are different for different pixel pairs. Calculation of the pixel-pixel covariance matrix for all our noise terms would be very arduous for our ∼thousand-by-thousand-pixel maps. In contrast, most of our noise sources are approximated fairly well as Gaussian random

fields. By assuming as much, our noise terms are represented in the Fourier domain with zero correlation between neighboring Fourier modes. This implies that the noise covariance is diagonal, and therefore, for a megapixel map, we only have one million non-zero covariance elements rather than one-million-squared non-zero elements as is the case in real space. It has recently been shown that the noise covariance matrix is diagonal in

the Fourier domain and equal to the noise power spectrum as long as noise from different Fourier modes is not correlated. To create the final noise covariance, Ñ, we first combine the celestial noise terms by combining their best fit power spectra from the literature. Here, the tilde implies the Fourier transform. The CMB noise, ÑCMB, is estimated from the power spectrum with the best fit parameters from WMAP5[54], and the unresolved point source contribution, ÑPS, comes from the Borys 2003 model for dusty point sources[55]. The dominant power in point sources is already masked out, so the faint radio contribution to the point source amplitude is negligible. The SZ background noise, ÑSZ, is estimated from the simulations described in Shaw et al. 2007[56]. This is SZ signal not detected as candidates, but instead another term that adds small angular scale power to the noise. The CMB, unresolved SZ and point source power undergo the same time stream filtering described above for the SZ cluster profile. Therefore, we modify their power spectra by convolving them with the beam and filtering transfer functions. A cartoon illustration of the filtering effects on the CMB power spectrum is shown in Figure 6.2. Because we scan our telescope at fixed elevation and subtract polynomial functions from the time ordered data, we remove low kx modes while leaving the low ky modes unfiltered.

The final contribution to our noise covariance is the instrument specific term, Ñnoise, which is comprised of atmospheric signal and readout noise. To compute these contributions, we take advantage of the fact that our composite map is made of many hundreds of individual observations, each of which contains a combination of celestial signal, which remains constant, and noise, which varies with each realization. We create our instrument specific noise by adding +/- map pairs to create jackknife maps, which only contain noise. We create hundreds of jackknife maps by multiplying one map of each pair by +1 and the other by -1 and adding them together. We take the PSD of each of the jackknife maps, add them in quadrature, and multiply by the correct normalization factor to create the final instrument specific noise PSD. Figure 5.4 shows a left-minus-right jackknife map for an entire season, which similarly subtracts signal and represents the noise left in our maps in the form of atmosphere and readout noise.

Figure 6.2: The PSD of the CMB. Filtering along the kx direction is the same as that applied to the SZ template.

Figure 6.3 shows the SPT specific noise PSD on the same scale as the SPT filtered CMB power spectrum. The jackknife maps' noise contributions have already undergone the timestream processing and beam filtering that we had to model and apply to the other noise terms. Therefore, we create our final noise covariance, Ñ, as Ñ = (ÑCMB + ÑSZ + ÑPS) ∗ T̃ + Ñnoise, where T̃ is the beam and filtering transfer function. Adding all components together, we construct the total noise covariance matrix as shown at the top of Figure 6.4.
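As a rough Python sketch of how the instrument-plus-atmosphere term and the final covariance could be assembled (normalization factors and windowing are glossed over, and all names are hypothetical):

    import numpy as np

    def instrument_noise_psd(obs_maps, seed=0):
        """Estimate the instrument + atmosphere noise power spectrum from
        jackknife maps: difference random pairs of single-observation maps so
        that the fixed sky signal cancels and only noise remains."""
        rng = np.random.default_rng(seed)
        order = rng.permutation(len(obs_maps))
        psds = []
        for i, j in zip(order[0::2], order[1::2]):
            jack = obs_maps[i] - obs_maps[j]          # a (+1, -1) pair
            psds.append(np.abs(np.fft.fft2(jack)) ** 2)
        # Average the jackknife PSDs; the overall normalization (pixel area,
        # number of observations, factor from differencing) is omitted here.
        return np.mean(psds, axis=0)

    def total_noise_covariance(n_cmb, n_sz, n_ps, transfer, n_noise):
        """Diagonal Fourier-space noise covariance: celestial terms are
        multiplied by the beam-and-filtering transfer function, while the
        jackknife-derived term is already filtered."""
        return (n_cmb + n_sz + n_ps) * transfer + n_noise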

6.4 Matched Filter Construction

For each value of θcore, we construct a filter by inserting the total noise covariance and the

PSD of the source template into Equation 6.1. The source template and noise covariance

are compared at the top of Figure 6.4, and the resultant matched filter is shown at the bottom of the same figure. The SZ signal is high relative to the noise in our maps near l of a few thousand. This is because clusters are typically small, and the power in the CMB has fallen off considerably there because of Silk damping. The filtering of the SZ template eliminates a good deal of its power, but a considerable amount of SZ power still exists at high l where the noise has dropped. These simple one-dimensional plots illustrate the step-by-step construction of the matched filter. All plots shown are for the scan direction where low l modes are suppressed. A generalized two-dimensional matched filter is shown in Figure 6.5.

Figure 6.3: Fourier transform band power for the atmospheric plus instrument noise plotted along with the filtered CMB power. Note that the traditional l(l+1)/2π normalization is not applied to the CMB power spectrum.
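In the Fourier-diagonal approximation, Equation 6.1 reduces to a simple per-mode expression. A hedged Python sketch (hypothetical names, 2-D arrays of Fourier modes) is:

    import numpy as np

    def matched_filter(template_ft, noise_cov):
        """Fourier-space matched filter of Equation 6.1 for one beam- and
        filtering-convolved cluster template, assuming the noise covariance
        is diagonal and equal to the 2-D noise power spectrum."""
        numerator = np.conj(template_ft) / noise_cov
        norm = np.sqrt(np.sum(np.abs(template_ft) ** 2 / noise_cov))
        return numerator / norm

    def filter_map(sky_map, template_ft, noise_cov):
        """Apply the matched filter to a map; cluster candidates then appear
        as the most significant decrements in the filtered map."""
        psi = matched_filter(template_ft, noise_cov)
        return np.fft.ifft2(np.fft.fft2(sky_map) * psi).real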

Figure 6.4: The construction of the matched filter. These plots illustrate how we construct the matched filter by dividing the signal template by the total noise power. We illustrate this in the scan direction, where the filtering cuts out all the large scale power. The non-scan direction (ky for kx = 0) plots are not informative because all of the large scale power has been removed, so the plots look like flat lines. Both plots are for a cluster with β = 1 and θcore = 1 arcminute. Top: A comparison of the total noise covariance with a time stream filtered β profile signal template. Middle: Same as top, but with a logarithmic y-axis. Bottom: The matched filter for a fixed β = 1 and θcore = 1 arcminute.

Figure 6.5: The construction of the 2-D matched filter for a β = 1, 1 arcminute cluster. Top: The Fourier space isothermal β profile for β = 1 and θcore = 1 arcminute, with beam smoothing and filtering taken into account. Middle: The 2-D noise covariance including all noise terms. Bottom: The 2-D matched filter. The low kx modes are filtered for all ky, resulting in the stripe down the middle of all three panels. The remainder of the power in the matched filter is radially symmetric.

Figure 6.6: The previously known cluster candidate RXCJ0516.6-5430 from Staniszewski et al. 2008 as filtered by matched filters that are tuned to pick out different size clusters. This figure illustrates that the signal to noise of a cluster candidate is increased when the filter is tuned to pick out a cluster with the correct size. We used a simplified filter to create this plot that does not diverge at zero cluster size. Note that the sizes shown here do not represent the range of filters used to detect the candidates presented in Staniszewski et al.

6.5 Matched Filter Application

The effects of the matched filter on the maps are illustrated in Figure 6.6, which shows the elimination of most of the CMB signal and the relative amplification of the cluster signal. The overall noise properties of the image change after being filtered, and candidates present themselves as dark spots in the filtered maps. Candidates are chosen by searching the matched-filtered maps for the darkest spots relative to the noise in the filtered maps.

We assess the significance of a cluster detection by dividing the darkest point in the matched-filtered map by the standard deviation of the pixel values away from the cluster in that same map. We account for the small variation in map noise as a function of elevation by estimating the noise in 1.5 degree bands at the same elevation as the cluster candidate. The variation in noise arises from our scan strategy, which causes us to spend fractionally less time per square degree at lower elevations, where the scan is longer for the same azimuth range. Because we filter the map with the array of matched filters, each candidate has a significance value for each filtered map. We take the highest significance value for each candidate, and label the value of θcore which maximized the detection as its best fit θcore.

As mentioned in Section 6.3, point sources add noise power at small angular scales. The brightest of these point sources, however, can be seen in our maps and result in false detections if not dealt with properly. These sources create ringing in the matched filter maps, with dark, negative peaks surrounding the point source. To avoid these false candidates, we search for them using a simplified matched filter, and mask them out before applying the matched filter used to find clusters. We mask out pixels within a 2 arcminute radius of the point source, and set all of these pixels equal to the average value just outside the mask radius. In addition, we ignore any cluster candidates found by the matched filter if they are within 8 arcminutes of a detected point source.

6.6 First Results

We now present the four most significant detections in our 40 square degree field using the matched filter approach described in the previous sections. These detections represent the first discoveries by an SZ survey[41]. Because we search with an array of different filters, the number of effectively independent pixels varies, so false positive rates are best investigated through simulations. To assess the likelihood of false detections, we run the cluster finder on a series of simulated maps containing models of the CMB, unresolved point sources, atmospheric noise and instrumental noise, and diffuse SZ. We search for clusters in the array of fourteen filtered maps with β = 1 and θcore running from 0.25' to 3.5' in 0.25' steps. Because these maps contain no SZ, any detection is a false one. In the real SPT maps we identified four candidates with S/N > 5.5. Based on simulations, we have shown that there is a ∼2% chance that the lowest significance detection is false. That is, in every one hundred 40 square degree simulated maps, there are two occurrences where we have a false detection at or above the lowest significance of S/N = 5.5. The chance that any of the other three candidates is false is much smaller than 1%. These four cluster candidates were found using the 150 GHz data only. We use the

Figure 6.7: Images of four galaxy clusters found in the SPT SZ survey. In each panel, the region shown is a 20 by 20 arcminute box centered on the cluster. All images are oriented with north up and east to the left. In the top row, we show the beam-smoothed 150 GHz map, and the scale has units of μKCMB. The lower three rows show the 150, 95 and 225 GHz maps filtered using a β = 1 model with the θcore listed in Table 6.6. The ringing on either side of the cluster in the 150 GHz filtered maps is an artifact of the filtering. The scale gives detection significance in σ. The detections at 95 GHz range in significance from 2.5σ to 3.9σ (see Table 6.6) and provide supporting evidence for the 150 GHz cluster detections. Our 225 GHz maps are consistent with noise at these four locations (see Table 6.6 and Subsection 6.6), providing another cross check that the data are consistent with SZ sources. The first cluster shown here, SPT-CL 0517-5430, was previously identified in the REFLEX X-ray cluster survey, in which it is identified as RXCJ0516.6-5430, and in the Abell supplementary southern catalog, in which it is identified as AS0520. Plot taken from Staniszewski et al. 2008.

ID                 R.A.      decl.     SNR (150)  SNR (95)  SNR (225)  best θcore  y0 × 10^4
SPT-CL 0517-5430   79.144    -54.506   -8.8       -3.4      -0.4       1.5         0.97 ± 0.13
SPT-CL 0547-5345   86.650    -53.756   -7.4       -3.9       1.9       0.5         1.31 ± 0.21
SPT-CL 0509-5342   77.333    -53.702   -6.0       -3.4       0.1       1.25        0.67 ± 0.12
SPT-CL 0528-5300   82.011    -52.998   -5.6       -2.6      -1.3       0.5         1.00 ± 0.19

Table 6.1: Cluster detections, their positions, best fit θcore, significance, and y0. Here, R.A. and decl. are in units of degrees (J2000). The value of θcore reported here is that which maximized the SNR of each cluster in the filtered 150 GHz maps (out of 14 values of θcore in steps of 0.25) and should be interpreted only as a rough measure of that cluster's angular scale. The value of y0 reported for each cluster is the value in the 150 GHz map filtered at the best value of θcore. The uncertainty on the value of y0 is calculated for θcore fixed at the best value. Once we fix β and θcore, y0 is the only remaining free parameter, so the fractional uncertainty on y0 is simply equal to the inverse of the 150 GHz signal-to-noise ratio in quadrature with the 150 GHz calibration uncertainty. Table from Staniszewski et al. 2008.

95 and 225 GHz maps to test that the detections are consistent with an SZ spectrum. The matched-filtered 95 and 225 GHz maps shown in Figure 6.7 provide visual evidence that these candidates are consistent with being the result of the SZ effect. Our 95 GHz maps are considerably noisier than our 150 GHz maps, which leads to the marginal detection significances. Yet, the 95 GHz maps do show decrements that are in good agreement with the 150 GHz detection locations. The 225 GHz filtered maps show that the detections in the center of the tiles are inconsistent with being CMB and instead show agreement with the SZ spectrum null. The highest significance cluster in our field was previously reported in the ROSAT-ESO Flux Limited X-ray (REFLEX) survey[57], and is identified as RXCJ0516.6-5430. The three remaining clusters are new discoveries.

6.6.1 Optical Confirmation and X-ray Counterparts

The most significant detection is of the previously known X-ray cluster, RXCJ0516.6-5430, from the REFLEX survey. REFLEX survey clusters were found by correlating ROSAT All-Sky Survey (RASS)[58] sources with galaxies from optical data. It is important to

point out that two other SPT candidates have RASS Faint Source counterparts (RASS-FSC, [59]). They were not included in the REFLEX survey and had not been identified as cluster candidates. This implies that they had enough X-ray flux to be marginally detectable, but were not bright enough to be considered interesting. Their X-ray confirmation bolsters the case that these are indeed clusters and demonstrates the power of the SZ to discover clusters regardless of their redshift. The SPT candidates, SPT-CL 0547-5345 and SPT-CL 0509-5342, lie within 1 arcminute of their corresponding RASS-FSC objects.

From the conception of the SPT, we have partnered with the Blanco Cosmology Survey (BCS) so that they could provide deep, multi-band optical images in regions where the two surveys overlap. The BCS is located on the Blanco 4m telescope and operates in the griz bands. It uniformly imaged ∼47 square degrees in its 60 night NOAO survey program between 2005 and 2008. Figure 6.8 shows BCS images of the four SPT cluster candidates. These are pseudo-color images which combine data in the gri bands. The environment in galaxy clusters typically results in a lack of star formation, making cluster members appear redder than galaxies not in clusters. In addition, galaxies appear with similar colors if they are very near in redshift, making them easy to detect by eye in these images. In each of the BCS images, we look for strong lensing effects, or similarly colored galaxies near each other and near the SPT detection. Below are comments on individual clusters where visual evidence suggests their optical confirmation. See Staniszewski et al. 2008[41] for further description of the optical evidence from the BCS data.

SPT-CL 0517-5430: This cluster is the previously known cluster and has been included in the Abell supplementary southern cluster catalog[60] and the REFLEX catalog [57]. It is at redshift z ∼ 0.3. The strong red sequence population, arising from the lack of star formation, is apparent in the images. The dominant elliptical galaxy is 0.5 arcminutes to the northeast of the SPT position (see Figure 6.8).

SPT-CL 0509-5342: The image of this candidate shows two or three strong lensing arcs surrounding the central dominant elliptical galaxy that is ∼0.25 arcminutes to the southeast of the SPT position. The image also shows a large population of less luminous galaxies with the same color as the central galaxy. The BCS team estimates the redshift of this cluster to be z ∼ 0.4.

SPT-CL 0528-5300: This higher redshift cluster is much fainter in the optical, and the objects appear orange in Figure 6.8. The objects are also more concentrated on the sky. The optical positions of the cluster objects suggest a position ∼0.4 arcminutes east of the SPT position. The BCS team estimates the redshift of this cluster to be z ∼ 0.8.

SPT-CL 0547-5345: This cluster is a very high-redshift system, and shows up very faint and red in Figure 6.8. The optical data suggest the core is located ∼0.25 arcminutes to the southeast of the SPT position. A number of galaxies are detected but are too faint to show up in the figure, suggesting a cluster at the edge of the detection limit for the BCS. There also appears to be a strong lensing arc near the SPT circle to the southwest of the SPT position. The BCS team estimates a lower limit on this cluster's redshift of z ∼ 0.9.

Figure 6.8: Pseudo-color optical images of the galaxy distributions toward the SPT clusters. The SPT position is marked with a 1-arcminute-diameter green circle. Populations of early-type galaxies with similar color and central giant elliptical galaxies are found to lie within 0.5 arcminutes of the SPT position of each system. Gravitational lensing arcs are apparent near the central galaxy in SPT-CL 0509-5342 and to the bottom-right of the cluster core in SPT-CL 0547-5345. The REFLEX position for SPT-CL 0517-5430 / RXCJ0516.6-5430 is indicated with a blue circle in the upper left panel, as are the positions of the possible RASS counterparts for SPT-CL 0547-5345 and SPT-CL 0509-5342 in their respective images. The diameter of each blue circle is equal to the positional error given for that source in the RASS Faint Source Catalog. Plot from Staniszewski et al. 2008.

Altogether, the BCS optical data show evidence for cluster members within 0.5 arcminutes of the SPT position for each of the four clusters. Strong lensing arcs and visually detectable concentrations of similarly colored objects strongly suggest the existence of galaxy clusters.

6.7 Conclusions

We have presented the detection of four galaxy clusters in a blind SZ survey. Three of these clusters are new detections, and all have been optically confirmed as galaxy clusters. We have laid out the observations, data reduction and cluster extraction method used to achieve these first results. These detections are the first discoveries of clusters with an SZ experiment. These results are only the most significant detections and come from only forty square degrees out of the many hundreds of square degrees already observed with equal or better sensitivity. A cluster catalog for the entire data set, based on the methods described here, should provide interesting cosmological constraints in the near future.

Chapter 7

MCMC Cluster Finder

While the matched filter works and has already been used to produce the first scientific results for SPT, there are some drawbacks to the method that could be improved with an alternate cluster finder. When using the matched filter approach, we search the maps with an array of filters optimized for clusters of different sizes. The method we propose here combines detections of different size clusters in a more elegant fashion. In addition, the matched filter maximizes the signal to noise of the cluster detection by filtering out noise components, which does not necessarily preserve the information needed to best constrain the cluster shape or characteristic size. In this chapter, we describe a search mechanism based on Markov Chain Monte Carlo

(MCMC) methods. It has the power to simultaneously locate clusters and constrain cluster parameters, providing estimates and errors for physical cluster features such as core radius, central decrement and integrated flux.

7.1 Introduction

We enlist a Bayesian approach, first suggested by Hobson et al[61], to recast our cluster

finding into an inverse problem. We assume that a cluster is in our map and find the set of parameters that best fits the data. The cluster finding problem then becomes one of

characterizing the posterior distribution in the multi-dimensional space. Bayesian methods require that we come up with an expression for the likelihood, L, that our data, D, have arisen from an underlying model with parameter values, A. We use Bayes' theorem to write the posterior, P(A|D), in terms of the likelihood, P(D|A), our prior knowledge of the values of our parameters, P(A), and a normalization factor, P(D):

\Pr(A|D) = \frac{\Pr(D|A)\,\Pr(A)}{\Pr(D)} \qquad (7.1)

The posterior values tell us how likely it is that a set of parameters is correct. Because the underlying models have multiple free parameters, the posterior defines a multi-dimensional space. Maxima occur for sets of parameters that accurately describe the cluster locations and shapes. Markov Chain Monte Carlo (MCMC) techniques are efficient ways to explore multi-dimensional likelihood distributions. They aid our search in two significant ways. First, they provide a more efficient way to sample higher dimensional spaces than a grid search. Second, the density of MCMC samples converges to the posterior. This implies that cluster candidates exist in regions with dense samples, which point toward the best fit cluster parameters.

7.2 Algorithm

We choose to search for clusters in a four dimensional parameter space, A = A(RA, δ, θcore, S0), which includes cluster right ascension, declination, characteristic size (θcore), and central decrement, S0. Each of the four parameters is taken as continuous, so that the cluster's central decrement does not need to fall at the center of a pixel. This results in a rather large parameter space to explore. We search for clusters with an isothermal β profile of the form

s(A) = S_0 \left(1 + \frac{|\bar{\theta} - \theta_0|^2}{\theta_{core}^2}\right)^{-(3\beta - 1)/2}, \qquad (7.2)

where |\bar{\theta} - \theta_0| = \sqrt{(\overline{RA} - RA_0)^2 + (\bar{\delta} - \delta_0)^2}.

Figure 7.1: The isothermal β profile (β = 1) for θcore = 1, 2, and 4 arcminutes.

We could simultaneously vary the amplitude, θcore, and β; however, we choose not to, as β is highly degenerate with θcore. We already learned with the matched filter technique that the template type did not matter much as long as we vary θcore over a wide range (see Section 6.2). For this reason, we choose an isothermal β profile with β = 1. Figure 7.1 illustrates the shape of the β profile for β = 1.

7.2.1 Likelihood Expression

In order to search in the parameter space for the cluster model that best fits our data, we need to evaluate the likelihood many times. We start with the likelihood written as

\mathcal{L} = P(D|A) = \frac{\exp\left(-[D - s(a)]^{t} N^{-1} [D - s(a)]\right)}{(2\pi)^{N_{pix}/2}\, |N|^{1/2}}, \qquad (7.3)

where D is the SPT map, s(a) is the cluster profile, and N^{-1} is the inverse of the pixel-pixel noise covariance that describes the shape of all of the noise components. This noise covariance encodes the same information as described in Section 6.3, namely, CMB, atmospheric and instrumental noise.

Our nominal map has Npix ≈ 10^6, and the size of the pixel-pixel noise covariance goes as Npix^2. Each likelihood calculation involves matrix multiplication with this large, non-diagonal matrix, so the cost of these operations makes the evaluation of the likelihood unfeasible in real space. We choose instead to work in the Fourier domain, where the noise covariance is diagonal just as in Section 6.3. Priors are easily added within this formalism. We use two-part priors on all parameters that are flat within their physically acceptable ranges and exponentially suppressed outside that range, by adding to the log likelihood a penalty equal to a large factor (1000) times the difference between the proposed step's parameter value and the value at that edge of the prior. This steep exponential suppression implies that we will not explore parameter space outside the prior ranges. It is used in lieu of a separate function that evaluates whether a step is acceptable or not, and serves to push the chain back into the acceptable range when it steps beyond the boundary; a subsequent step in the correct direction will be exponentially favored compared to one further outside our prior range. We allow θcore to run from 0 to 7 arcminutes, the cluster amplitude is allowed to range from 50 to 800 μK, and cluster positions are not allowed within 10 arcminutes of the edges of the maps so we can avoid edge effects. For the same reasons outlined in Section 6.3, we reduce the computational burden by computing everything in the Fourier domain. The noise covariance matrix is therefore diagonal, with the diagonal elements computed as for the matched filter in Section 6.3.

The likelihood in Fourier space is

P(\tilde{D}|A) = \frac{\exp\left(-[\tilde{D} - \tilde{s}(a)]^{t} \tilde{N}^{-1} [\tilde{D} - \tilde{s}(a)]\right)}{\pi^{N_{pix}}\, |\tilde{N}|}, \qquad (7.4)

where the tilde above each symbol represents the Fourier transform. In practice, it is much easier to calculate the log likelihood, dropping the constant normalization factor in the denominator of Equation 7.4 because we only compare the difference between two log likelihoods,

\mathcal{L} = [\tilde{D} - \tilde{s}(a)]^{t} \tilde{N}^{-1} [\tilde{D} - \tilde{s}(a)]. \qquad (7.5)

Defining d˜ = D˜ − s˜(a), Equation 7.5 becomes

\mathcal{L} = \tilde{d}^{t} \tilde{N}^{-1} \tilde{d}, \qquad (7.6)

where the diagonal-only sum is computed as

\mathcal{L} = \sum_{i=1}^{N_{pix}} \tilde{d}_i \tilde{N}_{ii}^{-1} \tilde{d}_i. \qquad (7.7)
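A compact Python sketch of this per-step evaluation, with the cluster model built from the β profile of Equation 7.2 and all names hypothetical, is:

    import numpy as np

    def beta_model_ft(shape, reso_arcmin, x0, y0, s0, theta_core, transfer, beta=1.0):
        """Fourier transform of an isothermal beta-model cluster (Eq. 7.2)
        centered at pixel (x0, y0), multiplied by the beam-and-filtering
        transfer function so it matches what the map would contain."""
        y, x = np.indices(shape)
        r = np.hypot(x - x0, y - y0) * reso_arcmin            # radius in arcminutes
        profile = s0 * (1.0 + (r / theta_core) ** 2) ** (-(3.0 * beta - 1.0) / 2.0)
        return np.fft.fft2(profile) * transfer

    def log_likelihood(map_ft, model_ft, noise_diag):
        """Equation 7.7: sum_i d_i * Ninv_ii * d_i with d = data - model in
        the Fourier domain and a diagonal noise covariance. map_ft and
        noise_diag are computed once per sub-map; only model_ft changes
        between MCMC steps."""
        d = map_ft - model_ft
        return float(np.sum(np.abs(d) ** 2 / noise_diag))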

7.2.2 MCMC Sampler

We employ the MCMC algorithm to efficiently sample our multi-dimensional space. Monte Carlo methods allow us to randomly walk through parameter space, where we can evaluate the likelihood at each step in the chain. Markov chains are a special type, where the density of points sampled in parameter space converges to the overall posterior distribution. Therefore, regions in parameter space that have been densely explored are better estimates of the parameters we are trying to constrain.

With our likelihood expression in hand, we implement an MCMC search by creating a random walk algorithm along with an acceptance criterion that ensures that the density of points will converge to the underlying probability distribution. There are a few traditional stepping criteria that ensure one will converge to the posterior.

Our MCMC stepper draws the proposed step parameters from a set of Gaussian distributions centered on the current step parameters and with widths equal to the characteristic step sizes. We use the Metropolis-Hastings criterion for accepting or rejecting proposed steps, which depends on the difference in the log probability values at the two locations in parameter space. If L(Anew) > L(Acurrent), then we accept the new location and repeat. If

L(Anew) < L(Acurrent), then we accept this step with the probability

p = \frac{\mathcal{L}(A_{new})}{\mathcal{L}(A_{current})}. \qquad (7.8)

This stepping is repeated many times until we feel that the space has been explored sufficiently. A common rule of thumb is to set the step sizes such that there is a rejection rate of roughly 40-60%; this criterion generally leads to efficient exploration of the parameter space, and making the step sizes larger generally increases the rejection rate. It takes a certain number of steps before the search chain effectively forgets about its starting position and reaches equilibrium. This period is called burn-in. In our algorithm, we have not built in any rigorous tests for burn-in. Instead, we let two chains run for considerably longer than we think is necessary, and compare the two chains to pick a time at which both began to look similar. More thorough studies of burn-in and convergence could help us optimize the search and decrease the amount of time required to search a given patch.
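A bare-bones Python sketch of such a random-walk chain, written with the textbook acceptance rule (a less probable step is accepted with probability equal to the ratio of posterior probabilities); the function names and the form of log_prob are assumptions, not the thesis code:

    import numpy as np

    def mcmc_chain(log_prob, start, step_sizes, n_steps, seed=0):
        """Metropolis-Hastings random walk: propose Gaussian steps about the
        current parameters and accept or reject based on the change in the
        log posterior probability."""
        rng = np.random.default_rng(seed)
        current = np.asarray(start, dtype=float)
        current_lp = log_prob(current)
        samples = []
        for _ in range(n_steps):
            proposal = current + rng.normal(scale=step_sizes)
            proposal_lp = log_prob(proposal)
            # Always accept an improvement; otherwise accept with
            # probability exp(proposal_lp - current_lp).
            if np.log(rng.uniform()) < proposal_lp - current_lp:
                current, current_lp = proposal, proposal_lp
            samples.append(current.copy())
        return np.array(samples)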

7.3 Implementation

Inverse problems are attractive in the simplicity of their approach. One parameterizes a model and creates a likelihood expression and then finds a set of parameters that maximizes the likelihood. In general, most of the work is in getting the correct representation of the likelihood function. The rest of the work is done by a computer which is set off to evaluate a large number of likelihoods. In our implementation of the cluster finder, there are a few major details that should be highlighted. The first set is concerned with reducing the time it takes to complete the full cluster searching. The remainder are concerned with evaluating the likelihood accurately.

109 7.3.1 Computational Considerations

The drawback of the MCMC method is that it is time consuming. Currently, we can search ∼3 square degrees in 16 hours on one processor. We will point out in Section 7.6 ways that may vastly improve its efficiency. Without speed improvements, this method is still tractable for searching for clusters in the SPT data set. A factor of 10 improvement would enable a more thorough investigation with thousands of square degrees of simulated data. The computation time for this algorithm is completely dominated by evaluating the likelihood many hundreds of thousands of times. The inverse noise covariance and the Fourier transform of the data are independent of the parameters, so they can be computed once and stored in memory. The computation time for each step is therefore dominated by two calculations. The first is placing a cluster in real space and taking its Fourier transform, and the second is evaluating Equation 7.7, which is essentially matrix multiplication and summing over the elements in the final product.

The Fourier transform at each iteration is an Npix log(Npix) operation; the other operations scale roughly as Npix. It should be noted that it is possible to use an analytical form for the Fourier transform of certain cluster models, but we have not yet implemented this optimization. Figure 7.2 shows the computation time scaling for the algorithm. This scaling drives us to break up the large map into many small sub-maps to make the total computation time tractable. In practice, we choose to analyze 2.5 × 10^5 pixel (3 square degree) sub-maps. We do not search near the edges of maps, so there is a loss of efficiency that begins to get large as we make the sub-maps smaller. These smaller maps are a compromise, but are in no sense an optimization of total computational time. It is difficult to parallelize individual MCMC chains because each successive step depends on the previous one. Instead, we parallelize by searching many sub-maps at the same time, which offers considerable improvement when working on machines with multiple cores. We adopt a hybrid of different algorithms suggested in Hobson et al.[61] to make the

algorithm faster. We search for clusters iteratively, assuming that the data include only one cluster at a time. We do not implement the simplex or downhill search algorithms that they suggest. Each time a cluster is found, we mask it out and search again until our evidence criterion falls below a threshold described below. Large cluster signals can dominate the MCMC steps, so masking them out after locating them speeds up searching the rest of the parameter space. We run the cluster finder iteratively until we have found every object within the small three degree map above a predetermined threshold. This implies that we do full 1 million point MCMC searches two or more times in all fields where there is any significant detection.

Figure 7.2: The time it takes per likelihood evaluation as a function of the size of the map. The data fit Npix log(Npix) very well. For a 512 x 512 map, we typically let the MCMC run for 1 million iterations.

7.3.2 Evaluation Details

We mask point sources so that we can compare our results to those obtained with the matched filter, which could have masked a cluster because of its proximity to a point source. We do so using the same algorithm that is used in the matched filter: any point source found as a 5 sigma detection with a basic matched filter is flagged and masked. The masking sets all pixels within a 2 arcminute radius to the average value of the pixels bordering the mask. In both cases, the bright end of the point source power has been assumed to be masked out, and only the faint tail of the point source distribution has been left in the estimate of the noise covariance. We also include the effects of the beam and filtering on the source template before inserting it into the likelihood calculation. We include these details in exactly the same way as was done in Section 6.2.

By shrinking the size of the maps, we are required to shrink the noise covariance matrix so that the matrices in Equation 7.7 have matching sizes. Recall that the noise covariance was estimated from the large series of 100 square degree jackknife maps (see Section 6.3). We also need to shrink the beam shape transfer function and the filtering transfer function, which were first estimated in the large-map space. Shrinking Ñ, F̃_beam, and F̃_filter involves re-sampling these matrices and results in some loss of information. The re-sampled noise covariance mis-estimates the power on extremely large scales, but the signal-to-noise of the cluster is low there, so this effect is not important. The effect of interpolation on the noise covariance is illustrated in Figure 7.3.

Figure 7.3: A cut through the noise covariance, Ñ, at a fixed k_y (original and interpolated; the interpolated covariance is over-plotted in blue). A similar interpolation is used to generate the beam function and filtering function.
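The re-sampling of these Fourier-space quantities onto the sub-map grid can be done with a simple interpolation, as in the hedged sketch below. The grid conventions, the use of linear interpolation, and all names are assumptions made for illustration rather than a description of the actual pipeline.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def resample_fourier_quantity(big, n_small):
    """Re-sample a Fourier-space quantity (noise covariance, beam, or filter
    transfer function) estimated on a large map onto the coarser Fourier grid
    of a smaller sub-map with the same pixel size.  Both grids span the same
    wavenumber range; the sub-map grid simply samples it more sparsely."""
    n_big = big.shape[0]
    k_big = np.fft.fftshift(np.fft.fftfreq(n_big))        # per-pixel wavenumbers
    k_small = np.fft.fftshift(np.fft.fftfreq(n_small))
    interp = RegularGridInterpolator((k_big, k_big), np.fft.fftshift(big),
                                     bounds_error=False, fill_value=None)
    KY, KX = np.meshgrid(k_small, k_small, indexing="ij")
    pts = np.column_stack([KY.ravel(), KX.ravel()])
    small = interp(pts).reshape(n_small, n_small)
    return np.fft.ifftshift(small)

# toy example: shrink a 2048x2048 covariance estimate to a 512x512 sub-map grid
rng = np.random.default_rng(7)
big_cov = np.abs(rng.normal(size=(2048, 2048))) + 1.0
print(resample_fourier_quantity(big_cov, 512).shape)
```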

7.3.3 Cluster Candidates

We run the MCMC chains for the required length and then search the log-likelihood values for a maximum. Although the Bayesian evidence would be the more desirable statistic, we did not evaluate it for the cluster candidates because it is complicated to implement. Instead, we use a Bayesian search and calculate the frequentist evidence factor

Ecandidate = exp(Lcand) / exp(L0) = exp(Lcand − L0) .   (7.9)

The log-likelihood value for a zero-cluster model, L0, is calculated by placing a cluster model with zero amplitude in our map and evaluating Equation 7.7. In the future we expect to implement a selection criterion in terms of the Bayesian evidence.
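As a small illustration, the evidence factor of Equation 7.9 is just the exponential of the difference of the two log-likelihoods; the numerical values and the acceptance threshold below are placeholders.

```python
import numpy as np

# Placeholder log-likelihood values; in the finder, logL_null comes from
# evaluating Equation 7.7 with a zero-amplitude cluster template.
logL_candidate, logL_null = -1200.0, -1230.5

# Equation 7.9: the ratio of the candidate and no-cluster likelihoods.
E_candidate = np.exp(logL_candidate - logL_null)

threshold = 1.0e3   # hypothetical acceptance threshold
print(E_candidate, E_candidate > threshold)
```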

7.4 Simulated Maps

We test our algorithm on a series of fake maps to characterize the completeness and purity of both cluster finding algorithms [62]. A series of 100 maps was generated for each component that comprises the final SPT map, including CMB, SZ, point sources, and atmospheric plus instrument noise. The CMB maps are realizations of a WMAP cosmology [54], and the SZ maps are produced by populating N-body dark matter simulation halos with gas and then converting gas pressure into SZ brightness [62]. The point source maps are generated to reproduce the power spectrum in Borys et al. [55], and the atmosphere plus instrument noise maps are generated from white noise data whose spectral shapes are modified to match the shape of the SPT left-right jackknife power spectra. The real SPT maps have noise that varies by a ∼few percent over the observed elevation range; these maps do not include this detail.

The fake SZ maps include chance superpositions and have realistic cluster shapes driven by their merger histories. The maps are accompanied by catalogs that include cluster position, redshift, and mass. The dark matter N-body simulations are examined three hundred times between a redshift of three and zero. At each redshift step, dark matter halos are identified and cataloged using the friends-of-friends algorithm [62]. At the end of the simulation we create a light cone that simulates the observable Universe by placing the observer at one corner of the simulation at z = 0. Each redshift slice further from the observer is created from the already-cataloged simulation as it was at that redshift bin. These catalogs are essential to test the completeness and purity of the cluster finder. Initial studies using the matched filter approach show that the maps are similar to the real survey data. The atmospheric plus instrument noise map is created to include the effects of time-stream filtering, and to have noise power spectra identical to those discussed in Section 6.3.

We apply the filter functions to, and convolve the beam with, the fake SZ, CMB, and point source maps before adding them. Figure 7.4 shows the unfiltered CMB, SZ, and point source maps, as well as their filtered counterparts. It also shows the SPT-specific noise contribution and, finally, the composite map which we use to search for clusters.
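A minimal sketch of how such a composite map could be assembled is shown below. The component amplitudes, the flat transfer function, and the noise level are all stand-ins chosen only to make the example run; the real components come from the simulations described above.

```python
import numpy as np

def filter_component(component, transfer):
    """Apply the beam and time-stream filtering to one sky component by
    multiplying by a combined transfer function in Fourier space."""
    return np.real(np.fft.ifft2(np.fft.fft2(component) * transfer))

n = 512
rng = np.random.default_rng(4)
cmb = rng.normal(size=(n, n)) * 7e-5          # stand-ins for the simulated components
sz = -np.abs(rng.normal(size=(n, n))) * 1e-5
point_sources = np.zeros((n, n))
transfer = np.ones((n, n))                    # stand-in beam*filter transfer function
noise = rng.normal(size=(n, n)) * 2e-5        # stand-in atmosphere + instrument noise

composite = (filter_component(cmb, transfer)
             + filter_component(sz, transfer)
             + filter_component(point_sources, transfer)
             + noise)                          # the noise map already carries the filtering
print(composite.std())
```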

7.5 Performance

We now describe the cluster finder performance in terms of cluster identification and parameter extraction. All of the results described here are for the simulated data described in Section 7.4.

7.5.1 Cluster Identification and Performance

In Subsection 7.3.3 we described how we identify candidates. The SZ-only map in Figure 7.4 shows that there are many fairly bright clusters in the map. Figure 7.5 shows results from one of our searches in which we successfully locate a cluster. Here, the entire parameter space is explored, and the density of points is peaked very strongly near the brightest cluster, as is the magnitude of the log-probability distribution.

Figure 7.4: Raw SZ maps from the simulations. Each dark matter halo has had gas put in and has been converted to SZ brightness. Most of the SZ signal is not strong enough to detect, but instead creates a diffuse SZ background.

Figure 7.5: The results of a one-million-point MCMC search for cluster candidates. The MCMC successfully locates the most massive cluster candidate in the map. Here we have searched the composite map shown in Figure 7.4. The highest log-probability value is located on the massive cluster. While other candidates line up with the correct positions of clusters in the SZ map, they are below the zero false-positive rate and are not counted as detections.

After searching the 100 square degree map for clusters, there is one universal value of Ecandidate for which the number of detections and the number of false detections is nearly identical to the 5 sigma matched filter results. After running a series of these tests on fake maps, we have begun to characterize the performance of our algorithm. Preliminary tests show that it is competitive with the matched filter technique with respect to completeness and purity. We need to perform more tests over many hundreds of square degrees to characterize completeness thoroughly enough to tell whether it outperforms the matched filter.
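One way to quantify completeness and purity is to match detections against the simulation catalogs, as in the toy sketch below. The matching radius and the toy positions are assumptions; the real comparison would be run over the full 100 square degree catalogs.

```python
import numpy as np

def match_to_catalog(detections, catalog, match_radius_arcmin=2.0):
    """Associate each detection with the nearest catalog cluster; a detection
    with no catalog entry within `match_radius_arcmin` counts as false."""
    matched = 0
    for det in detections:
        d = np.hypot(catalog[:, 0] - det[0], catalog[:, 1] - det[1])
        if d.min() <= match_radius_arcmin:
            matched += 1
    completeness = matched / len(catalog)     # fraction of true clusters recovered
    purity = matched / len(detections)        # fraction of detections that are real
    return completeness, purity

# toy positions in arcminutes; real runs use the simulation catalogs
catalog = np.array([[10.0, 12.0], [40.0, 55.0], [80.0, 20.0]])
detections = [(10.5, 11.8), (41.0, 54.5), (70.0, 70.0)]
print(match_to_catalog(detections, catalog))
```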

7.5.2 Flux Estimates

The accuracy of any cosmological result from SPT will depend on how well we recover cluster flux and on how well it traces cluster mass. The SZ scaling of flux versus mass has been studied by several teams [18]. These studies show that SZ measurements of flux are tightly correlated with X-ray mass measurements, which gives us hope that we can constrain cosmology with an SZ-selected cluster list with SZ-measured masses. SPT will use X-ray measurements of a handful of clusters to estimate the SPT survey SZ flux versus mass relation, but will otherwise rely on its own flux measurements for mass estimates. It is therefore important to understand and optimize our flux estimates. The MCMC finder could have its biggest advantage in estimating cluster parameters, such as the cluster amplitude and θcore. The density of points in the MCMC represents the shape of the posterior distribution, and the density, by definition, peaks at the set of parameters that best fit the data. In practice, we locate the peak of the posterior in RA and Dec and plot marginalized single-parameter histograms for MCMC points near that peak. The histograms peak at the best-fit value for each parameter, and the widths of the histograms provide us with errors on the parameters. These histograms are produced for each cluster candidate using only MCMC points from the chain and do not require any further fitting. An example set of histograms is plotted in Figure 7.6 for a simulated cluster that shows up as an ∼8 sigma detection in the matched filter analysis. For a large cluster signal, all parameters are constrained well.
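A hedged sketch of this histogram-based parameter estimation is shown below; the chain is replaced by a toy Gaussian sample, and the burn-in fraction and bin count are arbitrary choices rather than the values used in the analysis.

```python
import numpy as np

def marginalized_estimate(chain, burn_in=0.2):
    """Given an (n_samples, n_params) chain, histogram each parameter after
    discarding burn-in; return the histogram peak and a 1-sigma-like width."""
    samples = chain[int(burn_in * len(chain)):]
    estimates = []
    for p in samples.T:
        counts, edges = np.histogram(p, bins=50)
        peak = 0.5 * (edges[np.argmax(counts)] + edges[np.argmax(counts) + 1])
        estimates.append((peak, p.std()))
    return estimates

rng = np.random.default_rng(5)
# toy chain: RA_pix, Dec_pix, amplitude, theta_core
chain = rng.normal(loc=[256, 256, -5e-4, 1.0],
                   scale=[0.5, 0.5, 5e-5, 0.2], size=(10000, 4))
for name, (peak, width) in zip(["RA", "Dec", "amp", "theta_core"],
                               marginalized_estimate(chain)):
    print(f"{name}: {peak:.4g} +/- {width:.2g}")
```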

Figure 7.6: Histograms are shown for each of the four parameters of the 6 × 10^14 solar mass, redshift 0.6 cluster candidate. The cluster is a relatively bright one which would show up well above 5 sigma in the matched filter. The cluster was roughly centered in the map, and the RA, Dec parameters are in terms of pixel number within the 512 × 512 map. The pixels are a quarter arcminute in size. The amplitude and θcore histograms are also shown. θcore is presented in arcminutes and peaks at roughly the same scale that maximized the detection significance for the matched filter. This plot illustrates the sizes of the errors given by the matched filter.

We can also use the MCMC cluster finder to estimate cluster parameters for cluster candidates found with the matched filter. It is possible that we will use the matched filter to find cluster candidates and then use the MCMC to quickly and more accurately estimate parameters. Running the MCMC for ∼10 minutes on a candidate, we generate histograms of the same quality as those shown in Figure 7.6. This is quick enough to run on every SPT candidate, and fast enough to characterize on a series of hundreds or thousands of fake clusters.

We devised a series of tests to quantify the MCMC method's flux reconstruction performance. In these simulations we replace the N-body SZ map with a single-cluster map whose cluster profile is a perfect β = 1 model, and for which we know the true amplitude, θcore, and flux. We generate a number of independent noise maps and characterize each cluster many times while varying the noise map. We do not test rigorously for convergence, but instead look at single-parameter histograms from an MCMC that was run for a conservatively long 10,000 points. Running multiple MCMC chains for the same cluster, we find that the histograms are unchanged, so we are confident that we have converged at this point. Figure 7.6 shows a set of cluster histograms used to estimate parameters, and Figures 7.7 and 7.8 show how well we constrain cluster parameters.

We find that we recover cluster parameters well, constraining cluster amplitude more accurately than θcore. As the cluster's flux decreases and the cluster becomes undetectable, parameter estimates become worse and the histogram-derived errors grow. Amplitude values are recovered to within 10% of their true values above M = 4 × 10^14 solar masses (see Figure 7.7), though there is a constant slight overestimate of the amplitude. θcore is more difficult to measure accurately (see Figure 7.8); each recovered value of θcore is correct to within a factor of 2 of the true value. Although the scatter is large, there is little bias in the measurement of the cluster size. Combined with the measurement of cluster amplitude, we can integrate the β profile to recover the integrated flux, which is likely to be our best measurement of mass.
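For the β = 1 profile used here, that integration has a closed form: the flux within a radius θmax of a profile ΔT(θ) = ΔT0 (1 + θ²/θcore²)⁻¹ is π ΔT0 θcore² ln(1 + θmax²/θcore²). The sketch below checks this expression numerically; the cutoff radius and parameter values are illustrative assumptions (the logarithmic growth of the integral means some outer cutoff must always be chosen).

```python
import numpy as np
from scipy.integrate import quad

def integrated_flux(amp, theta_core, theta_max):
    """Integrated flux of a projected beta=1 profile, dT(theta) = amp / (1 + theta^2/theta_core^2),
    out to theta_max.  Closed form: pi * amp * theta_core^2 * ln(1 + theta_max^2/theta_core^2)."""
    return np.pi * amp * theta_core**2 * np.log1p((theta_max / theta_core)**2)

amp, theta_core, theta_max = -5e-4, 1.0, 5.0      # K, arcmin, arcmin (toy values)
numeric, _ = quad(lambda t: amp / (1 + (t / theta_core)**2) * 2 * np.pi * t, 0, theta_max)
print(integrated_flux(amp, theta_core, theta_max), numeric)   # the two should agree
```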

7.6 Future

We have shown that the MCMC cluster finder is competitive with the matched filter approach in finding clusters, and that it is useful for characterizing cluster properties. This promising algorithm could be used to generate a catalog of clusters over the many hundreds of square degrees of SPT survey data, and could be used to estimate the cluster fluxes necessary for achieving a cosmological result.

There are, however, many improvements that can and should be made to the MCMC algorithm in the near term. We need to speed the cluster finder up by a factor of five to twenty so that we can characterize its performance over thousands of square degrees of simulated data. We also need to include small corrections that will improve the performance of the algorithm, improve detections, and improve the way we interpret them. Finally, we need to subject this cluster finder to a battery of tests so that we can more accurately compare the performance of the two cluster finders.

Figure 7.7: Average recovered cluster amplitude as a function of true cluster amplitude. We have fixed θcore at 1.05. These results are taken from a series of simulations done with perfect β model clusters as the inputs. Each data point represents 12 individual measurements of the same cluster placed on top of different noise realizations. The errors provided are derived from the standard deviation of the measured values. The MCMC recovers amplitude accurately for all clusters above a certain amplitude. Because we fixed θcore, this translates into a cutoff in flux, below which we neither detect the cluster nor recover its parameters well.

Figure 7.8: Recovered cluster θcore as a function of actual θcore. Each measurement is of a different cluster placed within a different noise realization. The central amplitude for the clusters is fixed at 2.5 × 10^-4 K, and the cluster radius is varied. Note that we fail to recover the correct value for small clusters. This is because they have small total integrated flux. For this reason, clusters at the left of this plot are not bright enough to be detected, and we do not have to worry that we are failing to constrain their parameters. While the scatter in the measured values is large, on average they track the actual value well.

7.6.1 Computational Optimizations

The MCMC cluster finder currently takes roughly 16 processor-hours to search for clusters within a three square degree patch. Computation time is currently dominated by a few basic operations that take place each time we evaluate the log-likelihood, Equation 7.5. Roughly one half of the time is spent on the Fourier transform of the β model. One quarter of the time is spent placing the β model in real space before Fourier transforming it, and one quarter of the time is spent summing over elements while evaluating the inner product in the log-likelihood calculation.

The first potential improvement is to eliminate the Fourier transform of the cluster by generating the Fourier transform of the cluster profile analytically. This would immediately cut the computation time by a factor of two. This implementation would limit us to cluster templates that have analytical Fourier transform representations. Luckily, the isothermal β = 1 profile does have an analytical solution for its Fourier transform. We also suspect that the shape of the cluster profile is not crucial as long as we allow its characteristic size to vary, so we could use a Gaussian profile as well, because its Fourier transform is trivial.

We could quickly sample the position dimensions by simply shifting the phases of the model. This is an easy way to move the cluster around, and it eliminates the costly step of populating a ∼2.5 × 10^5 pixel array with the values needed to describe the template function. If we could generate the model once and store a lookup table for the phase shifts, we could speed up this part of the calculation, which takes ∼25% of the computation time. This has been tested and works, but only for searching in three dimensions; it does not allow us to vary θcore. Variants of this idea should be pursued, but this step could become the speed bottleneck of the method if we wish to continue varying θcore.

We can also increase the speed of our algorithm by changing the way we evaluate the inner product inside the likelihood calculation. We should be able to reduce the number of elements that we sum over by a factor of two to ten. Because all of the maps start out as real-valued, there are symmetries in their Fourier transforms, and the number of unique values in the Fourier transforms should be half of the total. There are also a limited number of Fourier modes in which we have large signal to noise: at low ell, the power in the CMB is much higher than the cluster power, and at very high ells, corresponding to the quarter arcminute pixel size, we are dominated by instrumental noise. By summing over only high signal-to-noise modes, and removing the redundant information stored in the symmetric Fourier transforms, we could reduce the number of points in the sum by a large factor. Our algorithm is currently programmed in Matlab, which has a built-in, but inefficient, facility for summing over a subset of array elements: Matlab resizes the array first, which is as expensive as evaluating the sum over the entire range of values. By converting our algorithm to C, we could address the array elements directly and sum over the relevant values without the intermediate step of generating a subset array.

There are other optimizations that do not involve changing the likelihood calculation, but instead require us to change the way our MCMC searches for many clusters. We already pointed out that the cluster finder spends a large fraction of its time near large, robust detections. We currently search each area iteratively by running the MCMC to completion, masking out robust detections, and then re-running it on the same sub-map. We could instead check for large detections every ∼100,000 iterations. If we evaluate the likelihood of the data given no cluster, L0, at the beginning of each search, we could use the evidence E to perform this check. Using a conservative value for the required evidence, we could find extremely robust detections in roughly one tenth of the normal number of iterations. We would then mask out these detections and fully sample the posterior distribution far away from these candidates. This would eliminate the unfortunate behavior in which the search spends an unnecessarily large fraction of its time characterizing the most massive cluster candidate.

Our MCMC search of sub-maps lends itself naturally to parallelization, because we can search many sub-maps at the same time to build up a search of the whole map. With the proposed optimizations and the use of many processors, we should be able to search many thousands of square degrees on the timescale on which we currently search 100 square degrees. Utilizing a 32-node cluster for a week, combined with a factor of 2 to 4 speedup in the algorithm, would allow us to search nearly ten thousand square degrees in a week, whereas we can currently search one hundred square degrees in a week. With these capabilities, we are within reach of characterizing the completeness and purity of our algorithm. We can then compare our results to the performance of the matched filter cluster finder.

Hobson et al. [61] suggested optimizations based on simulated annealing and downhill maximization. They also proposed the use of a Powell-Snakes algorithm in a subsequent publication [63]. These algorithms are ways to optimize a parameter search in which there are many local maxima but only one absolute maximum which needs to be characterized thoroughly. Their proposed optimizations decrease the number of MCMC samples by a factor of 10 or 100, depending on the algorithm used. The Powell-Snakes optimization appears to be similar to a hybrid matched filter/MCMC method in which one finds all the local maxima first and starts localized MCMC searches in each of them. This was proposed and explored in Section 7.5.2. While it is attractive in its ability to characterize clusters, it no longer finds and characterizes clusters simultaneously, which is an attractive feature of the pure MCMC search. The annealing and simplex based methods suggested in Hobson et al. [61] are attractive alternatives that could be explored in the future.
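To illustrate the first two optimizations, the sketch below builds the β = 1 template directly in Fourier space, using the standard result that the two-dimensional transform of 1/(1 + θ²/θcore²) is proportional to the modified Bessel function K0(θcore k), and moves the cluster by applying a phase factor. The normalization, the treatment of the k = 0 mode, and the pixelization details are deliberately glossed over, so this is a sketch of the idea rather than a drop-in replacement for the pipeline.

```python
import numpy as np
from scipy.special import k0

def beta_template_fourier(n, pix_arcmin, ra_pix, dec_pix, amp, theta_core):
    """Build a projected beta=1 template directly in Fourier space.
    The continuous 2D transform of amp / (1 + theta^2/theta_core^2) is
    2*pi*amp*theta_core^2 * K0(theta_core * k); the cluster position enters
    only as a phase factor, so no real-space map or FFT is needed."""
    kx = 2 * np.pi * np.fft.fftfreq(n, d=pix_arcmin)     # wavenumbers in 1/arcmin
    ky = 2 * np.pi * np.fft.fftfreq(n, d=pix_arcmin)
    KX, KY = np.meshgrid(kx, ky)
    k = np.hypot(KX, KY)
    profile = np.zeros_like(k)
    nz = k > 0
    profile[nz] = 2 * np.pi * amp * theta_core**2 * k0(theta_core * k[nz])
    profile[~nz] = profile[nz].max()                     # K0 diverges at k=0; crudely cap it
    phase = np.exp(-1j * (KX * ra_pix * pix_arcmin + KY * dec_pix * pix_arcmin))
    return profile * phase

tmpl = beta_template_fourier(512, 0.25, 256, 256, -5e-4, 1.0)
print(tmpl.shape, np.abs(tmpl).max())
```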

7.6.2 Multiple Frequency MCMC Finder

The first SPT results were based on detecting clusters in maps made with a single frequency band, 150 GHz. The detections were combined with analysis of the other two frequencies' data to bolster the case for the first SZ discovery of galaxy clusters. The 90 GHz data were taken using a sub-optimal detector array from the 2007 focal plane, and the 220 GHz data were considerably noisier and did not trivially increase the detectability of the clusters. The SPT has since vastly improved the quality of its detector arrays, especially with regard to its 90 GHz capabilities. The MCMC algorithm could be extended to include multiple frequency maps.

Parameter estimation will be greatly improved by including multi-frequency data. Difficulties in reconstructing the core radius most likely stem from our inability to distinguish CMB from SZ signal; an independent measure of the two components will help us effectively remove the CMB contaminants from our maps. Preliminary results from the multi-frequency matched filter show that the 90 GHz data do indeed improve cluster detection greatly. The addition of the 90 GHz data also allows us to better distinguish between CMB anisotropies and cluster signals, and therefore improves cluster parameter reconstruction. The CMB, atmospheric, point source, and unresolved SZ noise components are correlated between frequencies, so we cannot simply insert the frequency dependence of the

124 SZ spectrum. The noise covariance will still be diagonal for each frequency, but will need to also include information on the correlations between frequencies. These developments necessarily have to be included in the matched filter analysis as well.
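One way the multi-frequency likelihood could be organized is sketched below: the same SZ template is scaled by the non-relativistic thermal SZ frequency dependence in each band, and every Fourier mode carries a small inter-frequency covariance matrix that can encode the correlated CMB, atmospheric, and point source power. The array shapes, the unit covariance, and all names are assumptions made for illustration.

```python
import numpy as np

H_PLANCK, K_B, T_CMB = 6.626e-34, 1.381e-23, 2.725

def sz_frequency_factor(freq_ghz):
    """Non-relativistic thermal SZ spectral dependence f(x) = x*coth(x/2) - 4,
    with x = h*nu / (k_B * T_CMB), so that dT(nu) = f(x) * T_CMB * y."""
    x = H_PLANCK * freq_ghz * 1e9 / (K_B * T_CMB)
    return x / np.tanh(x / 2.0) - 4.0

def multifreq_log_likelihood(y_template_ft, data_ft, cov_inv, freqs_ghz):
    """Gaussian log-likelihood for maps at several frequencies.  For each Fourier
    mode, `cov_inv` is the inverse of an n_freq x n_freq covariance carrying the
    correlated noise power between bands.
    Shapes: data_ft (n_freq, n_modes), cov_inv (n_modes, n_freq, n_freq)."""
    f = np.array([sz_frequency_factor(nu) for nu in freqs_ghz])
    model = f[:, None] * y_template_ft[None, :]          # same template, scaled per band
    resid = data_ft - model                              # (n_freq, n_modes)
    # chi^2 = sum_k resid_k^dagger C_k^{-1} resid_k
    chi2 = np.einsum('ik,kij,jk->', np.conj(resid), cov_inv, resid)
    return -0.5 * np.real(chi2)

# toy sizes: 3 bands, 1000 Fourier modes, uncorrelated unit covariance
n_freq, n_modes = 3, 1000
rng = np.random.default_rng(6)
data_ft = rng.normal(size=(n_freq, n_modes)) + 1j * rng.normal(size=(n_freq, n_modes))
y_ft = rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)
cov_inv = np.tile(np.eye(n_freq), (n_modes, 1, 1))
print(multifreq_log_likelihood(y_ft, data_ft, cov_inv, [90.0, 150.0, 220.0]))
```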

7.7 Conclusion

The MCMC cluster finder has the potential to increase the yield of the SPT-derived cluster catalog. It could also be used along with the matched filter to better measure cluster parameters, including a proxy for cluster mass. Either of these improvements would be a welcome addition to an already rich data set. Regardless, it is important to have an independent cluster finder pipeline cross-checking results that will be used to constrain cosmological models. The MCMC cluster finder is, in a sense, more appealing because it discovers clusters and characterizes them simultaneously. The inclusion of priors and the way we search for clusters of different shapes are more natural, and the adaptation to a multi-frequency cluster finder is straightforward.

We have made great progress showing the promise of this method and have described our current limitations. The path forward to a fully characterized and optimized cluster finder has been outlined. We have shown the algorithm to be viable for a data set with large maps and many pixels, which was once thought to be very difficult. To improve the algorithm substantially, it is necessary to decrease computation time by a factor of a few, and include a more sophisticated algorithm for computing evidence. These improvements would allow one to directly compare the matched filter and the MCMC method on a large simulated data set.

Chapter 8

Conclusion

We presented the exciting first results from the South Pole Telescope, which are the first discoveries of galaxy clusters made using the Sunyaev-Zel'dovich effect. These first discoveries show that the SPT is on its way to producing a catalog of galaxy clusters that will constrain the fundamental parameters that govern the expansion of the Universe.

The bulk of this dissertation is spent describing the two important phases of the SPT that I took part in. The first important phase was the design, construction, and deployment of the SPT cold secondary optics. We illustrated the importance of the cold stop to our optics design, and then described how it was designed to meet optical and cryogenic specifications.

The second important phase of the project was, and continues to be, collecting, reducing, and presenting results from our survey. This work is described in the latter half of the dissertation. Here we described, in detail, the processing steps used to make maps and the algorithms used to discover clusters of galaxies. The second phase is highlighted by the first discovery of galaxy clusters using the Sunyaev-Zel'dovich effect, as reported in Staniszewski et al. [41]. We also presented a promising new MCMC technique for searching for clusters in our maps. We hope this will enable us to increase the number of clusters discovered by the SPT, and decrease the uncertainty in our estimates of cluster mass and size.

It is truly an exciting time to be working in cosmology, and more specifically, to be working on the SPT. The first results of the SPT show that we are capable of discovering clusters of galaxies in our blind survey fields. With many hundreds of square degrees of data in hand, the SPT will soon advance our collective understanding of the Universe.

Bibliography

[1] G. Hinshaw et al. Five-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Data Processing, Sky Maps, and Basic Results. Astrophys. J. Suppl., 180:225–245, 2009. [cited at p. 3]
[2] E. Hubble. A Relation between Distance and Radial Velocity among Extra-Galactic Nebulae. Proceedings of the National Academy of Science, 15:168–173, March 1929. [cited at p. 4]
[3] Dorothea Samtleben, Suzanne Staggs, and Bruce Winstein. The Cosmic Microwave Background for Pedestrians: A Review for Particle and Nuclear Physicists. Ann. Rev. Nucl. Part. Sci., 57:245–283, 2007. [cited at p. 7]
[4] G. Gamow. The Creation of the Universe. Dover Publications, 2004. [cited at p. 6]
[5] A. A. Penzias and R. W. Wilson. A Measurement of Excess Antenna Temperature at 4080 Mc/s. ApJ, 142:419–421, July 1965. [cited at p. 7]
[6] D. J. Fixsen, E. S. Cheng, J. M. Gales, J. C. Mather, R. A. Shafer, and E. L. Wright. The Cosmic Microwave Background Spectrum from the Full COBE FIRAS Data Set. ApJ, 473:576, December 1996. [cited at p. 7]
[7] J. C. Mather, D. J. Fixsen, R. A. Shafer, C. Mosier, and D. T. Wilkinson. Calibrator Design for the COBE Far-Infrared Absolute Spectrophotometer (FIRAS). ApJ, 512:511–520, February 1999. [cited at p. 7]
[8] E. Komatsu et al. Seven-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Cosmological Interpretation. 2010. [cited at p. 9]
[9] A. G. Riess, A. V. Filippenko, P. Challis, A. Clocchiatti, A. Diercks, P. M. Garnavich, R. L. Gilliland, C. J. Hogan, S. Jha, R. P. Kirshner, B. Leibundgut, M. M. Phillips, D. Reiss, B. P. Schmidt, R. A. Schommer, R. C. Smith, J. Spyromilio, C. Stubbs, N. B. Suntzeff, and J. Tonry. Observational evidence from supernovae for an accelerating universe and a cosmological constant. AJ, 116:1009, 1998. [cited at p. 8, 10]
[10] S. Perlmutter, G. Aldering, G. Goldhaber, R. A. Knop, P. Nugent, P. G. Castro, S. Deustua, S. Fabbro, A. Goobar, D. E. Groom, I. M. Hook, A. G. Kim, M. Y. Kim, J. C. Lee, N. J. Nunes, R. Pain, C. R. Pennypacker, R. Quimby, C. Lidman, R. S. Ellis, M. Irwin, R. G. McMahon, P. Ruiz-Lapuente, N. Walton, B. Schaefer, B. J. Boyle, A. V. Filippenko, T. Matheson, A. S. Fruchter, N. Panagia, H. J. M. Newberg, W. J. Couch, and The Supernova Cosmology Project. Measurements of omega and lambda from 42 high-redshift supernovae. ApJ, 517:565–586, June 1999. astro-ph/9812133. [cited at p. 8, 10]
[11] Joshua Frieman, Michael Turner, and Dragan Huterer. Dark Energy and the Accelerating Universe. Ann. Rev. Astron. Astrophys., 46:385–432, 2008. [cited at p. 10]
[12] M. Kowalski, D. Rubin, G. Aldering, R. J. Agostinho, A. Amadon, R. Amanullah, C. Balland, K. Barbary, G. Blanc, P. J. Challis, A. Conley, N. V. Connolly, R. Covarrubias, K. S. Dawson, S. E. Deustua, R. Ellis, S. Fabbro, V. Fadeyev, X. Fan, B. Farris, G. Folatelli, B. L. Frye, G. Garavini, E. L. Gates, L. Germany, G. Goldhaber, B. Goldman, A. Goobar, D. E. Groom, J. Haissinski, D. Hardin, I. Hook, S. Kent, A. G. Kim, R. A. Knop, C. Lidman, E. V. Linder, J. Mendez, J. Meyers, G. J. Miller, M. Moniez, A. M. Mourão, H. Newberg, S. Nobili, P. E. Nugent, R. Pain, O. Perdereau, S. Perlmutter, M. M. Phillips, V. Prasad, R. Quimby, N. Regnault, J. Rich, E. P. Rubenstein, P. Ruiz-Lapuente, F. D. Santos, B. E. Schaefer, R. A. Schommer, R. C. Smith, A. M. Soderberg, A. L. Spadafora, L.-G. Strolger, M. Strovink, N. B. Suntzeff, N. Suzuki, R. C. Thomas, N. A. Walton, L. Wang, W. M. Wood-Vasey, and J. L. Yun. Improved Cosmological Constraints from New, Old, and Combined Supernova Data Sets. ApJ, 686:749–778, October 2008. [cited at p. 9]
[13] Joseph J. Mohr. Cluster Survey Studies of the Dark Energy. 2004. astro-ph/0408484. [cited at p. 12]
[14] R. A. Sunyaev and Y. B. Zeldovich. The Spectrum of Primordial Radiation, its Distortions and their Significance. Comments on Astrophysics and Space Physics, 2:66, March 1970. [cited at p. 11]
[15] Y. Rephaeli. Comptonization of the cosmic microwave background: The Sunyaev-Zel'dovich effect. ARA&A, 33:541, 1995. [cited at p. 12]
[16] B. A. Benson. Spectral Measurements of the Sunyaev-Zel'dovich Effect. PhD thesis. [cited at p. 12]
[17] M. Bonamente, M. K. Joy, S. J. LaRoque, J. E. Carlstrom, E. D. Reese, and K. S. Dawson. Determination of the Cosmic Distance Scale from Sunyaev-Zel'dovich Effect and Chandra X-Ray Measurements of High-Redshift Galaxy Clusters. ApJ, 647:25–54, August 2006. [cited at p. 13]
[18] B. A. Benson, S. E. Church, P. A. R. Ade, J. J. Bock, K. M. Ganga, C. N. Henson, and K. L. Thompson. Measurements of Sunyaev-Zel'dovich Effect Scaling Relations for Clusters of Galaxies. ApJ, 617:829–846, December 2004. [cited at p. 13, 117]
[19] R. A. Chamberlin. South Pole submillimeter sky opacity and correlations with radiosonde observations. J. Geophys. Res. Atmospheres, 106(D17):20101–20113, 2001. [cited at p. 18]
[20] J. B. Peterson, S. J. E. Radford, P. A. R. Ade, R. A. Chamberlin, M. J. O'Kelly, K. M. Peterson, and E. Schartman. Stability of the submillimeter brightness of the atmosphere above Mauna Kea, Chajnantor, and the South Pole. PASP, 115:383–388, 2003. [cited at p. 18]
[21] S. Padin. SPT optics PowerPoint slides. SPT Internal memo, January 2004. [cited at p. 21]
[22] J. E. Carlstrom, P. A. R. Ade, K. A. Aird, B. A. Benson, L. E. Bleem, S. Busetti, C. L. Chang, E. Chauvin, H.-M. Cho, T. M. Crawford, A. T. Crites, M. A. Dobbs, N. W. Halverson, S. Heimsath, R. E. Hills, W. L. Holzapfel, M. Joy, R. Keisler, T. M. Lanting, A. T. Lee, E. M. Leitch, J. Leong, W. Lu, M. Lueker, J. J. McMahon, S. S. Meyer, J. J. Mohr, T. E. Montroy, S. Padin, T. Plagge, C. Pryke, J. E. Ruhl, K. K. Schaffer, D. Schwan, E. Shirokoff, H. G. Spieler, Z. Staniszewski, A. A. Stark, and J. D. Vieira. The South Pole Telescope. PASP, submitted, 2009. [cited at p. 22, 79]
[23] S. Padin, Z. Staniszewski, R. Keisler, M. Joy, A. A. Stark, P. A. R. Ade, K. A. Aird, B. A. Benson, L. E. Bleem, J. E. Carlstrom, C. L. Chang, T. M. Crawford, A. T. Crites, M. A. Dobbs, N. W. Halverson, S. Heimsath, R. E. Hills, W. L. Holzapfel, C. Lawrie, A. T. Lee, E. M. Leitch, J. Leong, W. Lu, M. Lueker, J. J. McMahon, S. S. Meyer, J. J. Mohr, T. E. Montroy, T. Plagge, C. Pryke, J. E. Ruhl, K. K. Schaffer, E. Shirokoff, H. G. Spieler, and J. D. Vieira. South Pole Telescope optics. Appl. Opt., 47(24):4418–4428, 2008. [cited at p. 23, 24, 49, 61]
[24] T. M. Lanting, K. Arnold, H.-M. Cho, J. Clarke, M. Dobbs, W. Holzapfel, A. T. Lee, M. Lueker, P. L. Richards, A. D. Smith, and H. G. Spieler. Frequency-domain readout multiplexing of transition-edge sensor arrays. Nuclear Instruments and Methods in Physics Research A, 559:793–795, April 2006. [cited at p. 28, 29, 30]
[25] M. E. Huber, P. A. Neil, R. G. Benson, D. A. Burns, A. F. Corey, C. S. Flynn, Y. Kitaygorodskaya, O. Massihzadeh, J. M. Martinis, and G. C. Hilton. DC SQUID series array amplifiers with 120 MHz bandwidth (corrected). IEEE Transactions on Applied Superconductivity, 11:4048–4053, 2001. [cited at p. 29]
[26] W. Lu and J. Ruhl. Measurement of reflectivity of Eccosorb. SPT Internal memo, April 2004. [cited at p. 33]
[27] ZEMAX. Software for optical system design. ZEMAX Development Corporation. http://www.zemax.com. [cited at p. 35]
[28] J. Leong, J. Ruhl, and K. Xiao. SPT cold secondary modeling: beam spillover I. SPT Internal memo, March 2004. [cited at p. 36]
[29] Z. Staniszewski, J. Leong, and J. Ruhl. SPT cold secondary modeling: beam spillover II. SPT Internal memo, May 2004. [cited at p. 36, 40]
[30] SOLIDWORKS. 3D CAD design software. Dassault Systèmes. http://www.solidworks.com. [cited at p. 42]
[31] Metalized Products Inc. 37 East Street, Winchester, Massachusetts 01890, USA. [cited at p. 48]
[32] J. B. Hindle. Floatation systems. Amateur Telescope Making, pages 229–234, 1947. [cited at p. 51]
[33] Zotefoams PLC. http://cryogenics.nist.gov/MPropsMAY/material [cited at p. 56]
[34] Z. Staniszewski, J. Leong, and J. Ruhl. Long term window deflection measurements. SPT Internal memo, September 2004. [cited at p. 57]
[35] C. Wang and P. E. Gifford. Two-Stage Pulse Tube Cryocoolers for 4 K and 10 K Operation. Springer US, 2002. [cited at p. 60]
[36] J. Ekin. Experimental techniques for low-temperature measurements. Oxford University Press, 2006. [cited at p. 61]
[37] Lake Shore Cryotronics. 575 McCorkle Blvd, Westerville, OH 43082, USA. [cited at p. 62, 64]
[38] National Institute of Standards and Technology: Cryogenics Technologies Group. Material properties. http://cryogenics.nist.gov/MPropsMAY/material [cited at p. 63]
[39] J. Klein and S. Katz. Experimental cosmology at Penn: thermal conductivity calculator. http://chile1.physics.upenn.edu/ec/calc.asp. [cited at p. 63]
[40] Boston Electronics Corporation. http://www.boselec.com/. [cited at p. 72]
[41] Z. Staniszewski, P. A. R. Ade, K. A. Aird, B. A. Benson, L. E. Bleem, J. E. Carlstrom, C. L. Chang, H.-M. Cho, T. M. Crawford, A. T. Crites, T. de Haan, M. A. Dobbs, N. W. Halverson, G. P. Holder, W. L. Holzapfel, J. D. Hrubes, M. Joy, R. Keisler, T. M. Lanting, A. T. Lee, E. M. Leitch, A. Loehr, M. Lueker, J. J. McMahon, J. Mehl, S. S. Meyer, J. J. Mohr, T. E. Montroy, C.-C. Ngeow, S. Padin, T. Plagge, C. Pryke, C. L. Reichardt, J. E. Ruhl, K. K. Schaffer, L. Shaw, E. Shirokoff, H. G. Spieler, B. Stalder, A. A. Stark, K. Vanderlinde, J. D. Vieira, O. Zahn, and A. Zenteno. Galaxy clusters discovered with a Sunyaev-Zel'dovich effect survey. ArXiv e-prints. astro-ph/0810.1578. [cited at p. 76, 98, 101, 126]
[42] A. E. Wright, M. R. Griffith, B. F. Burke, and R. D. Ekers. The Parkes-MIT-NRAO (PMN) surveys. 2: Source catalog for the southern survey (delta greater than -87.5 deg and less than -37 deg). ApJS, 91:111–308, March 1994. [cited at p. 79]
[43] T. Mauch, T. Murphy, H. J. Buttery, J. Curran, R. W. Hunstead, B. Piestrzynski, J. G. Robertson, and E. M. Sadler. SUMSS: a wide-field radio imaging survey of the southern sky - II. The source catalogue. MNRAS, 342:1117–1130, March 2003. [cited at p. 79]
[44] M. C. Runyan, P. A. R. Ade, R. S. Bhatia, J. J. Bock, M. D. Daub, J. H. Goldstein, C. V. Haynes, W. L. Holzapfel, C. L. Kuo, A. E. Lange, J. Leong, M. Lueker, M. Newcomb, J. B. Peterson, C. Reichardt, J. Ruhl, G. Sirbi, E. Torbet, C. Tucker, A. D. Turner, and D. Woolsey. ACBAR: The Arcminute Cosmology Bolometer Array Receiver. ApJS, 149:265–287, December 2003. [cited at p. 81]
[45] C. L. Reichardt, P. A. R. Ade, J. J. Bock, J. R. Bond, J. A. Brevik, C. R. Contaldi, M. D. Daub, J. T. Dempsey, J. H. Goldstein, W. L. Holzapfel, C. L. Kuo, A. E. Lange, M. Lueker, M. Newcomb, J. B. Peterson, J. Ruhl, M. C. Runyan, and Z. Staniszewski. High-Resolution CMB Power Spectrum from the Complete ACBAR Data Set. ApJ, 694:1200–1219, April 2009. [cited at p. 81, 82]
[46] K. Coble, P. A. R. Ade, J. J. Bock, J. R. Bond, J. Borrill, A. Boscaleri, C. R. Contaldi, B. P. Crill, P. de Bernardis, P. Farese, K. Ganga, M. Giacometti, E. Hivon, V. V. Hristov, A. Iacoangeli, A. H. Jaffe, W. C. Jones, A. E. Lange, L. Martinis, S. Masi, P. Mason, P. D. Mauskopf, A. Melchiorri, T. Montroy, C. B. Netterfield, L. Nyman, E. Pascale, F. Piacentini, D. Pogosyan, G. Polenta, F. Pongetti, S. Prunet, G. Romeo, J. E. Ruhl, and F. Scaramuzzi. Observations of Galactic and Extra-galactic Sources From the BOOMERANG and SEST Telescopes. ArXiv Astrophysics e-prints, January 2003. astro-ph/0301599. [cited at p. 81]
[47] M. Lueker et al. Measurements of Secondary Cosmic Microwave Background Anisotropies with the South Pole Telescope. 2009. astro-ph/0912.4317. [cited at p. 83]
[48] M. G. Haehnelt and M. Tegmark. Using the Kinematic Sunyaev-Zeldovich effect to determine the peculiar velocities of clusters of galaxies. MNRAS, 279:545, March 1996. [cited at p. 88]
[49] D. Herranz, J. L. Sanz, R. B. Barreiro, and E. Martínez-González. Scale-adaptive Filters for the Detection/Separation of Compact Sources. ApJ, 580:610–625, November 2002. [cited at p. 88]
[50] J.-B. Melin, J. G. Bartlett, and J. Delabrouille. Catalog extraction in SZ cluster surveys: a matched filter approach. A&A, 459:341–352, November 2006. [cited at p. 88]
[51] J. F. Navarro, C. S. Frenk, and S. D. M. White. The Structure of Cold Dark Matter Halos. ApJ, 462:563, May 1996. [cited at p. 89]
[52] J. F. Navarro, C. S. Frenk, and S. D. M. White. A Universal Density Profile from Hierarchical Clustering. ApJ, 490:493, December 1997. [cited at p. 89]
[53] D. Nagai, A. V. Kravtsov, and A. Vikhlinin. Effects of Galaxy Formation on Thermodynamics of the Intracluster Medium. ApJ, 668:1–14, October 2007. [cited at p. 89]
[54] M. R. Nolta, J. Dunkley, R. S. Hill, G. Hinshaw, E. Komatsu, D. Larson, L. Page, D. N. Spergel, C. L. Bennett, B. Gold, N. Jarosik, N. Odegard, J. L. Weiland, E. Wollack, M. Halpern, A. Kogut, M. Limon, S. S. Meyer, G. S. Tucker, and E. L. Wright. Five-Year Wilkinson Microwave Anisotropy Probe Observations: Angular Power Spectra. ApJS, 180:296–305, February 2009. [cited at p. 92, 113]
[55] C. Borys, S. Chapman, M. Halpern, and D. Scott. The Hubble Deep Field North SCUBA Super-map - I. Submillimetre maps, sources and number counts. MNRAS, 344:385–398, September 2003. [cited at p. 92, 113]
[56] L. D. Shaw, G. P. Holder, and P. Bode. The Impact of Halo Properties, Energy Feedback, and Projection Effects on the Mass-SZ Flux Relation. ApJ, 686:206–218, October 2008. [cited at p. 92]
[57] H. Böhringer, P. Schuecker, L. Guzzo, C. A. Collins, W. Voges, R. G. Cruddace, A. Ortiz-Gil, G. Chincarini, S. De Grandi, A. C. Edge, H. T. MacGillivray, D. M. Neumann, S. Schindler, and P. Shaver. The ROSAT-ESO Flux Limited X-ray (REFLEX) Galaxy cluster survey. V. The cluster catalogue. A&A, 425:367–383, October 2004. [cited at p. 100, 101]
[58] J. Truemper. ROSAT - A new look at the X-ray sky. Science, 260:1769–1771, June 1993. [cited at p. 100]
[59] W. Voges, B. Aschenbach, T. Boller, H. Brauninger, U. Briel, W. Burkert, K. Dennerl, J. Englhauser, R. Gruber, F. Haberl, G. Hartner, G. Hasinger, E. Pfeffermann, W. Pietsch, P. Predehl, J. Schmitt, J. Trumper, and U. Zimmermann. ROSAT All-Sky Survey Faint Source Catalog (Voges+ 2000). VizieR Online Data Catalog, 9029:0, May 2000. [cited at p. 101]
[60] G. O. Abell, H. G. Corwin, Jr., and R. P. Olowin. A catalog of rich clusters of galaxies. ApJS, 70:1–138, May 1989. [cited at p. 101]
[61] M. P. Hobson and C. McLachlan. A Bayesian approach to discrete object detection in astronomical datasets. MNRAS, 338:765–784, 2003. [cited at p. 104, 110, 123]
[62] Laurie Shaw. Personal communication. [cited at p. 113, 114]
[63] Pedro Carvalho, Graca Rocha, and M. P. Hobson. A fast Bayesian approach to discrete object detection in astronomical datasets - PowellSnakes I. 2008. astro-ph/0802.3916. [cited at p. 123]

Appendices

Appendix A

Thermometer Locations and Nominal Temperatures

The thermometer locations on the cones changed between the first observing season and the subsequent observing seasons. Below are pictures and thermometer locations for the second season onward. Also listed are the readout channels for the main data acquisition control system.

Diode  Name                           Tape label  Final Temp (K)
1      G10 Tube                       D7          9.1
2      10K receiver, opposite side    D4          9.0
3      G10 Fin                        D5          9.5
4      10K Filter                     D1          9.9
5      10K receiver end               D3          9.3
6      Front Spider Cone B.C.         D19         8.9
7      10K by filter                  D2          9.4
8      secondary mirror               D20         8.4
9      10K mirror end of cone         D6          9.0
10     77K PT Head                    D26         30
11     Back Spider B.C.               D18         8.5
12     77K HS cone end                D25         35
13     10K PT Head                    D22         5.4
14     10K H.S. cone end              D21         6.3
15     Back Spider near H.S.          D17         7.2
16     77K back plate top             D24         50
17     Rod 5 4K end                   D13         57
18     Rod 4 H.S.                     D16         87
19     Rod 5 between                  D14         127
20     Rod 5 H.S.                     D15         88
21     77K back plate near H.S.       D23         114
22     77K filter                     D8          66
23     77K cone near filter           D9          64
24     77K receiver end of cone       D10         61
25     77K receiver end, opposite     D11         61
26     77K cone, mirror end           D12         53

Table A.1: 2008 Diodes

Cernox  Name
1       top of window cone
2       bottom of receiver cone
3       mirror
4       4K PT head

Table A.2: 2008 Cernox

Misc Channels  Name         Units
TCT1           Cal Source   K
Spare1         *Pressure    Bits

Table A.3: 2008 aux sensors

Diode  Name
1      Front Spider Edge
2      10K Secondary end of Cone
3      10K G10 Fin
4      10K Window end of Cone
5      10K Filter flange on Cone assembly
6      10K Filter, on actual filter
7      10K G10 Tube
8      Secondary Mirror
9      10K Cone Rec end of Cone
10     77K Window end of Cone
11     77K Filter, on actual filter
12     Truss Rod, thermometer 1
13     77K Secondary end of Cone
14     Truss Rod, thermometer 2
15     Truss Rod, thermometer 3
16     77K Back Plate
17     10K Back Spider
18     HR10 (removed)
19     Truss rod by 4K end
20     10K Heat Strap PT
21     77K Back 2
22     77K Heat Strap, back plate end
23     10K Back Spider near bolt circle
24     77K PT Head
25     77K Heat Strap on "L" Bracket
26     HR10 on G10 (removed)

Table A.4: 2007 Diodes

Cernox  Name
1       Secondary Mirror
2       10K Secondary end of Cone
3       10K PT Head

Table A.5: 2007 Cernox

Misc Channels  Name         Units
TCT1           Cal Source   K
Spare1         *Pressure    Bits

Table A.6: 2007 aux
