
UNIVERSITÀ DEGLI STUDI DI MILANO
Facoltà di scienze matematiche, fisiche e naturali
DOTTORATO DI RICERCA IN FISICA, ASTROFISICA E FISICA APPLICATA (CICLO XX)

Data Analysis of the Planck/LFI QM/FM Tests

Doctoral thesis by MAURIZIO TOMASI
Student ID no. R06115
PACS code 95.85.E

Coordinator: Prof. GIANPAOLO BELLINI
Tutor: Prof. MARCO BERSANELLI
Referee: Prof. GIORGIO SIRONI

Typeset with LaTeX.

This thesis is available in electronic format (Adobe Portable Document Format, PDF) at http://www.geocities.com/zio_tom78/.

Contents

Introduction

1 Our Universe and the Cosmic Microwave Background
  1.1 The Birth of Cosmology and the Ancient Models of the Universe
      1.1.1 Greek Philosophy and the Origin of the World
      1.1.2 Cosmology in the Middle Ages
  1.2 From Copernicus to the Crisis of Heliocentrism
      1.2.1 The Heliocentric Cosmological Model
      1.2.2 The First Mathematical Models in Cosmology
  1.3 Cosmology in the XX Century
      1.3.1 Einstein's Static Cosmological Model
      1.3.2 The Friedmann Model
      1.3.3 Hubble's Law of Galaxy Recession
  1.4 The Cosmic Microwave Background
      1.4.1 Gamow's Prediction of a Relic Background
      1.4.2 First Detection of the CMB
      1.4.3 The Inflationary Universe
      1.4.4 First CMB Anisotropies Experiments
      1.4.5 The Science of CMB Anisotropies
      1.4.6 Origin of CMB Anisotropies
      1.4.7 Importance of the CMB for Physics
      1.4.8 The WMAP Experiment

2 The Planck Mission and the LFI Instrument
  2.1 Role of the Planck Project in the CMB Science
  2.2 The High Frequency Instrument (HFI)
  2.3 The Low Frequency Instrument (LFI)
      2.3.1 Overview of the Instrument
      2.3.2 Structure of an LFI Radiometer
  2.4 The Planck Cooling System
      2.4.1 The Sorption Cooler
      2.4.2 The Joule-Thomson Cooler
      2.4.3 The Dilution Cooler
  2.5 The Planck/LFI Ground Test Campaigns
      2.5.1 Overview of the Test Stages
      2.5.2 The Radiometric Chain Assembly (RCA) Tests
      2.5.3 The Radiometer Array Assembly (RAA) Tests

3 Dynamic Thermal Analysis of the LFI Focal Plane
  3.1 Need for a Thermal Characterization of the Focal Plane
  3.2 General Concepts about Thermal Transfer
      3.2.1 Derivation of the General Equation
      3.2.2 Thermal Transfer Functions
  3.3 Numerical Thermal Analysis
      3.3.1 Purpose of Numerical Analysis
      3.3.2 Characteristics of a Numerical Model
      3.3.3 Calibration of a Numerical Thermal Model
  3.4 Experimental Measurements of the Transfer Functions
      3.4.1 The Fourier Transform Method
      3.4.2 The Non-linear Fitting Method
      3.4.3 The Direct Estimation Method
      3.4.4 Overall comparison of the three methods
  3.5 Data Analysis of the QM Tests
  3.6 Data Analysis of the FM Tests
      3.6.1 The Test Procedure
      3.6.2 Analysis of the Results
  3.7 Conclusions

4 Calibration and Verification of the LFI Data Compressor
  4.1 Principles of the LFI Data Compressor
      4.1.1 Need for a Data Compressor
      4.1.2 Principles of Data Compression
      4.1.3 Details of the LFI Data Compressor
      4.1.4 The Radiometer Electronics (REBA) Acquisition Modes
      4.1.5 Requirements on Data Compression
  4.2 Calibration of the LFI Data Compressor
  4.3 Verification of the LFI Data Compressor
  4.4 Detection of Jumps in the Radiometric Output

5 Verification and Calibration of LFI with LIFE
  5.1 LIFE, an Analysis Tool for the LFI Test Campaigns
  5.2 Radiometric Test Analysis with LIFE: the RaNA Package
      5.2.1 Format of the LFI RCA Data
      5.2.2 The RaNA Data Analysis Modules
  5.3 LFI Test Analysis with LIFE: the LAMA Package
      5.3.1 Format of the LFI RAA Data
      5.3.2 Tree Representation of RAA Data under Lama
      5.3.3 Support for Multiple Measure Units in the Data
      5.3.4 Extension of the RaNA Approach to Multiple Feed-Horns
      5.3.5 Downsampling of Radiometric Outputs
      5.3.6 Accessing data
      5.3.7 The Lama Link module
  5.4 Future Developments of LIFE

6 Conclusions and Future Work
  6.1 Use of LIFE in the LFI QM/FM Tests
  6.2 In-flight Testing and the Future of LIFE
      6.2.1 Use in the next Satellite Tests
      6.2.2 Use in Flight

A Algorithmic Asymptotic Behavior

List of Acronyms

Introduction

This thesis is the result of three years of work carried out at the Istituto Nazionale di Astrofisica e Fisica Cosmica (INAF) in Milan. In my work I have contributed to the ground calibration of the Low Frequency Instrument (LFI), one of the two instruments that will fly on the Planck satellite, a new-generation experiment for the measurement of temperature and polarization anisotropies in the Cosmic Microwave Background (CMB). Planck (http://www.rssd.esa.int/Planck) is an ESA project with instruments funded by ESA member states (in particular the PI countries: France and Italy), and with special contributions from Denmark and NASA (USA).

In this thesis I will also illustrate my contributions to the implementation of the Lfi Integrated perFormance Evaluator (LIFE) software package and will show some of its applications. LIFE has been a key component in the testing of the qualification and flight models (QM and FM) of the Planck/LFI instrument, performed between July 2004 and October 2006 in the Alcatel/Alenia Space laboratories of Milan (Italy). Since November 2006, many of these tests are being repeated on the fully integrated satellite in the Thales/Alenia Space laboratories of Cannes (France), according to the Planck calibration plan.

The organization of this thesis is as follows:

• Chapter 1 gives a historical introduction to cosmology and the CMB, and briefly discusses the most important experiments in the field of CMB measurements. Although somewhat unusual, this historical approach introduces modern cosmology in a wider context.

• Chapter 2 provides an overview of the Planck space mission to measure the temperature and polarization anisotropies of the CMB. I will concentrate on the LFI, an array of radiometers in the 30 ÷ 70 GHz frequency range.

• To achieve its ambitious scientific requirements, LFI needs an extensive and detailed calibration campaign, started in July 2004 and still continuing. In chapters 3 and 4 I will discuss two calibration tasks that were under my direct responsibility:

– Chapter 3 reports on the thermal calibration activity that characterized the dynamical behavior of the Planck/LFI focal plane when temperature fluctuations are induced on it. Such testing is extremely important for achieving the scientific goals of Planck, since LFI requires high temperature stability to reduce the impact of systematic errors on its performance.

– In order to send all the scientific information acquired during flight to Earth, LFI implements a data compressor. Chapter 4 explains the concept of the LFI compressor and discusses the tests we have made to calibrate and verify its performance.

• Chapter 5 discusses in full detail the implementation of the LIFE software package. LIFE is the tool that has been used to analyze the LFI data acquired during the test campaigns. I was part of the LIFE development core team and have designed and coordinated some key features of the software. Here I will discuss my contributions to this activity.

• Finally, chapter 6 sums up the contents of this work and explains the next LFI calibration stages and the foreseen development of LIFE for its use during the survey observations.

CHAPTER 1

Our Universe and the Cosmic Microwave Background

MASHA: To live and not to understand why cranes fly; why children are born; why there are stars in the sky. . . You’ve got to know what you’re living for or else it’s all nonsense and waste.

Anton Chekhov, “Three sisters”, Act II

Let us seek with the desire to find, and find with the desire to seek still more.

St. Augustine, quoted in “Address of Pope Paul VI to men of thought and science”, December 8th, 1965

In this chapter I will provide a short historical overview of cosmology and of the measurement of the Cosmic Microwave Background (CMB). The chapter covers a broad time span, starting from the very first Greek philosophers (VI century BC) and ending with the most recent advances in CMB measurements (WMAP). The struggle to understand the nature of our Universe has always been present in mankind. Although this chapter cannot be a complete account of the development of cosmological understanding, it nonetheless tries to illustrate some of the great efforts spent in the last two millennia1 — and still continuing — to grasp a full “cosmic vision”. The status of present-day cosmology, and of CMB science in particular, is then discussed in greater detail. The structure of this chapter is as follows:

1I chose not to include in this short review — despite their unquestionable interest — those cultures that had no particular influence on the development of modern scientific cosmology (e.g. Eastern or pre-Columbian cultures).


• Section 1.1 discusses the first tentative cosmological models made by Greek philosophers in the period between the VI and the III century BC. Then, the influence of Christian doctrine in going beyond static cosmological models is explained by discussing the works of St. Augustine and Nicole Oresme.

• Section 1.2 provides an overview of the birth, triumph and decline of heliocentric models in the period between the XV and the XIX century.

• Section 1.3 discusses the great advancement in our modern understanding of the Cosmos after Hubble's discoveries and the parallel progress in theoretical cosmology.

• Section 1.4 introduces the subject of the CMB. This is the main topic of my thesis, and in this chapter I will show how CMB experiments can help us better understand our Universe. The technical challenges of CMB experiments are discussed as well.

I have used many sources to write sections 1.1 and 1.2. The most heavily used have been Mathieu (1966), Perone et al. (1978), Munitz (1986), Kolb (1996) and Gingerich (1999). References to historical documents and books are given in footnotes, while scientific references are given within the text.

§ 1.1. The Birth of Cosmology and the Ancient Models of the Universe

1.1.1. Greek Philosophy and the Origin of the World. Ancient cosmology searched for common principles governing the observable universe, with some analogies to modern cosmology. (And, as we shall see, in some respects it also faced similar problems.) Greek cosmology was deeply grounded in philosophical ideas. Investigating these principles meant understanding the problem of Being (i.e. why do things exist?) and of Change (i.e. why do things change?). A common idea among the first philosophers (VI cent. BC) was that the origin of Being lies in a common natural principle, called arché (ἀρχή). Depending on the author, the arché was considered a metaphysical (e.g. Heraclitus) or physical phenomenon. The latter was the case in Anaximenes' philosophy2, where the air provides life to the World by alternating between rarefactions and condensations. Among these thinkers it is worth mentioning Anaximander (c. 610–c. 546 BC), who thought the world to be spatially finite. He conceived the outer bounds of the sky as spherical, and apparently — quite surprisingly — considered the Sun and the Moon farther from the Earth than the stars.

2Anaximenes' ideas were not however completely materialistic. The first truly materialistic philosophy would appear with Democritus (c. 460–c. 370 BC) and his theory of atoms.

The Earth itself, with a drum-like shape, stands freely suspended at the center of the Universe.

Mathematics appeared in Greek philosophy within the first Pythagorean schools (VI–III cent. BC). Pythagoreans believed that the very essence of things was mathematical and therefore tried to describe the world using geometry. Two notable results of their philosophical speculations are: (a) the idea of a universe with infinite extent3, and (b) the later doctrine of heliocentrism, first proposed by Aristarchus of Samos (around 280 BC), with the Earth and the planets rotating around a still Sun4.

The next big step in developing a complex cosmology was made by Aristotle (384–322 BC), one of the most important philosophers of Antiquity. Aristotle's physics separates the constituents of matter into two classes5:

1. Sublunary things are made of a mixture of four elements (air, earth, water, fire) and can change only as the consequence of some previous change. (The "first change" was originated by God. It was not the "first" one in chronological order, however, its nature being rather metaphysical.)

2. The skies are made of a fifth substance, the ether, which is crystalline and eternal. The motion of ether can only be circular, because for Aristotle this was the simplest and most perfect motion6, the circle being the closed curve with the highest symmetry. (Linear motion is less perfect because it has no finite extent.) According to Aristotle, the skies are made of 55 spheres.

Despite his distaste for the Pythagorean idea of spatial infinity7, Aristotle believed the Universe not to have a finite age8. His reasoning was as follows: since everything is generated by something else, if the Universe had a finite age then at some point in the past something was generated by nothing, and this was considered impossible.

A few centuries later, Ptolemy9 (AD 83–161) summed up the cosmic knowledge of centuries into an Aristotelean model of the Universe with

3Archytas (428–347 BC) proposed this reasoning: if anybody reached the boundaries of the Universe, he could always extend his arm beyond that boundary. 4Aristarchus' book exposing these ideas is lost. We know of it because of a reference in Archimedes' book The sand reckoner. 5Physics, book VIII. 6According to Aristotle (and many other Greek philosophers), only stillness is completely perfect: movement implies a change, and therefore its start and its end cannot be perfect at the same time. However, among the various types of motion, the circular one is the most perfect. 7The dislike for infinity was quite common in Ancient Greece: see e.g. the paradoxes proposed by Zeno to negate the existence of infinity. 8Physics, book I. 9Almagest.

the Earth at the center and all the planets and stars moving around it in complex compositions of circles called epicycles (see figure 1.1), which solved the problem of planetary retrograde motion (not considered by Aristotle). Ptolemy's model became the "cosmological standard model" for centuries.

Figure 1.1: Geocentric model of the Universe as drawn by Peter Apian in his Cosmographia (1539).

1.1.2. Cosmology in the Middle Ages. In the age of Christianity, new ideas about the Universe (partly derived from Judaism) appeared in the Western World. God, according to the Bible, is not only the source of Being10, but also the creator of matter11. The Universe was therefore created at some time in the past, both in its matter and in its metaphysical essence. This was in contrast with Greek philosophy, where the arché does not create matter from nothing but rather gives an ordered shape to pre-existing matter.

The idea of a historical creation (i.e. one taking place at some definite time in the past) introduced new problems. One of the most interesting questions was whether there was something12 before creation. Such problems were first addressed by Origen of Alexandria (c. 185–c. 254 AD), who supposed that the Universe had undergone an infinite cycle of creations and destructions13. Origen's theory is based on the hypothesis that God's creative power acts within an absolute time. An alternative explanation was proposed by St. Augustine of Hippo (354–430 AD), who suggested that within the Creation of the Universe God also created Time:

. . . but if any excursive brain rove over the images of forepassed times, and wonder that Thou the God Almighty [...] didst for innumerable ages forbear from so great a work, before Thou wouldest make it; let him awake and consider, that he wonders at false conceits. For whence could innumerable ages pass by, which Thou madest not? [...] or what times should there be, which were not made by Thee? or how should they pass by, if they never were?14

10Exodus 3:13. 11Genesis 1:1, John 1:1. 12A common argument against creation was the following. If God created the world, he chose to do it at a specific time. But if time always existed, in the eternity before the creation there would have been nothing to distinguish one moment for creating the world from another, since God is perfect and therefore changeless. Therefore it is contradictory to believe that God created the world.

The modernity of such an idea lies in the high level of abstraction required to think of the possibility that time had a beginning: in fact, despite the many other theories proposed in the following eight centuries, Augustine's idea was considered so convincing that St. Thomas Aquinas (c. 1225–1274) accepted it15 in developing his philosophical system, the Scholastic.

The years 1270–1277 were extremely important for cosmology, since Étienne Tempier (d. 1279), bishop of Paris, issued a condemnation of 219 Aristotelian theses. Many of them were condemned because they introduced restrictions on God's power, as for instance the one stating that a spatially infinite universe is logically impossible. (Condemning this thesis is not the same as stating that the Universe is infinite: it is rather the affirmation that such a possibility cannot be excluded.) Duhem (1985) suggests that this act opened a new way of observing the cosmos, because it allowed new speculative possibilities.

Nicole Oresme (c. 1323–1382), bishop of Lisieux, contributed to many philosophical fields: a short list includes mathematics, psychology, musicology, optics and cosmology (Costé, 1997). He recognized16 that for an earthly observer the rotation of planets and stars around a still Earth is not distinguishable from a natural rotation of the Earth, since only relative motions can be detected. He then favored the hypothesis of a rotating Earth because he considered this model conceptually simpler. Oresme's idea is remarkable for two reasons: (1) it implicitly uses a simplified form of the Galilean relativity of motion, and (2) it is based on the idea that when more than one explanation for a physical phenomenon is possible, the simplest one should be favored. Unfortunately, Oresme's glosses to Traité du ciel et du monde were never published and therefore had little diffusion.

13De principiis. The idea of infinite creations was first proposed by the Stoic doctrine (c. III century BC). Stoics supposed each cycle to be identical to the others, in the belief that the World is guided by necessity. Origen explicitly refused such an explanation, because it would negate human freedom. 14Confessions, XI, XIII, translation by James O'Donnell (Oxford University Press). 15Summa theologiae, I. It is interesting to note that in De aeternitate mundi Thomas also explored the possibility of the existence of an eternal time, although from a purely speculative point of view. 16Glosses to Traité du ciel et du monde (1376–1377), especially chapters XXIV–XXV.

§ 1.2. From Copernicus to the Crisis of Heliocentrism

1.2.1. The Heliocentric Cosmological Model. With Nicolaus Copernicus (1473–1543) the heliocentric model first proposed by Aristarchus became the object of renewed interest. He derived from Aristarchus the idea that the Sun was at the center of the Universe and developed a system where the planets orbit around it17. Copernicus' model is considerably simpler than Ptolemy's but has worse accuracy, due to the fact that it postulates that the planets move in uniform circular motion centered on the Sun. Copernicus' model was corrected by Johannes Kepler18 (1571–1630), whose laws of planetary motion19 correctly considered elliptical orbits with the Sun in one of the foci. Mainly because of the lack of any definitive proof, the heliocentric model was rejected by a number of thinkers, and in 1633 the Catholic Church too made a statement against Copernicus' ideas at the end of the trial against Galileo Galilei (1564–1642). Several years would pass before the discovery of such proofs.

1.2.2. The First Mathematical Models in Cosmology. The first cosmological theory based on the conservation of some physical quantity was proposed by René Descartes (1596–1650) in his essay Treatise on the World (never published during the author's lifetime). His model was heliocentric and postulated that the conservation of momentum (mv) made the primordial bodies created by God aggregate in circular orbits and originate vortexes. Descartes considered "space" the same as "extension", therefore refusing atomism (since an extension can always be divided). After Descartes, new and more refined heliocentric models were proposed by Gottfried Leibniz (1646–1716) and Sir Isaac Newton (1642–1727). Leibniz proposed20 that the conserved quantity in the evolution of the Universe was kinetic energy (i.e. mv², called by Leibniz vis viva) instead of Descartes' momentum. Newton's cosmology used the universal gravity law21 discovered

17De revolutionibus orbium caelestium (1543). Although it does not contain explicit references to Aristarchus, Copernicus' first sketches of the book did. 18It is interesting to note that Kepler's master, Tycho Brahe (1546–1601), proposed another cosmological model where the Earth is at the center of the Universe and the Sun and the Moon orbit around it, but the other planets orbit around the Sun. 19The first two laws were stated in Astronomia nova (1609), while the third one was included in book V of Harmonices mundi (1619). 20Brevis demonstratio erroris memorabilis Cartesii et aliorum circa legem naturae (1686). 21Philosophiae naturalis principia mathematica (1687).

by himself to model an infinite static universe held together by gravitation alone. (This model was however unstable on large scales against gravitational collapse.) Newton's ideas were then extended by Immanuel Kant22 (1724–1804) and Pierre Simon Laplace23 (1749–1827), who proposed that the observable universe is the mechanical product of gravitational interactions in a primitive nebula.

Since the XVIII century, new observations of the sky (the most notable ones made by the Herschel family) showed that planets orbit around the Sun, and that stars lie at different distances from the Earth. Although this revealed the inadequacy of the heliocentric model, the idea that our galaxy is at the center of the universe remained a common belief. These views were superseded only in 1925, when Edwin Powell Hubble (see below) announced that M31 is 930,000 light-years away from the Earth, thus implying that this object is extra-galactic.

Figure 1.2: The Solar System according to Copernicus (De revolutionibus, 1543). The Sun stands still at the center of the Universe while planets move on circular orbits around it. From the Sun outwards, the planets are: Mercury, Venus, the Earth and its Moon, Mars, Jupiter, and Saturn. Notice the absence of Jupiter's moons, discovered by Galileo in 1610.

22Allgemeine Naturgeschichte (1749). 23Mécanique Céleste (1799–1825).

§ 1.3. Cosmology in the XX Century

1.3.1. Einstein's Static Cosmological Model. A major breakthrough in cosmology came in the years 1905–1916, when Albert Einstein (1879–1955) published a number of papers proposing the Theory of Relativity, in which Newtonian gravity is superseded by a more fundamental theory (see e.g. Wald, 1984). Space-time is no longer seen as a fixed frame of reference for the motion of bodies, but rather as a dynamic continuum governed by the Einstein equation:

$$R_{\mu\nu} - \frac{1}{2}\,R\,g_{\mu\nu} = \frac{8\pi G}{c^4}\,T_{\mu\nu}, \qquad (1.1)$$

where Rµν is the Ricci tensor, R the scalar curvature, gµν the metric tensor, G the gravitational constant (G = 6.67 × 10⁻¹¹ N m²/kg²), c the speed of light (c = 2.99 × 10⁸ m/s) and Tµν the stress-energy tensor. Taking into account the symmetries of the tensors, this tensor equation is equivalent to a system of 6 differential equations, which describe how the space-time continuum varies according to the stress-energy tensor. A consequence of equation (1.1) is that the presence of mass (the Tµν term) can bend space-time (the gµν term).

Einstein tried to use equation (1.1) to model a static universe, much like Newton's attempts using classical mechanics, but failed for the same reasons: stationary solutions of (1.1) are unstable under gravitational collapse. Therefore, Einstein added a cosmological constant Λ to the left side of the equation:

$$R_{\mu\nu} - \frac{1}{2}\,R\,g_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\,T_{\mu\nu}. \qquad (1.2)$$

The balance given by this additional term allows static solutions (Einstein, 1917). However, the presence of Λ in the equation had no physical motivation other than allowing the universe to be static.

Einstein chose to model the universe as a homogeneous medium for simplicity's sake. This hypothesis would however be confirmed by many subsequent measurements: on scales greater than hundreds of Megaparsecs, the Universe indeed exhibits homogeneity (i.e. the characteristics of the universe do not change with position) and isotropy (i.e. the characteristics of the universe do not change with the direction of observation). Virtually all subsequent cosmological models would use this principle, which has therefore been called the cosmological principle. The acceptance of this principle marked the end of human-centric cosmological models like Ptolemy's and Copernicus'.

Einstein's model suffered from a problem common to many other static models that assume the cosmological principle. This is known as Olbers' paradox24. If the cosmological principle is true, stars are distributed uniformly in space. If each star of radius R and distance

r covers about R²/r² of the sky, then the total solid angle covered by the stars is

$$\Omega = \int_0^\infty \frac{R^2}{r^2}\, 4\pi r^2\, dr = \infty,$$

and therefore the sky should appear infinitely bright. Many explanations of this paradox have been proposed, but none has ever proven definitive:

• Most of the stars could shine behind other stars nearer to us. However, the brightness of the sky should still be at least equal to that of the Sun.

• If the light of the farthest stars were absorbed by some interstellar gas, then the temperature of the gas would increase until it shone as brightly as the absorbed radiation.

• Finite lifetimes for the stars do not solve the paradox, since the hypothesis of a homogeneous distribution of stars in a static universe implies that the star formation rate must balance their destruction rate.

A few years after Einstein's attempts, his General Theory of Relativity would be used by an obscure Russian physicist to develop a new cosmological model that solved Olbers' paradox. That physicist was Friedmann.

24From Heinrich Wilhelm Olbers, 1758–1840, although this principle was formulated before by others (see Liebscher, 2005, for an interesting discussion), like Marcellus Palingenius (1570) and Edmond Halley (1720).

1.3.2. The Friedmann Model. In 1922 Alexander Friedmann (1888–1925) used General Relativity to propose an expanding universe. The model does not use the cosmological constant Λ. Instead, it makes the hypothesis that the metric gµν in equation (1.1) can change with time: in a spherically symmetric coordinate system, the space-time interval ds is

$$ds^2 = -dt^2 + a^2(t)\left[\frac{dr^2}{1-kr^2} + r^2\left(d\theta^2 + \sin^2\theta\, d\phi^2\right)\right], \qquad (1.3)$$

where a(t) is the so-called scale factor, which describes the expansion at time t. From these hypotheses, Friedmann derived the following equation:

$$\left(\frac{\dot a(t)}{a(t)}\right)^2 = \frac{8\pi G}{3}\,\rho - \frac{kc^2}{a^2(t)}, \qquad (1.4)$$

where ρ is the mean energy density of the Universe and k the curvature parameter (a pure number equal to either −1, 0 or +1).

The key element of the Friedmann model is the space dilation effect. Given two points A and B in the Universe, distant enough for the cosmological principle to apply, their distance r changes with time t according to

$$r(t) = r_{now}\, a(t), \qquad (1.5)$$

where r_now is the distance between A and B today and a(t) is the solution of equation (1.4). According to Friedmann, the behavior of a(t) depends on the actual density of the Universe compared with the so-called critical density ρcrit:

1. If ρ > ρcrit, the Universe will begin to collapse after a time tcrit > tnow because of the gravitational force: a(t) is a function with a maximum at tcrit. In this case the universe is said to be "closed": space has a spherical curvature and the geometry is non-Euclidean. The Universe will eventually recollapse in what has been called a Big Crunch.

2. If ρ < ρcrit, the expansion is slowed down by the gravitational force but never interrupted: a(t) is a monotonically increasing function. In this case the universe is said to be "open": space has a hyperbolic curvature and the geometry is non-Euclidean.

3. If ρ = ρcrit, the expansion will not be interrupted by the gravitational force, but ȧ(t) → 0 asymptotically. In this case the universe is said to be "flat" and the postulates of Euclidean geometry are valid. (A numerical sketch of the three cases follows below.)
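As an illustration of these three regimes, the sketch below numerically integrates equation (1.4) for a matter-dominated universe (ρ ∝ 1/a³) in units where H₀ = 1 and the density is expressed through Ω₀; the function name, step sizes and printed quantities are illustrative choices of mine, not part of the original analysis.

```python
import numpy as np

# Toy forward-Euler integration of the Friedmann equation (1.4) for a
# matter-dominated universe, in units where H0 = 1 and a(today) = 1:
#     da/dt = a * H = sqrt(Omega0 / a + (1 - Omega0))
# (first term: matter density diluting as 1/a^3; second term: curvature).
def scale_factor_history(omega0, steps=200_000, dt=1e-5):
    a, t = 1e-4, 0.0
    for _ in range(steps):
        term = omega0 / a + (1.0 - omega0)
        if term <= 0.0:           # adot = 0: a closed universe at turnaround
            break
        a += np.sqrt(term) * dt   # Euler step for da/dt
        t += dt
    return t, a

for omega0, label in [(2.0, "closed"), (1.0, "flat"), (0.5, "open")]:
    t, a = scale_factor_history(omega0)
    print(f"{label:6s} (Omega0 = {omega0}): a = {a:.2f} at t = {t:.2f}/H0")
```

A closed universe (Ω₀ > 1) stalls at the turnaround value a = Ω₀/(Ω₀ − 1) implied by the equation above, while the flat and open cases keep growing.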

A key parameter is therefore the ratio between the actual density and the critical density, defined by:

$$\Omega_0 = \frac{\rho}{\rho_{crit}}. \qquad (1.6)$$

In 1927, two years after Friedmann's death, Georges Lemaître (1894–1966), unaware of Friedmann's studies, proposed the same model but pushed the idea further, suggesting that the expansion had its origin in an initial singularity (called by Lemaître the "cosmic egg"). This would later become known as the Big Bang model. Lemaître's ideas had the advantage of offering a simple explanation of Olbers' paradox (see 1.3.1): if the universe had a beginning and the speed of light is finite, then only those stars whose distance allowed their light to reach us can be visible today. A confirmation of the validity of the ideas of Friedmann and Lemaître would come from Edwin Hubble.

1.3.3. Hubble's Law of Galaxy Recession. Edwin Powell Hubble (1889–1953) was one of the greatest astronomers of all time. Along with the discovery of the distance of the Andromeda galaxy (see section 1.2.2), he made a fundamental contribution to cosmology with the formulation of the law of galaxy recession (Hubble, 1929). This law states that the spectral lines of far galaxies show a redshift associated with a speed v such that

$$v = H_0\, d, \qquad (1.7)$$

Figure 1.3: From left to right: Albert Einstein, Edwin Hubble, and Walter Adams in 1931 at the Mount Wilson Observatory 100" telescope, in the San Gabriel Mountains of southern California. It was here in 1929 that Hubble discovered the cosmic expansion of the universe (from the archives of the California Institute of Technology).

where d is the distance of the galaxy from us and H₀ is the so-called Hubble parameter. It is common practice in cosmology to use the normalized Hubble parameter

$$h = \frac{H_0}{100\ \mathrm{km\ s^{-1}\ Mpc^{-1}}}.$$

(The most recent estimate is $h = 0.732^{+0.031}_{-0.032}$, from Spergel et al. 2007.) I shall follow this convention throughout this chapter.

The explanation of Hubble's expansion is straightforward using the Friedmann model: in an expanding universe the space between two galaxies increases progressively, and this produces a redshift that is proportional to the distance of the galaxies. This was the confirmation that the universe is indeed not static. After Hubble's paper, Einstein himself removed the Λ constant from his equations (Einstein, 1931).

Hubble's law allows an alternative time parameterization that has proven useful in modern cosmology. From equation (1.4), if the universe is still expanding now then a(t) must be a monotonically increasing function for 0 < t < t_now. The expansion causes a decrease in the frequency of traveling wave signals, which is quantitatively described25 by the redshift parameter z:

$$z = \frac{\Delta\nu}{\nu_{obs}} = \frac{\nu_{emit} - \nu_{obs}}{\nu_{obs}} = \frac{R(t_{obs})}{R(t_{emit})} - 1, \qquad (1.8)$$

where ν_obs is the frequency observed at time t_obs, and ν_emit is the frequency at the time of emission t_emit. From the fact that a(t) is invertible (because it is monotonically increasing), z(t) is invertible too and it uniquely determines t. Using z instead of t is a convention widely used in cosmology.

25This equation gives an additional explanation of Olbers' paradox: because of the cosmological redshift, the farthest galaxies have their light shifted towards frequencies outside the visible spectrum, and are therefore no longer visible.
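As a quick numeric illustration of the conventions just introduced (a sketch using only values quoted in this section):

```python
# The normalized Hubble parameter h and the redshift/scale-factor relation
# of eq. (1.8), using numbers quoted in the text.
H0 = 73.2                    # km/s/Mpc (giving h = 0.732, Spergel et al. 2007)
h = H0 / 100.0

def expansion_since(z):
    """R(t_obs)/R(t_emit) for radiation received with redshift z."""
    return 1.0 + z

print(f"h = {h:.3f}")
print(f"z = 1000 -> space has expanded by a factor {expansion_since(1000):.0f}")
```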

Despite Hubble's discovery, the Big Bang model did not gain immediate acceptance. The main reason was that the galactic redshift observed by Hubble was the only available proof. This initial rejection was also partly caused by the fact that static cosmological models had a greater appeal than evolutionary ones, thanks to the simplicity of their hypotheses: dynamic universes pose a number of philosophical problems not very different from those we examined at the beginning of section 1.1.2. Other astronomers tried alternative explanations; e.g. Fred Hoyle (1915–2001) proposed a static cosmological model26 where galaxies do separate, but at the same time new matter is continuously produced in the gaps created by the expansion itself. Such models were abandoned after the discovery of a relic radiation that provided another strong proof of Big Bang cosmology: the cosmic microwave background.

26It is interesting to note that the "Big Bang" nickname for the expanding model came from Hoyle himself.

§ 1.4. The Cosmic Microwave Background

1.4.1. Gamow's Prediction of a Relic Background. George Gamow (1904–1968) developed the model proposed by Friedmann and Lemaître, suggesting that if in the past galaxies were considerably closer, then at some moment their distance should have been so small that they were virtually in contact (Gamow, 1946). From this, Gamow assumed that at this time (called the recombination time) all the matter components were ionized and in thermal equilibrium. The radiation should have had a temperature of about 5 000 K, but, due to Wien's law and to the expansion of the universe, today it would be detected as a microwave radiation of a few Kelvin. (In his calculations, Gamow also made some estimates about nucleosynthesis processes in the primordial universe.) Quite surprisingly, Gamow's ideas were not put to the test by experimental physicists (see Weinberg, 1977), despite the fact that measuring this relic radiation would have been strong evidence for the Big Bang model. Its serendipitous discovery would happen twenty years later.

1.4.2. First Detection of the CMB. In 1965 Arno Allan Penzias (b. 1933) and Robert Woodrow Wilson (b. 1936) detected an excess antenna temperature of about 3 K at 4 GHz while measuring the radio emission of our Galaxy. They were unable to understand its nature, and at first tried without success to ascribe it to some systematic effect. They decided to publish their results (Penzias and Wilson, 1965) only when a group of Princeton researchers correctly used Gamow's idea to provide a cosmological interpretation of Penzias and Wilson's measurement (Dicke et al., 1965). (Dicke and his group were considering the experimental detection of the relic

background even before Penzias and Wilson's experiment.) Almost twenty years after his paper, Gamow's relic radiation proved to be a real object.

It is interesting to note that some evidence of the existence of this radiation had already been reported by McKellar (1940) — even before Gamow's calculations. McKellar found a population of excited rotational states of CN molecules in interstellar absorption lines, and interpreted this as the gas being in thermal equilibrium at a temperature of around 2.3 K. The cosmological implications of this discovery were however not understood at the time. For further information, refer to Bonometto et al. (2002, chapter "The cosmic microwave background" by Arthur Kosowsky).

The radiation, later called the Cosmic Microwave Background (CMB), was then the subject of a number of experiments aiming to verify its isotropy and spectral shape (first generation experiments). They confirmed that the spectrum was that of a Planckian blackbody, with a temperature of ∼ 2.73 K. Experiments in the 1970s and 1980s showed that the CMB, in accordance with the cosmological principle, exhibited an astonishingly high level of isotropy (ΔT/T ∼ 10⁻⁴).

After the discovery of the CMB, Gamow's cosmological model was modified and improved to take into account the characteristics of the CMB, and became what is now called the Cosmological Standard Model. According to this model, the history of the universe followed this path:

1. About 1.4 × 10¹⁰ years ago the Universe, a hot and dense plasma (T ∼ 10² GeV ∼ 10¹⁵ K after about 10⁻⁸ s) made of radiation and matter in thermal equilibrium, began to expand, cooling down as a consequence of the expansion itself. At this stage the total energy density was dominated by photons, and this epoch is therefore called the radiation-dominated era. The event at the beginning of the expansion is the Big Bang supposed by Lemaître.

2. Nucleosynthesis started at a cosmic time ∼ 1 s, when the mean kinetic energy (with T ∼ 10¹¹ K) was sufficiently low to produce ²H, although at a very low rate. After ∼ 200 s the temperature fell below 10⁹ K, allowing the formation of ³He and ⁴He from ²H. The predictions of this model are remarkably consistent with the observed abundances of hydrogen (∼ 76%), helium (∼ 24%) and other light nuclei.

3. In the first 3 × 10⁵ years after the Big Bang, thermodynamic equilibrium between matter and radiation was maintained by Thomson scattering between free electrons and photons; after this period the temperature was low enough (∼ 3000 K) to allow the combination of electrons and protons into neutral hydrogen: the reduced density of free electrons made matter transparent to radiation, which started to propagate freely. The time when this happened is called the last

scattering epoch (about 400,000 years after the Big Bang, z ≈ 1000). After this event, the energy density of the universe is dominated by matter, and the universe is therefore said to be matter-dominated.

4. In a matter-dominated universe, the primordial radiation propagates freely, but the expansion of the universe cools it while preserving its original blackbody spectrum. At the present time this radiation is in the microwave range (T ∼ 3 K, λmax ∼ 2 mm): this is the CMB detected by Penzias and Wilson.

The key feature of the model that explains how the CMB was produced is the fact that the expansion of the Universe produces a decrease in its temperature. In a radiation-dominated universe the radiation energy density ρR is

$$\rho_R = \frac{\mathrm{energy}}{\mathrm{volume}} \propto \frac{1/R(t)}{R^3(t)} = \frac{1}{R^4(t)} \qquad (1.9)$$

(since the energy of a photon is ∝ 1/λ ∝ 1/R(t)), and thus it decreases with increasing time. Using the energy density formula for a blackbody, u(T) = a T⁴ (where a = 4σ/c and σ ≈ 5.67 × 10⁻⁵ erg s⁻¹ cm⁻² K⁻⁴ is the Stefan-Boltzmann constant), the Friedmann equations give for the temperature

$$T \approx \frac{1.5 \times 10^{10}}{\sqrt{t\,[\mathrm{s}]}}\ \mathrm{K} \propto t^{-1/2} \qquad (1.10)$$

in the radiation-dominated era: the temperature decreases with the square root of time. At the last-scattering epoch the plasma temperature reached ∼ 3000 K and the mean kinetic energy became low enough (∼ 0.3 eV) to allow electrons to combine with protons into neutral hydrogen and other light elements (³He, ⁴He and ⁷Li). Thomson scattering between photons and free electrons was the most important process maintaining the equilibrium between matter and radiation: at z < 1000 this process was no longer efficient, and matter became transparent to radiation, which was free to propagate.
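The numbers quoted above can be cross-checked quickly; the following sketch (illustrative only) evaluates equation (1.10) in the radiation era and the adiabatic redshift scaling T(z) = T₀(1 + z) after decoupling:

```python
import math

def temp_radiation_era(t_seconds):
    """Eq. (1.10): T ~ 1.5e10 / sqrt(t[s]) kelvin in the radiation era."""
    return 1.5e10 / math.sqrt(t_seconds)

T0 = 2.725  # K, present-day CMB temperature (FIRAS value quoted in the text)

def temp_at_redshift(z):
    """Adiabatic scaling of the blackbody after decoupling: T = T0 * (1 + z)."""
    return T0 * (1.0 + z)

print(f"T(t = 200 s) ~ {temp_radiation_era(200):.1e} K  (nucleosynthesis, ~1e9 K)")
print(f"T(z = 1000)  ~ {temp_at_redshift(1000):.0f} K   (recombination, ~3000 K)")
```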

1.4.3. The Inflationary Universe. Despite the great success of the Big Bang model in explaining Hubble's law and the CMB, there was still an issue to be addressed: why is the CMB so homogeneous? Since nothing can travel faster than light (3 × 10¹⁰ cm/s), any signal emitted in the first 400,000 years after the Big Bang could not have traveled more than ∼ 4 × 10²³ cm. This distance is equivalent to an angle of about 1° on the last scattering surface. This apparently contradicts the evidence that the CMB shows a high uniformity even on larger angular scales.

A new cosmological model that addresses this point is the inflationary model, first proposed by Guth (1981) but vastly modified and improved in the following years. According to the standard Big Bang model, the two big eras in the evolution of the universe were dominated by two different equations of state: p = ρ/3 in the radiation-dominated era, and p = 0 in the matter-dominated era (assuming c = 1). The inflationary theory postulates a new era preceding the radiation-dominated universe, where the primordial plasma was governed by the following equation of state27:

$$p = w\rho, \qquad (1.11)$$

with w < 0 a parameter whose exact value depends on the inflationary model used (usually w = −1). This equation of state leads to an exponential growth of the a(t) term in equation (1.4). This can explain why the CMB is so homogeneous even on large angular scales: the exponential expansion proceeded at an extreme rate and in the very first stages kept together regions that today are no longer in causal contact.

The inflationary theory did not gain immediate success. It won considerable attention only when other puzzling discoveries showed the limits of the Friedmann-Lemaître models. We shall see this in the next section, dedicated to CMB anisotropies.
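For completeness, here is the standard one-line argument (not spelled out in the text) for why w = −1 yields exponential growth: the continuity equation $\dot\rho = -3(\dot a/a)(\rho + p)$ shows that for p = −ρ the density ρ remains constant, so equation (1.4) with k = 0 becomes

$$\frac{\dot a}{a} = \sqrt{\frac{8\pi G}{3}\,\rho} \equiv H_{inf} = \mathrm{const} \quad\Longrightarrow\quad a(t) \propto e^{H_{inf}\, t}.$$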

1.4.4. First CMB Anisotropies Experiments. Immediately after the discovery of the CMB it was proposed that this highly isotropic signal should contain small anisotropies (today known to have an amplitude ΔT/T ∼ 10⁻⁵) due to density fluctuations at the last scattering epoch. These fluctuations were required to explain the observed inhomogeneity of the present Universe.

The first detection of CMB anisotropies was made by the COBE mission, a second generation experiment, in the early 1990s. The satellite carried three instruments (see The Cobe Homepage in the bibliography): FIRAS (a polarizing Michelson interferometer to measure the spectral distribution of the CMB in the 0.1 ÷ 10 mm wavelength range), DMR (an array of differential radiometers in the 31.5 ÷ 90 GHz range to detect possible CMB temperature anisotropies) and DIRBE (to search for evidence of the cosmic infrared background between 140 and 240 µm).

Along with the precise measurement of the CMB Planckian spectrum made by FIRAS — which established the CMB temperature to be 2.725 ± 0.001 K, see Mather et al. 1999 — COBE achieved a very important result: DMR found evidence of anisotropies on large angular scales (36.5 ± 5 µK at 7°, or ΔT/T ∼ 10⁻⁵; see Bennett et al. 1996; Fixsen et al. 1996). The anisotropies observed by DMR are shown in figure 1.4.

27It can be shown that this case is mathematically equivalent to the presence of Einstein's cosmological constant Λ in equation (1.1), see e.g. Carroll (2001). However, the presence of such a constant leads to results different from those predicted by Einstein, since here we are in the context of a time-dependent model.

Figure 1.4: Maps generated from the COBE/DMR data (from The Cobe Homepage). The raw samples are dominated by the 3 mK dipole (top); after its subtraction, the galactic emission dominates (middle). By carefully removing both, the 30 µK CMB anisotropies become evident (bottom).

The discovery of anisotropies in the CMB temperature opened a new, promising field of research within experimental cosmology (we shall discuss the physics of CMB anisotropies in more detail later). Several ground-based and balloon-borne experiments were therefore planned immediately after COBE, with the aim of improving on the 7°-resolution DMR measurements. A short list includes: MAXIMA (Hanany et al., 2000), a balloon-borne array of 16 cryogenic bolometers sensitive to multipoles 80 < l < 800 in the 150 ÷ 240 GHz range; DASI (Kovac et al., 2002), a ground-based interferometer sensitive to multipoles 100 < l < 900, based on High Electron Mobility Transistor (HEMT) amplifiers in the spectral window 26 ÷ 36 GHz; CBI (Mason et al., 2003), a ground-based array of 13 detectors based on HEMT amplifiers operating in the 26 ÷ 36 GHz band with an angular resolution of 5′ ÷ 1° (300 < l < 3000); Boomerang (de Bernardis et al., 1999), an array of bolometric detectors measuring the CMB signal on a relatively wide region of the sky (45° × 25°) at 90 ÷ 400 GHz with a resolution of ∼ 10′ (l ∼ 1000). In the next sections we shall see how the data collected by these experiments can be analyzed and matched with cosmological models, and how the current measurements help us constrain the possible Big Bang scenarios.

1.4.5. The Science of CMB Anisotropies. The properties of the CMB anisotropy field can be studied by expanding it into a linear combination of spherical harmonics:

$$\frac{\Delta T}{T}(\theta, \varphi) = \sum_{l,m} a_{lm}\, Y_{lm}(\theta, \varphi), \qquad (1.12)$$

with l ∼ π/θ being inversely proportional to the angular scale θ. The alm coefficients are called multipole moments; according to a wide class of models they must have zero mean (⟨alm⟩ = 0 if the average is taken over all observers in the Universe) and obey Gaussian28 statistics. Therefore all statistical properties of the temperature anisotropies can be described by the quantity

$$C_l \equiv \left\langle |a_{lm}|^2 \right\rangle, \qquad (1.13)$$

where the average is computed over the allowed values of m, i.e. −l, −l + 1, . . . , l − 1, l. Note that the independence of Cl from m implies the absence of a preferred direction in the sky, as a consequence of the cosmological principle (see section 1.3.1). The set of Cl is known as the angular power spectrum and is the key theoretical prediction for any given model.

Since the mean in equation (1.13) is computed over all m, there is a higher statistical variance for Cl when l is low (larger angular scales).

28No deviations from Gaussianity have been detected so far (December 2007) in CMB anisotropies. However, since it is a fundamental hypothesis of so many cosmological models, it is critical to verify it to great accuracy.

The Universe we can observe from the Solar System is only one realization of a stochastic process, and therefore this variance (called cosmic variance) puts unavoidable limits on the constraints we can place on theoretical models.
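The size of this limit is easy to simulate: each Cl is estimated from only the 2l + 1 available alm, so the relative scatter of the estimate shrinks as l grows. Below is a minimal sketch under the Gaussian-alm assumption stated above (the input spectrum value is arbitrary; the expected scatter √(2/(2l + 1)) is the standard cosmic-variance result, quoted here for comparison):

```python
import numpy as np

rng = np.random.default_rng(0)

def cl_relative_scatter(cl_true, l, n_sky=5000):
    """Simulate n_sky skies; estimate Cl by averaging |alm|^2 over m."""
    alm = rng.normal(scale=np.sqrt(cl_true), size=(n_sky, 2 * l + 1))
    cl_hat = (alm ** 2).mean(axis=1)
    return cl_hat.std() / cl_true

for l in (2, 10, 100, 1000):
    expected = np.sqrt(2.0 / (2 * l + 1))
    print(f"l = {l:4d}: simulated {cl_relative_scatter(1.0, l):.3f}, "
          f"expected sqrt(2/(2l+1)) = {expected:.3f}")
```

At l = 2 the scatter is over 60% of the true value, which is why the low-l multipoles constrain models so weakly.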

1.4.6. Origin of CMB Anisotropies. CMB anisotropies are said to be primary or secondary depending on whether they originated before or after the decoupling epoch (t = 3.8 × 10⁵ yr), respectively.

Primary Anisotropies. The production of primary CMB anisotropies can be explained by inflation. In its simplest models, the expansion is described by a scalar potential φ, whose intrinsic quantum fluctuations δφ are "frozen" during the exponential expansion and become classical perturbations described by General Relativity. These fluctuations produce anisotropies in the CMB by means of three distinct effects:

Gravitational perturbations: Photons coming from high-density regions undergo a relativistic redshift due to the greater gravitational mass; this phenomenon is called the "Sachs-Wolfe effect" (SW). These anisotropies have angular scales larger than the horizon at last scattering (θ ≳ 2°), and are responsible for the features of the CMB spectrum at l ≲ 90. If ΔΦ is the perturbation of the gravitational potential, the global effect of these anisotropies is

$$\frac{\Delta T}{T} = -\frac{\Delta\Phi}{3}. \qquad (1.14)$$

Since this effect leads to Cl ∼ 1/[l(l + 1)], by plotting l(l + 1)Cl it is possible to recognize the plateau at small l due to the SW effect and directly link it to the initial spectral index.

Acoustic oscillations: In high-density regions radiation is compressed by the higher pressure and produces oscillations. Since recombination is a nearly instantaneous process, modes of acoustic oscillation with different wavelengths are "frozen" at different phases of oscillation. The first peak in the CMB angular power spectrum (the so-called Doppler peak) is therefore due to a wave that had a density maximum just at the time of last scattering; the secondary peaks at higher l are higher harmonics of the principal oscillation and have oscillated more than once. The effect is directly proportional to the density fluctuation Δρ:

$$\frac{\Delta T}{T} \propto \frac{\Delta\rho}{\rho}. \qquad (1.15)$$

Doppler effects: The frequency of the photons can be modified by the Doppler effect if the plasma has a non-zero velocity at the last scattering epoch. The effect on the anisotropy is

$$\frac{\Delta T}{T} = \frac{\Delta v_r}{c}, \qquad (1.16)$$

where Δv_r is the speed of the plasma relative to the observer.

Figure 1.5: The main features of a CMB power spectrum (l(l + 1)Cl versus l): the Sachs-Wolfe plateau at low l, the Doppler peaks starting near l ≈ 200, and Silk damping beyond l ≈ 1000.

In addition to these three effects, the CMB power spectrum is expected to go to zero at high multipoles. This effect, called Silk damping, is caused by the fact that in the short but finite time taken for the Universe to recombine, photons can diffuse a certain distance, erasing by diffusion any anisotropy on scales smaller than this mean free path. This leads to a quasi-exponential decrease of the power spectrum at large l. According to current cosmological models, Silk damping should become quite effective at l ≳ 1000, corresponding to angular scales θ ≲ 10′. Little contribution from primary CMB anisotropies is therefore expected at smaller scales. Figure 1.5 shows the main features of a CMB power spectrum.

Secondary Anisotropies. Secondary anisotropies are due to scattering and other phenomena that take place along the path from the last scattering surface to the observer. Among these effects are gravitational lensing (which alters the direction of propagation of the CMB photons), the Sunyaev-Zel'dovich (SZ) effect (Compton scattering of the CMB photons off the non-relativistic electron gas within clusters of galaxies) and other gravitational effects due to the time variation of the gravitational potential between the last scattering surface and us.

The Dipole Anisotropy. The relative motion of our local frame with respect to the rest frame of the CMB leads to an anisotropy with ΔT ∼ 3 mK, called the dipole anisotropy (l = 1). It was the first CMB anisotropy to be detected, in 1977 (see Smoot et al., 1977), and its signal can be written in the following form:

$$T_{obs} = T_0\left[1 + \frac{v}{c}\cos\theta + \frac{1}{2}\left(\frac{v}{c}\right)^2\cos 2\theta + O\!\left((v/c)^3\right)\right], \qquad (1.17)$$

where θ is the angle between the line of sight and the direction of motion, and v is the velocity. The dynamic quadrupole (third term) is rather small, suppressed with respect to the dipole by a factor v/2c ∼ 10⁻³, and it lies well below the intrinsic cosmic quadrupole of the CMB.
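Plugging the numbers quoted above into equation (1.17) gives a useful back-of-the-envelope check (a sketch only, using the rough ∼3 mK dipole amplitude):

```python
# Invert the dipole term of eq. (1.17), T0*(v/c)*cos(theta), for the speed
# of the local frame, then evaluate the dynamic quadrupole amplitude.
c = 2.99e8            # m/s, speed of light
T0 = 2.725            # K, CMB monopole temperature
dT_dipole = 3.0e-3    # K, approximate dipole amplitude quoted in the text

v = c * dT_dipole / T0                # speed of the local frame
dT_quad = 0.5 * T0 * (v / c) ** 2     # dynamic quadrupole amplitude

print(f"v ~ {v / 1e3:.0f} km/s")
print(f"dynamic quadrupole ~ {dT_quad * 1e6:.1f} microkelvin")
```

The resulting speed of a few hundred km/s and a quadrupole at the microkelvin level are consistent with the statement that the dynamic quadrupole lies well below the dipole.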

1.4.7. Importance of the CMB for Physics. As we shall see in the next sections, the analysis of CMB anisotropies enables high-precision measurements of the cosmological parameters and, especially when coupled with other kinds of measurements, leads to a better understanding of phenomena that occurred in the early universe. In this section we shall review some topics where CMB studies can make an important contribution.

The Cosmological Inflation. One of the simplest parameters to extract from the CMB is Ω (eq. 1.6). It is related to the position of the first Doppler peak in the power spectrum (see figure 1.5) by the following relation (Carroll, 2001):

$$l_{peak} \approx 220\, \Omega^{-1/2}. \qquad (1.18)$$

The first confirmations of the existence of this peak came from the Boomerang, MAXIMA and DASI experiments (see section 1.4.4). (For instance, Boomerang found 0.65 ≤ Ω ≤ 1.45 at the 95% confidence level, see Melchiorri et al. 2000.) Further measurements (see e.g. Riess et al., 1998; Perlmutter et al., 1999) tightened this range around the value 1, meaning that the geometry of the universe is nearly flat.

An Ω = 1 parameter raises however theoretical problems, since a consequence of equation (1.4) is that any departure of Ω from 1 should grow wider and wider as time passes. The fact that today Ω is still close to 1 is therefore a puzzling fact.

Inflationary models are able to explain the flatness because in an exponential expansion Ω = ρ/ρcrit evolves towards 1 (in other words, ρcrit is an attractor for ρ). Mathematically, this can be modeled via a non-zero cosmological constant Λ (see equation 1.2) that contributes to Ω together with matter and provides the necessary energy for the expansion29: Ω = Ωm + ΩΛ.

The energy associated with Λ is called dark energy. Its existence was first proposed to explain another unexpected observation: the measurements on type Ia supernovae by Riess et al. (1998) and Perlmutter et al. (1999) prove that at present times the expansion of the universe is accelerating. This cannot be accounted for by a simple Friedmann model, but a nonzero Λ can model this phenomenon.

Improved measurements by Riess et al. (2004) have provided the estimate $\Omega_m = 0.29^{+0.05}_{-0.03}$ (under the hypothesis Ω = 1), implying that dark energy contributes substantially to the total energy of the universe (see figure 1.6). An additional striking fact is that the value of Ωm seems to be largely due to some non-baryonic dark matter, as ordinary matter accounts for only 4% of Ω. This implies that 96% of the whole universe is made of something that we are either not able to see (dark matter) or even to explain (dark energy)!

Figure 1.6: Best-fit confidence regions for Ωm and ΩΛ from the measurement of 42 high-redshift supernovae (Perlmutter et al., 1999). The results provide a strong indication of dark energy.

29Other explanations that do not use a cosmological constant have however been proposed; see Carroll (2001) for a review.

Determining the values of ΩΛ and Ωm is difficult with CMB measurements alone, but if they are coupled with supernova measurements or large-scale structure data it is possible to obtain tight estimates of the two parameters. We shall see an example of this in the discussion of the WMAP experiment.

Existence of Primordial Gravity Waves. An interesting consequence of the inflationary model (see section 1.4.6) is that it offers a way to test one of Einstein's predicted effects of General Relativity: gravitational waves. During the inflationary expansion, gravitational waves introduce a tensor contribution to the fluctuations that induces a component in the polarized BB modes of the CMB. (Note however that the primary source of polarization anisotropies is Thomson scattering between electrons and the quadrupole in the radiation field, see Hu and Dodelson 2002.) Therefore, the study of CMB polarization anisotropies offers a way to confirm one of the most fundamental predictions of General Relativity.

Estimation of the Cosmological Parameters. A large number of processes can contribute to the generation of the CMB anisotropies. Theoretical works have established precise correlations between the CMB power spectrum and the values of the cosmological parameters used in the Hot Big Bang Model. We have already discussed how the CMB can be used in determining the value of Ω (equation 1.18). The following is a short summary of the relations between some cosmological parameters and the features of the CMB power spectrum (a numerical sketch of how such dependencies can be explored follows the list):

Baryon density ΩB: a large value of ΩB increases the average height of the peaks, especially for the odd ones.

Hubble constant H0: small values of this parameter boost the peaks and slightly change their location in l-space.

Cosmological constant Λ: increasing this constant also leads to a change in the height of the peaks and in their position.

Reionization: if the intergalactic medium was re-ionized at z ≪ 1000, then the power spectrum for l > 100 would be exponentially suppressed.

Nature of dark matter: if the critical density is provided by a mixture of cold (ΩCDM > 0.7) and hot (Ων < 0.3) dark matter, as suggested by present observations, the angular power spectrum is expected to show systematic differences (at the level of ∼ 10%) compared with the ΩCDM = 1 case.
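Nowadays such dependencies are computed with Boltzmann codes; the sketch below uses the publicly available CAMB Python package (a modern tool, not one used in this thesis) to check the first effect in the list, the enhancement of the odd peaks with increasing baryon density. All parameter values are arbitrary fiducial choices.

```python
import camb

def tt_spectrum(ombh2):
    """Lensed TT power spectrum (in muK^2) for a flat LCDM model,
    varying only the physical baryon density ombh2."""
    pars = camb.set_params(H0=70.0, ombh2=ombh2, omch2=0.12,
                           tau=0.09, As=2.1e-9, ns=0.96, lmax=2000)
    results = camb.get_results(pars)
    spectra = results.get_cmb_power_spectra(pars, CMB_unit='muK')
    return spectra['total'][:, 0]      # column 0 is TT

low, high = tt_spectrum(0.020), tt_spectrum(0.026)
# A larger baryon density enhances the odd (compression) peaks
# relative to the even ones:
for l in (220, 540, 800):
    print(f"l = {l}: high/low ratio = {high[l] / low[l]:.3f}")
```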

Figure 1.7: Above: the temperature anisotropies in the CMB as seen by WMAP (23 GHz) after having removed the dipole. Below: the polarization anisotropies in the W-band at 94 GHz (images from the WMAP homepage, http: // map. gsfc. nasa. gov ).

The main problem in determining cosmological quantities from the CMB lies in the number of degeneracies that arise when performing a fit with so many parameters. To reduce this problem, two methods are employed: (1) cross-correlation of the CMB data with other kinds of measurements, and (2) improvements in the angular resolution, sensitivity and sky coverage of CMB experiments. We have already given some details about the first point when we discussed the study of type Ia supernovae (section 1.4.7). We shall discuss the second point further in the next section. Additional help in breaking parameter degeneracies can come from the CMB polarization, because the ability to measure these anisotropies offers the chance to triple the number of observed physical quantities, thus enhancing the constraints on cosmological parameters.

1.4.8. The WMAP Experiment. In the previous section we have seen the importance of CMB anisotropy measurements for cosmology. A number of ground and balloon-borne CMB experiments have been run, and in the 10 years after the COBE/DMR results the knowledge of the CMB anisotropies improved considerably. But the data were limited by two factors:

1. Ground and balloon-borne experiments are not able to cover the whole sky, but only patches. This means that the impact of cosmic variance (see section 1.4.5) limits our confidence in the power spectrum estimation.

Figure 1.8: Current status of the CMB power spectrum measurements, superimposed on the best-fit function (red line). The best data acquired so far come from the WMAP mission (black dots). From Hinshaw et al. (2007).

2. The only full-sky space mission, COBE, was limited to an angular resolution of 7◦, not enough even to detect the first peak in the power spectrum.

To overcome these limitations, in 2001 NASA launched a new CMB space mission, the Wilkinson Microwave Anisotropy Probe (WMAP) (Hinshaw et al., 2007; Page et al., 2007; Spergel et al., 2007). It has provided the first full-sky maps of the CMB after COBE, as well as the first full-sky polarization anisotropy maps (see figure 1.7). The satellite has an angular resolution of ∼ 0.3◦ (corresponding to l ∼ 1000), a sensitivity of ∼ 35 µK per 0.3◦ square pixel, and systematic artifacts limited to 5 µK per pixel. The observations are performed by means of two identical telescopes (pointing at two directions separated by ∼ 140◦) and an array of passively cooled differential radiometers in five frequency bands from 22 up to 90 GHz. The two telescopes make it possible to perform differential measurements, thus greatly reducing the impact of instrumental systematic effects on the final data. Figure 1.8 shows the WMAP measurement of the power spectrum, compared with some other CMB experiments. Note the existence of the second Doppler peak and a hint of the third one.

Figure 1.9: (From Huterer and Turner, 2001.) Forecast of constraints on the dark energy equation of state parameter w and on Ωm. The following experiments are shown: WMAP (indicated with MAP), SDSS (the Sloan Digital Sky Survey), SN (a survey of 54 type Ia supernovae), SNAP (a proposal for a survey of ∼ 2566 type Ia supernovae) and Planck (a forthcoming CMB experiment which I discuss in the next chapter). The plot assumes w = −1, Ωm = 0.28 as fiducial values of the parameters.

WMAP measurements greatly improved our knowledge of the CMB and have confirmed the validity of an inflationary model with spatial flatness (Ω ≈ 1) and scale-invariant adiabatic fluctuations in the primordial universe. Such a model, usually referred to as the ΛCDM model (see e.g. Bahcall et al., 1999), appears to be consistent with all the measurements made so far.

Despite the success of WMAP, however, some of the characteristics of the ΛCDM model are still puzzling. For instance, the nature of dark matter and dark energy is still unknown: although theoretical models exist, they fail to match the observed parameters quantitatively. One striking example is the value of the cosmological constant Λ, whose theoretical estimates (Λ ∼ 10^69 m^−2) are widely different from the measured value, which is ∼ 10^−52 m^−2.

To solve these problems, we need to gather additional information about the nature of the dark energy. One parameter that can help us is the w coefficient in equation (1.11): for instance, if w < −1 then a family of inflationary models can be ruled out. WMAP has not been able to determine the value of w with enough precision (the most likely value is about −1, but the range of allowed values depends on the model chosen for the analysis), as figure 1.9 shows.

It is rather ironic that despite the great achievements of the last 3,000 years we still miss the essence of 96% of the universe. Nevertheless, there is still room for advancement. Figure 1.9 shows that with improved data from supernovae experiments (the SNAP proposal30) and more sensitive CMB probes (Planck), the range of allowed values for w and Ωm can be greatly reduced, thus enabling us to better understand the nature of the dark energy. In the next chapter we shall discuss the Planck experiment, a third-generation mission for the measurement of CMB anisotropies which will push our knowledge a step further and will make it possible to answer many questions left open by WMAP.

30 See http://universe.nasa.gov/program/probes/snap.html.

CHAPTER 2

The Planck Mission and the LFI Instrument

In this chapter I will present Planck, an ESA space mission targeted at the measurement of temperature and polarization anisotropies in the Cosmic Microwave Background. Planck will make it possible to break many degeneracies that prevented WMAP from determining precise values of some important cosmological parameters (see section 1.4.8), thus enabling us to better understand the nature and origin of the Universe. The chapter is structured as follows:

• Section 2.1 explains how Planck fits into the study of CMB anisotropies, why such an experiment has been proposed and what its scientific aims are.

• Section 2.2 gives a general introduction to the Planck High Frequency Instrument (HFI), an array of high-frequency bolometric detectors cooled to 0.1 K.

• Section 2.3 presents the Planck Low Frequency Instrument (LFI), an array of radiometers cooled to 20 K and working in the 30÷70 GHz frequency range. The calibration and verification of LFI is the main topic of this thesis.

• Section 2.4 gives some background on the cooling system used by Planck to keep HFI and LFI at the required working temperatures. Such information will be used in chapter 3, where I shall present a study on the thermal properties of the Planck focal plane.

• Planck has undergone several testing and calibration campaigns, each performed at a different level of hardware integration. Section 2.5 discusses the testing approach used for the LFI instrument. In particular, it focuses on the LFI Radiometric Chain Assembly (RCA) and Radiometer Array Assembly (RAA) campaigns, where LFI has been tested at the radiometer and instrument level respectively, as they are the focus of this thesis.


Figure 2.1: Overview of the Planck satellite, showing the primary reflector, the three ther- mal shields (“V-grooves”) used to thermally decouple the cold (∼ 50 K) telescope enclosure from the warm (∼ 300 K) service module, and the “baffle”, a shield which prevents stray light from reaching the focal plane (where HFI and LFI are placed).


§ 2.1. Role of the Planck Project in the CMB Science

In the previous chapter we discussed the importance of CMB measurements for precision experimental cosmology. In particular, the contribution given by WMAP has been fundamental in establishing the validity of the ΛCDM model and in measuring full-sky maps of the polarization anisotropies.

In 2008 the European Space Agency (ESA) will launch a new space mission that will provide the most accurate information on the CMB temperature anisotropies, together with a measurement of the polarization anisotropies: the Planck Surveyor. Planck is a third generation mission which will provide full-sky maps with angular resolution (5′ ÷ 33′), spectral coverage (30 ÷ 857 GHz) and sensitivity (∆T/T ∼ 10^−6) better than WMAP. This will allow better imaging and power spectrum reconstruction up to l ∼ 2500 ÷ 3000 (see figure 2.2). With these technical features, Planck will be able to substantially improve on the WMAP results in the following fields:

Figure 2.2: Comparison of the performances of WMAP and Planck (see the Planck Blue Book at www.rssd.esa.int/SA/PLANCK/docs/Bluebook-ESA-SCI(2005)1_V2.pdf). Top: an expanded view of a 5◦ × 5◦ patch of sky at WMAP (94 GHz, 15′ FWHM) and Planck (217 GHz, 5′ FWHM) resolutions, with noise calculated for 2 and 8 years for WMAP and 1 year for Planck. Middle: the left panel shows a realization of the CMB power spectrum of the concordance ΛCDM model (red line) after 4 years of WMAP observations; the right panel shows the same realization observed with the sensitivity and angular resolution of Planck. Bottom: in the left panel, a forecast of the ±1σ errors on the temperature-polarization cross-correlation power spectrum $C_l^{TE}$ in a ΛCDM model from WMAP (4 years of observation) and Boomerang; in the right panel, the same measurement as it would be made by Planck. The inset shows the WMAP forecasts on large angular scales with a finer ∆l resolution. For Planck, flat band powers are estimated with ∆l = 20 in the main plot and with ∆l = 2 in the inset on large scales.

Instrument characteristic              LFI               HFI
Detector technology             HEMT arrays        Bolometer arrays
Center frequency [GHz]          30   44   70       100  143  217  353  545  857
Bandwidth (δν/ν)                0.2  0.2  0.2      0.33 0.33 0.33 0.33 0.33 0.33
Angular resolution (arcmin)     33   24   14       10   7.1  5.0  5.0  5.0  5.0
∆T/T per pixel (Stokes I)       2.0  2.7  4.7      2.5  2.2  4.8  14.7 147  6700
∆T/T per pixel (Stokes Q & U)   2.8  3.9  6.7      4.0  4.2  9.8  29.8 ...  ...

Table 2.1: “High level” summary of PLANCK Instrument Characteristics (Efstathiou et al., 2005). The goal (in µK/K) for the Stokes parameters is for 14 months integration, 1σ, for square pixels whose sides are given in the row “Angular Resolution”.

• Planck is expected to improve the estimates on the baryon and dark matter densities by one order of magnitude with respect to WMAP, thanks to its precision in measuring high multipoles (l > 1000);

• Planck will be able to measure the E-mode polarization spectrum up to multipoles l ∼ 1500. These measurements are critical, e.g., to discriminate among different models for the primordial fluctuations. Moreover, a possible detection of B-modes in the polarization spectrum could provide strong evidence in favor of the existence of primordial gravity waves.

• In our mathematical treatment of CMB anisotropies we have always considered the CMB fluctuations to be Gaussian (see section 1.4.5). However, possible non-Gaussianities in the WMAP data have been reported, and it is not clear whether such hints are of cosmological origin or caused by systematic effects. Either way, Planck will shed light on this matter, because it is a new full-sky experiment with different systematics and greater sensitivity than WMAP.

• The wider range of frequencies measured by Planck will allow a better removal of foregrounds (galaxy, dust, . . . ) than WMAP. Its superior imaging capabilities (see the top panel of figure 2.2) will also be useful for other (possibly non-CMB related) sciences.

Today (December 2007) the Planck satellite has been integrated and is currently undergoing tests at room temperature in the Thales/Alenia Space laboratories in Cannes for the forthcoming launch, scheduled for October 31st, 2008.

Planck will use a Lissajous orbit around L2 (a Sun-Earth Lagrange point), so that the Earth and the Sun will always be aligned in the same direction with respect to the satellite throughout the mission, which results in a high degree of thermal stability and low stray light levels. The scanning strategy is shown in figure 2.3: the spacecraft will spin around the Sun-Planck direction with a speed of ∼ 1 rpm and will be repointed by 2.5′ each hour; thus each scanning circle will be observed 60 times.
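A back-of-the-envelope check of these scanning figures (the arithmetic is mine; the numbers are those quoted above) can be useful:

```python
# Back-of-the-envelope check of the Planck scanning strategy.
spin_rate_rpm = 1.0        # spacecraft spin rate
repoint_arcmin = 2.5       # spin axis step, applied once per hour

scans_per_circle = spin_rate_rpm * 60.0      # one-hour dwell -> 60 scans
deg_per_day = repoint_arcmin * 24.0 / 60.0   # axis motion per day

print(f"{scans_per_circle:.0f} scans of each circle per pointing")
print(f"spin axis advances {deg_per_day:.1f} deg/day")
# ~1 deg/day keeps the axis anti-Sun (360 deg/year); since each scanning
# circle spans nearly a great circle, the whole sky is covered in about
# six months, and two complete surveys fit in the ~14-month mission.
```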


Figure 2.3: Sketch of the Planck scanning strategy (not to scale). Planck will observe from a Lissajous orbit around the L2 point of the Sun-Earth system, and will scan the sky by spinning around the Sun-Earth-Planck direction with an angle of ∼ 85◦. The sky temperature in each pixel will be measured 60 times for each scanning circle in about one hour; after this period, the spinning axis will be tilted by ∼ 2.5′ and aligned to a new position.

The satellite is expected to perform at least two full scans of the sky, which means that the mission will last about 14 months. The high data redundancy will allow Planck to reach the extremely tight requirements on systematic errors in the final maps (< 3 µK). The measurements will be made by two different instruments:

1. The HFI is an array of 52 bolometers cooled to 0.1 K, detecting radiation in the 100–857 GHz range.

2. The LFI is an array of 22 pseudo-correlation differential receivers based on HEMT technology cooled to ∼ 20 K for detecting radiation in the 30–70 GHz range.

Table 2.1 shows an overview of the characteristics of the two instruments. Both the HFI and the LFI are placed on the focal plane of an off-axis shaped aplanatic telescope with a primary of physical size 1.9 × 1.5 m. (See figures 2.1 and 2.4.)

§ 2.2. The High Frequency Instrument (HFI)

The High Frequency Instrument (HFI) is an array of 52 receivers in 6 frequency bands (in the 100–857 GHz range) based on bolometric technology and cooled to 0.1 K. Each HFI detector absorbs the radiation coming from the telescope in a grid whose impedance is matched with that of vacuum; the absorbed power increases the temperature of a solid-state thermometer. The angular resolution of the instrument will be 5′ ÷ 10′, and the ∆T/T sensitivity will be of the order of a few µK/K at the lowest frequencies and a few mK/K at the highest ones (Lamarre et al., 2003). Two kinds of bolometers are implemented in HFI:

Figure 2.4: A detailed view of the Planck focal plane. The 52 HFI horns are located at the center, while the 11 LFI horns are in the outer circles.

1. Twenty "spider-web" bolometers in the 143–857 GHz range, absorbing radiation via a spider-web-like antenna;

2. Thirty-two polarization-sensitive bolometers in the 100–353 GHz range, absorbing radiation via a pair of perpendicular linear grids. (Each grid absorbs only one linear polarization.) This kind of bolometer makes it possible to measure polarized radiation.

The high sensitivity of HFI requires the bolometers to operate at 0.1 K. This is achieved by three coolers: a hydrogen sorption cooler (providing a stage at 18 K), a Joule-Thomson mechanical refrigerator (precooled by the 18 K cooler and providing 4 K) and an open-loop 3He/4He dilution refrigerator, which provides 0.1 K. (A full description of the cooling system is in section 2.4.) The first two stages of this system are used by the LFI instrument as well, as we shall see in the next section.

Figure 2.5: The LFI instrument during its integration in the Thales/Alenia Space lab- oratories (Milan, May 2006). In this photo the waveguides connecting the Front End Unit (FEU) with the Back End Unit (BEU) are clearly visible. The three lines intersecting the waveguides show where the three V-grooves will be connected.

§ 2.3. The Low Frequency Instrument (LFI)

2.3.1. Overview of the Instrument. The LFI represents the third generation of millimeter-wave radiometers designed for space observation of CMB anisotropies, following COBE/DMR and WMAP. It is an array of 22 differential coherent radiometers based on InP HEMT amplifiers operating at ∼ 20 K and centered at three frequencies in the 30–70 GHz range. Each radiometer is separated into two components: the front-end is kept at 20 K inside the Front End Unit (FEU) and is connected by waveguides to the warm BEU at 300 K (see figure 2.5). This design allows low noise temperatures in the first amplification stages of the radiometer. The thermal transition between the FEU and the BEU is provided by low-thermal-conductivity composite waveguides, with heat sinks at each of the three conical thermal shields (the V-grooves), as shown in figure 2.1.

The radiometer design uses a pseudo-correlation scheme in order to reduce non-white noise generated in the radiometers themselves: each of them measures the difference in temperature between the sky and a stable cryogenic reference load, which is cooled to ∼ 4 K by means of a thermal contact with the HFI external shield.

Figure 2.6: Detailed outline of an LFI radiometer front-end (top) and back-end (bottom). The front-end output signal is transmitted through actively-cooled waveguides (see figure 2.5) and becomes the input signal for the back-end part of the radiometer. For the full explanation, see the text.

Because of the high sensitivity of the radiometers, careful control of systematic errors (which must be kept at a level of ≤ 3 µK) is required. These originate mainly from 1/f fluctuations in amplifier gain and noise temperature, electrical effects, fluctuations in the reference signal, stray light, main beam imperfections and pointing errors. The LFI design performs well in reducing the 1/f noise of the amplifiers: the knee frequency of the radiometric output measured during the LFI Flight Model (FM) tests ranges from 10 to 30 mHz (Mennella et al., 2006a), against a requirement of 50 mHz (Bersanelli et al., 2002).

2.3.2. Structure of an LFI Radiometer. The structure of an LFI radiometer is shown in figure 2.6. In the front-end part, the radiation entering the feed horn is separated by an OrthoMode Transducer (OMT) into two perpendicular linearly polarized components that propagate independently through two identical radiometers (called main arm and side arm). In each radiometer the sky signal is coupled to a stable reference load at 4 K by a 180◦ hybrid and then amplified by low-noise High Electron Mobility Transistor (HEMT) amplifiers. One of the two signals then runs through a switch that applies a phase shift oscillating between 0 and π at a frequency of 4096 Hz. A second phase switch is present for symmetry and redundancy on the second radiometer leg; this switch introduces no phase shift in the propagating signal. The signals are then recombined by a second 180◦ hybrid coupler, so that the output is a sequence of signals alternating at twice the phase switch frequency.

In the back-end of each radiometer (see the bottom part of figure 2.6) the signals are further amplified, filtered by a low-pass filter and then detected. After detection, the sky and reference load signals enter the Data Acquisition Electronics (DAE), where they are integrated and digitized. The digital data are then sent to the Signal Processing Unit (SPU), which is part of the Radiometer Electronics Box Assembly (REBA): this component downsamples and compresses the data (we shall discuss the REBA in chapter 4) and puts them into packets that are sent to Earth, where they are received by the Mission Operation Center (MOC), located in Darmstadt, Germany. Here the data are checked for integrity, decompressed and sent to the instrument Data Processing Centre (DPC), located in Trieste, Italy, where they are saved in an Oracle database and made available to the Planck scientific team.

With this architecture each radiometer produces two independent streams of sky-load differences; the final measurement is obtained by a further average of these differenced data samples between the two radiometer legs.

The LFI pseudo-correlation design offers two main advantages: the first is that the radiometer sensitivity does not depend (at first order) on the level of the reference signal (see Seiffert et al., 2002); the second is provided by the fast switching, which reduces the impact of 1/f fluctuations of the back-end amplifiers. In fact, if the gain modulation parameter is correctly set, the dominant source of 1/f noise in the radiometer output is the fluctuation of the amplifier noise temperature, with a knee frequency of ∼ 50 ÷ 100 mHz. Imbalances between the two legs are not relevant at first order.
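The effect of the gain modulation parameter can be illustrated with a toy simulation (my own sketch, not the actual LFI processing): a sky and a reference stream sharing the same multiplicative gain drift are differenced with r set to the ratio of their average levels, which cancels the drift at first order.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
t_sky, t_ref = 2.7, 4.0              # sky and reference temperatures [K]

# Slow multiplicative gain drift, common to both streams (a stand-in for
# 1/f gain fluctuations), plus independent white noise on each stream.
drift = 1.0 + 0.01 * np.sin(np.linspace(0.0, 6.0, n))
v_sky = drift * t_sky + rng.normal(0.0, 0.005, n)
v_ref = drift * t_ref + rng.normal(0.0, 0.005, n)

r = v_sky.mean() / v_ref.mean()      # gain modulation parameter
diff = v_sky - r * v_ref             # differenced radiometer output

print(f"rms of raw sky stream (drift dominated): {v_sky.std():.4f}")
print(f"rms of differenced stream:               {diff.std():.4f}")
```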

In the Planck project there is a standard naming convention for the LFI radiometers, which I am going to follow in this thesis. A radiometer is referred to by the number of its feed horn (18–28), followed by 0 or 1 depending on the radiometer's polarization arm (main or side), and then by another 0 or 1 indicating the radiometer's detector. For instance, the first radiometric detector (0) of the side arm (1) in feed horn #27 (30 GHz) is #2710.
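A small helper function (hypothetical, not part of any LFI software described in this thesis) makes the convention explicit:

```python
def parse_detector_id(det_id: str) -> dict:
    """Split an LFI detector name such as '2710' into its components."""
    horn = int(det_id[:2])           # feed horn number, 18-28
    arm = int(det_id[2])             # 0 = main arm, 1 = side arm
    detector = int(det_id[3])        # detector within the arm
    assert 18 <= horn <= 28, "LFI feed horns are numbered 18-28"
    return {"horn": horn,
            "arm": "side" if arm else "main",
            "detector": detector}

print(parse_detector_id("2710"))
# {'horn': 27, 'arm': 'side', 'detector': 0}
```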

Figure 2.7: Schematics of the Planck cooling system.

§ 2.4. The Planck Cooling System

The description of the two Planck instruments has already given some hints about the criticality of the thermal environment. The HFI bolometers need a working temperature of 0.1 K in order to be operative, while the LFI radiometers can achieve the target sensitivity only if the focal plane is cooled to ∼ 20 K (compare this with the 90 K of the WMAP focal plane).

Planck implements both active and passive cooling techniques to meet its thermal requirements. Passive cooling is a common technique in space missions (WMAP implements a passive cooling system). Planck uses three thermal radiators (called "V-grooves"; refer to figure 2.1) that decouple the cold focal plane and the telescope from the warm service module. These radiators also provide three thermal stages for the active cooling system, at 140 K, 80 K and 60 K respectively. Active cooling is implemented by three coolers: a sorption cooler, a Joule-Thomson refrigerator and an open-loop 3He/4He cooler.

Figure 2.8: The Planck sorption cooler during the Planck integration in Alcatel, 2007 (photo courtesy of IASF Bologna).

2.4.1. The Sorption Cooler. Several cryogenic stages will be present on board the Planck satellite, provided by a chain of three dedicated cryo-coolers. A key role in this chain is played by the Planck Sorption Cooler (SC), a vibrationless hydrogen cooler in which hydrogen is pumped by inducing pressure changes through a chemical sorption process. This cooler will provide ∼ 1 W of cooling power at 20 K to cool the LFI radiometers and pre-cool the HFI 4 K helium cooler.

The SC compressor assembly is mounted in the warm Service Module (SVM) (in figure 2.1 it is the bottom part of the satellite, under the three V-grooves) and is composed of six cylinders containing a hydride material able to absorb and release hydrogen depending on its temperature. Hydrogen is compressed in each of the six beds (see figure 2.8), which are connected to the high and low pressure sides of the system through check valves (see figure 2.9), and cools down to 20 K by means of a J-T expander after three pre-cooling stages at 140 K, 80 K and 50 K (the V-grooves shown in figure 2.7). After the J-T expander there are three heat exchangers, indicated with LVHX1, LVHX2 and LVHX3. LVHX2 is directly connected to the LFI structure, while LVHX1 provides a pre-cooling stage for the HFI 4 K cooler; the temperature at this point must therefore be very stable (the peak-to-peak temperature fluctuation amplitude is required to be < 100 mK).

The cooler compressors are periodically cycled between heating and cooling phases, so that the whole assembly produces a stationary flow of high pressure gas. In such a system there is a basic clock period over which each step of the process is conducted: for each compressor the duration of each phase is 667 s, so that the six compressor elements are cycled successively through the steps of the process, with one complete cycle taking as baseline 667 s × 6 = 4002 s.
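These cycle times set the frequencies at which the sorption cooler injects thermal fluctuations into the focal plane; translating them into the frequency domain (a short computation of mine, based on the periods quoted above) gives the spectral lines one expects in the temperature data:

```python
# Fundamental frequencies of the sorption cooler fluctuations, from the
# bed and full-cycle periods quoted above.
single_bed_period = 667.0                    # s, one compressor phase
full_cycle_period = 6 * single_bed_period    # s, all six beds: 4002 s

print(f"single-bed line: {1e3 / single_bed_period:.2f} mHz")
print(f"full-cycle line: {1e3 / full_cycle_period:.3f} mHz")
# ~1.50 mHz and ~0.250 mHz: these are the lines to look for in the focal
# plane temperature sensors during the thermal tests of chapter 3.
```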

2.4.2. The Joule-Thomson Cooler. In the Planck cryogenic chain the 4 K cooler has two purposes: (1) to cool the HFI focal plane unit to 4 K, and (2) to provide a pre-cooling stage for the HFI 100 mK dilution cooler, used to cool the HFI bolometers (see Bradshaw, 1999). This cooler plays a fundamental role for LFI as well, since the LFI reference loads are mounted on the HFI shield and will be kept at the required temperature by means of a thermal contact with the shield itself. The 4 K cooler system is composed of a mechanical compressor, which provides a high-pressure stream of helium (∼ 10 bar), and a Joule-Thomson expander, which forces the high-pressure gas to cool and condense by passing through a throttle.

2.4.3. The Dilution Cooler. The dilution cooler is used by HFI to keep the bolometers at a temperature of 0.1 K. Since it uses a new dilution principle based on friction, the dilution cooler does not need gravity to operate. It can provide a cooling power of 100 nW with 12 µmol/s of gas. The amount of gas carried by the satellite will suffice for about 30 months (Efstathiou et al., 2005). A prototype of this cooler has already been used in both ground-based and balloon-borne experiments (Benoît et al., 2002).

§ 2.5. The Planck/LFI Ground Test Campaigns

2.5.1. Overview of the Test Stages. A crucial point in the development of the Planck instruments is their calibration, i.e. the measurement (or adjustment) of the set of parameters that allow LFI and HFI to perform the measurements and the scientific team to do the data analysis. Ideally, the most reliable calibration would be performed in flight, right before the real measurement. From a practical point of view, however, this approach is unfeasible, since several tests cannot be done during flight1. In fact, the calibration of the instruments started when Planck was still far from being fully assembled. Tests on the Planck/LFI instrument have been done at each integration stage (see fig. 2.10):

1 A good example is the estimation of the bandwidth of the LFI radiometers, which requires a sweeping source (i.e. a quasi-monochromatic microwave source whose frequency can be adjusted by the user) to be used as input instead of the real sky.

Figure 2.9: Schematic of the sorption cooler system used on the Planck spacecraft. For details see the text.

Figure 2.10: The Planck/LFI test stages. For each stage, a set of tests has been planned by the Planck/LFI scientific team. See the text for more information.

1. Single “unit” (e.g. radiometer front-end or back-end modules, feed horns, OMTs, waveguides, DAE, REBA, etc.);

2. Single radiometric chain, the so-called Radiometric Chain Assembly (RCA): a pair of fully functional radiometers connected to the same feed horn by means of an OMT.

3. The instrument with all the RCAs integrated, the Radiometer Array Assembly (RAA), as seen in figure 2.5. This is the fully functional instrument that is to be integrated into the satellite.

4. The Radio Frequency Qualification Model (RFQM), a prototype of the satellite which only implements the parts required for optical charac- terization: the telescope, the baffle, the focal plane with the feeds and the third V-groove.

5. The full satellite, with both LFI and HFI integrated. In this case, the calibration will be performed before launch (ground tests) and in flight, during the so-called Calibration and Performance Verification (CPV) phase.

Planck/LFI was calibrated at different stages of integration. The criterion used in establishing the test schedule was to push every test to the latest possible stage of the integration chain, since this allows greater accuracy. In a number of cases, however, the same test has been done more than once, starting from earlier stages. (For example, the biases for the radiometric active components have been tested both during the RCA tests and during the RAA tests.) This has ensured correct performance before higher-level calibration.

For a complex instrument like LFI, the scientific team has established an extensive and detailed calibration test plan. The tests have been grouped into five different areas (Bersanelli, 2003):

Optical calibration: these tests aim to characterize the beam pattern, polarization isolation, side lobe level and other optical characteristics of the instruments and of the Planck telescope.

Radio frequency calibration: these tests measure the value of many radiometric parameters, like the noise temperature and the white noise level. This group also includes tests to optimize the biases for the active components of the radiometers.

Thermal model calibration: the temperature sensors placed on the satellite are calibrated within this group of tests. The thermal models of the instrument are also verified and calibrated here. A thorough discussion of the latter tests will be provided in chapter 3.

Photometric calibration: these tests establish the conversion between the voltage output of each radiometer and the input antenna temperature. The time dependence of the conversion is explored here as well.

Attitude calibration: these tests verify the ability of the optical system to point towards a specific direction. The majority of the tests within this group will be performed in flight, with some exceptions (see table 2.2).

The complete LFI test procedure is considerably complex, since it is composed of several dozen tests. A complete description of the calibration plan is beyond the scope of this thesis. However, in chapters 3 and 4 I shall discuss in detail two crucial tests that were performed under my supervision.

Performing the calibration procedures on the flight instrument without having tested them first would have been potentially dangerous; therefore, two versions of the Planck LFI have been developed:

1. The LFI Qualification Model (QM) is a full-scale prototype of the instrument. Its RAA is composed of 4 fully functional feed horns and 7 dummies. Figure 2.11 shows the feed horn and the front-end of the QM RCA #24 (44 GHz).

2. The LFI Flight Model (FM) is the instrument that will fly on the Planck satellite. Its RAA is therefore composed of 11 fully functional feed horns.

All the tests were performed on the QM instrument and then repeated on the FM. This made it possible to check the correctness of the test procedures without endangering the flight hardware. The QM instrument is also a good back-up solution: if some failure happens in the FM instrument before launch, it is still possible to replace the damaged parts with those taken from the QM. Even after launch the QM instrument can still be useful, e.g. to reproduce anomalies detected in the instrument during flight in order to investigate their origin.

2.5.2. The Radiometric Chain Assembly (RCA) Tests. In 2004 the Planck LFI scientific team, in cooperation with the industrial contractors, started the RCA tests on the 8 QM radiometers. The FM RCA tests followed the next year; this time all the 22 flight radiometers were verified and calibrated. The tests were held in the Laben laboratories in Milan (for the 30 GHz and 44 GHz chains in the QM tests and for all the chains in the FM) and in the Elektrobit laboratories in Finland (for the 70 GHz QM chains), and involved one pair of radiometric chains at a time, each measuring one polarized component of the signal coming from the same feed horn.

Figure 2.11: Qualification Model (QM) of the RCA #24 (44 GHz). From left to right, the feed horn, the OMT, the radiometer front-end and the waveguides are clearly visible. The photo was shot by myself in the Laben laboratories in Milan during the setup of the cryofacility for the QM RCA test campaign (February 2005).

Figure 2.12: The RCA facility during the tests on the #24 RCA QM. The front end, waveguides and back end were covered by MLI. Note the horn pointed towards a cylinder simulating the sky. This sky load is internally covered with Eccosorb, a highly absorbing material whose thermal emission is similar to that of a blackbody (photo shot by myself).

Table 2.2: Summary of calibration tests, listing the five calibration areas (Optical, RF, Thermal, Photometric, Attitude) against the test stages (RCA, RAA, RF-QM, Satellite, Flight). An "X" means that the tests will be performed at the specified stage; a "/" means that only a subset of the tests will be performed.

Figure 2.13: The cryofacility used for the RAA FM tests in Vimodrone (Milan). On the left, the LFI FEU up to the first V-groove is covered by an MLI blanket kept at 60 K that provides a thermal radiation shield. On the right, the bell has been lowered over the cryofacility, and the vacuum pumps are ready to start (photo courtesy of the Thales/Alenia Space laboratories).

Each RCA was kept inside a cryofacility (see figure 2.12) in order to cool the Front End Modules (FEMs) to a few tens of kelvin (the exact temperature depending on the test) and to keep the instrument protected from external radiation. A number of Lakeshore thermometers were placed on the instrument and in the cryofacility in order to monitor the thermal stability of the detectors and of their environment. All the detector outputs and the thermometer data were sent through a network connection to the computer commanding the facility, and then to a second computer, where they were recorded into FITS files by a dedicated software tool, the Radiometric Channel EvaLuator (RaChEl) (Maris et al., 2004a). The test operators could record custom log messages at any time during the test; these messages were then stored in the same FITS files for later reference. The information available in each RCA test was therefore the following:

1. Full 4 kHz output of the four detectors (calibrated in volts by RaChEl).

2. Temperature data for all the sensors in the cryochamber, sampled every second and calibrated in kelvin.

3. Currents and biases for the active components of the radiometers.

4. Custom log messages written by the test operators.

The set of data regarding temperatures, currents and biases, as well as any other non-radiometric data received from the instrument, is generically called housekeeping information.

Figure 2.14: Sketch of the placement of LFI inside the RAA cryofacility (compare this with figure 2.5). The feed horns are pointed downwards, facing a sky load made of Eccosorb, much like in the RCA tests.

2.5.3. The Radiometer Array Assembly (RAA) Tests. The Radiometer Array Assembly (RAA) tests were in many aspects different from the RCA ones, because they involved the calibration and verification of more than one radiometer at a time (there were 8 radiometers in the QM tests and all the 22 flight radiometers in the FM tests). This required a different acquisition system and a bigger, more complex cryofacility (see figure 2.13). The QM tests were held in Summer 2005 (see Mennella et al., 2006b), while the FM tests were held in Summer 2006 (Mennella et al., 2006a).

The cryofacility used in the tests (figures 2.13 and 2.14) has been designed to allow the verification of the thermal model and of the radiometric and electrical characteristics of the instrument (Radaelli, 2003). It has provided a cryogenic environment where most of the RAA tests have been performed. To simulate the presence of HFI, a dummy thermal mass cooled to temperatures as low as 10 K has been placed inside the cryochamber, in the same position as the HFI instrument (see e.g. figure 2.4). In order not to bend the waveguides under the weight of LFI, a mechanical structure has been implemented to sustain the instrument inside the cryochamber. The vacuum inside the chamber was about 1.3 × 10^−5 mbar, and the cleanliness class was 100,000.

The acquisition system was modeled to be as similar as possible to the one to be used during flight. The REBA was therefore included in the instrument, although instead of sending data through an antenna it used a socket to communicate with a computer running SCOS2000, the software to be used during flight operations to communicate with the satellite. SCOS allows both to receive data from the instrument and to send commands to it, e.g. to change biases and to turn active components on and off. The data received by SCOS are sent to a machine running an additional software tool, TQL/TMH, which saves the data in a set of Time Ordered Information (TOI) FITS files.

The data acquired by the acquisition system are then accessed for data analysis after the completion of the test. In order to provide a tool for calibrating and validating the instrument, the scientific team created an analysis software package called LIFE. I was deeply involved in the testing activities, and in the next chapters I shall show the usage of LIFE (chapters 3 and 4) as well as its development (chapter 5).

CHAPTER 3

Dynamic Thermal Analysis of the LFI Focal Plane

For Planck to achieve its ambitious scientific goals, both HFI and LFI must be able to perform high sensitivity measurements (∆T/T ∼ 10^−6). The high sensitivity of Planck/LFI relies on a carefully designed thermal system (see section 2.4), which has been the subject of a number of tests during the RAA campaigns. These tests had two purposes: (1) to verify that the absolute working temperatures of the components meet the requirements (high temperatures cause high noise in the output of the radiometers), and (2) to verify that thermal fluctuations are damped at least as well as predicted by the thermal models used in the design stage of LFI (fluctuations can induce artifacts in the final maps).

In this chapter I am going to discuss the analysis of the focal plane stability performed during the LFI RAA tests. In order to perform the analysis, a team of people under my supervision implemented a software module for LIFE, which employs a number of algorithms described in this chapter as well. I shall discuss LIFE in chapter 5.

This chapter reports the results of the thermal analysis according to the following structure:

• Section 3.1 explains the importance of a thermal characterization of the focal plane by discussing which kinds of systematic errors can be triggered by thermal fluctuations propagating through the main frame.

• Section 3.2 recalls some general concepts and formulas used in thermal transfer analysis, such as the heat conduction equation, that will be used widely throughout this chapter.

• Section 3.3 discusses how thermal analysis can be done numerically using dedicated software packages, and what the uses of such a study are in the context of Planck/LFI.


Figure 3.1: The focal plane during the integration of LFI in the Thales/Alenia Space labo- ratories (Milan, May 2006).

• Section 3.4 focuses on the comparison of numerical thermal analysis results with the measured behavior of the instrument. Three analysis methods are presented and discussed.

• Sections 3.5 and 3.6 discuss the results of the thermal tests done on the focal plane of the LFI QM/FM instruments.

§ 3.1. Need for a Thermal Characterization of the Focal Plane

Any CMB anisotropy experiment requires a high degree of thermal stability. Since thermal anisotropies in the CMB have a maximum amplitude of ∼ 10^−5 K, even spurious thermal fluctuations of the same amplitude propagating through the instrument might in principle induce a systematic effect that would mix with the astrophysical signal. A careful thermal analysis is therefore needed in order (1) to verify whether there are potential non-negligible sources of thermal noise that could affect the measurement, and (2) to estimate the impact of unexpected or unavoidable temperature oscillations on the performance.

In Planck/LFI the two most important sources of thermal fluctuations are the sorption cooler and the 4 K helium cooler (see section 2.4). The sorption cooler cools the LFI focal plane to ∼ 20 K and, being an active cooler, can induce fluctuations with a period equal to the length of the duty cycle of its compressors, as discussed in section 2.4.1 (the maximum allowed peak-to-peak amplitude at LVHX2 is about 300 mK). Fluctuations in the 4 K helium cooler propagate through the HFI external shield, where the LFI reference loads are mounted, and are detected as fluctuations in the sky signal (this is because of the way the LFI radiometers work; see section 2.3.2).

The main purpose of the LFI RAA thermal tests was to investigate the stability of the focal plane, the study of the reference load thermal stability being postponed to the integrated satellite tests.

Source        High-freq.   Periodic [µK]   Spin-synch. [µK]
Focal plane   1.5%         0.9             0.45
Waveguides    1.5%         0.4             0.4
BEU           1.5%         0.4             0.4
DAE           1.5%         0.4             0.4
Total         3.0%         1.1             0.8

Table 3.1: Total error budget allocated to thermal fluctuations in the LFI instrument. The first column contains the maximum error for high-frequency fluctuations (i.e. > 1/60 Hz) in terms of the radiometer sensitivity over 1 s (which, depending on the radiometer frequency, is in the 175–275 µK√s range). The second and third columns contain the error budget for generic periodic fluctuations and for spin-synchronous variations.

Thermal instabilities in the focal plane can affect the performance of the radiometers at many levels, the most critical components being the feed horns, the OMTs and the radiometer front-ends. These instabilities originate at LVHX2, the sorption cooler cold end connected to the focal plane.

A preliminary thermal analysis had already been done during the design of LFI, when a detailed thermal model of the focal plane was developed. (The focal plane model is part of a larger model that represents the whole instrument.) It was developed by Thales/Alenia Space using numerical thermal analysis software, ESATAN/ESARAD. One of the purposes of this model was to help in fixing the requirements on the thermal stability of the components of LFI; within this task, the numerical model has played a fundamental role. In the specific case of the focal plane, requirements have been given both for generic periodic signals and for spin-synchronous1 fluctuations (Mennella et al., 2002; Bersanelli et al., 2002), according to the following pipeline:

1. A requirement on the maximum allowed error per pixel in the maps has been established (1.1 µK, 0.8 µK spin-synchronous);

2. An error budget for each part of LFI (i.e. front-end, BEU, waveguides, DAE) has been derived from the aforementioned errors (see table 3.1);

3. Within the error budget for the front end (0.9 µK, 0.45 µK s.-s.), a requirement on the stability of the LVHX2 cold end is established by using the numerical thermal model of the focal plane (41 µK√s, 4 µK s.-s.).

This pipeline clearly shows the importance of having a reliable thermal model to assess the LFI thermal requirements.
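The totals in table 3.1 are consistent with a quadrature sum of the individual contributions, as the quick check below shows (the quadrature assumption is mine; the budget values are those of the table):

```python
import math

# Error budget contributions from table 3.1, for the periodic and
# spin-synchronous columns (µK); high-frequency terms are percentages.
periodic = [0.9, 0.4, 0.4, 0.4]
spin_sync = [0.45, 0.4, 0.4, 0.4]
high_freq = [1.5, 1.5, 1.5, 1.5]   # percent of radiometer sensitivity

def quadrature(terms):
    return math.sqrt(sum(x * x for x in terms))

print(f"periodic total:  {quadrature(periodic):.1f} µK")    # ~1.1
print(f"spin-sync total: {quadrature(spin_sync):.1f} µK")   # ~0.8
print(f"high-freq total: {quadrature(high_freq):.1f} %")    # 3.0
```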

1 These are the fluctuations whose period is equal to the spin period of the satellite, i.e. 60 s. The impact of such fluctuations is not reduced by the redundancy of scanning the same sky circle 60 times, and therefore the requirements on such fluctuations are always tighter.

After the LFI FM instrument had been assembled, we performed a verification of the ESATAN thermal model of the focal plane. The purpose was to certify the reliability of the procedure that assessed the stability requirements of the focal plane. We induced thermal fluctuations at LVHX2, measured the induced fluctuations on the thermometers placed on the focal plane and then compared the results with the estimates of the thermal model. This validation has allowed us to verify the accuracy of the model and therefore the correctness of the LFI focal plane stability requirements.

This chapter discusses my analysis of the focal plane thermal tests and my validation of the thermal model. LIFE, the software used to carry out the analysis, will be discussed in chapter 5.

§ 3.2. General Concepts about Thermal Transfer

Before discussing the thermal models and the RAA thermal tests, in this section I introduce some key concepts about thermal transfer that will be used throughout the whole chapter.

3.2.1. Derivation of the General Equation. We are going to use thermodynamics and the Fourier equation to deduce the heat equation. This equation describes how the temperature in a solid body changes in time due to thermal conduction. We shall not discuss thermal radiation in detail, as the effects we are going to study on the LFI focal plane are mainly due to conduction.

The heat equation expresses the change of the temperature T in terms of three physical constants: mass density, heat capacity and thermal conductivity. The solid body under study is assumed not to change its size under temperature variations (i.e. we consider thermal dilation negligible). In this case, the First Principle of Thermodynamics becomes

\[ dU = dQ, \]
where U is the internal energy and dQ the exchanged heat. In a small volume dV, the variation of the internal energy is

\[ dU = Mc\,dT = \rho c\,dV\,dT, \]
where M is the body mass and c the specific heat ($[c] = \mathrm{J\,g^{-1}\,K^{-1}}$). The exchanged heat is given by the sum of two terms:

\[ dQ = dQ_{\mathrm{int}} + dQ_{\mathrm{exch}}. \]

The first term accounts for heat produced internally by the body (e.g. because of Joule heating in an electrical device). The second one accounts for any heat flowing in or out of the volume dV. Thus, the first principle becomes

\[ \rho c\,dV\,dT = dQ_{\mathrm{int}} + dQ_{\mathrm{exch}}. \tag{3.1} \]

We can rewrite $dQ_{\mathrm{int}}$ as
\[ dQ_{\mathrm{int}} = \dot{q}_g\,dV\,d\tau, \tag{3.2} \]
where dτ is a small time interval and $\dot{q}_g$ is the heat power density generated by the body ($[\dot{q}_g] = \mathrm{W\,cm^{-3}}$). The exchanged heat can be written as a surface integral:
\[ dQ_{\mathrm{exch}} = -d\tau \int_{dS} \mathbf{j}_q \cdot \mathbf{n}\;dS, \]

where $\mathbf{j}_q$ is the heat flux ($[\mathbf{j}_q] = \mathrm{J\,cm^{-2}\,s^{-1}}$), considered to be positive if it enters the surface dS and negative otherwise, and $\mathbf{n}$ is the surface normal (pointing outwards). We use the Gauss theorem together with the Fourier equation
\[ \mathbf{j}_q = -k \nabla T, \tag{3.3} \]
which relates the temperature gradient to the heat flux through the thermal conductivity k ($[k] = \mathrm{J\,s^{-1}\,m^{-1}\,K^{-1}}$), so we can rewrite $dQ_{\mathrm{exch}}$ as

\[ dQ_{\mathrm{exch}} = \nabla \cdot (k \nabla T)\,dV\,d\tau. \]

If the medium can be considered homogeneous, then the thermal conductivity is independent of the position x and we can write:

\[ dQ_{\mathrm{exch}} = k \nabla^2 T\,dV\,d\tau. \tag{3.4} \]

By substituting (3.2) and (3.4) into (3.1), we obtain the heat conduction equation:
\[ k \nabla^2 T(\mathbf{x}, t) + \dot{q}_g(\mathbf{x}, t) = c\rho\,\frac{\partial T}{\partial t}(\mathbf{x}, t). \tag{3.5} \]
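Equation (3.5) rarely admits closed-form solutions for realistic geometries, which is why numerical tools are used in practice (see section 3.3). As a minimal illustration, the sketch below integrates the one-dimensional homogeneous heat equation (no internal generation) with an explicit finite-difference scheme; the material parameters are arbitrary.

```python
import numpy as np

# Explicit finite-difference integration of the 1D heat equation
#   dT/dt = alpha * d^2T/dx^2,  with alpha = k / (rho * c).
alpha = 1.0e-4        # thermal diffusivity [m^2/s] (arbitrary)
dx, dt = 1.0e-2, 0.1  # grid step [m] and time step [s]
assert alpha * dt / dx**2 <= 0.5, "explicit scheme stability limit"

T = np.full(100, 20.0)          # uniform 1 m bar at 20 K
T[0] = 20.3                     # step fluctuation at one boundary

for _ in range(10_000):
    lap = T[:-2] - 2.0 * T[1:-1] + T[2:]     # discrete Laplacian
    T[1:-1] += alpha * dt / dx**2 * lap      # no internal heat source
    T[-1] = T[-2]                            # insulated far end

print(f"temperature 10 cm from the heated end: {T[10]:.3f} K")
```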

3.2.2. Thermal Transfer Functions. Using as a boundary condition for equation (3.5) a sinusoidal temperature fluctuation of the form

\[ T(x_0, t) = T_0 + T_d \sin 2\pi\nu t \tag{3.6} \]
at $x_0$, the generic solution at some point x is still a sinusoid:
\[ T(x, t) = T_0 + T_d\,\gamma(x, \nu) \sin\bigl(2\pi\nu t + \varphi(x, \nu)\bigr) \tag{3.7} \]
for some functions γ(x, ν) and ϕ(x, ν), whose exact shape depends on the physical system (for an analytical derivation of γ and ϕ in the case of the LFI reference loads, see Tomasi, 2002). This follows from the fact that equation

(3.5), being a linear differential equation, becomes algebraic in Fourier space and is therefore separable in its sinusoidal components.

Since any smooth function f can be approximated by a series of sinusoids
\[ \tilde{f} = \sum_i c_i \sin(2\pi\nu_i t + \varphi_i) \]
so that

\[ \sup_x \bigl| f(x) - \tilde{f}(x) \bigr| < \epsilon \]
for any given ε > 0 (a consequence of Fourier's theorem, see Maderna and Soardi, 1997), knowing γ(ν) and ϕ(ν) is enough to estimate the fluctuation induced at x by any boundary condition at $x_0$. These two functions are usually called2 transfer functions.

§ 3.3. Numerical Thermal Analysis

3.3.1. Purpose of Numerical Analysis. A numerical thermal model is a very useful tool to perform a thermal analysis. This is motivated by the following factors:

1. A numerical model helps in the design of the instrument, even before virtually any piece of hardware is assembled.

2. The numerical model can be used to perform tests that cannot be done on the real instrument, e.g. because of limitations in the instrumentation used for the tests (we shall discuss such a case at the end of this chapter).

3. An unexpected behavior in the real object (e.g. high temperatures at some stage of the instrument) can be investigated by analyzing the numerical model if the object under study is not at hand when the failure happens, e.g. for space missions like Planck.

For these purposes, Alcatel/Alenia Spazio (formerly Laben) developed two numerical thermal models of the LFI instrument using Alstom's ESATAN/ESARAD thermal modeling software: the first models the whole instrument with low detail, while the second uses a greater level of detail and is made of many sub-models. The purpose of the first model is to be integrated into the Planck global thermal model (where a high level of detail would not only be useless, but would also make its usage more difficult), while the detailed model allows more refined thermal analyses. The LFI thermal models have two basic usages:

2 In the literature the term transfer function can also indicate the complex function

\[ f(x_0, x, \nu) = \gamma(x_0, x, \nu)\,e^{i\varphi(x_0, x, \nu)}. \]

1. To characterize the thermal steady state of each part of the instrument;

2. To quantify thermal transfer functions between two physical points of the instrument.

Historically, the primary purpose of the thermal models was to support the design phase of Planck, to make sure that the needed thermal characteristics were achievable. For this reason, they were meant to provide "worst-case estimates": e.g. they did not take into account contact resistances (difficult to model numerically) that could further reduce the propagation of thermal fluctuations3.

3.3.2. Characteristics of a Numerical Model. Thales/Alenia Space has used ESATAN to develop the Planck/LFI thermal model. This program uses a lumped-parameter network approach to thermal modeling (ALS, 2003). A lumped network model consists of a set of nodes connected by conductors. There are two types of nodes and two types of conductors:

Normal nodes represent a part of the object to be modeled. They are characterized by a temperature and a heat capacity, and can optionally act as heat wells or generators, e.g. to mimic electrical resistances.

Boundary nodes are the boundary conditions of the model. Their temperature is fixed to some value or time-dependent function provided by the user.

Linear conductors represent a thermal contact between two nodes and are characterized by a conductance C that depends on the thermal conductivity k introduced in equation (3.3).

Radiative conductors represent a radiative connection between two nodes. Their conductance depends not only on the characteristics of the two nodes, but also on their placement in space (by means of the so-called geometrical factor). This kind of conductor is the most difficult to model numerically. ESATAN is bundled with a separate program, ESARAD, to calculate radiative conductances from a geometrical model of the object under study.

3 This is because small fluctuations are a desirable feature in experiments like Planck, as we already noted in section 3.1.

Once a model has been created and the physical parameters of each node and conductor have been established, ESATAN analyzes the model by solving the following system of ordinary differential equations:

\[ C_i \frac{dT_i}{dt} = \dot{Q}_i + \sum_k L_{ki}\,(T_k - T_i) + \sum_k R_{ki}\,(T_k^4 - T_i^4), \qquad i = 1 \ldots N, \]
where N is the number of nodes in the thermal model, $C_i$, $\dot{Q}_i$ and $T_i$ are the heat capacity, internal power and temperature of node i, and $L_{ki}$ and $R_{ki}$ are the conductances of the linear and radiative conductors between nodes k and i. The N unknowns in the system are the node temperatures $T_i$. (Note that contact resistances are not explicitly taken into account in this equation.) The typical output of a thermal model is a set of temperatures for each node and of heat fluxes for each conductor (the latter are estimated using the node temperatures and the heat conduction coefficients). Thermal models developed with ESATAN can be customized to output derived values (e.g. the maximum and minimum temperature reached in a specified node) by incorporating custom Fortran code into the model. This feature has been used extensively in this work.

The complexity of the full LFI ESATAN model can be grasped from fig. 3.2, which is still a simplified view of the real model, since the radiative conductors generated by ESARAD are not shown.
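To make the lumped-parameter approach concrete, here is a toy integrator for the system above (my own illustration; ESATAN itself is a commercial Fortran-based tool, and the two-node network below is made up). Radiative conductors are omitted for brevity.

```python
import numpy as np

# Toy lumped-parameter network: boundary node 0 drives normal nodes 1-2.
C = np.array([np.inf, 50.0, 80.0])      # heat capacities [J/K]
L = {(0, 1): 0.5, (1, 2): 0.2}          # linear conductances [W/K]
T = np.array([20.0, 22.0, 24.0])        # initial temperatures [K]
dt = 1.0                                # integration step [s]

def step(T, t):
    dT = np.zeros_like(T)
    for (i, j), cond in L.items():
        flux = cond * (T[i] - T[j])     # heat flow from node i to j [W]
        if np.isfinite(C[j]): dT[j] += flux / C[j]
        if np.isfinite(C[i]): dT[i] -= flux / C[i]
    Tn = T + dt * dT
    Tn[0] = 20.0 + 0.1 * np.sin(2e-3 * np.pi * t)  # boundary condition
    return Tn

for t in range(100_000):
    T = step(T, t * dt)
print(f"node temperatures after {t + 1} steps: {np.round(T, 3)} K")
```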

3.3.3. Calibration of a Numerical Thermal Model. Any numerical model must be calibrated with experimental data once the real instrument has been developed. Typically two steps are required for the validation:

1. In the first measurement the LFI is kept in a stationary state (i.e. no changing temperatures, constant heat fluxes) and its temperature is measured at a number of points. The results are compared with those estimated by the thermal model when Ci is set to zero for every node i (stationary model). Any discrepancy is fixed by changing the conductances of the linear (Lki) and radiative (Rki) conductors.

2. Once Lki and Rki have been calibrated, a temperature variation is forced in the object using a heater, and the induced temperature variations at other points are measured. These variations are then compared with the model: discrepancies help to find wrong values in the heat capacities Ci of the nodes.

The second calibration is trickier than the first one, since here the comparison is not made between two temperatures per node but between two streams of temperatures. My activity in the LFI RAA dynamic thermal tests was to perform an analysis of the latter experiments. In the next section we shall discuss some methods that employ transfer functions to perform this kind of analysis.

Figure 3.2: Simplified view of the LFI low-resolution ESATAN model developed by Thales/Alenia Space. Red rectangles are boundary nodes, black lines are conductive links. Radiative connections are not shown for simplicity. This model provides an abstract representation of the real object shown in figure 2.5 (page 33).

Figure 3.3: Position of the 12 temperature sensors in the focal plane (see also figure 3.1). High-resolution sensors are indicated with a circle. The black dot near TS5L shows LVHX2, the cold end of the sorption cooler pipes. In my analysis I did not consider TS2L and TS5L, since their being close to the LVHX2 point prevented us from acquiring meaningful data during the THF tests.

§ 3.4. Experimental Measurements of the Transfer Functions

The optimal experiment to estimate thermal transfer functions forces a sinusoidal temperature fluctuation (by means of a heater) at some point on the body under study and measures the induced fluctuation at some other point. With this setup one can easily use eq. (3.7) to characterize the thermal response of the body through γ and ϕ. We implemented this setup in the Planck/LFI THF tests to study the propagation of thermal fluctuations from the LVHX2 point (the 20 K cold end of the sorption cooler, see figure 2.7 at page 36) to 10 out of the 12 sensors mounted on the focal plane that will monitor the temperature during the flight (their placement is shown in figure 3.3). I chose to exclude sensors TS2L and TS5L from my analysis, since they were too near to the LVHX2 point to allow the measurement of any temperature damping. To perform the test, we mounted an electrical resistance near LVHX2 and applied an alternating power at a fixed frequency. This in turn induced a sinusoidal temperature change in the heater that propagated through the focal plane. The two objectives of the THF tests are: (1) to provide a direct measurement of the thermal transfer functions γ and ϕ from LVHX2 to each of the thermometers mounted in the focal plane, and (2) to validate the detailed thermal model of the focal plane. Three algorithms to estimate γ and ϕ from pairs of sinusoidal temperature streams were considered:

• The Fourier Transform method (developed by Benedetta Cappellini);

• The nonlinear-fitting method (developed by Anna Gregorio);

• The Direct Estimation method (developed by myself).

The three algorithms have been implemented in the Lama THF analysis module under my coordination. In chapter 5 I will discuss the architecture of LIFE in more detail: here I will only provide the mathematical details of the algorithms employed for the analysis of these tests. Each method takes as input the two temperature streams at LVHX2 and at one focal plane thermometer, indicated with⁴

\[
\{(t_j^1, T_j^1)\}_{j=1}^N, \qquad \{(t_j^2, T_j^2)\}_{j=1}^N
\]

(with t being the time and T the temperature) and produces as output the values of γ(ν) and ϕ(ν) at the frequency ν used in the sinusoidal fluctuation for the THF test under study. Depending on the method, error estimates can be produced as well. In the following paragraphs I will describe the three algorithms, providing greater detail for the Direct Estimation Method.

3.4.1. The Fourier Transform Method. The Discrete Fourier Transform method applies a discrete Fourier transformation to the two data streams $(t_j^1, T_j^1)$ and $(t_j^2, T_j^2)$ and, using an estimate for the frequency of the temperature fluctuation, finds the corresponding peaks in the two spectra. Each peak is a complex number of the form $\rho e^{i\theta}$. The values for γ and ϕ are calculated using the following formulae:

\[
\gamma(\nu) = \rho_2/\rho_1, \qquad (3.8)
\]

\[
\varphi(\nu) = \theta_2 - \theta_1, \qquad (3.9)
\]
where ν is the frequency where the peak was found.
The advantage of the Fourier Transform method lies in its ability to extract the transfer function from non-sinusoidal periodic fluctuations as well. It has the drawback that no information about the error on γ and ϕ can be easily extracted (but the module uses 1/T as the estimate for σν, where T is the time length of the data stream, since this is the spectral resolution of the Discrete Fourier Transform), and it sometimes gives inaccurate results for ϕ.
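A minimal sketch of this approach (illustrative Python with NumPy, not the actual LIFE/Lama module; uniform 1 Hz sampling is assumed) could look like this:

```python
# Illustrative sketch of the Fourier Transform method (eqs. 3.8-3.9).
import numpy as np

def fft_transfer(T1, T2, fs=1.0):
    """Estimate (nu, gamma, phi) from two equally long streams sampled at fs:
    T1 at LVHX2, T2 at a focal plane sensor."""
    freqs = np.fft.rfftfreq(len(T1), d=1.0 / fs)
    F1 = np.fft.rfft(T1 - np.mean(T1))
    F2 = np.fft.rfft(T2 - np.mean(T2))
    k = np.argmax(np.abs(F1[1:])) + 1          # fluctuation peak, skipping DC
    gamma = np.abs(F2[k]) / np.abs(F1[k])      # amplitude damping, eq. (3.8)
    phi = np.angle(F2[k]) - np.angle(F1[k])    # phase difference, eq. (3.9)
    return freqs[k], gamma, phi
```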

⁴ We use here the simplifying assumption that the two data streams have the same number N of elements. The LIFE THF module does not rely on such an assumption.


Figure 3.4: Temperature of feed horn #25 during test THF_0004. Note the quantization induced by the digital thermometer and the slow thermal drift (longer than one period of the sinusoid). Any method to extract the amplitude and phase of the sinusoid must be able to deal with both effects.

3.4.2. The Non-linear Fitting Method. The Fitting Method fits the two streams of temperatures with one of these functions:

\[
f_1(t) = x_1 \sin(2\pi x_2 t + x_3) + x_4, \qquad (3.10)
\]
\[
f_2(t) = x_1 \sin(2\pi x_2 t + x_3) + (x_4 + x_5 t), \qquad (3.11)
\]
\[
f_3(t) = x_1 \sin(2\pi x_2 t + x_3) + (x_4 + x_5 t + x_6 t^2), \qquad (3.12)
\]
where $x_i$ represents a fitting parameter. The user must choose the best fit function among f1, f2 and f3 by guessing which one provides the best model for the slow thermal drifts measured in the test under study (i.e. constant, linear or parabolic slope). This is the only method that explicitly takes into account the shape of long-time drifts. However, the polynomial part of the fitting function fails to fit thermal drifts in long data streams. Only short data streams (a few periods of the sinusoid) can therefore be analyzed with this method.
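As an illustration, a least-squares fit of the f2 model can be written with SciPy as follows (a sketch under the assumption of a linear drift; the function names and initial guesses are illustrative, not those of the actual analysis module):

```python
# A sketch of the fitting method with SciPy, using the linear-drift model f2
# of eq. (3.11).
import numpy as np
from scipy.optimize import curve_fit

def f2(t, x1, x2, x3, x4, x5):
    return x1 * np.sin(2 * np.pi * x2 * t + x3) + (x4 + x5 * t)

def fit_stream(t, T, freq_guess):
    p0 = [0.5 * (T.max() - T.min()),  # amplitude guess
          freq_guess,                 # frequency guess [Hz]
          0.0,                        # phase
          T.mean(),                   # constant level
          0.0]                        # drift slope
    popt, pcov = curve_fit(f2, t, T, p0=p0)
    return popt, np.sqrt(np.diag(pcov))    # best-fit values, 1-sigma errors

# gamma is the ratio of the fitted amplitudes |x1| of the two streams, and phi
# the difference of the fitted phases x3 (reduced modulo 2*pi).
```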

3.4.3. The Direct Estimation Method. The direct estimation method, developed by myself, employs the fact that the extrema of functions (3.6) and (3.7) are respectively at $T_0 \pm T_d$ and $T_0 \pm \gamma T_d$ and show a time delay equal to ϕ/(2πν). Therefore, measuring the time profile of the sinusoids at two different points in the focal plane (i.e. at the cold end itself and at some other temperature sensor) allows one to estimate both γ and ϕ directly. Two complicating factors must however be addressed: (1) the output of each thermometer contains some noise that in some cases makes the position of the extrema difficult to identify, and (2) spurious temperature drifts can alter the temperature profile. (See figure 3.4.) My implementation tries to address both problems. The algorithm works as follows:

1. In order to reduce the impact of thermal noise, the code subsamples both $(t_j^1, T_j^1)$ and $(t_j^2, T_j^2)$ from 1 Hz to some frequency νsub such that ν < νsub < 1 Hz, with ν being the frequency of the sinusoid. (The algorithm used to choose νsub is discussed below.) The subsampling is performed by dividing each data stream into time windows of size τ = 1/νsub and averaging the points (tj, Tj) within each window. The result of this stage is a pair of shorter data streams of length M < N where each datum is associated with its “downsample” error:

\[
\{(\tilde t_j^1 \pm \tilde\sigma_{t,j}^1,\; \tilde T_j^1 \pm \tilde\sigma_{T,j}^1)\}_{j=1}^M, \qquad \{(\tilde t_j^2 \pm \tilde\sigma_{t,j}^2,\; \tilde T_j^2 \pm \tilde\sigma_{T,j}^2)\}_{j=1}^M.
\]

The value of σt is assumed to be τ/2, i.e. half the size of the time window used for subsampling, while σT is the standard deviation of the temperature within each time window.

2. The two downsampled datasets are scanned for local extrema. For the k-th extremum, the five samples nearest to it are fitted with a parabola

\[
\tilde T = a_k \tilde t^2 + b_k \tilde t + c_k,
\]

using a least-squares fitting algorithm (the value of σT is considered by the fitting procedure). The extremum $(t_{\mathrm{extr},k}, T_{\mathrm{extr},k})$ of the k-th parabola is computed as

\[
t_{\mathrm{extr},k} = -\frac{b_k}{2a_k}, \qquad T_{\mathrm{extr},k} = c_k - \frac{b_k^2}{4a_k}.
\]

At the end we have two sequences, each made of P extrema:

\[
\{(t_{\mathrm{extr},k}^1, T_{\mathrm{extr},k}^1)\}_{k=1}^P, \qquad \{(t_{\mathrm{extr},k}^2, T_{\mathrm{extr},k}^2)\}_{k=1}^P.
\]

3. The best estimate for the frequency ν¯ of the fluctuation and its error are given by the following formulae:

\[
\bar\nu = \left(\frac{\sum_{k=1}^{P-1}\,(t_{\mathrm{extr},k+1} - t_{\mathrm{extr},k})}{P-1}\right)^{-1}, \qquad (3.13)
\]
\[
\sigma_\nu = \sqrt{\frac{1}{P-2}\sum_{k=1}^{P-1}\left(\bar\nu - \frac{1}{t_{\mathrm{extr},k+1} - t_{\mathrm{extr},k}}\right)^2}. \qquad (3.14)
\]

Estimating ν from the difference between consecutive extrema reduces the impact of slow temperature drifts (i.e. longer than one period of the sinusoid) on the final result.

4. The amplitude ∆T of the fluctuation is the average of all the absolute temperature differences between consecutive extrema:

\[
\Delta T = \frac{1}{P-1}\sum_{k=1}^{P-1}\left|T_{\mathrm{extr},k+1} - T_{\mathrm{extr},k}\right|, \qquad (3.15)
\]
\[
\sigma_{\Delta T} = \sqrt{\frac{1}{P-2}\sum_{k=1}^{P-1}\left(\Delta T - \left|T_{\mathrm{extr},k+1} - T_{\mathrm{extr},k}\right|\right)^2}. \qquad (3.16)
\]

As was the case for ν, this formula reduces the impact of slow temperature drifts.

5. The procedure so far is repeated for many different values of the subsampling frequency νsub (point 1), looking for the one that minimizes the quantity
\[
\sqrt{\left(\frac{\sigma_\nu}{\nu}\right)^2 + \left(\frac{\sigma_{\Delta T}}{\Delta T}\right)^2},
\]
i.e. that minimizes the relative error for ν and for γ.

6. The value of γ is equal to the ratio between the amplitudes at the two sensors, namely ∆T2 and ∆T1, while the error is calculated using the error propagation formulae. The value of ϕ is estimated from the average value of $t_{\mathrm{extr}}^2 - t_{\mathrm{extr}}^1$, i.e. the time delay of a peak between the two sensors:

\[
\varphi = 2\pi\bar\nu\,\frac{1}{P-1}\sum_{k=1}^{P-1}\left(t_{\mathrm{extr},k}^2 - t_{\mathrm{extr},k}^1\right), \qquad (3.17)
\]
\[
\sigma_\varphi = \sqrt{\frac{1}{P-2}\sum_{k=1}^{P-1}\left(\varphi - 2\pi\bar\nu\,(t_{\mathrm{extr},k}^2 - t_{\mathrm{extr},k}^1)\right)^2}. \qquad (3.18)
\]

Method   Error estimates¹     Automatic²   Extendible³
Direct   Yes                  Yes          No
FFT      Only for ν           Yes          Yes
Fit      Yes                  No           Yes (in principle)

Table 3.2: Comparison of the three transient analysis methods employed in our work. For each method, the following information is provided: (1) whether the method allows one to estimate the errors on ν, γ and ϕ, (2) whether it can easily be implemented in a script runnable without user intervention or preliminary calibration, and (3) whether it is applicable to fluctuations other than sinusoidal.

If it happens that $t_{\mathrm{extr},k}^2 - t_{\mathrm{extr},k}^1 < 0$, the code discards the first extremum in $t_{\mathrm{extr}}^2$ and the last one in $t_{\mathrm{extr}}^1$. Figure 3.5 provides a visual example of how ∆T is estimated for some experimental data (test THF_0004).
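The following condensed sketch (illustrative Python, not the actual LIFE code; all function names are mine) shows how steps 1-4 fit together for a single data stream; γ and ϕ then follow by applying it to both streams and pairing the extrema:

```python
# Condensed, illustrative sketch of the direct estimation method.
import numpy as np

def refined_extrema(t, T):
    """Locate local extrema and refine each with a parabola through the
    five nearest samples (step 2)."""
    out = []
    for i in range(2, len(T) - 2):
        if (T[i] - T[i - 1]) * (T[i + 1] - T[i]) < 0:     # slope sign change
            a, b, c = np.polyfit(t[i - 2:i + 3], T[i - 2:i + 3], 2)
            out.append((-b / (2 * a), c - b * b / (4 * a)))
    return np.array(out).T

def direct_estimate(t, T, nu_sub):
    # step 1: subsample into windows of length 1/nu_sub to reduce noise
    edges = np.arange(t[0], t[-1], 1.0 / nu_sub)
    idx = np.digitize(t, edges)
    tb = np.array([t[idx == k].mean() for k in np.unique(idx)])
    Tb = np.array([T[idx == k].mean() for k in np.unique(idx)])
    te, Te = refined_extrema(tb, Tb)                      # step 2
    P, dt, dT = len(te), np.diff(te), np.abs(np.diff(Te))
    nu = dt.mean() ** -1                                  # eq. (3.13)
    sigma_nu = np.sqrt(((nu - 1.0 / dt) ** 2).sum() / (P - 2))   # eq. (3.14)
    ampl = dT.mean()                                      # eq. (3.15)
    sigma_ampl = np.sqrt(((ampl - dT) ** 2).sum() / (P - 2))     # eq. (3.16)
    return nu, sigma_nu, ampl, sigma_ampl

# Step 5 then repeats the call over a grid of nu_sub values and keeps the one
# minimizing the combined relative error; gamma = ampl2/ampl1, and phi follows
# from eq. (3.17) using the delays between paired extrema.
```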

3.4.4. Overall comparison of the three methods. Every method discussed so far has its own advantages and disadvantages. We have found three different aspects that can be used to characterize their effectiveness (see also table 3.2):

Error estimates: If a method can produce estimates for ν, γ(ν) and ϕ(ν) with error bars, these can be used to understand the level of significance and possibly the discrepancies with the numerical thermal models and the other methods. We have chosen not to implement error estimates for the FFT method (apart from ν) because there are no simple mathematical methods to estimate the error bars of Fourier coefficients. All the other methods provide some algorithm to estimate the errors in the transfer functions.

Automatization: Carrying out the analysis of 12 thermometers with fluctuations at (at least) three different frequencies requires a module that can run automatically, i.e. that does not need user interaction. Both the direct method and the Fourier method were developed with this objective in mind. On the other hand, the fitting method is not easy to automate because it relies on a preliminary analysis (i.e. the choice of the fitting function) that must be provided as input by the user.

Extendibility: Inducing sinusoidal temperature fluctuations is the best way to characterize a transfer function, but it is not always feasible (e.g. there will be no way to induce them once Planck is flying). It is important that an analysis method can also be used with other, non-sinusoidal periodic fluctuations.


Figure 3.5: Application of the direct method to estimate the amplitude of a sinusoidal temperature fluctuation. Reading from left to right, top to bottom: (1) a sinusoidal temperature fluctuation is induced on the focal plane, (2) the temperature data are subsampled in order to reduce the noise, (3) local maxima are searched for and (4) fitted with a parabola in order to find the height of each extremum, so that (5) having found a sequence of M extrema, by differencing consecutive pairs (6) we obtain M estimates of the fluctuation amplitude. The best estimate for the amplitude is therefore the average value (note that the distribution of amplitudes is roughly Gaussian).

From this point of view, the Fourier method is the best one, since it requires only that the fluctuation be periodic. The fitting method could in principle be extended to these cases as well, but it requires carefully choosing the fitting functions both⁵ for LVHX2 and for the focal plane thermometer. The direct method is not extendible to fluctuations other than sinusoidal, because it is based on equations (3.6) and (3.7).

§ 3.5. Data Analysis of the QM Tests

After the static comparison between the thermal model and the QM instrument done by Giorgio Baldan (Thales/Alenia Space), I was supposed to be involved in the derivation of thermal transfer functions during the QM tests. Due to time constraints, instead of following the procedure outlined in section 3.4, a step temperature change at LVHX2 was forced and the induced change on the focal plane was measured (test THF_0002). My work was then to interpret the data collected during these tests, to verify whether some information about the transfer functions could be extracted anyway. Due to its limited scope, the study involved only the extraction of γ. The estimation of ϕ has therefore not been considered in this study. Laben proposed to apply the Fourier Transform to the temperature profiles at LVHX2 and at some points of the focal plane, and from them extract the transfer function using the Fourier Transform method (see 3.4.1), despite the fact that the temperature change was not periodic. I wrote a module for LIFE that performed the calculations as suggested⁶ and produced a set of plots showing the comparison between the experimental data and the results of the thermal model. The code was able to easily switch between experimental data (read by LIFE) and numerical data (read from the thermal model) so that the same analysis algorithm could be used for both datasets. Figure 3.6 shows the results of the test. The step temperature change at LVHX2 is the “Input”, and the induced change at the temperature sensor near feed horn #28 is the “Output” (the results for the other 11 sensors were similar and are not shown here). Note that γ does not decrease: instead, it is almost constant and exhibits a fluctuating behavior which is clearly non-physical (γ > 1). To detect whether there was some problem with the acquisition system, I used the measured step change on LVHX2 as boundary condition for the thermal model of the LFI focal plane and performed a numerical simulation. The results of this simulation were then analyzed by the same analysis module used above. The results are shown in figure 3.7 and are comparable with figure 3.6 (although here the fluctuations in γ are larger). The problem lies in the non-periodic variation induced at LVHX2, as confirmed by further analysis of the thermal model. Figure 3.8 shows what happens when a pure step temperature change is induced on the LVHX2 node. The shape of γ has changed, yet the oscillating shape and the fact that the slope does not bend downwards are still present. This is due to the fact that the Discrete Fourier Transform of a step function has an infinite number of zero coefficients. Since we estimate γ as the ratio of the moduli of the two Fourier transforms⁷, all the zero coefficients lead to an indeterminate result (0/0). To prove this statement we used as boundary condition for LVHX2 a pure random fluctuation (white noise), where all the frequencies have nonzero components. In this case I was able to retrieve γ. Figure 3.9 shows the results of the numerical simulation, and compares the estimated γ with the measured one. The agreement is quite good. Of course, inducing a temperature fluctuation with a white noise profile (even if only in the spectral band of interest) is a tough experimental challenge, and therefore this method was rejected for the FM tests in favor of the standard approach described in section 3.4.

⁵ Remember that if the fluctuation is not sinusoidal, its shape is not preserved through thermal conduction: a temperature varying with a square wave law does not induce square waves.
⁶ The value of γ calculated by my code was simply the ratio between the absolute values of the two Discrete Fourier Transforms.


Figure 3.6: Results of the LFI RAA QM test THF_0002. Top left: the recorded variation of the temperature at two points of the focal plane. Top right: the corresponding Fourier transforms. Bottom: the estimated values of γ as the ratio of the two Fourier transforms, compared with the γ calculated using the numerical thermal model.


Figure 3.7: Results of a numerical simulation done using as boundary condition the temperature at LVHX2 recorded during test THF_0002 (see also figure 3.6).


Figure 3.8: Results of a numerical simulation done by inducing a perfect temperature step at LVHX2 (see also figure 3.6).


Figure 3.9: Results of a numerical simulation done by inducing a random fluctuation (white noise) at LVHX2 (see also figure 3.6). Note that in this case we are able to reconstruct γ up to about 10 mHz.

Test       ν [mHz]   Usable   Notes
THF_0003   100.      No       No visible fluctuations
THF_0004   0.55      Yes
THF_0005   16.67     No       Fluctuations detected only at LVHX2
THF_0008   1.40      Yes
THF_0009   0.18      Yes
THF_0010   50.       No       No visible fluctuations

Table 3.3: Tests recorded during the RAA FM test campaign to estimate the dynamic thermal response of the focal plane to a sinusoidal temperature fluctuation of frequency ν. Not all tests were usable in our analysis, since in some cases the induced fluctuation was too small to be detected by the thermometers on the focal plane.

§ 3.6. Data Analysis of the FM Tests

3.6.1. The Test Procedure. For the THF tests on the LFI RAA instrument, six different fluctuations were induced by a heater placed near LVHX2, and for each one a separate test was recorded (see table 3.3).

⁷ Note that, since we are using the Fast Fourier Transform algorithm, which is a numerical method, the zero coefficients will actually differ from zero because of numerical roundoff. The result of the ratio is therefore not indeterminate in a strict sense, but it clearly has no physical meaning.


Figure 3.10: Comparison between the estimates of the numerical model and the experimental results for the γ function. X axis: frequency in mHz; Y axis: value of γ (pure number).

Unfortunately, three of the fluctuations proved to be too small to induce detectable fluctuations on the focal plane and were discarded from this analysis. We therefore decided to use only tests THF_0004, THF_0008 and THF_0009. The THF analysis module was applied to each test using each of the three analysis methods implemented in Lama (direct, FFT, fitting). The results are reported in tables 3.4 (ν), 3.5 (γ) and 3.6 (ϕ).

3.6.2. Analysis of the Results. Figures 3.10, 3.11, 3.12 and 3.13 show the plots corresponding to tables 3.4 (ν), 3.5 (γ) and 3.6 (ϕ). In 9 out of 10 plots (i.e. all except TS5R) the comparison between the measured transfer function and the model estimates shows a good agreement:


Figure 3.11: Comparison between the estimates of the numerical model and the experimental results for the γ function (continuation). Note the crossing between the measured slope of TS5R and the numerical estimate of the model.


Figure 3.12: Comparison between the estimates of the numerical model and the experimental results for the ϕ function. X axis: frequency in mHz; Y axis: value of ϕ (radians).


Figure 3.13: Comparison between the estimates of the numerical model and the experimental results for the ϕ function (continuation). As in figure 3.11, TS5R shows a different behavior from the other sensors, since here the measured ϕ is slightly smaller than the estimated one.

Figure 3.14: Sketch of the discretized model of feed horn #28. The node numbers, automatically assigned by ESATAN, are indicated with #. The nodes for which the transfer function was calculated are shaded dark. Note that the connection between the flange (where the temperature sensor TS5R is located) and the main frame is modeled through two nodes: 40228 and 40229. Therefore, thermal fluctuations coming from the focal plane can reach TS5R through two paths.

1. The measured γ is lower than the numerical estimate. This is somewhat expected, since the numerical model does not take into account contact resistances (see section 3.3.1).

2. The measured ϕ is somewhat greater than the numerical estimate. This is caused by the same reason stated above, because contact resistances slow heat propagation.

Sensor TS5R has a strange behavior which requires a dedicated discussion. The γ curve for sensor TS5R is the only one in the set that shows a slope significantly different from the prediction of the model. Moreover, for frequencies higher than 1 mHz the measured damping is worse than expected. I have made an analysis of the ESATAN model and have found a possible reason for this discrepancy. Sensor TS5R is the only one not placed directly on the focal plane, but on the flange that prevents feed horn #28 from moving. The model of feed horn #28 therefore includes two links between the flange and the focal plane (see figure 3.14):

1. The first one connects the HEMT with the bottom side of the focal plane.

2. The second one connects the flange with the top side of the focal plane. Through a careful analysis of the thermal model I have discovered that this connection is made by two dedicated nodes (in figure 3.14 they are #40228 and #40229).


Figure 3.15: Measured thermal transfer function for FH#28 compared with the numerical transfer function for TS5R (#42920) and I/F Flange-MF (#40228; see figure 3.14). Note that the slope of γ for I/F Flange-MF agrees with experimental data better than the one for TS5R.


I have investigated how the heat propagates through each path, and I have found that the most likely cause of the strange behavior of TS5R is a wrong assignment of the node number. The thermometer has been placed on the side of the flange that is nearest to the connection with the focal plane. Plotting the transfer function for TS5R and for #40228 reveals that the latter fits the measured data better, both for γ and ϕ. Figure 3.15 shows a comparison between TS5R and #40228, together with the three experimental data points: the measured γ is lower than that of #40228, and the measured ϕ is greater. This behavior matches the one observed for all the other thermometers. Therefore, in any further analysis of LVHX2 fluctuations node #40228 should be used instead of the one suggested by Thales/Alenia Space.

§ 3.7. Conclusions

During the design of LFI, precise requirements on the thermal stability of the focal plane were established. These requirements stem from the LFI scientific goals and were derived using a numerical thermal model of the focal plane developed by Thales/Alenia Space. Therefore, a validation of the thermal model was scheduled for the RAA tests, in order to verify the correctness of the procedure used to assess the focal plane stability requirements. In this chapter I have discussed the validation of the thermal model. To perform this validation, I have compared the estimates of the model with the transfer functions measured during the RAA FM test campaign. The results are satisfactory, since in every case the measured transfer function shows a better reduction of spurious fluctuations than the numerical results. The peak delays are always greater in the experimental data, thus reinforcing the conclusion that the real instrument damps fluctuations better than the numerical model. In one case (TS5R) the pairing between the numerical thermal node and the temperature sensor has proven to be wrong, and a better pairing has been found after a careful inspection of the thermal model. With the new pairing, even sensor TS5R exhibits a good match with the numerical simulations. The algorithms used to extract the transfer functions from experimental data have proven to be stable: in all cases they provide comparable results within the error bars, and their trend, when compared with the numerical estimates, is similar and remarkably stable.

Sensor   FFT           Direct          Fit             Average
TS1L     0.212±0.021   0.1995±0.0008   0.2001±0.0009   0.1998±0.0006
         0.599±0.017   0.6000±0.0023   0.5997±0.0009   0.5997±0.0008
         1.378±0.098   1.4009±0.0122   1.3999±0.0008   1.3999±0.0008
TS3L     0.212±0.021   0.1995±0.0008   0.2001±0.0009   0.1998±0.0006
         0.599±0.017   0.6000±0.0023   0.5995±0.0012   0.5996±0.0011
         1.378±0.098   1.4003±0.0126   1.4002±0.0008   1.4002±0.0008
TS4L     0.212±0.021   0.1995±0.0008   0.2001±0.0010   0.1997±0.0006
         0.599±0.017   0.6000±0.0023   0.5996±0.0010   0.5997±0.0009
         1.378±0.098   1.4005±0.0128   1.4001±0.0011   1.4001±0.0011
TS6L     0.212±0.021   0.1995±0.0008   0.2001±0.0009   0.1998±0.0006
         0.599±0.017   0.6000±0.0022   0.5996±0.0010   0.5997±0.0009
         1.378±0.098   1.4006±0.0123   1.4001±0.0004   1.4001±0.0004
TS1R     0.212±0.021   0.1995±0.0008   0.2001±0.0010   0.1997±0.0006
         0.599±0.017   0.5999±0.0023   0.5995±0.0013   0.5996±0.0011
         1.378±0.098   1.4005±0.0131   1.3991±0.0025   1.3991±0.0025
TS2R     0.212±0.021   0.1995±0.0008   0.2001±0.0010   0.1997±0.0006
         0.599±0.017   0.6000±0.0023   0.5994±0.0015   0.5996±0.0013
         1.378±0.098   1.4005±0.0133   1.3996±0.0017   1.3996±0.0017
TS3R     0.212±0.021   0.1995±0.0008   0.2001±0.0011   0.1997±0.0006
         0.599±0.017   0.6000±0.0023   0.5991±0.0021   0.5995±0.0015
         1.378±0.098   1.4005±0.0135   1.4002±0.0016   1.4002±0.0016
TS4R     0.212±0.021   0.1995±0.0008   0.2001±0.0010   0.1997±0.0006
         0.599±0.017   0.5999±0.0022   0.5996±0.0011   0.5997±0.0010
         1.378±0.098   1.4000±0.0121   1.3998±0.0011   1.3998±0.0011
TS5R     0.212±0.021   0.1995±0.0008   0.2001±0.0010   0.1997±0.0006
         0.599±0.017   0.5999±0.0022   0.5997±0.0009   0.5997±0.0008
         1.378±0.098   1.4002±0.0119   1.3999±0.0008   1.3999±0.0008
TS6R     0.212±0.021   0.1994±0.0008   0.2001±0.0011   0.1997±0.0006
         0.599±0.017   0.6000±0.0023   0.5994±0.0014   0.5996±0.0012
         1.378±0.098   1.4005±0.0130   1.3990±0.0026   1.3990±0.0025

Table 3.4: Frequencies ν (in mHz) estimated with the three methods for each of the three usable tests, THF_0009, THF_0004 and THF_0008 (one row per test; see table 3.3). The weighted average with its error is provided as well.

Sensor   FFT     Direct        Fit             Average
TS1L     0.426   0.416±0.004   0.4168±0.0008   0.4196±0.0056
         0.276   0.273±0.003   0.2756±0.0004   0.2749±0.0016
         0.259   0.257±0.003   0.2554±0.0003   0.2571±0.0018
TS3L     0.512   0.500±0.005   0.5023±0.0010   0.5048±0.0064
         0.312   0.312±0.004   0.3125±0.0005   0.3122±0.0003
         0.226   0.225±0.004   0.2231±0.0003   0.2247±0.0015
TS4L     0.495   0.483±0.005   0.4853±0.0010   0.4878±0.0064
         0.293   0.293±0.003   0.2919±0.0004   0.2926±0.0006
         0.197   0.192±0.004   0.1931±0.0002   0.1940±0.0026
TS6L     0.525   0.507±0.004   0.5138±0.0010   0.5153±0.0091
         0.345   0.345±0.003   0.3452±0.0005   0.3451±0.0001
         0.261   0.262±0.002   0.2576±0.0003   0.2602±0.0023
TS1R     0.557   0.545±0.005   0.5463±0.0011   0.5494±0.0066
         0.299   0.294±0.004   0.2971±0.0006   0.2967±0.0025
         0.171   0.171±0.003   0.1674±0.0002   0.1698±0.0021
TS2R     0.577   0.564±0.006   0.5658±0.0011   0.5689±0.0070
         0.318   0.313±0.005   0.3124±0.0007   0.3145±0.0031
         0.161   0.159±0.004   0.1583±0.0002   0.1594±0.0014
TS3R     0.591   0.576±0.006   0.5795±0.0012   0.5822±0.0078
         0.295   0.291±0.004   0.2918±0.0009   0.2926±0.0021
         0.133   0.131±0.004   0.1317±0.0002   0.1319±0.0010
TS4R     0.482   0.471±0.003   0.4725±0.0009   0.4752±0.0060
         0.266   0.265±0.002   0.2666±0.0005   0.2659±0.0008
         0.172   0.170±0.002   0.1686±0.0002   0.1702±0.0017
TS5R     0.485   0.473±0.003   0.4746±0.0010   0.4775±0.0065
         0.342   0.340±0.003   0.3411±0.0005   0.3410±0.0010
         0.224   0.222±0.002   0.2194±0.0003   0.2218±0.0023
TS6R     0.504   0.493±0.003   0.4954±0.0009   0.4975±0.0058
         0.246   0.243±0.003   0.2433±0.0005   0.2441±0.0017
         0.121   0.118±0.003   0.1202±0.0002   0.1197±0.0016

Table 3.5: Estimated value of γ with the three methods for each of the three usable tests, THF_0009, THF_0004 and THF_0008 (one row per test; see table 3.3). The weighted average with its error is provided as well.

Sensor   FFT     Direct        Fit         Average
TS1L     0.252   0.548±0.042   0.73±0.18   0.51±0.24
         0.724   0.715±0.028   0.61±0.20   0.68±0.06
         0.937   0.974±0.044   0.94±0.03   0.95±0.02
TS3L     0.288   0.577±0.044   0.77±0.19   0.55±0.24
         0.743   0.732±0.032   0.57±0.25   0.68±0.10
         0.857   0.879±0.048   0.88±0.01   0.87±0.01
TS4L     0.335   0.617±0.045   0.83±0.20   0.59±0.25
         0.867   0.856±0.034   0.73±0.21   0.82±0.08
         1.085   1.114±0.052   1.10±0.01   1.10±0.01
TS6L     0.249   0.544±0.032   0.73±0.18   0.51±0.24
         0.720   0.708±0.024   0.60±0.21   0.68±0.07
         0.949   0.990±0.042   0.97±0.01   0.97±0.02
TS1R     0.437   0.727±0.039   0.93±0.20   0.70±0.25
         1.071   1.059±0.033   0.89±0.26   1.01±0.10
         1.333   1.376±0.059   1.27±0.09   1.33±0.05
TS2R     0.543   0.828±0.045   1.05±0.21   0.81±0.25
         1.314   1.301±0.036   1.09±0.30   1.24±0.13
         1.769   1.802±0.067   1.76±0.05   1.78±0.02
TS3R     0.565   0.849±0.043   1.08±0.22   0.83±0.26
         1.403   1.389±0.035   1.03±0.44   1.27±0.21
         1.949   1.988±0.075   1.99±0.01   1.98±0.02
TS4R     0.381   0.660±0.031   0.87±0.19   0.64±0.25
         0.949   0.930±0.026   0.79±0.24   0.89±0.09
         1.134   1.178±0.044   1.13±0.03   1.15±0.03
TS5R     0.369   0.660±0.030   0.87±0.20   0.63±0.25
         0.950   0.936±0.025   0.85±0.18   0.91±0.05
         1.236   1.260±0.042   1.25±0.03   1.25±0.01
TS6R     0.624   0.903±0.033   1.14±0.21   0.89±0.26
         1.532   1.523±0.029   1.32±0.29   1.46±0.12
         2.222   2.271±0.060   2.18±0.10   2.22±0.05

Table 3.6: Estimated value of ϕ with the three methods for each of the three usable tests, THF_0009, THF_0004 and THF_0008 (one row per test; see table 3.3). The weighted average with its error is provided as well.

CHAPTER 4

Calibration and Verification of the LFI Data Compressor

The broad frequency coverage and the high angular resolution of Planck pose non-trivial problems in data handling. Planck will fly in an orbit around the L2 point, about 1,500,000 km away from the Earth, and at this distance the Planck antenna cannot transmit all the scientific data to Earth. The limited telemetry budget therefore forced the Planck team to implement a number of techniques to compress data within the satellite. The critical point in these data reduction techniques is to fulfill the requirements on telemetry while at the same time keeping all the scientific information almost intact. In this chapter I will report on tests performed at LFI RAA FM instrument level on the LFI data compressor, an on-board module that compresses the radiometric data acquired by LFI. The tests had the aim of verifying that the compression level is high enough for the available telemetry bandwidth, and that the loss of information induced by the compression is acceptable with respect to the science. The chapter is structured as follows:

• Section 4.1 explains why LFI needs a data compressor, and what its characteristics are.

• Section 4.2 explains the purpose of calibrating the LFI data compressor and illustrates the calibration procedure we followed during the RAA FM tests.

• Section 4.3 discusses the verification of the compressor calibration. The aim of this part of the test is to verify that all the requirements on the data compression are met.

• Finally, section 4.4 discusses the presence of some artifacts in the compressed data that appeared during the calibration, together with the explanation of the problem and its solution.


Figure 4.1: This chart shows how the overall bandwidth for Planck (130 kb/s) is to be distributed among HFI and LFI science and housekeeping parameters (numbers taken from Watson and Biggins, 2005).


§ 4.1. Principles of the LFI Data Compressor

4.1.1. Need for a Data Compressor. The data acquired by the LFI radiometers are sent to Earth through a complex pipeline. The DAE sends the sky and reference signals (as well as housekeeping parameters) to the REBA, which splits them into chunks called packets and transmits them to the MOC (located in Darmstadt, Germany) through the on-board TM/TC transmission antenna. Here packets are collected, sorted in time and sent daily to the LFI DPC (similarly, HFI packets are sent to the HFI DPC). The DPC is responsible for all levels of data processing, from raw telemetry to deliverable scientific products. The bandwidth of the satellite antenna has 51.3 kb/s allocated for the LFI scientific data (see figure 4.1), but the raw data rate of a single detector of LFI (out of 44) is about 138 kb/s (Miccolis, 2004), so data compression techniques must be implemented. The REBA implements various techniques to reduce the scientific data rate down to an acceptable level. (This reduction is not needed for housekeeping data, since their sampling rate is 1 Hz, low enough for them to be transmitted as they are.) The REBA reduces the data rate through two techniques: downsampling and data compression. Both techniques apply lossy transformations to the data, i.e. they discard some information in order to achieve better compression rates. To minimize the impact of this degradation on the science, the compression pipeline has been implemented so that the following conditions apply:

1. Downsampling is limited by the beam size, so that there are always at least three downsampled sky samples within a beam. This is enough for doing science, since structures on scales smaller than the beam size cannot be resolved by the Planck optical system.

1. Downsampling is limited by the beam size so that there always at 4.1. Principles of the LFI Data Compressor 83


2. Data compression introduces a quantization in the signal, but this does not degrade it significantly with respect to the intrinsic radiometric noise σ. As stated by Maris et al. (2004b), a quantization error q such that σ/q ≈ 2 induces an uncertainty on the $C_\ell$ power spectrum coefficients that is only about 1–2% of the uncertainty induced by radiometric noise. The σ/q ≥ 2 requirement has therefore been accepted by the Planck team.

In this chapter we shall concentrate on the REBA data compression and will therefore not discuss downsampling in great detail.

4.1.2. Principles of Data Compression.

Definition of a Compressor. Given a finite set X, a compressor is a transformation c : v → w of a vector v of N elements of X (the input) into another vector w of M elements of X (the output), with M < N on average, so that w = c(v) records essentially the same information as v (we shall see later what is meant by “essentially”). The elements of X are called symbols. The ratio N/M is called the compression ratio: the greater this number, the better the compressor. Compression algorithms can be divided into two categories:

1. Algorithms of the first kind substitute repetitive sequences of (usually consecutive) symbols with shorter ones. This is called a sequence oriented compression. A very simple example is the common convention of writing “AFAIK” instead of “As far as I know”. An example in computer science is the ZIP compression algorithm.

2. Algorithms of the second kind use less space to code the most frequent symbols in v, while using more bits for the less frequent ones. This is a symbol oriented compression. Coding using a variable number of bits is usually referred to as variable length coding. The JPEG format uses a derivation of this approach to compress images.

The REBA uses a symbol oriented algorithm based on arithmetic coding (Rissanen and Langdon, 1979). Sequence oriented methods would have performed badly in this context, since it is very difficult to find repetitive sequences of symbols in a white-noise dominated stream of data such as the output of a radiometric detector.

An Example of Symbol Oriented Compressor. Here I discuss the Huffman compression (Huffman, 1952), which is similar to the arithmetic coding algorithm used by the REBA, but far simpler. More details on the actual implementation of the REBA-LFI compression code can be found in Maris et al. (2006), and are summarized later in this section. We explain the Huffman algorithm with a practical example. Suppose that the output of the REBA is a sequence of eight 16-bit numbers:

1304 1304 1301 1302 1301 1303 1304 1304

This sequence is coded using 8 × 16 = 128 bits. The Huffman algorithm assigns a unique mask of bits to each symbol, where the most frequent symbols get the shortest masks. This set of masks, called the dictionary, is then used to rewrite the sequence as a stream of bits. In our example, the dictionary¹ could be:

Symbol   Freq.   Mask
1301     2       10
1302     1       110
1303     1       111
1304     4       0

The sequence would therefore be compressed into the following bit stream:

0 0 10 110 10 111 0 0

which is only 14 bits long and can be coded into one 16-bit number (2908). The compression ratio in this simple example is therefore 128/14 ≈ 9.14. (Note however that the Huffman dictionary must be provided together with the bit stream in order to decompress the 16-bit number 2908. Due to the limited size of the dictionary, this usually has a negligible impact on the compression rate in practical applications.)
The arithmetic compression used by the REBA shares many similarities with the Huffman algorithm, but exhibits a better performance. The Huffman compression builds the dictionary from the list of symbols ordered by their frequency, but it does not use the absolute frequencies. (E.g. if the symbol 1304 had appeared 8 times instead of 4, the Huffman dictionary would have been the same.) The arithmetic coding uses a sophisticated technique that relies on the absolute frequency of each symbol to build a dictionary where bit masks need not have an integer number of binary digits. More information about arithmetic coding can be found in MacKay (2003).
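For concreteness, here is a compact sketch of the textbook Huffman construction applied to the example above (plain illustrative Python with a heap; this is not the REBA arithmetic coder, and the exact masks may differ by tie-breaking, although the total length is always 14 bits):

```python
# Compact textbook Huffman construction.
import heapq
from collections import Counter

def huffman_dictionary(symbols):
    counts = Counter(symbols)
    # heap entries: [weight, tiebreaker, [symbol, code], [symbol, code], ...]
    heap = [[w, i, [s, ""]] for i, (s, w) in enumerate(counts.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo, hi = heapq.heappop(heap), heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]        # prepend a bit to every mask
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], *lo[2:], *hi[2:]])
    return {s: code for s, code in heap[0][2:]}

data = [1304, 1304, 1301, 1302, 1301, 1303, 1304, 1304]
codes = huffman_dictionary(data)
stream = "".join(codes[s] for s in data)
print(codes, stream, len(stream))   # 14 bits in total, as in the text
```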

¹ An equally valid dictionary can be derived by changing each 0 into 1 and vice versa.

The Entropy as a Measure of the Compression Ratio. To estimate the theoretical compression rate of a symbol oriented compressor, the concept of Shannon's entropy is often used. Given a vector v containing N elements of the finite set X, the entropy H is defined as

\[
H = -\sum_i p(i) \log_2 p(i), \qquad (4.1)
\]
where the sum is made over the elements in X and p(i) is the frequency of the i-th element of X in vector v. (The formula uses the convention that

$0 \log_2 0 = 0$.) For instance, in the example seen in the previous paragraph we have N = 8, X = {1301, 1302, 1303, 1304}, and the entropy H is therefore

\[
H = -\frac{2}{8}\log_2\frac{2}{8} - \frac{1}{8}\log_2\frac{1}{8} - \frac{1}{8}\log_2\frac{1}{8} - \frac{4}{8}\log_2\frac{4}{8} \approx 1.75.
\]
We define the theoretical compression ratio of a stream of data coded with $n_{\mathrm{bits}}$ bits as the value
\[
c_r^t = \frac{n_{\mathrm{bits}}}{H} \qquad (4.2)
\]

(for the REBA, $n_{\mathrm{bits}} = 16$). This definition is motivated by the source coding theorem (Shannon, 1948). Given a finite set X of symbols to compress, this theorem states that there always exists a variable length encoding such that the average length in bits L of an encoded symbol satisfies H ≤ L < H + 1. In other words, the value of H for a vector v can be considered as an estimate of the number of bits required to code v using a symbol oriented compression (see also MacKay, 2003).
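A few lines of Python suffice to reproduce these numbers (an illustrative sketch, not part of the flight or ground-segment software):

```python
# Illustrative computation of Shannon's entropy (4.1) and the theoretical
# compression ratio (4.2) for the example above.
import numpy as np

def entropy(v):
    _, counts = np.unique(v, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())   # zero-frequency terms never appear

v = [1304, 1304, 1301, 1302, 1301, 1303, 1304, 1304]
H = entropy(v)
print(H, 16.0 / H)   # H = 1.75 bits/symbol, c_r^t = 16/1.75 ~ 9.14
```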

Lossy Compressors. In order to reduce the entropy of the data before the arithmetic compression, the REBA transforms scientific data through a number of steps that discard some information. These compression methods are called lossy and usually achieve better compression ratios than lossless methods (the transformation is usually considered to be part of the compressor). Mathematically speaking, lossy compressors do not admit a proper inverse transformation on the compressed stream that gives back the input data. They implement a quasi-inverse transformation $\hat c^{-1} : w \to v'$ that produces a vector v′ such that ‖v′ − v‖ < ε (where v is the vector containing the original input) for some norm² ‖·‖ and a small ε. Lossy compressors are widely used for images, sounds and any other application where some loss of information is tolerable. (Note however that the compression itself is not lossy: it is the preliminary entropy-reduction transformation that discards information.)

² For instance, in the case of the MP3 compression, used for digitized music, the norm estimates how great the deviation is between the sound waves v′ and v within the range of audible frequencies.

4.1.3. Details of the LFI Data Compressor. Before the compression stage the REBA reduces the amount of data to be processed through downsampling. This operation is called coadding, and is driven by the parameter N_AVERAGE, which is the number of samples to be averaged together. The nominal values of this parameter are chosen to produce three averaged samples per beam³. Since the beam size depends on the frequency, this value is different for the 30, 44 and 70 GHz radiometers (see table 4.1). The information lost during this process is mostly white noise and contains little science, as the discarded frequencies correspond to scales much smaller than the optical resolution of the telescope-feed system.

Freq. (GHz)   Beam (')   N_AVERAGE
30            33         126
44            24         88
70            14         52

Table 4.1: Nominal values of N_AVERAGE for the feed horns, chosen according to the beam size.

Coadding helps to greatly reduce the amount of data to be sent to Earth, but it is still not enough. The requirement of having 3 samples per beam implies that the expected science data rate is:

\[
2 \times 16\ \mathrm{bit} \times 4096\ \mathrm{s}^{-1} \times \left(\frac{8}{132} + \frac{12}{88} + \frac{24}{52}\right) = 86.3\ \mathrm{kb/s}, \qquad (4.3)
\]
where I considered that each channel produces sky-reference pairs (2×) and used the values of N_AVERAGE listed in table 4.1, together with the fact that there are 8 channels at 30 GHz, 12 at 44 GHz and 24 at 70 GHz. The 86.3 kb/s figure is less than 2% of the amount of raw data (which is 2 × 16 × 4096 × 44 = 5,800 kb/s), but still above the 51.3 kb/s requirement by 68%. Therefore, arithmetic compression is unavoidable.
Before applying arithmetic compression to the data, the REBA applies a lossy transformation to reduce the entropy and therefore optimize the compression. The averaged sky and reference samples $(x_{\mathrm{sky}}, x_{\mathrm{ref}})$ are transformed into $(q_1, q_2)$ according to the following formulae:
\[
q_1 = \left[(x_{\mathrm{sky}} - r_1 x_{\mathrm{ref}} + \Delta) \times s_{\mathrm{quant}}\right], \qquad (4.4)
\]
\[
q_2 = \left[(x_{\mathrm{sky}} - r_2 x_{\mathrm{ref}} + \Delta) \times s_{\mathrm{quant}}\right], \qquad (4.5)
\]
where the square brackets [·] indicate a rounding operation modulo $2^{16} = 65536$ (e.g. [65535.9] = 1). The four parameters of the transformation are r1, r2, ∆ and squant, and they must be specified for each detector. Such a transformation can lower the data entropy because the difference x − ry shifts both q1 and q2 towards zero and the rounding operation reduces the number of distinct symbols in the dictionary. After the lossy transformation, the arithmetic compressor can be applied to the (q1, q2) stream.

³ Higher sampling would be useless, since structures in the sky on scales smaller than the beam size are inevitably washed out and undetectable.
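The transformation of eqs. (4.4)-(4.5) can be sketched as follows (illustrative Python; note that the exact wrap-around convention of the on-board rounding operator is only hinted at by the [65535.9] = 1 example quoted in the text, so a plain round-and-modulo is used here as an assumption):

```python
# Sketch of the lossy mixing/requantization of eqs. (4.4)-(4.5).
import numpy as np

def mix_and_quantize(x_sky, x_ref, r1, r2, delta, s_quant):
    def requantize(x):
        # Assumption: plain rounding followed by modulo 2^16; the REBA
        # wrap-around convention may differ in detail.
        return np.round(x).astype(np.int64) % 2**16
    q1 = requantize((x_sky - r1 * x_ref + delta) * s_quant)
    q2 = requantize((x_sky - r2 * x_ref + delta) * s_quant)
    return q1, q2
```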

4.1.4. The Radiometer Electronics (REBA) Acquisition Modes. The REBA implements seven different modes of operation, called acquisition modes. Each of them applies some transformations but not others, and they are useful for the diagnostics and verification of the LFI compression scheme (refer also to figure 4.2):

RAW0: No transformation is applied.

AVR1: Data are coadded.

COM2: Data are coadded and the (xsky, xref) → (q1, q2) transformation is applied.

DIF3: Data are transformed through the (xsky, xref) → xsky − r1 xref transformation and then quantized. Note that in this case the size of the output is half the size of the input.

RAW4: The arithmetic compression is applied to the RAW0 data.

COM5: The nominal mode. Data are coadded, transformed and compressed.

DIF6: It is like DIF3, but data are compressed as well.

4.1.5. Requirements on Data Compression. The REBA compressor must satisfy two requirements (Miccolis, 2004):

1. The overall⁴ compression rate cr must be such that cr ≥ 2.4, because of the limit on the data rate that can be transmitted to the Earth by the Planck antenna. Note that a naive application of equation (4.3) would require cr ≈ 1.7, but that calculation excludes diagnostic scientific data and ancillary telemetry information, which raise the 86.3 kb/s figure.

2. The maximum quantization error q and the intrinsic noise σ of the uncompressed input data must be such that σ/q ≥ 2, i.e. the quantization must not induce a significant error in terms of the intrinsic radiometric noise (Maris et al., 2004b, 2006), as we already discussed in section 4.1.1. The definition of q is
\[
q = \sqrt{\frac{1}{N}\sum_{i=1}^N \left(\left|x_{\mathrm{AVR1}} - x_{\mathrm{COM5}}\right| - \overline{\left|x_{\mathrm{AVR1}} - x_{\mathrm{COM5}}\right|}\right)^2} \qquad (4.6)
\]

⁴ By “overall” we mean a weighted mean of the compression ratios of the single detectors, where the weights are proportional to the data rate, i.e. inversely proportional to the value of N_AVERAGE.

Figure 4.2: The seven REBA acquisition modes. The operations indicated in white boxes are: “coadding” (subsampling), “mixing” (the xsky − r xref operation), “requantization” (the rounding operation modulo $2^{16}$) and “compression” (arithmetic coding). The nominal mode is COM5, the others having been implemented for software/hardware debugging during ground tests and flight.


for a generic signal x measured both in mode AVR1 and in mode COM5.
The purpose of the compressor calibration procedure is to find a configuration (r1, r2, ∆, squant) that allows the REBA compressor to satisfy both requirements. If more than one configuration satisfies the requirements, the one with the greatest σ/q value is preferred. In order for the compressor to satisfy both the requirement on data loss and the one on the compression ratio, the transformation parameters must be carefully chosen. For instance, a too small value of squant would cause severe rounding of the data, while a large value would lead to insufficient compression rates. Also, similar values of r1 and r2 would cause equations (4.4) and (4.5) to be nearly identical, and therefore they would not be invertible.
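A sketch of the corresponding verification metric follows (illustrative Python; σ is here estimated simply as the standard deviation of the uncompressed AVR1 stream, which assumes the instrument is in stable conditions during the test):

```python
# Sketch of the verification metric of eq. (4.6): q is the spread of the
# AVR1-COM5 residuals, and the requirement is sigma/q >= 2.
import numpy as np

def quantization_error(x_avr1, x_com5):
    d = np.abs(x_avr1 - x_com5)
    return np.sqrt(np.mean((d - d.mean()) ** 2))

def sigma_over_q(x_avr1, x_com5):
    sigma = np.std(x_avr1)    # intrinsic noise estimate (stable conditions)
    return sigma / quantization_error(x_avr1, x_com5)
```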

§ 4.2. Calibration of the LFI Data Compressor

We have tested the data compressor of the REBA during the RAA FM tests. The agreed calibration procedure goes through the following stages:

1. We acquired data from the LFI in stable conditions (i.e. no changing biases or temperatures) for a period of about 20 minutes. The data were acquired in mode AVR1, so that no mixing, requantization or compression was applied.

2. A subset of the 4-D parameter space (r1, r2, ∆, squant) is chosen by the user and used as input by the OCA analysis software (see chapter 5), which runs the REBA compressor on the acquired AVR1 data with different configurations chosen in the specified parameter space, looking for the best one that satisfies the requirements on cr and σ/q.

3. If OCA is not able to find a suitable configuration for the compressor, the user must provide a different subset of the parameter space and point 2 is repeated.

Note that to calibrate the compressor we do not use the REBA but rather a copy of its compression software that is run on a personal computer. This greatly simplifies the procedure. To explore the 4-D parameter space, OCA provides two algorithms:

1. The first one explores a non-trivial subset of the space. Explicit limits must be provided by the user for each of the four parameters.

2. The second algorithm uses as additional assumptions (Maris, private communication) the following equations:

\[
r_1 = \frac{\langle x_{\mathrm{sky}}\rangle}{\langle x_{\mathrm{ref}}\rangle}, \qquad \Delta = -\langle x_{\mathrm{sky}}\rangle + \frac{r_1 + r_2}{2}\,\langle x_{\mathrm{ref}}\rangle.
\]
Therefore this algorithm explores only a 2-D slice of the 4-D parameter space. This approach is faster and the convergence is generally better.
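In code, the 2-D slice amounts to deriving r1 and ∆ from the stream averages, so that only (r2, squant) need to be scanned (an illustrative sketch of the assumptions above, not the OCA implementation):

```python
# Sketch of the 2-D slice explored by the second OCA algorithm: r1 and Delta
# are fixed by the stream averages, leaving only (r2, s_quant) to scan.
import numpy as np

def slice_parameters(x_sky, x_ref, r2):
    r1 = np.mean(x_sky) / np.mean(x_ref)
    delta = -np.mean(x_sky) + 0.5 * (r1 + r2) * np.mean(x_ref)
    return r1, delta
```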

Although it is not guaranteed that the second algorithm will find a solution that satisfies the requirements, during the RAA tests this has always been the case. We have therefore never used the first algorithm. During the set up of the experiment we found that channels #2300, #2311, #2210 and #2211 had more noise than the others. Since noisy data are more difficult to compress, and therefore a greater impact of quantization was expected in these cases, we chose to relax the cr requirement to cr ≥ 2.0 for those channels. This was possible because the cr ≥ 2.4 requirement must be satisfied on average, not in each channel. Since the limited telemetry prevents us from acquiring all the 44 channels in mode AVR1 at the same time, we performed two tests: TUN_0060 and

TUN_0061. The results of the calibration are shown in table 4.2. (This table reports the results of the verification as well, i.e. cr and σ/q. We will comment on them in the next section.)

        Calibration                                 Verification
Ch.     r1        r2         ∆          squant      cr      σ/q
1800    1.00140   -0.22313   -5359.41   1.4614      2.465   3.72
1801    1.01676   -0.21083   -5568.31   1.0663      2.526   3.65
1810    0.94831   -0.24752   -7034.26   1.6219      2.407   2.49
1811    0.97730   -0.23894   -7125.57   1.8188      2.524   2.12
1900    0.94183   -0.26009   -6514.42   1.0863      2.563   2.74
1901    0.96134   -0.24741   -6368.42   1.4767      2.602   2.88
1910    0.90769   -0.28324   -5998.70   0.7184      2.539   2.87
1911    0.94380   -0.26041   -6261.11   1.2745      2.548   2.64
2000    0.94193   -0.26124   -6882.89   1.1884      2.571   2.80
2001    0.95099   -0.25585   -6657.11   1.7122      2.530   3.04
2010    0.91399   -0.27937   -5656.35   0.7110      2.527   2.45
2011    0.92647   -0.26987   -5680.95   1.1202      2.556   2.69
2100    0.93609   -0.26475   -6205.53   1.5200      2.522   2.89
2101    0.92503   -0.27235   -5764.98   1.4962      2.537   2.76
2110    0.96446   -0.24695   -6489.73   1.2028      2.537   2.44
2111    0.95634   -0.25131   -6635.29   1.1845      2.577   2.34
2200    0.97321   -0.24216   -6610.82   1.2794      2.517   2.48
2201    0.94106   -0.26156   -6676.86   1.1884      2.493   2.60
2210    0.97055   -0.23832   -6454.34   1.9473      2.046   3.13
2211    0.97033   -0.23510   -6709.88   2.1005      2.096   2.90
2300    0.97788   -0.23422   -8003.77   2.6275      2.163   3.81
2301    0.96922   -0.24103   -8137.46   1.5160      2.554   2.39
2310    0.94922   -0.25722   -5857.59   1.1480      2.539   2.11
2311    0.98200   -0.23379   -6690.82   3.6717      2.127   3.49
2400    0.95680   -0.24853   -7753.79   1.3414      2.087   2.96
2401    0.94601   -0.25735   -6328.58   0.8006      2.504   2.04
2410    0.95691   -0.25228   -7000.99   2.1139      2.663   2.99
2411    0.94983   -0.25651   -6622.91   2.6921      2.570   3.43
2500    0.94358   -0.26139   -6920.67   1.9575      2.474   3.38
2501    0.95118   -0.25474   -6393.00   1.7279      2.575   3.10
2510    0.94346   -0.25838   -6535.74   2.0436      2.498   3.40
2511    0.93340   -0.26491   -6267.06   1.9167      2.470   3.15
2600    0.95580   -0.25422   -7862.10   3.3452      2.525   3.66
2601    0.93554   -0.27169   -6010.62   3.8100      2.397   4.22
2610    0.96587   -0.24709   -7096.86   3.7435      2.548   3.94
2611    0.96538   -0.27589   -6804.56   5.3330      2.418   4.46

2700    0.86981   -0.30774   -6380.60   1.5327      2.534   3.40
2701    0.86579   -0.30979   -6355.57   1.4485      2.533   3.44
2710    0.91254   -0.28375   -6522.67   2.5157      2.536   3.47
2711    0.84796   -0.32342   -6515.75   1.3425      2.553   3.44
2800    0.94246   -0.26195   -5797.02   1.8068      2.596   3.47
2801    0.92867   -0.26841   -5954.07   2.2545      2.524   3.78
2810    0.88969   -0.29503   -6089.71   1.5622      2.537   3.56
2811    0.91594   -0.27814   -6506.10   2.3519      2.521   3.75

Table 4.2: Results of the REBA calibration on tests TUN_0060 and TUN_0061 and of the REBA verification on tests TUN_0062 and TUN_0063.

§ 4.3. Verification of the LFI Data Compressor

After the best values for the parameters have been calculated using OCA, they must be verified within the real instrument. In order to do this, we applied the following procedure:

1. The REBA was set up with the parameters found in the calibration phase, namely the r1, r2, ∆ and squant columns of table 4.2, and N_AVERAGE was set to its nominal value.

2. The output of the radiometers was recorded in the two modes AVR1 (uncompressed) and COM5 (compressed) while the instrument was kept in a stable state, with temperatures as similar as possible to those used during the calibration.

Recording the output in the two uncompressed/compressed modes allows us to estimate the quantization efficiency and the impact of the quantization on the data, and to verify that the requirements on cr and σ/q were met.

To verify the performance of the data compressor, I have implemented a new analysis module in LIFE called ReVerie, the REBA Verifier. It compares the data streams in mode AVR1 and COM5 and measures the discrepancies between the two modes. The parameters measured are listed here (each parameter is estimated for each packet; a minimal sketch of the per-packet comparison follows the list):

• Compression rate cr: maximum and minimum, mean and standard deviation over all the packets.

• Shannon Entropy s: maximum and minimum, mean and σ.

• Relative compression rate cr/cr^t (see eq. 4.2): maximum and minimum, mean and σ.

• Quantization Error |xCOM5 − xAVR1|: maximum and minimum, mean and standard deviation. All the numbers are provided for the sky signal, the reference signal and their plain difference (e.g. xsky − xref, with no r factor).

• For each channel the root mean square of the sky/reference/differenced signal is estimated, as well as its ratio with the quantization error (σ/q).
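As an illustration of the per-packet comparison, here is a minimal C++ sketch (names and structure are my own assumptions, not the actual ReVerie code):

    #include <algorithm>
    #include <cmath>
    #include <vector>

    struct ErrorStats { double max_err = 0.0, mean_err = 0.0; };

    /* Quantization error |x_COM5 - x_AVR1| over one packet, assuming
       the two streams have already been aligned sample by sample. */
    ErrorStats quantization_error (const std::vector<double>& avr1,
                                   const std::vector<double>& com5)
    {
        ErrorStats s;
        for (size_t i = 0; i < avr1.size (); ++i) {
            double err = std::fabs (com5[i] - avr1[i]);
            s.max_err = std::max (s.max_err, err);
            s.mean_err += err;
        }
        if (! avr1.empty ())
            s.mean_err /= avr1.size ();
        return s;
    }

The same loop, applied packet by packet, yields the maximum, minimum, mean and standard deviation quoted in the list above.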

Some plots produced by ReVerie are shown in figures 4.3 and 4.4. The verification of the REBA compression was performed using data from tests TUN_0062 and TUN_0063. The numerical results for cr and σ/q are shown in table 4.2. Bar charts are shown in figure 4.5.

Figure 4.5 shows that channels #2300, #2311, #2210 and #2211 were optimized using the more relaxed requirement cr ≥ 2.0. It is therefore important to verify that, despite this lower compression rate, the cr ≥ 2.4 requirement is satisfied on average. I therefore computed the average cr for each of the three frequencies and then calculated the overall weighted average, using 1/N_AVERAGE as weights (i.e. the weighted mean Σ w_i cr_i / Σ w_i, with w_i = 1/N_AVERAGE_i). The results for cr are shown in the following table:

Freq.      cr
30 GHz     2.542
44 GHz     2.477
70 GHz     2.461
Overall    2.482

The requirement cr > 2.4 has therefore been satisfied. Since the σ/q > 2 requirement has been satisfied as well (for every channel), the verification of the REBA compressor and of the optimizer has been concluded successfully.

§ 4.4. Detection of Jumps in the Radiometric Output

During the verification of the REBA compression we detected a number of jumps in the output of some detectors in feed horn #26 (they were unfortunately not recorded by the operators). We noticed that the stable radiometric output signal changed abruptly each time we changed the REBA parameters. These jumps could be seen only in the COM5 compressed stream, since the AVR1 uncompressed stream showed no jumps. We therefore recorded test XXX_0144 to investigate the cause of these jumps. Since preliminary studies indicated that the problem was caused by the squant parameter, we changed this parameter in several steps for detectors #2700, #2500 and #2000. The same behavior was found in all these detectors. The jumps found in detector #2500 are shown in figure 4.6.

Figure 4.3: Data stream for detector #1800 (test TUN_0062). The plots were produced using ReVerie, the REBA analysis module integrated in Lama.

Figure 4.4: Statistical information about the compression rate and quantization error for detector #1800 (test TUN_0062). The plots were produced using ReVerie, the REBA analysis module integrated in Lama.

[Figure 4.5: two bar charts, “Compression ratio” and “Sigma over q”, with one horizontal bar per channel (1800–2811).]

Figure 4.5: The compression ratio cr and σ/q value measured during tests TUN_0062 and TUN_0063. The vertical bars show the requirements cr > 2.4 and σ/q ≥ 2.0. The requirements on both parameters are met (considering that the requirement on the compression ratio is not per channel but on the overall instrument).

[Figure 4.6: plot titled “Comparison between the AVR1 and the COM5 signals for #2500”, showing the detector output [V] as a function of time [s] for the uncompressed (AVR1) and compressed (COM5) sky signals.]

Figure 4.6: Output of detector #2500 during the FM test XXX_0144. The compressed data exhibited a jump whenever we changed the value of squant.

The jumps occurred when the value of squant was changed: each of the three detectors exhibited jumps only when squant was larger than some threshold (different for each detector). The following table shows the behavior of detector #2500:

squant    Jump in #2500?
2.00      No
2.90      No
3.10      Yes
4.00      Yes

An explanation of these jumps was provided by Michele Maris. The minimum value of squant where the jump occurs is the value where the term inside the square brackets in equations (4.4) and (4.5) becomes ∼ 2¹⁶, and therefore the [·] (rounding modulo 2¹⁶) operator saturates. This saturation causes a spurious offset to appear in the decompressed data.

The saturation seen in #2601 was caused by OCA over-optimizing the compression. When multiple compressor configurations (r1, r2, ∆, squant) satisfy the cr and σ/q requirements, OCA always chooses the one with the smallest cr, since this implies small quantization errors. For the #2601 detector, the chosen configuration had a large squant (since larger values of this parameter necessarily lead to small quantization errors) and therefore saturated the [·] operator.

The problem has therefore been conceptually resolved. OCA is currently being upgraded to version 2.0 (foreseen for the end of 2007), and in the new version additional checks for the [·] saturation will be implemented.
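To make the mechanism concrete, here is a minimal C++ sketch (my own illustration; equations (4.4) and (4.5) are not reproduced here) of how a rounding-modulo-2¹⁶ operator wraps around once its argument crosses 2¹⁶, producing a constant spurious offset:

    #include <cmath>
    #include <cstdint>
    #include <cstdio>

    /* Stand-in for the [.] operator: round to the nearest integer,
       then keep only 16 bits, i.e. reduce modulo 2^16. */
    static uint16_t round_mod_2_16 (double x)
    {
        return static_cast<uint16_t> (std::llround (x));
    }

    int main ()
    {
        /* Below 2^16 = 65536 the operator is a plain rounding... */
        std::printf ("%u\n", (unsigned) round_mod_2_16 (65000.0)); /* 65000 */

        /* ...but once the term crosses 2^16 it wraps around, and every
           decompressed sample acquires the same spurious offset:
           66000 mod 65536 = 464. */
        std::printf ("%u\n", (unsigned) round_mod_2_16 (66000.0)); /* 464 */
        return 0;
    }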

CHAPTER 5

Verification and Calibration of Planck/LFI with the LIFE Package

Achieving the high performance required of the LFI instrument demands a careful calibration procedure, and in the preceding chapters I have illustrated some of the tasks done in the last years. In this chapter I will focus on the software tools used during the calibration, since this has been the most important part of my Ph.D. work.

To analyze the data collected during the LFI QM/FM tests, the LFI scientific team wrote the Lfi Integrated perFormance Evaluator (LIFE), a tool to verify the full functionality of LFI and calibrate the instrument (the general documents explaining the structure of LIFE are Galeotta, 2006a,b). LIFE has been used extensively during almost every phase of the RCA/RAA tests, and it has also been the main tool used to generate the results and the plots included in chapters 3 and 4. It will also be used for the forthcoming test of the Planck satellite at cryogenic temperatures to be held in the Centre Spatial de Liège (Belgium) and for the tests to be performed in flight during the Core Phase Verification (CPV) phase.

Developing LIFE has been the primary topic of my work in the last three years, and in this chapter I will discuss the most important contributions I made to the software. The structure of the chapter is as follows:

• The first paragraph, LIFE, an Analysis Tool for the LFI Test Campaigns, describes the purposes of LIFE and the technologies it employs.

• The paragraph Radiometric Test Analysis with LIFE: the RaNA Package describes RaNA, the LIFE tool used in the RCA tests.

• In paragraph LFI Test Analysis with LIFE: the LAMA Package we intro- duce Lama, the LIFE tool used during the RAA tests on the whole LFI instrument.


• Finally, paragraph Future Developments of LIFE explains how the soft- ware will evolve to provide the Planck/LFI team with tools to be used during flight.

§ 5.1. LIFE, an Analysis Tool for the LFI Test Campaigns

Data collected during the RCA tests and the RAA tests use different data formats, but it is obviously desirable to have a common tool to analyze them both. Suppose for instance that the analysis of an RAA test is done using a different tool than the same analysis on the RCA. If the two results disagree, this could be ascribed either to a failure in the hardware or to different assumptions made by the two analysis tools. Therefore, it is desirable to have one tool capable of doing the same analysis on the two tests.

To address this issue, the Planck team decided to develop the LIFE software package. LIFE estimates a number of properties of the instrument (e.g. the white noise level of a detector, the optimal bias for the amplifiers) by means of a set of analysis modules that can be used with both RCA and RAA data. At the moment the LIFE analysis modules are runnable under two different environments, while a third one is under development:

RaNA provides access to RCA data;

Lama provides access to RAA data;

Pegaso (under development) will provide access to the data measured during the flight operations; it is heavily based on Lama, as the file format used during flight shares many characteristics with the RAA data format.

We summarize here the main features of LIFE:

1. It has the ability to load both RCA and RAA data acquired during the QM/FM tests and to analyze them using a common set of analysis modules.

2. Data can be plotted, zoomed, selected and panned with the mouse. Plots can be scaled and shifted, and multiple plots are supported.

3. The analysis tools are mainly written using RSI IDL, since this is the standard analysis environment used by most of the scientific team. Using IDL, it is also very easy to implement new analysis modules.

Figure 5.1: The LIFE analysis modules access data through a Common Data Interface (CDI), a set of functions to access the radiometric and housekeeping data. The context determines whether the module needs RCA data (RaNA), RAA data (Lama) or flight data (Pegaso). Because of the complexity of their data formats, Lama and Pegaso implement a fast kernel (written in C++) which is interfaced to the CDI by means of a tiny library, Lama Link and Pegaso Link respectively. The Pegaso kernel has two ways to retrieve flight data (hence the two arrows): by accessing an Oracle database or by opening FITS files. The first option will be made available at the LFI DPC site only.

4. In many cases (especially during the RAA tests), IDL proved to be too slow for data I/O. Therefore, the most speed-critical routines have been written using compiled languages (C and C++).

5. The scientific team involved with the tests grew larger and larger, and the use of IDL as the only tool to access the data began to be considered a limitation. Therefore I have restructured the code of Lama in order to ease the implementation of new language bindings other than RSI IDL, as well as for the future LIFE tool based on Lama that will be used during flight.

The key feature of LIFE is the Common Data Interface (CDI), a set of IDL functions first implemented by myself that allow access to the data (e.g. to get the radiometric output for the sky signal the functions are get_sky_x and get_sky_y). Each function in the CDI accepts as its first parameter a variable named caller which is either 'rana' or 'lama' (and will soon include 'pegaso' as well). According to the value of caller, the CDI calls either RaNA or Lama and returns the result to the caller. Figure 5.1 shows a sketch of how the tools work together.

Name      Batch   GUI    Purpose
Bscope    X              Quick-look tool for radiometric biases and currents
FFT       X       X      Spectrum analysis of detector output
LinG      X       X      Determination of the linearity coefficient and gain of the detectors
OCA       X              Optimization of the REBA compressor (see chapter 4)
ReVerie   X              Estimation of the quantization error performed by the REBA packet compressor
Susc      X       X      Susceptibility of the detector output to external impulses
THF       X              Estimation of thermal transfer functions (see chapter 3)

Table 5.1: List of the analysis modules implemented in LIFE 3.0 and used during the RCA/RAA tests.

§ 5.2. Radiometric Test Analysis with LIFE: the RaNA Package

RaNA is a LIFE tool to analyze the data collected during the RCA QM campaign. It was the first tool to be implemented (2004). It has been developed in IDL with some small parts in C. Today it is still used for the analysis of the tests on the LFI spare RCAs being held at the Thales/Alenia Space laboratories in Vimodrone (an activity started in May 2007).

RaNA implements a Graphical User Interface (GUI) which allows the user to perform a number of standard tasks, but also gives the user the ability to work by typing commands at the IDL command line. For instance, the average value of the sky signal from the first detector (number 0) can be calculated with the following IDL command:

IDL> print, mean (get_sky_y ('rana', 0))
     1.56310

The flexibility of the command line has proven crucial during the LFI tests.

5.2.1. Format of the LFI RCA Data. Each test saved during the RCA tests is composed of four files, written in FITS format:

1. A file containing the full scientific data for all the 4 detectors of the RCA under study;

2. A file containing the scientific data downsampled to 1 Hz. These data are used by RaNA to give a “quick look” at the data when plotting the radiometric output. The downsampling is done by Rachel (see section 2.5.2).

3. A file containing housekeeping data, such as amplifier biases and tem- peratures in the cryochamber.

4. A file containing textual messages with associated time stamps, written by the instrument operators during the tests. Examples are: “Temperature of the sky load changed from 20 K to 25 K”, or “Amplifiers have been switched off”.

This data format has been chosen in order to simplify the data acquisition during the RCA tests, but it does not resemble the file format that will be used during flight. We shall return to this issue in the paragraph about the RAA tests.

5.2.2. The RaNA Data Analysis Modules. A list of the data analysis modules implemented in RaNA is provided in table 5.1. Each module has been implemented in IDL by the scientific team responsible for the related data analysis. Three modules (FFT, LinG, Susc) can be used in two modes:

GUI mode provides a graphical user interface to enter the input needed by the module and get plots of the output.

Batch mode allows the user to start the analysis from the IDL command line. This is very useful when an analysis has to be repeated a number of times (e.g. using Bscope to examine the biases applied to each of the 44 LFI detectors), because the call to the module can be included in a for loop, e.g.:

for i = 0, n_elements (feed_horns) - 1 do $
    lama_bscope, feed_horns[i]

The double GUI/Batch mode has not been implemented for Lama-specific analysis modules like Bscope and ReVerie, since they were designed for RAA analysis only, where the high number of detectors to consider in the calculations makes any GUI interaction unfeasible.

§ 5.3. LFI Test Analysis with LIFE: the LAMA Package

5.3.1. Format of the LFI RAA Data. Unlike the RCA data format (see § 5.2.1), the RAA format is composed of a variable (and possibly large) number of files. It will be used during the flight operations as well, and the choice of using the flight data format during the RAA tests has made it possible to test the format itself and has suggested some improvements.

Each radiometric/housekeeping parameter is saved in a set of FITS files, each containing no more than one hour of data. Therefore, the longer a test lasts, the more files will be saved. As an example, the FM test ST1_0002 lasted about 45 hours and recorded all the 44 detectors plus 525 housekeeping parameters. The overall number of FITS files accessible to Lama was¹ 25,382. A simplified list of files that could be recorded after an RAA test is:

030LFI2710_RAA_FM_XXX_COM5_20060905180218.fits
030LFI2710_RAA_FM_XXX_COM5_20060905180231.fits
030LFI2710_RAA_FM_XXX_COM5_20060905180247.fits

030LFI2711_RAA_FM_XXX_COM5_20060905180218.fits 030LFI2711_RAA_FM_XXX_COM5_20060905180232.fits 030LFI2711_RAA_FM_XXX_COM5_20060905180247.fits

LM001322_RAA_FM_XXX_20060905180204.fits LM001322_RAA_FM_XXX_20060905180220.fits LM001322_RAA_FM_XXX_20060905180235.fits

LM001326_RAA_FM_XXX_20060905180205.fits LM001326_RAA_FM_XXX_20060905180221.fits LM001326_RAA_FM_XXX_20060905180236.fits

In this example only four parameters were recorded: the output of two detectors (#2710 and #2711; for the naming conventions of LFI radiometers, see section 2.3.2) and two housekeeping parameters (identified by the codes LM001322 and LM001326). The test lasted more than two hours but less than three, since each parameter was saved into three files. The file names end with a 14-digit date/time stamp that indicates when the file was created².

Note that the number of parameters recorded during the test (4) is proportional to the number of files (12). This is a general rule that will be used later to quantify the efficiency of the algorithms used by Lama to load these files.

5.3.2. Tree Representation of RAA Data under Lama. Of all the files saved when a test is completed, Lama only accesses the Time Ordered Information (TOI) files, which contain the radiometric and housekeeping data sorted in chronological order. (Other files might contain the raw packets and data in intermediate formats, and are used mainly for debugging purposes. Log messages are also saved in separate files.)

In the development of Lama, we implemented a method different from RaNA's to select the data to load and plot. The approach used in RaNA

¹ Note that this number is not exactly equal to (44 + 525) × 45 because some housekeeping parameters were recorded for only 44 hours.
² It has no relation to the date when the test was run, since these FITS files are always created once the test has been completed.

Figure 5.2: Simplified view of the tree structure used by Lama to organize the data files of a test. Starting from a root node, data are organized in a hierarchical structure of nodes connected by parent-children relationships. Every terminal node (a leaf, shown in gray) is a container for some data, while nodes at the upper levels only provide the hierarchy.

(i.e. manually choosing the file to open) was not flexible, because it would have been very difficult to find the right files to open out of a list of several thousand.

The new method is based on the concept of a tree structure, where all the parameters are categorized into multiple levels like the branches of a tree. The elements of the structure are called nodes, and are connected together by parent-children relationships. Terminal nodes (i.e. nodes with no children) are called leaves and represent a specific data set in the TOI directory, e.g. the temperature of feed horn #28. A simple example is shown in fig. 5.2, with leaves drawn in gray.

The template of the tree structure is specified in an eXtensible Markup Language (XML) file which is loaded during the program initialization and contains all the parameters that can be recognized by Lama. Note that they are not necessarily present in every test. (E.g. it is possible that during an experiment some temperature sensors were still to be calibrated, and therefore the operator did not record them.)

When a new test is loaded, Lama creates a local copy of the tree and matches its leaves with the TOI files recorded in the test (this is a one-to-many relationship: in the example above, the 12 files would be associated with 4 leaves). After this process is completed, the leaves that are still empty (e.g. the uncalibrated temperature sensor in the example above) are removed from the tree. This operation is called tree optimization and is repeated until the tree does not change anymore³.

³ This is because every time a leaf is removed from the tree there is the possibility that the parent node is left with no child nodes, and has therefore become a leaf itself.
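As an illustration, a minimal C++ sketch of this pruning step (assumed types and names, not the actual Lama code) could look like this:

    #include <memory>
    #include <vector>

    struct Node {
        bool has_data = false;                  /* meaningful for leaves */
        std::vector<std::unique_ptr<Node>> children;
    };

    /* Prune the tree bottom-up; return true if `node` itself has become
       an empty leaf and should be removed by its parent. A single
       bottom-up pass implements the "repeat until nothing changes" rule,
       because children are pruned before their parent is examined. */
    bool prune (Node& node)
    {
        auto& ch = node.children;
        for (auto it = ch.begin (); it != ch.end (); ) {
            if (prune (**it))
                it = ch.erase (it);
            else
                ++it;
        }
        return ch.empty () && ! node.has_data;
    }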

Matching leaves with files is a conceptually simple operation, but a naive algorithm would make Lama unreasonably slow, especially for long tests like ST1_0002. To better appreciate the algorithm I have implemented in Lama and to understand how it works, I shall begin by explaining how this code was implemented in the first versions of Lama, and then show the improvements I have made in the code.

Linear Implementation of the File Matching Algorithm. The first version of the tree creation code was implemented in 2005 for the QM tests. It was written in IDL and implemented a linear algorithm to match the data files with the tree nodes. In the linear algorithm, each file is matched against every leaf in the tree, until the right leaf is found. Supposing that N is the number of files to match and that the number of leaves is proportional to N (see 5.3.1), this implies that the number of comparisons to be made is proportional to N², that is, the algorithm runs in O(N²) time. (For the meaning of O, refer to appendix A.) As a matter of fact, the code required several minutes to load any QM test longer than a few hours.

Binary Tree File Matching Algorithm. In the first development version of Lama 2.0 I rewrote the code in C++ and used a new binary search algorithm to improve the execution speed. Binary algorithms are a well-known method to perform fast searches in ordered sets (see e.g. Cormen et al., 1990). They use a recursive subdivision of the parameter space into two parts, each time considering only one half, hence the adjective “binary”.

Lama implements a binary search for matching leaves against files⁴. (This is the opposite of the linear algorithm, where files are matched against leaves.) For each leaf, the binary search algorithm figures out the first letters of the file names which might match this leaf (the first letters are always the code of the housekeeping parameter or of the detector, see e.g. the example at page 102), and searches the list of TOI files using the following “divide and conquer” algorithm:

1. Get the name of the file in the middle of the list (that is, the N/2-th file, or the (N + 1)/2-th if N is odd): this is the candidate;

2. Check if the candidate matches the name of the leaf;

3. If so, the candidate is associated with this leaf and removed from the list of files;

⁴ The list of files must be ordered lexicographically for the algorithm to work. This is a general requirement for binary search algorithms.

Figure 5.3: Binary search applied to a simple example. Suppose to look for the name “Cherry” in a list of 11 fruits (left). We consider as candidate the element in the middle of the list, “Melon” (highlighted with a Gray background). Since “Cherry” has a lower lexicographical order than “Melon”, the element must therefore be searched in the first half of the list (center). The comparison with the middle element (“Blueberry”) reveals that “Cherry” must come after, so (right) the next comparison gives us the right match.

4. If the name of the candidate has a smaller lexicographical order than the name of the leaf, then repeat from point 1 but consider only the second half of the list. Otherwise, consider only the first half.

The algorithm is sketched in fig. 5.3 (in each step, the candidate has a gray background).
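As an illustration, a minimal C++ sketch of this search (assumed names, not the actual Lama code; the removal of matched files and the subsequent-file optimization described below are omitted):

    #include <string>
    #include <vector>

    /* Return the index of a file whose name begins with `prefix`, or -1
       if there is none. `files` must be sorted lexicographically (see
       footnote 4). */
    int find_matching_file (const std::vector<std::string>& files,
                            const std::string& prefix)
    {
        int lo = 0, hi = (int) files.size () - 1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;             /* the "candidate" */
            const std::string& name = files[mid];
            if (name.compare (0, prefix.size (), prefix) == 0)
                return mid;                      /* candidate matches the leaf */
            if (name < prefix)
                lo = mid + 1;                    /* search the second half */
            else
                hi = mid - 1;                    /* search the first half */
        }
        return -1;
    }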

Such an algorithm always requires ⌈log₂ N⌉ comparisons or fewer, where ⌈x⌉ indicates rounding up to the nearest integer. (For instance, in the example in fig. 5.3 the maximum number of comparisons is ⌈log₂ 11⌉ = 4, which would have been reached if we had searched for “Coconut”.) Since Lama must apply this algorithm for each leaf in the tree, if the number of leaves is proportional to the number of files (see sec. 5.3.1) then the algorithm is O(N log₂ N), to be compared with O(N²) for the linear algorithm. Moreover, removing each file once it has been matched means that while processing more and more leaves the list of files shrinks and the search gets faster. So a better figure for the number of comparisons is given by

⌈log₂ N⌉ + ⌈log₂(N − 1)⌉ + ⌈log₂(N − 2)⌉ + ··· + ⌈log₂ 1⌉ ≈ log₂ N!,

where the n-th term is the number of comparisons for the n-th leaf. (We have used the approximation ⌈log₂ x⌉ ≈ log₂ x.) This can be rewritten using Stirling’s formula (valid for large N):

log₂ N! ≈ N log₂ N − N log₂ e + log₂ √(2πN).    (5.1)

The order of magnitude is still N log₂ N, but we save a number of comparisons proportional to N (second term).

I implemented an additional improvement which takes advantage of the fact that many files with similar names refer to the same node (see the example at page 102). The names of these files begin with the same characters and differ only in the last part, which contains the time stamp of the file. The optimization works by searching for the first file in the set (i.e. the one containing the first hour of data), and then trying to match any subsequent file with the same leaf. Therefore, if m is the average number of files per parameter (in the example in sec. 5.3.1, m = 12/4 = 3), then the first file is found with an O(log₂ N) algorithm, while each of the following m − 1 files is found with a single comparison. The impact of this optimization on equation (5.1) is that every N is replaced with N/m, where m is the average number of files per parameter, and a constant proportional to m − 1 is added at the end. The algorithm has the same asymptotic behavior as equation (5.1), but is roughly m times faster.

With this new algorithm and the superior speed of C++ over IDL, test ST1_0001 is loaded in less than 10 seconds, to be compared with the tens of minutes that would have been required by the first linear version of the algorithm.
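As an illustrative order-of-magnitude check (my own arithmetic, based on the figures quoted in sec. 5.3.1): for a test with N ≈ 25,000 files, the linear algorithm needs of the order of N² ≈ 6 × 10⁸ comparisons; the plain binary search needs log₂ N! ≈ 3 × 10⁵; and with the per-parameter optimization (m ≈ 45, since 569 parameters were spread over about 25,000 files) the count drops to roughly (N/m) log₂(N/m) + N ≈ 3 × 10⁴, i.e. about four orders of magnitude fewer than the linear version.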

5.3.3. Support for Multiple Measure Units in the Data. Scientific and housekeeping parameters loaded by LIFE have different measure units, and often the same data set allows for multiple units. Consider the radiometric output: it can be measured as a digital number (DEC) coming out of the DAE, a voltage (V) or a temperature (K or °C). Different analysis modules often need to use one unit instead of another. For instance, we have seen in chapter 4 that the ReVerie module needs radiometric data expressed as a stream of 16-bit numbers in order to estimate the compression error induced by the REBA.

Lama implements a global table of measure units and a local measure unit list for each leaf in the node tree. The global table contains all the measure units recognized by the program and specifies the conversion functions between them. The unit tables owned by each leaf are called local tables. They are lists of pointers to the elements of the global table that are supported by the data set of the leaf, and are initialized, once the leaves have been matched with files, with the data units that make sense for each leaf. See fig. 5.4.

The advantage of having a single global table of measure units is that it is very fast to check whether two leaves contain comparable data, because one only needs to know if their local tables both reference the same entry in the global table. Also, each local table consumes very little memory, because it holds no real data other than pointers to the global table.

Figure 5.4: Sketch showing how Lama assigns measure units to leaves. A global measure unit table (left) holds a predefined set of units. When a test is loaded and the node tree is being built, Lama creates a table for each leaf that contains a list of measure units suitable for that leaf (right), both for the X (time) and Y (data) streams.

When an analysis module calls one of the CDI functions to retrieve a data set, it is free to specify a measure unit or not. If it does not, a default measure unit is used. Each leaf has its own default measure unit, defined in the XML tree template loaded by Lama during its initialization (see section 5.3.2).
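The design can be summarized with a minimal C++ sketch (assumed types and names, not the actual Lama code):

    #include <string>
    #include <vector>

    struct MeasureUnit {
        std::string name;                 /* e.g. "DEC", "V", "K" */
        double (*from_raw) (double);      /* conversion from raw data */
    };

    /* One global table for the whole program... */
    std::vector<MeasureUnit> global_units;

    /* ...and, for each leaf, a cheap "local table": just pointers into
       the global one, plus a default unit. */
    struct Leaf {
        std::vector<const MeasureUnit*> local_units;
        const MeasureUnit* default_unit = nullptr;
    };

    /* Two leaves hold comparable data if their local tables reference
       the same entry of the global table: a plain pointer comparison. */
    bool comparable (const Leaf& a, const Leaf& b)
    {
        for (const MeasureUnit* ua : a.local_units)
            for (const MeasureUnit* ub : b.local_units)
                if (ua == ub)
                    return true;
        return false;
    }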

5.3.4. Extension of the RaNA Approach to Multiple Feed-Horns. Unlike RCA tests, RAA tests allow the output of multiple feed horns to be recorded, and up to two data processing types (see section 4.1.4) can be used at the same time to record the radiometric output. Moreover, Lama is able to load more than one test at a time, while RaNA does not have this feature. This poses a number of problems for the analysis modules: for instance, how can a module let the user choose which test to analyze when used from Lama, if that module is supposed to work with RaNA too?

To solve this problem, Lama implements the concepts of default feed horn, default processing type and default test. By browsing the node tree, the user selects the test, feed horn and processing type that will be used by any subsequent call to the functions of the CDI. These functions however allow the use of keywords to override the defaults, e.g.:

IDL> print, mean (get_sky_y ('lama', 0, feed_horn=23))
     0.82655

5.3.5. Downsampling of Radiometric Outputs. Lama provides quick access to radiometric data by implementing a downsampling technique similar to the one used in RaNA. Downsampling is necessary because of the size of the RAA data sets, up to several megabytes⁵. The downsampling is applied only to radiometric data, since their sampling frequency is higher than for housekeeping (typically 30 times greater).

Lama stores the downsampled radiometric data into AUX files, much like in RCA data. The difference is that the downsampling is not done by the acquisition system when the test is saved on disk. Rather, it is Lama that creates the AUX data “on the fly” when they are needed. (In computer science, such data access methods are called lazy, since the evaluation is postponed as long as possible.) AUX files are created whenever the program needs scientific data and no specific request for fully sampled data is made. Consider for instance this example:

IDL> print, mean (get_sky_y ('lama', 0))
     0.82655

If no AUX files are available for detector number 0, then get_sky_y forces Lama to create one before printing 0.82655 at the IDL prompt.
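A minimal C++ sketch of this lazy mechanism (assumed names; the averaging and the factor of 30 are my own illustrative choices, since the text only states that the data are downsampled):

    #include <map>
    #include <vector>

    /* Downsample by averaging groups of `factor` samples. */
    std::vector<double> downsample (const std::vector<double>& full,
                                    size_t factor)
    {
        std::vector<double> out;
        for (size_t i = 0; i + factor <= full.size (); i += factor) {
            double sum = 0.0;
            for (size_t j = 0; j < factor; ++j)
                sum += full[i + j];
            out.push_back (sum / factor);
        }
        return out;
    }

    /* "Lazy" access: the AUX stream of a detector is computed only on
       the first request, then cached and reused. */
    std::map<int, std::vector<double>> aux_cache;

    const std::vector<double>& get_aux (int detector,
                                        const std::vector<double>& full_data)
    {
        auto it = aux_cache.find (detector);
        if (it == aux_cache.end ())
            it = aux_cache.emplace (detector,
                                    downsample (full_data, 30)).first;
        return it->second;
    }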

5.3.6. Accessing data. We present here the full algorithm used to access the test data, putting together all the concepts presented in the previous sections. When a radiometric or housekeeping parameter is requested through the CDI interface, the Lama kernel searches for the leaf associated with that parameter and then runs algorithm 5.1:

1. The first step (line 2) is to determine if the local measure unit table of the leaf has already been initialized (see also fig. 5.4).

2. Line 5 checks if the files associated with this leaf have already been loaded; if not, it loads them immediately (“lazy” evaluation).

3. Once data are available in memory, they are copied into a new raw vector (line 8).

4. Line 10 applies the right transformation to the raw vector to convert the data into the measure unit specified by measure_unit. The result is copied into dataset.

5. The dataset vector is returned to the calling function.

The dataset vector is an array of floating-point numbers which the CDI returns to the caller (either an analysis module or the IDL command line).

⁵ For instance, the data for the first detector of feed horn #28 in the FM test ST1_0002 are saved into 45 files whose overall size is almost 180 MB.

Algorithm 5.1: The algorithm used to return a data stream. Refer to the body text for further information.
Data: dataset_type, measure_unit
Result: The data stream

        // Check if the measure unit table has been initialized
 2      if measure unit table not initialized for this node then
 3          initialize_units;
        // Check if the files have been already loaded or not
 5      if this data set has not been loaded yet then
 6          load_data_from_disk;
        // Retrieve the data in raw format (e.g. DEC units)
 8      raw ← raw_data (dataset_type);
        // Calibrate the data using the appropriate measure unit
10      if measure_unit is valid then
11          dataset ← transform (raw, measure_unit)
12      else
13          signal_an_error;
14          exit;
15      return dataset

5.3.7. The Lama Link module. The algorithm presented in the previous section provides a complete overview of the way Lama extracts data from files and transforms them into the requested format, but it does not tell the whole story. If readers look back at figure 5.1, they will note a difference between RaNA and Lama: while the RaNA kernel acts as a direct link between the CDI and the RCA data files, Lama has Lama Link, an additional layer between the CDI and the kernel. This is a C library designed and developed by myself during the first half of 2007. In this section I will explain the motivations behind Lama Link and how it has been implemented.

Purposes of the Library. Lama Link is a data transfer library that sends blocks of data of arbitrary size between two processes, the client (which asks for data) and the server (which provides data), running either on the same computer or remotely. Within LIFE, the server is Lama and the client is IDL. This approach has several advantages:

1. It ensures better protection from program crashes: if Lama crashes or freezes, IDL is not affected (and vice versa).

2. Lama can be started without IDL (useful for systems where a valid IDL license is not present).

3. It greatly simplifies the debugging of C++ code.

[Figure 5.5: bar chart of the lines of code (IDL versus C/C++) in LIFE 2.0 and LIFE 3.0 (w/o Pegaso), on a scale from 0 to 3 × 10⁵ lines.]

Figure 5.5: Impact of the implementation of Lama Link on LIFE 3.0, in terms of lines of code (comments in the source code were counted as well). Note that the source code for Pegaso (not present in LIFE 2.0) was not taken into account for this plot.

4. Using Lama Link, a new CDI can in principle be created for languages other than IDL. For instance, it is possible to port the CDI to Fortran. Of course this new CDI could run under Lama and Pegaso but not under RaNA (because there is no RaNA Link library).

5. Since the kernel can be run on a machine different from the one running the IDL analysis modules, a long analysis can be started on dedicated machines. This is a particularly desirable feature for those modules that take a long time to run (e.g. OCA) and do not require the monitoring of a complex graphical user interface.

Protocol Specification. Lama Link implements a network protocol to transmit information between the client and the server. Data are split into packets and sent through a network socket. The format of a packet is shown in figure 5.6. The first bytes contain a header with low-level information about the data being sent:

1. The size field contains the number of bytes of the packet (including the header but excluding size).

2. Of the 32 bits allocated for flags, only the first is used: if it is 1, then the packet has been compressed⁶ using the LZO algorithm (Oberhumer, 2006). The other bits of flags are reserved for future use (e.g. they could be used to indicate if the packet has been encrypted

⁶ In Lama 3.0, compression is used only for packets greater than 64 kB.

[Figure 5.6: diagram of the packet layout, with bit offsets 0–32 marked: the header fields “size”, “flags”, an optional “uncompressed size” and “type”, followed by the packet data.]

Figure 5.6: Lama packet format. The multi-byte fields (“size”, “flags” and “uncompressed size”) are stored in network-byte order in order to preserve consistency when communicating between machines with different endianness (e.g. i386 versus PowerPC).

or not, or to signal that a compression algorithm different from LZO has been used).

3. The uncompressed_size field is present only if the packet has been compressed, and contains the size of the uncompressed data.

4. The type field is unused and set equal to zero.

An example of a full packet is provided here:

00 00 00 12                              packet size
00 00 00 00                              flags
00 00                                    type
67 65 74 5f 76 65 72 73 69 6f 6e 00      packet contents ("get_version")

The first field, packet size, is the number of bytes that follow (18). Note that this number is in network-byte order: on a little-endian machine, such as one with an Intel CPU, you would have expected 12 00 00 00. Since the packet is not compressed (flags is zero), the uncompressed size field is not included. After type (which is always zero) the packet data follow.
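As an illustration, the header layout just described can be reproduced with a short C++ sketch (assumed function name, not the actual Lama Link code):

    #include <arpa/inet.h>   /* htonl(), htons(): POSIX */
    #include <cstdint>
    #include <cstring>
    #include <string>
    #include <vector>

    /* Build an uncompressed packet around a NUL-terminated command
       string: size (4 bytes) + flags (4) + type (2) + payload. */
    std::vector<uint8_t> encode_packet (const std::string& payload)
    {
        uint32_t size  = htonl (4 + 2 + payload.size () + 1); /* bytes after "size" */
        uint32_t flags = htonl (0);    /* bit 0 clear: no compression */
        uint16_t type  = htons (0);    /* always zero */

        std::vector<uint8_t> pkt (4 + 4 + 2 + payload.size () + 1);
        std::memcpy (&pkt[0],  &size,  4);
        std::memcpy (&pkt[4],  &flags, 4);
        std::memcpy (&pkt[8],  &type,  2);
        std::memcpy (&pkt[10], payload.c_str (), payload.size () + 1);
        return pkt;
    }

Calling encode_packet ("get_version") reproduces the 22-byte dump shown above.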

Encoding of Data into Packets. Nothing has been said yet about the structure used for the data contained in a packet (i.e. anything following the type field). This seems reasonable, as the data can vary widely among different contexts: a packet could in principle contain a string, a number or an array of numbers. I have developed a higher-level protocol to represent data within a packet. Packets encoded using this protocol are called⁷ command packets.

⁷ The name is rather unfortunate. It was chosen when this kind of packet was thought to be useful only to encode commands to be sent from the client (IDL) to the server (the Lama kernel). This protocol has subsequently proven to be useful in many other contexts, yet the name has survived.

We explain the concept of command packets through an example. Suppose that a function get_test_path has been implemented in Lama:

/* Return the full directory where TEST_NAME was saved */
char * get_test_path (char * test_name)
{
    /* ... */
}

To add the ability to call get_test_path from another program (including IDL), Lama Link implements command packets. When the external program wants to execute get_test_path, it sends Lama a command packet containing the function name and its argument test_name:

llink_encode_and_send_cpacket (link, NULL,
    BCC_STRING, "get_test_path",
    BCC_STRING, "XXX_0121",
    BCC_END);

The Lama Link function llink_encode_and_send_cpacket creates a command packet containing two strings and sends it through the connection identified by link. Lama then reads the packet using the following code:

char * command = NULL;
char * test_name = NULL;

/* This function will allocate memory for COMMAND and TEST_NAME
   and will copy the two strings into them */
llink_receive_and_decode_cpacket (link, NULL,
    BCC_STRING, &command,
    BCC_STRING, &test_name,
    BCC_END);

if (strcmp (command, "get_test_path") == 0)
{
    char * path = get_test_path (test_name);

    /* Now send PATH back to the client using a similar approach */
}

/* Now send PATH back to the client using a similar approach */ } The llink_receive_and_decode_cpacket function is a companion for llink_encode_and_send_cpacket: it receives a command packet and ex- tracts the types specified by its arguments. All the functions defined in the CDI follow the same approach as this simple example.


§ 5.4. Future Developments of LIFE

The current version of LIFE (3.0) is usable with a large number of data sets: RCA QM, RCA FM, RAA QM, RAA FM and the new tests on the full satellite currently ongoing at the Thales laboratories in Cannes (France). Supporting each of these test types is a difficult task, as there are often subtle differences in the file formats used (e.g. QM and FM tests on the RAA use different file formats and require different calibration tables for the output of the detectors). The implementation of a CDI has proved to be a reliable way to overcome these difficulties.

The next step in the evolution of LIFE is the finalization and integration of the Pegaso tool. This will join RaNA and Lama in figure 5.1, allowing the CDI to access data from the flight database as well. Further information about Pegaso will be given in the next chapter.

CHAPTER 6

Conclusions and Future Work

§ 6.1. Use of LIFE in the LFI QM/FM Tests

Modern cosmology enables us to address scientifically some questions that are rooted in the whole history of human thought. Precision cosmology measurements are becoming reality with CMB anisotropy experiments, since they can catch a glimpse of the universe in its very early stages. The Planck mission, whose launch is scheduled for 2008, will help us better understand the physics that produced the observable universe. Its two instruments, the LFI and the HFI, will perform high precision measurements of temperature and polarization anisotropies with unprecedented sensitivity, angular resolution and spectral coverage. Because of Planck's ambitious scientific goals, the two instruments require extreme care in ground and flight calibrations.

This work has reported my contributions to the Planck/LFI calibration campaign, including the development and use of the LIFE software tools. The tests involved the verification and calibration of single radiometric chains (RCA) as well as of the full instrument, and were repeated twice: the first time on prototypes, the second time on flight instruments. Repeating the tests also provided the opportunity to verify and improve LIFE, as has been shown e.g. in section 5.3.2.

The LIFE software has proven to be adequate for use in every test performed on the Planck instrument. In this thesis I have discussed the two tests that were performed under my supervision, i.e. the estimation of the focal plane thermal transfer functions (chapter 3) and the calibration and verification of the data compressor (chapter 4). Both analyses have provided good results:

1. The validation of the thermal model showed a good agreement between the thermal model and the measured transfer functions.


In all the temperature sensors analyzed, the temperature damping was, as expected, better than in the thermal model. I have found that one of the 10 correspondences between sensors and nodes in the thermal model was not optimal (TS5R). After an analysis of the thermal model, I proposed a better correspondence that produces results for TS5R similar to those of the other 9 sensors.

2. The software tool used for the calibration of the REBA data compressor (OCA) has been used to find a set of parameters that satisfies the requirements on the compression level and the quantization error. I have verified these parameters using LIFE and have found that the requirements are indeed satisfied. The cause of some unexpected jumps in the output of one radiometer (observed during the REBA calibration) was identified as a bug in OCA, and a patch has been proposed.

Both analyses have been made using dedicated modules of LIFE whose implementation was coordinated by myself. The results of these analyses are a good example of the capabilities of LIFE, i.e. simple access to all the scientific/housekeeping data recorded during a test and the ability to develop batch analysis procedures that can be iterated as often as required with no user intervention.

§ 6.2. In-flight Testing and the Future of LIFE

6.2.1. Use in the Next Satellite Tests. The usage of LIFE in the Planck mission does not end with the LFI RCA/RAA tests. At the moment the integrated satellite is undergoing a number of tests in warm conditions at the Thales/Alenia Space laboratories in Cannes (France), and in April 2008 a set of cryogenic tests is scheduled at the CSL laboratories in Liège (Belgium). In the present and future ground tests of the Planck satellite, Lama will be the tool used to verify the LFI. The same people involved in the testing of the LFI instrument alone will be responsible for these tests, and the experience gained in using LIFE will be vital in establishing the success of the Planck test campaign.

6.2.2. Use in Flight. LIFE is currently being extended for use in flight. The requirements for Planck/LFI include the production of a Daily Quality Report (DQR) once per day and a Weekly Health Report (WHR) once per week:

1. The DQR is a document that will be produced automatically by LIFE each day during flight operations and will contain information that can be divided into two broad categories:

(a) Data relevant to the entire observation day. These include general information as well as warnings or comments about the data acquisition (e.g. the presence of gaps in the radiometric output).

(b) Data relevant to individual pointings. These include the planned and effective galactic latitude/longitude, suspicious values in temperatures, biases or currents, and additional statistics and noise information for each detector.

2. The WHR will be a joint LFI/HFI document containing the health status of both instruments. It will contain more complex information about the instruments as well as the estimated trends of some properties of the detectors (e.g. the white noise level).

Although the exact format of the two documents is still to be defined, the LFI scientific team has agreed that the LFI DQR reports will be created by a new tool to be developed and integrated into LIFE: Pegaso. Pegaso is a fork of the Lama code base currently developed by Samuele Galeotta. The user interface is similar to Lama's, but it has the additional ability to interface with the Oracle database held at the LFI DPC that will store all the data sent by Planck during flight, much in the same way as Lama did for the RAA tests.

APPENDIX A

Algorithmic Asymptotic Behavior

In this appendix we provide some information useful to estimate the performance of an algorithm. I will follow closely some sections of Cormen et al. (1990).

In chapter 5 we used the notation O to indicate how the time required by an algorithm scales with the amount of data. This is a useful parameter to estimate the efficiency of an algorithm. The definition of O is the following. Given some function g(n) with n ∈ ℕ, the set O(g(n)) is defined as follows:

O(g(n)) = { f(n) : ∃ c > 0, n₀ > 0 such that f(n) ≤ c g(n) ∀ n ≥ n₀ }.    (A.1)

Therefore, O(g(n)) is the set of functions that are asymptotically smaller than g(n). For instance, f(n) = 2n² + n belongs to O(n²), since 2n² + n ≤ 3n² for every n ≥ 1 (take c = 3 and n₀ = 1 in the definition). The O sets satisfy the following hierarchy:

O(1) ⊊ O(n) ⊊ O(n log n) ⊊ O(n²) ⊊ ···    (A.2)

The O concept is used in algorithm theory to indicate the number of operations needed to operate on a set of n data. An algorithm is e.g. said to be O(n) if the number of operations needed to process n input data scales with n in the worst case. It is important to note that O is not enough to establish the execution time of an algorithm, for a number of reasons:

• It represents the “worst case” but gives no information about the average case. For instance, searching for a specific record in a set of N records is an O(N) algorithm, because N/2 comparisons are needed on average to find a match. But the best case (i.e. if the match is the first record) always requires one comparison.


• It is not related to the number of steps needed to complete the algorithm. It only tells how the worst case scales when varying the number of input data. For instance, two algorithms operating on a set of N input data and requiring respectively N² and 2N² operations to complete are both O(N²), but the first one is two times faster than the second one. Furthermore, if a third algorithm required N² + N operations, it would still be O(N²).

List of Acronyms

ADC Analog-to-Digital Converter

ADU Analog-to-Digital Unit

API Application Programming Interface

BEM Back End Module

BEU Back End Unit

CDI Common Data Interface

CMB Cosmic Microwave Background

CPV Core Phase Verification

DAE Data Acquisition Electronics

DCA Direct Current Amplifier

DC Direct Current

DMR Differential Microwave Radiometer

DPC Data Processing Centre

DQR Daily Quality Report

ESA European Space Agency

FEM Front End Module

FEU Front End Unit

FM Flight Model

GUI Graphical User Interface

HEMT High Electron Mobility Transistor

121 122 Chapter A. List of Acronyms

HFI High Frequency Instrument

LFI Low Frequency Instrument

LIFE Lfi Integrated perFormance Evaluator

LNA Low Noise Amplifier

MOC Mission Operation Center

OMT OrthoMode Transducer

QM Qualification Model

RAA Radiometer Array Assembly

RaChEl Radiometric Channel EvaLuator

RCA Radiometric Chain Assembly

REBA Radiometer Electronics Box Assembly

SC Sorption Cooler

SPU Signal Processing Unit

SVM Service Module

TOI Time Ordered Information

XML eXtensible Markup Language

WHR Weekly Health Report

WMAP Wilkinson Microwave Anisotropy Probe

Bibliography

ESATAN User Manual. ALSTOM Power Technology Center, Whetstone, Leicester, UK, um-esatan-004 (esatan 8.9) edition, April 2003.

N. A. Bahcall, J. P. Ostriker, S. Perlmutter, and P. J. Steinhardt. The Cosmic Triangle: Revealing the State of the Universe. Science, 284:1481, May 1999.

C. L. Bennett, A. J. Banday, K. M. Górski, G. Hinshaw, P. Jackson, P. Keegstra, A. Kogut, G. F. Smoot, D. T. Wilkinson, and E. L. Wright. Four-year COBE DMR cosmic microwave background observations: Maps and basic results. Astrophysical Journal, 464:L1–L4, June 1996.

A. Benoît, P. Ade, A. Amblard, R. Ansari, E. Aubourg, J. Bartlett, J.-P. Bernard, R. S. Bhatia, A. Blanchard, J. J. Bock, A. Boscaleri, F. R. Bouchet, A. Bourrachot, P. Camus, F. Couchot, P. de Bernardis, J. Delabrouille, F.-X. Désert, O. Doré, M. Douspis, L. Dumoulin, X. Dupac, P. Filliatre, K. Ganga, F. Gannaway, B. Gautier, M. Giard, Y. Giraud-Héraud, R. Gispert, L. Guglielmi, J.-C. Hamilton, S. Hanany, S. Henrot-Versillé, V. V. Hristov, J. Kaplan, G. Lagache, J.-M. Lamarre, A. E. Lange, K. Madet, B. Maffei, D. Marrone, S. Masi, J. A. Murphy, F. Naraghi, F. Nati, G. Perrin, M. Piat, J.-L. Puget, D. Santos, R. V. Sudiwala, J.-C. Vanel, D. Vibert, E. Wakui, and D. Yvon. Archeops: a high resolution, large sky coverage balloon experiment for mapping cosmic microwave background anisotropies. Astroparticle Physics, 17:101–124, May 2002.

M. Bersanelli. Planck-LFI calibration plan. Technical report, Università degli Studi di Milano, IASF/CNR, Jul 2003. On behalf of the Planck-LFI Calibration Team.

M. Bersanelli, M. Seiffert, R. Hoyland, and A. Mennella. Planck-LFI scientific requirements. Technical report, IASF/Università degli Studi di Milano, Mar 2002.

S. Bonometto, V. Gorini, and U. Moschella, editors. Modern Cosmology. Institute of Physics Publishing, 2002.


T. W. Bradshaw. 4 K cooler block diagram. Technical report, Rutherford Appleton Laboratory, Aug 1999. TD-PHDB0-990804-RAL.

Sean M. Carroll. The Cosmological Constant. Living Reviews in Relativity, 4: 1, February 2001.

Thomas H. Cormen, Charles E. Leiserson, and Ronald L. Rivest. Introduction to Algorithms. The MIT Press, 1990.

Alain Costé. L’œuvre scientifique de Nicole Oresme. Bulletin de la Société historique de Lisieux, 37, Jan 1997.

P. de Bernardis, P. A. R. Ade, R. Artusa, J. J. Bock, A. Boscaleri, B. P. Crill, G. De Troia, P. C. Farese, M. Giacometti, V. V. Hristov, A. Iacoangeli, A. E. Lange, A. T. Lee, S. Masi, L. Martinis, P. V. Mason, P. D. Mauskopf, F. Melchiorri, L. Miglio, T. Montroy, C. B. Netterfield, E. Pascale, F. Piacentini, P. L. Richards, J. E. Ruhl, and F. Scaramuzzi. Mapping the CMB sky: The BOOMERanG experiment. New Astronomy Review, 43:289–296, July 1999. URL http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=1999NewAR..43..289D&db_key=AST.

R. H. Dicke, P. J. E. Peebles, P. G. Roll, and D. T. Wilkinson. Cosmic Black-Body Radiation. Astrophysical Journal, 142:414–419, July 1965. doi: 10.1086/148306.

Pierre Maurice Marie Duhem. Medieval cosmology. University of Chicago Press, 1985. Edited and translated by Roger Ariew.

G. Efstathiou, C. Lawrence, and J. Tauber, editors. Planck: the Scientific Programme. European Space Agency, 2nd edition, 2005. An online copy is available at www.rssd.esa.int/SA/PLANCK/docs/Bluebook-ESA-SCI(2005)1_V2.pdf.

Albert Einstein. Kosmologische Betrachtungen zur allgemeinen Relativitätstheorie. Sitzungsberichte der Preußischen Akademie der Wissenschaften, pages 142–152, Feb 1917.

Albert Einstein. Zum kosmologischen Problem der allgemeinen Relativitätstheorie. Sitzungsberichte der Preußischen Akademie der Wissenschaften, pages 235–237, Apr 1931.

D. J. Fixsen, E. S. Cheng, J. M. Gales, J. C. Mather, R. A. Shafer, and E. L. Wright. The cosmic microwave background spectrum from the full COBE FIRAS data set. Astrophysical Journal, 473:576, December 1996. URL http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=1996ApJ...473..576F&db_key=AST.

Samuele Galeotta. LIFE software specification document. Technical report, LFI DPC Development Team, November 2006a.

Samuele Galeotta. LIFE user's requirements document. Technical report, LFI DPC Development Team, November 2006b.

George Gamow. Expanding universe and the origin of elements. Physical Review, 70:572–573, October 1946.

Owen Gingerich. A Brief History of Our View of the Universe. Publications of the Astronomical Society of the Pacific, 111:254–257, 1999.

A. H. Guth. Inflationary universe: A possible solution to the horizon and flatness problems. Physical Review D, 23:347–356, January 1981.

S. Hanany, P. Ade, A. Balbi, J. Bock, J. Borrill, A. Boscaleri, P. de Bernardis, P. G. Ferreira, V. V. Hristov, A. H. Jaffe, A. E. Lange, A. T. Lee, P. D. Mauskopf, C. B. Netterfield, S. Oh, E. Pascale, B. Rabii, P. L. Richards, G. F. Smoot, R. Stompor, C. D. Winant, and J. H. P. Wu. MAXIMA-1: A Measurement of the Cosmic Microwave Background Anisotropy on Angular Scales of 10′–5°. The Astrophysical Journal Letters, 545:L5–L9, December 2000. doi: 10.1086/317322.

G. Hinshaw, M. R. Nolta, C. L. Bennett, R. Bean, O. Doré, M. R. Greason, M. Halpern, R. S. Hill, N. Jarosik, A. Kogut, E. Komatsu, M. Limon, N. Odegard, S. S. Meyer, L. Page, H. V. Peiris, D. N. Spergel, G. S. Tucker, L. Verde, J. L. Weiland, E. Wollack, and E. L. Wright. Three-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Temperature Analysis. Astrophysical Journal Supplement, 170:288–334, June 2007. doi: 10.1086/513698.

Wayne Hu and Scott Dodelson. Cosmic microwave background anisotropies. Annual Reviews of Astronomy and Astrophysics, 2002. URL http://background.uchicago.edu/~whu/araa/araa.html.

Edwin Powell Hubble. A relation between distance and radial velocity among extra-galactic nebulae. Proceedings of the National Academy of Sciences, 15:168, 1929.

David Albert Huffman. A method for the construction of minimum-redundancy codes. Proceedings of the Institute of Radio Engineers, 40(9):1098–1101, Sep 1952.

Dragan Huterer and Michael S. Turner. Probing dark energy: Methods and strategies. Phys. Rev. D, 64(12):123527, Nov 2001. doi: 10.1103/PhysRevD.64.123527.

Rocky Kolb. Blind Watchers of the Sky: the people and ideas that shaped our view of the universe. Addison-Wesley, 1996.

J. M. Kovac, E. M. Leitch, C. Pryke, J. E. Carlstrom, N. W. Halverson, and W. L. Holzapfel. Detection of polarization in the cosmic microwave background using DASI. Nature, 420:772–787, December 2002.

J.-M. Lamarre, J. L. Puget, M. Piat, P. A. R. Ade, A. E. Lange, A. Benoit, P. De Bernardis, F. R. Bouchet, J. J. Bock, F. X. Desert, R. J. Emery, M. Giard, B. Maffei, J. A. Murphy, J.-P. Torre, R. Bhatia, R. V. Sudiwala, and V. Yourchenko. Planck high-frequency instrument. In J. C. Mather, editor, IR Space Telescopes and Instruments, volume 4850 of Proceedings of the SPIE, pages 730–739, March 2003.

D. E. Liebscher. Cosmology. Springer, 2005.

David J. C. MacKay. Information theory, inference and learning algorithms. Cambridge University Press, September 2003. Available online at http://www.inference.phy.cam.ac.uk/mackay/itila/book.html.

Carlamaria Maderna and Paolo Maurizio Soardi. Lezioni di Analisi Matematica II. Città Studi, 1997.

M. Maris, S. Fogliani, N. Lama, F. Pasian, A. Zacchei, M. Bersanelli, D. Maino, P. Leutenegger, M. Miccolis, E. Franceschi, M. Malaspina, A. Mennella, and M. Salmon. Data Handling for Planck/LFI Ground Tests. In F. Ochsenbein, M. G. Allen, and D. Egret, editors, Astronomical Data Analysis Software and Systems (ADASS) XIII, volume 314 of Astronomical Society of the Pacific Conference Series, page 788, July 2004a.

M. Maris, D. Maino, C. Burigana, A. Mennella, M. Bersanelli, and F. Pasian. The effect of signal digitisation in CMB experiments. Astronomy and Astrophysics, 414:777–794, February 2004b. doi: 10.1051/0004-6361:20031489.

Michele Maris, Marco Frailis, and Maurizio Tomasi. Fine tuning of REBA parameters: Methods and software. Technical report, LFI DPC Development Team, August 2006.

B. S. Mason, T. J. Pearson, A. C. S. Readhead, M. C. Shepherd, J. Sievers, P. S. Udomprasert, J. K. Cartwright, A. J. Farmer, S. Padin, S. T. Myers, J. R. Bond, C. R. Contaldi, U. Pen, S. Prunet, D. Pogosyan, J. E. Carlstrom, J. Kovac, E. M. Leitch, C. Pryke, N. W. Halverson, W. L. Holzapfel, P. Altamirano, L. Bronfman, S. Casassus, J. May, and M. Joy. The Anisotropy of the Microwave Background to l = 3500: Deep Field Observations with the Cosmic Background Imager. The Astrophysical Journal, 591:540–555, July 2003. doi: 10.1086/375507.

J. C. Mather, D. J. Fixsen, R. A. Shafer, C. Mosier, and D. T. Wilkinson. Calibrator Design for the COBE Far-Infrared Absolute Spectrophotometer (FIRAS). The Astrophysical Journal, 512:511–520, February 1999. doi: 10.1086/306805.

Vittorio Mathieu. Storia della filosofia, voll. 1-3. La Scuola Editrice, 1966.

A. McKellar. Evidence for the Molecular Origin of Some Hitherto Unidentified Interstellar Lines. Publications of the Astronomical Society of the Pacific, 52:187, June 1940.

A. Melchiorri, P. A. R. Ade, P. de Bernardis, J. J. Bock, J. Borrill, A. Boscaleri, B. P. Crill, G. De Troia, P. Farese, P. G. Ferreira, K. Ganga, G. de Gasperis, M. Giacometti, V. V. Hristov, A. H. Jaffe, A. E. Lange, S. Masi, P. D. Mauskopf, L. Miglio, C. B. Netterfield, E. Pascale, F. Piacentini, G. Romeo, J. E. Ruhl, and N. Vittorio. A Measurement of Ω from the North American Test Flight of Boomerang. The Astrophysical Journal Letters, 536:L63–L66, June 2000. doi: 10.1086/312744.

A. Mennella, M. Seiffert, M. Bersanelli, and G. Morgante. Planck-LFI temperature stability requirements on the 20 K stage. Technical report, IASF/JPL/Università degli Studi di Milano, September 2002.

Aniello Mennella, Marco Bersanelli, Benedetta Cappellini, Angel Colin, Francesco Cuttaia, Ocleto D'Arcangelo, Samuele Galeotta, Anna Gregorio, Rodrigo Leonardi, Stuart Lowe, Michele Maris, Luis Mendes, Peter Meinhold, Maria Salmon, Maura Sandri, Luca Stringhetti, Luca Terenzi, Maurizio Tomasi, Luca Valenziano, and Fabrizio Villa. Data analysis and scientific performances of the LFI FM instrument. Technical Report PL-LFI-PST-AN-006, LFI Project System Team, 11 November 2006a.

Aniello Mennella, Francesco Cuttaia, Rodrigo Leonardi, Michele Maris, Peter Meinhold, Maria Salmon, Luca Stringhetti, Maurizio Tomasi, and Luca Valenziano. Data analysis and scientific performances of the LFI QM instrument. Technical Report PL-LFI-PST-AN-005, LFI Project System Team, 2 March 2006b.

Maurizio Miccolis. Planck/LFI communication interface control document. Technical report, Laben, January 2004.

Milton K. Munitz. Cosmic Understanding. Princeton University Press, 1986.

Markus Oberhumer. The miniLZO library, version 2.02, October 2006. http://www.oberhumer.com/opensource/lzo/.

L. Page, G. Hinshaw, E. Komatsu, M. R. Nolta, D. N. Spergel, C. L. Bennett, C. Barnes, R. Bean, O. Doré, J. Dunkley, M. Halpern, R. S. Hill, N. Jarosik, A. Kogut, M. Limon, S. S. Meyer, N. Odegard, H. V. Peiris, G. S. Tucker, L. Verde, J. L. Weiland, E. Wollack, and E. L. Wright. Three-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Polarization Analysis. Astrophysical Journal Supplement, 170:335–376, June 2007. doi: 10.1086/513699.

Arno Penzias and Robert Woodrow Wilson. A measurement of excess antenna temperature at 4080 Mc/s. Astrophysical Journal, 142:419–421, July 1965. URL http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=1965ApJ...142..419P&db_key=AST.

S. Perlmutter, G. Aldering, G. Goldhaber, R. A. Knop, P. Nugent, P. G. Castro, S. Deustua, S. Fabbro, A. Goobar, D. E. Groom, I. M. Hook, A. G. Kim, M. Y. Kim, J. C. Lee, N. J. Nunes, R. Pain, C. R. Pennypacker, R. Quimby, C. Lidman, R. S. Ellis, M. Irwin, R. G. McMahon, P. Ruiz-Lapuente, N. Walton, B. Schaefer, B. J. Boyle, A. V. Filippenko, T. Matheson, A. S. Fruchter, N. Panagia, H. J. M. Newberg, W. J. Couch, and The Supernova Cosmology Project. Measurements of Omega and Lambda from 42 High-Redshift Supernovae. The Astrophysical Journal, 517:565–586, June 1999. doi: 10.1086/307221.

Ugo Perone, Annamaria Perone, and Giovanni Ferretti. Storia del pensiero filosofico, voll. 1–3. Società Editrice Internazionale, Torino, 1978.

P. Radaelli. LFI cryo-facility specification. Technical report, Laben, May 2003.

A. G. Riess, A. V. Filippenko, P. Challis, A. Clocchiatti, A. Diercks, P. M. Garnavich, R. L. Gilliland, C. J. Hogan, S. Jha, R. P. Kirshner, B. Leibundgut, M. M. Phillips, D. Reiss, B. P. Schmidt, R. A. Schommer, R. C. Smith, J. Spyromilio, C. Stubbs, N. B. Suntzeff, and J. Tonry. Observational Evidence from Supernovae for an Accelerating Universe and a Cosmological Constant. The Astronomical Journal, 116:1009–1038, September 1998. doi: 10.1086/300499.

A. G. Riess, L.-G. Strolger, J. Tonry, S. Casertano, H. C. Ferguson, B. Mobasher, P. Challis, A. V. Filippenko, S. Jha, W. Li, R. Chornock, R. P. Kirshner, B. Leibundgut, M. Dickinson, M. Livio, M. Giavalisco, C. C. Steidel, N. Benítez, and Z. Tsvetanov. Type Ia Supernova Discoveries at z > 1 from the Hubble Space Telescope: Evidence for Past Deceleration and Constraints on Dark Energy Evolution. The Astrophysical Journal, 607:665–687, June 2004. doi: 10.1086/383612.

Jorma Rissanen and G. G. Langdon, Jr. Arithmetic coding. IBM Journal of Research and Development, 23(2):149–162, March 1979.

Michael Seiffert, Aniello Mennella, Carlo Burigana, Nazzareno Mandolesi, Marco Bersanelli, Peter Meinhold, and Phil Lubin. 1/f noise and other systematic effects in the Planck LFI radiometers. Astronomy and Astrophysics, 391(3):1185–1197, September 2002.

Claude E. Shannon. A mathematical theory of communication. Bell System Technical Journal, 27:379–423, 623–656, 1948.

G. F. Smoot, M. V. Gorenstein, and R. A. Muller. Detection of anisotropy in the cosmic blackbody radiation. Physical Review Letters, 39:898–901, October 1977. URL http://adsabs.harvard.edu/cgi-bin/nph-bib_query?bibcode=1977PhRvL..39..898S&db_key=AST.

D. N. Spergel, R. Bean, O. Doré, M. R. Nolta, C. L. Bennett, J. Dunkley, G. Hinshaw, N. Jarosik, E. Komatsu, L. Page, H. V. Peiris, L. Verde, M. Halpern, R. S. Hill, A. Kogut, M. Limon, S. S. Meyer, N. Odegard, G. S. Tucker, J. L. Weiland, E. Wollack, and E. L. Wright. Three-Year Wilkinson Microwave Anisotropy Probe (WMAP) Observations: Implications for Cosmology. Astrophysical Journal Supplement, 170:377–408, June 2007. doi: 10.1086/513700.

The COBE Homepage. WWW page. URL http://space.gsfc.nasa.gov/astro/cobe/cobe_home.html.

Maurizio Tomasi. Propagation of thermal fluctuations in the reference loads of the Planck/LFI instrument. Master's thesis, Università degli Studi di Milano, November 2002. An online copy is available at http://www.geocities.com/zio_tom78.

Robert M. Wald. General Relativity. University Of Chicago Press, June 1984. ISBN 0226870332.

C. Watson and R. Biggins. Packet store usage on Herschel/Planck. Technical report, European Space Agency, April 2005.

Steven Weinberg. The First Three Minutes: A Modern View of the Origin of the Universe. Bantam Books, 1977.

Index

acoustic oscillations, 18
Almagest, 3
Anaximander, 2
Anaximenes, 2
Andromeda galaxy, see M31
arché, 2, 4
Archeops, 38
Archytas, 3
Aristarchus of Samos, 3
Aristotle, 3, 5
atomism, 2, 6
Augustine of Hippo, 5

baryon density, 22
Being
    in Christian philosophy, 4
    in Greek cosmology, 2
Big Bang, 9–13
Big Crunch, 10
binary search, 104–106
Boomerang, 17

CBI, 17
Change, in Greek cosmology, 2
CMB, 12–14
    anisotropies, 15–20
        primary and secondary, 18
    first generation experiments, 13
    importance in physics, 20–23
    isotropy, 13
    Planckian shape, 13
    polarization, 23
    second generation experiments, 15
CN molecule, 13
COBE, 15
compression, see data compression
contact resistance, 55, 56, 71–75
Copernicus, Nikolaus, 6
cosmic egg, see Big Bang
Cosmic Microwave Background, see CMB
cosmic variance, 18
cosmological constant, 8, 22
cosmological principle, 8
Cosmological Standard Model, 13
cosmology
    ancient —, 2
creation, 4–5
curvature parameter, 9

DAE, 35
dark energy, 20–22
dark matter, 21, 22
DASI, 17
data compression, 83–96
    arithmetic coding, 83–84
    compression rate, 87
    compressor, 83
    Huffman algorithm, 84
    JPEG, 83
    lossy —, 85
    quantization error, 87
    sequence oriented —, 83
    symbol oriented —, 83
    ZIP, 83
Democritus, 2
Descartes, René, 6
Digital Acquisition Electronics, see DAE
dipole, 20
DIRBE, see COBE
DMR, see COBE
Doppler effect, 18
DPC, 35

Einstein, Albert, 7
endianness, 111
entropy
    Shannon's —, see Shannon's entropy
epicycle, 4
ESATAN, 54, 56
ether, in Aristotle's philosophy, 3

FIRAS, see COBE
first change
    in Aristotle's philosophy, 3
Fourier, Jean-Baptiste
    discrete — transform, 65–70
    equation of heat transfer, 53
    series, 54
Friedmann, Alexander, 9

Galilei, Galileo, 6
Gamow, George, 12
Gauss theorem, 53
geocentrism, 4
God, 3, 4, 6
gravitational perturbations, 18
gravitational waves, 22
gravity
    Newton's law, 6

heat
    equation, 52
heliocentrism, 6–7
    in Pythagorean philosophy, 3
Heraclitus, 2
Herschels, 7
homogeneity, 8
Hot Big Bang, see Big Bang
housekeeping data, 46
Hoyle, Fred, 12
Hubble, Edwin Powell, 10
    parameter, 11, 22

Ia supernovae, 20
inflation, 14–15
isotropy, 8

Johnson effect, 52

Kant, Immanuel, 7
Kepler, Johannes, 6
kinetic energy
    in Leibniz' cosmology, 6

Laplace, Pierre Simon, 7
last scattering epoch, 14
lazy algorithm, 108
Leibniz, Gottfried, 6
Lemaître, Georges, 10, 13
LIFE, 97–113
    common data interface (CDI), 99, 110
    Lama, 101–112
        accessing many feed horns, 107
        data access, 108
        data downsampling, 107–108
        Lama Link, 109–112
        measure units in —, 106–107
    OCA, 88–96, 110
    Pegaso, 113, 116–117
    purposes, 98–99
    RaNA, 100–101
        analysis modules, 101
linear motion
    in Aristotle's philosophy, 3
LZO compression, 110, 111

M31, 7
main arm, of an LFI radiometer, 34
materialism, 2
matter-dominated universe, 13–14
MAXIMA, 17
McKellar, Andrew, 13
metric tensor, 8
multipoles, 17

Newton, Isaac, 6

Olbers' paradox, 8, 9
1/f noise, 35
Oresme, Nicole, 5–6
Origen of Alexandria, 4

Penzias, Arno Allan, 12
Planck, 28–47
    antenna bandwidth, 82
    cooling system, 36–38
    CPV tests, 41
    dilution cooler, 38
    general characteristics, 28
    ground tests, 41
    HFI, 31–32
    Joule-Thomson cooler, 38
    LFI, 33–35
    orbit, 30–31
    RAA, 41
    RAA data format, 101–102
    RAA tests, 46–47
    radiometer names, 35
    RCA, 41
    RCA data format, 100–101
    RCA tests, 42–46
    telescope, 31
Ptolemy, 3, 6
Pythagoras, 3

quadrupole, 20

radiation-dominated universe, 13
Radiometer Electronic Box Assembly, see REBA
REBA, 35, 46, 82–96, 106
    data acquisition modes, 87–88
    data compressor, 86–87
        calibration, 88–90
        verification, 91–92
        jumps in the output, 92–96
    packets, 82
recession of galaxies, 10
recombination time, 12
red shift, 10
reionization, 22
relativity
    general —, 7–8
retrograde motion of planets, 4
Ricci tensor, 8

Sachs-Wolfe effect, 18
scalar curvature, 8
scale factor, 9
Scholastic, 5
Shannon's entropy, 84–85
side arm, of an LFI radiometer, 34
Silk damping, 19
SNAP, 26
sockets
    in Lama Link, 110
space dilation, 9
space-time continuum, 8
spherical harmonics, 17
static cosmological models, 8
    Newton's —, 7
stoics, 5
stress-energy tensor, 8
Sunyaev-Zel'dovich effect, 19
supernovae, and curvature, see Ia supernovae
sweeping source tests, 38

Tempier, Étienne, 5
thermal analysis
    calibration, 56
    numerical, 54–56
    static, 56, 65
thermal conductivity, 53
thermodynamics
    first principle, 52
Thomas Aquinas, 5
time, in Augustine's philosophy, 5
transfer functions, 53–54
    and FM tests, 70–76
    and QM tests, 65–70
    measurement, 58–65
        direct method, 60–63
        fitting method, 60
        Fourier method, 59

Universe
    age, 3, 4
    Einstein's model of the —, 8
    Friedmann's model of the —, 9
    Hoyle's model of the —, 12
    inflationary —, see inflation
    Newton's model of the —, 7
    originated by a nebula, 7
    size, 3

V-grooves, 33
vis viva, in Leibniz' cosmology, 6
vortexes, in Descartes' cosmology, 6

Wilson, Robert Woodrow, 12
WMAP, 24–26
worst case estimates
    in numerical thermal analysis, 55

Zeno of Elea, 3

Acknowledgements

When we [...] returned to faith, we came back under the temple's arches and there we found the realism we had lost.

Nikolaj Berdjaev

I would like to thank both my tutor, Prof. Marco Bersanelli, and my referee, Prof. Giorgio Sironi, for their help in writing this thesis. Marco's encouragement helped me follow the approach I had in mind while writing this text. I have appreciated Prof. Sironi's thoughtful and encouraging comments, especially considering that he was forced to read the whole text in a short amount of time.

I thank all the people in Milan who have helped me during the last years: Benny, Andrea, Simona P., Simona D., Davide, Samuele and especially Daniele.

This work has been done partly at the Istituto di Astrofisica Spaziale e Fisica Cosmica (INAF) in Milan (Italy). I thank the director, Dario Maccagni, and all the staff for their support.

I thank all the people of the Planck/LFI team for their help and support: Fabrizio, Luca T., Francesco, Gianluca, Enrico, Luca V., Andrea, Anna, Michele, Stuart, Luis, Leticia, Peter, Rodrigo. Although he is not part of the "official" team, I include Kenneth in this list as well.

Among my friends, countless people have given me fundamental help in these years: Rosa, Francesca, Dario, Arturo, Annalisa, Aldo, Marco, Romina, Federica, Paola, Domenico, Stefano, Lanfranco, Remi, Leo, Simona, Giovanni, Daniela, Franco, Gabriele, Chiara, Valerio, Eugenio, Sergio, Elena and her beautiful children Michela and Cristiano, Flavio, Salvo, and many others. The warmest thanks go to Jonah: without you, nothing would have happened!

A big thank you to my parents Elio and Eliana and to my sister Flavia. Their love has helped me a lot in these years of hard work.

Finally, special thanks to my wife Heidi, whose uninterrupted support and patience have helped me a lot, especially during the final rush of preparing this thesis.

Maurizio Tomasi
December 2007
