The Millennium-XXL Project: Simulating the Galaxy Population of Dark Energy Universes


Modern cosmology as encoded in the leading ΛCDM model confronts astronomers with two major puzzles. One is that the main matter component in today's Universe appears to be a yet undiscovered elementary particle whose contribution to the cosmic density is more than 5 times that of ordinary baryonic matter. This cold dark matter (CDM) interacts extremely weakly with regular atoms and photons, so that gravity alone has affected its distribution since very early times. The other is a mysterious dark energy force field, which dominates the energy content of today's Universe and has led to its accelerated expansion in recent times. In the standard model, this component is described in terms of Einstein's cosmological constant ('Λ'). Uncovering the nature of dark energy has become one of the most actively pursued goals of observational cosmology.

In particular, the arrival of the largest galaxy surveys ever made is imminent, offering enormous scientific potential for new discoveries. Experiments like SDSS-III/BOSS or Pan-STARRS have started to scan the sky with unprecedented detail, considerably improving the accuracy of existing cosmological probes. This will likely lead to challenges of the standard ΛCDM paradigm for cosmic structure formation, and perhaps even to the discovery of new physics.

One of the most important aims of these galaxy surveys is to shed light on the nature of the dark energy via measurements of the redshift evolution of its equation of state. However, the ability of these surveys to achieve this major scientific goal crucially depends on an accurate understanding of systematic effects and on a precise way to physically model the observations, in particular the scale-dependent bias between luminous red galaxies and the underlying dark matter distribution, or the impact of mildly non-linear evolution on the so-called baryonic acoustic oscillations (BAOs) measured in the power spectrum of galaxy clustering.

Simulations of the galaxy formation process are arguably the most powerful technique to accurately quantify and understand these effects. However, this is an extremely tough computational problem, because it requires ultra-large volume N-body simulations with sufficient mass resolution to identify the halos likely to host the galaxies seen in the surveys, and a realistic model to populate these halos with galaxies. Given the significant investments involved in the ongoing galaxy surveys, it is imperative to tackle these numerical challenges to ensure that accurate theoretical predictions become available, both to help quantify and understand the systematic effects and to extract the maximum amount of information from the observational data.

The State of the Art

The N-body method for the collisionless dynamics of dark matter is a long-established computational technique used to follow the growth of cosmic structure through gravitational instability. The Boltzmann-Vlasov system of equations is here discretized in terms of N fiducial simulation particles, whose motion is followed under their mutual gravitational forces in an expanding background space-time. While conceptually simple, calculating the long-range gravitational forces exactly represents an N²-problem, which quickly becomes prohibitively expensive for interesting problem sizes. It is fortunately sufficient, however, to calculate the forces approximately, for which a variety of algorithms have been developed over the years.
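To make the cost of the exact calculation concrete, the following minimal direct-summation sketch in C shows why the N²-problem is prohibitive. It is not the algorithm GADGET uses; the particle structure, the Plummer softening eps, and units with G = 1 are illustrative assumptions. The nested loops make the O(N²) scaling explicit: for N = 6720³ particles this approach would be hopeless, which is why approximate tree and particle-mesh methods are essential.

    /* Minimal direct-summation gravity sketch (illustrative only, not GADGET):
     * every particle feels every other particle, so the cost scales as N^2.
     * Units with G = 1 and a Plummer softening eps are assumed. */
    #include <math.h>
    #include <stddef.h>

    struct particle {
        double pos[3];
        double acc[3];
        double mass;
    };

    void direct_summation(struct particle *p, size_t n, double eps)
    {
        for (size_t i = 0; i < n; i++) {
            p[i].acc[0] = p[i].acc[1] = p[i].acc[2] = 0.0;
            for (size_t j = 0; j < n; j++) {      /* inner loop over all other particles */
                if (j == i)
                    continue;
                double dx = p[j].pos[0] - p[i].pos[0];
                double dy = p[j].pos[1] - p[i].pos[1];
                double dz = p[j].pos[2] - p[i].pos[2];
                double r2 = dx * dx + dy * dy + dz * dz + eps * eps;
                double inv_r3 = 1.0 / (r2 * sqrt(r2));
                p[i].acc[0] += p[j].mass * dx * inv_r3;   /* G = 1 */
                p[i].acc[1] += p[j].mass * dy * inv_r3;
                p[i].acc[2] += p[j].mass * dz * inv_r3;
            }
        }
    }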
This allowed the sizes of cosmological simulations to steadily increase since the early 1980s, roughly doubling the particle number every 17 months and hence providing progressively more faithful models for the real Universe [1, 2, 3, 4]. Such simulations have proven to be an indispensable tool to understand the low- and high-redshift Universe by comparing the predictions of CDM to observations, since these calculations are the only way to accurately calculate the outcome of non-linear cosmic structure formation.

Figure 1: Dark matter distribution in the MXXL simulation on different scales. Each panel shows the projected density of dark matter in a slice of thickness 20 Mpc.

A milestone in the development of cosmological simulations was set by the Millennium Run (MR), performed by our Virgo Consortium group in 2004 [3]. This simulation was the first, and for many years the only, run with more than 10^10 particles, exceeding the size of previous simulations by almost an order of magnitude. Its success was not only computational but most importantly scientific – more than 300 research articles in the fields of theoretical and observational cosmology have used the MR data products since. The MR has exquisite mass resolution and accuracy but, unfortunately, its volume is insufficient to obtain reliable statistics on large scales at the level needed for future surveys.

Figure 2: Differential halo abundance as a function of mass at the present epoch in the MXXL simulation. Note the good sampling far into the exponential tail. Apart from the last point, the error bars from counting statistics are smaller than the plotted symbols.

Recently, French and Korean collaborations [5, 6] have successfully carried out simulations containing almost 70 billion particles, but at considerably worse mass and spatial resolution than the MR, which did not allow them to robustly identify Milky-Way sized halos. Also, the need to manipulate and analyze a huge volume of data has proven to be a non-trivial challenge in working with simulations of this size, a fact that has made scientific exploitation of these projects difficult.

We have therefore set out to perform a new ultra-large N-body simulation of the hierarchical clustering of dark matter, featuring a new strategy for dealing with the data volume and combining it with semi-analytical modelling of galaxy formation, which allows a prediction of all the luminous properties of the galaxies that form in the simulation. We designed the simulation project, dubbed Millennium-XXL (MXXL), to follow more than 303 billion particles (6720³) in a cosmological box 4.2 Gpc across, resolving the cosmic web with an unprecedented combination of volume and resolution. While the particle mass of the MXXL is slightly worse than that of the MR, its resolution is sufficient to accurately measure dark matter merger histories for halos hosting luminous galaxies, within a volume more than 200 times larger than that of the MR. In this way the simulation can provide extremely accurate statistics of the large-scale structure of the Universe by resolving around 500 million galaxies at the present epoch, allowing for example highly detailed clustering studies based on galaxy or quasar samples selected in a variety of different ways. This comprehensive information is indispensable for the correct analysis of future observational datasets.

The Computational Challenge

However, it was clear from the start that performing a simulation with these characteristics poses severe challenges, involving raw execution time, scalability of the algorithms employed, as well as their memory consumption and the disk space required for the output data. For example, simply storing the positions and velocities of the simulation particles in single precision consumes of order 10 TB of RAM. This figure is, of course, greatly enlarged by the extra memory required by the complex data structures and algorithms employed in the simulation code for the force calculation, domain decomposition, and halo and subhalo finding.
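As an illustrative cross-check of the quoted storage figure (this small calculation is ours, not part of the article), the raw phase-space data alone can be estimated directly from the particle count of 6720³:

    /* Back-of-the-envelope estimate of the raw phase-space storage only:
     * three positions and three velocities per particle in single precision. */
    #include <stdio.h>

    int main(void)
    {
        double n_particles = 6720.0 * 6720.0 * 6720.0;        /* ~3.03e11 particles */
        double bytes = n_particles * 6 * sizeof(float);        /* pos + vel, 4 bytes each */
        printf("raw phase-space data: %.1f TB\n", bytes / 1e12); /* ~7.3 TB, i.e. of order 10 TB */
        return 0;
    }

The difference between this raw figure and the roughly 29 TB quoted below reflects the additional per-particle data needed for the force calculation, domain decomposition, and group finding.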
The code we used is a novel version of GADGET-3 that we wrote specifically for the MXXL project. GADGET computes short-range gravitational forces with a hierarchical tree algorithm, and long-range forces with an FFT-based particle-mesh scheme [7]. Both the force computation and the time stepping of GADGET are fully adaptive. The code is written in highly portable C and uses a spatial domain decomposition to map different parts of the computational domain to individual processors. In 2005, we publicly released GADGET-2, which is presently the most widely employed code for cosmic structure formation.

Ultimately, memory requirements are the most serious limiting factor for cosmological simulations of the kind studied here. We have therefore put significant effort into developing algorithms and strategies that minimize memory consumption in our code, while at the same time retaining high integration accuracy and calculational speed. The special "lean" version of our code we developed for MXXL requires only 72 bytes per particle at the peak for the ordinary dynamical evolution of MXXL. The sophisticated in-lined group and substructure finders add a further 26 bytes to the peak memory consumption. This means that the memory requirement for the target size of 303 billion particles of our simulation amounts to slightly more than 29 TB of RAM.

The JuRoPa machine at the Jülich Supercomputing Centre (JSC) appeared as one of the best suited supercomputers within Germany to fulfill our computing requirements, thanks to its available storage of 3 GB per compute core. To stay within this memory budget, we adopted a hybrid shared/distributed-memory parallelization scheme for our GADGET code: instead of using 12,288 distributed-memory MPI tasks, we employed only one MPI task per processor (3,072 in total), exploiting the 4 cores of each socket via threads. This mixture of distributed- and shared-memory parallelism proved ideal for limiting memory and work-load imbalance losses, because a smaller number of MPI tasks reduces the number of independent spatial domains into which the volume needs to be decomposed, as well as the data volume that needs to be shuffled around with MPI calls.
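The structure of such a hybrid scheme can be pictured with the following sketch. It is a schematic illustration only, not the MXXL code: the thread layer is written here with OpenMP (the article only says the cores are exploited "via threads"), and the array and loop body stand in for the real force kernel.

    /* Schematic hybrid MPI + OpenMP structure (illustrative only):
     * one MPI rank per socket owns one spatial domain, and threads
     * inside the rank share the per-particle work. */
    #include <mpi.h>
    #include <omp.h>
    #include <stdlib.h>

    int main(int argc, char **argv)
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);

        omp_set_num_threads(4);                 /* e.g. 4 cores per socket */

        long n_local = 1000000;                 /* particles in this rank's domain */
        double *acc = calloc(n_local, sizeof(double));

        /* Shared-memory parallelism inside the rank: fewer MPI tasks means
         * fewer, larger domains and less work-load imbalance. */
        #pragma omp parallel for schedule(dynamic, 4096)
        for (long i = 0; i < n_local; i++)
            acc[i] += 1.0;                      /* placeholder for the force kernel */

        /* Distributed-memory step: only boundary information has to be
         * shuffled around between the (now fewer) domains with MPI calls. */
        MPI_Barrier(MPI_COMM_WORLD);

        free(acc);
        MPI_Finalize();
        return 0;
    }

With one MPI rank per socket, each domain is four times larger than in a pure MPI run, which reduces both the duplicated boundary data and the scatter in per-domain work.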